@MASTERSTHESIS{IMM2017-07008,
  author   = "N. N. Larsen",
  title    = "Implementation and Evaluation of Estimation-of-Distribution Algorithms",
  year     = "2017",
  school   = "Technical University of Denmark, Department of Applied Mathematics and Computer Science",
  address  = "Richard Petersens Plads, Building 324, {DK-}2800 Kgs. Lyngby, Denmark, compute@compute.dtu.dk",
  note     = "{DTU} supervisor: Carsten Witt, cawi@dtu.dk, {DTU} Compute",
  url      = "http://www.compute.dtu.dk/English.aspx",
  abstract = "In this master's thesis I examine a class of metaheuristics called Estimation-of-Distribution Algorithms ({EDA}s). These algorithms search for the optimal solution to a given problem by building and sampling probability distributions over the search space, and they can be configured in different ways through their input parameters, which alters their behavior. I examine the performance of these algorithms by developing a benchmark program that runs them on well-known benchmark problems and reports the running time of a configured algorithm on a given problem. The different configurations are then analyzed to determine which traits the best configuration(s) possess. The algorithms examined in this thesis are the Univariate Marginal Distribution Algorithm ({UMDA}), the Compact Genetic Algorithm (cGA), the Bayesian Optimization Algorithm ({BOA}) and the Max-Min Ant System ({MMAS}). These algorithms are tested on the well-known benchmark problems OneMax, LeadingOnes, Jump and Trap. Published scientific literature on the running time of these algorithms is taken into account and used as an initial guideline for the experiments. Where the literature does not cover a given combination of algorithm and problem, previous experiments and already collected data serve as the starting point.
The different algorithms have different strengths, so the best algorithm varies from problem to problem. For OneMax the variance in performance is relatively low, with {UMDA} and cGA coming out on top. LeadingOnes is a much more difficult problem for the algorithms to solve, requiring many more evaluations on average; {BOA} uses far fewer evaluations than the other algorithms because its Bayesian network allows it to model the problem perfectly. {BOA} is also the best-performing algorithm on Jump, completing larger jumps than the other algorithms while requiring fewer evaluations on average. Trap is a difficult problem to assess because its design causes the algorithms to either terminate quickly or possibly not at all, resulting in a high variance in running times; {BOA} and {MMAS} seemed to come out on top, completing larger problem sizes than {UMDA} and cGA. Finally, the boundaries of the project are explored through suggested additional work that could expand the understanding of the algorithms and their performance."
}