
Evolutionary exploration of search spaces

A.E. Eiben
gusz@wi.leidenuniv.nl
Leiden University, Department of Computer Science

Abstract. Exploration and exploitation are the two cornerstones of problem solving by search. Evolutionary Algorithms (EAs) are search algorithms that explore the search space by the genetic search operators, while exploitation is done by selection. During the history of EAs different operators have emerged, mimicking asexual and sexual reproduction in Nature. Here we give an overview of the variety of these operators, review results discussing the (dis)advantages of asexual and sexual mechanisms and touch on a new phenomenon: multi-parent reproduction.

1 Introduction

Generate-and-test search algorithms obtain their power from two sources: exploration and exploitation. Exploration means the discovery of new regions in the search space; exploitation amounts to using collected information in order to direct further search to promising regions. Evolutionary Algorithms (EAs) are stochastic generate-and-test search algorithms having a number of particular properties. The standard pseudo-code for an EA is given in Figure 1, after [18]. There are different types of EAs; the most common classification distinguishes Genetic Algorithms (GA), Evolution Strategies (ES) and Evolutionary Programming (EP), [3]. Genetic Programming (GP) has grown out of GAs and can be seen as a sub-class of them. Besides the different historical roots and philosophy there are also technical differences between the three main streams in evolutionary computation. These differences concern the applied representation and the corresponding genetic search operators, the selection mechanism and the role of self-adaptation. Here we will concentrate on the question of operators.

2 Evolutionary search operators

Search operators can be roughly classified by their arity and their type, the latter actually meaning the data type or representation they are defined for.
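The generate-and-test scheme of Figure 1 can be sketched as a minimal Python loop. The tournament parent selection, the truncation survivor selection and all numeric parameters below are illustrative assumptions, not prescriptions of the paper; the operators themselves are passed in as functions.

```python
import random

def evolutionary_algorithm(fitness, init_individual, mutate, recombine,
                           pop_size=50, generations=100):
    """Generic EA loop in the spirit of Figure 1 (termination by time)."""
    population = [init_individual() for _ in range(pop_size)]
    t = 0
    while t < generations:                      # WHILE NOT DONE
        t += 1                                  # T := T + 1
        # parent selection (binary tournament: an illustrative choice)
        def tournament():
            a, b = random.sample(population, 2)
            return max(a, b, key=fitness)
        parents = [tournament() for _ in range(pop_size)]
        # recombine and mutate the selected parents
        offspring = []
        for i in range(0, pop_size, 2):
            c1, c2 = recombine(parents[i], parents[i + 1])
            offspring += [mutate(c1), mutate(c2)]
        # survivor selection: keep the best pop_size of old + new
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:pop_size]
    return max(population, key=fitness)
```

With bit-string individuals, ONEMAX fitness (`sum`), bit-flip mutation and 1-point crossover plugged in, this loop behaves like a plain GA; with real-valued vectors and Gaussian mutation it approximates an ES-style scheme.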
In traditional GAs binary representation is used, that is, candidate solutions (individuals or chromosomes) are bit-strings with a fixed length L. The study of sequencing problems, such as routing and scheduling, has yielded order-based representation, where each individual is a permutation, [16, 32]. Parameter optimization problems with variables over continuous domains have led to real-valued, or floating point, representation. Genetic Programming uses individuals that are

tree structures, [24]. Accordingly, the search operators are defined for trees instead of strings. Real-valued numerical optimization is the standard application area of Evolution Strategies [29]; therefore, real-valued representation is the standard in ES. The original Evolutionary Programming scheme was applied to finite state machines, using appropriately defined operators, [13]. More recently, combinatorial and numerical optimization problems have been treated by EP, thus involving real-valued representation in the EP paradigm too. As far as their arity is concerned, the operators used commonly in EAs are either unary or binary. Unary operators are based on analogies of asexual reproduction mechanisms in nature, while binary operators simulate sexual reproduction. The so-called global recombination in ES and the recently introduced multi-parent operators in GAs are n-ary.

EVOLUTIONARY ALGORITHM
  T := 0                        // start with an initial time
  INITPOPULATION P(T)           // initialize a usually random population of individuals
  EVALUATE P(T)                 // evaluate fitness of all initial individuals in population
  WHILE NOT DONE DO             // test for termination criterion (time, fitness, etc.)
    T := T + 1                  // increase the time counter
    P'(T) := SELECTPARENTS P(T) // select sub-population for offspring production
    RECOMBINE P'(T)             // recombine the "genes" of selected parents
    MUTATE P'(T)                // perturb the mated population stochastically
    EVALUATE P'(T)              // evaluate its new fitness
    P := SURVIVE P,P'(T)        // select the survivors based on actual fitness
  OD
END EVOLUTIONARY ALGORITHM

Fig. 1. Pseudo-code of an Evolutionary Algorithm

2.1 Mutation

The most commonly used unary operator is mutation, which is meant to cause a random, unbiased change in an individual, thus resulting in one new individual. In GAs for bit-representation a so-called mutation rate p_m is given as

an external control parameter. Mutation works by flipping each bit independently with probability p_m in the individual it is applied to. The mutation rate is thus a parameter used during the performance of mutation in case of bit representation. The probability that an individual is actually mutated is a derived measure:

  P[individual is mutated] = 1 - (1 - p_m)^L

In case of order-based representation, swapping or shifting randomly selected genes within an individual can cause the random perturbation. In this case the mutation rate p_m = P[individual is mutated] is used before mutating, to decide whether or not the mutation operator will be applied. For tree-structured individuals, replacing a sub-tree (possibly one single node) by a random tree is appropriate. The mutation rate in GP is interpreted as the frequency of performing mutation, similarly to order-based GAs. It is remarkable that the general advice concerning the mutation rate in GP is to set p_m to zero, [24]. For the real-valued representation as used in ES, EP and lately in GAs, the basic unit of inheritance, i.e. one gene, is a floating point number x_i rather than a bit. In ES and EP a separate mutation mechanism is used for each gene, i.e. object variable. Mutating a certain value x_i means perturbing that value with a random number drawn from a Gaussian distribution N(0, σ_i) by

  x_i' = x_i + σ_i · N(0, 1)

That is, σ_i is the standard deviation, also called the mean step size, concerning x_i. It is essential that the σ_i's are not static external control parameters but are part of the individuals and undergo evolution. In particular, an individual contains not only the object variables x_i but also the strategy parameters σ_i, resulting in 2n genes for n object variables. (For this discussion we disregard the incorporation and adaptation of covariances.)
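The GA bit-flip mutation and the derived per-individual probability can be sketched in a few lines of Python; the function names are ours, chosen for illustration.

```python
import random

def bitflip_mutation(individual, p_m):
    """Flip each bit independently with probability p_m (standard GA mutation)."""
    return [bit ^ 1 if random.random() < p_m else bit for bit in individual]

def prob_individual_mutated(p_m, L):
    """Derived measure from the text: P[individual is mutated] = 1 - (1 - p_m)^L."""
    return 1 - (1 - p_m) ** L
```

For example, with p_m = 0.01 and L = 100 roughly 63% of individuals receive at least one flipped bit, which illustrates why the interpretation of p_m differs from the per-individual rate used in order-based GAs and GP.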
In ES the mean step sizes are modified log-normally:

  σ_i' = σ_i · exp(τ_0 · N(0, 1) + τ · N_i(0, 1))

where τ_0 · N(0, 1) and τ · N_i(0, 1) provide for global, respectively individual, control of step sizes, [3]. In Evolutionary Programming the so-called meta-EP scheme is introduced for adapting mean step sizes. It works by modifying the σ's normally:

  σ_i' = σ_i + ς · σ_i · N(0, 1)

where ς is a scaling constant, [13]. Summarizing, the main differences between the GA and ES/EP mutation mechanisms are that
- in a GA every gene (bit) of every individual is mutated by the same mutation rate p_m, while ES/EP mutation is based on a different step size for every individual and every gene (object variable);
- p_m is constant during the evolution in a GA, while the step sizes undergo adaptation in ES/EP.
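The ES mutation with log-normal step-size self-adaptation can be sketched as follows. The default values for τ_0 and τ follow common recommendations in the ES literature and are an assumption; the paper itself does not fix them.

```python
import math
import random

def es_mutate(x, sigma, tau0=None, tau=None):
    """ES mutation: first adapt the step sizes log-normally, then perturb
    the object variables with the new step sizes.
    tau0/tau defaults (1/sqrt(2n), 1/sqrt(2*sqrt(n))) are common heuristic
    choices, not prescribed by the paper."""
    n = len(x)
    tau0 = tau0 if tau0 is not None else 1.0 / math.sqrt(2 * n)
    tau = tau if tau is not None else 1.0 / math.sqrt(2 * math.sqrt(n))
    global_draw = random.gauss(0, 1)   # one shared N(0,1) draw: global control
    new_sigma = [s * math.exp(tau0 * global_draw + tau * random.gauss(0, 1))
                 for s in sigma]       # per-gene N_i(0,1): individual control
    new_x = [xi + si * random.gauss(0, 1) for xi, si in zip(x, new_sigma)]
    return new_x, new_sigma
```

Note that the step sizes are mutated first and the freshly mutated σ_i' are then used to perturb the x_i, so good step sizes are selected indirectly through the fitness of the individuals they produce.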

Let us note that in [12] exponentially decreasing mutation rates were successfully applied in a GA. Cross-fertilization of ideas from ES and GAs has led to using individual mutation rates that undergo self-adaptation in a GA with binary representation, [2]. The basic idea is to extend each individual with extra bits that represent the individual's own mutation rate. Mutation then happens by first decoding these extra bits to the individual's own mutation rate p, mutating the extra bits with probability p, again decoding the (now mutated) extra bits to obtain the mutation rate p', which is finally applied to the 'normal' bits of this individual. Recently, floating point representation is gaining recognition in GAs for numerical function optimization. Mutating a gene can then happen by uniform mutation, replacing x_i by a number x_i' drawn from its domain by a uniform distribution, or by non-uniform mutation, where

  x_i' = x_i + Δ(t, UB_i - x_i)  if a random digit is 0
  x_i' = x_i - Δ(t, x_i - LB_i)  if a random digit is 1

UB_i and LB_i being the upper, respectively lower, bound of the domain and t the generation number (time counter), [25]. The function Δ(t, y) returns a value from [0, y] getting closer to 0 as t increases. This mechanism yields a dynamically changing mutation step size. This resembles ES and EP, but differs from them in an important aspect: while in ES and EP step sizes are adapted by the evolutionary process itself, here the changes follow a previously determined schedule. An interesting GA using bit representation combined with Gaussian mutation is presented in [20, 21]. The individuals are binary coded and undergo crossover acting on bits as usual. However, mutation is applied to the decoded values of the object variables x_i and not to their binary codes. Before mutation, random numbers drawn from a Poisson distribution determine the object variables that are to be mutated.
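The non-uniform mutation above can be sketched as follows. The concrete shape of Δ(t, y) below, y·(1 - r^((1 - t/T)^b)), is one commonly used choice (with horizon T and shape parameter b); the paper only requires that Δ(t, y) lies in [0, y] and shrinks towards 0 as t grows, so this particular formula is an assumption.

```python
import random

def delta(t, y, T=100, b=5.0):
    """One common choice of Delta(t, y): a random value in [0, y] whose
    expected size shrinks to 0 as the generation counter t approaches T
    (an assumption; the paper only states this qualitative behaviour)."""
    r = random.random()
    return y * (1 - r ** ((1 - t / T) ** b))

def nonuniform_mutate(x, lb, ub, t, T=100):
    """Michalewicz-style non-uniform mutation of a real-valued individual."""
    out = []
    for xi, lo, hi in zip(x, lb, ub):
        if random.random() < 0.5:          # "random digit is 0"
            out.append(xi + delta(t, hi - xi, T))
        else:                              # "random digit is 1"
            out.append(xi - delta(t, xi - lo, T))
    return out
```

Because Δ(t, y) never exceeds y, the mutated value always stays inside [LB_i, UB_i], and at t = T the perturbation vanishes entirely, realizing the predetermined annealing schedule the text contrasts with ES/EP self-adaptation.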
Mutation itself works by decoding that part of the binary code that represents a selected variable x_i to a real value and perturbing it by Gaussian noise from N(0, σ_i), where σ_i is 0.1 times the maximum value of that gene. In the self-adapting version of this mechanism an extra gene, coded by a binary sequence, is added to the (front of the) individual. During mutation the extra bits coding the extra gene are decoded to a value σ, this value is perturbed by Gaussian noise N(0, α) (α is a control parameter), resulting in σ', and finally the object variables are mutated according to an N(0, σ') distribution.

2.2 Crossover

As for sexual reproduction: the binary operator in GAs is called crossover; it creates two children from two parents. The crossover rate p_c prescribes the probability that a selected pair of would-be parents will actually mate, i.e. that crossover will be applied to them. It is thus used before crossing over. The interpretation of p_m and p_c is therefore different in GAs with bit representation. There are two basic crossover mechanisms: n-point crossover and uniform crossover.

The n-point crossover cuts the two parents of length L into n + 1 segments along randomly chosen crossover points (the same points in both parents) and creates child 1 by gluing the odd segments from the father and the even segments from the mother. In child 2, odd segments come from the mother and even segments come from the father. In the majority of GA applications 1-point or 2-point crossover is used; higher n's are applied seldom. Uniform crossover decides per bit whether to insert the bit value from the father or from the mother in child 1; child 2 is created by taking the opposite decisions. Order-based representation requires that each individual is a permutation. Applying n-point or uniform crossover to permutations may create children containing multiple occurrences of genes, i.e. children of permutations will not necessarily be permutations. This fact has led to the design of special order-based crossovers that do preserve the property of 'being a permutation', [16, 32]. One example is the order crossover OX, which resembles 2-point crossover. OX cuts the parents into three segments along two randomly chosen crossover points. Child 1 inherits the middle segment from parent 1 without modification. The remaining positions are filled with genes from parent 2 in the order in which they appear, beginning with the first position after the second crossover point and skipping genes already present in child 1. Child 2 is created alternatively. Clearly, tree-based representation in GP requires special crossover operators too. The standard GP crossover works by randomly choosing one node in parent 1 and in parent 2 independently. These nodes define a subtree in each parent; crossover exchanges these subtrees by cutting them from parent x and inserting them at the crossover node in parent y. In Evolution Strategies there are different sexual recombination mechanisms, all creating one child from a number of parents.
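The order crossover OX described above can be sketched as follows; this is a minimal illustration, not the reference implementation of [16, 32].

```python
import random

def order_crossover(p1, p2):
    """Order crossover (OX): each child keeps the middle segment of one
    parent; the remaining positions are filled with the other parent's
    genes in the order they appear after the second cut point, skipping
    genes already present."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))   # two distinct cut points

    def make_child(keep, fill):
        child = [None] * n
        child[a:b + 1] = keep[a:b + 1]          # inherit middle segment
        used = set(child[a:b + 1])
        # scan `fill` starting right after the second cut point, wrapping
        order = [fill[(b + 1 + i) % n] for i in range(n)]
        genes = (g for g in order if g not in used)
        for i in range(b + 1, b + 1 + n):       # fill positions, wrapping
            j = i % n
            if child[j] is None:
                child[j] = next(genes)
        return child

    return make_child(p1, p2), make_child(p2, p1)
```

Because every gene is placed exactly once, both children are guaranteed to be permutations, which is precisely the property that plain n-point and uniform crossover fail to preserve.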
The usual form of recombination, [3], produces one new individual x' from two parents x and y by

  x_i' = x_i or y_i                (discrete recombination)
  x_i' = x_i + χ · (y_i - x_i)     (intermediate recombination)

Discrete recombination is pretty much like uniform crossover creating only one child. Intermediate recombination (where χ ∈ [0, 1] is a uniform random variable) makes a child that is the weighted average of the two parents. Just like mutation, recombination is applied to the object variables (the x_i's) as well as to the strategy parameters (the σ_i's). Empirically, discrete recombination on object variables and intermediate recombination on strategy parameters gives the best results. Similar operators have also been introduced for GAs with real-valued representation applied to convex search spaces [25]. These operators create two children from two parents. The so-called simple crossover chooses a random crossover point k and crosses the parents after the k-th position, applying a contraction coefficient a to guarantee that the offspring fall inside the convex search space. In particular, if x and y are the two parents, then child 1 is

  <x_1, ..., x_k, a·y_{k+1} + (1 - a)·x_{k+1}, ..., a·y_n + (1 - a)·x_n>

and child 2 is obtained by interchanging x and y. Single arithmetical crossover affects only one gene in each parent. The children of x and y obtained by this

mechanism are:

  <x_1, ..., a·y_k + (1 - a)·x_k, ..., x_n>  and  <y_1, ..., a·x_k + (1 - a)·y_k, ..., y_n>

where a is a dynamic parameter, drawn randomly from a domain calculated for each pair of parents in such a way that the children are in the convex space. Whole arithmetical crossover is somewhat similar to intermediate recombination in ES in that it creates the linear combinations a·x + (1 - a)·y and a·y + (1 - a)·x as children of x and y. Here again, the choice of a has to guarantee that the children are in the convex space. A very particular feature within the EP paradigm is the total absence of sexual reproduction mechanisms. Mutation is the only search operator, and 'recombination in evolutionary programming is a nonissue because each solution is typically viewed as the analog of a species, and there is no sexual communication between species.', cf. [13] p. 103. Indeed, this view justifies the omission of recombination; however, it is only justified by the human operator's freedom in taking any arbitrary perspective. Nevertheless, the fact that apparently all higher species apply sexual recombination indicates that it does have a competitive advantage in natural evolution. This ought to make one cautious and not reject it a priori.

3 Sex or no sex: importance of crossover and mutation

The traditional views of the EA streams on genetic operators are summarized in Table 1, after [3].

                 ES                             EP             GA
  mutation       main operator                  only operator  background operator
  recombination  important for self-adaptation  none           main operator

  Table 1. Role of mutation and recombination in different EA paradigms

Triggered by the provoking success of EP without recombination, as well as by 'introspective' motives in GA and ES, there is more and more research devoted to the usefulness of mutation and recombination. Although neither GAs nor EP were originally developed as function optimizers, [5, 13], such investigations are usually performed on function optimization problems.
Classical GA investigations tried to obtain a robust parameter setting concerning mutation rate and crossover rate, in combination with the pool size [4, 17, 27]. The generally acknowledged good heuristic values for the mutation rate and the crossover rate are 1/(chromosome length), respectively 0.7-0.8. Recent studies, for instance [2, 12, 19, 23], provide theoretical and experimental evidence that decreasing the mutation rate along the evolution is optimal. This suggests that mutation is important at the beginning of the search but becomes 'risky' at the end. In the meanwhile, GA 'public opinion' lately acknowledges that mutation has a more important role than (re)introducing absent or lost genes in the population. As far as recombination is concerned, the main area of interest is traditionally the crossover's ability to combine and/or disrupt pieces of information, called schemata. Formally, a schema can be seen as a partial instantiation of the variables, i.e. a string based on the ternary alphabet {0, 1, #}, where # means undefined or don't care. Investigations in, for instance, [6, 10, 30, 31] accumulated substantial knowledge on the disruptiveness of n-point and uniform crossover. The most important observation from [6] concerns the relationship between disruptiveness and the power to recombine schemata: '... if one operator is better than another for survival, [i.e. less disruptive] it is worse for recombination (and vice versa). This... suggests very strongly that one cannot increase the recombination power without a corresponding increase in disruption.' Another important feature of crossover operators is their exploration power, that is, the variety of different regions in the search space that can be sampled; in other words, the number of different chromosomes that can be created by applying the operator. Clearly, 'exploration in the form of new recombinations comes at the cost of disruption', [10]. Recent research has thus shown that disruption is not necessarily disadvantageous. Nevertheless, no general conclusions on the relationship between the disruptiveness of genetic operators and the quality of the GA they are applied in could be established so far.
Most probably this relation is (to some extent) problem dependent. Besides comparisons of crossover operators, the relative importance, i.e. search power, of sexual recombination and asexual, unary operators is investigated. The experiments reported in [28] show that mutation and selection are more powerful than previously believed. Nevertheless, it is observed that a GA with highly disruptive crossover outperforms a GA with mutation alone on problems with a low level of interaction between genes. It is also remarked that the power of an operator is strongly related to the selection mechanism. In [11] 'crossover's niche' is sought, i.e. problems where pair-wise mating has competitive advantages. This niche turns out to be non-empty; in the meanwhile the authors suspect that sexual recombination in GAs might be less powerful than generally believed. The usefulness of recombination is investigated in [22] on NK landscapes, which allow gradual tuning of the ruggedness of the landscape. The conclusion is that 'recombination is useless on uncorrelated landscapes, but useful when high peaks are near one another and hence carry mutual information about their joint locations in the fitness landscape'. The view presented in [30] relativizes the mutation-or-crossover battle by noting that the two operators serve different purposes: 'Mutation serves to create random diversity in the population, while crossover serves as an accelerator that promotes emergent behavior from components.' Additionally,

crossover is useful for maximizing the accumulated payoff, while it can be harmful if optimality is sought. Based on a comparison of EP and GA, it is argued in [14] that crossover does not have any competitive advantage over mutation. Critiques of this investigation, for instance in [28], initiated more extensive comparisons of GAs and EP on function optimization problems. The results in [15] show a clear advantage of EP and make the authors conclude that 'no consistent advantage accrues from representing real-valued parameters as binary strings and allowing crossover to do within-parameter search.' Summarizing, at the moment much effort is paid to establishing the (dis)advantages of sexual, respectively asexual, reproduction in simulated evolution. So far there is no generally valid judgement, and it is possible that preferences for either type of recombination are context and goal dependent.

4 More sex: n-ary operators

N-ary recombination operators were first introduced in ES. The global form of recombination produces one new individual x' by

  x_i' = x_i or y_i                  (global discrete recombination)
  x_i' = x_i + χ_i · (y_i - x_i)     (global intermediate recombination)

where x and y are two parent individuals selected at random from the whole population anew for each i, and drawing χ_i ∈ [0, 1] also happens for every i independently, [3]. Thus, although for each component x_i only two mating partners are consulted, the resampling mechanism causes the child to possibly inherit genes from more parents. This results in a higher mixing of the genetic information than the 2-parent versions, [1]. In GAs there are three general multi-parent crossover mechanisms. Scanning crossover and diagonal crossover were introduced in [7], where the performance of scanning was studied. In [9] diagonal crossover was investigated, compared to scanning and the classical 2-parent n-point crossover. An extensive overview of experiments is given in [8].
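The global recombination of ES, with its per-component resampling of parents, can be sketched as follows; the function name and the population-as-list-of-lists representation are ours, for illustration only.

```python
import random

def global_recombination(population, mode="discrete"):
    """ES global recombination: for every component i, two parents are
    drawn anew from the whole population. 'discrete' copies one of the
    two genes at random; 'intermediate' blends them with a fresh chi
    drawn uniformly from [0, 1] for each component."""
    n = len(population[0])
    child = []
    for i in range(n):
        x, y = random.sample(population, 2)     # fresh parent pair per gene
        if mode == "discrete":
            child.append(random.choice((x[i], y[i])))
        else:                                   # intermediate
            chi = random.random()
            child.append(x[i] + chi * (y[i] - x[i]))
    return child
```

Because the parent pair is redrawn for every component, a single child can end up carrying genetic material from many more than two individuals, which is exactly the higher mixing effect noted in the text.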
Scanning crossover generalizes uniform crossover, although creating only one child, by choosing one of the i-th genes of the n parents to be the i-th gene of the child. Formally,

  x_i' = C(x_i^1, ..., x_i^n)

where C is a choice mechanism choosing a value C(u_1, ..., u_n) from u_1, ..., u_n. The choice can be random, based on a uniform distribution (uniform scanning), or biased by the fitness of the parents (fitness-based scanning). It can also be deterministic, based on the number of occurrences of the genes (occurrence-based scanning). Diagonal crossover generalizes n-point crossover by selecting (n - 1) crossover points and composing n children by taking the resulting n chromosome segments from the parents 'along the diagonals'. Figure 2 illustrates this idea. The Gene Pool Recombination (GPR) in GAs was introduced in [26]. It was further studied and extended with a fuzzy mechanism in [33]. In GPR first

a set of parents has to be selected; these form the so-called gene pool. Then the two parent alleles of an offspring are chosen randomly for each position, with replacement, from the gene pool (as opposed to ES, where they are chosen from the whole population). The offspring's allele is computed using any of the standard recombination schemes that work on two parents.

[Fig. 2. Diagonal crossover with three parents]

GPR allows for theoretical analysis of GA convergence for infinite populations with binomial fitness distribution. As for the performance of multi-parent recombination, at the moment there are not many accessible results within ES. In [29] p. 146 the following is stated: 'A crude test yielded only a slight further increase in the rate of progress in changing from the bisexual to the multisexual scheme, whereas appreciable acceleration was achieved by introducing the bisexual in place of the asexual scheme, which allowed no recombination.' Scanning crossover and diagonal crossover show increased performance when using more parents on several numerical functions [7, 9, 8], and ongoing research proves the same on NK landscapes. GPR was reported to converge 25% faster than two-parent recombination (TPR) on the ONEMAX problem [26], and the fuzzified version of GPR proved to outperform fuzzified TPR in speed as well as in realized heritability on the spherical function [33].

5 Conclusions

Evolutionary Algorithms perform search by means of creating genetic diversity by genetic operators and reducing this diversity by selection. There exists a great variety of genetic operators, of which this paper gave an overview. Forced by space constraints this overview was necessarily limited; there are numerous other operators developed for special purposes. Grouping genetic operators by arity, we discussed asexual reproduction (mutation) and sexual recombination between two parents.
The importance of sex in Evolution, more technically the relative power of unary and binary operators, is a hot issue in recent EA research. We briefly reviewed some of the related results, noting that at the moment none

of them can be given a clear preference. At the end we touched on a new phenomenon: sexual recombination between more than two parents. (This is multi-parent, rather than multi-sexual, recombination, as there are no genders in EAs.) The first results on multi-parent operators in GAs are very promising, showing that they lead to enhanced search. In other words, sexual recombination is more powerful than previously believed; it just takes more sex, i.e. more parents. Although a lot of work is still to be done, this might give new impetus to the sex-or-no-sex debate.

References

1. T. Bäck, F. Hoffmeister and H.-P. Schwefel. A survey of evolution strategies. In Fourth International Conference on Genetic Algorithms, pages 2-9. Morgan Kaufmann, 1991.
2. T. Bäck. The interaction of mutation rate, selection, and self-adaptation within a genetic algorithm. In Parallel Problem Solving from Nature - 2, pages 85-94. North-Holland, 1992.
3. T. Bäck and H.-P. Schwefel. An overview of evolutionary algorithms for parameter optimization. Journal of Evolutionary Computation, 1:1-23, 1993.
4. K.A. De Jong. An analysis of the behavior of a class of genetic adaptive systems. Doctoral dissertation, University of Michigan, 1975.
5. K.A. De Jong. Are genetic algorithms function optimizers? In R. Männer and B. Manderick, editors, Parallel Problem Solving from Nature - 2, pages 3-13. North-Holland, 1992.
6. K.A. De Jong and W.M. Spears. A formal analysis of the role of multi-point crossover in genetic algorithms. Annals of Mathematics and Artificial Intelligence, 5:1-26, 1992.
7. A.E. Eiben, P-E. Raue, and Zs. Ruttkay. Genetic algorithms with multi-parent recombination. In Parallel Problem Solving from Nature - 3, LNCS 866, pages 78-87. Springer-Verlag, 1994.
8. A.E. Eiben and C.H.M. van Kemenade. Performance of multi-parent crossover operators on numerical function optimization problems. Technical Report TR-95-33, Leiden University, 1995.
9. A.E. Eiben, C.H.M. van Kemenade, and J.N. Kok.
Orgy in the computer: Multiparent reproduction in genetic algorithms. In Third European Conference on Articial Life, LNAI 929, pages 934{945. Springer-Verlag, 1995. 10. L.J. Eshelman and R.A. Caruana andj.d. Schaer. Biases in the crossover landscape. In Third International Conference on Genetic Algorithms, pages 10{19, 1989. 11. L.J. Eshelman and J.D. Schaer. Crossover's niche. In Fifth International Conference on Genetic Algorithms, pages 9{14, 1993. 12. T. Fogarty. Varying the probability of mutation in the genetic algorithm. In Third International Conference on Genetic Algorithms, pages 104{109, 1989. 13. D.B. Fogel. Evolutionary Computation. IEEE Press, 1995. 14. D.B. Fogel and J.W. Atmar. Comparing genetic operators with gaussian mutations in simulated evolutionary processes using linear systems. Biological Cybernetics, 63:111{114, 1990.

15. D.B. Fogel and L.C. Stayton. On the eectiveness of crossover in simulated evolutionary optimization. Biosystems, 32:3:171{182, 1994. 16. B.R. Fox and M.B. McMahon. Genetic operators for sequencing problems. In Foundations of Genetic Algorithms - 1, pages 284{300, 1991. 17. D.E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, 1989. 18. J. Heitkotter and D. Beasley. The Hitch-Hiker's Guide to Evolutionary Computation. FAQ for comp.ai.genetic, 1996. 19. Hesser and Manner. Towards an optimal mutation probability for genetic algorithms. In Parallel Problem Solving from Nature - 1, LNCS 496, pages 23{32. Springer-Verlag, 1990. 20. R. Hinterding. Gaussian mutation and self-adaptation for numeric genetic algorithms. In Second IEEE conference on Evolutionary Computation, pages 384{389, 1995. 21. R. Hinterding, H. Gielewski, and T.C. Peachey. The nature of mutation in genetic algorithms. In Sixth International Conference on Genetic Algorithms, pages 65{72, 1995. 22. W. Hordijk and B. Manderick. The usefulness of recombination. In Third European Conference on Articial Life, LNAI 929, pages 908{919. Springer-Verlag, 1995. 23. B.A. Julstrom. What have you done for me lately? adapting operator probabilities in a steady-state genetic algorithm. In Sixth International Conference on Genetic Algorithms, pages 81{87, 1995. 24. J.R. Koza. Genetic Programming. MIT Press, 1992. 25. Z. Michalewicz. Genetic Algorithms + Data structures = Evolution programs. Springer-Verlag, second edition, 1994. 26. H. Muhlenbein and H.-M. Voigt. Gene pool recombination in genetic algorithms. In Proc. of the Metaheuristics Conference. Kluwer Academic Publishers, 1995. 27. J.D. Schaer R.A. Caruana, L.J. Eshelman and R. Das. A study of control parameters aecting the online performance of genetic algorithms. In Third International Conference on Genetic Algorithms, pages 51{60, 1989. 28. J.D. Schaer and L.J. Eshelman. 
On crossover as an evolutionary viable strategy. In Fourth International Conference on Genetic Algorithms, pages 61{68, 1991. 29. H.-P. Schwefel. Evolution and Optimum Seeking. Sixth-Generation Computer Technology Series. Wiley, New York, 1995. 30. W.M. Spears. Crossover or mutation? In Foundations of Genetic Algorithms - 2, pages 221{238, 1993. 31. G. Syswerda. Uniform crossover in genetic algorithms. In Third International Conference on Genetic Algorithms, pages 2{9, 1989. 32. T. Starkweather, S. McDaniel K. Mathias D. Whitley and C. Whitley. A comparison of genetic sequenceing operators. In Fourth International Conference on Genetic Algorithms, pages 69{76, 1991. 33. H.-M. Voigt and H. Muhlenbein. Gene pool recombination and utilization of covariances for the Breeder Genetic Algorithm. In Second IEEE conference on Evolutionary Computation, pages 172{177, 1995. This article was processed using the LATEX macro package with LLNCS style
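To make the idea of recombination with more than two parents concrete, the following is a minimal sketch of uniform scanning crossover, one of the multi-parent operators studied in the GA literature: each gene of the offspring is drawn from the corresponding gene of one of the n parents, chosen uniformly at random. The function name and the binary encoding are illustrative choices, not taken from the text.

```python
import random

def scanning_crossover(parents):
    """Uniform scanning crossover with n parents.

    Each gene of the single offspring is copied from the corresponding
    position of a uniformly chosen parent. With n = 2 this reduces to
    ordinary uniform crossover.
    """
    length = len(parents[0])
    return [random.choice(parents)[i] for i in range(length)]

# Example: three binary parents of length 8.
random.seed(0)
parents = [[0] * 8, [1] * 8, [0, 1] * 4]
child = scanning_crossover(parents)
```

Every gene of `child` is guaranteed to occur at the same position in at least one parent, so the operator only recombines existing genetic material; increasing the number of parents widens the pool each gene is drawn from.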