Introduction to Artificial Intelligence. Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST

1 Introduction to Artificial Intelligence Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST

2 Chapter 9 Evolutionary Computation

3 Introduction Intelligence can be defined as the capability of a system to adapt its behavior to an ever-changing environment. We are products of evolution, so by modelling the process of evolution we might expect to create intelligent behavior. Evolutionary computation simulates evolution on a computer. Such a simulation results in a series of optimization algorithms based on a simple set of rules. Optimization iteratively improves the quality of solutions until an optimal, or at least feasible, solution is found.

4 Introduction Is evolution really intelligent? If an organism survives over successive generations, we can say that it has learned to predict changes in its environment. Evolution is a slow process from the human perspective, but the computer simulation of evolution does not take billions of years! The evolutionary approach to intelligent system design is based on computational models of natural selection and genetics. We call it evolutionary computation; it includes genetic algorithms and evolution strategies, and it simulates evolution by using the processes of selection, mutation and reproduction.

5 Simulation of natural evolution On July 1st, 1858, Charles Darwin presented his theory of evolution; this day marks the beginning of a revolution in biology. Darwin's classical theory of evolution, together with some other concepts, now represents Neo-Darwinism. Neo-Darwinism is based on the processes of reproduction, mutation, competition and selection.

6 Simulation of natural evolution The power to reproduce appears to be an essential property of life. The power to mutate is also guaranteed in any living organism that reproduces itself in a continuously changing environment. Processes of competition and selection normally take place in the natural world, where expanding populations of different species are limited by a finite space.

7 Simulation of natural evolution If the process of evolution is to be emulated on a computer, what is optimized by evolution in natural life? Evolution can be seen as a process leading to the maintenance or increase of a population's ability to survive and reproduce in a specific environment. This ability is called evolutionary fitness. To illustrate fitness, for example, we can represent an environment by a landscape where each peak corresponds to the optimized fitness of a species. As evolution takes place, each species moves up the slopes of the landscape towards the peaks.

8 Simulation of natural evolution Environmental conditions change over time, so the species have to continuously adjust their routes; only the fittest can reach the peaks. The goal of evolution is to generate a population of individuals with increasing fitness. How is a population with increasing fitness generated? For example, some rabbits are faster than others, and we may say that these rabbits possess superior fitness because they have a greater chance of avoiding foxes, surviving and then breeding.

9 Simulation of natural evolution Of course, some slower rabbits may survive too. Some slow rabbits breed with fast rabbits, some fast with other fast rabbits, and some slow rabbits with other slow rabbits. The breeding generates a mixture of rabbit genes. If two parents have superior fitness, there is a good chance that a combination of their genes will produce an offspring with even higher fitness. Over time, the entire population of rabbits becomes faster to meet their environmental challenges in the face of foxes.

10 Simulation of natural evolution However, environmental conditions could change in favor of smart rabbits. To optimize survival, the genetic structure of the rabbit population will change accordingly. At the same time, faster and smarter rabbits encourage the breeding of faster and smarter foxes. Natural evolution is a never-ending process.

11 Simulation of natural evolution Can we simulate the process of natural evolution in a computer? Several different methods of evolutionary computation are now known. They all simulate natural evolution, generally by creating a population of individuals, evaluating their fitness, generating a new population through genetic operations, and repeating this process a number of times. There are different ways of performing evolutionary computation. First we start with genetic algorithms.

12 Genetic algorithms John Holland first introduced the concept of genetic algorithms (GA). Holland's GA can be represented by a sequence of procedural steps for moving from one population of artificial chromosomes to a new population. It uses natural selection and genetics-inspired techniques known as crossover and mutation. Each chromosome consists of a number of genes, and each gene is represented by 0 or 1, as shown in Figure 7.1.

13 Genetic algorithms Nature finds good chromosomes blindly. GAs do the same. Two mechanisms link a GA to the problem it solves: encoding and evaluation. The encoding is carried out by representing chromosomes as strings of ones and zeros. The evaluation function is used to measure the chromosome's performance, or fitness, for the problem to be solved.

14 Genetic algorithms The GA uses a measure of fitness of individual chromosomes to carry out reproduction. As reproduction takes place, the crossover operator exchanges parts of two single chromosomes, and the mutation operator changes the gene value in some randomly chosen location of the chromosome. After a number of successive reproductions, the less fit chromosomes become extinct, while those best able to survive gradually come to dominate the population. These simple reproduction mechanisms can display highly complex behavior and solve some difficult problems.

15 Genetic algorithms Figure 7.2 shows a basic GA.

16 Genetic algorithms GA applies the following major steps:

17 Genetic algorithms

18 Genetic algorithms Are any conventional termination criteria used in genetic algorithms? Because GAs use a stochastic search method, the fitness of a population may remain stable for a number of generations before a superior chromosome appears. This makes conventional termination criteria problematic. A common practice is to terminate a GA after a specified number of generations and then examine the best chromosomes in the population. If no satisfactory solution is found, the GA is restarted.
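To make these major steps concrete, here is a minimal sketch of a generational GA loop in Python. It is not the textbook's pseudocode: the helper functions (fitness, random_chromosome, select, crossover, mutate) are placeholders that are filled in by the example operators sketched later in this chapter, and termination is simply a fixed number of generations, as discussed above.

def run_ga(fitness, random_chromosome, select, crossover, mutate,
           pop_size=6, generations=100):
    # create a random initial population of chromosomes
    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):                  # fixed-generation termination
        scores = [fitness(c) for c in population]
        new_population = []
        while len(new_population) < pop_size:
            # select two parents with probability proportional to fitness
            parent1 = select(population, scores)
            parent2 = select(population, scores)
            # crossover and mutation produce two offspring
            child1, child2 = crossover(parent1, parent2)
            new_population += [mutate(child1), mutate(child2)]
        population = new_population[:pop_size]    # population size stays constant
    return max(population, key=fitness)           # best chromosome of the final generation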

19 Genetic algorithms A simple example will help us to understand how a GA works. Let us find the maximum value of the function 15x - x^2, where x varies between 0 and 15. For simplicity, we may assume that x takes only integer values. Thus, chromosomes can be built with four genes:

20 Genetic algorithms Suppose that the size of the chromosome population N is 6, the crossover probability p_c equals 0.7, and the mutation probability p_m equals 0.001. The fitness function in our example is defined by f(x) = 15x - x^2. The GA creates an initial population of chromosomes by filling six 4-bit strings with randomly generated ones and zeros.
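As a sketch of how this example can be coded (the function and variable names are my own, not the textbook's), a chromosome can be stored as a 4-character bit string:

import random

def decode(chrom):                      # 4-bit string -> integer x between 0 and 15
    return int(chrom, 2)

def fitness(chrom):                     # f(x) = 15x - x^2
    x = decode(chrom)
    return 15 * x - x * x

def random_chromosome(n_bits=4):        # random string of ones and zeros
    return ''.join(random.choice('01') for _ in range(n_bits))

population = [random_chromosome() for _ in range(6)]    # N = 6
print([(c, fitness(c)) for c in population])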

21 Genetic algorithms The initial population is shown in Table 7.1 (sum of the fitness values: 218).

22 Genetic algorithms The chromosomes initial locations on the fitness function are illustrated in Figure 7.3(a).

23 Genetic algorithms A real practical problem would typically have a population of thousands of chromosomes. The next step is to calculate the fitness of each individual chromosome. The results are also shown in Table 7.1. The average fitness of the initial population is 36. In order to improve it, the initial population is modified by using selection, crossover and mutation, the genetic operators.

24 Genetic algorithms In natural selection, only the fittest species can survive, breed, and thereby pass their genes on to the next generation. GAs use a similar approach, but unlike nature, the size of the chromosome population remains unchanged from one generation to the next.

25 Genetic algorithms How can we maintain the size of the population constant and, at the same time, improve its average fitness? The last column in Table 7.1 shows the ratio of the individual chromosome's fitness to the population's total fitness. This ratio determines the chromosome's chance of being selected for mating. Thus, the chromosomes X5 and X6 stand a fair chance, while the chromosomes X3 and X4 have a very low probability of being selected. As a result, the average fitness of the chromosome population improves from one generation to the next.

26 Genetic algorithms One of the commonly used chromosome selection techniques is the roulette wheel selection. Figure 7.4 illustrates the roulette wheel for our example. Each chromosome is given a slice of a circular roulette wheel; the area of the slice within the wheel is equal to the chromosome fitness ratio.

27 Genetic algorithms For instance, the chromosomes X5 and X6 (the most fit chromosomes) occupy the largest areas, whereas the chromosomes X3 and X4 (the least fit) have much smaller segments in the roulette wheel. To select a chromosome for mating, a random number is generated in the interval [0, 100], and the chromosome whose segment covers the random number is selected. It is like spinning a roulette wheel where each chromosome has a segment on the wheel proportional to its fitness. We spin the roulette wheel and when the arrow comes to rest on one of the segments, the corresponding chromosome is selected.

28 Genetic algorithms In our example, we have an initial population of six chromosomes. To keep the same population in the next generation, the roulette wheel is spun six times. The first two spins might select chromosomes X6 and X2 to become parents, the second pair of spins might choose chromosomes X1 and X5, and the last two spins might select chromosomes X2 and X5. Once a pair of parent chromosomes is selected, the crossover operator is applied.
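A minimal sketch of roulette wheel selection, spinning over the total fitness rather than percentages (which is equivalent), assuming fitness values are non-negative as in this example; the function name is mine:

import random

def roulette_select(population, scores):
    # each chromosome owns a slice of the wheel proportional to its fitness ratio
    total = sum(scores)
    spin = random.uniform(0, total)               # spin the wheel
    running = 0.0
    for chrom, score in zip(population, scores):
        running += score
        if running >= spin:
            return chrom
    return population[-1]                         # guard against floating-point rounding

# to keep a population of six, the wheel is spun six times:
# parents = [roulette_select(population, scores) for _ in range(6)]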

29 Genetic algorithms How does the crossover operator work? First, the crossover operator randomly chooses a crossover point where two parent chromosomes break, and then exchanges the chromosome parts after that point. As a result, two new offspring are created.

30 Genetic algorithms For example, the chromosomes X6 and X2 are crossed over after the second gene in each to produce the two offspring, as shown in the following figure.

31 Genetic algorithms If a pair of chromosomes does not cross over, then chromosome cloning takes place, and the offspring are created as exact copies of each parent. For example, the parent chromosomes X2 and X5 may not cross over; they create the offspring that are their exact copies (see the previous figure). A value of 0.7 for the crossover probability generally produces good results. After selection and crossover, the average fitness of the chromosome population has improved and gone from 36 to 42.
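A sketch of single-point crossover with cloning when no crossover takes place, assuming chromosomes are bit strings and p_c = 0.7 as above:

import random

def crossover(parent1, parent2, p_c=0.7):
    if random.random() > p_c:
        return parent1, parent2                   # cloning: exact copies of the parents
    point = random.randint(1, len(parent1) - 1)   # random crossover point
    child1 = parent1[:point] + parent2[point:]    # exchange the parts after that point
    child2 = parent2[:point] + parent1[point:]
    return child1, child2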

32 Genetic algorithms What does mutation represent? Its role is to provide a guarantee that the search algorithm is not trapped on a local optimum. The sequence of selection and crossover operations may stagnate at any homogeneous set of solutions. Under such conditions, all chromosomes are identical, and thus the average fitness of the population cannot be improved. The solution might become optimal, or rather locally optimal, because the search algorithm cannot proceed any further. Mutation is equivalent to a random search, and helps us to avoid the loss of genetic diversity.

33 Genetic algorithms How does the mutation operator work? The mutation operator flips a randomly selected gene in a chromosome. For example, the chromosome X1 is mutated in its second gene, and the chromosome X2 in its third gene, as shown in the following figure.

34 Genetic algorithms Mutation can occur at any gene in a chromosome with some probability. The mutation probability is quite small in nature, and is kept quite low for GAs, typically in the range between 0.001 and 0.01. Genetic algorithms assure the continuous improvement of the average fitness of the population, and after a number of generations (typically several hundred) the population evolves to a near-optimal solution.
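A sketch of the mutation operator as independent bit flips; the default probability here is an assumption within the typical range just mentioned:

import random

def mutate(chrom, p_m=0.001):
    # flip each gene independently with a small probability
    return ''.join(('1' if bit == '0' else '0') if random.random() < p_m else bit
                   for bit in chrom)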

35 Genetic algorithms In our example, the final population consists of only the chromosomes 0111 and 1000 (x = 7 and x = 8, where 15x - x^2 reaches its maximum over the integers). The chromosomes' final locations on the fitness function are illustrated in Figure 7.3(b).

36 Genetic algorithms

37 Genetic algorithms The previous example has only one variable. Suppose we want to find the maximum of the peak function of two variables, where the parameters x and y vary between -3 and 3. The first step is to represent the problem variables as a chromosome: we represent the parameters x and y as a concatenated binary string, in which each parameter is represented by eight binary bits.

38 Genetic algorithms Then, we choose the size of the chromosome population, for instance 6, and randomly generate an initial population. The next step is to calculate the fitness of each chromosome. This is done in two stages: First, a chromosome is decoded by converting it into two real numbers, x and y, in the interval between -3 and 3. Then the decoded values of x and y are put into the peak function.

39 Genetic algorithms How is decoding done? First, a chromosome, that is, a string of 16 bits, is partitioned into two 8-bit strings. Then these strings are converted from binary (base 2) to decimal (base 10).

40 Genetic algorithms How is decoding done? The range of integers that can be handled by 8 bits is from 0 to (2^8 - 1) = 255. This range must be mapped onto the actual range of the parameters x and y, that is, from -3 to 3. The mapping step is 6 / (2^8 - 1) = 0.0235294. To obtain the actual values of x and y, we multiply their decimal values by 0.0235294 and subtract 3 from the results.
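A sketch of this decoding step; the example chromosome in the comment is only an illustration I computed, not a value from the text:

def decode_xy(chrom):
    # split the 16-bit chromosome into two 8-bit parameter strings
    x_bits, y_bits = chrom[:8], chrom[8:]
    step = 6 / (2 ** 8 - 1)                 # = 0.0235294
    x = int(x_bits, 2) * step - 3           # map 0..255 onto -3..3
    y = int(y_bits, 2) * step - 3
    return x, y

# for instance, decode_xy('1000101000111011') gives x = 0.247..., y = -1.611...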

41 Genetic algorithms Using the decoded values of x and y as inputs to the mathematical function, the GA calculates the fitness of each chromosome. To find the maximum of the peak function, we use crossover with the probability equal to 0.7 and mutation with the probability equal to 0.001. Suppose the desired number of generations is 100; the GA will create 100 generations of 6 chromosomes before stopping.

42 Genetic algorithms Figure 7.6(a) shows the initial locations of the chromosomes on the contour plot of the peak function. Each chromosome is marked with a sphere. The initial population consists of randomly generated individuals.

43 Genetic algorithms Starting from the second generation, crossover begins to recombine features of the best chromosomes, and the population begins to converge on the peak containing the maximum, as shown in Figure 7.6(b). From then until the final generation, the GA is searching around this peak with mutation, resulting in diversity. Figure 7.6(c) shows the final chromosome generation. However, the population has converged on a chromosome lying on a local maximum of the peak function.

44 Genetic algorithms We are looking for the global maximum, so can we be sure the search has found the optimal solution? The most serious problem in the use of GAs concerns the quality of the results, in particular whether or not an optimal solution is being reached. One way of providing some degree of insurance is to compare results obtained under different rates of mutation. For example, increase the mutation rate to 0.01 and rerun the GA. The population now converges on the chromosomes shown in Figure 7.6(d).

45 Genetic algorithms To be sure of steady results we need to increase the size of the chromosome population. Fitness functions for real problems cannot be easily represented graphically. Instead, we can use performance graphs.

46 Genetic algorithms What is a performance graph? Since genetic algorithms are stochastic, their performance usually varies from generation to generation. So, a curve showing the average performance of the entire population of chromosomes as well as a curve showing the performance of the best individual in the population is a useful way of examining the behavior of a GA over the chosen number of generations.

47 Genetic algorithms Figures 7.7(a) and (b) show plots of the best and average values of the fitness function across 100 generations.

48 Genetic algorithms To ensure diversity and at the same time to reduce the harmful effects of mutation, we can increase the size of the chromosome population. Figure 7.8 shows performance graphs for 20 generations of 60 chromosomes; the population fitness converges on the nearly optimal solution.
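A sketch of how the data for a performance graph can be collected during a run; the variable names and the use of matplotlib are my own choices, not the textbook's:

best_history, avg_history = [], []

def record(population, fitness):
    scores = [fitness(c) for c in population]
    best_history.append(max(scores))                  # best individual
    avg_history.append(sum(scores) / len(scores))     # average of the population

# call record(population, fitness) once per generation inside the GA loop,
# then plot both curves, e.g.:
#   import matplotlib.pyplot as plt
#   plt.plot(best_history, label='best'); plt.plot(avg_history, label='average')
#   plt.xlabel('generation'); plt.ylabel('fitness'); plt.legend(); plt.show()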

49 Why genetic algorithms work A schema is a set of bit strings of ones, zeros and asterisks, where each asterisk can assume either value 1 or 0. The ones and zeros represent the fixed positions of a schema, while asterisks represent wild cards. For example, the schema 1**0 stands for a set of 4-bit strings in which each string begins with 1 and ends with 0.

50 Why genetic algorithms work What is the relationship between a schema and a chromosome? A chromosome matches a schema when the fixed positions in the schema match the corresponding positions in the chromosome. For example, the schema H = 1**0 matches the following set of 4-bit chromosomes: 1000, 1010, 1100 and 1110.

51 Why genetic algorithms work Each chromosome begins with 1 and ends with 0. These chromosomes are said to be instances of the schema H. The number of defined bits (non-asterisks) in a schema is called the order. The schema H, for example, has two defined bits, and thus its order is 2.
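A small sketch of schema matching and schema order for the schema H = 1**0; the function names are mine:

def matches(chrom, schema):
    # a chromosome matches a schema if it agrees on every defined (non-*) position
    return all(s == '*' or s == c for s, c in zip(schema, chrom))

def order(schema):
    return sum(1 for s in schema if s != '*')   # number of defined bits

H = '1**0'
print([c for c in ('1000', '1010', '1100', '1110', '0110') if matches(c, H)])
print(order(H))                                 # 2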

52 Why genetic algorithms work In short, genetic algorithms manipulate schema when they run. If GAs use a technique that makes the probability of reproduction proportional to chromosome fitness, then according to the schema theorem, we can predict the presence of a given schema in the next chromosome generation. In other words, we can describe the GA s behavior in terms of the increase or decrease in the number of instances of a given schema.

53 Why genetic algorithms work Assume that at least one instance of the schema H is present in the initial chromosome generation i. Let m_H(i) be the number of instances of the schema H in generation i, and f̂_H(i) be the average fitness of these instances. We want to calculate the number of instances in the next generation, m_H(i+1).

54 Why genetic algorithms work As the probability of reproduction is proportional to chromosome fitness, we can easily calculate the expected number of offspring of a chromosome x in the next generation: m_x(i+1) = f_x(i) / f̂(i), where f_x(i) is the fitness of the chromosome x, and f̂(i) is the average fitness of chromosome generation i. Then, assuming that the chromosome x is an instance of the schema H, we obtain m_H(i+1) = Σ f_x(i) / f̂(i), where the sum is taken over all chromosomes x that are instances of the schema H.

55 Why genetic algorithms work Since, by definition, f̂_H(i) = Σ f_x(i) / m_H(i) (again summing over the instances of H), we obtain m_H(i+1) = [f̂_H(i) / f̂(i)] × m_H(i), which is Eq. (7.3). So, a schema with above-average fitness will tend to occur more frequently in the next generation of chromosomes, and a schema with below-average fitness will tend to occur less frequently.

56 Why genetic algorithms work How about effects caused by crossover and mutation? Crossover and mutation can both create and destroy instances of a schema. Here we will consider only destructive effects, that is effects that decrease the number of instances of the schema H. Let us first quantify the destruction caused by the crossover operator. The schema will survive after crossover if at least one of its offspring is also its instance. This is the case when crossover does not occur within the defining length of the schema.

57 Why genetic algorithms work What is the defining length of a schema? The distance between the outermost defined bits of a schema is called the defining length. For example, the defining length of the schema H = 1**0 is 3, since its outermost defined bits are in the first and fourth positions. If crossover takes place within the defining length, the schema H can be destroyed and offspring that are not instances of H can be created. The schema H will not be destroyed if two identical chromosomes cross over, even when crossover occurs within the defining length.

58 Why genetic algorithms work Thus, the probability that the schema H will survive after crossover can be defined as: P_c(H) = 1 - p_c × l_d / (l - 1), where p_c is the crossover probability, and l and l_d are, respectively, the length and the defining length of the schema H. It is clear that the probability of survival under crossover is higher for short schemata than for long ones.

59 Why genetic algorithms work Consider now the destructive effects of mutation. Let p_m be the mutation probability for any bit of the schema H, and n be the order of the schema H. Then (1 - p_m) represents the probability that a defined bit will not be mutated, and thus the probability that the schema H will survive after mutation is determined as: P_m(H) = (1 - p_m)^n.

60 Why genetic algorithms work It is also clear that the probability of survival under mutation is higher for low-order schemata than for high-order ones. We can now amend Eq. (7.3) to take into account the destructive effects of crossover and mutation: m_H(i+1) ≥ [f̂_H(i) / f̂(i)] × m_H(i) × [1 - p_c × l_d / (l - 1)] × (1 - p_m)^n (7.6). This equation describes the growth of a schema from one generation to the next. It is known as the Schema Theorem. Because Eq. (7.6) considers only the destructive effects of crossover and mutation, it gives us a lower bound on the number of instances of the schema H in the next generation.
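A sketch that evaluates this lower bound exactly as written above; the numbers in the comment are only an illustration, not values from the text:

def schema_bound(m_H, f_H, f_avg, p_c, p_m, l, l_d, n):
    # m_H(i+1) >= m_H(i) * [f_H(i)/f(i)] * [1 - p_c*l_d/(l-1)] * (1 - p_m)**n
    growth = f_H / f_avg
    survive_crossover = 1 - p_c * l_d / (l - 1)
    survive_mutation = (1 - p_m) ** n
    return m_H * growth * survive_crossover * survive_mutation

# e.g. a 4-bit schema of order 2 and defining length 3 with above-average fitness:
# schema_bound(m_H=3, f_H=50, f_avg=36, p_c=0.7, p_m=0.001, l=4, l_d=3, n=2)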

61 Case study: maintenance scheduling with genetic algorithms There is no theoretical basis to support the claim that a GA will outperform other optimization techniques. Here we consider a simple application of the GA to the problem of scheduling resources, which is one of the most successful areas for GA applications. Scheduling problems are difficult to solve since they are complicated by many constraints. The key to the success of the GA is to define a fitness function that incorporates all these constraints.

62 Case study: maintenance scheduling with genetic algorithms Our problem is maintenance scheduling in modern power systems. This work must be carried out under several constraints and uncertainties, such as failures and forced outages of power equipment and delays in obtaining spare parts. Human experts usually work out the maintenance schedule by hand, and there is no guarantee that the optimum or even a near-optimum schedule is produced.

63 Case study: maintenance scheduling with genetic algorithms A typical process of the GA development includes the following steps: 1. Specify the problem, define constraints and optimum criteria. 2. Represent the problem domain as a chromosome. 3. Define a fitness function to evaluate the chromosome s performance. 4. Construct the genetic operators. 5. Run the GA and tune its parameters.

64 Case study: maintenance scheduling with genetic algorithms Step 1: Specify the problem, define constraints and optimum criteria This is the most important step in developing a GA, because if it is not correct and complete a viable schedule cannot be obtained. The purpose of maintenance scheduling is to find the sequence of outages of power units over a given period of time (normally a year) such that the security of a power system is maximized.

65 Case study: maintenance scheduling with genetic algorithms Any outage in a power system is associated with some loss in security. The security margin is determined by the system s net reserve. The net reserve is defined as the total installed generating capacity of the system minus the power lost due to a scheduled outage and minus the maximum load forecast during the maintenance period.

66 Case study: maintenance scheduling with genetic algorithms For instance, if we assume that the total installed capacity is 150MW and a unit of 20MW is scheduled for maintenance during the period when the maximum load is predicted to be 100MW, then the net reserve will be 30MW. Maintenance scheduling must ensure that sufficient net reserve is provided for secure power supply during any maintenance period.

67 Case study: maintenance scheduling with genetic algorithms Suppose there are seven power units to be maintained in four equal intervals. The maximum loads expected during these intervals are 80, 90, 65 and 70MW. The unit capacities and their maintenance requirements are presented in Table 7.2.

68 Case study: maintenance scheduling with genetic algorithms The constraints for this problem can be specified as follows: Maintenance of any unit starts at the beginning of an interval and finishes at the end of the same or adjacent interval; the maintenance cannot be finished earlier than scheduled. The net reserve of the power system must be greater than or equal to zero at any interval. The optimum criterion here is that the net reserve must be at the maximum during any maintenance period.

69 Case study: maintenance scheduling with genetic algorithms Step 2: Represent the problem domain as a chromosome Our scheduling problem is essentially an ordering problem, requiring us to list the tasks in a particular order. A complete schedule may consist of a number of overlapping tasks, but not all orderings are legal, since they may violate the constraints. Our job is to represent a complete schedule as a chromosome of a fixed length.

70 Case study: maintenance scheduling with genetic algorithms An obvious coding scheme is to assign each unit a binary number and let the chromosome be a sequence of these binary numbers. However, an ordering of the units in a sequence is not yet a schedule. Some units can be maintained simultaneously, and we must also incorporate the time required for unit maintenance into the schedule. Thus, rather than ordering units in a sequence, we might build a sequence of maintenance schedules of individual units.

71 Case study: maintenance scheduling with genetic algorithms The unit schedule can be easily represented as a 4-bit string, where each bit is a maintenance interval. If a unit is to be maintained in a particular interval, the corresponding bit assumes value 1, otherwise it is 0. For example, the string 0100 represents a schedule for a unit to be maintained in the second interval. It also shows that the number of intervals required for maintenance of this unit is equal to 1. Thus, a complete maintenance schedule for our seven-unit problem can be represented as a 28-bit chromosome.

72 Case study: maintenance scheduling with genetic algorithms Crossover and mutation operators could easily create binary strings that call for maintaining some units more than once and others not at all. In addition, we could call for maintenance periods that would exceed the number of intervals required for unit maintenance. A better approach is to change the chromosome syntax. A chromosome is a collection of elementary parts called genes; each gene is represented by only one bit and cannot be broken into smaller elements.

73 Case study: maintenance scheduling with genetic algorithms For our problem, we can adopt the same concept, but represent a gene by four bits. In other words, the smallest indivisible part of our chromosome is a 4-bit string. This representation allows crossover and mutation operators to act according to the theoretical grounding of genetic algorithms. What remains to be done is to produce a pool of genes for each unit:
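The pool of genes for a unit can be generated from the number of intervals its maintenance requires, following the constraint that maintenance occupies consecutive intervals; a minimal sketch (the function name is mine):

def gene_pool(duration, n_intervals=4):
    # all 4-bit genes with `duration` consecutive ones: the legal schedules for one unit
    pool = []
    for start in range(n_intervals - duration + 1):
        bits = ['0'] * n_intervals
        bits[start:start + duration] = ['1'] * duration
        pool.append(''.join(bits))
    return pool

print(gene_pool(1))     # ['1000', '0100', '0010', '0001']
print(gene_pool(2))     # ['1100', '0110', '0011']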

74 Case study: maintenance scheduling with genetic algorithms The GA can now create an initial population of chromosomes by filling 7-gene chromosomes with genes randomly selected from the corresponding pools. A sample of such a chromosome is shown in Figure 7.9.

75 Case study: maintenance scheduling with genetic algorithms Step 3: Define a fitness function to evaluate the chromosome performance The chromosome evaluation is a crucial part of the GA, because chromosomes should be selected for mating based on their fitness. The fitness function must capture what makes a maintenance schedule either good or bad for the user. For our problem we apply a fairly simple function concerned with constraint violations and the net reserve at each interval.

76 Case study: maintenance scheduling with genetic algorithms The evaluation of a chromosome starts with the sum of capacities of the units scheduled for maintenance at each interval. For the chromosome shown in Figure 7.9, we obtain:

77 Case study: maintenance scheduling with genetic algorithms Then these values are subtracted from the total installed capacity of the power system (in our case, 150 MW): And finally, by subtracting the maximum loads expected at each interval, we obtain the respective net reserves:

78 Case study: maintenance scheduling with genetic algorithms Since all the results are positive, this particular chromosome does not violate any constraint, and thus represents a legal schedule. The chromosome s fitness is determined as the lowest of the net reserves; in our case it is 20. If, however, the net reserve at any interval is negative, the schedule is illegal, and the fitness function returns zero. At the beginning of a run, a randomly built initial population might consist of all illegal schedules. In this case, chromosome fitness values remain unchanged, and selection takes place in accordance with the actual fitness values.
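A sketch of this fitness function; the unit capacities are not reproduced here (they are given in Table 7.2), so they are passed in as a parameter, and a chromosome is assumed to be a list of seven 4-bit gene strings:

def schedule_fitness(chromosome, capacities, max_loads, total_capacity=150):
    # chromosome: one 4-bit gene per unit; capacities: MW lost while a unit is on outage
    reserves = []
    for t in range(len(max_loads)):
        out = sum(cap for gene, cap in zip(chromosome, capacities) if gene[t] == '1')
        reserves.append(total_capacity - out - max_loads[t])   # net reserve at interval t
    if min(reserves) < 0:
        return 0                    # illegal schedule: a constraint is violated
    return min(reserves)            # fitness = the lowest of the net reserves

max_loads = [80, 90, 65, 70]        # maximum loads expected at the four intervals
# example call: schedule_fitness(chromosome, capacities, max_loads)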

79 Case study: maintenance scheduling with genetic algorithms Step 4: Construct the genetic operators Constructing genetic operators is challenging and we must experiment to make crossover and mutation work correctly. Each gene in a chromosome is represented by a 4-bit indivisible string, which encodes a possible maintenance schedule for a particular unit. Thus, any random mutation of a gene or recombination of several genes from two parent chromosomes may result in changes of the maintenance schedules for individual units.

80 Case study: maintenance scheduling with genetic algorithms Figure 7.10(a) shows an example of the crossover application during a run of the GA.

81 Case study: maintenance scheduling with genetic algorithms The children can be made by cutting the parents at the randomly selected point denoted by the vertical line and exchanging parental genes after the cut. Figure 7.10(b) demonstrates an example of mutation. The mutation operator randomly selects a 4-bit gene in a chromosome and replaces it by a gene randomly selected from the corresponding pool. In the example shown in Figure 7.10(b), the chromosome is mutated in its third gene, which is replaced by a gene chosen from the pool of genes for Unit 3.
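A sketch of these two operators, assuming a chromosome is a list of seven 4-bit genes and the pools come from the gene_pool sketch above; the mutation probability is an assumption:

import random

def crossover_schedules(parent1, parent2):
    # cut at a gene boundary so that 4-bit genes are never broken apart
    point = random.randint(1, len(parent1) - 1)
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

def mutate_schedule(chromosome, pools, p_m=0.001):
    # with a small probability, replace a unit's gene by another gene from that unit's pool
    child = list(chromosome)
    for i, pool in enumerate(pools):
        if random.random() < p_m:
            child[i] = random.choice(pool)
    return child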

82 Case study: maintenance scheduling with genetic algorithms Step 5: Run the GA and tune its parameters To run the GA, first, we must choose the population size and the number of generations. Common sense suggests that a larger population can achieve better solutions than a smaller one, but will work more slowly. The GA can run only a finite number of generations to obtain a solution. We could choose a very large population and run it only once, or we could choose a smaller population and run it several times.

83 Case study: maintenance scheduling with genetic algorithms Figure 7.11(a) presents performance graphs and the best schedule created by 50 generations of 20 chromosomes; the minimum of the net reserves for the best schedule is 15MW.

84 Case study: maintenance scheduling with genetic algorithms Figure 7.11(b) presents the results with 100 generations. The best schedule provides the minimum net reserve of 20MW. In both cases, the best individuals appeared in the initial generation, and increasing the number of generations did not affect the final solution. This indicates that we should try increasing the population size.

85 Case study: maintenance scheduling with genetic algorithms Figure 7.12(a) shows fitness function values across 100 generations, and the best schedule; the minimum net reserve has increased to 25MW. To make sure of the quality of the best schedule, we must compare results obtained under different rates of mutation. Let us increase the mutation rate to 0.01 and rerun the GA once more.

86 Case study: maintenance scheduling with genetic algorithms Figure 7.12(b) presents the results. The minimum net reserve is still 25MW. Now we can confidently say that the optimum solution has been found.

87 Evolution strategies Basically, evolution strategies can be used in technical optimization problems when no analytical objective function is available and no conventional optimization method exists, so that engineers have to rely only on their intuition. Unlike GAs, evolution strategies use only a mutation operator.

88 Evolution strategies How do we implement an evolution strategy? In its simplest form, the (1+1)-evolution strategy, one parent generates one offspring per generation by applying normally distributed mutation. The (1+1)-evolution strategy can be implemented as follows:

89 Evolution strategies
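A minimal sketch of a (1+1)-evolution strategy without step-size adaptation; the objective function, the standard deviation and the number of generations are assumptions, not values from the text:

import random

def one_plus_one_es(objective, x0, sigma=1.0, generations=1000):
    # one parent creates one offspring per generation by normally distributed mutation
    parent = list(x0)
    best = objective(parent)
    for _ in range(generations):
        # vary all the parameters simultaneously
        child = [p + random.gauss(0.0, sigma) for p in parent]
        score = objective(child)
        if score > best:            # the offspring replaces the parent only if it is fitter
            parent, best = child, score
    return parent, best

# e.g. maximize a simple two-parameter function:
# one_plus_one_es(lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2, [0.0, 0.0])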

90 Evolution strategies The (1+1)-evolution strategy can be represented as a block-diagram shown in Figure 7.13.

91 Evolution strategies Why do we vary all the parameters simultaneously when generating a new solution? An evolution strategy reflects the nature of a chromosome. A single characteristic of an individual may be determined by simultaneous interactions of several genes. Evolution strategies can solve a wide range of constrained and unconstrained non-linear optimization problems and produce better results than many conventional, highly complex, nonlinear optimization techniques.

92 Evolution strategies What are the differences between genetic algorithms and evolution strategies? The principal difference between a GA and an evolution strategy is that the former uses both crossover and mutation whereas the latter uses only mutation. In addition, when we use an evolution strategy we do not need to represent the problem in a coded form.

93 Evolution strategies Which method works best? An evolution strategy uses a purely numerical optimization procedure, similar to a Monte Carlo search. GAs have more general applications, but the hardest part of applying a GA is coding the problem. In general, to answer the question about which method works best, we have to experiment to find out; it is application-dependent.