Metaheuristic Optimization


Metaheuristic Optimization: 6. Random Walk
Thomas Weise (汤卫思)
Hefei University, South Campus 2, Faculty of Computer Science and Technology, Institute of Applied Optimization, Shushan District, Hefei, Anhui, China; Econ. & Tech. Devel. Zone, Jinxiu Dadao 99

Outline
1 Random Walks


Why do Hill Climbers work?
• Hill climbers work under the assumption that there is some structure in the search space.
• The chance that a good solution neighbors another good solution should be higher than the chance that it is surrounded by only bad solutions or placed at a random location.
• Based on this idea, the hill climber generates modified copies of the current solution and accepts them if they are better than the old solution (see the sketch below).
• What would happen if we always accepted them?
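In code, the whole difference lies in one acceptance test. Below is a minimal, runnable sketch on a toy one-dimensional objective; the class name, step operator, and objective are illustrative assumptions, not part of the course framework:

import java.util.Random;

public class HillClimbVsWalkStep {
  // toy objective: minimize f(x) = x^2
  static double f(double x) { return x * x; }

  // one hill-climbing step: create a modified copy, keep it only if it is not worse
  static double hillClimbStep(double current, Random rnd) {
    double trial = current + rnd.nextGaussian();
    return (f(trial) <= f(current)) ? trial : current;
  }

  // always accepting the modified copy turns the hill climber into a random walk
  static double randomWalkStep(double current, Random rnd) {
    return current + rnd.nextGaussian();
  }

  public static void main(String[] args) {
    Random rnd = new Random(1);
    double hc = 10.0, rw = 10.0;
    for (int i = 0; i < 1000; i++) {
      hc = hillClimbStep(hc, rnd);
      rw = randomWalkStep(rw, rnd);
    }
    System.out.println("hill climber: f = " + f(hc) + ", random walk: f = " + f(rw));
  }
}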


Random Walks
• Random walks [1-4] are also known as "drunkard's walks".
• They, too, do not utilize the information gathered during the search.
• They start at a (random) location and take random steps.


Random Walks

p_best ← randomWalk(f)
  Input: f: the objective function, subject to minimization
  Input: [implicit] shouldTerminate: the termination criterion
  Data: p_new: the new solution to be tested
  Output: p_best: the best individual ever discovered

  begin
    p_best.g ← create()
    p_best.x ← gpm(p_best.g)
    p_best.y ← f(p_best.x)
    p_new ← p_best
    while ¬shouldTerminate() do
      p_new.g ← mutation(p_new.g)
      p_new.x ← gpm(p_new.g)
      p_new.y ← f(p_new.x)
      if p_new.y ≤ p_best.y then p_best ← p_new
    return p_best

1 create the initial candidate solution p_best (and also store it in p_new)
2 derive a new solution p_new from p_new by mutation
3 if p_new is better than p_best, set p_best = p_new
4 go back to step 2, until the termination criterion is met


Random Walks
• Let us implement a random walk for
  1 numerical optimization (over R^n) and for
  2 combinatorial optimization (e.g., for the TSP over permutations).
• The two cases differ only in their search space and unary mutation operator (illustrative operators for both are sketched below).
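Here is a hedged sketch of one plausible mutation operator per representation; the method names, the Gaussian step size of 0.1, and the swap move are illustrative assumptions, not the course framework's operators:

import java.util.Random;

public class MutationOps {
  // numerical optimization over R^n: perturb each coordinate with small Gaussian noise
  static double[] mutateVector(double[] g, Random rnd) {
    double[] copy = g.clone();
    for (int i = 0; i < copy.length; i++) {
      copy[i] += 0.1 * rnd.nextGaussian(); // the step size 0.1 is an arbitrary choice
    }
    return copy;
  }

  // combinatorial optimization over permutations (e.g., TSP): swap two random cities
  static int[] mutatePermutation(int[] g, Random rnd) {
    int[] copy = g.clone();
    int i = rnd.nextInt(copy.length), j = rnd.nextInt(copy.length);
    int t = copy[i]; copy[i] = copy[j]; copy[j] = t;
    return copy;
  }
}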

Implementing the Random Walk

Listing: The Random Walk Algorithm

public class RandomWalk<G, X> extends OptimizationAlgorithm<G, X> {

  public Individual<G, X> solve(final IObjectiveFunction<X> f) {
    Individual<G, X> pstar, pnew;

    // create and evaluate the initial candidate solution
    pstar = new Individual<>();
    pnew = new Individual<>();
    pstar.g = this.nullary.create(this.random); // sample a random genotype
    pstar.x = this.gpm.gpm(pstar.g);            // map genotype to candidate solution
    pstar.v = f.compute(pstar.x);               // compute its objective value
    pnew.assign(pstar);

    while (!(this.termination.shouldTerminate())) {
      // take a random step: always continue from pnew, never reject a step
      pnew.g = this.unary.mutate(pnew.g, this.random);
      pnew.x = this.gpm.gpm(pnew.g);
      pnew.v = f.compute(pnew.x);

      // still remember the best individual ever discovered
      if (pnew.v <= pstar.v) {
        pstar.assign(pnew);
      }
    }

    return pstar;
  }
}
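To see the algorithm end-to-end without the course framework's classes, here is a minimal, self-contained sketch of a random walk over R^n minimizing the sphere function f(x) = Σ x_i²; the dimension, step size, starting range, and fixed evaluation budget are all illustrative assumptions:

import java.util.Arrays;
import java.util.Random;

public class RandomWalkSphere {
  // sphere function: global minimum 0 at the origin
  static double f(double[] x) {
    double s = 0;
    for (double v : x) { s += v * v; }
    return s;
  }

  public static void main(String[] args) {
    Random rnd = new Random(123);
    double[] pnew = new double[5];
    for (int i = 0; i < pnew.length; i++) { // random starting point in [-5, 5]^5
      pnew[i] = -5.0 + 10.0 * rnd.nextDouble();
    }
    double[] pstar = pnew.clone();          // best point ever discovered
    double vstar = f(pstar);

    for (int step = 0; step < 100_000; step++) { // fixed budget as termination criterion
      for (int i = 0; i < pnew.length; i++) {    // random step: mutate pnew, never reject
        pnew[i] += 0.1 * rnd.nextGaussian();
      }
      double v = f(pnew);
      if (v <= vstar) {                          // still remember the best-so-far
        vstar = v;
        pstar = pnew.clone();
      }
    }
    System.out.println("best value: " + vstar + " at " + Arrays.toString(pstar));
  }
}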


When is optimization effective?
• Now we have our first simple metaheuristic algorithm.
• Is it good?
• When is optimization efficient?
• Optimization is efficient when the information we get, e.g., from objective function evaluations, is used efficiently.
• We can check this by comparing against algorithms that do not use this information (see the sketch below).
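One hedged way to make this comparison concrete: give a hill climber, a random walk, and random sampling the same evaluation budget on the same toy objective and compare the best values they find. Everything below (sphere objective, dimension 10, step size, budget) is an illustrative assumption:

import java.util.Random;

public class BudgetComparison {
  // sphere function: all three searches receive the same evaluation budget on it
  static double f(double[] x) {
    double s = 0;
    for (double v : x) { s += v * v; }
    return s;
  }

  static double[] randomPoint(int n, Random rnd) {
    double[] x = new double[n];
    for (int i = 0; i < n; i++) { x[i] = -5.0 + 10.0 * rnd.nextDouble(); }
    return x;
  }

  static double[] mutate(double[] x, Random rnd) {
    double[] c = x.clone();
    for (int i = 0; i < c.length; i++) { c[i] += 0.05 * rnd.nextGaussian(); }
    return c;
  }

  public static void main(String[] args) {
    Random rnd = new Random(7);
    double[] hc = randomPoint(10, rnd);
    double[] rw = hc.clone();
    double hcV = f(hc), rwBest = hcV, rsBest = hcV;

    for (int i = 0; i < 50_000; i++) {
      double[] t = mutate(hc, rnd);      // hill climber: uses f to accept or reject
      double tv = f(t);
      if (tv <= hcV) { hc = t; hcV = tv; }

      rw = mutate(rw, rnd);              // random walk: same operator, but ignores f
      rwBest = Math.min(rwBest, f(rw));

      rsBest = Math.min(rsBest, f(randomPoint(10, rnd))); // random sampling: ignores f and locality
    }
    System.out.println("hill climber: " + hcV + "  random walk best: " + rwBest
        + "  random sampling best: " + rsBest);
  }
}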


Summary
• Random walks are like hill climbers, with the exception that they do not use the objective function to guide the search direction.
• Hence, their performance is much worse, similar to that of random sampling.
• This means that the idea of expecting some sort of gradient in the search space towards better solutions is reasonable.

谢谢 (Thank you)

Thomas Weise (汤卫思), tweise@hfuu.edu.cn
Hefei University, South Campus 2, Institute of Applied Optimization, Shushan District, Hefei, Anhui, China

Image: Caspar David Friedrich, Der Wanderer über dem Nebelmeer (Wanderer above the Sea of Fog)

Bibliography

1. Karl Pearson. The problem of the random walk. Nature, 72:294, July 27, 1905. doi:10.1038/072294b0.
2. William Feller. An Introduction to Probability Theory and Its Applications, Volume 1. Wiley Series in Probability and Mathematical Statistics. Chichester, West Sussex, UK: Wiley Interscience, 3rd edition, 1968.
3. Barry D. Hughes. Random Walks and Random Environments, Volume 1: Random Walks. New York, NY, USA: Oxford University Press, 1995.
4. George H. Weiss and Robert J. Rubin. Random Walks: Theory and Selected Applications, volume 52 of Advances in Chemical Physics. Hoboken, NJ, USA: John Wiley & Sons, 1983.