本科毕业设计(论文)外文参考文献译文及原文
学院轻工化工学院
专业制药工程
(天然药物方向)年级班别2009级(2)班
学号3109002300
学生姓名黄学润
指导教师魏关锋
2013年6月
遗传/模拟退火算法及其应用
Guangming Lv, Xiaomeng Sun, Jian Wang
College of Mechanical and Electronic Engineering, Harbin Institute of Technology, Harbin Heilongjiang, China
lgmhit@…
摘要:本文将模拟退火算法和遗传算法相结合,提出了一种新的算法。遗传算法(GA)中嵌入模拟退火算法(SA),结合成一个新的全局优化算法。SA的使用降低了GA参数选择的困难。此外,新算法可以缩减搜索区域,并避免了遗传算法中存在的“过早收敛”问题,提高了算法的收敛性。遗传操作中的交叉算子在该算法中发挥着重要作用。通过计算机仿真,我们可以看到新的算法相对于传统的遗传算法以及已有的模拟退火-遗传组合算法更具优势。
关键词:模拟退火法;遗传算法;过早收敛;交叉算子
I.引言
遗传算法(GA)最早由密歇根大学J. Holland教授提出,源于对自然系统和人工系统自适应行为的研究。GA是模拟生物在自然环境中的遗传和进化过程而形成的一种基于达尔文进化论和孟德尔遗传学说的自适应全局优化概率搜索算法。对于复杂的优化问题,使用GA无需对问题建模,也无需复杂的运算[1]。与传统的搜索算法相比,GA将优化问题的解空间转换成遗传空间。它从代表问题潜在解集的一个种群出发,根据“优胜劣汰”的原则,一代又一代地逼近问题的最优解或近似最优解。
遗传算法的主要特点是:处理对象不是参数本身,而是对参数集的编码进行操作;GA同时处理群体中的多个个体,即同时评估搜索空间中的多个解;GA只利用问题的目标函数,不需要任何其他前提条件或辅助信息;GA不采用确定性规则,而采用概率化的变迁规则来指导搜索方向;GA可以在较大的解空间中快速搜索。
GA通过选择复制和遗传算子的作用保持种群不断进化,使其最终收敛到最优解。选择复制给予适应度值更大的个体更大的复制概率,能加速算法的收敛。交叉算子通过父代个体间的基因互换帮助搜索出更好的个体。变异操作可以为进化种群带来新的遗传基因,避免陷入局部极值点。
近年来,模拟退火(SA)和遗传算法这两类全局随机优化方法已经得到了广泛应用[2]。它们在难以由基于梯度的传统方法求解的复杂优化问题上,表现出良好的性能。GA采用生物进化的思想,通过优胜劣汰的策略求解优化问题;SA源于统计物理学方法,由Kirkpatrick等研究人员首先引入组合优化领域[3]。GA具有较强的全局搜索能力,但存在过早收敛的问题,在进化后期搜索效率低,种群进化缓慢;SA具有较强的局部搜索能力,能避免陷入局部最优解,但不能很好地控制搜索过程,工作效率较低[4]。在本文中,我们将模拟退火的思想嵌入遗传算法,并将二者有效地结合,从而降低了GA参数选择的困难,提高了GA的全局收敛性,避免搜索时陷入局部最优解。
II.模拟退火算法
模拟退火(SA)是一种基于Monte Carlo迭代法的启发式随机搜索算法。SA利用固体物质退火降温过程中的热平衡问题与随机搜索优化问题之间的相似性,来寻找全局最优解或近似全局最优解[5]。SA在寻找最优解的过程中,不仅接受优解,而且根据随机的接受准则(Metropolis准则),在一定程度上接受恶化解。此外,接受恶化解的概率逐渐趋向于0,这使得算法能够跳出局部极值区,从而找到全局最优解,保证了算法的收敛性[6]。
模拟退火算法可以划分成三个部分:解空间,目标函数和初始解。其求解过程如下:
第一步:随机产生初始解x0(作为算法迭代的初始点);
第二步:初始化退火温度T0(取足够大的值);
第三步:在温度Tk下,执行如下操作:
(1)产生新的可行解x'(x'是x的邻域解);
(2)计算新解的评价函数f(x')与旧解的评价函数f(x)的差值:Δf = f(x') − f(x);
(3)依照概率 min{1, exp(−Δf/Tk)} > random[0,1] 接受新解(random[0,1]是[0,1]区间内的随机数)。若在温度Tk下达到平衡条件,转第四步;否则重复第三步;
第四步:按一定方式缓慢降低温度,定义降温函数为Tk+1 = αTk,k←k+1,其中α为略小于1.00的常数(α∈(0,1));
第五步:若满足收敛条件,退火过程结束;否则,转至第三步。
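上述五步流程可以用下面的Python代码作一个最小化示意(仅为示意性草图:目标函数、邻域扰动方式和各参数取值均为本文所作的假设,并非原文给定):

```python
import math
import random

def simulated_annealing(f, x0, t0=1000.0, tf=1e-3, alpha=0.9, steps=100):
    """模拟退火的示意实现:f为目标函数,x0为初始解;
    t0、tf、alpha、steps等参数取值均为假设。"""
    x, t = x0, t0
    best, best_f = x0, f(x0)
    while t > tf:                                  # 第五步:收敛条件(温度降至tf)
        for _ in range(steps):                     # 第三步:在温度t下迭代
            x_new = x + random.uniform(-0.1, 0.1)  # (1)产生邻域新解(假设的扰动方式)
            df = f(x_new) - f(x)                   # (2)计算目标函数差Δf
            # (3)Metropolis准则:优解必接受,恶化解以exp(-Δf/t)的概率接受
            if df < 0 or math.exp(-df / t) > random.random():
                x = x_new
                if f(x) < best_f:
                    best, best_f = x, f(x)
        t *= alpha                                 # 第四步:缓慢降温 T(k+1) = α·T(k)
    return best, best_f
```

其中if判断一行即对应第三步(3)中的接受概率 min{1, exp(−Δf/Tk)} > random[0,1]:优解(Δf < 0)必被接受,恶化解只以随温度下降而减小的概率被接受。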
III. 模拟退火-新遗传算法(SAGA)
虽然许多研究和应用表明遗传算法的性能比较好,但“过早收敛”问题存在于许多实际应用中。在进化种群中,少数个体的适应度值比其他个体大得多,经过几代之后,这些个体便会占据整个种群,使进化过程提前收敛。为GA选择一种既能保留优良个体又能维持种群多样性的选择方式,是一个棘手的问题。一些改进的选择方式对提高遗传算法的性能是有用的。
传统的遗传算法中,竞争只发生在子代之间,子代和父代之间不存在竞争,因此可能会丢失父代中一些优秀的个体。一些算法把最优解直接放入下一代以保留它们,但这可能导致过早收敛。此外,GA采用随机的交叉和变异,交叉和变异后的个体不一定优秀,这可能破坏原有的优秀个体,影响算法的性能。
已有一些方法将GA和SA相结合。例如,文献[7]中的SAGA用退火温度控制变异概率的选择(Pm = exp(−θ/T));文献[8]用GA提高SA的性能,整个算法分为两个阶段:GA阶段和SA阶段。首先由GA进化出一组解,然后由SA调整优化。
本文中的算法与上述方法不同。通过使用SA,降低了GA参数选择的困难。交叉操作是GA的主要算子,优化过程主要依赖于它。SAGA采用这一思想,对每一对选出的个体执行交叉操作,并让交叉和变异后的子代与父代竞争。子代按Boltzmann机制被接受,这既有利于保留优良个体,又可以避免过早收敛。随着种群的不断进化,温度逐渐降低,接受恶化解的概率也逐渐降低。该算法有效利用了SA的爬山特性,提高了收敛速度。
本文的SAGA步骤如下:
第一步:初始化控制参数:N为种群规模;Pm为变异概率;T为退火初始温度;α为温度冷却参数。
第二步:随机生成初始解组。
第三步:对当前解组执行如下操作,直到产生出下一代:
[1] 评估该组中每一个个体的适应度值f(xi)(i = 1,2,…,N)。
[2] 随机选择两个个体xi和xj执行交叉操作,生成两个新个体xi'和xj',然后计算两个新个体的适应度值f(xi')和f(xj');再在[0,1]之间产生一个随机数random,依照概率 min{1, exp(−Δf/Tk)} > random[0,1] 接受新解,即接受新的个体。
[3] 个体交叉后执行变异操作,根据[2]中的方法决定是否接受变异后的解。
第四步:如果满足收敛条件,进化过程结束;否则令Tk+1 = αTk,转第三步。
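第三步中“交叉、变异、Boltzmann竞争”的一代操作可示意如下(实数编码、算术交叉与高斯变异均为示例性假设,原文并未给定具体算子):

```python
import math
import random

def saga_generation(pop, f, t, pm=0.01):
    """SAGA第三步一代操作的示意实现:对随机选出的个体对做
    算术交叉与变异,子代按Boltzmann机制与父代竞争(算子均为假设)。"""
    new_pop = list(pop)
    n = len(pop)
    for _ in range(n // 2):
        i, j = random.sample(range(n), 2)      # [2] 随机选择两个个体
        xi, xj = new_pop[i], new_pop[j]
        a = random.random()                    # 算术交叉,生成两个新个体
        ci = a * xi + (1 - a) * xj
        cj = a * xj + (1 - a) * xi
        if random.random() < pm:               # [3] 以概率pm对子代执行变异
            ci += random.gauss(0, 0.1)
        if random.random() < pm:
            cj += random.gauss(0, 0.1)
        # Boltzmann机制:子代优于父代必接受,否则以exp(-Δf/t)的概率接受
        for k, child in ((i, ci), (j, cj)):
            df = f(child) - f(new_pop[k])
            if df < 0 or math.exp(-df / t) > random.random():
                new_pop[k] = child
    return new_pop
```

温度t较高时,较差的子代也常被接受,保持了种群多样性;t趋于0时接受规则趋于贪心,优良个体得以保留。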
IV.计算及实例分析
为了评估SAGA算法的性能,本文将SAGA与传统GA以及GA与SA的其他组合算法进行对比,求解以下两个函数最小值的最优化问题:
(1) F1(x) = x²,x∈[−1,1];
(2) F2(x) = 1 − sin(30x)·|1 − x²|,x∈[0,1]。
F1(x)是一个二次函数。虽然它只有一个最小值点(x = 0处,F1(x) = 0),但随着函数值不断接近最小值,算法后期的搜索性能也变得非常重要。F2(x)是一个多峰函数,它在区间[0,1]内有10个极小值点。全局极小值点(x = 0.05179,F2(x) = 0.0260374)对检验算法的全局收敛性是非常重要的。SAGA的控制参数如下:种群规模N = 100,变异率Pm = 0.01,初始温度T = 1000,冷却参数α = 0.9;传统GA的控制参数是:种群规模N = 100,变异率Pm = 0.01,交叉率Pc = 0.6。
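两个测试函数可以如下定义(公式按排版受损的原文重建,其中F2(x)的具体形式为推测,仅供示意):

```python
import math

# 按原文(可能因排版受损)的公式重建的两个测试函数;F2的具体形式为推测
def f1(x):
    """F1(x) = x^2,x ∈ [-1, 1],唯一最小值点在x = 0处。"""
    return x * x

def f2(x):
    """F2(x) = 1 - sin(30x)·|1 - x^2|,x ∈ [0, 1],为多峰函数。"""
    return 1.0 - math.sin(30.0 * x) * abs(1.0 - x * x)
```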
然后,分别用SAGA和传统GA求这两个函数的最小值。对于单峰函数F1(x),图1表示SAGA的结果,找到最优解前迭代约180代;图2表示传统GA的结果,找到最优解前迭代约490代。显然,这两种算法都能收敛到最优解,但SAGA的收敛速率远大于传统GA。与将SA嵌入自适应GA循环体的方法相比,效率也在一定程度上有所提高[9]。
图1. SAGA得到F1(x)的结果
图2. 传统GA得到F1(x)的结果
对于多峰函数F2(x),图3表示的是SAGA的结果,找到最优解前迭代约200代;图4表示传统GA的结果,它不能收敛到全局最优解,尽管GA在求解过程中能找到最优点。所以,SAGA的优势是显而易见的。
图3. SAGA得到F2(x)的结果
图4. 传统GA得到F2(x)的结果
与此同时,我们多次独立地运行每个算法并测试其平均求解性能。结果表明:对于单峰函数F1(x),SAGA和GA收敛到最优解的平均迭代代数分别是180和490。所以,这两种算法都可以收敛到最优解,但SAGA的收敛速率明显大于GA。此外,SAGA的平均适应度值等于最优函数值0,而GA的平均适应度值是0.012。对于多峰函数F2(x),SAGA每次都能收敛到全局最优解,而传统GA不能。这与上述单次运行的结果一致。在收敛到全局最优解的条件下,相对于模糊自适应模拟退火遗传算法(FASAGA),SAGA的收敛速率在一定程度上有所提高[10]。
V.结论
在本文中,我们将SA的思想引入GA,降低了GA参数选择的难度,并对交叉和变异后的个体采用Boltzmann生存机制。由实例分析可以得到:该算法充分利用交叉算子,不仅可以缩小可行区域的搜索范围,而且可以避免陷入局部最优解。它有效利用SA的爬山特性,提高了收敛速率。
参考文献
[1] Shunhuang Wang, Diqian Gu. Intelligence Control System and its Application.
Beijing: China Machine Press. 2005
[2] Xuemei Wang, Yihe Wang. The Combination of Simulated Annealing and Genetic Algorithms. Chinese Journal of Computers, 1997(4): 13-18.
[3] Chiye Lin, Fenghe Wang. Sequential Simulated Annealing for Multimodal Design
Optimization, 2003, 26(1):57-70.
[4] Haoyang Wu, Bingguo Chang, Changchun Zhu, Junhua Liu. Multigroup Parallel
Genetic Algorithm based on Simulated Annealing. Journal of Software. 2000 (3) : 20-25.
[5] Ling Wang. Intelligent Optimization Algorithms and its Application. Tsinghua
University Press. 2005
[6] Kirkpatrick S, Gelatt C D, Vecchi M P. Optimization by simulated annealing.
Science, 1983, 220: 671 - 680.
[7] Atashkari K, Nariman-Zadeh N, Pilechi A, Jamali A, Yao X. Thermodynamic
Pareto optimization of turbojet engines using multi-objective genetic algorithms.
International Journal of Thermal Sciences .2005, 44(11) :1061-1071.
[8] Huang H-C, Pan J-S, Lu Z-M, Sun S-H, Hang H-M. Vector quantization based on genetic simulated annealing. Signal Processing, 2001, 81(7): 1513-1523.
[9] Hao Zhang, Ran Tao, Zhiyong Li, Hua Du. A Feature Selection Method Based on
Adaptive Simulated Annealing Genetic Algorithm. Acta Armamentarii, 2009 (1) : 15-19.
[10] Yonggang Peng, Xiaoping Luo, Wei Wei. New fuzzy adaptive simulated
annealing genetic algorithm. Control and Decision. 2009(6) : 15-20.
Simulated Annealing-New Genetic Algorithm and its Application
Guangming Lv, Xiaomeng Sun, Jian Wang
College of Mechanical and Electronic Engineering, Harbin Institute of Technology, Harbin Heilongjiang, China
lgmhit@…
Abstract: In this paper, a new algorithm is proposed that combines the simulated annealing algorithm and the genetic algorithm. The simulated annealing (SA) method is embedded into the genetic algorithm (GA), combining the two methods into a new global optimization algorithm. The use of SA reduces the difficulty of parameter selection for GA. Furthermore, the combination can reduce the search area and avoid the premature convergence problem existing in the genetic algorithm, so as to improve the convergence of the algorithm. The crossover operator of the genetic operation plays an important role in this algorithm. Through computer simulation, we can see that this algorithm has advantages compared with the traditional genetic algorithm and other pre-existing simulated annealing-genetic algorithms.
Keywords: simulated annealing; genetic algorithm; premature convergence; crossover operator
I. INTRODUCTION
The Genetic Algorithm (GA) was first proposed by Professor J. Holland of Michigan University, deriving from research on the adaptive behavior of natural and artificial systems. GA is a kind of global optimization probability search algorithm based on Darwin's theory of evolution and Mendel's theory of heredity, which simulates the heredity and evolution of organisms in their natural environment. For complicated optimization problems, GA requires no modeling of the problem and no complicated operations [1]. Compared with traditional search algorithms, GA converts the solution space of the optimization problem into a genetic space. It begins from a population representing a set of potential solutions of the problem, and evolves generation after generation toward better and better approximate solutions according to the principle of survival of the fittest.
The main characteristics of GA are: the processing object is not the parameters themselves but an encoding of the parameter set; GA processes several individuals in the population simultaneously, that is, it evaluates several solutions in the search space at the same time; GA only makes use of the objective function of the problem and does not need any other precondition or auxiliary information; GA does not adopt deterministic rules but uses probabilistic transition rules to guide the search; GA can search rapidly in a relatively large solution space.
GA keeps the population evolving through selection, replication and genetic operators, so that it finally converges to the optimum. Selection and replication give individuals with larger fitness values a larger replication probability, which accelerates the convergence of the algorithm. The crossover operator helps search out better individuals by exchanging genes between parent individuals. The mutation operation brings new genes into the evolving population and avoids entrapment in local extreme points.
In recent years, two kinds of global random optimization methods, simulated annealing (SA) and the genetic algorithm, have found extensive application [2]. They show excellent performance on complicated optimization problems that are hard to solve by traditional gradient-based methods. GA adopts the idea of biological evolution and solves optimization problems by the strategy of survival of the fittest; SA derives from statistical physics and was first introduced into the combinatorial optimization area by Kirkpatrick and other researchers [3]. GA has a strong global search ability but suffers from premature convergence, and its search efficiency is low in the later period of evolution, so the population evolves slowly; SA has a relatively strong local search ability and can avoid entrapment in local optima when searching, but it cannot manage the search process well and has a low operating efficiency [4]. In this paper, we embed the idea of simulated annealing into the genetic algorithm and combine them effectively, so as to reduce the difficulty of parameter selection for GA, enhance the global convergence of GA, and avoid entrapment in local optima when searching.
II. SIMULATED ANNEALING ALGORITHM
Simulated annealing (SA) is a kind of heuristic random search algorithm based on the Monte Carlo iteration method. SA makes use of the similarity between the thermal equilibrium problem in the annealing process of solid matter and random-search optimization problems to find the global optimum or an approximate global optimum [5]. In the process of searching for the optimum, SA not only accepts improving solutions but also accepts deteriorating solutions to a certain degree according to a random acceptance criterion (the Metropolis criterion). In addition, the probability of accepting a deteriorating solution tends to 0 gradually, which makes it possible to jump out of local extreme areas and then find the global optimum, so as to ensure the convergence of the algorithm [6].
The simulated annealing algorithm can be divided into three parts: solution space, objective function and initial solution. Its solving procedure is as below:
Step one: Generate the initial solution x0 (the starting point of the iteration) randomly;
Step two: Initialize the annealing temperature T0 (make it big enough);
Step three: Execute the following operations under temperature Tk:
(1) Generate a new feasible solution x' (x' is an adjacent solution of x);
(2) Calculate the difference between the evaluation functions f(x') and f(x): Δf = f(x') − f(x);
(3) Accept the new solution according to the probability min{1, exp(−Δf/Tk)} > random[0,1] (random[0,1] is a random number between 0 and 1). If the equilibrium condition is reached at temperature Tk, turn to step four; otherwise repeat step three;
Step four: Reduce the temperature slowly in a certain way, and define the decrease function as Tk+1 = αTk, k←k+1, where α is a constant slightly smaller than 1.00 (α∈(0,1));
Step five: If the convergence criterion is satisfied, the annealing procedure is over. Else, turn to step three.
III. SIMULATED ANNEALING-NEW GENETIC ALGORITHM (SAGA)
Although many researches and applications indicate that the performance of GA is relatively good, premature convergence occurs in many practical applications. In the evolving population, the fitness values of a few individuals are much bigger than the others', and after several generations these individuals take up the whole population, so the evolution process converges in advance. It is a troublesome problem for GA to choose a selection method that keeps excellent individuals and maintains the diversity of the population at the same time. Some modified selection methods are useful for improving the performance of GA.
In the traditional genetic algorithm, competition happens only among offspring, and there is no competition between offspring and parents, so some excellent individuals among the parents may be lost. Some algorithms put the optimum solutions into the next generation to keep them, but this may lead to premature convergence. Besides, GA adopts stochastic crossover and mutation, and individuals after crossover and mutation are not necessarily excellent, which may destroy original excellent individuals and affect the performance of the algorithm.
GA and SA are combined in some existing methods. For example, SAGA was adopted in [7] to control the selection of the mutation probability (Pm = exp(−θ/T)); GA was adopted in [8] to improve the performance of SA, and the whole algorithm was divided into two parts: a GA stage and an SA stage. First it evolves a group of solutions by GA, and then adjusts and optimizes them by SA.
The algorithm in this paper is different from the methods above. It reduces the difficulty of parameter selection for GA by the use of SA. The crossover operation is the main operator of GA, and the optimization process mainly relies on it. SAGA adopts this idea, executes the crossover operation on every selected pair of individuals, and lets the offspring after crossover and mutation compete with their parents. Offspring are accepted by the Boltzmann mechanism, which benefits keeping excellent individuals and avoids premature convergence at the same time. As the population evolves, the temperature decreases gradually, and the probability of accepting inferior solutions decreases gradually too. The algorithm makes effective use of the hill-climbing characteristic of SA and improves the convergence rate.
The SAGA algorithm in this paper is as below:
Step one: Initialize the control parameters: N is the population size; Pm is the mutation probability; T is the initial annealing temperature; α is the temperature cooling parameter.
Step two: Generate initial solution group randomly.
Step three: Execute the operations below on the existing solution group until the next generation comes out.
[1] Evaluate the fitness value f(xi) (i = 1, 2, ..., N) of every individual in this group.
[2] Select two individuals xi and xj randomly to execute the crossover operation, and generate two new individuals xi' and xj'; then calculate the fitness values f(xi') and f(xj') of both new individuals; then generate a random number named random between [0,1], and accept the new solutions according to the probability min{1, exp(−Δf/Tk)} > random[0,1], that is, accept the new individuals.
[3] Execute the mutation operation on the individuals after crossover, and decide whether to accept the solutions after mutation according to the method in [2].
Step four: If the convergence condition is satisfied, the evolution process is over; otherwise set Tk+1 = αTk and turn to step three.
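The Boltzmann acceptance used in step three [2] above can be sketched as follows (a minimal illustration; the function name and interface are ours, not the paper's):

```python
import math
import random

def boltzmann_accept(f_child, f_parent, t):
    """Boltzmann/Metropolis acceptance: an offspring that improves on its
    parent is always kept; a worse one is kept with probability
    exp(-(f_child - f_parent) / t), which shrinks as the temperature t cools."""
    df = f_child - f_parent
    return df < 0 or math.exp(-df / t) > random.random()
```

At a high temperature almost any offspring is accepted, which preserves population diversity; as t approaches 0 the rule becomes greedy, which keeps the excellent individuals.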
IV. CALCULATION AND ANALYSIS OF EXAMPLES
To evaluate the performance of the SAGA algorithm in this paper, we contrast SAGA with the general GA and with other methods combining GA and SA. Take the minimum optimization problems of the two functions below:
(1) F1(x) = x², x ∈ [−1, 1];
(2) F2(x) = 1 − sin(30x)·|1 − x²|, x ∈ [0, 1].
F1(x) is a quadratic function. Although it has only one minimum point (x = 0, F1(x) = 0), the later search performance of the algorithm becomes very important as the function value gets close to the minimum. F2(x) is a multimodal function with ten minimal points in the interval [0, 1]. The global minimum point (x = 0.05179, F2(x) = 0.0260374) is especially important for testing the algorithm's global convergence. The control parameters of SAGA are: population size N = 100, mutation probability Pm = 0.01, initial temperature T = 1000, cooling parameter α = 0.9; the control parameters of the standard GA are: population size N = 100, mutation probability Pm = 0.01, crossover probability Pc = 0.6.
Then, solve the two functions' minima by SAGA and the standard GA respectively. As to the single-hump function F1(x), Fig.1 shows the outcome of SAGA, which iterated about 180 generations before finding the optimum; Fig.2 shows the outcome of the general GA, which iterated about 490 generations before finding the optimum. Obviously, both algorithms can converge to the optimum, but the convergence rate of SAGA is much higher than that of the general GA. The efficiency is also improved to a certain degree compared with methods that embed SA into the loop body of an adaptive GA [9].
Fig.1 Outcome of F1(x) by SAGA
Fig.2 Outcome of F1(x) by general GA
As to the multimodal function F2(x), Fig.3 shows the outcome of SAGA, which iterated about 200 generations before finding the optimum; Fig.4 shows the outcome of the general GA, which cannot converge to the global optimum even though it can find the optimum point during the solving process. So, the advantage of SAGA is obvious.
Fig.3 Outcome of F2(x) by SAGA
Fig.4 Outcome of F2(x) by general GA
At the same time, we executed every algorithm several times independently and tested their average solving performance. The results show that for the single-hump function F1(x), the average numbers of iterations for SAGA and GA to converge to the optimum are 180 and 490 respectively. So, both algorithms can converge to the optimum, but the convergence rate of SAGA is obviously higher than that of GA. Moreover, the average fitness value of SAGA is equal to the optimum function value 0 while the average fitness value of GA is 0.012. As to the multimodal function F2(x), SAGA converged to the global optimum every time while the general GA could not. This is in accordance with the single runs above. Compared with the fuzzy adaptive simulated annealing genetic algorithm (FASAGA), the convergence rate is improved to a certain degree under the condition of converging to the global optimum [10].
V. CONCLUSION
In this paper we introduced the idea of SA into GA to reduce the difficulty of parameter selection for GA, and applied the Boltzmann acceptance strategy to individuals after crossover and mutation. From the examples we can conclude that this algorithm makes full use of the crossover operator, which can not only reduce the search range of the feasible region but also avoid entrapment in local optima. It makes effective use of the hill-climbing characteristic of SA and improves the convergence rate.
REFERENCES
[1] Shunhuang Wang, Diqian Gu. Intelligence Control System and its Application.
Beijing: China Machine Press. 2005
[2] Xuemei Wang, Yihe Wang. The Combination of Simulated Annealing and Genetic Algorithms. Chinese Journal of Computers, 1997(4): 13-18.
[3] Chiye Lin, Fenghe Wang. Sequential Simulated Annealing for Multimodal Design
Optimization, 2003, 26(1):57-70.
[4] Haoyang Wu, Bingguo Chang, Changchun Zhu, Junhua Liu. Multigroup Parallel
Genetic Algorithm based on Simulated Annealing. Journal of Software. 2000 (3) : 20-25.
[5] Ling Wang. Intelligent Optimization Algorithms and its Application. Tsinghua
University Press. 2005
[6] Kirkpatrick S, Gelatt C D, Vecchi M P. Optimization by simulated annealing.
Science, 1983, 220: 671 - 680.
[7] Atashkari K, Nariman-Zadeh N, Pilechi A, Jamali A, Yao X. Thermodynamic
Pareto optimization of turbojet engines using multi-objective genetic algorithms.
International Journal of Thermal Sciences .2005, 44(11) :1061-1071.
[8] Huang H-C, Pan J-S, Lu Z-M, Sun S-H, Hang H-M. Vector quantization based on genetic simulated annealing. Signal Processing, 2001, 81(7): 1513-1523.
[9] Hao Zhang, Ran Tao, Zhiyong Li, Hua Du. A Feature Selection Method Based on
Adaptive Simulated Annealing Genetic Algorithm. Acta Armamentarii, 2009 (1) : 15-19.
[10] Yonggang Peng, Xiaoping Luo, Wei Wei. New fuzzy adaptive simulated
annealing genetic algorithm. Control and Decision. 2009(6) : 15-20.