
Presentation Transcript


  1. South China University of Technology Ising model and Stochastic optimization Xiao-Bao Yang Department of Physics www.compphys.cn

  2. Ordered Structures AuCu Alloy LiC6 BC3 HxCy

  3. Ising model The simplest Ising model assumes an interaction only between nearest neighbors (Ernst Ising). For a given J, what is the lowest energy, and what is the corresponding structure?
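
The Hamiltonian this slide presumably refers to (the formula itself did not survive extraction) is the standard nearest-neighbor Ising form

E = -J \sum_{\langle i,j \rangle} s_i s_j, \qquad s_i = \pm 1,

where the sum runs over nearest-neighbor pairs. Note that the MATLAB code on slide 13 uses the opposite sign convention, E = J \sum_{\langle i,j \rangle} s_i s_j with J = -1, so both describe a ferromagnet.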

  4. Lattice gas model Adsorption: 1 or 0 for atom or vacancy. AB binary alloy: 1 or 0 for A or B.
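
Both cases are the same lattice-gas / spin correspondence: an occupation variable n_i \in \{0,1\} (atom/vacancy, or A/B) maps onto an Ising spin via

s_i = 2 n_i - 1,

so adsorption and binary-alloy ordering problems reduce to an Ising model with rescaled coupling and an effective field.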

  5. Ground state and ordered structures For a given J, what is the lowest energy, and what is the corresponding structure? onesearch.m & calha.m
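
onesearch.m and calha.m are only named on the slide, not reproduced. A minimal sketch of the same idea, written here for an assumed small periodic Ising chain and the sign convention E = J*sum(s_i*s_j) used in the code on slide 13 (this is an illustration, not the original onesearch.m/calha.m), could be:

% Exhaustive ground-state search for an N-site Ising ring (illustrative sketch)
N = 8; J = -1;                               % small chain, ferromagnetic in this convention
Emin = inf; smin = [];
for k = 0:2^N-1
    bits = bitget(k, 1:N);                   % enumerate all configurations as bit patterns
    s = 2*bits - 1;                          % map {0,1} -> {-1,+1}
    E = J * sum(s .* circshift(s, [0 -1]));  % periodic nearest-neighbor energy
    if E < Emin
        Emin = E; smin = s;                  % keep the lowest-energy configuration
    end
end
Emin, smin

For N sites this enumerates 2^N configurations, which is exactly the "curse of dimensionality" of the next slide: exhaustive search is only feasible for very small cells.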

  6. Optimization and Curse of Dimensionality

  7. Heuristic algorithm A heuristic algorithm is one constructed from intuition or experience. At an acceptable cost (in computing time and memory) it produces a feasible solution for each instance of the combinatorial optimization problem at hand, but how far that feasible solution deviates from the optimal one cannot necessarily be predicted in advance. Simulated Annealing Genetic Algorithm Particle Swarm Optimization

  8. No Free Lunch One development in optimization theory is the No Free Lunch (NFL) theorem of Wolpert and Macready. Its conclusion is that, because of mutual compensation over the set of all possible objective functions, all optimization algorithms have equivalent average performance. The theorem implies that no algorithm can do better than a linear enumeration of the search space or a pure random search. It is defined only for finite search spaces; whether the conclusion holds for infinite search spaces remains unclear.

  9. Simulated Annealing
s ← s0; e ← E(s)                      // Initial state, energy.
sbest ← s; ebest ← e                  // Initial "best" solution.
k ← 0                                 // Energy evaluation count.
while k < kmax and e > emax           // While time left & not good enough:
    T ← temperature(k/kmax)           // Temperature calculation.
    snew ← neighbour(s)               // Pick some neighbour.
    enew ← E(snew)                    // Compute its energy.
    if P(e, enew, T) > random() then  // Should we move to it?
        s ← snew; e ← enew            // Yes, change state.
    if e < ebest then                 // Is this a new best?
        sbest ← snew; ebest ← enew    // Save 'new neighbour' to 'best found'.
    k ← k + 1                         // One more evaluation done.
return sbest                          // Return the best solution found.
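
A compact MATLAB version of this loop, written for an assumed one-dimensional test function f(x) = x.^2 + 10*sin(x) with geometric cooling and Gaussian neighbour moves (none of these choices come from the slides), might look like:

% Simulated annealing sketch for a 1-D function (illustrative assumptions only)
f = @(x) x.^2 + 10*sin(x);
x = 10*rand; e = f(x);              % initial state and energy
xbest = x; ebest = e;               % best solution found so far
T = 1.0;                            % initial temperature
for k = 1:1e4
    xnew = x + 0.5*randn;           % pick a neighbour
    enew = f(xnew);
    if enew < e || rand < exp(-(enew - e)/T)   % Metropolis acceptance
        x = xnew; e = enew;
    end
    if e < ebest
        xbest = x; ebest = e;       % record new best
    end
    T = 0.999*T;                    % cool down
end
xbest, ebest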

  10. Metropolis algorithm (1953) Importance sampling and detailed balance (Nicholas Metropolis). Flipping rate: W(1→2) = 1 if E1 > E2, and W(1→2) = exp[-(E2-E1)/kT] if E1 < E2. The chain reaches thermal equilibrium.
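
Written compactly, the flipping (acceptance) rate quoted on the slide is

W(1 \to 2) = \min\!\left[1,\ \exp\!\left(-\frac{E_2 - E_1}{k_B T}\right)\right],

which satisfies detailed balance with the Boltzmann distribution and therefore drives the Markov chain to thermal equilibrium.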

  11. The Monte Carlo method Direct sampling Markov chain sampling
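
As an illustration of direct (independent) sampling, which this slide contrasts with Markov-chain sampling, here is the classic estimate of pi from random points in the unit square (not part of the original slides):

% Direct sampling: Monte Carlo estimate of pi from independent random points
nhit = 0; ntot = 1e6;
for ii = 1:ntot
    x = rand; y = rand;
    if x^2 + y^2 < 1                 % point falls inside the quarter circle
        nhit = nhit + 1;
    end
end
pi_est = 4*nhit/ntot

The Ising simulation on the next slides instead uses Markov-chain sampling: each new configuration is generated from the previous one by a single spin flip.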

  12. Spin flip: A(ii,jj) → -A(ii,jj), starting from the random initialization A(ii,jj)=sign(rand-0.5);
Periodic boundary condition (wrap the opposite edges into a padding frame):
S=[A(:,n) A A(:,1)];
S=[S(end,:); S; S(1,:)];
Free boundary condition (pad with zeros):
S=[zeros(n,1) A zeros(n,1)];
S=[zeros(1,n+2); S; zeros(1,n+2)];
Energy difference of flipping the spin at site (sf(1),sf(2)):
dE=-2*J*S(sf(1),sf(2))*(S(sf(1)+1,sf(2))+S(sf(1)-1,sf(2))+S(sf(1),sf(2)+1)+S(sf(1),sf(2)-1));
Accept or reject?
if dE<0, S(sf(1),sf(2))=-S(sf(1),sf(2)); end
if dE>0&&rand<exp(-dE/T), S(sf(1),sf(2))=-S(sf(1),sf(2)); end

  13.
clear
n=10; J=-1; nmax=10^6;                 % lattice size, coupling, Monte Carlo steps per temperature
for ii=1:n
    for jj=1:n
        A(ii,jj)=sign(rand-0.5);       % random initial spin configuration (+1/-1)
    end
end
S=[zeros(n,1) A zeros(n,1)];           % free boundary condition: pad with zeros
S=[zeros(1,n+2); S; zeros(1,n+2)];
TS=0.2:-0.03:0.01;                     % list of temperatures, cooled from high to low
tmag=zeros(1,length(TS));
for it=1:length(TS)
    T=TS(it);
    for ii=1:nmax
        sf=ceil(rand(1,2)*n)+1;        % pick a random interior site of the padded array
        dE=-2*J*S(sf(1),sf(2))*(S(sf(1)+1,sf(2))+S(sf(1)-1,sf(2))+S(sf(1),sf(2)+1)+S(sf(1),sf(2)-1));
        if dE<0                        % always accept a move that lowers the energy
            S(sf(1),sf(2))=-S(sf(1),sf(2));
        end
        if dE>0&&rand<exp(-dE/T)       % otherwise accept with the Boltzmann probability
            S(sf(1),sf(2))=-S(sf(1),sf(2));
        end
    end
    tmag(it)=sum(sum(S(2:n+1,2:n+1)))/n^2;   % average magnetization at this temperature
end
gsising.m & phase.m

  14. Zunger's research field is the condensed-matter theory of real materials. He developed the first-principles pseudopotentials for density functional theory (1977)… In recent years, Zunger has focused on developing the "Inverse Band Structure" concept, whereby one uses ideas from quantum mechanics as well as genetic algorithms to search for atomic configurations that have a desired target property.

  15. "The inverse band-structure problem of finding an atomic configuration with given electronic properties." The simulated-annealing technique is an efficient algorithm for finding the global minimum of a multi-variable, multi-valley function. Nature 402, 60 (1999)

  16. Genetic Algorithm • Fitness is a measure of quality designed for the objects in the problem being solved, borrowed from how well a biological individual adapts to its environment. • A chromosome, as in a biological cell, represents an individual object of the problem. In a genetic algorithm a chromosome is usually represented by a string, the genes being the individual characters of that string; binary strings are used as the chromosome encoding. • A genetic algorithm searches the whole parameter space by applying the so-called genetic operations to a population (a group consisting of a number of chromosomes), renewing it generation after generation. • There are three operations on chromosomes: selection-reproduction, crossover, and mutation; these are called the genetic operations or genetic operators.

  17. Selection-reproduction Selection-reproduction is a chromosome operation that imitates the "survival of the fittest" natural selection of the biological world: chromosomes with higher fitness are selected from the population and copied to form the next generation. The usual procedure is, for a population S of size N, to randomly select N chromosomes from S in N independent draws, with the chance of each chromosome xi ∈ S being drawn given by its selection probability P(xi), and to copy each selected chromosome. The selection probability P(xi) is computed by the formula below. Under this definition, a chromosome with higher fitness has a larger probability of being drawn, is selected more often, and is therefore copied more often.
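
The formula for P(xi) did not survive extraction; for the fitness-proportional (roulette-wheel) selection used in the worked example that follows, it is

P(x_i) = \frac{f(x_i)}{\sum_{j=1}^{N} f(x_j)},

where f(x_i) is the fitness of chromosome x_i.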

  18. Crossover Crossover, also called exchange, mating, or hybridization, means exchanging the genes at certain positions of two chromosomes. For example, let the chromosomes be s1=01001011 and s2=10010101 and exchange their last 4 genes; the new strings are s1′=01000101 and s2′=10011011. s1′ and s2′ can be regarded as offspring of the original chromosomes s1 and s2. Mutation Mutation means changing the gene(s) at one or more positions of a chromosome. For example, flipping the 0 at the third position of chromosome s=11001101 to 1 gives the new chromosome s′=11101101.

  19. Find the maximum of the quadratic function y = x² on the interval [0, 31]. • The integers in [0, 31] are encoded as 5-bit binary numbers, the genotype of an individual x. • The population size is set to 4; the chromosomes s1=01101 (13), s2=11000 (24), s3=01000 (8), s4=10011 (19) form the initial population S1. Selection-reproduction: suppose 4 random numbers are drawn from [0, 1]: r1=0.450126, r2=0.110347, r3=0.572496, r4=0.98503. The selected chromosomes are s1’=11000 (24), s2’=01101 (13), s3’=11000 (24), s4’=10011 (19).

  20. Crossover Let the crossover rate be pc = 100%, i.e. all the selected chromosomes s1’=11000 (24), s2’=01101 (13), s3’=11000 (24), s4’=10011 (19) take part in the crossover. Pair s1’ with s2’ and s3’ with s4’, and exchange the last two genes of each pair, giving the new chromosomes s1’’=11001 (25), s2’’=01100 (12), s3’’=11011 (27), s4’’=10000 (16). Mutation Let the mutation rate be pm = 0.001. The population S1 then contains 5 × 4 × 0.001 = 0.02 bits expected to mutate; 0.02 is clearly less than 1 bit, so no mutation is performed in this round.

  21. Suppose that in this round of selection-reproduction all 4 chromosomes of population S2 are selected (since the selection probability is only a probability, the case where all 4 chromosomes happen to be selected can occur), giving the group s1’=11001 (25), s2’=01100 (12), s3’=11011 (27), s4’=10000 (16). Crossover: exchange the last three genes of s1’ with s2’ and of s3’ with s4’, giving s1’’=11100 (28), s2’’=01001 (9), s3’’=11000 (24), s4’’=10011 (19). Again no mutation occurs in this round, and we obtain the third generation S3.

  22. Suppose the selection-reproduction result of this round is s1’=11100 (28), s2’=11100 (28), s3’=11000 (24), s4’=10011 (19). Crossover: exchange the last two genes of s1’ with s4’ and of s2’ with s3’, giving s1’’=11111 (31), s2’’=11100 (28), s3’’=11000 (24), s4’’=10000 (16). Again no mutation occurs in this round, and we obtain the fourth generation S4: s1=11111 (31), s2=11100 (28), s3=11000 (24), s4=10000 (16).
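
The hand-worked example above can be put into code. The sketch below follows the same recipe (4 chromosomes, 5-bit encoding, roulette-wheel selection, single-point crossover, mutation rate 0.001), but it is an illustrative reconstruction rather than code from the lecture:

% Binary GA sketch for maximizing y = x^2 on [0,31] with 5-bit chromosomes
nbit = 5; N = 4; pm = 0.001; ngen = 20;
pop = randi([0 1], N, nbit);                   % random initial population
for gen = 1:ngen
    x = pop * (2.^(nbit-1:-1:0))';             % decode binary strings to integers
    fit = x.^2;                                % fitness y = x^2
    prob = (fit + eps) / sum(fit + eps);       % roulette-wheel probabilities (eps avoids 0/0)
    cum = cumsum(prob);
    newpop = zeros(N, nbit);
    for ii = 1:N
        newpop(ii,:) = pop(find(rand <= cum, 1), :);   % selection-reproduction
    end
    for ii = 1:2:N-1                           % single-point crossover on pairs
        cp = randi(nbit-1);
        tmp = newpop(ii, cp+1:end);
        newpop(ii, cp+1:end) = newpop(ii+1, cp+1:end);
        newpop(ii+1, cp+1:end) = tmp;
    end
    mask = rand(N, nbit) < pm;                 % mutation: flip bits with probability pm
    newpop(mask) = 1 - newpop(mask);
    pop = newpop;
end
x = pop * (2.^(nbit-1:-1:0))';
[best, ib] = max(x.^2);
fprintf('best x = %d, y = %d\n', x(ib), best)

With such a tiny population the search is noisy, but it usually reaches x = 31 (11111), the optimum found in the hand-worked example.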

  23. Genomic design in Si/Ge films and nanowires: much larger absorption from the magic sequence. Nano Lett. 12, 984 (2012); PRL 108, 027401 (2012)

  24. Evolutionary approach for determining first-principles hamiltonians Nat. Mater. 4, 391 (2005)

  25. Divide people into several teams; each team moves towards the place with lower cost.

  26. Particle Swarm Optimization In PSO, each solution of the optimization problem is a bird in the search space, which we call a "particle". All particles have a fitness value determined by the function being optimized, and each particle also has a velocity that determines the direction and distance of its flight; the particles then follow the current best particle and search the solution space. PSO is initialized with a group of random particles (random solutions) and then finds the optimum by iteration. In every iteration a particle updates itself by tracking two "extremes": one is the best solution found by the particle itself, the personal best pBest; the other is the best solution found so far by the whole population, the global best gBest.

  27. Particle update formula Once these two best values have been found, each particle updates its velocity and its new position according to the formulas below. Here V is the particle's velocity, Present is the particle's current position, and pBest and gBest are defined above. rand() is a random number in (0, 1), and c1 and c2 are called learning factors; usually c1 = c2 = 2. w is a weighting coefficient (the inertia weight), with a value between 0.1 and 0.9. Through repeated learning and updating the particles eventually fly to the position of the optimal solution in the solution space and the search ends; the gBest output at the end is the global optimum. During the update the speed of each dimension of a particle is capped at a maximum Vmax; if the updated speed of some dimension exceeds the preset Vmax, the speed of that dimension is limited to Vmax.
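
The update formulas themselves were lost in extraction; in the standard PSO form that matches the quantities described above they read

V \leftarrow w\,V + c_1\,\mathrm{rand}()\,(pBest - Present) + c_2\,\mathrm{rand}()\,(gBest - Present),
Present \leftarrow Present + V.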

  28. Program (1)
function y=myfit(x)                     % objective function (file myfit.m)
y=1-cos(3*x)*exp(-x);

% Parameter initialization: number of particles N, inertia weight w, and the objective function
clear; N=4;
re=zeros(N,5);                          % columns: x  y  pbest  v  ybest
w=0.5;
re(:,1)=linspace(0,4,N);                % initial positions spread over [0,4]
re(:,4)=rand(N,1);                      % initial velocities
for ii=1:N
    re(ii,2)=myfit(re(ii,1));           % evaluate each particle's fitness
end
re(:,5)=re(:,2); re(:,3)=re(:,1);       % personal bests start at the initial points
[q,qq]=max(re(:,2)); gbest=re(qq,3);    % global best position

  29. Program (2) 1) Update according to the algorithm; 2) when a variable leaves the specified range, clamp it to that range; 3) update pbest and gbest.
% 1) velocity and position update
re(:,4)=re(:,4)*w+2*rand*(re(:,3)-re(:,1))+rand*(gbest*ones(N,1)-re(:,1))*2;
re(:,1)=re(:,1)+re(:,4);
% 2) keep positions inside the range [0,4]
for ii=1:N
    if re(ii,1)>4, re(ii,1)=4; end
    if re(ii,1)<0, re(ii,1)=0; end
end
% 3) update pbest and gbest
for ii=1:N
    re(ii,2)=myfit(re(ii,1));
    if re(ii,5)<re(ii,2)
        re(ii,5)=re(ii,2); re(ii,3)=re(ii,1);
    end
end
[q,qq]=max(re(:,2));
gbest=re(qq,3);
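
The slides give the update, clamping, and bookkeeping steps but not the outer loop. A minimal driver that repeats steps 1)-3) a fixed number of times, using the variables set up in Program (1) (the iteration count of 50 is an assumption, not from the slides), could be:

% Main PSO loop combining the update, clamping, and pbest/gbest steps above
nstep = 50;
for step = 1:nstep
    % 1) velocity and position update
    re(:,4) = re(:,4)*w + 2*rand*(re(:,3)-re(:,1)) + 2*rand*(gbest*ones(N,1)-re(:,1));
    re(:,1) = re(:,1) + re(:,4);
    % 2) keep positions inside [0,4]
    re(re(:,1)>4, 1) = 4;
    re(re(:,1)<0, 1) = 0;
    % 3) update pbest and gbest
    for ii = 1:N
        re(ii,2) = myfit(re(ii,1));
        if re(ii,5) < re(ii,2)
            re(ii,5) = re(ii,2); re(ii,3) = re(ii,1);
        end
    end
    [q, qq] = max(re(:,2));
    gbest = re(qq,3);        % as in Program (2), gbest follows the best particle of the current iteration
end
gbest, myfit(gbest)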

  30. Famous scientists in high-pressure research: Artem R. Oganov and Yanming Ma. 95 papers (including 5 in Nature, 1 in Nature Materials, 5 in PNAS, 6 in PRL); 2 in Nature and 9 in PRL.

  31. Materials under High Pressure

  32. Homework Find the minimum or maximum of a physical problem with one of the three typical optimization methods.
