1 Introduction

Differential Evolution (DE) and Particle Swarm Optimization (PSO) are representative population-based methods for numerical optimization. DE is treated in [1, 2, 3, 4] and PSO in [5, 6, 7, 8, 9, 10, 11].

2 The Optimization Problem

2.1 Formulation

The constrained optimization problem (P) is stated as

(P)  minimize    f(x)
     subject to  g_j(x) <= 0,        j = 1, ..., q
                 h_j(x) = 0,         j = q + 1, ..., m
                 l_i <= x_i <= u_i,  i = 1, ..., n        (1)

where x = (x_1, ..., x_n) is an n-dimensional decision vector, f(x) is the objective function, the g_j(x) <= 0 are q inequality constraints, and the h_j(x) = 0 are m - q equality constraints. The functions f, g_j, and h_j are linear or nonlinear real-valued functions, and l_i and u_i are the lower and upper bounds of x_i.

2.2 Population-based descent methods

Both DE and PSO can be regarded as population-based descent methods, which proceed as follows:

1. Initialization: generate an initial population of candidate solutions.
2. Evaluation: evaluate every individual in the population.
3. Termination test: if the stopping condition is satisfied, stop.
4. For each individual in the population:
   (a) Candidate generation: generate a new candidate solution.
   (b) Evaluation: evaluate the candidate.
   (c) Replacement: replace the individual with the candidate if the candidate is better.
5. Return to step 3.

3 Differential Evolution

3.1 Overview

Differential Evolution (DE) is an optimization method, related to evolution strategies, proposed by Storn and Price [1, 2]. DE perturbs a base vector with scaled difference vectors formed from other individuals in the population (Fig. 1), so the size and direction of the search steps adapt to the distribution of the current population.

3.2 DE variants

DE variants such as DE/best/1/bin and DE/rand/1/exp are named with the scheme DE/base/num/cross:

- base specifies how the base vector is chosen: DE/rand/num/cross selects it at random from the population, while DE/best/num/cross uses the best individual.
- num is the number of difference vectors used in mutation; for an n-dimensional problem, mutation with num difference vectors involves 1 + 2 num individuals.
- cross specifies the crossover operator: DE/base/num/bin uses binomial crossover and DE/base/num/exp uses exponential crossover.

DE has two main control parameters, the scaling factor F and the crossover rate CR, which govern how the trial vector is generated.

3.3 DE/rand/1/{bin, exp}

The algorithms DE/rand/1/bin and DE/rand/1/exp [3, 4] proceed as follows:

Step 0 Initialization: generate N individuals {x_i, i = 1, 2, ..., N} randomly.
Step 1 Termination test: if the stopping condition is satisfied, stop.
Step 2 Mutation: for each individual x_i, select three mutually distinct individuals x_p1, x_p2, x_p3, all different from x_i, and generate the mutant vector x' from the base vector x_p1 and the difference vector x_p2 - x_p3 as

    x' = x_p1 + F (x_p2 - x_p3)        (2)

where F is the scaling factor.
Step 3 Crossover: generate the trial vector x^new by crossing the mutant x' with the parent x_i. A starting position j is chosen at random from [1, n]. In binomial crossover (bin), each element is taken from x' with probability CR and from x_i with probability 1 - CR (element j is always taken from x'). In exponential crossover (exp), consecutive elements starting at position j are taken from x' as long as u(0, 1) < CR holds, and the remaining elements are taken from x_i.
Step 4 Evaluation: evaluate the trial vector x^new built in Steps 2 and 3.
Step 5 Survivor selection: for each i, if x^new is better than x_i, replace x_i with x^new.
Step 6 Return to Step 1.

The two variants can be written in C-like pseudocode as follows:

DE/rand/1/bin()
{
  P = Generate N individuals {x_i} randomly;
  Evaluate x_i, i = 1, 2, ..., N;
  for(t = 1; termination condition is not satisfied; t++) {
    for(i = 1; i <= N; i++) {
      (p1, p2, p3) = select randomly from [1, N] s.t. p1 != p2 != p3 != i;
      j = select randomly from [1, n];
      for(k = 1; k <= n; k++) {
        if(k == 1 || u(0, 1) < CR) x^new_j = x_{p1,j} + F (x_{p2,j} - x_{p3,j});
        else x^new_j = x_{i,j};
        j = (j + 1) % n;
      }
      Evaluate x^new;
      if(f(x^new) < f(x_i)) x_i = x^new;
    }
  }
}

DE/rand/1/exp()
{
  P = Generate N individuals {x_i} randomly;
  Evaluate x_i, i = 1, 2, ..., N;
  for(t = 1; termination condition is not satisfied; t++) {
    for(i = 1; i <= N; i++) {
      (p1, p2, p3) = select randomly from [1, N] \ {i} s.t. p_j != p_k (j, k = 1, 2, 3, j != k);
      j = select randomly from [1, n];
      k = 1;
      do {
        x^new_j = x_{p1,j} + F (x_{p2,j} - x_{p3,j});
        j = (j + 1) % n;
        k++;
      } while(k <= n && u(0, 1) < CR);
      for(; k <= n; k++) {
        x^new_j = x_{i,j};
        j = (j + 1) % n;
      }
      Evaluate x^new;
      if(f(x^new) < f(x_i)) x_i = x^new;
    }
  }
}

In the pseudocode, u(0, 1) denotes a uniform random number in [0, 1].
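As a concrete, runnable illustration of the two variants above, here is a Python sketch of DE/rand/1/{bin, exp}. The population size N, scaling factor F, and crossover rate CR are the parameters named in the text; the default values, the fixed iteration budget T, the random seed, and the function name de_rand_1 are assumptions made for this example.

```python
import random

def de_rand_1(f, bounds, variant="bin", N=30, F=0.7, CR=0.9, T=200, seed=1):
    """DE/rand/1/{bin, exp} sketch following the pseudocode above."""
    rng = random.Random(seed)
    n = len(bounds)
    # Step 0: initialize N individuals uniformly inside the bounds
    pop = [[rng.uniform(l, u) for (l, u) in bounds] for _ in range(N)]
    fit = [f(x) for x in pop]
    for _ in range(T):                      # Step 1: fixed-budget termination
        for i in range(N):
            # Step 2: three mutually distinct partners, all different from i
            p1, p2, p3 = rng.sample([k for k in range(N) if k != i], 3)
            mut = [pop[p1][k] + F * (pop[p2][k] - pop[p3][k]) for k in range(n)]
            # Step 3: crossover starting at a random position j
            j = rng.randrange(n)
            trial = pop[i][:]
            if variant == "bin":
                # binomial: each element from the mutant with probability CR;
                # the first considered position always comes from the mutant
                for k in range(n):
                    if k == 0 or rng.random() < CR:
                        trial[j] = mut[j]
                    j = (j + 1) % n
            else:
                # exponential: consecutive elements from the mutant while
                # u(0, 1) < CR holds; the rest stay with the parent
                k = 0
                while True:
                    trial[j] = mut[j]
                    j = (j + 1) % n
                    k += 1
                    if k >= n or rng.random() >= CR:
                        break
            # Steps 4-5: evaluate and keep the better of parent and trial
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
    b = min(range(N), key=lambda i: fit[i])
    return pop[b], fit[b]
```

For example, minimizing the 5-dimensional sphere function sum of x_k^2 with either variant drives the best objective value close to zero.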
4 Particle Swarm Optimization

4.1 Overview

Particle Swarm Optimization (PSO) searches by moving a swarm of agents (particles) through the search space. For an objective function f, agent i at time t has a position x^t_i and a velocity v^t_i. The best objective value found by agent i so far, pbest_i, and the corresponding position x*_i are

    pbest_i = min_{tau = 0, 1, ..., t} f(x^tau_i)        (3)
    x*_i = arg min_{tau = 0, 1, ..., t} f(x^tau_i)       (4)

The best value found by the whole swarm, gbest, and the position x_G of the globally best agent G are

    gbest = min_i pbest_i                                (5)
    x_G = arg min_i f(x*_i)                              (6)

The velocity of agent i at time t + 1 is updated as

    v^{t+1}_{ij} = w v^t_{ij} + c1 rand() (x*_{ij} - x^t_{ij})
                 + c2 rand() (x_{Gj} - x^t_{ij})         (7)

where w is the inertia weight, rand() is a uniform random number in [0, 1], c1 is the cognitive parameter, and c2 is the social parameter. Using the velocity from (7), the position at time t + 1 is updated as

    x^{t+1}_i = x^t_i + v^{t+1}_i                        (8)

4.2 PSO algorithm

1. Initialization: place each agent i at a random position x_i in the search space S, with each x_ik in [l_k, u_k]; set x*_i = x_i and every velocity component v_ik to 0.
2. Find the globally best agent G.
3. Termination test: stop after T iterations.
4. Update each agent i by (7) and (8), clipping each velocity component to [-Vmax_j, Vmax_j].
5. Return to step 3.

The PSO algorithm can be written in C-like pseudocode as follows:
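The update rules (3)-(8) above can be sketched in Python as follows. The linearly decreasing inertia weight matches the schedule w = w_0 + (w_T - w_0)(t - 1)/(T - 1) used in the text; the choice Vmax_j = u_j - l_j, the default parameter values, the random seed, and the function name pso are assumptions made for this example.

```python
import random

def pso(f, bounds, N=30, T=200, c1=2.0, c2=2.0, w0=0.9, wT=0.4, seed=2):
    """PSO sketch following (3)-(8) with velocity clipping and a
    linearly decreasing inertia weight from w0 to wT."""
    rng = random.Random(seed)
    n = len(bounds)
    vmax = [u - l for (l, u) in bounds]           # assumed Vmax_j choice
    x = [[rng.uniform(l, u) for (l, u) in bounds] for _ in range(N)]
    v = [[0.0] * n for _ in range(N)]             # velocities start at 0
    pbest_x = [xi[:] for xi in x]                 # x*_i in (4)
    pbest_f = [f(xi) for xi in x]                 # pbest_i in (3)
    g = min(range(N), key=lambda i: pbest_f[i])   # globally best agent G
    gx, gf = pbest_x[g][:], pbest_f[g]            # x_G in (6), gbest in (5)
    for t in range(T):
        w = w0 + (wT - w0) * t / (T - 1)          # inertia schedule
        for i in range(N):
            for j in range(n):
                # velocity update (7): inertia + cognitive + social terms
                v[i][j] = (w * v[i][j]
                           + c1 * rng.random() * (pbest_x[i][j] - x[i][j])
                           + c2 * rng.random() * (gx[j] - x[i][j]))
                # clip to [-Vmax_j, Vmax_j]
                v[i][j] = max(-vmax[j], min(vmax[j], v[i][j]))
                x[i][j] += v[i][j]                # position update (8)
            fi = f(x[i])
            if fi < pbest_f[i]:                   # update personal best
                pbest_x[i], pbest_f[i] = x[i][:], fi
                if fi < gf:                       # update global best
                    gx, gf = x[i][:], fi
    return gx, gf
```

On a smooth test function such as the 5-dimensional sphere, this sketch drives the global best objective value close to zero within the default budget.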
PSO()
{
  Initialize Agents(0);
  Evaluate all x in Agents(0);
  x_G = arg min f(x), x in Agents(0);
  for(t = 1; t <= T; t++) {
    w = w_0 + (w_T - w_0)(t - 1)/(T - 1);
    for(each agent i in Agents(t)) {
      for(each dimension j) {
        v_ij = w v_ij + c1 u(0, 1)(x*_ij - x_ij) + c2 u(0, 1)(x_Gj - x_ij);
        if(v_ij > Vmax_j) v_ij = Vmax_j;
        else if(v_ij < -Vmax_j) v_ij = -Vmax_j;
        x_ij = x_ij + v_ij;
      }
      Evaluate x_i;
      if(f(x_i) < f(x*_i)) {
        if(f(x_i) < f(x_G)) G = i;
        x*_i = x_i;
      }
    }
  }
}

References

[1] R. Storn and K. Price: Minimizing the real functions of the ICEC'96 contest by differential evolution, Proc. of the International Conference on Evolutionary Computation, pp. 842-844 (1996).
[2] R. Storn and K. Price: Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization, 11, pp. 341-359 (1997).
[3] T. Takahama, S. Sakai and N. Iwane: Solving nonlinear constrained optimization problems by the ε constrained differential evolution, Proc. of the 2006 IEEE Conference on Systems, Man, and Cybernetics, pp. 2322-2327 (2006).
[4] T. Takahama and S. Sakai: Constrained optimization by the ε constrained differential evolution with gradient-based mutation and feasible elites, Proc. of the 2006 World Congress on Computational Intelligence, pp. 308-315 (2006).
[5] J. Kennedy and R. C. Eberhart: Particle swarm optimization, Proceedings of IEEE International Conference on Neural Networks, Vol. IV, Perth, Australia, IEEE Press, pp. 1942-1948 (1995).
[6] J. Kennedy and R. C. Eberhart: Swarm Intelligence, Morgan Kaufmann, San Francisco (2001).
[7] T. Takahama and S. Sakai: Constrained optimization by combining the α constrained method with particle swarm optimization, Proc. of Joint 2nd International Conference on Soft Computing and Intelligent Systems and 5th International Symposium on Advanced Intelligent Systems (2004).
[8] T. Takahama and S. Sakai: Constrained optimization by the α constrained particle swarm optimizer, Journal of Advanced Computational Intelligence and Intelligent Informatics, 9, 3, pp. 282-289 (2005).
[9] T. Takahama and S. Sakai: Constrained optimization by ε constrained particle swarm optimizer with ε-level control, Proc. of the 4th IEEE International Workshop on Soft Computing as Transdisciplinary Science and Technology (WSTST'05), pp. 1019-1029 (2005).
[10] T. Takahama, S. Sakai and N. Iwane: Constrained optimization by the ε constrained hybrid algorithm of particle swarm optimization and genetic algorithm, Proc. of the 18th Australian Joint Conference on Artificial Intelligence, pp. 389-400 (2005). Lecture Notes in Computer Science 3809.
[11] T. Takahama and S. Sakai: Solving constrained optimization problems by the ε constrained particle swarm optimizer with adaptive velocity limit control, Proc. of the 2nd IEEE International Conference on Cybernetics & Intelligent Systems, pp. 683-689 (2006).