Chinese Journal of Applied Probability and Statistics, Vol. 24, No. 5, Oct. 2008

Unsupervised Classification Based on Penalized Maximum Likelihood of Gaussian Mixture Models

Yu Peng (School of Mathematical Sciences, Peking University, Beijing, 100871; National Geomatics Center of China, Beijing, 100044)
Tong Xinwei (School of Mathematical Sciences, Beijing Normal University, Beijing, 100875)
Feng Jufu (National Laboratory on Machine Perception, Center for Information Science, School of Electronics Engineering and Computer Science, Peking University, Beijing, 100871)

Abstract: We propose an unsupervised classification algorithm based on Gaussian mixture models. Because the EM algorithm used to estimate the parameters of a Gaussian mixture model tends to converge to a local optimum, we replace the Jeffreys prior with an inverse Wishart prior. Experiments show that the algorithm raises the rate of correct classification and reduces the computation time.

Keywords: Gaussian mixture models, unsupervised classification, penalized maximum likelihood, EM algorithm, inverse Wishart distribution.

Chinese Library Classification: O212.7.

1. Introduction

Mixture models have been studied since 1894, when Pearson first fitted a mixture of two normal distributions. In 1969, Day [1] investigated estimation of the components of a normal mixture, comparing maximum likelihood with moment and minimum $\chi^2$ estimators. In 1977, Dempster et al. [2] introduced the EM algorithm for maximum likelihood estimation from incomplete data, and Redner and Walker [3] analyzed maximum likelihood estimation of mixture densities by the EM algorithm. Together with MCMC methods, EM has become the standard computational tool for fitting mixture models. The EM algorithm, however, is sensitive to initialization, may converge to a local optimum, and does not by itself determine the number of components [4]. Several remedies have been proposed, including model selection criteria combined with component annihilation [4], the split-and-merge EM (SMEM) algorithm [5], and penalized likelihood and Bayesian estimation [6].

(Corresponding author e-mail: xweitong@bnu.edu.cn. Received July 22, 2004.)
Figueiredo and Jain [4] combined a minimum-message-length-type criterion with component-wise EM to select the number of components, and Oliver, Baxter and Wallace [7] also used the MML criterion for unsupervised learning of mixtures. Differently from [4] and [6], in this paper the Jeffreys prior is replaced by an inverse Wishart prior and combined with the MML criterion.

The rest of the paper is organized as follows. Section 2 reviews Gaussian mixture models and the EM algorithm; Section 3 introduces the inverse Wishart prior; Section 4 derives the MML criterion under this prior; Section 5 describes the complete algorithm based on the component-wise EM (CEM$^2$) procedure; Section 6 reports the experimental results; Section 7 concludes the paper.

2. Gaussian Mixture Models and the EM Algorithm

2.1 The Gaussian mixture model

Let $Y = (Y_1, \ldots, Y_d)^T$ be a $d$-dimensional random vector and $y = (y_1, \ldots, y_d)^T$ a realization of $Y$. A $k$-component mixture density has the form
\[ p(y \mid \theta) = \sum_{m=1}^{k} \alpha_m\, p(y \mid \theta_m), \tag{2.1} \]
where $\alpha_1, \ldots, \alpha_k$ are the mixing proportions, $\theta_m$ is the parameter of the $m$-th component, $\theta \equiv \{\theta_1, \ldots, \theta_k, \alpha_1, \ldots, \alpha_k\}$, and the mixing proportions satisfy
\[ \alpha_m \ge 0, \quad m = 1, \ldots, k, \qquad \sum_{m=1}^{k} \alpha_m = 1. \tag{2.2} \]
For Gaussian components,
\[ p(y \mid \theta_m) = (2\pi)^{-d/2} |\Sigma_m|^{-1/2} \exp\{-(1/2)(y - \mu_m)^T \Sigma_m^{-1} (y - \mu_m)\}, \]
so $\theta_m$ consists of the $d$-dimensional mean $\mu_m$ and the covariance matrix $\Sigma_m$.

Under the constraint (2.2), the likelihood built from (2.1) cannot be maximized in closed form, and iterative procedures such as the EM algorithm are used. Given $N$ independent observations $\mathcal{Y} = \{y^{(1)}, \ldots, y^{(N)}\}$, the log-likelihood is
\[ \log p(\mathcal{Y} \mid \theta) = \sum_{n=1}^{N} \ln \sum_{m=1}^{k} \alpha_m\, p(y^{(n)} \mid \theta_m). \tag{2.3} \]
The maximum likelihood (ML) estimate is
\[ \hat{\theta}_{ML} = \arg\max_{\theta}\{\log p(\mathcal{Y} \mid \theta)\}, \tag{2.4} \]
and the maximum a posteriori (MAP) estimate is
\[ \hat{\theta}_{MAP} = \arg\max_{\theta}\{\log p(\mathcal{Y} \mid \theta) + \log p(\theta)\}. \tag{2.5} \]
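To make (2.1) and (2.3) concrete, the following Python sketch evaluates the mixture log-likelihood for a given parameter set. The function name, the array layout and the use of scipy.stats are our own choices and are not part of the original paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_log_likelihood(Y, alphas, mus, Sigmas):
    """Log-likelihood (2.3) of the data under a k-component Gaussian mixture.

    Y      : (N, d) array of observations y^(1), ..., y^(N)
    alphas : (k,) mixing proportions, nonnegative and summing to one
    mus    : (k, d) component means
    Sigmas : (k, d, d) component covariance matrices
    """
    N, k = Y.shape[0], len(alphas)
    weighted = np.empty((N, k))
    for m in range(k):
        # alpha_m * p(y^(n) | theta_m), cf. the mixture density (2.1)
        weighted[:, m] = alphas[m] * multivariate_normal.pdf(Y, mean=mus[m], cov=Sigmas[m])
    return float(np.sum(np.log(weighted.sum(axis=1))))
```

For higher dimensions a numerically safer variant would work with log-densities and a log-sum-exp reduction instead of multiplying raw densities.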
2.2 The EM algorithm

The EM algorithm alternates between an E-step and an M-step: the E-step computes the expected complete-data log-likelihood (the Q-function) given the current parameter estimates, and the M-step maximizes it with respect to the parameters; the two steps are repeated until convergence. For the Gaussian mixture (2.1) they take the following form.

E-step: given the current $\mu_m$, $\Sigma_m$ and $\alpha_m$, compute for every observation $n$ the posterior probability that it was generated by component $m$:
\[ R_{mn} = \alpha_m\, p(y^{(n)} \mid \theta_m) \Big/ \sum_{m'=1}^{k} \alpha_{m'}\, p(y^{(n)} \mid \theta_{m'}). \tag{2.6} \]

M-step: subject to (2.2), update $\alpha_m$, $\mu_m$ and $\Sigma_m$ by
\[ \hat{\alpha}_m = \Big(\sum_{n=1}^{N} R_{mn}\Big) \Big/ N, \tag{2.7} \]
\[ \hat{\mu}_m = \Big(\sum_{n=1}^{N} R_{mn}\, y^{(n)}\Big) \Big/ (N \hat{\alpha}_m), \tag{2.8} \]
\[ \hat{\Sigma}_m = \sum_{n=1}^{N} R_{mn} (y^{(n)} - \hat{\mu}_m)(y^{(n)} - \hat{\mu}_m)^T \Big/ (N \hat{\alpha}_m), \tag{2.9} \]
where $N$ is the sample size and $d$ the dimension.

3. The Inverse Wishart Prior

The likelihood of a Gaussian mixture is unbounded: it diverges when the covariance matrix of one component degenerates around a single observation [1, 8]. A standard remedy is to penalize the likelihood, that is, to use the MAP criterion (2.5) with a suitable prior. Ridolfi and Idier [9] used Gamma-type priors, while Ormoneit and Tresp [6] and Snoussi and M-Djafari [10] used inverse Wishart priors. Here the prior is taken as
\[ p(\theta) = D(\alpha \mid \gamma) \prod_{m=1}^{k} N(\mu_m \mid \nu_m, \eta_m^{-1}\Sigma_m)\, IW(\Sigma_m^{-1} \mid \delta_m, \beta_m), \tag{3.1} \]
where
\[ D(\alpha \mid \gamma) = b(\gamma) \prod_{m=1}^{k} \alpha_m^{\gamma_m - 1}, \qquad \alpha_m \ge 0, \quad \sum_{m=1}^{k} \alpha_m = 1, \tag{3.2} \]
\[ N(\mu_m \mid \nu_m, \eta_m^{-1}\Sigma_m) = (2\pi)^{-d/2} |\eta_m^{-1}\Sigma_m|^{-1/2} \exp\Big\{-\frac{\eta_m}{2}(\mu_m - \nu_m)^T \Sigma_m^{-1} (\mu_m - \nu_m)\Big\}, \tag{3.3} \]
\[ IW(\Sigma_m^{-1} \mid \delta_m, \beta_m) = c(\delta_m, \beta_m)\, |\Sigma_m^{-1}|^{\delta_m - (d+1)/2} \exp\{-\mathrm{tr}(\beta_m \Sigma_m^{-1})\}, \tag{3.4} \]
with $\delta_m > (d-1)/2$; here $b(\gamma)$ and $c(\delta_m, \beta_m)$ are normalizing constants and $\mathrm{tr}(\cdot)$ denotes the trace.
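The following sketch evaluates the logarithm of the prior density (3.1)-(3.4), dropping the normalizing constants $b(\gamma)$ and $c(\delta_m, \beta_m)$. The interface and array shapes are our own assumptions.

```python
import numpy as np

def log_prior(alphas, mus, Sigmas, gamma, eta, nu, delta, beta):
    """Log of the prior (3.1)-(3.4) up to the constants b(gamma), c(delta_m, beta_m).

    gamma, eta, delta : (k,) hyperparameter arrays
    nu                : (k, d) prior means
    beta              : (k, d, d) prior scale matrices
    """
    d = mus.shape[1]
    total = 0.0
    for m in range(len(alphas)):
        Sigma_inv = np.linalg.inv(Sigmas[m])
        _, logdet = np.linalg.slogdet(Sigmas[m])
        # Dirichlet factor (3.2): (gamma_m - 1) log alpha_m
        total += (gamma[m] - 1.0) * np.log(alphas[m])
        # Normal factor (3.3): N(mu_m | nu_m, eta_m^{-1} Sigma_m)
        diff = mus[m] - nu[m]
        total += (-0.5 * d * np.log(2.0 * np.pi)
                  - 0.5 * (logdet - d * np.log(eta[m]))
                  - 0.5 * eta[m] * diff @ Sigma_inv @ diff)
        # Inverse Wishart factor (3.4) on Sigma_m^{-1}
        total += -(delta[m] - (d + 1) / 2.0) * logdet - np.trace(beta[m] @ Sigma_inv)
    return total
```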
Substituting (3.2), (3.3) and (3.4) into (2.5), the penalized objective becomes
\[ \hat{\theta}_{MAP} = \arg\max_{\theta}\Big\{ \sum_{n=1}^{N}\sum_{m=1}^{k} R_{mn}\big[\log\alpha_m + \log N(y^{(n)} \mid \mu_m, \Sigma_m)\big] + \log D(\alpha \mid \gamma) + \sum_{m=1}^{k}\big[\log N(\mu_m \mid \nu_m, \eta_m^{-1}\Sigma_m) + \log IW(\Sigma_m^{-1} \mid \delta_m, \beta_m)\big]\Big\}. \tag{3.5} \]
The corresponding EM iterations are as follows.

E-step: identical to (2.6),
\[ R_{mn} = \alpha_m\, p(y^{(n)} \mid \theta_m) \Big/ \sum_{m'=1}^{k} \alpha_{m'}\, p(y^{(n)} \mid \theta_{m'}). \tag{3.6} \]

M-step: update $\alpha_m$, $\mu_m$ and $\Sigma_m$ by
\[ \hat{\alpha}_m = \Big(\sum_{n=1}^{N} R_{mn} + \gamma_m - 1\Big) \Big/ \Big(N + \sum_{m=1}^{k}\gamma_m - k\Big), \tag{3.7} \]
\[ \hat{\mu}_m = \Big(\sum_{n=1}^{N} R_{mn}\, y^{(n)} + \eta_m \nu_m\Big) \Big/ (N\hat{\alpha}_m + \eta_m), \tag{3.8} \]
\[ \hat{\Sigma}_m = \Big[\sum_{n=1}^{N} R_{mn} (y^{(n)} - \hat{\mu}_m)(y^{(n)} - \hat{\mu}_m)^T + \eta_m(\hat{\mu}_m - \nu_m)(\hat{\mu}_m - \nu_m)^T + 2\beta_m\Big] \Big/ (N\hat{\alpha}_m + 2\delta_m - d), \tag{3.9} \]
where $\gamma$, $\eta$, $\nu$, $\delta$, $\beta$ are the prior hyperparameters, $N$ is the sample size and $d$ the dimension.
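The sketch below implements one penalized M-step according to (3.7)-(3.9). The function name and argument layout are our own; the responsibilities R are assumed to come from an E-step as in (3.6).

```python
import numpy as np

def penalized_m_step(Y, R, gamma, eta, nu, delta, beta):
    """Penalized M-step updates (3.7)-(3.9).

    Y : (N, d) data; R : (N, k) responsibilities from the E-step (3.6)
    gamma, eta, delta : (k,) hyperparameter arrays; nu : (k, d); beta : (k, d, d)
    """
    N, d = Y.shape
    k = R.shape[1]
    Nm = R.sum(axis=0)                                  # sum_n R_mn for each component
    # (3.7): Dirichlet-penalized mixing proportions
    alphas = (Nm + gamma - 1.0) / (N + gamma.sum() - k)
    mus = np.empty((k, d))
    Sigmas = np.empty((k, d, d))
    for m in range(k):
        # (3.8): mean update shrunk towards the prior mean nu_m
        mus[m] = (R[:, m] @ Y + eta[m] * nu[m]) / (N * alphas[m] + eta[m])
        # (3.9): covariance update regularized by eta_m and beta_m
        diff = Y - mus[m]
        S = (R[:, m, None] * diff).T @ diff
        dm = mus[m] - nu[m]
        Sigmas[m] = (S + eta[m] * np.outer(dm, dm) + 2.0 * beta[m]) / (N * alphas[m] + 2.0 * delta[m] - d)
    return alphas, mus, Sigmas
```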
4. The MML Criterion

The minimum message length (MML) criterion was proposed by Wallace and Freeman [11] in 1987. It selects the model that minimizes the total length of a two-part message which first encodes the model parameters and then the data given those parameters. The message length is approximately
\[ \mathrm{MessLen} \approx -\log p(\theta) + \frac{1}{2}\log|I(\theta)| - \log p(\mathcal{Y} \mid \theta) + \frac{c}{2}\log\kappa_c + \frac{c}{2}, \tag{4.1} \]
where $p(\theta)$ is the prior; $I(\theta)$ is the expected Fisher information matrix, $I(\theta) = -\mathrm{E}\big[\partial^2 \log p(\mathcal{Y} \mid \theta)/\partial\theta\,\partial\theta^T\big]$; $p(\mathcal{Y} \mid \theta)$ is the likelihood; $c$ is the number of free parameters; and $\kappa_c$ is the optimal quantizing lattice constant in $c$ dimensions, with $\kappa_1 = 1/12$ and $\kappa_2 = 5/(36\sqrt{3})$ (see Conway and Sloane [12]).

We substitute the prior (3.1) into (4.1) for $p(\theta)$. Since the Fisher information of a mixture has no tractable closed form, we follow [4] and replace $I(\theta)$ by the complete-data Fisher information $I_c(\theta)$. The criterion then becomes
\[ \hat{c}_{MDL} = \arg\min_{c}\Big\{ -\sum_{m=1}^{k}\Big[(\gamma_m - 1)\log\alpha_m - \frac{1}{2}\log|\Sigma_m| - \frac{\eta_m}{2}(\mu_m - \nu_m)^T\Sigma_m^{-1}(\mu_m - \nu_m) + \Big(\delta_m - \frac{d+1}{2}\Big)\log|\Sigma_m^{-1}| - \mathrm{tr}(\beta_m\Sigma_m^{-1}) + \frac{1}{2}\log\big|(I_{d\times d}\otimes I_{d\times d} + K)(\Sigma_m\otimes\Sigma_m)\big|\Big] - \log p(\mathcal{Y} \mid \theta) + \frac{c}{2}\log\kappa_c + \frac{c}{2}\Big\}, \tag{4.2} \]
where $\gamma$, $\eta$, $\nu$, $\delta$, $\beta$ are the prior hyperparameters, $N$ is the sample size, $d$ is the dimension, $I_{d\times d}$ is the $d \times d$ identity matrix, and $K = \sum_{i,j} H_{ij}\otimes H_{ij}$ with $H_{ij}$ the $d \times d$ matrix whose $(i,j)$ entry is 1 and whose other entries are 0.

During the iterations some of the mixing proportions may shrink toward zero. Taking the constraint (2.2) into account, the mixing proportions are updated as
\[ \hat{\alpha}_m(t+1) = \max\Big\{0,\ \sum_{i=1}^{n} R_m(i) - \frac{N}{2}\Big\} \Big/ \sum_{m=1}^{k} \max\Big\{0,\ \sum_{i=1}^{n} R_m(i) - \frac{N}{2}\Big\}, \tag{4.3} \]
where $R_m(i)$ is the responsibility of component $m$ for the $i$-th of the $n$ observations and, as in [4], the $N$ in (4.3) denotes the number of parameters specifying one component (for a $d$-dimensional Gaussian component, $N = d + d(d+1)/2$). When the numerator vanishes, $\hat{\alpha}_m(t+1) = 0$ and the $m$-th component is annihilated. This update is embedded in the component-wise EM (CEM$^2$) algorithm [13]: first $\alpha_1$ and $\theta_1$ are updated with the remaining parameters fixed, then $\alpha_2$ and $\theta_2$, and so on, until all $k$ components have been visited.

5. The Algorithm

Combining the penalized EM iterations of Section 3 with the criterion of Section 4, the complete algorithm is as follows (one sweep of step 3(1) is sketched in code after the list).

1. Initialization: choose $k_{\min}$, $k_{\max}$, the convergence threshold $\varepsilon$, and initial values of $\alpha_m$, $\mu_m$ and $\Sigma_m$.
2. Set $t = 0$, $k = k_{\max}$, $L_{\min} = +\infty$.
3. While $k > k_{\min}$, repeat the following MML loop:
   (1) Set $t = t + 1$ and perform one component-wise (CEM$^2$) sweep using (3.6)-(3.9): for $comp = 1, \ldots, k$, update $R_{mn}$, $\mu_m$, $\Sigma_m$ and $\alpha_m$ of the current component; if $\hat{\alpha}_m(t+1) = 0$ in (4.3), discard $\mu_m$, $\Sigma_m$ and $\alpha_m$ and set $k = k - 1$.
   (2) Evaluate the criterion (4.2) and record its value $L(t)$.
   (3) If $|L(t-1) - L(t)| \ge \varepsilon |L(t)|$, return to (1).
4. If $L(t) < L_{\min}$, set $L_{\min} = L(t)$ and $k_{best} = k$.
5. Set $k = k - 1$ and return to step 3.

In the experiments below, $\varepsilon$ is taken as $10^{-5}$; the mixing proportions are initialized uniformly as $1/k_{\max}$, the means $\mu_m$ are initialized at randomly selected observations, and the covariances $\Sigma_m$ are initialized as multiples of the global sample covariance matrix.
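As a rough illustration of step 3(1), the sketch below combines the annihilating weight update (4.3) with a component-wise sweep in the spirit of CEM$^2$ [13]. The helpers e_step and m_step_single are hypothetical stand-ins for (3.6) and (3.8)-(3.9), and degenerate cases (for example, all components being annihilated at once) are not handled.

```python
import numpy as np

def annihilating_alpha_update(R, n_params):
    """Weight update in the spirit of (4.3): a component whose total
    responsibility falls below n_params / 2 receives weight zero."""
    raw = np.maximum(0.0, R.sum(axis=0) - n_params / 2.0)
    return raw / raw.sum()

def cem2_sweep(Y, alphas, mus, Sigmas, n_params, e_step, m_step_single):
    """One component-wise sweep, cf. step 3(1) of Section 5.

    e_step(Y, alphas, mus, Sigmas) -> R   : responsibilities, cf. (3.6)
    m_step_single(Y, R, m) -> (mu, Sigma) : penalized update of component m,
                                            cf. (3.8)-(3.9)
    """
    m = 0
    while m < len(alphas):
        R = e_step(Y, alphas, mus, Sigmas)
        alphas = annihilating_alpha_update(R, n_params)
        if alphas[m] == 0.0:
            # component m is annihilated: drop its parameters, k <- k - 1
            alphas = np.delete(alphas, m)
            mus = np.delete(mus, m, axis=0)
            Sigmas = np.delete(Sigmas, m, axis=0)
            alphas = alphas / alphas.sum()   # renormalize the surviving weights
            continue
        mus[m], Sigmas[m] = m_step_single(Y, R, m)
        m += 1
    return alphas, mus, Sigmas
```

The outer loop of Section 5 would call such a sweep repeatedly, evaluate the criterion (4.2) after each sweep, and keep the model with the smallest value.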
6. Experiments

In the first experiment, 900 samples were generated from a three-component bivariate Gaussian mixture with mixing proportions $\alpha_1 = \alpha_2 = \alpha_3 = 1/3$, means $\mu_1 = (0, -2)^T$, $\mu_2 = (0, 0)^T$, $\mu_3 = (0, +2)^T$, and common covariance matrix
\[ C_1 = C_2 = C_3 = \mathrm{diag}\{2,\ 0.2\}. \]
The algorithm was initialized with 25 components and, after about 80 iterations, converged to 3 components (Figure 1).

Figure 1. (a) Initialization with 25 components; (b) final estimate with 3 components.

We compared the result with the method of Figueiredo and Jain [4]. Over 50 independent runs, the algorithm with the Jeffreys prior selected the correct number of components in 98% of the runs, whereas the proposed algorithm with the inverse Wishart prior selected it in 100% of the runs.
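For readers who want to reproduce this setting, the sketch below draws a data set with the parameters of the first experiment; the random seed and the function name are our own choices.

```python
import numpy as np

def sample_experiment1(n=900, seed=0):
    """Draw n points from the three-component mixture of the first experiment:
    equal weights 1/3, means (0,-2), (0,0), (0,2), common covariance diag(2, 0.2)."""
    rng = np.random.default_rng(seed)
    means = np.array([[0.0, -2.0], [0.0, 0.0], [0.0, 2.0]])
    cov = np.diag([2.0, 0.2])
    labels = rng.integers(0, 3, size=n)                 # equally likely components
    samples = np.array([rng.multivariate_normal(means[z], cov) for z in labels])
    return samples, labels
```

Calling sample_experiment1() returns the 900 points together with their true component labels, which can be used to score the classification.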
In the second experiment, 1000 samples were generated from a four-component bivariate Gaussian mixture with mixing proportions $\alpha_1 = \alpha_2 = \alpha_3 = 0.3$, $\alpha_4 = 0.1$, means $\mu_1 = \mu_2 = (-4, -4)^T$, $\mu_3 = (2, 2)^T$, $\mu_4 = (-1, -6)^T$, and covariance matrices
\[ C_1 = \begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix}, \quad C_2 = \begin{pmatrix} 6 & -2 \\ -2 & 6 \end{pmatrix}, \quad C_3 = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}, \quad C_4 = \begin{pmatrix} 0.125 & 0 \\ 0 & 0.125 \end{pmatrix}. \]
Since the first two components share the same mean, the number of components is difficult to identify. The algorithm was initialized with 25 components and, after about 100 iterations, converged to 4 components (Figure 3).

Figure 3. (a) Initialization with 25 components; (b) final estimate with 4 components.

The result was again compared with the method of Figueiredo and Jain [4].

Figure 4. (a) Estimate obtained with the Jeffreys prior; (b) estimate obtained with the inverse Wishart prior.

As Figure 4 shows, the estimates obtained with the inverse Wishart prior are closer to the true mixture than those obtained with the Jeffreys prior.
7. Conclusion

This paper has proposed an unsupervised classification algorithm based on penalized maximum likelihood estimation of Gaussian mixture models. The algorithm combines the MML criterion with a component-wise EM procedure and, unlike [4] and [6], uses an inverse Wishart prior in place of the Jeffreys prior. Experiments show that the proposed algorithm selects the correct number of components more reliably and requires less computation time.

References

[1] Day, N.E., Estimating the components of a mixture of normal distributions, Biometrika, 56(3)(1969), 463-474.
[2] Dempster, A., Laird, N. and Rubin, D., Maximum likelihood estimation from incomplete data via the EM algorithm, J. Royal Statistical Soc. B, 39(1977), 1-38.
[3] Redner, R.A. and Walker, H.F., Mixture densities, maximum likelihood and the EM algorithm, SIAM Review, 26(1984), 195-239.
[4] Figueiredo, M.A.T. and Jain, A.K., Unsupervised learning of finite mixture models, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(3)(2002), 381-396.
[5] Ueda, N., Nakano, R., Ghahramani, Z. and Hinton, G., SMEM algorithm for mixture models, Neural Computation, 12(2000), 2109-2128.
[6] Ormoneit, D. and Tresp, V., Averaging, maximum penalized likelihood and Bayesian estimation for improving Gaussian mixture probability density estimates, IEEE Transactions on Neural Networks, 9(4)(1998), 639-649.
[7] Oliver, J., Baxter, R. and Wallace, C., Unsupervised learning using MML, Proc. 13th Int'l Conf. Machine Learning, 364-372, 1996.
[8] Hathaway, R., Another interpretation of the EM algorithm for mixture distributions, Statistics & Probability Letters, 4(1986), 53-56.
[9] Ridolfi, A. and Idier, J., Penalized maximum likelihood estimation for normal mixture distributions, Actes 17 Coll. GRETSI, Vannes, France, 259-262, 1999.
[10] Snoussi, H. and M-Djafari, A., Penalized maximum likelihood for multivariate Gaussian mixture, Bayesian Inference and Maximum Entropy Methods in Science and Engineering: 21st International Workshop, Baltimore, Maryland, 77-88, 2001.
[11] Wallace, C.S. and Freeman, P.R., Estimation and inference by compact coding, Journal of the Royal Statistical Society, Series B, 49(1987), 240-252.
[12] Conway, J.H. and Sloane, N.J.A., Sphere Packings, Lattices and Groups, Springer-Verlag, London, 1988.
[13] Celeux, G., Chretien, S., Forbes, F. and Mkhadri, A., A component-wise EM algorithm for mixtures, Technical Report 3746, INRIA Rhone-Alpes, France, 1999. Available at http://www.inria.fr/RRRT/RR-3746.html.
Unsupervised Classification Based on Penalized Maximum Likelihood of Gaussian Mixture Models

Yu Peng (School of Mathematical Sciences, Peking University, Beijing, 100871; National Geomatics Center of China, Beijing, 100044)
Tong Xinwei (School of Mathematical Sciences, Beijing Normal University, Beijing, 100875)
Feng Jufu (National Laboratory on Machine Perception, Center for Information Science, School of Electronics Engineering and Computer Science, Peking University, Beijing, 100871)

Abstract: In this paper we propose an unsupervised classification algorithm based on Gaussian mixture models. Since the EM algorithm may converge to a local optimum when estimating the parameters of a Gaussian mixture model, we substitute an inverse Wishart prior for the Jeffreys prior. Experiments show that the algorithm improves the correct classification rate and reduces the computation time.

Keywords: Gaussian mixture models, unsupervised classification, penalized maximum likelihood, EM algorithm, inverse Wishart distribution.

AMS Subject Classification: 62G32.