Chinese Journal of Applied Probability and Statistics, Vol. 24, No. 3, Jun. 2008

Acceleration of the Monte Carlo EM Algorithm

Luo Ji 1,2
(1 School of Finance and Statistics, East China Normal University, Shanghai, 200241;
 2 School of Mathematics and Statistics, Zhejiang University of Finance and Economics, Hangzhou, 310018)

Abstract  The EM algorithm is a widely used data augmentation method for computing the posterior mode from observed data, but its E step requires an expectation that often has no closed form, which limits its applicability. The Monte Carlo EM (MCEM) algorithm overcomes this difficulty by approximating the E-step integral with Monte Carlo simulation. Both EM and MCEM, however, converge only at a linear rate. This paper proposes an accelerated Monte Carlo EM algorithm that combines MCEM with the Newton-Raphson method: it retains the Monte Carlo approximation of the E step while achieving a quadratic convergence rate in a neighborhood of the posterior mode. A classical example illustrates its advantage in convergence speed over both EM and MCEM.

Keywords: data augmentation, Monte Carlo simulation, EM algorithm, Monte Carlo EM algorithm, Newton-Raphson algorithm.

CLC number: O212.1

1. Introduction

In Bayesian analysis one frequently needs to find the mode of a posterior distribution. When the model involves missing or latent data, the required integrals often have no explicit expression, and Monte Carlo methods offer a practical way to approximate them. The classical tool for such problems is the EM algorithm (Dempster, Laird and Rubin, 1977). Each iteration of EM consists of an E step (expectation) and an M step (maximization). The M step is an ordinary optimization problem and is usually straightforward. The E step, however, requires the conditional expectation E_Z[log p(θ|Y,Z) | θ, Y], which in many models cannot be written in closed form; this limits the applicability of EM. Replacing the E-step integral by a Monte Carlo average yields the Monte Carlo EM algorithm (MCEM). As Dempster, Laird and Rubin (1977) showed, EM converges only at a linear rate, and MCEM inherits this slow convergence.

Supported by grants 07CTJ001, 10701021 and 06CGYJ21YQB. Received November 9, 2006; revised October 23, 2007.
Louis (1982) accelerated the EM algorithm by combining it with the Newton-Raphson method, raising the convergence rate from linear to quadratic near the mode. Following this idea, Section 2 combines the Monte Carlo EM algorithm with the Newton-Raphson method to obtain an accelerated Monte Carlo EM algorithm, which keeps the wide applicability of MCEM while attaining a locally quadratic convergence rate. Section 3 compares EM, MCEM and the accelerated MCEM on a classical example.

2. The Accelerated Monte Carlo EM Algorithm

Let p(θ|Y) denote the posterior density of θ given the observed data Y, let p(θ|Y,Z) denote the posterior density given the augmented data (Y,Z), and let p(z|θ,Y) denote the conditional density of the latent data Z given θ and Y. The E step of the EM algorithm computes

    Q(θ|θ⁽ⁱ⁾, Y) = E_Z[ log p(θ|Y,Z) | θ⁽ⁱ⁾, Y ] = ∫ log p(θ|Y,Z) p(z|θ⁽ⁱ⁾,Y) dz,

and the M step maximizes Q(θ|θ⁽ⁱ⁾,Y) over θ to obtain θ⁽ⁱ⁺¹⁾. When the E-step integral has no closed form it is replaced by a Monte Carlo average, giving the MCEM algorithm. To accelerate MCEM we append a Newton-Raphson correction: at iteration i, the ordinary MCEM maximizer θ_EM⁽ⁱ⁺¹⁾ is computed first, and a Newton-Raphson step then rescales the increment θ_EM⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾. One iteration of the accelerated Monte Carlo EM algorithm proceeds as follows:

MCE1: draw z₁, …, z_m from p(z|θ⁽ⁱ⁾, Y);

MCE2: approximate the Q-function by

    Q̃(θ|θ⁽ⁱ⁾, Y) = (1/m) Σ_{j=1}^m log p(θ|z_j, Y);

MCM: maximize Q̃(θ|θ⁽ⁱ⁾, Y) over θ ∈ Θ to obtain the ordinary MCEM iterate θ_EM⁽ⁱ⁺¹⁾, i.e.

    Q̃(θ_EM⁽ⁱ⁺¹⁾|θ⁽ⁱ⁾, Y) = max_{θ∈Θ} Q̃(θ|θ⁽ⁱ⁾, Y);

N-R: update

    θ⁽ⁱ⁺¹⁾ = θ⁽ⁱ⁾ + ( −∂²log p(θ|Y)/∂θ² )⁻¹ [ ∫ ( −∂²log p(θ|Y,Z)/∂θ² ) p(z|y,θ⁽ⁱ⁾) dz ] (θ_EM⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾)
          = θ⁽ⁱ⁾ + { E_Z[ −∂²log p(θ|Y,Z)/∂θ² | θ⁽ⁱ⁾, Y ] − Var_Z[ ∂log p(θ|Y,Z)/∂θ | θ⁽ⁱ⁾, Y ] }⁻¹
                  × E_Z[ −∂²log p(θ|Y,Z)/∂θ² | θ⁽ⁱ⁾, Y ] (θ_EM⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾),                         (2.1)

where all derivatives are evaluated at θ = θ⁽ⁱ⁾ and the second equality follows from the missing information principle of Louis (1982),

    −∂²log p(θ|Y)/∂θ² = E_Z[ −∂²log p(θ|Y,Z)/∂θ² | θ, Y ] − Var_Z[ ∂log p(θ|Y,Z)/∂θ | θ, Y ].
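The iteration above can be sketched in code. The following Python function is an illustrative one-dimensional implementation, not code from the paper; the problem-specific callables (`draw_z`, `comp_score`, `comp_neg_hess`, `argmax_q`) are hypothetical names that the caller must supply, and the observed-information factor in (2.1) is estimated via Louis' identity from the same Monte Carlo draws.

```python
import numpy as np

def accelerated_mcem_step(theta, draw_z, comp_score, comp_neg_hess, argmax_q, m=10000):
    """One iteration of the accelerated MCEM update (2.1) for scalar theta.

    draw_z(theta, m)        -- m draws from p(z | theta, Y)               (MCE1)
    comp_score(theta, z)    -- d/dtheta log p(theta | Y, z), vectorized in z
    comp_neg_hess(theta, z) -- -d^2/dtheta^2 log p(theta | Y, z), vectorized in z
    argmax_q(theta, z)      -- maximizer of the Monte Carlo Q-function    (MCM)
    """
    z = np.asarray(draw_z(theta, m))          # MCE1: simulate the latent data
    theta_em = argmax_q(theta, z)             # MCE2 + MCM: ordinary MCEM iterate
    # Louis' missing-information identity: I_obs = E[I_comp] - Var(score);
    # both terms are estimated from the same Monte Carlo draws.
    i_comp = np.mean(comp_neg_hess(theta, z))
    i_mis = np.var(comp_score(theta, z))
    # N-R step of (2.1): rescale the MCEM increment by I_comp / I_obs.
    return theta + i_comp / (i_comp - i_mis) * (theta_em - theta)
```

When the complete-data log posterior is concave in θ, I_comp exceeds the missing information, so the multiplier I_comp/(I_comp − I_mis) is greater than one and the step lengthens the ordinary MCEM increment, which is what produces the acceleration.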
The iteration replaces θ⁽ⁱ⁾ by θ⁽ⁱ⁺¹⁾; repeating MCE1, MCE2, MCM and N-R until convergence yields a sequence {θ⁽ⁱ⁾} along which Q(θ⁽ⁱ⁺¹⁾|θ⁽ⁱ⁾,Y) ≥ Q(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y). By Geweke (1989), the Monte Carlo average Q̃(θ|θ⁽ⁱ⁾,Y) converges to Q(θ|θ⁽ⁱ⁾,Y) almost surely as m → ∞. Since θ⁽ⁱ⁾ and the simulated draws of Z are available when the N-R step is carried out, the update (2.1) is directly computable. The convergence rate of the resulting sequence is given by the following theorem.

Theorem 2.1  Suppose that Q(θ|θ⁽ⁱ⁾,Y) ∈ C⁽²⁾, that θ* satisfies ∂Q(θ*|θ⁽ⁱ⁾,Y)/∂θ = 0 with nonsingular Hessian ∂²Q(θ*|θ⁽ⁱ⁾,Y)/∂θ², and that the Hessian matrix G(x) satisfies a Lipschitz condition: there exists β > 0 such that for all i, j,

    |G_ij(x) − G_ij(y)| ≤ β‖x − y‖,                                                               (2.2)

where G_ij(x) is the (i,j)-th element of G(x). Then, for θ⁽⁰⁾ in a neighborhood of θ*, the sequence {θ⁽ⁱ⁾} converges to θ* quadratically.

Proof  Write ∇Q(θ|θ⁽ⁱ⁾,Y) for the gradient and ∇²Q(θ|θ⁽ⁱ⁾,Y) for the Hessian of Q with respect to θ. A Taylor expansion of Q around θ⁽ⁱ⁾ gives

    Q(θ⁽ⁱ⁾ + s | θ⁽ⁱ⁾, Y) ≈ Q(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y) + ∇Q(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y)ᵀ s + (1/2) sᵀ ∇²Q(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y) s,      (2.3)

where s = θ − θ⁽ⁱ⁾. Maximizing the right-hand side of (2.3) over s yields the Newton step

    θ⁽ⁱ⁺¹⁾ = θ⁽ⁱ⁾ − [∇²Q(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y)]⁻¹ ∇Q(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y) = θ⁽ⁱ⁾ − G_i⁻¹ g_i
           = θ⁽ⁱ⁾ + ( −∂²log p(θ|Y)/∂θ² )⁻¹ [ ∫ ( −∂²log p(θ|Y,Z)/∂θ² ) p(z|y,θ⁽ⁱ⁾) dz ] (θ_EM⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾),   (2.4)

where G_i = ∇²Q(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y) and g_i = ∇Q(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y); that is, the Newton step coincides with the update (2.1). Let w_i = θ⁽ⁱ⁾ − θ* and v_i = θ_EM⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾, where θ_EM⁽ⁱ⁺¹⁾ maximizes Q(θ|θ⁽ⁱ⁾,Y) at iteration i. Since Q(θ|θ⁽ⁱ⁾,Y) ∈ C⁽²⁾, we have w_i → 0 and v_i → 0 as the iteration proceeds. In practice Q is replaced by its Monte Carlo approximation Q̃, so (2.4) becomes

    θ⁽ⁱ⁺¹⁾ = θ⁽ⁱ⁾ − [∇²Q̃(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y)]⁻¹ ∇Q̃(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y) = θ⁽ⁱ⁾ − Ĝ_i⁻¹ ĝ_i,                      (2.5)

where Ĝ_i = ∇²Q̃(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y) and ĝ_i = ∇Q̃(θ⁽ⁱ⁾|θ⁽ⁱ⁾,Y).
Let h_i = θ⁽ⁱ⁾ − θ*, g(θ) = ∇Q(θ|θ⁽ⁱ⁾,Y) and G(θ) = ∇²Q(θ|θ⁽ⁱ⁾,Y). A Taylor expansion gives

    g(θ*) = g(θ⁽ⁱ⁾ − h_i) = ĝ_i − Ĝ_i h_i + O(‖h_i‖²),

and since g(θ*) = 0,

    0 = g(θ*) = ĝ_i − Ĝ_i h_i + O(‖h_i‖²).

Because Q(θ|θ⁽ⁱ⁾,Y) ∈ C⁽²⁾ and θ⁽ⁱ⁾ → θ*, the Hessian G* = ∇²Q(θ*|θ⁽ⁱ⁾,Y) is nonsingular, so Ĝ_i is invertible for θ⁽ⁱ⁾ sufficiently close to θ*; the Lipschitz condition (2.2) on G(x) ensures that ‖Ĝ_i⁻¹‖ remains bounded, exactly as in the classical Newton method. Absorbing the bound on ‖Ĝ_i⁻¹‖ into the O(·) constant, there exists C such that for θ⁽ⁱ⁾ in the neighborhood {θ : ‖θ − θ*‖ ≤ γ/C}, γ ∈ (0,1),

    0 = Ĝ_i⁻¹ ĝ_i − h_i + O(‖h_i‖²) = −h_{i+1} + O(‖h_i‖²).

Hence

    ‖h_{i+1}‖ ≤ C‖h_i‖²,                                                                         (2.6)

and therefore

    ‖h_{i+1}‖ ≤ γ‖h_i‖,                                                                          (2.7)

so θ⁽ⁱ⁺¹⁾ remains in the neighborhood {θ : ‖θ − θ*‖ ≤ γ²/C}. Consequently h_i → 0 as i → ∞, and by (2.6) the convergence is quadratic. This completes the proof.

3. Numerical Example

In this section we compare EM, MCEM and the accelerated Monte Carlo EM algorithm on a classical data set.

Example 1 (Rao, 1973)  197 animals (Y) are classified into four categories:

    Y = (y₁, y₂, y₃, y₄) = (125, 18, 20, 34),

with multinomial cell probabilities

    ( 1/2 + θ/4,  (1/4)(1−θ),  (1/4)(1−θ),  θ/4 ).

We estimate θ by the accelerated Monte Carlo EM algorithm of Section 2. Augment the data with a latent variable Z by splitting the first cell, giving the complete data

    X = (x₁, x₂, x₃, x₄, x₅) = (y₁ − Z, Z, y₂, y₃, y₄),

with cell probabilities

    ( 1/2,  θ/4,  (1/4)(1−θ),  (1/4)(1−θ),  θ/4 ).
Given θ⁽ⁱ⁾, the conditional distribution of Z is p(z|θ⁽ⁱ⁾,Y) = B(125, θ⁽ⁱ⁾/(2+θ⁽ⁱ⁾)). Under a flat prior, the complete-data log posterior is

    log p(θ|Y,Z) = (Z + x₅) log θ + (x₃ + x₄) log(1−θ) + const.

One iteration of the accelerated MCEM algorithm is then:

MCE1: draw z₁, …, z_m from B(125, θ⁽ⁱ⁾/(2+θ⁽ⁱ⁾));

MCE2: compute z̄ = Σ_{j=1}^m z_j / m and

    Q̃(θ|θ⁽ⁱ⁾,Y) = (z̄ + x₅) log θ + (x₃ + x₄) log(1−θ);

MCM: maximize Q̃(θ|θ⁽ⁱ⁾,Y), which has the closed-form maximizer

    θ_EM⁽ⁱ⁺¹⁾ = (z̄ + x₅) / (z̄ + x₅ + x₃ + x₄);

N-R: update

    θ⁽ⁱ⁺¹⁾ = θ⁽ⁱ⁾ + ( −∂²log p(θ|Y)/∂θ² )⁻¹ [ ∫ ( −∂²log p(θ|Y,Z)/∂θ² ) p(z|y,θ⁽ⁱ⁾) dz ] (θ_EM⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾).

We take m = 10000 and the starting value θ⁽⁰⁾ = 0.2000, and run EM, MCEM and the accelerated MCEM under two stopping rules: |θ⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾| < 10⁻⁴, and the stricter rule |θ⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾| < 10⁻⁵ together with |θ⁽ⁱ⁾ − θ_EM⁽ⁱ⁺¹⁾| < 10⁻⁵. The iterates are reported in Tables 1 and 2.

Table 1  Iterates under the stopping rule |θ⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾| < 10⁻⁴, θ⁽⁰⁾ = 0.2000

      i     EM        MCEM      accelerated MCEM
      1     0.5442    0.5437    0.6394
      2     0.6151    0.6150    0.6268
      3     0.6253    0.6251    0.6270
      4     0.6266    0.6265    0.6267
      5     0.6268    0.6268    0.6267
      6     0.6268    0.6267
      7               0.6270
      8               0.6270
  iterations   6         8         5
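The four steps above can be written as a short self-contained Python sketch for this genetic-linkage example, assuming a flat prior. This is illustrative code, not the paper's original implementation; the random seed and the closed-form information functions are stated assumptions derived from the multinomial model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed counts from Rao (1973)
y1, y2, y3, y4 = 125, 18, 20, 34
m = 10000  # Monte Carlo sample size per iteration

def i_obs(t):
    # -d^2/dt^2 log p(t|Y): observed information for the multinomial
    # with cells (1/2 + t/4, (1-t)/4, (1-t)/4, t/4) under a flat prior
    return y1 / (2 + t) ** 2 + (y2 + y3) / (1 - t) ** 2 + y4 / t ** 2

def i_comp(t, zbar):
    # E_Z[-d^2/dt^2 log p(t|Y,Z) | t, Y], with E[Z] replaced by the
    # Monte Carlo mean zbar
    return (zbar + y4) / t ** 2 + (y2 + y3) / (1 - t) ** 2

theta = 0.2
for i in range(100):
    # MCE1: draw z_1, ..., z_m from B(125, theta/(2+theta))
    zbar = rng.binomial(y1, theta / (2 + theta), size=m).mean()
    # MCM: closed-form maximizer of the Monte Carlo Q-function
    theta_em = (zbar + y4) / (zbar + y4 + y2 + y3)
    # N-R: rescale the MCEM increment by I_comp / I_obs, as in (2.1)
    theta_next = theta + i_comp(theta, zbar) / i_obs(theta) * (theta_em - theta)
    done = abs(theta_next - theta) < 1e-4
    theta = theta_next
    if done:
        break

print(round(theta, 4))  # settles near the posterior mode, about 0.6268
```

The first accelerated step from θ⁽⁰⁾ = 0.2 lands near 0.639, consistent with the first accelerated-MCEM iterate in the tables, because the multiplier I_comp/I_obs ≈ 1.28 stretches the ordinary MCEM increment.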
Table 2  Iterates under the stopping rule |θ⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾| < 10⁻⁵ and |θ⁽ⁱ⁾ − θ_EM⁽ⁱ⁺¹⁾| < 10⁻⁵, θ⁽⁰⁾ = 0.2000

      i     EM        MCEM (i=1..8)  MCEM (i=9..16)  MCEM (i=17..24)  accelerated MCEM
      1     0.5442    0.5444         0.6265          0.6266           0.6393
      2     0.6151    0.6148         0.6268          0.6267           0.6273
      3     0.6253    0.6252         0.6272          0.6268           0.6267
      4     0.6266    0.6266         0.6267          0.6271           0.6268
      5     0.6268    0.6265         0.6266          0.6269           0.6268
      6     0.6268    0.6271         0.6268          0.6267
      7     0.6268    0.6266         0.6268          0.6269
      8               0.6268         0.6268          0.6269
  iterations   7                        24                               5

Tables 1 and 2 show that the accelerated Monte Carlo EM algorithm converges markedly faster than both EM and MCEM, with all three methods settling at the same posterior mode, about 0.6268. Under the rule |θ⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾| < 10⁻⁴, MCEM needs 8 iterations while the accelerated MCEM needs only 5; under the stricter rule |θ⁽ⁱ⁺¹⁾ − θ⁽ⁱ⁾| < 10⁻⁵ and |θ⁽ⁱ⁾ − θ_EM⁽ⁱ⁺¹⁾| < 10⁻⁵, MCEM needs 24 iterations while the accelerated MCEM still needs only 5. The accelerated algorithm thus retains the broad applicability of EM-type methods, handles the E step by Monte Carlo simulation, and enjoys the quadratic convergence of the Newton-Raphson method.

References

[1] Booth, J.G. and Hobert, J.P., Maximizing generalized linear mixed model likelihoods with an automated Monte Carlo EM algorithm, Journal of the Royal Statistical Society, Ser. B, 61(1999), 265–285.
[2] Dempster, A.P., Laird, N.M. and Rubin, D.B., Maximum likelihood from incomplete data via the EM algorithm (with discussion), Journal of the Royal Statistical Society, Ser. B, 39(1977), 1–38.
[3] Geweke, J., Bayesian inference in econometric models using Monte Carlo integration, Econometrica, 57(1989), 1317–1339.
[4] Little, R.J.A. and Rubin, D.B., Statistical Analysis with Missing Data, New York: Wiley, 1987.
[5] Louis, T.A., Finding the observed information matrix when using the EM algorithm, Journal of the Royal Statistical Society, Ser. B, 44(1982), 98–130.
[6] McLachlan, G.J. and Krishnan, T., The EM Algorithm and Extensions, New York: Wiley, 1996.
[7] Tanner, M.A., Tools for Statistical Inference: Methods for the Exploration of Posterior Distributions and Likelihood Functions, New York: Springer-Verlag, 1993.
[8] [Chinese-language monograph; bibliographic details illegible in source], 1998.
[9] [Chinese-language monograph; bibliographic details illegible in source], 1997.
[10] [Chinese-language journal article; bibliographic details illegible in source], 24(2003), 36–41.

Acceleration of Monte Carlo EM Algorithm

Luo Ji 1,2
(1 School of Finance and Statistics, East China Normal University, Shanghai, 200241)
(2 School of Mathematics and Statistics, Zhejiang University of Finance and Economics, Hangzhou, 310018)

The EM algorithm is a data augmentation algorithm commonly used in recent years to estimate the posterior mode from observed data. Its application is limited, however, by the difficulty of obtaining an explicit expression for the integral in the E step. The Monte Carlo EM algorithm solves this problem well: by evaluating the E-step integral through Monte Carlo simulation, it has been applied successfully to a wide range of problems. Yet EM and Monte Carlo EM share a common shortcoming: both converge only at a linear rate. This paper therefore proposes an accelerated Monte Carlo EM algorithm, built on the Monte Carlo EM and Newton-Raphson algorithms, to improve the convergence rate. The accelerated algorithm combines the advantages of both components: it handles the E step by Monte Carlo simulation and achieves a quadratic convergence rate in a neighborhood of the posterior mode. Its superior convergence rate is illustrated by a classical example.

Keywords: data augmentation, Monte Carlo simulation, EM algorithm, Monte Carlo EM algorithm, Newton-Raphson algorithm.

AMS Subject Classification: Primary 65C60; secondary 65C05.