Bishop C.M., Pattern Recognition and Machine Learning (2006)
1. Initialize the means $\boldsymbol{\mu}_k$, covariances $\boldsymbol{\Sigma}_k$ and mixing coefficients $\pi_k$, and evaluate the initial value of the log likelihood.

2. E step. Evaluate the responsibilities using the current parameter values
$$\gamma(z_{nk}) = \frac{\pi_k \,\mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{j=1}^{K} \pi_j \,\mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)}. \tag{9.23}$$

3. M step. Re-estimate the parameters using the current responsibilities
$$\boldsymbol{\mu}_k^{\mathrm{new}} = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk})\,\mathbf{x}_n \tag{9.24}$$
$$\boldsymbol{\Sigma}_k^{\mathrm{new}} = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk})\,(\mathbf{x}_n - \boldsymbol{\mu}_k^{\mathrm{new}})(\mathbf{x}_n - \boldsymbol{\mu}_k^{\mathrm{new}})^{\mathrm{T}} \tag{9.25}$$
$$\pi_k^{\mathrm{new}} = \frac{N_k}{N} \tag{9.26}$$
where
$$N_k = \sum_{n=1}^{N} \gamma(z_{nk}). \tag{9.27}$$

4. Evaluate the log likelihood
$$\ln p(\mathbf{X} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi}) = \sum_{n=1}^{N} \ln \left\{ \sum_{k=1}^{K} \pi_k \,\mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\} \tag{9.28}$$
and check for convergence of either the parameters or the log likelihood. If the convergence criterion is not satisfied return to step 2.
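As a concrete illustration of steps 1–4, here is a minimal NumPy sketch keyed to Eqs. (9.23)–(9.28). It is not from the book: the function names `em_gmm` and `log_gaussian`, the initialization scheme, and the small ridge added to each covariance are implementation choices of this sketch. The responsibilities are computed in log space, which avoids underflow when the Gaussian densities are very small.

```python
import numpy as np

def log_gaussian(X, mu, Sigma):
    """log N(x_n | mu, Sigma) for each row x_n of X, via a Cholesky factor."""
    d = X.shape[1]
    L = np.linalg.cholesky(Sigma)
    z = np.linalg.solve(L, (X - mu).T)                  # L^{-1}(x_n - mu)
    maha = np.sum(z ** 2, axis=0)                       # Mahalanobis distances
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (d * np.log(2.0 * np.pi) + log_det + maha)

def em_gmm(X, K, max_iter=200, tol=1e-6, seed=0):
    N, d = X.shape
    rng = np.random.default_rng(seed)
    # Step 1: initialize means, covariances, and mixing coefficients.
    mu = X[rng.choice(N, size=K, replace=False)].copy()
    Sigma = np.stack([np.cov(X, rowvar=False) + 1e-6 * np.eye(d)] * K)
    pi = np.full(K, 1.0 / K)
    prev_ll = -np.inf
    for _ in range(max_iter):
        # Step 2 (E step): responsibilities gamma(z_nk), Eq. (9.23),
        # computed in log space for numerical stability.
        log_wk = np.stack([np.log(pi[k]) + log_gaussian(X, mu[k], Sigma[k])
                           for k in range(K)], axis=1)   # shape (N, K)
        log_norm = np.logaddexp.reduce(log_wk, axis=1)   # ln sum_j pi_j N(.)
        gamma = np.exp(log_wk - log_norm[:, None])
        # The E-step normalizer already gives the log likelihood,
        # Eq. (9.28), for the current (pre-update) parameters.
        ll = log_norm.sum()
        # Step 3 (M step): Eqs. (9.24)-(9.27).
        Nk = gamma.sum(axis=0)                           # Eq. (9.27)
        mu = (gamma.T @ X) / Nk[:, None]                 # Eq. (9.24)
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (gamma[:, k] * diff.T) @ diff / Nk[k]  # Eq. (9.25)
            Sigma[k] += 1e-6 * np.eye(d)                 # keep positive definite
        pi = Nk / N                                      # Eq. (9.26)
        # Step 4: check convergence of the log likelihood.
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return mu, Sigma, pi, ll

# Example on synthetic data drawn from two well-separated Gaussians.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 1.0, size=(200, 2)),
               rng.normal([4, 4], 0.5, size=(200, 2))])
mu, Sigma, pi, ll = em_gmm(X, K=2)
print("mixing coefficients:", pi)
print("means:\n", mu)
```

On this data the recovered mixing coefficients should each be close to 0.5 and the means close to $(0,0)$ and $(4,4)$, although, as with any EM run, the result depends on the initialization.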
9.3 An Alternative View of EM

In this section, we present a complementary view of the EM algorithm that recognizes the key role played by latent variables. We discuss this approach first of all in an abstract setting, and then for illustration we consider once again the case of Gaussian mixtures. The goal of the EM algorithm is to find maximum likelihood solutions for models having latent variables.
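To make this goal concrete, the following display (a standard statement written here for orientation, not a verbatim reproduction of the book's subsequent equations) expresses the log likelihood of such a model as a marginalization over discrete latent variables $\mathbf{Z}$, with parameters collected in $\boldsymbol{\theta}$:
$$\ln p(\mathbf{X} \mid \boldsymbol{\theta}) = \ln \left\{ \sum_{\mathbf{Z}} p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol{\theta}) \right\}$$
The summation over $\mathbf{Z}$ sits inside the logarithm, which prevents the log likelihood from decomposing into a simple sum of per-parameter terms; the alternating E and M steps above can be read as a strategy for circumventing exactly this difficulty.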