Part of Advances in Neural Information Processing Systems 8 (NIPS 1995)
Zoubin Ghahramani, Michael Jordan
We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
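The abstract does not spell out the mean field updates, so the following is only a rough illustration of the kind of approximation it describes: a minimal NumPy sketch of a fully factorized mean-field E-step for a factorial HMM with M binary state chains and Gaussian outputs whose mean is a sum of per-chain contributions. The binary-chain simplification and all names (`W`, `C_inv`, `A`, `pi`, `theta`) are assumptions made for this sketch, not the paper's model or notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_e_step(y, W, C_inv, A, pi, n_iter=20):
    """Fixed-point mean-field updates for theta[t, m] = q(s_t^m = 1).

    Illustrative sketch, not the paper's exact algorithm.
    y     : (T, D) observations, assumed y_t ~ N(sum_m W[m] * s_t^m, C)
    W     : (M, D) per-chain contribution to the output mean
    C_inv : (D, D) inverse output covariance
    A     : (M, 2, 2) transitions, A[m, i, j] = P(s_t^m = i | s_{t-1}^m = j)
    pi    : (M,) initial probabilities P(s_1^m = 1)
    """
    T, D = y.shape
    M = W.shape[0]
    logA = np.log(A)
    theta = np.full((T, M), 0.5)  # initialize the variational means
    for _ in range(n_iter):
        for m in range(M):
            # Expected output mean with chain m's own contribution removed.
            resid = y - theta @ W + np.outer(theta[:, m], W[m])
            # Log-odds contribution of the Gaussian likelihood term.
            h = resid @ (C_inv @ W[m]) - 0.5 * W[m] @ C_inv @ W[m]
            # Forward transition term: expectation over q(s_{t-1}^m).
            fwd = np.zeros(T)
            fwd[0] = np.log(pi[m]) - np.log1p(-pi[m])
            prev = theta[:-1, m]
            fwd[1:] = (prev * (logA[m, 1, 1] - logA[m, 0, 1])
                       + (1 - prev) * (logA[m, 1, 0] - logA[m, 0, 0]))
            # Backward transition term: expectation over q(s_{t+1}^m).
            bwd = np.zeros(T)
            nxt = theta[1:, m]
            bwd[:-1] = (nxt * (logA[m, 1, 1] - logA[m, 1, 0])
                        + (1 - nxt) * (logA[m, 0, 1] - logA[m, 0, 0]))
            theta[:, m] = sigmoid(h + fwd + bwd)
    return theta

# Toy usage with arbitrary (assumed) parameters.
rng = np.random.default_rng(0)
T, M, D = 50, 3, 4
y = rng.normal(size=(T, D))
W = rng.normal(size=(M, D))
C_inv = np.eye(D)
A = np.empty((M, 2, 2))
A[:, 0, 0] = A[:, 1, 1] = 0.9   # self-transitions
A[:, 1, 0] = A[:, 0, 1] = 0.1   # switches
pi = np.full(M, 0.5)
theta = mean_field_e_step(y, W, C_inv, A, pi)
```

The point of the factorized posterior is visible in the cost: each sweep touches T, M, and the output dimension, rather than the K^M joint state space that makes the exact E-step intractable.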