Approximate inference by variational free energy minimization (also known as variational Bayes, or ensemble learning) has maximum likelihood and maximum {\em a posteriori\/} methods as special cases, so we might hope that it can only work better than these standard methods. However, cases have been found in which degrees of freedom are `pruned', perhaps inappropriately. This paper investigates this phenomenon in a toy example.
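The special-case relationship can be sketched with the usual variational free energy (the notation below is a standard convention assumed for illustration, not taken from the abstract):
\[
F(q) \;=\; \int q(\theta) \ln \frac{q(\theta)}{P(D,\theta)} \, d\theta
\;=\; -\ln P(D) \;+\; \mathrm{KL}\!\left( q(\theta) \,\|\, P(\theta \mid D) \right) .
\]
If $q$ is constrained to a delta function $q(\theta) = \delta(\theta - \hat\theta)$ (discarding the divergent entropy constant), minimizing $F$ reduces to maximizing $\ln P(D,\hat\theta) = \ln P(D \mid \hat\theta) + \ln P(\hat\theta)$, which is the maximum {\em a posteriori\/} estimate; with a flat prior it reduces further to maximum likelihood.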