It turns out that this smug attitude is quite unwarranted. In fact, the only situation in which a Bayesian learner is guaranteed to converge toward the right result (called *consistency* in this setting) is the finite case, where the learner observes outcomes drawn from a finite set of possibilities (e.g. the usual examples involving cards and dice). In the more interesting infinite setting, a nasty set of results due to Freedman (published in the Annals of Mathematical Statistics in 1963 and 1965) shows that Bayesian inference is no longer guaranteed to be consistent. It *might* be consistent, depending on the prior used, but now there arises the problem of *bad priors*, which yield inconsistent Bayes estimates. It seems that cognitive scientists would do well to bone up on some theory if they are so fixated on Bayesian learning. Watch your priors carefully!
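To make the finite-case guarantee concrete, here is a minimal sketch (a hypothetical illustration, not from Freedman's papers) of Bayesian updating on a six-sided die. With a uniform Dirichlet prior over the six faces, the posterior mean of the face probabilities converges to the true distribution as observations accumulate — exactly the consistency that holds when the outcome space is finite.

```python
import random

random.seed(0)

# True (unknown to the learner) distribution over a 6-sided die.
true_probs = [0.05, 0.10, 0.15, 0.20, 0.25, 0.25]
faces = range(6)

# Uniform Dirichlet(1, ..., 1) prior: one pseudo-count per face.
counts = [1.0] * 6

# Observe draws and update the posterior counts.
for _ in range(100_000):
    x = random.choices(faces, weights=true_probs)[0]
    counts[x] += 1

# Posterior mean of each face probability under the Dirichlet posterior.
total = sum(counts)
posterior_mean = [c / total for c in counts]

for face, (est, truth) in enumerate(zip(posterior_mean, true_probs)):
    print(f"face {face + 1}: posterior mean {est:.3f}, true {truth:.2f}")
```

After 100,000 observations the posterior mean sits within a fraction of a percent of each true probability. Freedman's point is that no analogous blanket guarantee survives once the space of hypotheses becomes infinite: whether the posterior concentrates on the truth then depends on the prior.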
