Once upon an age, linguists interested in learning theory were obsessed with Gold's theorem, and its implications were widely discussed and widely misunderstood. But the Gold framework, with all its knotty "unlearnable" classes, is not naturally probabilistic. Much more recently, a large faction within cognitive science has become very attached to various styles of "Bayesian" learning. Hardly anyone discusses the learnability issues that Bayesian learning raises; it is simply assumed to work to whatever extent the type and amount of data fed to it permit.
It turns out that this smug attitude is quite unwarranted. In fact, the only setting in which a Bayesian learner is guaranteed to converge toward the right result (a property called consistency in this literature) is the finite case, in which the learner observes draws from a finite set of possible outcomes (e.g. the usual textbook cases involving cards and dice). In the more interesting infinite setting, a nasty set of results due to Freedman (published in the Annals of Mathematical Statistics, 1963 and 1965) shows that Bayesian inference is no longer guaranteed to be consistent. It might be consistent, depending on the prior used, but now there arises the problem of bad priors that yield inconsistent Bayes estimates. It seems that cognitive scientists would do well to bone up on some theory, if they are so fixated on Bayesian learning. Watch your priors carefully!
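To see what consistency looks like in the benign finite case, here is a minimal sketch (the hypothesis space, bias values, and sample size are all hypothetical choices for illustration): with only three candidate coin biases and a prior that gives the true one positive mass, repeated Bayes updates pile essentially all posterior probability onto the true hypothesis.

```python
import random

# Finite hypothesis space: three candidate values of P(heads).
# These particular numbers are made up for the demonstration.
hypotheses = [0.2, 0.5, 0.8]
prior = [1 / 3, 1 / 3, 1 / 3]   # uniform prior over the three hypotheses
true_p = 0.8                    # the coin's actual bias (in the support of the prior)

random.seed(0)
posterior = prior[:]
for _ in range(500):
    heads = random.random() < true_p
    # Bayes update: weight each hypothesis by the likelihood of this flip,
    # then renormalize so the posterior sums to 1.
    likelihoods = [p if heads else (1 - p) for p in hypotheses]
    unnorm = [w * l for w, l in zip(posterior, likelihoods)]
    total = sum(unnorm)
    posterior = [u / total for u in unnorm]

print(posterior)  # nearly all mass ends up on the true bias, 0.8
```

This is exactly the situation where no surprises are possible: the parameter space is finite and the prior puts mass on the truth. Freedman's examples live in the infinite (countably infinite discrete) setting, where a prior can put positive mass on the truth and the posterior can still converge to the wrong answer.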