Monday, February 14, 2011

Niyogi's analysis of Principles and Parameters learning

In this post I will summarize Chapter 4 of Niyogi (1998), The Informational Complexity of Learning. In it, Niyogi emphasizes the importance of going beyond theoretical learnability when analyzing a grammar-learning paradigm: "One also needs to quantify the sample complexity of the learning problem, i.e., how many examples does the learning algorithm need to see in order to be able to identify the target grammar with high confidence."

He sets his sights on the Triggering Learning Algorithm (TLA), put forth by Gibson and Wexler as a learning scheme for the grammatical "parameters" of the Chomskyan Principles and Parameters framework. For those unfamiliar with the background: this is a theory of language that posits a Universal Grammar underlying all natural languages (the principles), plus a finite set of variable parameters that account for the differences among languages. On this account, parameter setting is essentially the sole learning task facing the developing child.
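
Since the TLA is easy to state, a small sketch may help. As Gibson and Wexler define it, the learner is memoryless and error-driven: when its current parameter setting fails to analyze an input sentence, it flips a single randomly chosen parameter and keeps the flip only if the new setting analyzes that sentence (the "greediness" and "single-value" constraints). Here is a minimal sketch in Python of that update rule, alongside the unconstrained random walk that will matter below. The three-parameter grammar space and the notion of analyzability are invented for illustration; this is not Gibson and Wexler's actual parameter space, and the toy is far too simple to reproduce Niyogi's comparative results.

```python
import random

N_PARAMS = 3
TARGET = (1, 0, 1)  # hypothetical target grammar generating the input

def sample_sentence():
    # Each toy "sentence" unambiguously exhibits one parameter value
    # of the target grammar; real input is far more ambiguous.
    i = random.randrange(N_PARAMS)
    return (i, TARGET[i])

def analyzable(grammar, sentence):
    i, value = sentence
    return grammar[i] == value

def flip(grammar, i):
    return grammar[:i] + (1 - grammar[i],) + grammar[i + 1:]

def tla_step(grammar, sentence):
    # TLA: on failure, flip ONE randomly chosen parameter (single-value
    # constraint) and keep the flip only if the new grammar analyzes
    # the sentence (greediness).
    if analyzable(grammar, sentence):
        return grammar
    candidate = flip(grammar, random.randrange(N_PARAMS))
    return candidate if analyzable(candidate, sentence) else grammar

def random_walk_step(grammar, sentence):
    # Random walk: on failure, flip one randomly chosen parameter
    # unconditionally, with no greediness check.
    if analyzable(grammar, sentence):
        return grammar
    return flip(grammar, random.randrange(N_PARAMS))

def examples_to_converge(step, start=(0, 1, 0), limit=100_000):
    # Count toy examples consumed before the learner reaches the target.
    grammar, n = start, 0
    while grammar != TARGET and n < limit:
        grammar = step(grammar, sample_sentence())
        n += 1
    return n
```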

I think this theory was originally put forth in an effort to address the supposed "poverty of the stimulus," with the hope that the resulting learning problem would be tractable, even easy. Niyogi, however, manages to demonstrate that Gibson and Wexler's assumption of the existence of "local triggers," i.e., a path through the parameter space from the initial hypothesis to the target, is not even sufficient to guarantee learnability (though Gibson and Wexler believed it was), much less tractability. He further demonstrates the surprising result that, for all its carefully thought-out design, the Triggering Learning Algorithm converges more slowly, in expectation, than a blind random walk on the parameter space!
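
It is worth noting how such convergence claims become provable at all. Niyogi models a memoryless learner like the TLA as a Markov chain over the finite set of grammars, with the target grammar as an absorbing state; "how many examples" then reduces to a textbook expected-absorption-time computation, and non-learnability shows up as states or closed sets from which the target is unreachable. Here is a minimal sketch of that computation; the three-state transition matrix is invented for illustration and is not the actual chain of the TLA on Gibson and Wexler's space.

```python
import numpy as np

# States 0 and 1 stand for non-target grammars; state 2 is the target,
# which is absorbing (once reached, the learner never leaves it).
# These transition probabilities are made up for illustration only.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
])

# For the transient states, the vector t of expected examples before
# absorption solves (I - Q) t = 1, where Q is the transient-to-transient
# block of P.
Q = P[:2, :2]
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(t)  # expected examples to convergence from each non-target start
```

In this picture, a Gibson-and-Wexler local maximum is a non-target state (or closed set of states) from which no path leads to the target, which is what turns the insufficiency of local triggers into a precise, checkable claim.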

At the time of his writing, Niyogi judged that the Triggering Learning Algorithm was the preferred explanation of language learning in psycholinguistics. His results should really have killed it, but as far as I can see they have had no such effect. In fact, Google Scholar finds only 12 citations of his entire book, most of them due to the author himself. This is hardly a flurry of activity; only one other author writing on problems of natural language learning appears among the citations.

5 comments:

  1. The poor reception of these results does not surprise me at all. I am a linguist trained in a more or less mainstream version of generative linguistics, and I know many linguists who do research in language acquisition. As far as I can see, formal learnability theory is mostly ignored. A serious problem, I think, is that even if more linguists took an interest in this literature, they would never be able to read it, because they lack the mathematical background.

    To most linguists I know (me included), even basic statistics is hard enough, let alone advanced mathematics. Some math courses in linguistics grad schools would be valuable, but even then the main problem would remain: most leading linguists do not have the background to engage with some topics, nor the will to acquire it.

    So when I think of Niyogi's case, the question is not so much "How could it happen?" as "How could it not happen more often?"

  2. Thanks for your comment; it is important and could probably be echoed by many linguists. Niyogi shared his opinion of Linguistics with me once upon a time. He said that the nonmathematical character of the field and of its research training programs (i.e., grad programs) demonstrated that the field was young, and that it would one day become "mathematized". He noted that this has characterized almost every mathematical science, even physics, which, as people who are not historians of science often fail to realize, was once as "hand-waving" as linguistics, or even worse. He pointed to Economics as the most recent example of a nonmathematical field becoming mathematized. It is a personal dream of mine to live to witness some further mathematization of linguistics, and to somehow contribute to that. Hence, this blog... but I can't do it by myself, much as I try.

  3. Sean, I apologize for posting this note that is not related to the topic, but I have a burning question for an article I am writing, and I wanted to know your thoughts about it as someone interested in Math-ling. I am sure you are aware of Watson, the IBM computer that won at Jeopardy! against humans a couple of weeks ago. My question is: does its win mean that P=NP for linguistic problems? In other words, did Watson prove that computer algorithms have, or will imminently have, what it takes to work through all the calculations needed to think and speak in English? I would love to hear your thoughts on the subject. You can email me directly or respond here.

  4. Well, Watson is very impressive. To me, it demonstrates, as many have said before, that as computing power increases we can move ever closer to true AI. It doesn't show that P=NP at all; the algorithms used by Watson are no doubt quite intractable, but it has the raw power to help compensate for that. If that kind of display of AI were tractable, your laptop would be doing it right now. I do believe we already have at least one theoretical approach that could, if implemented, permit a computer to learn a natural language. But the techniques involved are highly intractable, so they will require huge computational power. In the future, I think we will see such things implemented more and more, and a computer will one day "acquire" a language properly.

  5. To update my post: Google Scholar now finds 53 citations of Niyogi's book, which is better than what I found a while ago. But it is still true that the citations are all in the highly technical learning-theory literature.
