Friday, August 30, 2013

Learning biases in constraint-based grammar learning

In my previous post I highlighted a new topic in TopiCS. Here I will offer a few remarks on one of the papers, "Cognitive Biases, Linguistic Universals, and Constraint-Based Grammar Learning" by Culbertson, Smolensky, and Wilson.

The broad goals of this paper are (i) to exemplify the argument that human language learning is facilitated by learning biases, and (ii) to model some specific biases probabilistically, in a Bayesian fashion.  Let me say first that I am very sympathetic to both of these general ideas.  But I think this project is narrowly applicable only to the framework of optimality-theoretic syntax, and that framework is in no way fleshed out enough to generate an entire language.

So, without going into too many details, I think the paper's result applies only to a tiny corner of a grammar: the part that derives "nominal word order" within noun phrases, involving a numeral + noun on the one hand and an adjective + noun on the other.  I'm not sure how to react to a result that derives only a couple of expression types out of a whole language.  I agree that Bayesian probability might be at work in grammar learning, but a project like this really needs to be worked out on a grammatical system capable, at least in theory, of deriving an entire language.  I don't know whether that capability has been shown for the kind of optimality-theoretic syntax under discussion here.  I do know there are about 50 frameworks waiting in the wings that are known to be able to generate real languages, at least in theory if not in practice.  Maybe we should try applying some of the ideas from this paper to fully fledged grammatical frameworks instead of a half-baked one (sorry if that is a mixed metaphor!).
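
To make the Bayesian idea a bit more concrete, here is a toy sketch in Python of what a biased learner over this tiny word-order domain might look like.  To be clear, this is my own minimal illustration, not the authors' model: the grammar space (a grid of probabilities for placing the numeral or the adjective before the noun), the prior favoring "harmonic" grammars that put both modifiers on the same side of the noun, and the data counts are all invented for exposition.

```python
# Toy Bayesian learner over noun-phrase word-order grammars.
# Illustrative only; not the Culbertson/Smolensky/Wilson model.
from itertools import product

# A "grammar" here is a pair (p_num, p_adj): the probability of
# placing the numeral, respectively the adjective, before the noun.
grid = [0.1, 0.5, 0.9]
grammars = list(product(grid, grid))

def prior(p_num, p_adj):
    # Hypothetical bias: favor harmonic grammars, in which numeral
    # and adjective tend to sit on the same side of the noun.
    return 2.0 if p_num == p_adj else 1.0

def likelihood(p_num, p_adj, data):
    # data = (num prenominal, num postnominal,
    #         adj prenominal, adj postnominal) counts.
    n1, n2, a1, a2 = data
    return (p_num ** n1) * ((1 - p_num) ** n2) * \
           (p_adj ** a1) * ((1 - p_adj) ** a2)

# Mixed input: numerals mostly before the noun, adjectives mostly after.
data = (7, 3, 3, 7)

# Posterior over grammars by Bayes' rule; normalize and print.
posterior = {g: prior(*g) * likelihood(*g, data) for g in grammars}
z = sum(posterior.values())
for g, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(g, round(p / z, 3))
```

Even on mixed input like this, the invented prior tilts the posterior toward the harmonic grammars; weighing the strength of such a bias against the evidence in the input is, as I understand it, the kind of trade-off the paper's Bayesian machinery is meant to capture.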

Sunday, August 18, 2013

Cognitive Modeling and Computational Linguistics

It has been some years since I quit my membership in the Association for Computational Linguistics.  I quit because, in the simplest terms, I wasn't getting much out of the membership, and I was not encouraged by the direction of the field of Comp Ling.  My impression from about 2000 through 2008 was that Comp Ling was becoming more and more engineering-oriented, and more and more hostile to any other purpose for computational modeling.  I have a few anecdotes I could tell about my own papers on that score; one appeared in JoLLI (the Journal of Logic, Language and Information) after referees for Computational Linguistics suggested it be sent there, since it had no "practical application."  (Being naive at the time, I did not realize that every paper in Computational Linguistics had to have a practical application.)

A new topic that appeared in the July issue of Topics in Cognitive Science gives some hope for a different future.  Here one finds 11 papers under the heading "Computational Models of Natural Language," edited by John Hale and David Reitter.  The overarching theme is basically computational psycholinguistics relaunched.  The papers include many that I would like to comment on here in later posts.  They were presented at the first workshop on Cognitive Modeling and Computational Linguistics, held at the ACL meeting in 2010.  The workshop has since been reprised in succeeding years, so it seems this is not a one-time aberration.  The notion of using computational linguistics to investigate linguistic theory had been purged from the ACL (especially the North American chapter) before I finally quit.  I'm glad to see this research avenue explored under the auspices of the Association once again.