In my previous post I highlighted a new topic in TopiCS. Here I will offer a few remarks on one of the papers, "Cognitive Biases, Linguistic Universals, and Constraint-Based Grammar Learning" by Culbertson, Smolensky, and Wilson.
The broad goals of this paper are (i) to exemplify the argument that human language learning is facilitated by learning biases, and (ii) to model some specific biases probabilistically in a Bayesian fashion. Let me say first that I am very sympathetic to both of these general ideas. But I think this project is applicable only to a very narrow version of optimality-theoretic syntax, one that is in no way fleshed out enough to generate an entire language.
So, without going into too many details, I think the paper's result applies only to a tiny corner of a grammar, in particular the part that derives "nominal word order" within noun phrases involving a numeral + noun on the one hand and an adjective + noun on the other. I'm not sure how to react to a result that derives only a couple of expression types from a whole language. I agree there might be Bayesian probability at work in grammar learning, but a project like this really needs to be worked out on a grammatical system capable, at least in theory, of deriving an entire language. I don't know whether that capability has been shown for the kind of optimality-theoretic syntax under discussion here. I do know there are about 50 frameworks waiting in the wings that are known to be able to generate real languages, at least in theory if not in practice. Maybe we should try applying some of the ideas from this paper to fully fledged grammatical frameworks, instead of a half-baked one (sorry if that is a mixed metaphor!).
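To make the general "bias as prior" idea concrete, here is a toy sketch, emphatically not the paper's actual model: a Bayesian learner choosing among the four nominal word-order grammars (Num-N vs. N-Num crossed with Adj-N vs. N-Adj), where the prior penalizes one disfavored mixed pattern. All of the numbers, the noise parameter, and the example data below are invented purely for illustration.

```python
# Toy Bayesian learner over four nominal word-order grammars.
# Prior values, noise level, and data are hypothetical illustrations only.

from itertools import product

GRAMMARS = list(product(["Num-N", "N-Num"], ["Adj-N", "N-Adj"]))

# Invented prior: a bias against one "disfavored" mixed pattern.
def prior(grammar):
    return 0.1 if grammar == ("Num-N", "N-Adj") else 0.3

# Each grammar mostly produces its own orders, with a small noise probability
# of producing the opposite order.
NOISE = 0.1

def likelihood(phrase, grammar):
    num_order, adj_order = grammar
    p_num = 1 - NOISE if phrase["num"] == num_order else NOISE
    p_adj = 1 - NOISE if phrase["adj"] == adj_order else NOISE
    return p_num * p_adj

def data_likelihood(data, grammar):
    p = 1.0
    for phrase in data:
        p *= likelihood(phrase, grammar)
    return p

def posterior(data):
    # Bayes' rule: posterior ∝ prior × likelihood, normalized over grammars.
    unnorm = {g: prior(g) * data_likelihood(data, g) for g in GRAMMARS}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

# Toy input: a few observed noun phrases, coded by their numeral and adjective orders.
data = [
    {"num": "Num-N", "adj": "N-Adj"},
    {"num": "Num-N", "adj": "N-Adj"},
    {"num": "Num-N", "adj": "Adj-N"},
]

for g, p in sorted(posterior(data).items(), key=lambda kv: -kv[1]):
    print(g, round(p, 3))
```

The point of the sketch is only that a prior over grammars can pull the posterior away from the pattern the data would otherwise favor; whether anything like this scales to a grammar that derives a whole language is exactly the question I am raising.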
Friday, August 30, 2013