Chapter 5 of "Mathematical Linguistics" discusses a remarkable variety of approaches to syntax; surely such a range has never before been treated in a mathematically sound way in a single piece. Kornai begins with relatively familiar theories like phrase structure rules and categorial grammars, then moves on to dependency grammars and case-based approaches, which are increasingly unfamiliar to me. From there he flies off to places unknown, in a fantastically interesting discussion of weighted and probabilistic grammars. The treatment of the "density" of a language is especially new to me.
There are a few trouble spots, mostly poorly introduced notions that later discussion relies on. For example, the treatment of weighted grammars hinges on the "Dyck language," which is not a creature I was aware of, and I could not find where it had been introduced earlier in the book. This turns an already dense treatment into something overly cryptic.
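For readers in the same position I was: the Dyck language turns out to be simply the language of balanced brackets, generated by the context-free grammar S → ε | ( S ) S. Here is a minimal sketch of a recognizer, my own toy illustration rather than anything from the book:

    def is_dyck(s: str) -> bool:
        """Check whether s belongs to the Dyck language over one
        bracket pair, i.e. whether s is a balanced string of '(' and ')'."""
        depth = 0
        for ch in s:
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
                if depth < 0:       # a closer with no matching opener
                    return False
            else:
                return False        # only brackets are allowed
        return depth == 0           # every opener must have been closed

    # is_dyck("(()())")  -> True
    # is_dyck("(()")     -> False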
The final discussion concerns "external evidence" in syntax, which refers to properties of human languages that are more or less evident from nonlinguistic facts. For instance, it is pretty plain that natural languages are, in some sense, learnable. What is missing is, as Kornai points out, "a realistic theory of grammar induction." This is a topic dear to me, one I have devoted some years of work to. I invite readers to see my recent paper "Grammar induction by unification of type-logical lexicons," published online in the Journal of Logic, Language, and Information.

This brings me to another slight criticism of Kornai, where he (probably without meaning to) appears to propagate a common misunderstanding about the import of Gold's inductive inference results. Kornai states that "context-free grammars cannot be learned this way" [by exposure to positive examples]. But Gold showed this only for a hypothesis class that contains every finite language together with at least one infinite language; results can be much more positive if, for instance, the finite languages are excluded from the hypothesis space. In my paper I show how a restricted class including some context-sensitive grammars can be learned from a finite sample set, which is an even stronger learning criterion than Gold's identifiability in the limit.
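To make the point concrete, here is a toy illustration of my own (not from Kornai's book, and much simpler than anything in my paper): take the class of languages L_k = { a^n : n ≥ k } for k ≥ 1. The class contains no finite languages, and a learner that simply conjectures the smallest exponent it has seen identifies every L_k in the limit from positive examples alone.

    def learner(positive_examples):
        """Toy Gold-style learner for the class L_k = { 'a'*n : n >= k }.

        After each positive example the learner conjectures L_k, where k is
        the length of the shortest example seen so far.  Once the shortest
        string of the target language has appeared, the conjecture is correct
        and never changes again: identification in the limit from positive
        data, for a class containing no finite languages."""
        k = None
        for s in positive_examples:
            n = len(s)                        # examples are strings of 'a's
            k = n if k is None else min(k, n)
            yield k                           # current conjecture: L_k

    # A presentation of L_3 = { aaa, aaaa, aaaaa, ... }:
    # list(learner(["aaaaa", "aaa", "aaaa"]))  ->  [5, 3, 3]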
On the whole, I was fascinated by this chapter. It contains a wealth of material that has never before been treated well formally, and simply to have so many approaches collected in one place for discussion is an immense contribution to our literature.
Tuesday, March 16, 2010
Comment: This should have been in the book, and I apologize. Until the next edition, http://en.wikipedia.org/wiki/Dyck_language