Since 1997 or so I have been busy with type-logical grammar, as presented in the books by Morrill (1994) and myself (2004), and in various other sources such as Moortgat's fine chapter in the Handbook of Logic and Language (1997). These systems became quite baroque and complicated under the whole "multimodal" agenda, amply studied by Moot in his PhD thesis (2002).

Competing with the type-logical approach, one finds combinatory categorial grammar, championed by Steedman in a number of sources, such as his books of 1996 and 2001. There is now also the framework of pregroup grammars, introduced by Lambek in the now-defunct journal Grammars in 2001; he had complained that the multimodal type-logical grammars were too complicated.
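For readers who haven't seen pregroup grammars: a pregroup lexicon assigns each word a string of simple types decorated with left and right adjoints, and a sentence is grammatical when the concatenation of its types contracts to the sentence type s, using the laws a^l a ≤ 1 and a a^r ≤ 1. Here is a minimal sketch in Python; the toy lexicon and the integer encoding of adjoints are my own illustrative choices, not from any particular paper, and the exhaustive search is meant only for toy inputs.

```python
# Encode a simple type as (base, k): k = 0 is the plain type,
# k = -1 its left adjoint (a^l), k = +1 its right adjoint (a^r).
# Both contraction laws then take the one form (b, k)(b, k+1) -> empty.

def reduces(types, target):
    """Does this string of simple types contract to exactly [target]?
    Tries every available contraction (contractions are not confluent,
    so a greedy left-to-right strategy would not be safe in general)."""
    if list(types) == [target]:
        return True
    for i in range(len(types) - 1):
        (b1, k1), (b2, k2) = types[i], types[i + 1]
        if b1 == b2 and k2 == k1 + 1:  # adjacent contractible pair
            if reduces(types[:i] + types[i + 2:], target):
                return True
    return False

# Illustrative toy lexicon (n = noun phrase, s = sentence):
LEX = {
    "John":   [("n", 0)],                       # n
    "Mary":   [("n", 0)],                       # n
    "sleeps": [("n", 1), ("s", 0)],             # n^r s
    "sees":   [("n", 1), ("s", 0), ("n", -1)],  # n^r s n^l
}

def parses(sentence):
    """A sentence is grammatical iff its lexical types contract to s."""
    types = []
    for word in sentence.split():
        types.extend(LEX[word])
    return reduces(types, ("s", 0))
```

For "John sees Mary" the concatenated type is n (n^r s n^l) n; the two contractions n n^r ≤ 1 and n^l n ≤ 1 leave just s, so `parses("John sees Mary")` holds, while `parses("sleeps John")` fails because no contraction applies.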

Now the question is, which of these frameworks is the most lively? Certainly pregroup grammars have that new-car smell in their first decade, and are being studied. But I am greatly encouraged by type-logical projects such as Jäger's book on anaphora in the Springer series Trends in Logic. Linguist friends who know of my type-logical affections have often asked me about linguistic problems like the handling of anaphora and ellipsis. I have always demurred, sweeping such matters under the rug: partly out of a desire to keep working on what I wanted, but also out of confidence that these things could be taken care of. I have reason to believe that type-logical grammars remain a very good framework worthy of continuing study. I think their mathematical connections are quite rich, and my mind is full of things that will wait for future posts.

## Tuesday, February 16, 2010


Pregroup grammars need to grow up a bit more: in a short but elegant paper, Kobele and Kracht (2005) have shown the formalism to be Turing-equivalent. Those with long memories will remember the cathartic effect a similar theorem (Peters and Ritchie 1971) had on the development of transformational grammar. Perhaps because the work remains unpublished (languishing on Kracht's abandoned UCLA homepage), there has been little reaction from the pregroup community, though at least the work coming from the Technion seems informed by it.

It's good to learn of this result by Greg (Kobele) and Marcus (Kracht); I was unaware of it. Turing-equivalence of grammar formalisms is not unexpected; one generally finds it out (as Carpenter proved for the multimodal type-logical systems in the festschrift for J. van Benthem, 1999), and this leads to interesting studies of how to restrict the system and get something tighter, and perhaps learnable. Moot's thesis gave a restriction of multimodal type-logical grammars which limits them to context-sensitive languages, and I have a result concerning a further restriction of those which shows them to be finitely learnable from good examples (published online now in JoLLI). What is more, I have also shown (I think) that the more restricted systems still retain context-sensitive generative capacity. I have a draft of a paper on this that I might post soon.
