Sunday, April 25, 2010
Here's a question that I've long thought to be a puzzle in the history of logic. Perhaps I've been wrong about this, but there doesn't seem to be any reason for Gerhard Gentzen's sequent calculus to make use of sequences of formulae.
Let me try to clarify what I mean here. A sequent, for Gentzen, is a special inference expression connecting a (possibly empty) sequence of antecedent formulae with a (possibly empty) sequence of consequent formulae. The system for intuitionistic logic allows at most one consequent formula, while the classical system allows an arbitrary sequence in the consequent. The sequent calculus is then a way of defining classical or intuitionistic logic as a scheme of inferences among sequent expressions rather than among the formulae within them. Baez and Stay (2009) explain nicely that Gentzen used sequents so he could develop a logical calculus that has the subformula property, which has in turn led to many useful developments. But that only requires sequent expressions connecting sets of formulae, not sequences of formulae.
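In modern notation (Gentzen himself wrote an arrow where current presentations put the turnstile), a sequent looks like this:

\[
A_1, \ldots, A_m \;\vdash\; B_1, \ldots, B_n
\]

and is read: the conjunction of the antecedent formulae \(A_1, \ldots, A_m\) entails the disjunction of the consequent formulae \(B_1, \ldots, B_n\). In the intuitionistic system we require \(n \leq 1\).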
So the question remains: why did Gentzen insist that the antecedent and consequent of a sequent be sequences of formulae? He immediately had to compensate for this by invoking the structural rules, special rules that eliminate the sensitivity of his systems to the sequence structure. He did this, of course, because classical and intuitionistic logic are not sensitive to the order or multiplicity of the formulae occurring in a sequence. In fact, later authors relying on Gentzen to present classical or intuitionistic logic have frequently just gone ahead and redefined sequents as special expressions connecting sets of formulae, thus doing away with the structural rules. More recently, logicians have experimented extensively with the logics that result from dropping one or more of the structural rules: these are the substructural logics, which are used in type-logical grammar, among other applications. But Gentzen himself never suggested doing away with any structural rules, so far as I know. So why did he insist that formulae come in sequences, only to turn around and insert provisions that eliminate this sensitivity? We certainly have to thank him for this odd bit of foresight, given how much fruitful research has been done on substructural logics in recent times.
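For concreteness, here are the three structural rules in their antecedent ("left") versions, again in modern notation rather than Gentzen's original symbolism; each has a mirror image on the consequent side in the classical system:

\[
\frac{\Gamma,\, A,\, B,\, \Delta \vdash \Theta}{\Gamma,\, B,\, A,\, \Delta \vdash \Theta}\ (\text{exchange})
\qquad
\frac{\Gamma \vdash \Theta}{A,\, \Gamma \vdash \Theta}\ (\text{weakening})
\qquad
\frac{A,\, A,\, \Gamma \vdash \Theta}{A,\, \Gamma \vdash \Theta}\ (\text{contraction})
\]

Dropping exchange gives the order-sensitive Lambek calculus used in type-logical grammar; dropping weakening gives relevance logics; dropping both weakening and contraction gives linear logic.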
Friday, April 23, 2010
Against possible worlds
I admit it: I never liked possible worlds semantics. Montague grammar and all of its common relatives use possible worlds as part of the intensional semantics for natural language. The low value of possible worlds as a methodology was made very clear to me when I read Thomas Forster's essay "The Modal Aether," published in Reinhard Kahle's volume Intensionality (2005) in the Lecture Notes in Logic series. So I would like to see more linguistic semantics developed without this baroque and unfortunate construct. Yet I also feel intensional logic is worth having in semantic theory, so I don't want to give up on it.
Doing away with possible worlds within intensional logic has been tried in at least two ways. Church's intensional type theory (published in Noûs in 1973, 1974, and 1993) accomplishes the goal by assigning a special type to the intension of an entity, and another to the intension of a proposition. The Montague approach would instead assign as intension a function from possible worlds (or world-time indices). Church's approach further provides an infinite chain of intensions of intensions, represented as an infinite chain of new types. So, while Montague's scheme relies on an infinite function space, Church's scheme requires an infinite number of "primitive" types, which is a little uncomfortable.
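A rough sketch of the contrast in type-theoretic notation (a rendering of my own for illustration, not either author's exact symbolism): where Montague builds intensions from a type s of world(-time) indices, Church instead posits a fresh primitive type at each level:

\[
\text{Montague:}\quad \operatorname{int}(a) \;=\; s \to a
\qquad\qquad
\text{Church:}\quad \iota_0,\ \iota_1,\ \iota_2,\ \ldots \qquad o_0,\ o_1,\ o_2,\ \ldots
\]

Here \(\iota_{n+1}\) is the type of senses of things of type \(\iota_n\), and \(o_{n+1}\) the type of senses of propositions of type \(o_n\); the hierarchy never closes off, which is why infinitely many primitive types are needed.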
Another way was explored by George Bealer, especially in his book Quality and Concept (1982). Bealer shows how intensional logic can be set up using an operator written with square brackets, so that a formula [A] simply represents the intension of the logical object A, which can be any of the usual things in a first-order system. Bealer takes the logical notion of intension as primitive, and does not attempt to define it as a function from anything to anything. My problem with this system is that it tries to be first-order, which means that a formula cannot be organized to reflect a decent linguistic composition of its subterms with respect to meaning composition. In other words, Bealer's system applied to natural language cannot be compositional (or so it seems to me), since it inherits the same problems for linguistic compositionality that have always plagued first-order logic.
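To illustrate (a toy example of my own in Bealer's style, not one from the book): intensional abstraction turns a formula into a first-order term, so a belief report becomes an ordinary atomic formula whose second argument is that term:

\[
\mathrm{Believes}(\mathrm{john},\ [\forall x\, \mathrm{Rational}(x)])
\]

The bracketed expression denotes the proposition that everything is rational, and it behaves as a term rather than a formula, which is exactly what keeps the system first-order.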
But surely one of these systems, or some relative of them, could be applied to yield a decent intensional logic for natural language semantics that does not use possible worlds.
Friday, April 16, 2010
Morphosyntax: the next frontier
In an earlier post I wondered whether the newer pregroup grammar framework should supersede type-logical grammar for syntax. I have now decided such niceties are not important. The fact is, we have more than enough machinery to handle the pure syntax of language "algebraically" or "logically," however you view it. My own work has contributed to the learning algorithms for this. I think it is time to move on.
What's next? Morphosyntax. Many, perhaps most, languages involve a very significant amount of morphology that contributes essentially syntactic information. Some, such as Navajo, have a very simple syntax in which a cripplingly complex morphology does most of the actual work. Yet mathematical linguists rarely treat morphosyntax. I don't really know how to handle it myself. My basic suggestion is that we need a new class of mathematical models capable of handling a language such as Navajo. These would be able to relate semantics directly to the morphology and syntax at the same time, preferably without making reference to morphemes. Most morphology is not very well described morphemically, but it all submits rather well to a relational view in which the morphology is modeled as a system of form-meaning relations among words.
This is a challenge I hope to take up one day in the not-too-distant future. I of course invite anyone to tell me what is already out there in the vein of mathematical morphosyntax. My acid test is, does it work for Navajo?
Tuesday, April 6, 2010
Kornai on complexity and learning
Learning theory is a subject I am very close to, so Chapter 7 of Kornai's "Mathematical Linguistics" is one I read with great interest. The first section is a quick run-through of essential concepts from information and coding theory. The second section is an equally quick run-through of informational complexity, so-called Kolmogorov complexity. The third section is closer to my own expertise, and it discusses the three most important topics in learning theory today, namely minimum description length, identification in the limit (the Gold inductive inference paradigm), and probably approximately correct learning.
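For orientation, the central definition of the second section, in its standard form rather than Kornai's exact notation: the Kolmogorov complexity of a string x relative to a fixed universal machine U is the length of its shortest program,

\[
K_U(x) \;=\; \min \{\, |p| : U(p) = x \,\}.
\]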
There are very few, if any, chapters or articles on formal linguistics that address all of these important topics. I do feel that Kornai's chapter is too short, and that it fails to highlight connections between the topics considered. The subsection on identification in the limit does make what I see as the most important point about this paradigm for the linguist: while Gold's theorem famously showed that no "superfinite" class of languages is identifiable, a superfinite class by definition contains *all* finite languages over the vocabulary (together with at least one infinite language), and there is no reason to suppose that the possible human languages include all (or even a single one) of the finite languages. For this reason, Gold's notorious result does not impede, or even have much relevance to, the formal modeling of human language learning by inductive inference. This point cannot be overemphasized, and my feeling is that research in the formal modeling of human language learning was set back decades and became extremely unpopular because of widespread misunderstanding of the import of Gold's result.
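Stated compactly (my paraphrase of Gold's 1967 theorem, not Kornai's formulation):

\[
\mathcal{L} \supseteq \{\, L \subseteq \Sigma^{*} : L \text{ finite} \,\} \cup \{ L_{\infty} \}
\ \Longrightarrow\ \mathcal{L} \text{ is not identifiable in the limit from positive data,}
\]

where \(L_{\infty} \subseteq \Sigma^{*}\) is any infinite language. The hypothesis on the left is exactly superfiniteness, and nothing forces the class of human languages to satisfy it.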
For lack of time, this entry will be my last about Kornai's book for the moment. There are three further chapters, which concern hidden Markov models and related matters of speech and handwriting pattern recognition. The book is a highly recommended read; I hope these posts have whetted the appetite of some readers here and will send you to look at it.