I ran across something interesting in the book Cosmology by Liebscher (2005). A conundrum known as Olbers's paradox has long provided motivation for cosmological thinking. The paradox shows that, if there were infinitely many potentially observable stars, the night sky should be infinitely bright. Modern cosmology takes this as a proof that the universe has a history in which there were no stars, so the number of stars we can see by looking far enough away from Earth is effectively limited by time.
There is an alternative attempt to resolve the paradox, credited to Lambert (1761), which assumes that the universe consists of systems of different orders. As Liebscher describes it, "a system of order n contains g_n systems of order n-1, where g_n is the multiplicity... now every integral over the universe may converge." This naturally reminded me of type theory being invoked to resolve the paradoxes of semantics and the like. Lambert's "ordered universe" cosmology is known to be inadequate because it predicts an inhomogeneous universe at every distance scale, which is contrary to observation. Many would also say that type theory is an inadequate solution to our problems in logic and linguistics.
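To see why the naive accounting diverges and what Lambert's hierarchy buys, here is a rough back-of-the-envelope sketch under the usual textbook idealizations (identical stars of luminosity L, static and transparent space); the power-law fall-off is my own illustrative stand-in for Lambert's discrete hierarchy, not his actual construction.

```latex
% Naive accounting: stars of luminosity L with uniform number density n.
% A star at distance r contributes flux L / (4 pi r^2), and the shell
% [r, r+dr] contains n * 4 pi r^2 dr stars, so the total flux is
\[
  I \;=\; \int_0^{\infty} \frac{L}{4\pi r^2}\, n \, 4\pi r^2 \, dr
    \;=\; nL \int_0^{\infty} dr \;=\; \infty .
\]
% In a Lambert-style hierarchy the effective stellar density thins out
% with distance, say n(r) \propto r^{-\alpha}.  For any \alpha > 1,
\[
  I \;\propto\; \int_{r_0}^{\infty} r^{-\alpha}\, dr \;<\; \infty ,
\]
% which is the sense in which "every integral over the universe may
% converge".
```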
Thursday, May 27, 2010
Sunday, May 16, 2010
Modeling the learning of grammar
In mathematical approaches to linguistics, the results are usually solid because they are derived mathematically. But there always remains the question of whether the mathematical model chosen actually serves as a satisfactory model of something. Carnap pointed out long ago that this question has no formal answer; it is really quite subjective. He used the term explicatum to refer to the formal model of an informally expressed explicandum.
When it comes to the learning of grammar, even restricted to the syntax of natural language, it is not even clear what the explicandum is. The only obvious fact is that children do indeed learn a language. Suppose the language has a grammar: then what? Researchers disagree on how to model the learning of grammar. There is a canon of literature in algorithmic learning theory that models grammatical inference using only text strings as input data. Alex Clark is a contributor to this literature, and he has derived interesting results concerning the identifiability of substitutable context-free languages. I noticed in the 2009 Algorithmic Learning Theory proceedings that this result has recently been extended to substitutable mildly context-sensitive languages, a class that turns up again and again in mathematical linguistics.
A substitutable language is, roughly, one in which any two substrings that share even a single context (construed linearly, not structurally) must share all of their contexts; that is, the language then allows them to substitute for one another everywhere. It seems to me that this is too strong a property to be true of natural languages.
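To pin the definition down, here is a minimal sketch of a substitutability check over a finite word-level sample, treating each sentence as a tuple of words (my own toy illustration, not code from Clark's work; a finite sample can of course only approximate a property defined over a whole language):

```python
from itertools import combinations

def contexts(u, sample):
    """All contexts (l, r) such that the sentence l + u + r occurs in the sample."""
    ctxs = set()
    for s in sample:
        for i in range(len(s) - len(u) + 1):
            if s[i:i + len(u)] == u:
                ctxs.add((s[:i], s[i + len(u):]))
    return ctxs

def substrings(sample):
    """All non-empty contiguous word sequences occurring in the sample."""
    return {s[i:j] for s in sample
                   for i in range(len(s))
                   for j in range(i + 1, len(s) + 1)}

def substitutability_violation(sample):
    """Return a pair of substrings that share some context but not all of
    their contexts, or None if the sample (viewed as a finite language)
    is substitutable."""
    for u, v in combinations(substrings(sample), 2):
        cu, cv = contexts(u, sample), contexts(v, sample)
        if cu & cv and cu != cv:   # share a context, but not every context
            return u, v
    return None

# A tiny word-level sample: "John" and "him" share the context "I saw __",
# but only "John" also occurs sentence-initially in "John saw me".
sample = [("I", "saw", "John"),
          ("I", "saw", "him"),
          ("John", "saw", "me")]

print(substitutability_violation(sample))   # (('John',), ('him',)), in some order
```

On these toy data the check reports the pair "John"/"him": they share one context but not all of them, which is exactly the kind of asymmetry that makes the property look too strong for natural language.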
Aside from that, it is quite unlikely that children learn language from word sequences only, divorced from meanings. For these reasons, I have pursued a different modeling approach in my work on type-logical grammar induction (see my new paper in the Journal of Logic, Language and Information for a full account). I hold that learning should be modeled using sentences annotated with structural information. I think it is possible that children could use a kind of corpus analysis, together with their meaningful experience, to glean this information without benefit of a complete grammar.
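For contrast, here is a schematic picture of the two kinds of learning data; this is purely illustrative, and the annotation scheme used in the actual paper may differ.

```python
# Purely illustrative contrast between the two modeling choices.

# Pure string model: the learner sees only the word sequence.
string_datum = ("the", "cat", "sleeps")

# Structure-annotated model: the learner also sees an unlabeled
# constituent bracketing, here encoded as nesting of tuples.
structured_datum = (("the", "cat"), "sleeps")

# A type-logical learner could exploit the bracketing: if the whole
# expression is known to be a sentence (type s) and "the cat" is
# hypothesized to be a noun phrase (type np), then "sleeps" can be
# assigned the functor type np\s, which combines with an np on its
# left to yield an s.
```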
But you see, the matter of modeling is subjective. I had a (friendly) argument about these very points with Alex Clark a few years ago. He was equally committed to pursuing the pure string-learning model. He said my assumptions were too rich, the extra information too plentiful. We weren't arguing about mathematical points, but rather about what the best explicatum is for the right explicandum. We can't even agree on the explicandum.
Mathematical linguistics will benefit, I believe, from the parallel pursuit of different explicata for different explicanda. Then, when we know more, due to advances in cognitive science or whatever, a wide range of mathematical results will already be known.