A new publication by Philippe Schlenker ["Super Liars" in Review of Symbolic Logic 3(3)] presents an excellent new treatment of that old chestnut, the Liar paradox (exemplified in my post title). Schlenker's insight is to develop a technical logical semantics which, rather than trying to solve the paradox, accounts for how human languages cope with it while still devising a sane system of truth values for those languages.

Schlenker begins with the now standard assumption that an account of Liars and other paradoxes in natural language requires a third truth value. He then shows, in an easily readable fashion, how this step leads directly to the need for ordinally many truth values, with an infinite hierarchy of so-called "super liars" arising along the way. The resulting treatment is as expressively complete as can be expected: it lets a language express, of any given "super liar" sentence, that it is something other than true.
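To see why the third truth value does the work it does, here is a toy sketch of my own (not Schlenker's actual system): a strong-Kleene-style negation in which the third value, written `None`, is its own negation. The Liar's defining equation L = not(L) then has a stable solution only at the third value, while the two classical values just oscillate.

```python
# Toy illustration (my own, not Schlenker's system): strong-Kleene
# negation over three truth values, with None as the third value.

def sk_not(v):
    """Strong-Kleene negation: the third value is its own negation."""
    if v is None:
        return None
    return not v

def liar_fixed_point(start, steps=10):
    """Iterate the Liar's defining equation L = not(L) from a starting
    value; return the value if it stabilizes, else report failure."""
    v = start
    for _ in range(steps):
        nv = sk_not(v)
        if nv is v:
            return v  # stable: a consistent truth value for the Liar
        v = nv
    return "no fixed point"

print(liar_fixed_point(True))   # no fixed point (True/False oscillate)
print(liar_fixed_point(None))   # None (only the third value is stable)
```

The "super liars" then arise because the language can also say "this sentence is not true *and not the third value*," forcing yet further values — hence the climb into the ordinals.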

The semantics of paradox must eventually be integrated into a functioning semantic theory for natural language; this paper makes a good start.

## Sunday, December 19, 2010

## Tuesday, December 14, 2010

### Proof nets for grammatical logic

A "proof net" is a kind of compressed proof that was developed for linear logic in the late 1980s. Proof nets have been studied a great deal ever since, for a number of reasons, one of which is that linear logic is a close relative of the Lambek logics useful in linguistic syntax under the modern-day rubric of type-logical grammar. A good source on type-logical grammar and linguistics is my now passé 2004 book (not in most libraries but available cheaply as an ebook, though try to ignore Ch. 6), as is Glyn Morrill's 1994 book (not available cheaply but found in many libraries). For learning about linear logic, type-logical grammar, and proof nets all under one roof, the one and only source is Richard Moot's PhD dissertation, a real tour-de-force. My own dissertation (which grew up into the book) is quite embarrassing compared to this masterpiece, which is still available here.

I'm currently working on a survey paper about proof nets and other proof "compressions." I've actually been working on this paper for about ten years, but I really feel I might finish it soon. The purpose is to highlight the subject for the broader logic community, and to focus primarily on how the proof compressions become more and more geometrical as the logic gains stricter control over the arrangement of formulae. For classical propositional logic, the most compressed proof object is a "matrix" of atoms derived from the formula tree; the condition ensuring provability of the formula is basically set-theoretical, with not much geometry to it. For linear logic, one keeps track of multisets of formulae (counting occurrences), and so the proof net provability condition is more graph-theoretical. For the associative Lambek logic we no longer have commutativity of the formulae, and the corresponding proof net condition comes to involve "planarity" of the graph: a fundamentally geometric condition that can be expressed using topology. The most restricted logic of all is the nonassociative Lambek system, and extending the proof nets to this case has to involve more specific topological conditions which I am still working out for my paper.
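The planarity condition for the associative case is easy to illustrate. In this toy sketch (my own illustration, not from the survey), an axiom linking pairs up atom occurrences along the frontier of the formula tree, written as pairs of positions; the linking is planar exactly when the arcs can be drawn above the sequence of atoms without crossing, i.e. when they nest like balanced brackets.

```python
# Toy illustration: planarity of an axiom linking, where a linking is
# a list of pairs of atom positions along the sequent's frontier.

def crossing(link1, link2):
    """Two arcs drawn above the line cross iff exactly one endpoint
    of one arc lies strictly inside the other arc's span."""
    a, b = sorted(link1)
    c, d = sorted(link2)
    return a < c < b < d or c < a < d < b

def is_planar(linking):
    """A linking is planar iff no two of its arcs cross."""
    return not any(crossing(l1, l2)
                   for i, l1 in enumerate(linking)
                   for l2 in linking[i + 1:])

# Atoms at positions 0..3: nested links are planar, interleaved are not.
print(is_planar([(0, 3), (1, 2)]))  # True  (nested, like (()) brackets)
print(is_planar([(0, 2), (1, 3)]))  # False (the two arcs cross)
```

In commutative linear logic the crossing linking could be perfectly fine (the condition there is graph-theoretical, about acyclicity and connectedness); it is precisely the loss of commutativity in the Lambek calculus that promotes the condition to this geometric one.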

All of this seems important to me because, if linguistic grammar is to be modeled type-logically, the "proofs" of sentences must admit a highly compressed presentation for the system to have cognitive relevance. Generally, the brain's innate computational abilities seem to invoke highly efficient methods that are implemented in the neural "analog computer." But that leads me to a discussion of analog computation in the brain, which should be left to another post.
