As a book selector for my university library, I find out about numerous books that would otherwise remain unknown to me. One interesting book we recently bought is Cognitive Reasoning: A Formal Approach by Anshakov and Gergely. The book presents a massive effort to systematize a formal logical framework that is seriously intended to model cognitive reasoning. It is, in one sense, an amplification of the usual efforts within AI to devise a "logic of knowledge and belief," but that description does not really do justice to the work documented here.
The book builds on "a long prehistory" of work carried out in Russia and Hungary over many years, founded on the plausible-reasoning methods of so-called JSM systems (named after John Stuart Mill because of their close affinity with his logical proposals). The formal logics underlying the technique are versions of "pure J-logics." These are multi-valued logics with two sorts of truth values, called "internal" and "external." The external truth values are just the usual two, but the internal truth values can form a very rich (countably infinite) set. The logics contain J-operators, which are characteristic functions of subsets of the internal truth values. All this machinery is used to devise a sophisticated logic of knowledge updating which "permits one to describe the history of cognitive processes." The objective is to model the reasoning process that moves a cognitive agent "from ignorance to knowledge," so the system is a dynamic logic as well. The formalism also introduces a new kind of syntactic entity, a "modification inference," which can add new formulae by nondeductive rules and thereby modify constituents already established in the inference.
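To make the role of J-operators a little more concrete, here is a minimal Python sketch of my own; the particular internal truth values and the chosen subset are illustrative assumptions, not anything taken from the book. It simply treats a J-operator as the characteristic function of a subset of internal values, returning an ordinary external (two-valued) verdict.

```python
# Minimal toy sketch (my own construction, not from Anshakov and Gergely) of a
# J-operator over a two-sorted truth-value structure: the operator is the
# characteristic function of a subset of the internal truth values, and it
# returns an external (classical) truth value.

# Illustrative internal truth values; the book allows a much richer,
# countably infinite set.
INTERNAL = {"+1", "-1", "0", "tau"}

def make_j_operator(sigma):
    """Return the J-operator for a subset sigma of the internal truth values."""
    assert sigma <= INTERNAL
    return lambda v: v in sigma   # True/False play the role of the external values

# For example, a J-operator that picks out just the internal value "+1":
j_plus = make_j_operator({"+1"})
print(j_plus("+1"))   # True
print(j_plus("tau"))  # False
```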
I have not finished reading this book, but I believe that it is very important for anyone interested in logic-based AI or cognitive science. It provides a rich and technically interesting set of formal methods for seriously trying to model cognitive reasoning.
Monday, March 21, 2011
Tuesday, March 1, 2011
Robot language acquisition
I discovered an interesting research program currently under way in the Adaptive Systems Group at the University of Hertfordshire, UK. A representative paper is "An integrated three-stage model towards grammar acquisition" by Yo Sato and colleagues, which appeared in the 2010 IEEE International Conference on Development and Learning. The paper documents an experiment in "cognitive robotics" in which a robot is situated in a realistic language-learning environment.
According to the abstract, "the first, phonological stage consists in learning sound patterns that are likely to correspond to words. The second stage concerns word-denotation association. . . The data thus gathered allows us to invoke semantic bootstrapping in the third, grammar induction stage, where sets of words are mapped with simple logical types." This work is especially interesting to me because the grammar induction uses a semantic bootstrapping algorithm related to one that I developed and published in 2005 in the Journal of Logic, Language and Information.
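As a rough illustration of what that third stage amounts to, here is a toy Python sketch of my own devising; it is not the Hertfordshire group's algorithm, nor my published one, and the word list, type inventory, and helper functions are all invented for the example. The idea is simply to seed a few words with entity denotations (as if from the second stage) and then search for types for the remaining words so that each utterance composes to a sentence.

```python
from itertools import product

# Toy illustration of semantic bootstrapping into simple logical types
# (an invented example, not the system described in the paper).

# Hypothetical stage-2 output: words already linked to entity denotations.
seeded = {"ball": "e", "box": "e"}
candidate_types = ["e", "(e,t)", "((e,t),t)"]   # invented inventory of simple logical types

def composes_to_t(types):
    """Can a pair of types combine by one functional application to give t?"""
    a, b = types
    return (a == "(e,t)" and b == "e") or (a == "e" and b == "(e,t)")

def induce(utterance):
    """Guess types for the unseeded words in a two-word utterance."""
    unknown = [w for w in utterance if w not in seeded]
    for assignment in product(candidate_types, repeat=len(unknown)):
        lexicon = dict(seeded, **dict(zip(unknown, assignment)))
        if composes_to_t([lexicon[w] for w in utterance]):
            return {w: lexicon[w] for w in unknown}
    return None

print(induce(["touch", "ball"]))   # {'touch': '(e,t)'} -- the verb gets a predicate type
```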
In a discussion following my previous post, I offered the opinion that as computing power increases, we will (I hope) see more efforts to implement theoretically inspired learning algorithms that are quite intractable. This robotics paper represents one such effort, which I am very pleased to see. Yo Sato tells me that they are now looking at incorporating the improvements I have recently made to the original semantic bootstrapping algorithms. It's always gratifying to see an application inspired by my theoretical developments, since this is really why I pursue the work, but I am neither sufficiently capable nor sufficiently interested to carry out the applied work that it calls for.