Wednesday, May 26, 2010

The Intentional Description of Reactive Systems

Here is the paper where I wrote up my Ph.D. work (PDF file). And here is how it starts.
"This is a revised and more "user-friendly" version of [Seel, 90], which wrote up my Ph.D. work for publication. The latter has more pictures, and its slightly greater austerity may please the formalist. Still, all the math is here, it's just more clearly explained!

The problem we're looking at is simply stated. Take an example: we can look at intelligent robots operating in trial-and-error mode in a local environment, and accurately describe their behaviour using sentences like "look, it didn't know there was a hole there, that's why it fell over", and "now it knows where the power supply is, it will find a direct route."

In the jargon, this use of words like "knows", along with words like "believes" and "wants", to describe and predict the behaviour of agents is called intentional description. It's so natural that it's not even obvious why it should be problematic. But the designers and engineers who constructed the robot have a different story as to how it behaves. They understand the causal mechanisms underpinning its perceptual, information-storage and problem-solving abilities. They can also predict (and alter and fix) the robot's behaviour.

Engineers normally have little time for intentional descriptions, dismissing them as so much sentimentality and anthropomorphism. Nor do they see how intentional descriptions could work - the casual observer doesn't, after all, know the engineering. Still, intentional descriptions do work - we use them all the time, and not just for robots.

To see how it all fits together, we need to use mathematical models: both for robots and their environments (for which we use automata-type formalisms) and for intentional language (for which we use logic, namely the special modal logics of knowledge, belief and goals).

It then turns out that the relationship between the two kinds of description, intentional and engineering, comes out in the maths as a semantic relationship between formulae in the logic and the appropriate automata-structured model spaces of the logic.
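To give a flavour of that semantic relationship, here is a minimal sketch of my own (not the paper's formalism - all state names and propositions are hypothetical): the knowledge modality K is evaluated over an automaton's state space, where two states are "accessible" from each other whenever the agent's observation cannot tell them apart.

```python
# Toy automaton states: each carries the atomic propositions true there
# and the observation the agent makes there (all names hypothetical).
states = {
    "s1": {"props": {"hole_ahead"}, "obs": "dark"},
    "s2": {"props": set(),          "obs": "dark"},
    "s3": {"props": {"hole_ahead"}, "obs": "light"},
}

def indistinguishable(s, model):
    """States the agent cannot tell apart from s (same observation)."""
    return [t for t, d in model.items() if d["obs"] == model[s]["obs"]]

def holds(prop, s, model):
    """An atomic proposition holds at a state if it is in its valuation."""
    return prop in model[s]["props"]

def knows(prop, s, model):
    """K(prop) at s: prop is true in every state indistinguishable from s."""
    return all(holds(prop, t, model) for t in indistinguishable(s, model))

# At s1 the observation "dark" is compatible with s2, where there is no
# hole - so "it didn't know there was a hole there". At s3 the observation
# pins the state down, so the knowledge claim comes out true.
print(knows("hole_ahead", "s1", states))  # False
print(knows("hole_ahead", "s3", states))  # True
```

The point of the sketch is that "knows" is not a mysterious inner quality: it is a checkable relationship between a formula and the engineering-level state space.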

To properly get the details of what follows, you need to know basic propositional modal logic as taught in introductory AI or logic courses. Despite the jargon, the paper is more interested in conceptual clarity than the use of deep or intricate mathematical techniques (in fact I don't use any). If you read the words and avoid the formulae, you still get the gist of what's going on.

A shorter alternative to reading the rest of this paper is to look at (Seel, 1991) at"

Seel, N. R. (1990). Intentional Description of Reactive Systems. In Y. Demazeau & J-P. Muller (Eds.), Proceedings of the Second European Workshop on Modelling Autonomous Agents in a Multi-Agent World. Elsevier Science Publishers B.V./North-Holland.

This is the main technical presentation of my Ph.D. work in an accessible form.