Sunday, August 18, 2013

The aura of theorem-proving

There is an aura around automated theorem proving within AI (indeed, there once was a high-powered prover called AURA). It's seen as very sophisticated and technical, with proven applications and results.

But there's a little secret at the heart of the subject. Once you've implemented the formulae, the axioms and the inference rules, you hit an exponentially increasing search space which kills you.
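To make the blow-up concrete, here is a toy sketch in Python of blind binary resolution over a handful of propositional clauses. The clause set and the saturation loop are invented for illustration and stand in for no particular prover; the point is simply that, with no heuristics, the number of derived clauses balloons round on round.

```python
from itertools import combinations

def negate(lit):
    """Flip the sign of a literal, e.g. 'p' <-> '~p'."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    """Every clause obtainable by resolving c1 and c2 on one complementary literal."""
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def saturate(clauses, max_rounds=4):
    """Blind saturation: resolve every pair, keep everything, repeat.
    Prints how fast the clause set grows -- no heuristics, no subsumption."""
    known = set(clauses)
    for r in range(1, max_rounds + 1):
        new = set()
        for c1, c2 in combinations(known, 2):
            new |= resolvents(c1, c2) - known
        if not new:
            break
        known |= new
        print(f"round {r}: {len(new)} new clauses, {len(known)} kept in total")
    return known

# A made-up clause set over six atoms; even this tiny input feeds the blow-up.
clauses = [frozenset(c) for c in
           [{'p', 'q', 'r'}, {'~p', 'q', 's'}, {'~q', 'r', 't'},
            {'~r', 's'}, {'~s', 't'}, {'~t', 'p'}]]
saturate(clauses)
```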

People write incredibly clever algorithms to speed up inference and search, host their provers on massively parallel hardware, and deploy libraries of hopeful heuristics. Computational complexity is unimpressed, however, and so automated theorem provers have failed to take over the world.

Humans do not solve problems - mathematical or otherwise - by constrained search through the problem space from first principles. Instead we get educated about the domain, and after study and practice we do the following (a toy sketch of the loop appears after the list).

1. Categorise a given problem/situation as being of a certain type.

2. Select those results and techniques which have a bearing on this type of problem.

3. Apply the relevant results and procedures with intelligent monitoring of progress, which may cause a revisit to steps 1 and 2.
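As a caricature of what that loop might look like in software, here is a minimal sketch: every category, technique name and the 'attempt' function below is invented for illustration and taken from no real system.

```python
# Hypothetical problem categories mapped to the results and techniques a
# trained person would reach for. Every name here is invented for illustration.
TECHNIQUES = {
    'induction':    ['structural induction', 'generalise the hypothesis'],
    'inequality':   ['AM-GM', 'Cauchy-Schwarz', 'square both sides'],
    'divisibility': ['reduce mod n', 'factorise', "Euclid's algorithm"],
}

def categorise(problem):
    """Step 1: crude keyword matching standing in for the pattern
    recognition an educated mathematician performs at a glance."""
    for category in TECHNIQUES:
        if category in problem.lower():
            return category
    return None

def solve(problem, attempt):
    """Steps 2 and 3: select the relevant techniques, apply them while
    monitoring progress, and fall back to re-categorising when stuck."""
    category = categorise(problem)
    if category is None:
        return 'no category matched: back to step 1 with more study'
    for technique in TECHNIQUES[category]:
        if attempt(technique, problem) == 'progress':   # the monitored application
            return f'{technique} looks promising: keep going'
    return 'known techniques exhausted: reclassify and try again'

# Usage: the caller must supply 'attempt', which does all the real work --
# and hiding everything interesting there is exactly the ad hoc rat-hole.
print(solve('prove this divisibility claim', lambda technique, problem: 'progress'))
```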

There have been attempts to match this knowledge-based approach to theorem-prover and automated planning system architectures, but you seem to end up down the rat-hole of ad hoc design. The ever-increasing speed of modern computer systems seems to tempt researchers back to cleaner ATP architectures which directly leverage the available raw processing power.

Long-term, though, it's not the answer.