Thursday, March 17, 2016

AI really is applied neurobiology

When I was researching AI back in the 1980s, we'd all heard of Geoffrey Hinton. He was the key pioneer of artificial neural nets, a field which at the time wasn't making much progress. He never turned up at the main AI conferences where we discussed logics, theorem-proving and symbolic planning programs. A different paradigm entirely.

I never set eyes on him.

Professor Hinton has had the last laugh. His work, and that of close colleagues, underpins the recent AI successes of Google (AlphaGo!), Baidu, Facebook and Microsoft. Whether it's speech recognition, automatic translation or face recognition, deep-learning neural nets are powering it along behind the scenes.

Here's Professor Hinton's recent address to the Royal Society (h/t Steve Hsu). I don't normally find time to watch other people's recommended videos, but I made an exception for this one. Hinton rather reminds me of Richard Dawkins in appearance and style. He's lucid, understated and staggeringly smart. Here he engagingly tells the story of the fall and rise again of the neural net approach to artificial intelligence.

Near the end of the talk, he puts up this slide for barely two seconds ... and then hides it.

The "secret slide"
I'm sure he just felt it wasn't quite pitched at his audience, which didn't seem to be packed with AI specialists.

There was always a tendency within AI to distinguish between our language for describing agents as knowing, believing, wanting entities - intentional, symbolic talk - and the presumed internal architecture that actually causes their behaviour, an architecture which need not involve pushing symbols around at all.

We've long been aware of the awesome computation underlying the unconscious situational competences of animals and humans, and we've long failed to replicate those abilities with 'Good Old-Fashioned AI'. Perhaps it's time to conclude, with Prof. Hinton, that the architecture which realises such capabilities really has to be that of the deep-learning neural net.
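By way of illustration only - nothing from the talk, just a toy in plain numpy - here is a minimal sketch of what that alternative architecture looks like: a tiny multi-layer net that learns XOR by gradient descent on its weights. The layer sizes, learning rate and iteration count are arbitrary choices; the point is that the competence ends up encoded in weight matrices, with no symbolic rules being pushed around anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: famously beyond a single-layer perceptron, but easy for a small
# multi-layer net once the weights are learned rather than programmed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two hidden layers of 8 units: "deep" only in the most minimal sense.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: the "knowledge" lives entirely in the weight matrices.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)

    # Backward pass: gradients of squared error, chain rule layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h2 = (d_out @ W3.T) * h2 * (1 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1 - h1)

    W3 -= lr * h2.T @ d_out; b3 -= lr * d_out.sum(axis=0)
    W2 -= lr * h1.T @ d_h2;  b2 -= lr * d_h2.sum(axis=0)
    W1 -= lr * X.T @ d_h1;   b1 -= lr * d_h1.sum(axis=0)

print(out.round(2))  # typically close to [[0], [1], [1], [0]] after training
```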

Prof. Hinton is at pains to point out that even the most sophisticated current systems fall well short of the human brain, both in the sheer number of neurons and in the complexity of their interconnection and communication.
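A back-of-envelope comparison makes the gap vivid. The figures below are rough, commonly cited estimates - my assumptions for illustration, not numbers from the talk - set against a hypothetical large 2016-era net:

```python
# Rough, commonly cited figures - assumptions for illustration, not from the talk.
brain_neurons  = 8.6e10   # ~86 billion neurons in a human brain
brain_synapses = 1e14     # on the order of 100 trillion synapses
net_units      = 1e6      # hypothetical big 2016-era net: ~a million units
net_params     = 1e9      # ...and ~a billion trainable weights

print(f"neuron advantage:  {brain_neurons / net_units:,.0f}x")
print(f"synapse advantage: {brain_synapses / net_params:,.0f}x")
# -> roughly 86,000x and 100,000x, before even considering that a biological
#    synapse is far more complex than a single floating-point weight.
```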

We should keep reminding ourselves that brains are doing important stuff at the granularity of small groups of molecules: they are the essence of nanotech.*
___

* In fact, we should probably be using our best AI neural nets to figure out what our own neurons are actually doing. The massive connectivity found in brains may implement features and structures so complex as to be beyond unaided human comprehension.

[In an interesting analogy, you may recall the 'killer app' for quantum computers is said to be the simulation of quantum systems themselves, intractable with conventional computers.]
