Thursday, November 03, 2016

Finally, consciousness as an engineering problem!



I noted this recent research from MIT: "Making computers explain themselves: new training technique would reveal the basis for machine-learning systems’ decisions".

"In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts.

"But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.

"At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions."
The article is itself pretty opaque: we appear to be at the very beginning of this difficult road.
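
As best I can tell, the approach the article gestures at splits the network into two cooperating modules: a generator that selects a short subset of the input words (the "rationale") and an encoder that must make its prediction from that subset alone, with a penalty for keeping too many words. Here is a minimal sketch of that structure, forward pass only; all names, sizes and weights are invented for illustration, and real training would also need a gradient estimator for the discrete selection, which I omit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "document" is a sequence of word ids; the task is binary
# classification.  Vocabulary size, embedding size and the weights below
# are all made-up illustrative numbers.
vocab_size, embed_dim = 50, 8
embeddings = rng.normal(size=(vocab_size, embed_dim))

# Generator: scores each word and keeps it with probability sigmoid(score).
# The kept words are the "rationale".
W_gen = rng.normal(size=embed_dim)

# Encoder: classifies the document using *only* the kept words.
W_enc = rng.normal(size=embed_dim)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(word_ids, sparsity_weight=0.1):
    x = embeddings[word_ids]                      # (n_words, embed_dim)
    keep_prob = sigmoid(x @ W_gen)                # generator's per-word scores
    mask = (rng.random(len(word_ids)) < keep_prob).astype(float)
    if mask.sum() == 0:                           # always keep at least one word
        mask[np.argmax(keep_prob)] = 1.0
    rationale = x * mask[:, None]                 # zero out the discarded words
    pooled = rationale.sum(axis=0) / mask.sum()   # average of the kept words
    prediction = sigmoid(pooled @ W_enc)
    # Training would minimise prediction loss plus a penalty that keeps the
    # rationale short; the discrete mask means the generator needs something
    # like REINFORCE for its gradients, omitted here.
    sparsity_penalty = sparsity_weight * mask.mean()
    return prediction, mask, sparsity_penalty

doc = rng.integers(0, vocab_size, size=12)
pred, mask, penalty = forward(doc)
print("prediction:", round(float(pred), 3))
print("rationale mask:", mask.astype(int))
```

The point is just the division of labour: one module picks the evidence, the other is forced to make its call using only that evidence, so the selected words double as an explanation.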

But ...

Consider a neural net system C which observes another neural net system B and confabulates a symbolic, causal, intentional model explaining how B behaves. Does any of that sound familiar?
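
To make that concrete (and this is purely my own toy, nothing from the MIT work or from Bakker): let B be an opaque network and let C be an observer that only sees B's inputs and outputs and fits the simplest causal story it can, say a single-threshold rule. C's "explanation" may track B's behaviour quite well while bearing no structural resemblance to what B actually computes, which is the confabulation worry in miniature.

```python
import numpy as np

rng = np.random.default_rng(1)

# "System B": a black-box network (random weights stand in for a trained net).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=16)

def system_b(x):
    # B's real decision process: an opaque two-layer computation.
    return float(np.tanh(x @ W1) @ W2 > 0)

# "System C": observes B's inputs and outputs and confabulates a simple
# causal story of the form "B fires when feature i is above threshold t".
def system_c(xs, ys):
    best = None
    for i in range(xs.shape[1]):
        for t in np.linspace(-2, 2, 41):
            rule = (xs[:, i] > t).astype(float)
            acc = (rule == ys).mean()
            if best is None or acc > best[0]:
                best = (acc, i, t)
    acc, i, t = best
    return f"B outputs 1 when x[{i}] > {t:.2f} (matches {acc:.0%} of observations)"

xs = rng.normal(size=(500, 4))
ys = np.array([system_b(x) for x in xs])
print(system_c(xs, ys))
```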

The most convincing (though obviously still hand-waving) paradigm for thinking this through is R. Scott Bakker's Blind Brain Theory (BBT). I do advise you to read his PDF here.
