Friday, April 28, 2017

"The £20 BILLION race to create an AI sex doll"

From the Daily Mail Online today.

"The 'sex tech' market is worth an estimated $30.6bn - and across the globe, firms are racing to create a radical new type of robot they say could change sex forever.

"From AI personalities capable of holding a conversation to models with a functioning G-spot, firms are hoping consumers will pay up to $15,000 for a sex doll that never says no.

"Among the most impressive is RealDoll's Harmony - an artificial intelligence based sex bot that can hold conversations, remember what she's told and even has a customizable personality. ..."
The Guardian also has the story; its extensive article suggests that (unfortunately?) hype is still running way ahead of reality.


I was always puzzled (quick disclaimer: when I thought about the subject at all) as to how the sexbot's responses would be programmed.

I imagined over-excited yet poorly-paid programmers toiling in robot control language, trialling this limb movement in response to that input.

Silly me: still trapped in that old GOFAI paradigm.

They'll simply enroll thousands of couples, wire them up to a dense mesh of sound and motion sensors .. and set them to go.

With so much fine-grained big data at their disposal, they'll unleash a DeepMind-style artificial neural net .. and just learn optimised responses.*

It'll be 'Go' all over again.


* I seriously would not want to be on the test team ... .

Thursday, April 27, 2017

Ode to Joy

There is a special character to Europhilia. The hushed, reverential tone of voice, those shining eyes. You see it with the apostles of Globalism: Tony Blair, Nick Clegg .. and Emmanuel Macron.

If only I had the literary technique to characterise such foolishness. I bow instead to the superior writing skills of Phil Burton-Cartledge.
"The problem is for these folks, remainiacs if you will, the European Union is more than a trading bloc with an opaque bureaucracy. It's their Soviet Union, their City on the Hill, their Jerusalem.

"In their minds, the EU condenses Enlightenment values and liberal internationalism. It's an achievement standing above the nationalisms and tribalisms of old, that proves we can all get along on the basis of common humanity.

"And they have the nerve to look down their noses at Leave voters and call them deluded. It's stop-the-world-we-want-to-get-off, liberal-style. "
I rather wish I'd said that.

Wednesday, April 26, 2017

Diary: Dartmoor

We've been away a couple of days on Dartmoor. Just coinciding with a cold snap which brought icy northern winds, hail and snow showers.

Here are some pictures - click on an image to make it larger.

Clare and Alex stagger up to the pub after a bracing walk on Dartmoor

The river at Bovey Tracey. Looks pleasant because you can't photograph the wind.

View from the gardens at the unimpressive Castle Drogo folly: currently surrounded by scaffolding.

The dimly-lit library in Castle Drogo. This set of windows above - used to illuminate the (inaccessible) kitchens - reminded me of a Cray-1.

Your author confronts the west-country delicacy of pork faggots at Chagford

Despite its reputation, we found Dartmoor paths pretty dry after weeks without rain. This did not stop Alex almost succumbing to hypothermia after yesterday's storms (he was out on the moors all day), while on Monday I tried a short cut wading across a bog and discovered the hard way why the Dartmoor National Park people put in boardwalks.

Monday, April 24, 2017

"The future is already here ... "

The full William Gibson quote: "The future is already here - it's just not very evenly distributed".

We'll wait a long time for AI systems as competent as human workers (for day-to-day tasks). But when those systems arrive, their great benefit will be low marginal cost: software copied for free, hardware rolling off production lines.

And people won't have to do those jobs, unless - hipster-like - they especially want to.

So what would society be like with an abundance of cheap, competent labour?

This future already exists (to the benefit of some of the people) in places with an overabundance of cheap human labour.

From Marginal Revolution:
"The first couple of times I took a taxi to a restaurant I was surprised when the driver asked if I wanted him to wait. A waiting taxi would be an unthinkable expense for me in the United States but in India the drivers are happy to wait for $1.50 an hour. It still feels odd.

The cars, the physical capital, in India and the United States are similar so the low cost of transportation illustrates just how much of the cost of a taxi is the cost of the driver and just how much driverless cars are going to lower the cost of travel. ...

Every mall, hotel, apartment and upscale store has security. It’s all security theatre - India is less dangerous than the United States - but when security theatre can be bought for $1-$2 an hour, why not?

Offices are sometimes open 24 hours a day, 7 days a week. Not that anyone is in the office, just that with 24 hour security there is no reason to lock up, so the office physically stays open. ...

At offices, cleaning staff are on permanent hire so they come not once or twice a week but once or twice an hour. The excessive (?) cleanliness of the private spaces makes the contrast between private cleanliness and public squalor all the more striking."
Karl Marx's communism (abundance for all!) is often portrayed - by members of the elite - as an unattainable utopia; but Marx himself observed that communism for the masses would merely be an extension of the experience of the elite aristocracy through the ages.

Communism 'is already here', as the man said, but '... not very evenly distributed'.

Not yet.


India, with its oversupply of relatively unskilled manual workers, is not an optimal emulation of an AI future. AI systems will be more diverse, more embedded and hopefully not oversupplied.

Those waiting taxis come with negative externalities.

Saturday, April 22, 2017

Diary: the Bishop's Palace in spring

We visited the Bishop's Palace, Wells yesterday. There was an art exhibition.

Among the tulips. It reminded me of a Joni song

The underground river flows down from the Mendips, then bubbles up in here

Artwork in the Palace

The Glastonbury Owl

Obvious kitsch - but it made people laugh

There was one room we were not permitted to enter. They appeared to be filming an interview with a parliamentary candidate.

Friday, April 21, 2017

Sunday: select two from four

There are six choices (2 from 4). It's a real hard call. An additional constraint is that you'd expect one candidate from the left (top two) and one from the right (bottom two). But even that's not certain.

I said before that if François Fillon could get into the second round he'd likely win. If you followed the betting you'd have to go for Macron - Le Pen .. with Macron to win in the run-off.

Somehow I can't see bubble-candidate Macron winning ... .


Update (Sunday 8.15 pm): looks like I was wrong.


Essential reading: "The French, Coming Apart" - via Steve Sailer.

Thursday, April 20, 2017

Those bright, flickering webs

As a hopeless case, I was finally admitted to psychotherapy.

I lay on the couch, as relaxed as I ever get, and listened to the analyst.
"I see that your profession is neuroscientist. Now, I have your file here, but perhaps you could explain the problem to me in your own words?"
I sighed: repetition had become tedious in the extreme.
"In the streets, at work and at home, .. I am surrounded by systems. They're controlled by webs of neural tissue. They spin carefully-crafted, conformist and entirely-deceptive narratives .. purporting to explain to themselves and others just why they do what they do."
The analyst paused a moment to parse these rather abstract reflections,
"And when you see the people around you, your loved ones, what exactly do you see?"

"I see what my MRI scanner sees: heads filled with bright, flickering webs of neural activation. Enhanced glucose metabolism. I see protoplasmic circuitry doing what evolution has honed it to do. ... I see the laws of physics operating."
At this the analyst looked thoroughly alarmed. He stood up and walked across to his desk where he made a hushed and urgent call. I caught the term 'psychopath'.

Returning, he resumed speaking almost before he had sat down.
"You might have what we professionally call a framing issue. The answer is ..."
But I was no longer listening, I was focused instead on the intricate movements of his jaw, tongue and larynx, all controlled by that bright, flickering web I could almost see inside his skull.

Wednesday, April 19, 2017

Diary: negative mass + Tintinhull + Montacute

I was reading the 'Stardrive' book this morning (the part about how the Casimir effect is related to a regime of negative energy between the conducting plates) when the latest news broke about negative mass.
"Washington State University physicists have created a fluid with negative mass, which is exactly what it sounds like. Push it, and unlike every physical object in the world we know, it doesn't accelerate in the direction it was pushed. It accelerates backwards."
The discussion at Physics StackExchange clarified that this does not mean stuff falls upwards. No cavorite then.
"You also asked whether an object with negative mass falls up or down. The equivalence principle tells us that gravity is indistinguishable from uniform acceleration. That means that positive and negative masses have to behave the exact same way under gravity, so negative mass falls down."
If you want a complete explanation ... .


This afternoon we took a trip.

Clare and your author at Montacute House

Montacute House: an Elizabethan wonder

Tintinhull Garden

We've been discussing (as a family) who'd we vote for in the first round of the French Presidential election this Sunday. I think we're converging on Jean-Luc Mélenchon: his policies seem to have something for each of us 😎 ... .

Tuesday, April 18, 2017

On the granting of moral rights

If you gratuitously kick a cat, you are guilty of sadism; if you smash a laptop you are guilty of vandalism. There is a moral distinction.

Sometimes animals and people have been denied moral standing altogether. It is said that in mediaeval times there was no legal or moral objection to a knight killing a peasant; it counted, at most, as vandalism towards a factor of production.

Slaves were famously 'tools with voices'.

No-one has yet met with any AI system and taken a moral stance towards it; no-one has yet worried about turning the power off. As far out as we can see, new and exciting 'deep learning' AI systems are just better tools.

In theory their very power should boost productivity, but who gets the benefit? We over-produce generic graduates and consequently don't pay them much. A talented AI engineer, by contrast, can expect a six-figure income quite early in their career. But what they know is technical and complicated: STEM remains a minority pastime.


Overproduction of wannabe elites is such a poor idea.

We really do need to find them something worthwhile to do.

Saturday, April 15, 2017

The smart move

It's a bit obvious, the three-pronged attack, don't you think?
  • Decapitation-strike takes out the North Korean leadership
  • Bunker-busters take out the nukes
  • Jamming and carpet-bombing north of the DMZ nullifies the NK artillery.
None of this is 100% so the collateral damage (not least to South Korea) is going to be intense. And then - what next?

I see a much better strategy. The Americans keep the pot boiling - making the status quo untenable. The Chinese .. well, let's say Kim Jong-un and his closest supporters have a little accident. Perhaps a bad case of 'flu or some unfortunate transportation malfunction.

A new leadership emerges, one positively aligned with China, committed to economic reforms the People's Republic way - and prepared to forgo nukes.

Looks win-win to me.

Thursday, April 13, 2017

A star drive which might work (Mach Effect)

NASA has just announced this: "Mach Effects for In Space Propulsion: Interstellar Mission".

From Centauri Dreams:
" In this case, the work goes toward a so-called Mach Effect Thruster (MET). Mach effects are transient variations in the rest masses of objects as predicted by standard physics where Mach’s principle applies. Proponents believe they offer the possibility of producing thrust without the ejection of propellant, as discussed in James Woodward’s Making Starships and Stargates: The Science of Interstellar Transport and Absurdly Benign Wormholes (Springer-Verlag, 2012).

What Fearn proposes is to investigate such thrusters by continuing the development of laboratory-scale devices while designing and developing power supply and electrical systems that will determine the efficiency of the Mach Effect Thruster. The analytical task is to improve theoretical thrust predictions and build a reliable model of the device. At the theoretical level, this team is definitely talking deep space, with part of the proposal being to:

'Predict maximum thrust achievable by one device and how large an array of thrusters would be required to send a probe, of size 1.5m diameter by 3m, of total mass 1,245 Kg including a modest 400kg of payload, a distance of 8 light years (ly) away.'"
Here's the book mentioned.

Amazon link

I downloaded the book-sample to my Kindle app and so far it's both well-written and interesting. Unlike the 'EM Drive', which was widely criticised and seems to violate conservation of momentum, the Mach Effect appears to be a valid consequence of General Relativity when combined with Mach's principle - at least, no-one so far has come forward with a convincing theoretical refutation.

Experimental effects so far appear to be small (if they exist at all) and unproven, but NASA evidently considers there could be something to it.

I'll probably read the book a little later. At least Jerry Pournelle will be pleased.

Wednesday, April 12, 2017

Diary: today's chronicle of failures

Jerry Pournelle's blog (on the sidebar to the right if you're in a PC browser, otherwise here) has a recurring theme where he details his struggles with recalcitrant Microsoft products and sundry other applications.

I felt his pain today. The laptop refused to connect to the scanner (Epson BX630FW). I tried all the usual stuff: restarted everything, reconnected all devices to the router, reinstalled the printer software, switched from WiFi to an Ethernet connection, ... .

Result: stuff prints but the scanner remains unrecognised. My best guess is that something in the printer/scanner has broken. The workaround is just to take pictures via the pretty good camera on my Nexus 6 phone - I'm in no hurry to replace the five-year-old Epson device.


Work on the 'famous' chatbot has paused. Reason: I know how to do it and consequently I'm already bored.

The interesting hurdle was my bucket-list objective of getting a proper FOL resolution theorem prover to work. Now that it does (and ranks gratifyingly high in Google searches), moving on to a planner has lost much of its appeal.
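For anyone wondering what a resolution prover actually does, here is a minimal propositional sketch in Python (my own prover is in Common Lisp and handles full first-order logic with unification, but the core saturation loop is the same; the clause encoding here is purely illustrative):

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses.

    A clause is a frozenset of literals; a literal is a string,
    with negation written as a leading '~'.
    """
    resolvents = []
    for lit in c1:
        complement = lit[1:] if lit.startswith('~') else '~' + lit
        if complement in c2:
            # Cancel the complementary pair, union the remainders
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {complement})))
    return resolvents

def unsatisfiable(clauses):
    """Saturate the clause set under resolution.

    Returns True iff the empty clause (a contradiction) is derivable,
    i.e. the clause set is unsatisfiable. To prove a goal, add its
    negation to the axioms and test for unsatisfiability.
    """
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for resolvent in resolve(c1, c2):
                if not resolvent:      # empty clause: refutation found
                    return True
                new.add(resolvent)
        if new <= clauses:             # nothing new: saturated, satisfiable
            return False
        clauses |= new
```

To prove P from {P ∨ Q, ¬Q}, negate the goal and refute: `unsatisfiable({frozenset({'P', 'Q'}), frozenset({'~Q'}), frozenset({'~P'})})` returns `True`.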

I suspect that much of a chatbot's power lies in the data (i.e. the data-fill) anyway, not in the sophistication of the underlying architecture or algorithms. This makes me even less excited.

A deeper problem. A chatbot needs to interact with conversationalists and to learn. Minimally, this needs Internet access and engagement with a messaging platform such as Skype, Twitter or Facebook. But if you start from Common Lisp the integration problems look rather daunting and even expensive. I'm not enthused about shifting to Javascript or Python, where such integration would probably be easier.

So I'm awaiting some conceptual innovation sufficiently exciting to remotivate me. Something like cracking consciousness perhaps 😎 ...?


My favourite article today: this meditation from Greg Cochran disinterred from 2013 and still completely relevant.
"... Syria was born for trouble. Although we all know that ethnic diversity is our strength, better than ice cream or unicorn poop, it appears that Syria (like much of the Middle East) has managed to acquire too much of a good thing. Paradoxical as it may seem, Syria is actually overly diverse.

There are very ancient Christian communities, as well as Kurds (who aim for an independent Kurdish state, an idea that horrifies Turkey), but the real fight is between the Sunni Arabs (about 60 percent of the country) and the Alawites, who run the show and make up about 12 percent of the population. I’m sure that most of my readers are fully conversant with every detail of the history and practices of the various Muslim denominations—just as our lawmakers are—but let me talk about the Alawites for a moment.

The Alawites have an esoteric religion, one in which their most important beliefs are kept secret from outsiders. Since those beliefs are only revealed through a long process of initiation, even most Alawites don’t know what they are.

We know some things, most of which don’t sound at all like Islam. Alawites drink wine: they celebrate Christmas and Easter. They reject the call to prayer and the pilgrimage to Mecca. They have no places of worship. Women among the Alawites are not veiled, and enjoy greater freedom than among Sunnis or Shi’ites, but it seems that this is the case because they are believed to be soulless—they are never initiated into the mysteries. Alawites also seem to believe in reincarnation.

Traditionally, Alawites were considered non-Muslim and treated like dirt—worse than Christians or Jews. You can see how the Sunni majority might resent being ruled by them—indeed, it’s hard to imagine how that ever came to pass.

The roots of Alawite dominance go back to the French colonial era. Most Syrian Muslims opposed French rule and refused to serve in the local gendarmerie—but the Alawites did. After independence, the Alawites continued to enter the armed forces in large numbers, partly because they were poor as heretical church mice. At first, the highest ranks in the army were filled by Sunnis, but each coup led to the expulsion of Sunni generals on the wrong side, and there were many coups. The political struggles bred mutual suspicion among the Sunnis, but the Alawites stuck together. The Alawites were also overrepresented in the Baath party.

So, while the Baath party took over in 1963, the Alawites took over in 1966—and they haven’t let go yet.

The thing is, when you ride the tiger, you can’t let go. Although they have made efforts to build support outside their sect, through nationalist and redistributionist policies, the Alawite government has always faced violent opposition. They’ve put down full-scale revolts, most notably in Hama, 1982, where they leveled the city with artillery, killing tens of thousands. All that official violence means that they can’t afford to lose. Once the Alawites were despised, but now they’re hated. At this point, Peter W. Galbraith, former ambassador to Croatia, says “The next genocide in the world will likely be against the Alawites in Syria.” ...
The tone of the whole article is ironic and satirical. The comments - worth a look - confirm that Americans don't do irony.
"Reinhold says:

September 10, 2013 at 5:48 pm

Is this some kind of a troll? If not: proof that scientists are often brain-dead regarding politics."
A little later another commentator sadly remarks:
"Anonymous says:

September 11, 2013 at 8:49 pm

So far one out of seven people realized that the proposal is satire. That ratio is probably above average."
The standard procedure for Sunni Jihadis with Alawite captives is to behead them. If I hear the neocon-signposting phrase, "bombing his own people", one more time ... .

Tuesday, April 11, 2017

Diary: my reading stack

Amazon link

I was looking for something explaining the history and evolution of the world's major languages - a book which was consistent with ancestral population genomics and the historical record. Nicholas Ostler does a fine job for written languages over the last 5,000 years.

On the strength of that I ordered (for both Clare and myself):

Amazon link

which is due to arrive today.


I recently reread Quantico, a testament to the power of Greg Bear's writing when he really cares about the subject matter. Its subject is anthrax-based biological warfare - a revenge attack against the world's great religions. The main protagonist is an Americanised Muslim, sympathetically drawn, and there is no preachiness to speak of. An exciting and chilling narrative.

On the strength of that I'm now in the middle of his follow-up.

Amazon link


After immersing myself in Bukharin's life, I was naturally curious about Stalin.

In my youth I was educated in the Trotskyist tradition, which sees Stalin as a malevolent dullard who broke with Marxist principle in a murderous struggle for absolute power.

On the other hand ... he did preside over the crash-industrialisation of Russia in the 1930s and arguably did ensure allied victory in the second world war. Would Bukharin's or Trotsky's policies really have worked better, given the objective situation?

I'm no longer so sure.

I selected this biography after carefully reviewing the three or four major candidates, looking for an author without too many moralistic preconceptions or an overt agenda. The book arrives tomorrow and I hope for the best.

Amazon link


And finally, I'm waiting for the arrival of James Hogan's classic pulp SF novel, which I first read more than thirty years ago.

Amazon link

I mentioned "The Genesis Machine" recently in this post.

Monday, April 10, 2017

What did his mother call Jesus?

That old Texan joke: "If the English of the King James Bible was good enough for Jesus, it's good enough for me!"
I understand the local priest's attachment to the Latin Mass. Two thousand years of cultural continuity; the kinship of fellow-priests down the centuries reciting the same liturgy.

Outreach into the vernacular must have seemed like wanton cultural vandalism.

Still, one should be mindful of the culture within which one basks. The Mother of God did not call her son Jesus (Ἰησοῦς in the language of the hated Empire) - it was the Aramaic Yeshua.

Friday, April 07, 2017

Cruise missiles in the wilderness of mirrors

En route to Syria

I almost completely agree with the points below made by Scott Adams in his latest blog post.
" ... But let’s say the world believes Assad or a rogue general under his command gassed his own people. What’s an American President to do? If Trump does nothing, he appears weak, and it invites mischief from other countries. But if he launches 59 Tomahawk missiles at a Syrian military air base base within a few days, which he did, the U.S. gets several benefits at low cost:

1. President Trump just solved for the allegation that he is Putin’s puppet. He doesn’t look like Putin’s puppet today. And that was Trump’s biggest problem, which made it America’s problem too. No one wants a president who is under a cloud of suspicion about Russian influence.

2. President Trump solved (partly) for the allegation that he is incompetent. You can hate this military action, but even Trump’s critics will call it measured and rational. Like it or not, President Trump’s credibility is likely to rise because of this, if not his popularity. Successful military action does that for presidents.

3. President Trump just set the table for his conversations with China about North Korea. Does China doubt Trump will take care of the problem in China’s own backyard if they don’t take care of it themselves? That negotiation just got easier.

4. Iran might be feeling a bit more flexible when it’s time to talk about their nuclear program.

5. Trump’s plan of a Syrian Safe Zone requires dominating the Syrian Air Force for security. That just got easier.

6. After ISIS is sufficiently beaten-back, the Syrian government will need to negotiate with the remaining entities in Syria to form a lasting peace of some sort that keeps would-be refugees in place. Syria’s government just got more flexible. It probably wants to keep the rest of its military.

7. Israel is safer whenever an adversary’s air power is degraded.

On the risk side of the equation, we have the possibility of getting into war with Russia. I’d put those odds at roughly zero in this case because obviously the U.S. warned Russia about the attack. That means we knew their reaction before we attacked. And it was a measured response of the type Putin probably respects. I expect Russia to complain a lot but continue to partner with the U.S. against ISIS.

If it turns out that the sarin gas attack that sparked this military action didn’t come from Assad, it doesn’t much matter. President Trump will bank all of the benefits above even if the attack turns out to be a hoax. We know Assad had some chemical weapons at one point, and probably used them. No one will be crying for Assad if the attack was unnecessary. And realistically, the public will never be 100% sure who was behind the attack. ..."
I would just add this. The liberal-dominated media still don't get Trump. They persist in thinking he's operating according to some overarching grand theory. They had him pegged as a Putin-puppet, then as an America-first isolationist and now a latter-day neocon.

Since they approve of this last position they're breathing a collective sigh of relief.

None of this is true. Ordinary folk realise that Trump is not an intellectual, he's just out to defend US interests. He instinctively knows that the way you do that is to operate from a position of strength and not let your opponents diss you.

Liberals, who never think like that, will never understand Trump. But I suspect that the Syrian military are also behind the curve.

Unlike western liberals, they should have no problem figuring out Trump's 'Big Man' politics - those are de rigueur in the Middle East. But they're so parochial over there, so insular, that until yesterday they still hadn't figured out that the game changed when Obama left the building.

But unlike the liberals, they'll be fast learners.

Thursday, April 06, 2017

Neural lace

Elon Musk tweaks the technosphere again (from The Economist):
"Ever since ENIAC, the first computer that could be operated by a single person, began flashing its ring counters in 1946, human beings and calculating machines have been on a steady march towards tighter integration. Computers entered homes in the 1980s, then migrated onto laps, into pockets and around wrists. In the laboratory, computation has found its way onto molars and into eyeballs. The logical conclusion of all this is that computers will, one day, enter the brain.

"This, at least, is the bet behind a company called Neuralink, just started by Elon Musk, a serial technological entrepreneur. Information about Neuralink is sparse, but trademark filings state that it will make invasive devices for treating or diagnosing neurological ailments. Mr Musk clearly has bigger plans, though. He has often tweeted cryptic messages referring to “neural lace”, a science-fictional concept invented by Iain M. Banks, a novelist, that is, in essence, a machine interface woven into the brain. ..."
Neural lace was Iain M. Banks's technology as used by citizens in "the Culture" to 'telepathically' converse with Culture Minds, the AIs which actually ran their civilisation. In one novel, a Mind mentions matter-of-factly that no more exquisite torture device has ever been conceived of.

Anatoly Karlin, quoting Nick Bostrom, is profoundly skeptical:
"We do not need to plug a fiber optic cable into our brains in order to access the Internet. Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.

"Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis–—which is just another way of saying artificial general intelligence.

"Yet if one had a human-level AI, one could dispense with neurosurgery: a computer might as well have a metal casing as one of bone."
This reminds me of all the ways experts tell you that something can't be done.

My vote goes with James P. Hogan, who in his excellent and seminal SF novel, "The Genesis Machine", described a brain-computer symbiosis delivering enhanced imagination/visualisation.

Amazon link

A system of equations would be projected onto an interior whiteboard in your mind. The computer would generate a VR-type solution space which you could navigate freely while discussing subtleties and points of interest with the accompanying AI system. It was, if you like, virtual reality in the head - the abstract rendered concrete.

And that is definitely going to happen.

Wednesday, April 05, 2017

Globalisation, then neoliberalism, finally political correctness

Charles Murray at Middlebury College, March 2017

From the Wikipedia bio:
"William Deresiewicz is an American author, essayist, and literary critic. Born in 1964 in Englewood, New Jersey, Deresiewicz attended Columbia University before teaching English at Yale University from 1998-2008.

"He is the author of A Jane Austen Education, How Six Novels Taught me About Love, Friendship, and the Things that Really Matter (Penguin Press, 2011) and Excellent Sheep: The Miseducation of the American Elite and the Way to a Meaningful Life (Free Press, 2014). ..."
Deresiewicz has a from-the-heart (if overlong) essay on the current wave of 'political correctness' at US colleges. Check it out, or the shorter summary by Steve Sailer here.
"Elite private colleges are ideologically homogenous because they are socially homogeneous, or close to it. Their student populations largely come from the liberal upper and upper-middle classes, multiracial but predominantly white, with an admixture of students from poor communities of color - two demographics with broadly similar political beliefs, as evidenced by the fact that they together constitute a large proportion of the Democratic Party base.

"As for faculty and managerial staff, they are even more homogenous than their students, both in their social origins and in their present milieu, which tends to be composed exclusively of other liberal professionals—if not, indeed, of other liberal academics.

"Unlike the campus protesters of the 1960s, today’s student activists are not expressing countercultural views. They are expressing the exact views of the culture in which they find themselves (a reason that administrators prove so ready to accede to their demands). …

"The term political correctness, which originated in the 1970s as a form of self-mockery among progressive college students, was a deliberately ironic invocation of Stalinism. By now we’ve lost the irony but kept the Stalinism - and it was a feature of Stalinism that you could be convicted for an act that was not a crime at the time you committed it. So you were always already guilty, or could be made to be guilty, and therefore were always controllable.

"You were also always under surveillance by a cadre of what Jane Austen called, in a very different context, “voluntary spies,” and what my students called the PC police. Regimes of virtue produce informants (which really does wonders for social cohesion). …

"There is one category that the religion of the liberal elite does not recognize - that its purpose, one might almost conclude, is to conceal: class. Class at fancy colleges, as throughout American society, has been the unspeakable word, the great forbidden truth. ... It has long struck me in leftist or PC rhetoric how often “white” is conflated with “wealthy,” as if all white people were wealthy and all wealthy people were white.

"In fact, more than 40 percent of poor Americans are white. Roughly 60 percent of working-class Americans are white. Almost two-thirds of white Americans are poor or working-class. Altogether, lower-income whites make up about 40 percent of the country, yet they are almost entirely absent on elite college campuses, where they amount, at most, to a few percent and constitute, by a wide margin, the single most underrepresented group. ...

"The exclusion of class also enables the concealment of the role that elite colleges play in perpetuating class, which they do through a system that pretends to accomplish the opposite, our so-called meritocracy. Students have as much merit, in general, as their parents can purchase (which, for example, is the reason SAT scores correlate closely with family income). The college admissions process is, as Mitchell L. Stevens writes in Creating a Class, a way of “laundering privilege.”
Because political correctness is presented by its advocates in moral terms, it's sometimes difficult to understand it as the optimised ideology of globalisation - a form of capitalism which really got going in the 1980s-90s, powered by Internet-mediated supply-chains and the economic rise of China.

More precisely, political correctness provides the moralistic framework, neoliberalism the ideology, while globalisation captures the underlying economic dynamics:
"Economic neoliberalism is an economic theory and an ideological conviction that supports maximizing the economic freedom for individuals and thus reducing the amount of state intervention to the bare minimum.

"In this regards, it does advocate the elimination of government-imposed restrictions on transnational movements of goods, capital and people .... However, although these aspects are considered important aspects of globalization, this essay argues firstly that globalization is a much richer and multi-dimensional process that extends beyond transnational economic transactions. ..."
Following Marx, we should understand that causality has worked in the direction indicated by the title of this piece: first the economics, then the political justifications, finally the moral underpinnings. It is ironic that the political left now provides ideological cover for global capital.

Monday, April 03, 2017

"Where is heaven, Father?"

Some seven or eight years ago I attended a Catholic event with Clare at the church in Andover. I think it was a lecture on some theological issue or other, or maybe a talk on the Missions. In any event it was a dark night and the church was packed.

At some stage in the evening the priest, an elderly, kindly man with a poor public speaking style, took questions from the audience: a kind of 'ask me anything'.

A quavering voice - evidently an elderly Irish woman - piped up from behind us: "Father, where is heaven?"

My jaw dropped: in this day and age?

The priest was, however, up to the job. He explained that previous orthodoxy had held that heaven was beyond the sky. However, NASA had sent a great many rockets and heaven was nowhere to be seen up there. Yet modern physics was very strange, with quantum effects between the atoms which no-one understood. Possibly it was here that heaven was located.

There was no follow-up question.


It made me think though. If heaven is nowhere to be found in this spacetime universe, could it really be found in Hilbert space? Perhaps in the primordial substance before spacetime geometry had ever congealed?

Let's ask Carlo Trugenberger: "Emergent 4D Quantum Geometry from Critical Space-Time Graphs".
"After a brief introduction to the problem of quantum gravity and the main solution approaches on the market I will focus on my new proposal of a quantum gravity model in which the fundamental degrees of freedom are information bits for both discrete space-time points and links connecting them.

"The Hamiltonian is a very simple network model consisting of a ferromagnetic Ising model for space-time vertices and an antiferromagnetic Ising model for the links. As a result of the frustration between these two terms, the ground state self-organizes as a new type of low-clustering graph.

"I will provide ample evidence that this simple network model has two critical points, an ultraviolet fixed point corresponding to fluctuating information bits and an infrared fixed point corresponding to an emergent geometric phase with space-time dimension 4.

"The model predicts that, at small scales, the space-time dimension decreases until space-time itself completely dissolves into a disordered soup of information bits.  The large-scale dimension 4 of the universe is related to the upper critical dimension 4 of the Ising model and to illustrate the dimension decoupling mechanism I will solve a toy version of the model in the mean field approximation.

"At finite temperatures the universe graph emerges without big bang and without singularities from a ferromagnetic phase transition in which space-time itself forms out of a hot soup of information bits."
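Trugenberger's abstract mentions solving a toy version of the model in the mean-field approximation. For a plain ferromagnetic Ising model, that approximation boils down to the self-consistency equation m = tanh(βJzm): below the critical coupling only the disordered 'soup' survives; above it, an ordered phase appears. A minimal sketch (illustrative only - this is the textbook mean-field Ising model, not Trugenberger's actual network Hamiltonian):

```python
import math

def mean_field_magnetization(beta_J_z, tol=1e-12, max_iter=10_000):
    """Solve m = tanh(beta * J * z * m) by fixed-point iteration.

    beta_J_z is the product of inverse temperature, coupling strength
    and coordination number. Above the critical value 1 an ordered
    solution m > 0 appears; below it only m = 0 survives.
    """
    m = 0.5  # start from a partially ordered guess
    for _ in range(max_iter):
        m_new = math.tanh(beta_J_z * m)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Below the critical point: a disordered 'soup', m ~ 0
print(round(mean_field_magnetization(0.5), 6))
# Above it: an ordered (geometric) phase emerges, m > 0
print(round(mean_field_magnetization(2.0), 6))
```

The phase transition in the toy model is the analogue of the abstract's "space-time itself forms out of a hot soup of information bits".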

So heaven might be 'a hot soup of information bits' (but perhaps that's the other place). Perhaps we could link this idea with Eternal Inflation to get a contemporaneous theological ontology.


How do we know that our current universe has three spatial dimensions? Because Clare needed a minimum of three strings to deploy her self-made bird feeder (two pie dishes from Poundland).

Three strings = three spatial dimensions

The designer shows her grasp of string theory

As I carefully explained to her, the argument works best in polar coordinates.

Saturday, April 01, 2017

Google Translate: English to predicate logic (please!)

Did you see the  "Missing: google"?


A big problem with English (natural language generally) is that it doesn't come equipped with an explicit set of inference rules. Consequently, when someone uses natural language to communicate with an AI system, the system can't readily connect the utterance to its store of knowledge. If only natural languages were like formal languages, which have proper inference rules and well-defined semantics. The thought that secretly they are was the intuition of Richard Montague*. But he was misguided.

Any AI natural language understanding system tries to transform the raw material of human language into something it can use, something more inferentially tractable.  Usually that doesn't work too well, and even the latest statistical systems (which do well in surface-level speech-recognition and translation) show scant abilities to understand.

It's as well to remind ourselves just why natural languages are so unhelpful to AI designers. It's because they are a highly-optimised solution to a situated communications problem. Speech is a low-bandwidth, linear and slow channel for communicating time-critical thoughts. So speech is highly optimised to use every available constraint to speed up meaning transfer:

  • volume, pitch, timbre and tone of voice
  • shared and predictive knowledge of the conversational partner
  • emotional cues
  • physical gesturing and facial expressions
  • environmental situation and context 
  • ...

Researchers are quite aware of this, of course. The topic area is called Pragmatics and it's hived off as a separate sub-discipline .. because it seems to require way too much modelling of the conversing agents in their specific environment, culture and history. In short, it's too hard.

But by abstracting away these additional constraints which channel and constrain meaning, we make the semantic understanding problem way too hard. Which is why we can't solve it.

A Google Translate-style system which mapped between a natural language and a formal language (with well-defined inference rules and semantics) would nevertheless be a boon to the designers of conversational AI systems, including chatbots. But Google doesn't have a corpus of First-Order Predicate Calculus sentences translationally-linked to English, so its deep learning systems can't crunch the data and add FOPC to their list of languages. Projects such as Cyc have attempted to do this stuff by hand .. with surprisingly little impact.
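Just to make the target concrete, here's a toy, hand-written English-to-FOPC mapper covering exactly one sentence pattern (the function and its coverage are invented for illustration - a real corpus would need millions of such pairs, which is precisely what's missing):

```python
import re

def to_fopc(sentence):
    """Map one toy English pattern to a First-Order Predicate Calculus
    string. Anything outside the pattern is simply not covered."""
    m = re.fullmatch(r"[Ee]very (\w+) (\w+) a (\w+)", sentence.strip())
    if m:
        noun, verb, obj = m.groups()
        return (f"forall x. {noun}(x) -> "
                f"exists y. ({obj}(y) & {verb}(x, y))")
    return None

print(to_fopc("Every farmer owns a donkey"))
# forall x. farmer(x) -> exists y. (donkey(y) & owns(x, y))
```

Hand-writing rules like this one is exactly the Cyc-style approach; the dream would be to learn the mapping from data instead.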

Again the way forward is embodied robotics and human baby conversational emulation.


* In a weird reprise of Alan Turing's fate, Wikipedia reports that Richard Montague 'died violently in his own home; the crime is unsolved to this day. Anita Feferman and Solomon Feferman argue that he usually went to bars "cruising" and bringing people home with him. On the day that he was murdered, he brought home several people "for some kind of soirée", but they instead robbed his house and strangled him.'

He was 40.

Friday, March 31, 2017

Naive generate-and-test won't hack it

When I was young I toyed with the following idea.

Pretty much any concept can be adequately expressed in a mini-essay of a thousand words.

Simply generate all possible articles of a thousand words and somewhere you will find the answer to all problems.

Want the design of a stardrive engine? Immortality? The theory of perfect governance?

It's all in there somewhere.


How many essays though? Apparently the average educated speaker of English knows about 40,000 words. So for our first estimate, we could simply raise 40,000 to the power of 1,000 .. but most of those 10^4,602 essays would be wildly ungrammatical. We can do better.

I reviewed a sample text: the introductory quote in Peter Seibel's "Practical Common Lisp".

The first five sentences comprised 100 words in total which broke down into:
  • nouns: 20%
  • verbs: 15%
  • adjectives: 10%
  • others: 55%
A certain amount of hand-wavy rounding of course. Assume we adopt the very restrictive constraint of exactly one syntactic structure for the entire set of essays, then the total number reduces to a product of:
(number-of-English-words-in-category)^(number-of-words-of-this-category-in-essay)
8,000^200 * 6,000^150 * 4,000^100 * 22,000^550 ≈ 10^4,092
That's still a big number*. Suppose only one 'essay' in a billion was semantically sensible and we could read one essay per second. That's 10^4,083 seconds .. or 3 * 10^4,066 billion years.

The merits of a compact notation.
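The arithmetic is easy to check mechanically by working in logarithms. Using the same hand-wavy category counts, slightly more careful arithmetic gives nearer 10^4,096 than 10^4,092 - a distinction, as the footnote says, without a practical difference:

```python
import math

# Vocabulary size and number of word-slots per syntactic category,
# as assumed in the text above.
categories = {
    "nouns":      (8_000,  200),
    "verbs":      (6_000,  150),
    "adjectives": (4_000,  100),
    "others":     (22_000, 550),
}

log10_essays = sum(slots * math.log10(vocab)
                   for vocab, slots in categories.values())

# Only one essay in a billion is sensible; we read one per second.
log10_seconds = log10_essays - 9
seconds_per_billion_years = 60 * 60 * 24 * 365.25 * 1e9
log10_billion_years = log10_seconds - math.log10(seconds_per_billion_years)

print(f"essays  ~ 10^{log10_essays:.0f}")
print(f"reading ~ 10^{log10_seconds:.0f} seconds"
      f" ~ 10^{log10_billion_years:.0f} billion years")
```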


Exhaustive search through the space of all possible candidates isn't a very good way of proceeding. And this has important implications for DARPA's third wave - contextual AI - which I wrote about previously.

In his excellent exposition (YouTube), John Launchbury highlighted the very large number of training instances needed to force convergence for today's artificial neural networks. By comparison, children learn new concepts from very few examples.

John Launchbury's proposed solution was - correctly - to identify additional constraints which might dramatically collapse the search space. His chosen example showed the benefits of adding the dynamics of handwriting characters to the resultant bitmaps normally used for training. It turns out that if you consider how the image might have been created, it makes recognition a lot easier.

It's not hard to identify the extra constraints about the world which children use. They interact with new objects, touch them, throw them, bite them and try to break them. Thus are acquired notions of 3D structure, composition and texture to augment what their visual systems are telling them.

I really do think that a high priority should be given to embodied robotics in the next wave of AI research.


Another example John Launchbury discussed was the Microsoft Internet-chatbot "Tay".

Apparently this was the least-offensive tweet Launchbury could find. But what would an AI have to know about contemporary mores to self-reject statements like that?

For extra credit, discuss the 'situated cognition' thesis that only through active and corporeal participation in the social world can one truly understand social concepts.

Particularly emotionally-charged ones.


* Since
(i)  I don't consider all the syntactically-permissible permutations of the ways in which nouns, adjectives, verbs and others could be mixed up in the thousand words, while

(ii)  the size of the 'others' vocabulary is likely to be way smaller than 22,000 (so if, for example, the 'others' vocabulary size was 2,200, this would reduce the overall essay-set size by a factor of 10^550 - a distinction, however, without a practical difference),
this calculation counts as pretty bogus. I only wanted to demonstrate, however, that no matter how you cut it, the numbers involved are simply ginormous.

DARPA: three waves of AI

High production values for DARPA's US Military roadmap and vision for AI (February 2017).

This will be the basis of funding going forward. The images below are taken from this slide-pack, which is more sophisticated than anything I've seen from the likes of Accenture.

Click on any of the pictures to make larger - or better, review the entire slide-set.

Although this 'three wave' model is not too surprising, it's still an accurate view as to where research is heading.


If human beings are taken as exemplars of neural nets which can explain their own, contextual operation, it's worth noting that such explanations have a curious character.

No human can explain their own sub-conscious neural processes. If asked to explain how you know that a picture of a cat is indeed that of a cat, you are not going to elucidate details of early visual processing in your visual cortex.

Instead, you are going to traffic in high-level, symbolic descriptions of putative intermediate stages in scene interpretation. The talk will be of features such as fur, shape, the environment of said animal.

These intermediate-level symbolic descriptions are remote indeed from the actual neural processes which it is claimed implemented them .. and indeed will have only a contingent (although highly correlated if accurate) relationship with them.

Self-deception is never far away in the third wave!


If you have sixteen minutes, John Launchbury's presentation of DARPA's strategy is excellent.

Interestingly, John Launchbury is British.

Thursday, March 30, 2017

Open systems meet closed automation

Let me start with this rather intriguing story (via Bruce Schneier).

"Prior to World War II, Abraham Wald was a rising mathematician in Europe. Unable to obtain an academic research position in Austria due to his Jewish heritage, Wald eventually made his way to the U.S. to become one of the most important statisticians of the 20th century.

"One of Wald’s most prominent works was produced for the U.S. government’s World War II-era Statistical Resource Group. The project examined aircraft that had returned from their combat missions and the locations of armor on the planes. Placement was, of course, no trivial matter. Misplaced armor would result in a negatively balanced, heavier and less maneuverable plane, not to mention a waste of precious wartime resources.

"Tasked with the overall goal of minimizing Allied aircraft losses by placing additional armor in strategic locations on the plane, Wald challenged the natural instincts of military commanders. Conventional wisdom suggested that the planes’ survival rates might benefit from additional armor placed in the areas that suffered the highest volume of direct hits. But Wald found that was not the case.

"Leveraging data stemming from his examinations of planes returning from combat, Wald made a critical recommendation based on the observation of what was not actually visible: He claimed it was more important to place armor on the areas of the plane without combat damage (e.g., bullet holes) than to place armor on the damaged areas. Any combat damage on returning planes, Wald contended, represented areas of the plane that could withstand damage, since the plane had returned to base.

"Wald reasoned that those planes that were actually hit in the undamaged areas he observed would not have been able to return. Hence, those undamaged areas constituted key areas to protect. A plane damaged in said areas would not have survived and thus would not have even been observed in the sample. Therefore, it would be logical to place armor around the cockpit and engines, areas observed as sustaining less damage than a bullet-riddled fuselage.

"The complex statistical research involved in these and Wald’s related findings led to untold numbers of airplane crews being saved, not only in World War II, but in future conflicts as well."
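Survivorship bias is easy to reproduce in a toy simulation. Here hits land uniformly across four (invented) plane sections, but hits to the engine or cockpit usually down the plane - so the returned-plane sample shows exactly the inverted pattern Wald saw:

```python
import random

random.seed(42)

SECTIONS = ["fuselage", "wings", "engine", "cockpit"]
# Invented probability that a hit in each section downs the plane.
LETHALITY = {"fuselage": 0.1, "wings": 0.1, "engine": 0.8, "cockpit": 0.7}

observed_hits = {s: 0 for s in SECTIONS}   # hits seen on returned planes
actual_hits = {s: 0 for s in SECTIONS}     # hits across all sorties

for _ in range(10_000):
    hit = random.choice(SECTIONS)          # hits land uniformly
    actual_hits[hit] += 1
    if random.random() > LETHALITY[hit]:   # plane survives and returns
        observed_hits[hit] += 1

# Hits land uniformly, yet returned planes show few engine/cockpit hits:
# the damage you *don't* see marks the places that need armour.
for s in SECTIONS:
    print(f"{s:>8}: actual {actual_hits[s]:>5}, observed {observed_hits[s]:>5}")
```

Naively armouring where the observed hits cluster would protect precisely the sections that can already take the damage.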

As designers we always have a theory of our proposed artefact in its intended environment. Sometimes we capture the theory in a formal specification, sometimes it's implicit in the examples we feed to some artificial neural net, frequently it's some fuzzy understanding we incorporate into a plain-language requirements document plus some test data.

In any event, the final engineered artefact embodies a theory - the theory of the environment in which it works correctly. That environment is often the real world and here we hit a problem: the real world is not a precisely-specified closed system*. Inevitably the artefact will encounter an event which is out of the envelope of its design - and then it will fail.

A good example of this is driving. Here, you are the artefact. Initially you learn in structured lessons how to control the car and tactics to safely navigate the streets.

As you gain experience, you statistically encounter fewer, rarer anomalous events. If you are lucky, your consequential mistakes will not be too serious. You update your protocol and become a better driver. But you will never be perfect.

Driving is an open system. There are (porous) boundaries around the theory of driving but as all experienced drivers know, that theory incorporates a great deal of real-world social knowledge - it's more than seeing the white lines in the rain. **


When we classify a human social role as routine, we're saying that the wider system into which the role is enrolled is effectively closed and can be pre-specified. No real systems are truly closed so we always provide an escalation route to a competent (ie more informed) authority. For truly routine roles, we don't expect that escalation to occur too frequently, or to be problematic when it does.

Bruce Schneier's excellent article is about countering cyber-attacks. This is far from routine. The adversary is using intelligence, novel tools and unfixed vulnerabilities to get you. That's pretty much the definition of an open system. Schneier describes the problem like this:
"You can only automate what you're certain about, and [...] when an uncertain process is automated, the results can be dangerous."
The right answer is to use automated systems within manageably closed subsystems (like antivirus routines) within the broader oversight of a computer-augmented human response team.

Perhaps one day we will have human-socialised AIs which have the intuitions, general knowledge and motivational insights which humans possess, and then we can hand things over to those said AIs, confident they will make no more mistakes than we would in those incredibly challenging not-sufficiently-closed systems.


*   Arguably it is from the point of view of modern physics - but that doesn't buy you anything.

** Here's a review about the implications for driverless cars.

Wednesday, March 29, 2017

Bob Monkhouse's top three jokes

During his lifetime comedian Bob Monkhouse was widely disdained for a public persona of cheesy smarminess. Something which, as an ENTP,* he shared with Tony Blair.

In a generation dominated by working class comedic vulgarity, his middle-class intelligence and sophistication was evident. Consequently he was not popular with his peers.

Bob Monkhouse

For me what saved him was his sense of self-deprecating irony. Here are three of his best jokes which - despite familiarity - are still pretty good.

"They laughed when I said I was going to be a comedian ... They're not laughing now."

"I can still enjoy sex at 74 - I live at 75, so it's no distance."

"I want to die like my father, peacefully in his sleep, not screaming and terrified like his passengers."



* ENTPs don't do (tertiary) Extraverted Feeling at all well: (Myers-Briggs personality theory).


If the connection between brain architecture and personality type interests you, take a look at this post. I've been reviewing recent results from the Human Connectome Project and my remarks back then seem to stand up pretty well.

Tuesday, March 28, 2017

Roger Atkins: Mind Design notebook

Roger Atkins's career path from contracted neural network designer to chief designer at Mind Design was not a smooth one. His work was marked by dead ends, false starts and much groping around for insights. Here are extracts from his early notebooks.


" ... How much progress have we really made since the dawn of our discipline?

Back in 1959, Lettvin, Maturana and McCulloch wrote their famous paper: "What the Frog's Eye Tells the Frog's Brain".
'The frog does not seem to see or, at any rate, is not concerned with the detail of stationary parts of the world around him. He will starve to death surrounded by food if it is not moving. His choice of food is determined only by size and movement. He will leap to capture any object the size of an insect or worm, providing it moves like one. He can be fooled easily not only by a bit of dangled meat but by any moving small object.

'His sex life is conducted by sound and touch. His choice of paths in escaping enemies does not seem to be governed by anything more devious than leaping to where it is darker. Since he is equally at home in water and on land, why should it matter where he lights after jumping or what particular direction he takes? He does remember a moving thing providing it stays within his field of vision and he is not distracted.'
Being anthropomorphic, we think the frog sees what we see. Instead, the frog 'sees' what evolution has designed its visual apparatus to process. The rest of their paper describes the neural net which implements the frog's visual task.

In 1982 David Marr's famous book "Vision" was posthumously published. Marr explained in mathematical terms the formal theory of visual scene recognition, starting from raw image-data, and exploiting regularities in the world. Laplacian of Gaussian convolution was followed by edge-detection and finally 3D scene acquisition. The theory could be implemented by computer code .. or by neural nets.

Marr's levels of abstraction and of visual processing (NN is neural net)

Neural networks are, in the most general sense, engineering, not science. If we take the common task of scene recognition, we start from an image bitmap which we process at a low level using convolutional methods to extract mid-level features, and then group these to reconstruct a high-level scene description. Although the neural net is doing all this by using and/or adjusting weights between its 'neurons', we can capture the overall data structuring and processing using higher-level formalisms.

If the original bitmap is really a matrix of numbers, the set of mid-level features can be more clearly expressed as a conjunction of mid-level predicates {edge(...), vertex(...)} while the high-level scene description could use predicate logic to explicitly represent discrete objects, attributes and relationships.

The more formal and mathematical descriptions/specifications are nevertheless implemented by weightings and connectivity in the neural net.
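The first of Marr's stages is easy to sketch. A minimal, pure-NumPy illustration (not anyone's production code) builds a Laplacian-of-Gaussian kernel and convolves it with a synthetic image containing one vertical edge; the response is flat in uniform regions and swings through zero at the edge, which is where edge-detection then looks:

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel (Marr's first stage)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()   # zero-sum: flat regions give zero response

def convolve2d(image, kernel):
    """Naive 'valid' 2D convolution, enough for a demonstration."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A synthetic image: dark left half, bright right half (a vertical edge).
img = np.zeros((20, 20))
img[:, 10:] = 1.0

response = convolve2d(img, log_kernel())
# Response is ~0 in the flat regions and large near the edge.
print(np.abs(response[6, :]).round(2))
```

A neural net can learn kernels like this one; the point of Marr's levels is that the same computation can be specified mathematically, independently of whether code or 'neurons' implement it.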

Neural nets do inference by linkage activation. If A → B, then activation in areas of the neural net associated with A causes activation in areas associated with B with probability 1. Less decisive or more ambiguous weightings yield fuzzier inferences.

Similarly, modal concepts such as 'Believes(A, φ)' - as in an agent A believing the proposition φ - are represented by the neural net as an activation in the area representing the agent A being associated with another neural area representing the situation which φ describes. The activation link between those two areas captures the notion of believing, but it's a little bit mysterious as to how that believes-type link ever got learned .. perhaps it's innate?
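A toy spreading-activation sketch makes the contrast concrete: a weight of 1.0 behaves like strict A → B inference, while weaker weights give graded, fuzzier conclusions (the concepts and weights here are invented):

```python
# Nodes are concepts, weighted edges are learned associations.
links = {
    "smoke":  [("fire", 1.0)],     # smoke -> fire, certain
    "fire":   [("danger", 0.9)],
    "danger": [("flee", 0.6)],
}

def propagate(start, threshold=0.5):
    """Spread activation from a clamped node; return activated nodes."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for target, weight in links.get(node, []):
            new = activation[node] * weight
            if new > activation.get(target, 0.0) and new >= threshold:
                activation[target] = new
                frontier.append(target)
    return activation

print(propagate("smoke"))
# 'fire' activates fully, 'danger' at 0.9, 'flee' at ~0.54
```

Raising the threshold prunes the fuzzier conclusions first, which is one crude way of modelling less decisive inference.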

Proceeding in this way we can imagine a neural net which creates effective representations of its environment (like the frog), which can create associations between conditions and actions, which can signal actions and thus control an effective agent in the world.

So far absolutely none of this is conscious.


Thinkers as far back as Karl Marx have believed that consciousness is a condition, and by-product, of social communication. To be strictly accurate, Marx was not talking of the introspective consciousness of the psychologist, but of consciousness as a kind of revealed preference: that which is revealed through the actions of the masses.


I imagine human psychology to be implemented as a collection of semantic networks.

In the framework of neural networks, we're simply talking about a set of modularised, 'trained' neural net areas which link and communicate with each other through appropriate weights. But we can capture more of the 'aboutness' of these mini-networks by modelling them as semantic networks: semantic-net nodes are mini-theories: little collections of facts and rules; links create associations between nodal mini-theories representing relationships such as actions, or believing, knowing or wanting.

I imagine one's concept of oneself as being implemented as a large set of semantic networks capturing one's life-history memories, one's self-model of typical behaviours and future plans.

Roger Atkins brain-model of himself and girl-friend Jill

When you think about someone else that person is also modelled as a collection of semantic networks representing much the same thing. I understand cognitive processes as metalanguage activities: operations over semantic networks which strengthen or weaken link-associations; add, modify or delete nodes, that kind of thing.
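A minimal sketch of this architecture - nodes as mini-theories, typed weighted links, and metalanguage operations acting on the network itself (all names and numbers are invented for illustration):

```python
# Nodes are mini-theories: little bags of facts. Links are typed,
# weighted associations. The metalanguage operations act *on* the
# network rather than within any node.
class Node:
    def __init__(self, name, facts=None):
        self.name = name
        self.facts = set(facts or [])

class SemanticNet:
    def __init__(self):
        self.nodes = {}
        self.links = {}   # (src, dst) -> {"type": str, "weight": float}

    def add_node(self, name, facts=None):
        self.nodes[name] = Node(name, facts)

    def link(self, src, dst, link_type, weight=0.5):
        self.links[(src, dst)] = {"type": link_type, "weight": weight}

    # A metalanguage operation: strengthen an association.
    def strengthen(self, src, dst, amount=0.1):
        link = self.links[(src, dst)]
        link["weight"] = min(1.0, link["weight"] + amount)

# Roger's model of himself and of Jill, as two nodal mini-theories.
net = SemanticNet()
net.add_node("self", {"likes(coffee)", "plans(finish_notebook)"})
net.add_node("jill", {"likes(hiking)"})
net.link("self", "jill", "believes", weight=0.7)
net.strengthen("self", "jill")   # association now ~0.8
```

Adding, modifying and deleting nodes would be further operations in the same metalanguage spirit.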

This is all very conventional but it does take us to the outer limit of design and theorising.
  • Where in this architecture is the sense of personal consciousness? 
  • Where is the sense of active awareness of one's environment? 
  • Where is pain and what would it even mean for such an architecture to be in pain?

There is an engineering approach to 'the hard problem'. We imagine a system which we think would (for example) be in pain and ask how it works.

First the pain sensors fire, then as a consequence the pain nodes in the 'semantic net' higher in the chain activate.  In turn, they invoke avoidant routines. In a lower level animal that directly generates activities designed to run or get away from the pain stimulus.

However in social creatures like ourselves, amenable to social coordination, this immediate reaction should be suspended because it could be in conflict with other plans generated, for example by 'duty'.

From an engineering point of view this suggests a multilevel system: a higher level neural network supervising low level systems.
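The two-level arrangement can be sketched as a reflex layer whose output a social supervisor may veto (the layer names, thresholds and goals are all invented for illustration):

```python
def reflex_layer(pain_signal):
    """Low-level layer: pain above threshold triggers withdrawal."""
    return "withdraw" if pain_signal > 0.3 else None

def supervisor(proposed_action, social_goals):
    """Higher level: suppress the reflex if a duty-type goal is active."""
    if proposed_action == "withdraw" and "hold_position" in social_goals:
        return "suppress_and_endure"
    return proposed_action or "continue"

# A solitary animal: the reflex acts directly.
print(supervisor(reflex_layer(0.8), social_goals=set()))
# prints: withdraw

# A social agent under a duty-like constraint: the reflex is overridden.
print(supervisor(reflex_layer(0.8), social_goals={"hold_position"}))
# prints: suppress_and_endure
```

The sketch is, as the text says, entirely cognitive: nothing in it explains why suppressing the withdrawal should *hurt*.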

This is hardly very original however, and worse, it's all cognitive.

The higher 'social' level control system is semantically rich - but it's all cognitive and affect-free

We never get insight into how emotions or experiences emerge from this kind of architecture. We always know that there's something missing.

We say to ourselves: in the end it's all neurons. Consciousness seems to be something which is not architecturally that far from the other things the brain is doing. It's easy to divert through day-dreaming or inattention, or to turn it off with anaesthesia.

From an evolutionary/phenotype point of view the conscious brain doesn't seem to be some tremendously new thing, or a new kind of thing and yet somewhere in this apparently small cortical delta, this small change in brain architecture, a whole new phenomenon somehow enters the game.

And nobody at all can figure out how that could be the case."


As we know, Roger Atkins went on to design Jill/AXIS - and yet still artificial self-awareness/ consciousness was not intellectually cracked. The designers nervously waited upon it as an emergent phenomenon.