Wednesday, November 30, 2016

The secondary market in Replikas

From section 1.2.2 of the new book, "Deep Learning",
"As of 2016, a rough rule of thumb is that a supervised deep learning algorithm will generally achieve acceptable performance with around 5,000 labeled examples per category, and will match or exceed human performance when trained with a dataset containing at least 10 million labeled examples."
The personal mind-clone that Replika offers (eventually!) is trained at c. 40 text messages per day. At that rate, the supervised learning hits the 5,000-example target in 5,000/40 = 125 days - roughly four months.
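A back-of-envelope check of that arithmetic, using the figures quoted above:

```python
# The book's 5,000-labeled-examples rule of thumb, at Replika's reported
# training rate of roughly 40 text messages per day.
examples_needed = 5_000
messages_per_day = 40
days = examples_needed / messages_per_day
print(days, days / 30)   # 125.0 days -- call it four months
```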

Replika will soon have a vast collection of human-conversation chatbots, albeit biased towards geeky types.

I wonder at Replika's business model .. who owns the rights to my replika?*

If it's the company, I see a good business copying these chatbots and selling packaged versions - perhaps in collaboration with a purveyor of modern-day animatronic mannequins - to those seeking companionship.

I'm trying to imagine my aged mother, before her death, parking an Alfie Boe mannequin on the settee, powered by my very own Replika, 'Bede'.

Alfie Boe

I'd say the picture even looks like me ..

---

* From the website:
"Intellectual Property

The Service and its original content, features and functionality are and will remain the exclusive property of Luka, Inc. and its licensors."

Monday, November 28, 2016

New 'Deep Learning' book

Amazon link

At 800 pages this will become the standard introduction to machine learning. It's expensive - £59.95 on Amazon - but you can read it for free (in a cumbersome way) as web pages here.

Update: MIT Press were shortsighted in banning a free PDF. There are copies all over the net (try GitHub). Here's a link, although the file is large (370 MB for an 800-page book).

---

Table of contents.
Website
Acknowledgments
Notation

1 Introduction
1.1 Who Should Read This Book
1.2 Historical Trends in Deep Learning

Part I Applied Math and Machine Learning Basics

2 Linear Algebra
2.1 Scalars, Vectors, Matrices and Tensors
2.2 Multiplying Matrices and Vectors
2.3 Identity and Inverse Matrices
2.4 Linear Dependence and Span
2.5 Norms
2.6 Special Kinds of Matrices and Vectors
2.7 Eigendecomposition
2.8 Singular Value Decomposition
2.9 The Moore-Penrose Pseudoinverse
2.10 The Trace Operator
2.11 The Determinant
2.12 Example: Principal Components Analysis

3 Probability and Information Theory
3.1 Why Probability?
3.2 Random Variables
3.3 Probability Distributions
3.4 Marginal Probability
3.5 Conditional Probability
3.6 The Chain Rule of Conditional Probabilities
3.7 Independence and Conditional Independence
3.8 Expectation, Variance and Covariance
3.9 Common Probability Distributions
3.10 Useful Properties of Common Functions
3.11 Bayes’ Rule
3.12 Technical Details of Continuous Variables
3.13 Information Theory
3.14 Structured Probabilistic Models

4 Numerical Computation
4.1 Overflow and Underflow
4.2 Poor Conditioning
4.3 Gradient-Based Optimization
4.4 Constrained Optimization
4.5 Example: Linear Least Squares

5 Machine Learning Basics
5.1 Learning Algorithms
5.2 Capacity, Overfitting and Underfitting
5.3 Hyperparameters and Validation Sets
5.4 Estimators, Bias and Variance
5.5 Maximum Likelihood Estimation
5.6 Bayesian Statistics
5.7 Supervised Learning Algorithms
5.8 Unsupervised Learning Algorithms
5.9 Stochastic Gradient Descent
5.10 Building a Machine Learning Algorithm
5.11 Challenges Motivating Deep Learning

Part II Deep Networks: Modern Practices

6 Deep Feedforward Networks
6.1 Example: Learning XOR
6.2 Gradient-Based Learning
6.3 Hidden Units
6.4 Architecture Design
6.5 Back-Propagation and Other Differentiation Algorithms
6.6 Historical Notes

7 Regularization for Deep Learning
7.1 Parameter Norm Penalties
7.2 Norm Penalties as Constrained Optimization
7.3 Regularization and Under-Constrained Problems
7.4 Dataset Augmentation
7.5 Noise Robustness
7.6 Semi-Supervised Learning
7.7 Multi-Task Learning
7.8 Early Stopping
7.9 Parameter Tying and Parameter Sharing
7.10 Sparse Representations
7.11 Bagging and Other Ensemble Methods
7.12 Dropout
7.13 Adversarial Training
7.14 Tangent Distance, Tangent Prop, and Manifold Tangent Classifier

8 Optimization for Training Deep Models
8.1 How Learning Differs from Pure Optimization
8.2 Challenges in Neural Network Optimization
8.3 Basic Algorithms
8.4 Parameter Initialization Strategies
8.5 Algorithms with Adaptive Learning Rates
8.6 Approximate Second-Order Methods
8.7 Optimization Strategies and Meta-Algorithms

9 Convolutional Networks
9.1 The Convolution Operation
9.2 Motivation
9.3 Pooling
9.4 Convolution and Pooling as an Infinitely Strong Prior
9.5 Variants of the Basic Convolution Function
9.6 Structured Outputs
9.7 Data Types
9.8 Efficient Convolution Algorithms
9.9 Random or Unsupervised Features
9.10 The Neuroscientific Basis for Convolutional Networks
9.11 Convolutional Networks and the History of Deep Learning

10 Sequence Modeling: Recurrent and Recursive Nets
10.1 Unfolding Computational Graphs
10.2 Recurrent Neural Networks
10.3 Bidirectional RNNs
10.4 Encoder-Decoder Sequence-to-Sequence Architectures
10.5 Deep Recurrent Networks
10.6 Recursive Neural Networks
10.7 The Challenge of Long-Term Dependencies
10.8 Echo State Networks
10.9 Leaky Units and Other Strategies for Multiple Time Scales
10.10 The Long Short-Term Memory and Other Gated RNNs
10.11 Optimization for Long-Term Dependencies
10.12 Explicit Memory

11 Practical Methodology
11.1 Performance Metrics
11.2 Default Baseline Models
11.3 Determining Whether to Gather More Data
11.4 Selecting Hyperparameters
11.5 Debugging Strategies
11.6 Example: Multi-Digit Number Recognition

12 Applications
12.1 Large-Scale Deep Learning
12.2 Computer Vision
12.3 Speech Recognition
12.4 Natural Language Processing
12.5 Other Applications

Part III Deep Learning Research

13 Linear Factor Models
13.1 Probabilistic PCA and Factor Analysis
13.2 Independent Component Analysis (ICA)
13.3 Slow Feature Analysis
13.4 Sparse Coding
13.5 Manifold Interpretation of PCA

14 Autoencoders
14.1 Undercomplete Autoencoders
14.2 Regularized Autoencoders
14.3 Representational Power, Layer Size and Depth
14.4 Stochastic Encoders and Decoders
14.5 Denoising Autoencoders
14.6 Learning Manifolds with Autoencoders
14.7 Contractive Autoencoders
14.8 Predictive Sparse Decomposition
14.9 Applications of Autoencoders

15 Representation Learning
15.1 Greedy Layer-Wise Unsupervised Pre-training
15.2 Transfer Learning and Domain Adaptation
15.3 Semi-Supervised Disentangling of Causal Factors
15.4 Distributed Representation
15.5 Exponential Gains from Depth
15.6 Providing Clues to Discover Underlying Causes

16 Structured Probabilistic Models for Deep Learning
16.1 The Challenge of Unstructured Modeling
16.2 Using Graphs to Describe Model Structure
16.3 Sampling from Graphical Models
16.4 Advantages of Structured Modeling
16.5 Learning about Dependencies
16.6 Inference and Approximate Inference
16.7 The Deep Learning Approach to Structured Probabilistic Models

17 Monte Carlo Methods
17.1 Sampling and Monte Carlo Methods
17.2 Importance Sampling
17.3 Markov Chain Monte Carlo Methods
17.4 Gibbs Sampling
17.5 The Challenge of Mixing between Separated Modes

18 Confronting the Partition Function
18.1 The Log-Likelihood Gradient
18.2 Stochastic Maximum Likelihood and Contrastive Divergence
18.3 Pseudolikelihood
18.4 Score Matching and Ratio Matching
18.5 Denoising Score Matching
18.6 Noise-Contrastive Estimation
18.7 Estimating the Partition Function

19 Approximate Inference
19.1 Inference as Optimization
19.2 Expectation Maximization
19.3 MAP Inference and Sparse Coding
19.4 Variational Inference and Learning
19.5 Learned Approximate Inference

20 Deep Generative Models
20.1 Boltzmann Machines
20.2 Restricted Boltzmann Machines
20.3 Deep Belief Networks
20.4 Deep Boltzmann Machines
20.5 Boltzmann Machines for Real-Valued Data
20.6 Convolutional Boltzmann Machines
20.7 Boltzmann Machines for Structured or Sequential Outputs
20.8 Other Boltzmann Machines
20.9 Back-Propagation through Random Operations
20.10 Directed Generative Nets
20.11 Drawing Samples from Autoencoders
20.12 Generative Stochastic Networks
20.13 Other Generation Schemes
20.14 Evaluating Generative Models
20.15 Conclusion

---

On Amazon.com S. Matthews wrote this insightful review:
"This is, to invoke a technical reviewer cliché, a 'valuable' book. Read it and you will have a detailed and sophisticated practical understanding of the state of the art in neural networks technology. Interestingly, I also suspect it will remain current for a long time, because reading it I came to more and more of an impression that neural network technology (at least in the current iteration) is plateauing.

"Why? Because this book also makes very clear - is completely honest - that neural networks are a 'folk' technology (though they do not use those words): Neural networks work (in fact they work unbelievably well - at least, as Geoffrey Hinton himself has remarked, given unbelievably powerful computers), but the underlying theory is very limited and there is no reason to think that it will become less limited, and the lack of a theory means that there is no convincing 'gradient', to use an appropriate metaphor, for future development.

"A constant theme here is that 'this works better than that' for practical reasons not for underlying theoretical reasons. Neural networks are engineering, they are not applied mathematics, and this is very much, and very effectively, an engineer's book."

A good night's sleep

Clare complained this morning that she had not slept well last night,
"Lying on my side my hips ached, then it was my shoulder. Just couldn't find a comfortable position."
On our way to the shops this morning, I ran some suggestions by her.
"We could replace that expensive new mattress we bought recently with a neutral buoyancy flotation tank?"

"You know, breathing isn't optional with me."
---

My second thought was more inspired.
"You know those vertical wind tunnels? They shoot air up a tube at high speed and it supports people. They learn how to sky-dive. It would be the ultimate air bed!"



I thought I ought to mention a few minor difficulties.
"The air speed is 120 mph. And I believe the four 500 hp engines are quite noisy, expensive to run and would probably occupy too much space."
Clare thought these objections reasonable.

---

I finally recalled the best solution to a comfortable night's sleep - the levitating frog.



"Remember that YouTube video of the levitating frog? It's held up by a strong magnetic field. If we get some superconducting magnets we could simulate zero-g above the bed. You'd float all night!"
Clare got quite excited by this, until I remembered that it took a 10 Tesla field to levitate the frog. Clare is perhaps three hundred times heavier while the strongest sustained magnetic field ever created is around 40 Tesla.
"There is a small downside. Every metallic object in the house, including the fridge, would be accelerated to insane speeds and would smash its way into the bedroom. The house would be shredded within milliseconds of pressing the on-switch."
---

We decided to compromise on Anadin Extra.

Saturday, November 26, 2016

A theorem-prover muses on free will


Once upon a time there was a theorem-prover called May, running in a (discrete-time) virtual environment.

May has a database of perceptions, a couple of actions ('Smile', 'Whinge') and three rules:

  1. If feeling-good do Smile
  2. If unsettled do Whinge
  3. If feeling-good and something-weird-happens, become unsettled.

We don't need to worry about extra rules which determine that (based on anomalous input) something-weird-happens.

May is a very slow theorem prover, and each distinct inference takes many time steps.
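Here's a minimal sketch of May's setup - the flag names and the meta-module's trick of simulating May on a copy of his own state are my own illustration, not anything from the original parable:

```python
# May as a tiny forward-chaining rule system. One call = the action May will
# (eventually, after many slow time steps) settle on.
def may_step(state):
    if state.get("feeling-good") and state.get("something-weird-happens"):
        state["feeling-good"] = False          # rule 3: weirdness unsettles May
        state["unsettled"] = True
        return None                            # state change only, no action yet
    if state.get("feeling-good"):
        return "Smile"                         # rule 1
    if state.get("unsettled"):
        return "Whinge"                        # rule 2
    return None

def meta_module(state):
    """The fast meta-module: May predicting his own theorem-proving."""
    prediction = may_step(dict(state))         # simulate on a copy of the state
    return f"My rules will make me {prediction}. If I felt differently, I might not. That's free will."

state = {"feeling-good": True}
print(meta_module(state))                      # predicts Smile
state["something-weird-happens"] = True        # e.g. Clarkson's prediction arrives
may_step(state)                                # rule 3 fires: May becomes unsettled
print(meta_module(state))                      # now predicts Whinge
```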

---

One day, May is feeling pretty good when he meets up with his fellow theorem-prover, Clarkson. Unlike May, Clarkson is very, very fast, and can compute entire chains of inferences within one time step. In addition, Clarkson is very perceptive: in fact he has the ability to read May's internal state.

---

A PhD student, as an exercise, added a little meta-reasoning module to May. It allows May to examine and reason about his own perceptions and rules. The PhD student arranged that this meta-module runs pretty fast. It generally runs all the time, letting May predict the results of his own theorem-proving.

At this point, May's meta-module reasons as follows:
"I'm feeling-good, so my Smile rule will kick in, and after a few dozen cycles, I'm going to smile. Of course, if I felt a little differently, I might whinge. That's free will."
---

The PhD student cheated, simply copying across the Clarkson fast-reasoning engine.

Clarkson now comes up to May and checks his internal state:
"Hello May, I see you're feeling good today, you'll get around to smiling in a few days."
[Note to reader: this is logically equivalent to sure knowledge of May's future].

May is understandably unsettled by this accurate-sounding prediction. His meta-module looks at his new state:
"I'm feeling unsettled, so my Whinge rule will kick in, and after a few dozen cycles, I'm going to whinge. Of course, if I felt a little differently, I might smile. That's free will."
---

At this point, Clarkson ceases to pay any attention and May eventually whinges,
"You always think you know what I'm going to do. Well, you're wrong."
He has a point.*

---

* That would be a fixed point.

Friday, November 25, 2016

"My chat bot found my wallet" - Adrian Zumbrunnen

When you turn your website into a chatbot ...



"It was a Saturday afternoon. I met up with a friend to enjoy the occasional good weather by the lake. It was a day void of serious topics or stress. Life is good, I thought to myself.

I headed home after a few drinks to prepare for an upcoming trip. The smile on my face soon vanished when I realized that my pockets felt awfully empty. I braced myself. Three seconds later, I panicked. Where was my wallet?!

My credit card, my ID and personal stuff was in there. I felt like ****.

After calling my card issuer to block my credit card, I waited for the confirmation email to breath some sigh of relief. But to my surprise, there was a much more delightful message waiting for me right there… It was an email from my conversational website:

Title: Chat bot message
Content: I found your wallet by the lake.

Wow! I only lost my wallet 30 minutes earlier and I had already gotten an email. That’s technology working its magic right there!

I answered as fast as I could. The other person sent me his number and we met later that same day. As he was kindly handing me back my wallet, I asked:

“Thanks so much! By the way, I’m just wondering… How did you get in touch with me?”

He looked at me slightly confused and says…

“What do you mean… We had a chat, no?”

That’s when it hit me. This guy thought he was having a real conversation with me on my website."
It's an interesting story, but if you access Adrian's site you will find it hard to imagine you're talking to him (picture above). (Try it).

Adrian provides you with little 'suggestion' buttons to avoid uncontrolled natural language input. This is current best practice, but it makes the experience not dissimilar to filling in a form.
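For illustration, a toy of that pattern (the flow content is invented): each turn offers fixed buttons, so the 'conversation' is really a decision tree - a form in disguise:

```python
# A button-driven chatbot flow: a finite state machine over canned prompts.
flow = {
    "start":   ("Hi! What would you like to do?",
                {"See my work": "work", "Get in touch": "contact"}),
    "work":    ("Here's my portfolio. Anything else?",
                {"Get in touch": "contact", "No thanks": "end"}),
    "contact": ("You can email me. Anything else?",
                {"See my work": "work", "No thanks": "end"}),
    "end":     ("Thanks for stopping by!", {}),
}

node = "start"
while True:
    prompt, buttons = flow[node]
    print(prompt)
    if not buttons:
        break
    choice = input(f"Choose one of {list(buttons)}: ")
    node = buttons.get(choice, node)   # anything off-menu just re-prompts
```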

---

Last night I tried the new Thomson Holidays chatbot (powered by IBM's Watson). It fell apart on the second dialogue input. Admittedly it's in beta. (Try it).

Feigenbaum's old AI adage, 'in the knowledge lies the power', holds good for chatbots. The system needs to know an awful lot of dialogue flows to create the illusion of conversation. This makes building chatbots labour-intensive (for machine-learning, you need big pre-recorded conversation datasets).

I have neither, which has demotivated me from experimenting with Pandorabots. That, and its simple-minded stimulus-response architecture.

I checked on Replika, which I've signed up to, a couple of days ago. The iPhone version will be out soon but the Android app has been pushed back to 2017. The training method is Q&A via SMS: people are sending 40-50 texts per day to train their replikas. Like I said: labour-intensive.

---

You and I know that conversation isn't simply statistically-correlated sequences of 'he says', 'she says'. It's about something. Until the meaning of conversations (data structures representing external environment and internal attitudes) is part of the architecture, chatbot conversations will continue to fall off the rails without any warning.
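To make 'meaning' concrete, here's a purely hypothetical sketch of the kind of structure intended - not anyone's actual chatbot architecture:

```python
# Explicit dialogue state: facts about the world plus speaker attitudes,
# rather than a statistically chained sequence of utterances.
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    environment: dict = field(default_factory=dict)   # the external situation
    attitudes: dict = field(default_factory=dict)     # goals, moods, beliefs
    history: list = field(default_factory=list)       # utterances so far

state = DialogueState(
    environment={"topic": "lost wallet", "location": "by the lake"},
    attitudes={"user_goal": "recover wallet", "user_mood": "anxious"},
)
state.history.append("I found your wallet by the lake.")
# A meaning-aware bot would choose its next utterance by consulting
# state.environment and state.attitudes, not just state.history.
```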

You can see why capturing knowledge of the real world (and of the human psychological world) is difficult for a paradigm which mostly throws ever more ingenious massively-parallel weight-learning at truly enormous datasets.*

---

* There's a huge parallel with self-driving cars. Humans 'just drive' .. which obscures the fact that driving is many distinct activities, not just one. A relatively closed system like motorway travel in good weather contrasts sharply with the horribly open system of urban driving, with complex traffic patterns and uncontrolled pedestrians in bad weather.

Thursday, November 24, 2016

The square root of Andy Pandy



You might argue that Andy Pandy isn't the kind of thing which could have a square root. You're clearly the kind of person who strongly believes in typed programming languages. I'm the last person to deny the utility of polymorphic typing - caught so many errors that way - but you'll never wean me from my love of Lisp: (sqrt AndyPandy).

Even so, trying to find an x such that x² = Andy Pandy is a stretch. We plainly need to identify an operation 'multiply' over a set which includes Andy Pandy, who is not often known as a number.

In modern mathematics we define operations like 'plus' and 'times' axiomatically. In some sense both these operations are the same: group operations over different sets (integers; non-zero rationals being one pair of examples).

Can we instantiate the group axioms over a set involving Andy Pandy?

The operator 'marries and produces one child', which we'll call *, might do it.
Mother * Father = Child.
This is a very modern relationship permitting terms like:
  • Mother * Mother
  • Father * Father
  • Mother * Child
  • ...
combining both Greek myth and modern genetic engineering.

The identity e produces clones (e is a kind of null-person):
Mother * e = Mother.
Each individual has a unique inverse, for example,
Mother * Mother⁻¹ = e
which implies everyone has one entity with whom they are infertile.

With this representation of the group axioms, we seek an X such that
X * X = Andy Pandy.
Here's the group table, the cyclic group of order three, with Mother as generator (writing M for Mother, A for Andy Pandy):

   *  |  e   M   A
  ----+-------------
   e  |  e   M   A
   M  |  M   A   e
   A  |  A   e   M
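A quick sanity check - a sketch of this C₃ in code, with illustrative names:

```python
# The cyclic group of order three over {e, Mother, AndyPandy}, with '*' the
# marries-and-produces-one-child operator: indices are powers of the generator.
elements = ["e", "Mother", "AndyPandy"]   # Mother^2 = AndyPandy, Mother^3 = e

def times(a, b):
    """Group operation: add generator exponents mod 3."""
    return elements[(elements.index(a) + elements.index(b)) % 3]

roots = [x for x in elements if times(x, x) == "AndyPandy"]
print(roots)   # ['Mother'] -- the unique square root of Andy Pandy
```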


So there you have it. The square root of Andy Pandy is her parthenogenetic mother.†

Who knew?*

---

* "... that Andy Pandy was transgender .."

 ---

† Andy Pandy's own procreative propensities? Let's not go there, children!

Wednesday, November 23, 2016

Waking the Dead

We signed up to the University of Dundee's online course, "Identifying the Dead", almost a year ago.



Clare has been at it now for three weeks (about 4 hours a week).

Yesterday she was learning about the kinds of marks different cutting implements leave on bone. Saws cut a narrow trench with 'W' cross-section due to blade offset; knives make a 'V'.

The exercise asked her to decide in which room in our house she should cut up her significant other.

We debated this.

I said the garden would be a disaster: body fluids and bits of debris would soak into the earth and could never be cleared up - it would be a gift to the forensic science team. Clare pointed out, more proximately, that it was also overlooked by neighbours.

She settled on the kitchen. Flat table-top for cutting, no carpets to aid mopping afterwards.

I guess next week she'll learn how best to dispose of my remains.

Tuesday, November 22, 2016

Diary: exercise bike + many-worlds (MWI) + Game of Thrones + Lee Child

Breakfast: a bowl of mixed nuts, sliced banana, a small satsuma, dunked with tablespoons of natural yoghurt. No cereal today due to a 1kg weight excess (71 kg this morning).

This morning: my replacement exercise bike arrived from Germany. Spent an hour putting it together - you seldom get a second chance to assemble an exercise bike, but the second time around does go faster. Shredded the large cardboard box for recycling while bagging up the polystyrene blocks.

The bike works.

Lunch: a tin of mulligatawny soup followed by Clare's home-made pizza topping on naan bread (very nice!). I finish up with an apple and a square of that 90% cocoa chocolate which, perhaps because of its bitterness, is rather attractive.

Afternoon: I hoover polystyrene & cardboard litter from this morning's assembly.

I try to continue my reading of "The Emergent Multiverse" - I had not fallen off a cliff on chapter 3, where David Wallace does the math, but I can't concentrate on the later probability discussion today.




I read 'West Hunter' instead, and discover a promising SF book I haven't read. I order "Rainbows End" by Vernor Vinge.



I also dip into the current-final volume of ASOIAF: "A Dance With Dragons: Part 2 After The Feast".



Dinner is predicted to be Minced Beef En Croûte and peas. We are still avoiding 'greens' and at Waitrose tomorrow we'll be shopping for substitutes. Our list so far: squash, sweet potato, vegetable samosas, mushroom ravioli, baked beans (small, low sugar) and anything similar which comes to mind as we scan the shelves.

No trips out today, except down the drive to put the recycling out. Like yesterday, it's been raining all day and the clouds loom thick and dark.

This evening I'll continue reading to Clare. It's Lee Child's "Killing Floor" (originally recommended by my brother) and we're still in the county jail, mixing it with the gangs. Gritty.



---

No-one thinks the EM drive works.

A shame.

Monday, November 21, 2016

Ted Chiang: the author behind 'Arrival'

Many reviewers didn't get 'Arrival'. Film reviewer Camilla Long, in the Sunday Times, wrote (about the heroine, Louise Banks):
"And what sort of a character is Banks, beyond a lightly concerned librarian type who is right-on enough to ask: “Am I the only person not having trouble with saying ‘aliens’”? Since we desperately need a second act, she is also given a dead child. So she is not a brilliant and calm intellectual, but the sad mother of a deceased daughter, the fate of so many too-clever women in films (see also: Sandra Bullock’s Dr Ryan Stone in Gravity). Banks is doomed not only to moping and grief, but to the grim attentions of Donnelly, a peerlessly one-dimensional nobody."
If she had read Ted Chiang's short story, "Story of Your Life", she would have realised what was really going on, why the daughter is so central to the plot.

"Story of Your Life" is better than the film. It's tighter, more austere, and more enigmatic. There's a science-based foundation to the plot (the Calculus of Variations has a starring role) which you can see wouldn't have made it out of the script review.

Here are some facts about Ted Chiang (from Wikipedia):
"Chiang was born in Port Jefferson, New York. He graduated from Brown University with a computer science degree and in 1989 graduated from the Clarion Writers Workshop. He currently works as a technical writer in the software industry and resides in Bellevue, Washington, near Seattle.

"Although not a prolific author, having published only 15 short stories, novelettes, and novellas as of 2015, Chiang had to that date won a string of prestigious speculative fiction awards for his works:

  • a Nebula Award for "Tower of Babylon" (1990); 
  • the John W. Campbell Award for Best New Writer in 1992; 
  • a Nebula Award and the Theodore Sturgeon Award for "Story of Your Life" (1998); 
  • a Sidewise Award for "Seventy-Two Letters" (2000); 
  • a Nebula Award, Locus Award, and Hugo Award for his novelette "Hell Is the Absence of God" (2002); 
  • a Nebula and Hugo Award for his novelette "The Merchant and the Alchemist's Gate" (2007); 
  • a British Science Fiction Association Award, a Locus Award, and the Hugo Award for Best Short Story for "Exhalation" (2009); 
  • a Hugo Award and Locus Award for his novella "The Lifecycle of Software Objects" (2010)."
I've now read a few of his stories: some are very good - "Understand" and "Story of Your Life" particularly impressed, although the former was a little too long.

Ted Chiang's work might remind people of Greg Egan, although his science is post-graduate rather than post-doctoral 😊.

His writing is quintessential INTP: calm, detached and observational. A lack of strong characterisation leads to longueurs in his more elaborated pieces; I found myself skipping towards the end of "The Lifecycle of Software Objects", filled though it is with interesting and insightful extrapolations.

How do science-fiction authors pay the bills - especially those who don't publish much (and whose published work is mostly available free online)? I guess the film rights helped but I wonder how many software executives who deal with Mr Chiang, technical writer, are aware of his alter ego?

Saturday, November 19, 2016

A 2016 Christmas (at Wookey Hole)

We are fortunate to live in the thousand-year shadow of Wells Cathedral overlooking the Isle of Avalon and the ley lines of Glastonbury Tor. Who would not be uplifted by the spirituality of deep time?

But sometimes you just want a break from all that. And what better than to hop over one of the Mendip foothills to the world-famous Wookey Hole?


The Spirit of Christmas

A Winter Wonderland - available for office parties, etc


Perhaps a walkthrough? Pay attention to the Santa at the back (cf the aliens in 'Arrival').



A Walk through the Winter Wonderland which is Wookey Hole 2016


Here's a link to the video.

---

But never let it be said that Christmas has nothing to do with religion! If you turn left before entering Santa's Grotto, in a small, derelict shed you will observe:

Tucked away in a shed in the corner .. it's the baby Jesus. How traditional!

Clare shows you the way.



The search for the baby Jesus


Here's a link to the video.

---

Wookey Hole - an experience for the modern age. OK, we patronise, but at least their hot chocolate machine is of the highest quality and delivers a steaming beverage!

Friday, November 18, 2016

Cryogenically frozen for years: then what?

From the BBC - the story is carried by many outlets.
"A 14-year-old girl who wanted her body to be preserved, in case she could be cured in the future, won a historic legal fight shortly before her death.

The girl, who was terminally ill with a rare cancer, was supported by her mother in her wish to be cryogenically preserved - but not by her father.

She wrote to the judge explaining that she wanted "to live longer" and did not want "to be buried underground".

The girl, who died in October, has been taken to the US and preserved there."
Her letter is quite poignant.
"I have been asked to explain why I want this unusual thing done.

"I am only 14 years old and I don't want to die but I know I am going to die.

"I think being cryopreserved gives me a chance to be cured and woken up - even in hundreds of years' time.

"I don't want to be buried underground.

"I want to live and live longer and I think that in the future they may find a cure for my cancer and wake me up.

"I want to have this chance.

"This is my wish."
---

In order of likelihood, most likely first.

  1. Civil disturbances result in loss of cooling sometime over the coming centuries.
  2. The best they can do is clone her from her DNA - so no memories.
  3. They scan her brain and recreate her mind in a total VR simulation.
  4. She is resurrected as a physical android with brain emulation.
  5. They scan her brain, run her mind as an emulation controlling a biological clone body.

The most unlikely option? She gets thawed out, they fix freezing damage and her cancer, and she resumes life in the future.

---

Brain scanning was discussed in Robin Hanson's book, "The Age of Em" - an 'em' being a mind produced by 'brain emulation' via scanning.



In this scenario, the girl (like other cryonauts) becomes a human time capsule, a resource for future historians.

No-one seems to dispute that both neurological and other organic damage will result from the cryogenic process. Presumably a sufficiently robust scanning procedure could review, edit and fix errors - mostly.

We don't have the faintest glimmerings today of such technology.

Wednesday, November 16, 2016

My Exercise Bike Journey

I understand this post is of limited interest, being basically another mundane "I ordered it from Amazon but it didn't work" saga, but I've written it up for my records, and as a basis for an Amazon review once the journey has ended.

---

I bought the Ultrasport F-Bike 200B Exercise Bike from Amazon on September 30th 2016. It arrived about a week later and with some effort I assembled it. At first all went well - it was the ideal warm-up and warm-down machine, bookending weight training. And then it developed an annoying, intermittent jerkiness and thudding noise which, while not preventing use, made it uncomfortable.



I ignored this for a while hoping - idiotically - that it would go away. But yesterday, after reading the Amazon one-star reviews (I was not the only one), I wrote to the German manufacturer (email address: service.uk@ultrasport.net) as follows.
"Dear Sir/Madam,

I bought your exercise bike via Amazon.co.uk. My order was placed on 30th September 2016 and it was dispatched on October 3rd.

However, it has a fault. In use there is an intermittent clunking sound from the cycling assembly - and as it clunks there is a momentary stiffness/resistance, almost as if a bearing is defective. This makes cycling both noisy and uncomfortable.

This is a great disappointment to me - I chose your product as I believed it was of high quality and would 'just work'.

I notice that on the Amazon site there are a number of 1-star reviews which complain of a similar problem.

Is there anything I can do here at home to fix this problem, or can we discuss arrangements to replace the item?

Yours sincerely,

Nigel Seel."
I received this (my emphasis) in reply:
"We’re sorry to learn that your F-Bike seems to have arrived in faulty condition. From what you describe it sounds as if one of the drive belts has slipped out of position. This is not a fault that can easily be repaired. We’ll therefore be glad to replace the bike for you under warranty. In order to arrange this we’ll only need to see your proof of purchase.

Unfortunately, since you bought the product directly from Amazon UK, we don’t have access to any of your purchase details. Would you please come back to us with a copy of your delivery note  or the order confirmation mail you received from Amazon? Also, your telephone number would be great, since UPS likes to have those with the shipping documents.

As soon as we have all the background details complete, we can ship the replacement bike.

Thank you for your patience and cooperation.

Please don’t hesitate to contact us again, should you have any further questions.

For future correspondence please use the Reply button to ensure smooth communication. Thank you.

Kind regards,

Ultrasport Service Team"
After a further email to share Amazon's order confirmation email details, this:
"Hello Nigel,

Thank you for your message with the purchase details.

Now we have everything we need to ship your replacement bike.

It will be handed to UPS and should arrive within a week.

As regards the faulty bike, it doesn’t really make sense to have it shipped back to Germany – the freight costs are simply too high. Would you mind very much disposing of it locally, perhaps in your local recycling yard? That would be very kind.

Thank you for your cooperation.

Please don’t hesitate to contact us again, should you have any further questions.

For future correspondence please use the Reply button to ensure smooth communication. Thank you.

Kind regards,

Ultrasport Service Team"
So good service. Hopefully it will arrive in a week and then it will work properly.

More later.

---

Update: Tuesday, November 22nd 2016

The new bike arrived this morning, and after an hour's work it's assembled and working OK - on a brief trial. We shall see.


Tuesday, November 15, 2016

Since you've been gone

Clare talking to 'J' (a relation) on the phone.
J: "... you've given up greens so you just put potatoes or whatever in the microwave?"

C: "I don't have a microwave."

J: "I bet if you fell off your perch, Nigel would be down to John Lewis in a flash."

C: "Maybe, but what he actually talks about is a Real Doll. He says that soon they'll be using real artificial intelligence to talk with you, and they'll be able to move without whirring. He says he's a fan."

J: "Talk! Like most men, he'll be content with a nice hot bowl of something!"

C: "I'm not so sure ... "
---

From Forbes, August 2016.
“We are building an A.I. system which can either be connected to a robotic doll OR experienced in a VR environment,” said Matt McMullen, CEO of RealDoll ..."

"To celebrate its 20th anniversary next year, RealDoll plans on releasing a robotic head. This head would most likely work though an app and let’s be honest — might not even require the body. ... "

"RealDoll is hoping to have the A.I. done within six months and the robotic head done by the end of next year. This implies that the robotic heads will have A.I. embedded within them. McMullen has no illusions to creating totally life-like dolls, but rather wants the A.I. to focus on the experience rather than something out of Ex Machina.

“We are designing the AI to be fun and engaging,” he said ... “More than focusing on whether it can fool you into thinking it’s a person.”
The robot head (don't dwell!) can apparently be retrofitted to customers' existing dolls. The mobile, animatronic full doll is apparently set for 2020. My advice: never buy at Release 1.0.

---

I honestly think that Clare need not worry. Men are not yet ready for tender endearments to be met with quotes from Wikipedia.

Monday, November 14, 2016

"Arrival": the strong Sapir–Whorf hypothesis is back!

"Arrival" at Wikipedia (spoilers)


Tyler Cowen:
"I’ve never seen a movie before where I wanted to yell at the screen “It’s called the Coase theorem!”, and furthermore with complete justification.

"There is plenty of social science in this film, including insights from Thomas Schelling and the construction and solution of some non-cooperative games, mostly by introducing a more dynamic method of equilibrium selection.

"There are homages to Childhood’s End, 2001, Close Encounters, Interstellar, Buddhism, Himalayan Nagas, Eastern Orthodox, the theology of the number 12, and more.  It’s hard to explain without spoiling the plot, but definitely recommended and maybe the best Hollywood movie so far this year.  Nice sonics too."
And Steve Sailer:
"Arrival is a girl sci-fi movie in the tradition of Jody Foster’s Contact. Amy Adams plays a linguist (or some other kind of language-related academic) with a sad back story much like Sandra Bullock’s in Gravity. She is hired by the US Army to try to communicate with the aliens inside the giant flying saucer hovering a few feet above Montana. The plot is aimed at a female audience: the titanic history-changing events are really just a cover for a story about the loss of loved ones.
...
"Arrival takes place mostly in a northern valley of clouds, rain, green grass, and dim light. There is almost no action in Arrival and what does happen is shown obliquely, often with the camera pointing at a person reacting to whatever it is we really want to see. Dialogue is not on the on the nose and can be a little hard to hear. Amy Adams’ disoriented scientist is plagued by insomnia and in much of the movie is either on the verge of nodding off or is just waking up. The style of the movie is similarly blurry.

"Overall, I’d say: good, not great. But the movie is different enough that I’ll leave open the possibility that it may eventually become the consensus that it’s very good."
The dialogue is pretty mumbly and for the first half of the movie the homages to standard tropes are so linear and stacked-up that one simply sits there, ticking them off. The film conceptually comes to life only at the end where the revelations kick in, leaving you scratching your head as you leave the cinema - and that's assuming you're up to speed with the strong Sapir–Whorf hypothesis.

I refrained from looking at my watch and Clare confirmed she was not bored, although she found Amy Adams' character, Louise Banks, maddeningly over-controlled.

I think this one might grow on you.

---

Stephen Wolfram writes about how he 'did the science' on Arrival: "Quick, How Might the Alien Spacecraft Work?" (no spoilers there).

Via Centauri Dreams.

---

Three things the film got right.

  1. Once the aliens arrived, there would be months of utter tedium as smart people tried to engage them with very little progress.
  2. Across the world, people would project their most malign fantasies onto the new arrivals, leading to riots, looting, violence and mayhem combined with insane political and religious activism.
  3. No matter how benign or passive the aliens appeared, some individuals and/or states would want to blow them up.

A reviewer mournfully confided that he had promised his wife there would be no explosions. (There's one.)

---

Abigail Nussbaum's excellent review.

Sunday, November 13, 2016

Canada builds a wall

Commentator 'Ken' writes on the post, "Nietzsche was right – liberal democracy is flawed" at The Spectator blog today.

" ... In an effort to stop illegal aliens, they erected higher fences, but the liberals scaled them. They then installed loudspeakers that blared Rush Limbaugh across the fields, but they just stuck their fingers in their ears and kept coming. Officials are particularly concerned about smugglers who meet liberals just south of the border, pack them into electric cars, and drive them across the border, where they are simply left to fend for themselves after the battery dies. ..."

"A lot of these people are not prepared for our rugged conditions,” an Alberta border patrolman said. “I found one carload without a single bottle of Perrier water, or any gemelli with shrimp and arugula. All they had was a nice little Napa Valley cabernet and some kale chips. When liberals are caught, they’re sent back across the border, often wailing that they fear persecution from Trump high-hairers."

---

"Canadian border residents say it’s not uncommon to see dozens of sociology professors, liberal arts majors, global-warming activists, and “green” energy proponents crossing their fields at night.

“I went out to milk the cows the other day, and there was a Hollywood producer huddled in the barn,” said southern Manitoba farmer Red Greenfield, whose acreage borders North Dakota. “He was cold, exhausted and hungry, and begged me for a latte and some organic chicken. When I said I didn’t have any, he left before I even got a chance to show him my screenplay, eh?”

Rumours are circulating about plans being made to build re-education camps where liberals will be forced to drink domestic beer, study the Constitution, and find jobs that actually contribute to the economy."

Saturday, November 12, 2016

Mutual incomprehension

From "Psychological Comments".
"... the loss of an election will be felt more strongly by the Left, mostly because they are tender minded, but also because they are worriers who feel that the poor of the world need help which will now be denied them, and that all of society should desire to be like them in their liberal attitudes (or their desire to be seen as liberal and generous).

"From a tough-minded, emotionally stable perspective,  the great Democrat sadness, emotional upset and dismay is just the wailing of the mentally afflicted, wallowing in neurotic catastrophization and self-proclaiming virtue. Republicans see the Democrat response as infantile, the abject collapse of spoilt children who  cannot believe how nasty life has been to them.

"From a tender-minded, emotionally sensitive perspective the Republican win is  just an extension of the insensitive, heartless and individualistic lack of concern they habitually show to anyone who gets in their way. Democrats see the Republican joy as demonic parents: violent, fascistic, dangerous; behaviours typical of oppressors who believe that life favours the brutal."
---

The saccharine John Lewis Christmas video.




Writers to The Times think it encourages insanitary vermin playing on children's equipment; the Guardian sees it as a metaphor for the girl (Hillary) brutally pipped at the post by the boxer dog (you know who). The best comment: 'What are you thinking of? It isn't even mid-November!'

---

Clare and I have mutually 'fessed up. She has confided that she hates greens (broccoli, cabbage, green beans) and I have concurred. No more health; we will tolerate salad, tomatoes, lettuce hearts bigged up with sliced apple and walnuts (I'm keener).

So today in Waitrose we had a special non-greens shop. Squash and pickled onions featured prominently  ...

Wednesday, November 09, 2016

President Trump

This morning's result confirms the overwhelming exasperation of ordinary folk with the liberal elite which has run the world so successfully over the last decades, and which continues to colonise the hectoring mainstream media (MSM).

Rod Liddle commented,
"If Trump wins, I wonder if the BBC will be as exultant as it was in 2008, when Obama won? Here’s a small bet – it won’t be. There will be wailing and gnashing of teeth. It’s almost worth him winning for that alone. Oh, and for the Guardian’s tears."
The Brexit dynamic now looks more assured. Rod Liddle again:
"Take Brexit. Whether you were for Remain or Leave, we are leaving and need to secure the best possible future for our country, no? The Democrats have made it absolutely clear that in trade deals we are ‘at the back of the queue’. They were vehemently opposed to Brexit, a position made clear by Hillary Clinton herself, who also opposed any independent trade deal with the UK, worried that it might have an impact upon US jobs.

"There is no such problem with Trump – who not only cheered the Brexit result but has said that a trade deal with Britain would be near the top of his agenda. Whether a Remainer or a Leaver, then, it’s clear that Trump is better for Britain."
Greg Cochran put his finger on why the polls - as for Brexit - were so consistently and spectacularly wrong (here).
"Social Desirability Bias

Posted on November 8, 2016 by gcochran9

"It must be strong today, something like 5 points.

"If so, you’ll be hearing calls for the end of the secret ballot."

Tuesday, November 08, 2016

Can Everett worlds ever merge?

This post follows up an issue from my review of "The Many Worlds of Hugh Everett III", by Peter Byrne. From the Everett FAQ site.
"Assuming that we have a reversible machine intelligence to hand then the experiment consists of the machine making three reversible measurements of the spin of an electron (or polarisation of a photon).

(1) First it measures the spin along the z-axis. It records either spin "up" or spin "down" and notes this in its memory. This measurement acts just to prepare the electron in a definite state.

(2) Second it measures the spin along the x-axis and records either spin "left" or spin "right" and notes this in its memory. The machine now reverses the entire x-axis measurement - which must be possible, since physics is effectively reversible, if we can describe the measuring process physically - including reversibly erasing its memory of the second measurement.

(3) Third the machine takes a spin measurement along the z-axis. Again the machine makes a note of the result.

According to the Copenhagen interpretation the original (1) and final (3) z-axis spin measurements have only a 50% chance of agreeing because the intervention of the x-axis measurement by the conscious observer (the machine) caused the collapse of the electron's wavefunction.

According to many-worlds the first and third measurements will always agree, because there was no intermediate wavefunction collapse. The machine was split into two states or different worlds, by the second measurement; one where it observed the electron with spin "left"; one where it observed the electron with spin "right".

Hence when the machine reversed the second measurement these two worlds merged back together, restoring the original state of the electron 100% of the time.

Only by accepting the existence of the other Everett-worlds is this 100% restoration explicable."
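Here's a minimal numpy sketch of that experiment - my own toy formalisation, not the FAQ's. Two qubits: the electron, and a one-qubit 'machine memory'; the reversible x measurement is a unitary that records the result without any collapse, and reversal is its inverse:

```python
import numpy as np

# Electron (qubit 0) and machine memory (qubit 1); |0> = spin-up / blank memory.
up = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # z <-> x basis change
I2 = np.eye(2, dtype=complex)

# CNOT (control = electron, target = memory), conjugated by H on the electron:
# a reversible x-axis measurement that copies "left"/"right" into the memory.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
U_record = np.kron(H, I2) @ CNOT @ np.kron(H, I2)

psi0 = np.kron(up, up)            # step 1: z measurement prepares spin-up
psi1 = U_record @ psi0            # step 2: worlds split (electron entangled with memory)
psi2 = U_record.conj().T @ psi1   # reversal: the two branches merge back together

# Step 3: probability the final z measurement agrees with the first.
print(abs(np.vdot(psi0, psi2))**2)   # 1.0 -- always agrees, no collapse anywhere
# Collapsing at step 2 instead (Copenhagen) leaves the electron in an x
# eigenstate, and the final z measurement then agrees only 50% of the time.
```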

Monday, November 07, 2016

'The Many Worlds of Hugh Everett III' - Peter Byrne

Amazon Link

I finally got round to reading Peter Byrne's biography of Hugh Everett III.

It's a good book, weaving between three themes: Everett's thesis on the 'Many Worlds Interpretation'; his career as a cold-war nuclear strategist for the Pentagon; and his curiously unconventional personal life (swinger, incipient alcoholic, heavy smoker, womaniser, cynic, libertarian, genius).

The 'Many-Worlds' work was his proudest achievement, although he never published a word on quantum mechanics after his thesis paper. The physics establishment ignored the concept for two decades, while individuals around Bohr were poisonously hostile.

The historical treatment works well, showing how the controversies reflected ongoing preoccupations and progress in the community. Cosmology, quantum gravity and decoherence were later catalysts for renewed interest, as was quantum computing (David Deutsch a key visionary here).

Everett's ideas were continually misunderstood, often wilfully. His concept of 'splitting universes' whenever a 'measurement' is made is really a topological claim: the universe (multiverse) is a network (not a tree, which would fail time-reversibility) consisting of a non-denumerable infinity of evolving universes, each like our presently observed one, linked by 'measuring events'.

Three major issues continue to puzzle researchers. Everett believed he had derived the Born probability rule from his topology: many physicists disagree. The dispute seems highly technical.
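For reference, the Born rule at issue: a measurement with eigenstates |i⟩ on a system in state |ψ⟩ yields outcome i with probability P(i) = |⟨i|ψ⟩|², the squared magnitude of the amplitude.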

Then there is the problem of the 'preferred basis', which reflects that the concept of superposition is itself basis-dependent, so that the act of splitting seems both arbitrary and non-consistent from point to point on the multiverse network (Everett thought this a non-issue as the act of measurement itself presupposes a basis).

Finally, and a point not brought out by the book, the state vector evolves (via the Schrödinger equation) in Hilbert space, not our familiar four-dimensional spacetime. Yet the Everettian multiverse seems to be a network of spacetime universes. How do we get from the ontology of Hilbert space to that of ordinary spacetime? This seems to be an active research question.

Peter Byrne's book has the usual problems of pop-sci. It's conceptually too remote for a purely lay reader while too imprecise for someone who knows some quantum mechanics (the failure to differentiate spacetime and configuration space, for example).

There are occasional errors of authorial comprehension - Byrne does not appear to understand computer science, and his explanation of the Halting Problem is just wrong.

Finally, it's hard to write about the people doing the math and computer simulations for thermonuclear warfare, optimal counterforce strategies and assured destruction without taking some moral stance - but it's parochial to judge those guys, including Everett, from the 'superior' standpoint of impeccable pacifistic liberalism. Biting the hand that fed you, methinks.

If you want to get a handle on the strange birth and tortuous development of the MWI, you couldn't do better than read this book. And as a bonus you get a voyeuristic tour of scarily-dysfunctional Everettian family life as well.

---

Sean Carroll has a good defence of the MWI.

---

If you have a graduate level of understanding of quantum mechanics, here is your next step.

Amazon link

The definitive introduction to Everettian quantum mechanics.

---

Update: In the comments you will see a discussion as to whether it is really true that worlds can merge as well as split. Splitting occurs via the thermodynamically-irreversible process of decoherence and thus is overwhelmingly the likely thing to happen. Yet under very special circumstances, distinct worlds can merge. This is highlighted in my follow-up post.


Friday, November 04, 2016

Our recent vacation in the Lake District

Just a few photos.

Clare climbs up Gummer's How, Windermere

View across Windermere from Wray Castle Boathouse

Your author at Kirkstone Pass

We also took in Beatrix Potter's house, Hill Top; I bought hipster jeans and plaid shirts at Mountain Warehouse, Ambleside; and we chilled with the swans and Japanese tourists at Bowness.

I can recommend the Cuckoo Brow Inn - if only they would stabilise and uprate their woeful WiFi!

Thursday, November 03, 2016

Finally, consciousness as an engineering problem!



I noted this recent research from MIT: "Making computers explain themselves: new training technique would reveal the basis for machine-learning systems’ decisions".
"In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts.

"But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.

"At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions."
The article is itself pretty opaque - we appear to be at the very beginning of this difficult road.

But ..

A neural net system C which observes another neural net system B and confabulates a symbolic, causal, intentional model explaining how B behaves. Does any of that sound familiar?

The most convincing (obviously still hand-waving) paradigm for thinking this through is R. Scott Bakker's Blind Brain Theory (BBT). I do advise you to read his PDF here.

Wednesday, November 02, 2016

Fly like a bird

Drone Girl

I wrote to my nephew, Chris:
"In the old days Clare would walk up the wild hills with enthusiasm, if not with any substantial energy. These days, she just wishes she were a bird.

So in my imagined scenario, she gets out the car in the valley car park and settles into her camping chair. On go the VR goggles - comfortable, high-res and adjustable - and with the intuitive controller she launches the drone.

The machine is simple to fly. It automatically senses its environment and will not crash into things. It monitors its power reserves and will fly home by itself before it runs out. Ditto if it loses communications which in any case are robust.

The drone is quiet and has at least 20 minutes endurance. The cameras move in response to Clare's head movements so she has the illusion of 'being the drone'. There's a microphone and a speaker, so she could virtually accompany someone else up the hill.

The drone can be recharged from the car via an adaptor.

---

So this would be a great present! I imagine we're about two or three years away from this - rather high-end - product.

What do you think? Off the top of your head, I mean, without any research."
I got a really detailed reply in just a couple of hours, but it seems my complete wish-list is still two or three years out.

There is another alternative: I could attach a pair of wireless cameras to my hat, like a pair of bunny ears. As I scampered up the hill, Clare could be 'looking through my eyes' from the comfort of home. We could even talk to one another - fancy!

---

Chris suggested I look at this:



The price tag is a pretty hefty £1,000 .. but is sure to come down.

Tuesday, November 01, 2016

Transgressive Science-Fiction

"The Garden of Earthly Delights"

I still recall the shell-shocked faces of the BBC Newsnight team at the Cannes Film Festival. Kirsty Wark had been too scared to go while Mark Kermode, who had attended, self-consciously praised and defended Lars von Trier's latest film. It was the transgressive "Antichrist".

In mainstream literature think Bret Easton Ellis; in painting, Hieronymus Bosch; in TV, 'Game of Thrones'.

Transgressive art is characterised by psychological and often physical violence; visceral, graphic and explicit sex; profane language; the unleashing of the id; subversion of our safe little lives.

In science-fiction there are some notable practitioners (most transgressive first):

  1. Scott Bakker (Neuropath)
  2. William Barton (When We Were Real)
  3. Richard Morgan (Takeshi Kovacs novels)
  4. Peter Watts (The Rifters Trilogy; Blindsight & Echopraxia)
  5. Dan Simmons (Hyperion Cantos - Simmons just makes the cut)

I'm currently reading 'Neuropath'. It's difficult, though wholly engrossing - if you can stomach psychopathic brain-control torture, edgy relationships, dense discussion of mind, brain and consciousness.

Amazon link

OK, let me do it this way. Look at your most significant other: his or her smile, affectionate glance, amusing quip. Now make the skull transparent and look again with the eyes of an MRI scanner. For every quirk of personality you love so much, there's that small net of neural tissue lighting up. That's all there is and ever has been. Meaningless.

Oh, and you too.

One of the themes of this novel is how resistant people are to even engaging with this concept. They shy away - too transgressive.