Friday, December 09, 2016

The enormous helium dimer - a quantum halo state

From Wikipedia:
"Based on molecular orbital theory, He2 should not exist, and a chemical bond cannot form between the atoms. However, the van der Waals force exists between helium atoms as shown by the existence of liquid helium, and at a certain range of distances between atoms the attraction exceeds the repulsion. So a molecule composed of two helium atoms bound by the van der Waals force can exist. The existence of this molecule was proposed as early as 1930.

He2 is the largest known molecule of two atoms when in its ground state, due to its extremely long bond length. The He2 molecule has a large separation distance between the atoms of about 5,200 picometres. This is the largest for a diatomic molecule without ro-vibronic excitation. The binding energy is only about 1.3 mK, 10⁻⁷ eV or 1.1×10⁻⁵ kcal/mol. The bond is 5,000 times weaker than the covalent bond in the hydrogen molecule."
This news release in my Google Now feed intrigued me: the helium molecule is very, very big.
"Helium atoms are loners. Only if they are cooled down to an extremely low temperature do they form a very weakly bound molecule. In so doing, they can keep a tremendous distance from each other thanks to the quantum-mechanical tunnel effect. As atomic physicists in Frankfurt have now been able to confirm, over 75 percent of the time they are so far apart that their bond can be explained only by the quantum-mechanical tunnel effect.

The binding energy in the helium molecule amounts to only about a billionth of the binding energy in everyday molecules such as oxygen or nitrogen. In addition, the molecule is so huge that small viruses or soot particles could fly between the atoms. This is due, physicists explain, to the quantum-mechanical "tunnel effect."

They use a potential well to illustrate the bond in a conventional molecule. The atoms cannot move further away from each other than the "walls" of this well. However, in quantum mechanics the atoms can tunnel into the walls. "It's as if two people each dig a tunnel on their own side with no exit," explains Professor Reinhard Dörner of the Institute of Nuclear Physics at Goethe University Frankfurt."
We covered this in Volume III of my OU Quantum Mechanics course! (Chapter 6, page 162).
"The diatomic helium molecule

The He2 molecule has four electrons, so you might think that the helium nuclei would be held together even more strongly than the protons in H2. However, the He2 molecule is unknown under normal conditions of temperature and pressure: at room temperature, helium gas contains only helium atoms.

We need to consider how the electrons occupy the available molecular orbitals. As with H2, the two orbitals of lowest energy are 1σg and 1σu. In He2, two electrons fill the bonding 1σg orbital, and the other two fill the antibonding 1σu orbital. The two electrons in the antibonding orbital do not help to bind the molecule together; on the contrary, they practically cancel out the effect of the electrons in the bonding orbital. Stable molecules generally have more electrons in bonding orbitals than in antibonding orbitals.

Accurate calculations indicate that there is a very shallow minimum in the energy curve of He2 at a radius of R_equilibrium = 3×10⁻¹⁰ metres with a dissociation energy of D_equilibrium = 0.0009 eV. This very small dissociation energy is close to the energy of the lowest vibrational state, so detection of He2 molecules requires very low temperatures, and has only been achieved for a beam of helium atoms cooled to 10⁻³ K.

Because the energy curve has a very shallow minimum, the molecule samples a range of interatomic distances that are far from the equilibrium value.

Because the energy curve is asymmetric, the average separation of the two nuclei is much greater than the equilibrium separation, and has been estimated to be about 50×10⁻¹⁰ m."
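
A quick sanity check on those numbers (a back-of-the-envelope sketch in Python; it uses only the energies quoted above plus the Boltzmann constant): the well depth corresponds to roughly 10 K, but the residual binding energy of the molecule is only around a millikelvin, which is why the beam has to be cooled to ~10⁻³ K.

```python
# A sketch only: convert the quoted energies into equivalent temperatures, T = E / k_B.
K_B_EV_PER_K = 8.617e-5          # Boltzmann constant in eV per kelvin

well_depth_ev = 0.0009           # D_equilibrium from the OU course text
binding_ev = 1.0e-7              # residual binding energy quoted by Wikipedia above

print(f"well depth      ~ {well_depth_ev / K_B_EV_PER_K:.0f} K")      # ~10 K
print(f"binding energy  ~ {binding_ev / K_B_EV_PER_K * 1e3:.1f} mK")  # ~1 mK
```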
---

To get us up to speed, consider the hydrogen molecule ion: two protons and one electron. The electron wavefunction is decisive in this molecule. Below are the relevant diagrams for the hydrogen molecule ion using the Born-Oppenheimer approximation and LCAO trial functions for the molecular wave function (p. 148).
"The trial function is taken to be a linear combination of atomic orbitals centred on each of the nuclei that form the molecule. This method is known as the linear combination of atomic orbitals, frequently abbreviated to LCAO. The resulting one-electron eigenfunctions for the molecule are called molecular orbitals. We shall now apply the LCAO method to the electronic ground state of the hydrogen molecule ion."


---



---

So now we're ready to look at the corresponding energy/probability-density vs separation graph for the helium dimer (from here). The weak binding requires that we consider the wave-function of the two helium nuclei.


The weakness of the helium-helium van der Waals potential lets the particle probability density leak far into the classically forbidden region (i.e., the tunneling region). The wavefunction extends to sizes comparable with fullerenes, the diameter of DNA and even small viruses: while the classical turning point is located at 13.6 Å, the overall wavefunction extends to more than 200 Å.

50 Å (Angstrom units) = 5,000 picometres. 

I suspect that R measures the distance from the nucleus-nucleus midpoint (in spherical coordinates), while the numbers quoted in the caption above are the nucleus-nucleus separation distances (2R). Ψ is the two-nuclei wave-function. Given the enormous nucleus-nucleus separation, the electrons will be quite tightly bound to their respective nuclei.
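
To get a feel for how such an enormous extent arises, here is a rough estimate (mine, not the paper's) of the 1/e decay length of the dimer wavefunction in the classically forbidden region, where ψ(R) falls off as exp(−κR) with κ = √(2μE_b)/ħ. Using the ~10⁻⁷ eV binding energy quoted above and a reduced mass of half a helium atom, the decay length comes out at around 100 Å, comfortably between the 13.6 Å classical turning point and the >200 Å tail.

```python
# A rough sketch (not from the paper): 1/e decay length of the He2 wavefunction
# in the tunneling region, psi(R) ~ exp(-kappa * R), kappa = sqrt(2*mu*E_b)/hbar.
import math

HBAR = 1.0546e-34                     # J s
EV = 1.602e-19                        # joules per eV
AMU = 1.6605e-27                      # kg

binding_energy = 1.1e-7 * EV          # ~1.3 mK binding energy, as quoted above
reduced_mass = 0.5 * 4.0026 * AMU     # two 4He atoms: mu = m_He / 2

kappa = math.sqrt(2 * reduced_mass * binding_energy) / HBAR
print(f"1/e decay length ~ {1e10 / kappa:.0f} Å")   # roughly 100 Å
```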

---
Imaging the He2 quantum halo state using a free electron laser (light edit).

"Quantum tunneling is a ubiquitous phenomenon in nature and crucial for many technological applications. It allows quantum particles to reach regions in space which are energetically not accessible according to classical mechanics.

"In this tunneling region the particle density is known to decay exponentially. This behavior is universal across all energy scales from MeV in nuclear physics, to eV in molecules and solids, and to neV in optical lattices. For bound matter the fraction of the probability density distribution in this classically forbidden region is usually small.

"For shallow short range potentials this can change dramatically: upon decreasing the potential depth excited states are expelled one after the other as they become unbound.

"A further decrease of the potential depth effects the ground state as well, as more and more of its wavefunction expands into the tunneling region. Consequently, at the threshold (i.e. in the limit of vanishing binding energy) the size of the quantum system expands to infinity.

"For short range potentials this expansion is accompanied by the fact that the system becomes less classical and more quantum-like. Systems existing near that threshold (and therefore being dominated by the tunneling part of their wave-function) are called quantum halo states.

"One of the most extreme examples of such a quantum halo state can be found in the realm of atomic physics: the helium dimer (He2).

[More]."
---

Pig quotes

Not all arguments are productive.

Never attempt to teach a pig to sing; it wastes your time and annoys the pig.

This from Robert A. Heinlein, Time Enough for Love.

---

On arguing with the argumentative.

"I learned long ago never to wrestle with a pig. You get dirty, and besides, the pig likes it."

Quoted by George Bernard Shaw. He should have said "enjoys".

This second quote featured in this:

Amazon link

More literary slumming.

James McGill is an everyteenboy action hero. The plotting is absurd and the tempo unrelenting. The writing is better than Lee Child: page-turning. Back to more serious stuff soon.

---

Moving animals, on living in the moment.
One of your most ancient writers, a historian named Herodotus, tells of a thief who was to be executed. As he was taken away he made a bargain with the king: in one year he would teach the king's favorite horse to sing hymns. The other prisoners watched the thief singing to the horse and laughed. "You will not succeed," they told him. "No one can."

To which the thief replied, "I have a year, and who knows what might happen in that time. The king might die. The horse might die. I might die. And perhaps the horse will learn to sing."
Jerry Pournelle from "The Mote in God's Eye".


Thursday, December 08, 2016

Revolutionary Christmas Cards (redux)

They've arrived.


Click on image to make larger.

---

One's too soppy, one's too earnest and one's too crass. The remaining five have now been posted to their carefully targeted recipients.

Wednesday, December 07, 2016

Stourhead in early winter 2016

Clare likes to check out Stourhead in early December, sifting ideas for Christmas decorations. After the House inspection, lunch in the Spread Eagle (the pub on the estate) and then a walk around the lake.

I did a small video where, cued by Clare's earlier talk of coots, I spectacularly misidentified a bunch of ducks. I believe we might be having roast coot for Christmas lunch.*




Here are some pictures. Note especially the waterfall and water-wheel you can't really see in the video.

Low Winter Sun

The Waterfall and Water-Wheel

Across the lake

This was actually the best shot of the two of us ..

Just burnishing that hipster look ...

After we had eaten, I had a scare that I'd lost my mobile phone. Thank God for long pin sequences and remote wipe! It had slipped out of my coat pocket and lodged between cushions. I was out of communication for, I don't know, at least ten minutes!

Saved from upgrading to the Pixel XL!

---

* My sister: "They are eider duck or coots. Who knows???"

Ideas for Christmas decorations

Stourhead House decorated for Christmas. Click on pictures to make larger.











Tuesday, December 06, 2016

Revolutionary Christmas Cards

Buy me!

In this year of insurrectionary turbulence I determined to eschew the traditional Christmas card.

"No!" to Father Christmas, reindeer, snow, holly and angels!

But what's the alternative? If our future is right-wing oligarchic authoritarianism, you won't find me sending out Trump-, Le Pen- or UKIP-themed Christmas greetings.

Better to return to the soft-headed but so-romantic leftism of my youth.

It turns out that left-wing Christmas cards are hard to find. There's nothing useful on Amazon, and Google isn't much better. I draw the line at those incompetent authoritarians Stalin and Mao, and have little time for the Fidelistas.

I looked, but the Fourth International (which increasingly espouses only trendy identity politics) isn't doing cards this (or any) year.

Help finally from The Radical Tea Towel Company ("You wash, I'll try ... to change the world!").

A Mixed Pack

I ordered the mixed pack.

---

Update: they arrived.

It's lunchtime for Italy ...



From The Spectator blog today:
"Being a politician in Rome these days is a bit like sitting in a nice Trattoria, surrounded by a large group of office colleagues.  You’ve just finished a long lunch but no one wants to ask for the bill because it is obvious from the carnage on the table that its going to be a whopper, and it is equally obvious that no one has any money to pay for it all.

"In such circumstances there is zero upside in being the person who asks for the bill, or tries to get people to empty their pockets.  The only sensible thing to do is to continue to order bottles of limoncello and wait for the German restaurant owner to chuck everyone out.

"Italy is unable to reform.  No one can afford to expend the political capital necessary to take on the many and varied vested interests.  In the old days, the Pope would have called for a jubilee, a once in a generation carnival of debt write-offs. Nowadays, the equivalent might be a currency devaluation.

"The problem is, of course, that right now, the Italians are using someone else’s currency."

Nothing bad will ever happen to Italy ...

Monday, December 05, 2016

Why isn't Bakker super-famous?

Amazon Link

"Disciple Manning is able to recall every conversation, meeting and feeling he has ever had, making him an extremely dangerous private investigator. When a young woman disappears from a religious cult, her parents turn to Manning for help. Manning accepts, but with a chilling sense of foreboding.

"Heading into the heart of the cult, he encounters its beguiling leader, obsessed with the idea that the world is a fantastical theatre, in which we merely act out our roles, ignorant of our true existence beyond; a belief he is intent on protecting, at any cost. Manning's investigation soon leads to clashes with the cult's unsettling belief systems and leaves him fighting for survival and elusive answers. "
Scott Bakker's PI tale is another vehicle for his unsettling worldview. Intelligence, energy and conviction crackle through every inventive sentence. I turn to Bakker with relief after reciting yet another chapter of Lee Child's "Killing Floor" (read to Clare). In comparison, Lee Child is pedestrian, leaden, formulaic, clunky and cheap.

Amazon link

On Amazon, a one-star review of genius - from Deckard:
"I'm Jack Reacher," I said

"Really?" he said

"Yes," I said looking in the mirror at my brown hair, blue eyes, and rough rugged looking face.

"Jack Reacher?" he said, "The man who had an American father and French mother that married in Korea and moved around a lot so he mostly grew up in american airbases, and naturally entered into the force himself, became a major then was made redundant with severance pay and is now struggling to ingratiate himself back into society?" he said again.

"Yes," I said.

He steepled his fingers for the fourth time. I noted his hands. They were good hands. They were Textbook hands. They knew their way around a gun. But he wasn't here to shoot guns, he was here to ask me lots of questions about plot and characterisation.

"So," he said. "you know what happens now right?" He said.

"Yes," I said, "We have to have a massively wooden conversation involving as many 'saids' as possible."

"Yes," he said, "all the saids." he said, "One said after another said until there's so many saids, that this reads like an eight year old's English assignment."

"Oh," I said noticing his hair. It had a textbook parting. It was hair that had seen a lot of action, and not all of it good. This was hair that had endured all the hardships of growing up as a young black male in a predominantly white neighbourhood, the struggle for promotion through the ranks, and a difficult relationship with his mother.

I stood up, walked to the table, grabbed a cup from the pile of cups, put the cup down on the table surface, took a spoon, put it into the coffee, dropped the coffee into the cup, took the hot kettle, poured some water onto the coffee, picked up the milk jug, poured a small amount of milk into the coffee, grabbed the spoon, put it into the cup, stirred it several times, then put the spoon down, picked up the cup and walked back to the desk and sat back down in the seat.

"You want one?" I asked.

"No thanks," he said, "I don't want to die of boredom before we get to the end of this scene," he said. He looked angry. He clenched his fist. It was the kind of fist that could hit someone really hard in the face.

I had to think quick. Had to think about exactly what else I needed the readers to know through my limited first person perspective. Then it came to me. I wrote a short note on a piece of paper and passed it under the table where none of you idiots could see.

"Oh yes," he exclaimed reading it quickly, "that reminds me," he said, "tell me all about why you walked fourteen miles down a road," he said," in the rain," he said, "with no money, credit cards, ID, or pants." he said

But it was too late. The reader had thrown my book off a ferry. On fire.
---

"Killing Floor" has 2,598 reviews on Amazon.co.uk.

"Disciple of the Dog" has 4.

---

Diogenes of Sinope (fourth century BC)
"It was this determination to follow his own dictates and not adhere to the conventions of society that he was given the epithet "dog," from which the name "cynic" is derived. As to why he was called a dog, Diogenes replied, "Because I fawn upon those who give me anything, and bark at those who give me nothing, and bite the rogues."
The quote with which Bakker starts his book. His protagonist is thus named and the book titled.

---

I have now completed "A Dance With Dragons: Part 2 After The Feast (A Song of Ice and Fire, Book 5)".



Now in the (very long) queue for "The Winds of Winter".





Saturday, December 03, 2016

The Moon and Venus

Pictured a moment ago from our garden here in Wells, Somerset.



What's that word? Aphasia?

Clare keeps complaining her brain is going to mush. She can't remember words, forgets where she's put stuff. I explain that it's not Alzheimer's, it's not dementia, it's .. normal. And here's the proof.


From "Variations in cognitive abilities across the life course"

Just for the record: a quarter of your capabilities get blown away by age 60. Then you fall off a cliff.

Five pictures from the diary

#inconsequentialtrivia

We missed a frost last night. Just damp and cold for the Church Bazaar in the Town Hall. We do a fast flyby. Buy marmalade, bread, some cakes. Inspect the books and buy a few novels. Finish with hot drinks and a mince pie. This year Clare did not directly engage with Santa. Then Waitrose.

Father Christmas at Wells Town Hall - the RC Church Bazaar

Clare inspects one of the stalls

Back in 2015, Clare and Santa got on famously (above). This year: quieter.

A misty morning showing the weed mats - work-in-progress in our front garden

Clare has augmented her 'beauty bells' with a 2 kg pair

It's a joy to watch her flap.

Friday, December 02, 2016

The Tao of Weights: function follows form



Dear A. J,

I have been reflecting on your recent performance on my weight training programme. You warmed-up on the bike and then did the sixteen exercises. In every case you were dismissive, repeatedly advocating "More resistance!", "Heavier weights!".

I observed, though, that you focused solely on hitting your performance target, usually 15 repetitions, despite in many cases only achieving this through the most intricate contortions in which you threw every part of your body at the exercise to achieve 'success'.

This relentless goal-centric performance is even stranger as I know you have studied martial arts, where the philosophy - the Tao if you will - is Wu Wei:
"Wu Wei (chinese, literally “non-doing”) is an important concept of Taoism and means natural action, or in other words, action that does not involve struggle or excessive effort. Wu wei is the cultivation of a mental state in which our actions are quite effortlessly in alignment with the flow of life."
The point of 'pushing iron', as with all other exercises, is to develop specific muscles, tendons and joints. The form is all-important. Without it, random muscles and parts of the body engage in an unbalanced and uncontrolled manner. The body does not develop harmoniously and there is a risk of injury.

Let me say it again. In the first instance, neither the weights you are lifting nor the number of reps you achieve is the goal. The goal is to perfect the form. With dumbbells, where many muscle groups coordinate to achieve the exercise form, this takes a while. Use manageably small weights until the form is good, then slowly increase weights and reps.

In general you should do reps only until you lose the form, not up to the point where you cannot continue despite arbitrary contortions.

My final point. It's a mistake to approach an exercise session with external achievement goals in mind. Your objective is to do each exercise in a calm and measured way, taking care to observe the form, and focussing on your body's feeling to fine-tune technique.

Concentrate on your proprioceptive sense and you will discover whether the exercise is working the right muscles.

Next time, I urge you to be calm and unhurried - emphasise optimal process - and desired outcomes will follow.

Best regards,

Nigel.

---

PS. Bruce Lee quotes to ponder 😔.

Now they come for the blue-collar jobs

From The Economist this week (my emphasis):


"Every day more than 8,000 containers flow through the Port of Rotterdam. But only a fraction are selected to pass through a giant x-ray machine to check for illicit contents. The machine, made by Rapiscan, an American firm, can capture images as the containers move along a track at 15kph (9.3mph).

"But it takes time for a human to inspect each scan for anything suspicious—and in particular for small metallic objects that might be weapons. (Imagine searching an image of a room three metres by 14 metres crammed to the ceiling with goods.) To increase this inspection rate would require a small army of people.

"A group of computer scientists at University College London (UCL), led by Lewis Griffin, may soon speed up the process by employing artificial intelligence. Dr Griffin is being sponsored by Rapiscan to create software that uses machine-learning techniques to scan the x-ray images. Thomas Rogers, a member of the UCL team, estimates that it takes a human operator about ten minutes to examine each X-ray.

"The UCL system can do it in 3.5 seconds."

...

"A paper the group presented at the Imaging for Crime Detection and Prevention conference in Madrid last week showed that in tests, the system spotted nine out of ten hidden metallic objects.

"Only six in every hundred readings flagged a weapon when there was nothing. Dr Griffin says this false positive rate has been reduced to one in every 200 since the paper was written in August. The group’s software has also been trained to detect concealed cars.

"The UCL team hopes to test its software shortly on real containers, some with small weapons deliberately hidden inside. Assuming that works, Dr Griffin plans to integrate the artificial-intelligence system into Rapiscan’s scanning systems over the next few months.

"The team is also aiming to train the system to detect “anomalies”—the machine-learning equivalent of a human hunch that something is not quite right about a scan. That could, for instance, be something unusual in the way things are positioned inside the container."
Scanning images is a tedious task, whether at container ports or at airports. Most likely the security staff will be delighted to see this part of their job automated away. There are unlikely to be directly-attributable redundancies. As The Economist pointed out, they still need customs staff to search the containers.

But it's the thin end of the wedge.

---

From the book, "Deep Learning", section 1.2.4


Number of neurons in various AI systems vs biology

System 20 is GoogLeNet, brought to you by Google for image analysis.

Your take home message is that these wonderful AI learning systems which so impress us have, at best, the neural power of a frog. A human-level neural capacity is predicted for 2056 (which seems conservative).

Thursday, December 01, 2016

Change of layout here

Belatedly I've added a blogroll (to the right, scroll down) and taken the opportunity to re-arrange the sidebar. Hopefully this adds some value.

Razib Khan (Gene Expression) is setting up a new blog which won't be live for a couple of weeks.

Recall Neuropath from my post on transgressive science-fiction?



The blogroll now features Scott Bakker's blog, 'Three Pound Brain'. Always good to read a philosopher who rants. Philosophers have a very special way of writing: they take pride in marshalling a vast army of esoteric concepts to surround, envelop, besiege, frame and demarcate some concept of interest.

Scientists and mathematicians just figure out the relevant terms and get modeling.

This from his inordinately lengthy review of Yuval Noah Harari’s Homo Deus.



"Science is steadily revealing the very sources intentional cognition evolved to neglect. Technology is exploiting these revelations, busily engineering emulators to pander to our desires, allowing us to shelter more and more skin from the risk and toil of natural and social reality. Designer experience is designer meaning. Thus the likely irony: the end of meaning will appear to be its greatest blooming, the consumer curled in the womb of institutional matrons, dreaming endless fantasies, living lives of spellbound delight, exploring worlds designed to indulge ancestral inclinations."

Translation: "With AI and VR coming along apace, we'll all soon have the option of wasting our lives in virtual realities which super-stimulate our drives and emotions."

My version may be lacking Bakker's poetry, but it might get me a job at Google 😎.

Wednesday, November 30, 2016

The secondary market in Replikas

From section 1.2.2 of the new book, "Deep Learning",
"As of 2016, a rough rule of thumb is that a supervised deep learning algorithm will generally achieve acceptable performance with around 5,000 labeled examples per category, and will match or exceed human performance when trained with a dataset containing at least 10 million labeled examples."
The personal mind-clone of yourself that Replika offers (eventually!) is trained on c. 40 text messages per day. This supervised learning hits the 5,000 target at 5,000/40 = 125 days, approximately four months.
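
For amusement, the same sum in Python, with the book's second threshold added (just a sketch of the arithmetic; the 40-messages-a-day rate and both thresholds are the figures quoted above):

```python
# Days/years of Replika chat needed to hit the deep-learning rule-of-thumb thresholds.
messages_per_day = 40

acceptable = 5_000           # "acceptable performance"
human_level = 10_000_000     # "match or exceed human performance"

print(f"acceptable performance: {acceptable / messages_per_day:.0f} days")          # 125 days
print(f"human-level:            {human_level / messages_per_day / 365:.0f} years")  # ~685 years
```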

Replika will soon have a vast collection of human-conversation chatbots, albeit biased towards geeky types.

I wonder at Replika's business model .. who owns the rights to my replika?*

If it's the company, I see a good business copying these chatbots and selling packaged versions - perhaps in collaboration with a purveyor of modern-day animatronic mannequins - to those seeking companionship.

I'm trying to imagine my aged mother, before her death, parking an Alfie Boe mannequin on the settee, powered by my very own Replika, 'Bede'.

Alfie Boe

I'd say the picture even looks like me .. .

---

* From the website:
"Intellectual Property

The Service and its original content, features and functionality are and will remain the exclusive property of Luka, Inc. and its licensors."

Monday, November 28, 2016

New 'Deep Learning' book

Amazon link

At 800 pages this will become the standard introduction to machine learning. It's expensive - £59.95 on Amazon - but you can read it for free (in a cumbersome way) as web pages here.

Update: MIT Press were shortsighted in banning a free PDF. There are copies all over the net (try GitHub). Here's a link, although the file is large (370 MB for an 800-page book).

---

Table of contents.
Website
Acknowledgments
Notation

1 Introduction
1.1 Who Should Read This Book
1.2 Historical Trends in Deep Learning

Part I Applied Math and Machine Learning Basics

2  Linear Algebra
2.1 Scalars, Vectors, Matrices and Tensors
2.2 Multiplying Matrices and Vectors
2.3 Identity and Inverse Matrices
2.4 Linear Dependence and Span
2.5 Norms
2.6 Special Kinds of Matrices and Vectors
2.7 Eigendecomposition
2.8 Singular Value Decomposition
2.9 The Moore-Penrose Pseudoinverse
2.10 The Trace Operator
2.11 The Determinant
2.12 Example: Principal Components Analysis

3 Probability and Information Theory
3.1 Why Probability?
3.2 Random Variables
3.3 Probability Distributions
3.4 Marginal Probability
3.5 Conditional Probability
3.6 The Chain Rule of Conditional Probabilities
3.7 Independence and Conditional Independence
3.8 Expectation, Variance and Covariance
3.9 Common Probability Distributions
3.10 Useful Properties of Common Functions
3.11 Bayes’ Rule
3.12 Technical Details of Continuous Variables
3.13 Information Theory
3.14 Structured Probabilistic Models

4 Numerical Computation
4.1 Overflow and Underflow
4.2 Poor Conditioning
4.3 Gradient-Based Optimization
4.4 Constrained Optimization
4.5 Example: Linear Least Squares

5 Machine Learning Basics
5.1 Learning Algorithms
5.2 Capacity, Overfitting and Underfitting
5.3 Hyperparameters and Validation Sets
5.4 Estimators, Bias and Variance
5.5 Maximum Likelihood Estimation
5.6 Bayesian Statistics
5.7 Supervised Learning Algorithms
5.8 Unsupervised Learning Algorithms
5.9 Stochastic Gradient Descent
5.10 Building a Machine Learning Algorithm
5.11 Challenges Motivating Deep Learning

Part II Deep Networks: Modern Practices

6 Deep Feedforward Networks
6.1 Example: Learning XOR
6.2 Gradient-Based Learning
6.3 Hidden Units
6.4 Architecture Design
6.5 Back-Propagation and Other Differentiation Algorithms
6.6 Historical Notes

7 Regularization for Deep Learning
7.1 Parameter Norm Penalties
7.2 Norm Penalties as Constrained Optimization
7.3 Regularization and Under-Constrained Problems
7.4 Dataset Augmentation
7.5 Noise Robustness
7.6 Semi-Supervised Learning
7.7 Multi-Task Learning
7.8 Early Stopping
7.9 Parameter Tying and Parameter Sharing
7.10 Sparse Representations
7.11 Bagging and Other Ensemble Methods
7.12 Dropout
7.13 Adversarial Training
7.14 Tangent Distance, Tangent Prop, and Manifold Tangent Classifier

8 Optimization for Training Deep Models
8.1 How Learning Differs from Pure Optimization
8.2 Challenges in Neural Network Optimization
8.3 Basic Algorithms
8.4 Parameter Initialization Strategies
8.5 Algorithms with Adaptive Learning Rates
8.6 Approximate Second-Order Methods
8.7 Optimization Strategies and Meta-Algorithms

9 Convolutional Networks
9.1 The Convolution Operation
9.2 Motivation
9.3 Pooling
9.4 Convolution and Pooling as an Infinitely Strong Prior
9.5 Variants of the Basic Convolution Function
9.6 Structured Outputs
9.7 Data Types
9.8 Efficient Convolution Algorithms
9.9 Random or Unsupervised Features
9.10 The Neuroscientific Basis for Convolutional Networks
9.11 Convolutional Networks and the History of Deep Learning

10 Sequence Modeling: Recurrent and Recursive Nets
10.1 Unfolding Computational Graphs
10.2 Recurrent Neural Networks
10.3 Bidirectional RNNs
10.4 Encoder-Decoder Sequence-to-Sequence Architectures
10.5 Deep Recurrent Networks
10.6 Recursive Neural Networks
10.7 The Challenge of Long-Term Dependencies
10.8 Echo State Networks
10.9 Leaky Units and Other Strategies for Multiple Time Scales
10.10 The Long Short-Term Memory and Other Gated RNNs
10.11 Optimization for Long-Term Dependencies
10.12 Explicit Memory

11 Practical Methodology
11.1 Performance Metrics
11.2 Default Baseline Models
11.3 Determining Whether to Gather More Data
11.4 Selecting Hyperparameters
11.5 Debugging Strategies
11.6 Example: Multi-Digit Number Recognition

12 Applications
12.1 Large-Scale Deep Learning
12.2 Computer Vision
12.3 Speech Recognition
12.4 Natural Language Processing
12.5 Other Applications

Part III Deep Learning Research

13 Linear Factor Models
13.1 Probabilistic PCA and Factor Analysis
13.2 Independent Component Analysis (ICA)
13.3 Slow Feature Analysis
13.4 Sparse Coding
13.5 Manifold Interpretation of  PCA

14 Autoencoders
14.1 Undercomplete Autoencoders
14.2 Regularized Autoencoders
14.3 Representational Power, Layer Size and Depth
14.4 Stochastic Encoders and Decoders
14.5 Denoising Autoencoders
14.6 Learning Manifolds with Autoencoders
14.7 Contractive Autoencoders
14.8 Predictive Sparse Decomposition
14.9 Applications of Autoencoders

15 Representation Learning
15.1 Greedy Layer-Wise Unsupervised Pre-training
15.2 Transfer Learning and Domain Adaptation
15.3 Semi-Supervised Disentangling of Causal Factors
15.4 Distributed Representation
15.5 Exponential Gains from Depth
15.6 Providing Clues to Discover Underlying Causes

16 Structured Probabilistic Models for Deep Learning
16.1 The Challenge of Unstructured Modeling
16.2 Using Graphs to Describe Model Structure
16.3 Sampling from Graphical Models
16.4 Advantages of Structured Modeling
16.5 Learning about Dependencies
16.6 Inference and Approximate Inference
16.7 The Deep Learning Approach to Structured Probabilistic Models

17 Monte Carlo Methods
17.1 Sampling and Monte Carlo Methods
17.2 Importance Sampling
17.3 Markov Chain Monte Carlo Methods
17.4 Gibbs Sampling
17.5 The Challenge of Mixing between Separated Modes

18 Confronting the Partition Function
18.1 The Log-Likelihood Gradient
18.2 Stochastic Maximum Likelihood and Contrastive Divergence
18.3 Pseudolikelihood
18.4 Score Matching and Ratio Matching
18.5 Denoising Score Matching
18.6 Noise-Contrastive Estimation
18.7 Estimating the Partition Function

19 Approximate Inference
19.1 Inference as Optimization
19.2 Expectation Maximization
19.3 MAP Inference and Sparse Coding
19.4 Variational Inference and Learning
19.5 Learned Approximate Inference

20 Deep Generative Models
20.1 Boltzmann Machines
20.2 Restricted Boltzmann Machines
20.3 Deep Belief Networks
20.4 Deep Boltzmann Machines
20.5 Boltzmann Machines for Real-Valued Data
20.6 Convolutional Boltzmann Machines
20.7 Boltzmann Machines for Structured or Sequential Outputs
20.8 Other Boltzmann Machines
20.9 Back-Propagation through Random Operations
20.10 Directed Generative Nets
20.11 Drawing Samples from Autoencoders
20.12 Generative Stochastic Networks
20.13 Other Generation Schemes
20.14 Evaluating Generative Models
20.15 Conclusion

---

On Amazon.com S. Matthews wrote this insightful review:
"This is, to invoke a technical reviewer cliché, a 'valuable' book. Read it and you will have a detailed and sophisticated practical understanding of the state of the art in neural networks technology. Interestingly, I also suspect it will remain current for a long time, because reading it I came to more and more of an impression that neural network technology (at least in the current iteration) is plateauing.

"Why? Because this book also makes very clear - is completely honest - that neural networks are a 'folk' technology (though they do not use those words): Neural networks work (in fact they work unbelievably well - at least, as Geoffrey Hinton himself has remarked, given unbelievably powerful computers), but the underlying theory is very limited and there is no reason to think that it will become less limited, and the lack of a theory means that there is no convincing 'gradient', to use an appropriate metaphor, for future development.

"A constant theme here is that 'this works better than that' for practical reasons not for underlying theoretical reasons. Neural networks are engineering, they are not applied mathematics, and this is very much, and very effectively, an engineer's book."

A good night's sleep

Clare complained this morning that she had not slept well last night,
"Lying on my side my hips ached, then it was my shoulder. Just couldn't find a comfortable position."
On our way to the shops this morning, I ran some suggestions by her.
"We could replace that expensive new mattress we bought recently with a neutral buoyancy flotation tank?"

"You know, breathing isn't optional with me."
---

My second thought was more inspired.
"You know those vertical wind tunnels? They shoot air up a tube at high speed and it supports people. They learn how to sky-dive. It would be the ultimate air bed!"



I thought I ought to mention a few minor difficulties.
"The air speed is 120 mph. And I believe the four 500 hp engines are quite noisy, expensive to run and would probably occupy too much space."
Clare thought these objections reasonable.

---

I finally recalled the best solution to a comfortable night's sleep - the levitating frog.



"Remember that YouTube video of the levitating frog? It's held up by a strong magnetic field. If we get some superconducting magnets we could simulate zero-g above the bed. You'd float all night!"
Clare got quite excited by this, until I remembered that it took a 10 Tesla field to levitate the frog. Clare is perhaps three hundred times heavier, while the strongest sustained magnetic field ever created is around 40 Tesla.
"There is a small downside. Every metallic object in the house, including the fridge, would be accelerated to insane speeds and would smash its way into the bedroom. The house would be shredded within milliseconds of pressing the on-switch."
---

We decided to compromise on Anadin Extra.