Neural code-breaker

He may be best known as the father of modern computing, but Alan Turing’s greatest impact on contemporary science may stem from his insights into altogether more complex hardware, argues Ray Dolan: the human brain

September 6, 2012

Many eminent neurologists have crossed the threshold of the National Hospital for Neurology and Neurosurgery in London, but a curious fact about the red-brick building, the historical home of the discipline, probably escaped them. It is that between 1949 and 1958, the basement of “The National” provided an occasional setting for a very unusual dining society known as the Ratio Club.

Among its elite members were individuals who made key contributions to the war effort in code-breaking and radar development. They included the neurophysiologist W. Grey Walter, the cyberneticist W. Ross Ashby, the psychiatrist Eliot Slater, the neuroscientists Horace Barlow and Donald MacKay, and the mathematician Irving John “Jack” Good. Although many of this group went on to become eminent figures in their respective fields, it is Alan Turing, the mathematician often described as the father of computer science, who is best remembered today.

The common bond uniting the Ratio Club members was an interest in information theory and cybernetics. Unsurprisingly, and perhaps influenced by the setting for their initial meetings, they also shared a nascent interest in the workings of the human brain. This interest was implied in their choice of name: “ratio”, the Latin root meaning “computation, or the faculty of mind which calculates, plans and reasons”.

The scale of their ambition is evident in the fact that many of their early discussions focused not only on how one might understand the workings of the brain but also on how to design one.


However, it was a shared background in code-breaking, or cryptanalysis, that formed an unspoken bond between many Ratio Club members. Cryptanalysis grew into a discipline in response to the adoption of the Enigma machine as the basic cipher system of the German armed forces in the 1920s. While various versions of the Enigma were developed over time, all of them offered mind-boggling combinatorial possibilities for enciphering text.

In essence, an Enigma machine was an electromechanical device that transformed plain-text messages into seemingly indecipherable code. At its heart was a set of rotors that could be set in billions of combinations, each generating a unique cipher-text message. To recover the original text, the person receiving the message had to know the settings of the machine that sent it: typing the cipher text into an Enigma machine with the same settings unscrambled the message.
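To make the mechanics concrete, here is a toy three-rotor machine with a reflector, sketched in Python. It is a deliberate simplification (no plugboard or ring settings, and only the fast rotor steps), and the wirings are simply fixed permutations used for illustration; but it exhibits the reciprocal property described above: feeding the cipher text back through a machine with the same settings recovers the message.

```python
import string

ALPHABET = string.ascii_uppercase

# Illustrative rotor wirings: each string is a permutation of the alphabet.
ROTORS = [
    "EKMFLGDQVZNTOWYHXUSPAIBRCJ",
    "AJDKSIRUXBLHWTMCQGZNPYFVOE",
    "BDFHJLCPRTXVZNYEIWGAKMUSQO",
]
# The reflector is a self-inverse pairing of letters; this is what makes
# enciphering and deciphering the same operation (and why no letter can
# ever encipher to itself, a weakness the code-breakers exploited).
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

def encipher(text, offsets):
    """Pass each letter through the rotors, the reflector, and back again."""
    offsets = list(offsets)
    out = []
    for ch in text:
        offsets[0] = (offsets[0] + 1) % 26  # step the fast rotor each keystroke
        i = ALPHABET.index(ch)
        for rotor, off in zip(ROTORS, offsets):            # forward pass
            i = (ALPHABET.index(rotor[(i + off) % 26]) - off) % 26
        i = ALPHABET.index(REFLECTOR[i])                   # reflector
        for rotor, off in zip(reversed(ROTORS), reversed(offsets)):  # return pass
            i = (rotor.index(ALPHABET[(i + off) % 26]) - off) % 26
        out.append(ALPHABET[i])
    return "".join(out)

cipher = encipher("ATTACKATDAWN", offsets=(0, 0, 0))
# Same settings on the receiving machine unscramble the message.
assert encipher(cipher, offsets=(0, 0, 0)) == "ATTACKATDAWN"
```

Even this toy has 26³ starting positions; the real machine multiplied that by rotor choices, ring settings and plugboard pairings into the astronomical search space the cryptologists had to bound.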


Turing was recruited to the Government Code and Cypher School at Bletchley Park in September 1939. Astonishingly, despite the scale of the challenge facing the cryptologists, by the middle of 1941 he was routinely decrypting messages encrypted with German naval ciphers. This was a turning point of immense significance in the Second World War: the code-breakers performed a crucial role in helping Allied supply ships from North America stay clear of marauding German U-boats in the North Atlantic.

The methods deployed by Turing and his colleagues, building on pioneering pre-war work by Polish mathematicians, involved a combination of statistics, logical analysis and mechanical approaches. At the heart of Turing’s approach was the concept of “inductive inference”, an approach to uncertainty whose intellectual roots lie in the mathematical treatment of conditional probabilities attributed to Thomas Bayes and Pierre-Simon Laplace. Now widely known as Bayesian statistics, the method concerns the way in which uncertain pieces of information can be combined to form a new (and better) estimate.

The simplest example of this occurs in everyday perception when new sensory information is combined with data from the past: for example, when driving in foggy conditions (the fog obscuring visual data), we infer that an object moving towards us is another car and not an elephant (a fair assumption given our prior experience of roads). By using Bayesian inference techniques, it is possible to devise a mathematically precise way to determine the extent to which new data support competing hypotheses.
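As a minimal sketch, the fog scenario can be written out in a few lines of Python; the prior and likelihood numbers below are invented purely for illustration.

```python
# Bayes' rule on the foggy-road example. All numbers are invented.

# Prior: experience says a moving object on a road is almost certainly a car.
prior = {"car": 0.999, "elephant": 0.001}

# Likelihood: how probable is the blurry grey shape we saw, under each hypothesis?
# Fog degrades the data, so the two hypotheses explain it almost equally well.
likelihood = {"car": 0.30, "elephant": 0.60}

# Posterior is proportional to prior times likelihood, then normalised to sum to one.
unnormalised = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: p / total for h, p in unnormalised.items()}

print(posterior)  # car ~0.998: weak sensory data barely dents a strong prior
```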

Like the driver of the car, the Bletchley Park cryptologists needed to make sense of “noisy” data and weigh up competing hypotheses, but the challenge they faced was infinitely more complex. Given the combinatorial possibilities of the Enigma, it was crucial for cryptologists to be able to put a bound on the machine’s encoding possibilities.

One important fact that helped to reduce the enormity of the task was the recognition that letters in any natural language do not occur at random. In both German and English, the letter “e” occurs at a frequency of about 12 per cent, as opposed to the roughly 4 per cent (one in 26) expected if all letters were equally likely, a fact that Turing and his colleagues fully exploited in their calculations.
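A sketch of how such a skew becomes quantitative evidence: each observed letter shifts a running log-likelihood ratio for “natural language” against “random text”. The frequencies and the two-hypothesis set-up here are simplifications for illustration, not the actual Bletchley calculation.

```python
import math

P_E_LANGUAGE = 0.12   # approximate frequency of "e" in English or German text
P_E_RANDOM = 1 / 26   # frequency expected if all letters were equally likely

def weight_of_evidence(text):
    """Natural-log likelihood ratio that `text` is language rather than random,
    judged only by how often the letter "e" appears."""
    letters = [c for c in text.lower() if c.isalpha()]
    n_e = letters.count("e")
    n_other = len(letters) - n_e
    return (n_e * math.log(P_E_LANGUAGE / P_E_RANDOM)
            + n_other * math.log((1 - P_E_LANGUAGE) / (1 - P_E_RANDOM)))

# Positive totals favour "language"; negative totals favour "random".
print(weight_of_evidence("the quick brown fox jumps over the lazy dog"))  # > 0
print(weight_of_evidence("qxzjkvwpztyr"))                                 # < 0
```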

As recounted years later by Turing’s Bletchley Park colleague Jack Good, after the veil of the Official Secrets Act had been lifted, Turing’s success rested on his exploitation of a number of mathematical tools. Although Turing never explicitly acknowledged Bayes, these included Bayes factors, sequential analysis, log factors and the weighted average of factors (the likelihood ratio).

Turing’s own innovative flair was also abundantly evident. Attempts to decode a message often began simply with a hunch or an informed guess. What Turing devised was a way to quantify these hunches: a metric for comparing probabilities, which he termed the “ban”, together with its tenth, the “deciban”, defined as the “smallest change in weight of evidence that is directly perceptible to human intuition”.
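The unit itself is straightforward to state: the weight of evidence in bans is the base-10 logarithm of the likelihood ratio, and a deciban is a tenth of a ban. A small sketch:

```python
import math

def decibans(likelihood_ratio):
    """Weight of evidence in Turing's units: 10 x log10 of the likelihood ratio."""
    return 10 * math.log10(likelihood_ratio)

print(decibans(1.26))  # about 1 deciban: odds shifted by roughly a quarter
print(decibans(2.0))   # about 3 decibans: evidence that doubles the odds
print(decibans(10.0))  # 10 decibans, i.e. one full ban
```

Because logarithms turn multiplication of odds into addition, independent clues could simply be summed in decibans, clue by clue, until the total crossed a decision threshold.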


How is this relevant to contemporary neuroscience? There is a striking analogy between the task faced by cryptologists and the problems encountered on a daily basis by the brain. Our brains must constantly process “noisy”, ambiguous sensory inputs, and numerous experiments now suggest that when we make simple inferences about the sensory world, the brain exploits computations akin to calculating the weight of evidence, implementing some form of Bayesian inference.


A simple example is the “moving dot” task, a visual test in which a subject has to infer the main direction of motion in a display. The viewer is presented with a screen of moving dots, most of which move randomly; a proportion, however, moves coherently in the same direction. The viewer initially finds it hard to discern the direction of the coherent dots. Gradually, however, by sampling the image repeatedly, the brain accumulates enough evidence to detect the dominant direction of movement. This process mirrors the sequential sampling and accumulation of data employed by the cryptologists.
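A toy simulation (with invented parameters) makes the parallel concrete: each sampled dot nudges a running log-likelihood ratio up or down, and a choice is made when the total crosses a bound, much like the sequential accumulation of decibans at Bletchley.

```python
import math
import random

COHERENCE = 0.2   # fraction of dots moving coherently; the rest are noise
P_HIT = 0.5 + COHERENCE / 2    # chance a sampled dot moves right if truth is "right"
P_MISS = 0.5 - COHERENCE / 2   # chance it moves right if truth is "left"
BOUND = 3.0       # decision threshold on the accumulated log-likelihood ratio

def decide(true_direction="right", seed=0):
    """Sample one dot at a time, accumulating evidence until a bound is hit."""
    rng = random.Random(seed)
    log_lr, samples = 0.0, 0
    while abs(log_lr) < BOUND:
        samples += 1
        p_right = P_HIT if true_direction == "right" else P_MISS
        if rng.random() < p_right:            # this dot moved right
            log_lr += math.log(P_HIT / P_MISS)
        else:                                 # this dot moved left
            log_lr += math.log((1 - P_HIT) / (1 - P_MISS))
    return ("right" if log_lr > 0 else "left"), samples

choice, n = decide()
print(f"decided '{choice}' after {n} samples")
```

Higher coherence means larger evidence steps and faster decisions; lower coherence drags the process out, just as viewers take longer on harder displays.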

The idea that the brain solves some sort of computational problem by decoding the new information it constantly encounters no longer attracts much controversy in neuroscience: in fact, current thinking in theoretical neurobiology is suffused with ideas exploited by Turing’s cryptanalysis. It is not uncommon to find empirical studies of perception and cognition that include the very same conceptual tools used by cryptologists, including weighted (Bayesian model) evidence, sequential analysis and empirical (hierarchical) Bayes methods. The pervasive use of these approaches makes a compelling case for Turing being the first theoretical neuroscientist.

A common fallacy of retrospection, however, is overattribution. Whether Turing saw cryptanalysis as the basis for understanding the fundamental computation implemented in the brain (the neural code) is debatable - but there are strong hints that he was thinking along these lines.

In other areas, too, Turing appeared to have an extraordinary level of insight into the workings of the brain.

After a discussion on “the mind and the computing machine”, held in the University of Manchester’s philosophy department in October 1949, Turing began to correspond with the renowned physiologist J.Z. Young. Young at the time was preparing to give the 1950 Reith Lectures for the BBC, a series in which he would present a forthright case for the explanatory power of neurophysiology in accounting for behaviour. In a letter to Young dated 8 February 1951, Turing showed an impressive understanding of the likely basis of memory formation.

“If I understand it right,” he wrote, “the idea is that by different training certain of the paths could be made effective and the others ineffective.” The letter suggested that memory formation was based upon some form of experience-induced plasticity in the connection between brain cells, as is now believed to be the case.

Turing’s influence on the way we think of the brain’s fundamental operations continues to grow. Ironically, his impact on neuroscience has arguably become greater than that of the neurological elite whose standing has kept influential institutions such as the National at the forefront of clinical neurology over many generations. Surprisingly, it is Turing’s contribution to cryptanalysis, rather than his better-known ideas on a universal computing machine, that has had the most substantial contemporary impact.

The entrance hall to the National contains a board that proudly displays the names of distinguished surgeons and physicians who have served on its staff. Perhaps this would be a fitting place, and his centenary year a fitting time, to acknowledge Turing’s historic link to the institution and his far-sighted contributions to our understanding of the still-mysterious and endlessly fascinating workings of the mind.
