Summary of ‘Computing Machinery and Intelligence’ (1950) by Alan Turing

Can Machines Think?

This question begins Alan Turing’s paper ‘Computing Machinery and Intelligence’ (1950). However, he found the form of the question unhelpful: even defining ‘machines’ and ‘think’ in terms of their common usage would be dangerous, as it could mistakenly lead one to believe the answer could be obtained from some kind of statistical survey.

Instead, he replaced the question with ‘The Imitation Game’ - a game of two rooms. In one sits a man (A) and a woman (B). In the other sits an interrogator (C), who must determine which of the other two is the man and which is the woman through a series of typewritten communications [1].

  • A’s objective is to cause C to make the incorrect identification.
  • B’s objective is to help C to make the correct identification.

Turing noted that the best strategy for the woman (B) is probably to give truthful answers, perhaps adding remarks like “I am the woman, don’t listen to him!”, but this alone will not be enough, as the man (A) can make similar remarks.

He then reframed the original question as ‘What happens when a machine takes the role of A?’ Will the interrogator decide wrongly as often when a machine plays this role as when it is played by a man?
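
To make the structure of the reframed game concrete, here is a minimal sketch in Python. It is my own illustration, not anything from the paper; the function names (machine_witness, human_witness, play_round) and the canned answers are hypothetical stand-ins for the real participants.

```python
import random

# Hypothetical sketch (not from the paper) of one round of the reframed game:
# one witness is a machine trying to be mistaken for the human,
# the other is a human trying to be identified correctly.

def machine_witness(question: str) -> str:
    # A real contestant program would have to produce convincingly human answers.
    return "I am the human - don't listen to the other one!"

def human_witness(question: str) -> str:
    return "I really am the human; please believe me."

def play_round(questions, guess_human_label) -> bool:
    """Return True if the interrogator identifies the human correctly."""
    # Hide which witness sits behind which label, as the separate rooms and
    # typewritten answers do in Turing's description.
    roles = [machine_witness, human_witness]
    random.shuffle(roles)
    witnesses = dict(zip(["X", "Y"], roles))

    transcript = [(label, q, witnesses[label](q))
                  for q in questions for label in ("X", "Y")]

    guess = guess_human_label(transcript)   # label the interrogator believes is human
    return witnesses[guess] is human_witness

if __name__ == "__main__":
    # A deliberately naive interrogator that guesses at random.
    naive_guess = lambda transcript: random.choice(["X", "Y"])
    trials = 1000
    correct = sum(play_round(["What is your favourite season?"], naive_guess)
                  for _ in range(trials))
    print(f"Correct identifications: {correct}/{trials}")   # roughly 50% by pure chance
```

Turing’s prediction discussed further below can be read as a claim about the long-run value of correct/trials against a well-programmed machine: after five minutes of questioning, an average interrogator would be right no more than 70% of the time.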

Testing Machine Intelligence

In critiquing his new problem, Turing was satisfied with the clear line drawn between the physical and intellectual capacities of a human by the separation of rooms and the use of written communication. He remarked that there is little point in a ‘thinking machine’ looking, sounding or feeling like a human for the purpose of testing intelligence.

He then defined the meaning of the word ‘machine’ within the context of the game. Initially he stated that any kind of engineering technique should be permitted, yet wishing to exclude humans born in the usual manner and other biological creations, and noting that interest in thinking machines at the time had been aroused by one particular kind of machine, he concluded that the ‘machine’ in the game should refer to a ‘digital computer’.

Digital Computers

Given that computers are something we take for granted today, it’s fascinating that Turing starts his definition of digital computers with the analogy of a human computer - a person who follows a set of fixed rules, using an unlimited supply of paper for calculations. Putting yourself in the headspace where this analogy was required is humbling.

He noted that the idea of a digital computer is an old one, referencing the various Analytical Engine designs produced by Charles Babbage up until his death in 1871. Although the Engine was entirely mechanical in design, Turing found its ideas no less important.

Turing defined a digital computer as having:

  • A Store of information for calculations and the rules the computer must follow.
  • An Executive unit which carries out the individual operations.
  • A Control which ensures that instructions are performed in accordance with the rules and in the correct order.

Numbers should be assigned to parts of the store, allowing the machine to receive instructions like ‘Add the number stored in position 6809 to that in 4302 and put the result back into the latter storage position’. These instructions would be expressed in a machine language as something like ‘6809430217’, where 17 indicates the type of operation, as defined by the rulebook.

Normally instructions are carried out in the order they are stored in memory, unless an instruction asks the control to continue from an instruction in another storage location. The creation of the rules the computer must follow is described as programming, and he highlighted the ability to define such logic as an important factor in mimicking the ability of a human computer.
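
To make the Store / Executive unit / Control picture concrete, here is a small Python sketch of such a machine. It is my own toy illustration of the description above, not code from the paper; the particular operation codes (17 for ‘add’, 20 for a conditional jump, 99 for ‘halt’) are invented for the example, with 17 chosen only to echo Turing’s ‘6809430217’.

```python
# Toy machine in the spirit of Turing's description: a store of numbered
# positions, an executive unit that carries out single operations, and a
# control that normally steps through instructions in order. Instructions are
# packed as digits in the style of '6809430217': two 4-digit store positions
# followed by a 2-digit operation code. The opcodes are invented for this sketch.

ADD = 17            # add contents of the first position into the second position
JUMP_IF_ZERO = 20   # if the first position holds 0, continue from the instruction numbered by the second field
HALT = 99

def decode(instruction: int):
    """Split a packed instruction such as 6809430217 into (pos1, pos2, opcode)."""
    digits = f"{instruction:010d}"
    return int(digits[0:4]), int(digits[4:8]), int(digits[8:10])

def run(store, program):
    pc = 0                                   # the control's current instruction number
    while pc < len(program):
        pos1, pos2, op = decode(program[pc])
        if op == ADD:                        # executive unit performs the operation
            store[pos2] = store.get(pos2, 0) + store.get(pos1, 0)
            pc += 1
        elif op == JUMP_IF_ZERO:             # the 'logic' that lets control continue elsewhere
            pc = pos2 if store.get(pos1, 0) == 0 else pc + 1
        elif op == HALT:
            break
        else:
            raise ValueError(f"unknown operation code {op}")
    return store

if __name__ == "__main__":
    store = {6809: 5, 4302: 10}
    # 'Add the number stored in position 6809 to that in 4302 and put the
    #  result back into the latter storage position', then halt.
    program = [6809430217, 99]
    print(run(store, program)[4302])         # -> 15
```

The conditional jump is the ‘logic’ referred to above: without it, the control could only plough through instructions in their stored order.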

Turing predicted that before the year 2000 it would be possible to program computers to play ‘The Imitation Game’ so well that an average interrogator will not have more than 70% chance of making the right identification after 5 minutes of questioning [2].

Contrary Views

Before putting his arguments forward, Turing went to great lengths to refute expected objections:

Theological Objection

  • Argument: Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.
  • Response: Turing could not accept any part of this argument, and argued that in constructing thinking machines we would no more be usurping God’s power of creating souls than we are in the procreation of children.

Head in the Sand Objection

  • Argument: The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.
  • Response: He claimed this argument would likely appeal most strongly to intellectual people, since they value the power of thinking more highly than others do, and are more inclined to base their belief in the superiority of Man on this power. Turing did not feel the argument was sufficiently substantial to require refutation.

Mathematical Objection

  • Argument: There are a number of results of mathematical logic which can be used to show that there are limitations to the powers of discrete-state machines. The best known of these results is Gödel’s theorem (1931), which shows that in any sufficiently powerful logical system, statements can be formulated which can neither be proved nor disproved within the system.
  • Response: He acknowledged that there are limits to the powers of any particular machine, but saw no proof that such limits do not also apply to the human intellect. Unlike holders of the previous two objections, those holding the mathematical argument would, he expected, be willing to accept ‘The Imitation Game’ as a basis for discussion.

Argument from Consciousness

  • Argument: A machine cannot match a brain until it is conscious, moved to act by its own thoughts and emotions.
  • Response: According to this view, the only way to know that a man thinks is to be that particular man. He noted it may be the most logical view to hold, but it makes communication of ideas difficult: A is liable to believe “A thinks but B does not”, whilst B believes “B thinks but A does not”. Instead of arguing continually over this point, it is usual to adopt the polite convention that everyone thinks. He accepted a certain mystery about the nature of consciousness but did not see it having a bearing on his question.

Arguments from Various Disabilities

  • Argument: “Machines may do all the things you have mentioned, but you will never be able to make one do X.” (Insert: fall in love, have a sense of humour, etc.)
  • Response: He argued that no support is usually offered for these statements, and that they are mostly founded on scientific induction: from the thousands of machines people have seen before them - all ugly, each designed for a very limited purpose, useless when required for a minutely different purpose, and so on - they naturally conclude the same is true of all machines. Turing believed that diversity of behaviour would increase directly with storage capacity, which was increasing rapidly at the time.

Lady Lovelace’s Objection

  • Argument: Lady Lovelace’s memoir stated that Babbage’s Analytical Engine had no pretensions to originate any behaviour of its own.
  • Response: Turing agreed with Hartree’s (1949) analysis that, although the machines constructed or projected at the time did not seem to have this property, this does not imply that it is impossible to construct electronic equipment which will ‘think for itself’.

Argument from Continuity in the Nervous System

  • Argument: The nervous system is certainly not a discrete-state machine. A small error in the information about the size of a nervous impulse impinging on a neuron may make a large difference to the size of the outgoing impulse; therefore one cannot expect to be able to mimic the behaviour of the nervous system with a discrete-state system.
  • Response: He agreed that a discrete-state machine must be different from a continuous machine, but argued that under the conditions of ‘The Imitation Game’ the interrogator will not be able to take any advantage of this difference, since it would be possible to mimic continuous outputs sufficiently well where required.

The Argument from Informality of Behaviour

  • Argument: It is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances. One might for instance have a rule that one is to stop when one sees a red traffic light, and to go if one sees a green one, but what if by some fault both appear together? One may perhaps decide that it is safest to stop. But some further difficulty may well arise from this decision later. To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible.
  • Response: Turing reformulated this argument as ‘if each man acted according to a definite set of rules he would be no better than a machine, but there are no such rules, so men cannot be machines’. He then argued that men may nonetheless be governed by laws of behaviour - the laws of nature as applied to man (‘if you pinch him, he will squeak’). Such laws can only be found by scientific observation, and there are no circumstances under which one could say, “We have searched enough. There are no such laws.”

The Argument from Extrasensory Perception

  • Argument: Unlike machines, humans possess Extrasensory Perception (viz., telepathy, clairvoyance, precognition and psychokinesis).
  • Response: Turing felt this argument was quite strong, yet noted that many scientific theories remain workable in practice in spite of clashing with ESP; in fact one can get along very nicely if one forgets about it. He acknowledged this was not much comfort, and that the domain of thinking is one where ESP may be especially relevant. If telepathy is admitted, the test may need to be modified to use a ‘telepathy-proof room’.

Learning Machines

Given Turing’s lengthy analysis of objections, he suspected the reader would think he had no arguments of his own; however, he devoted the remainder of the paper to his views. Here is a summary of them:

  • Consider the human mind to be like the skin of an onion: in each layer we find operations that can be explained in mechanical terms, but which we say do not correspond to the real mind. If that is true, where is the real mind to be found? Do we ever peel back an onion layer and find it?
  • The only really satisfactory support for thinking machines will be provided by waiting for the end of the century and then playing ‘The Imitation Game’.
  • The problem is mainly one of programming, rather than an engineering or data storage problem. Estimates of the storage capacity of the brain vary from 10^10 to 10^15 binary digits. Only a very small fraction is used for the higher types of thinking; most of it is probably used for the retention of visual impressions. It would be surprising if more than 10^9 were required for satisfactory playing of ‘The Imitation Game’.
  • Think about the process which has brought about the adult mind:
    • The initial state of the mind (birth)
    • The education to which it has been subjected
    • Other experience, not to be described as education, to which it has been subjected

    Why not separate the problem into two: first create a child’s brain, then educate it? Through experimentation with the machine and with teaching methods one could emulate an evolutionary process. Opinions may vary on the complexity of this child machine: one might try to make it as simple as possible consistent with the general principles, or alternatively one might have a complete system of logical inference “built in.” (A rough sketch of this idea appears after this list.)

  • Regulating the order in which the rules of the logical system are applied is important, as at each stage there would be a very large number of valid alternative steps. These choices make the difference between a brilliant and a poor reasoner.

  • An important feature of a learning machine is that its teacher will often be largely ignorant of what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour. This is in clear contrast with the normal procedure when using a machine to do computations, where one’s object is to have a clear mental picture of the state of the machine at each moment in the computation. Intelligent behaviour presumably consists in a departure from the completely disciplined behaviour involved in computation.

  • It is probably wise to include a random element in a learning machine. A random element is rather useful when we are searching for a solution of some problem. A systematic method has the disadvantage that there may be an enormous block without any solutions in the region which has to be investigated first. The learning process may be regarded as a search for a form of behaviour which will satisfy the teacher (or some other criterion). Since there is probably a very large number of satisfactory solutions the random method seems to be better than the systematic. It should be noticed that it is used in the analogous process of evolution. But there the systematic method is not possible. How could one keep track of the different genetical combinations that had been tried, so as to avoid trying them again?
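
As a loose illustration of these last points, here is a small Python sketch of my own (nothing Turing specified) combining the ‘child machine’ idea with the random element: a child machine’s answer table is randomly mutated, and the teacher’s verdict decides which mutations survive - roughly the evolutionary analogy drawn above. The questions, vocabulary and scoring rule are invented for the example.

```python
import random

# Hypothetical sketch of a 'child machine' educated by a teacher, with the
# learning process treated as a random search for behaviour that satisfies
# the teacher rather than a systematic enumeration of possibilities.

QUESTIONS = ["2+2", "capital of France", "colour of the sky"]
TEACHER_ANSWERS = {"2+2": "4", "capital of France": "Paris", "colour of the sky": "blue"}
VOCABULARY = ["4", "5", "Paris", "London", "blue", "green"]

def teacher_score(child):
    """The teacher only sees answers, never the machine's internal table."""
    return sum(child[q] == TEACHER_ANSWERS[q] for q in QUESTIONS)

def educate(generations=200, seed=0):
    random.seed(seed)
    # The child machine starts as simple as possible, with arbitrary behaviour.
    child = {q: random.choice(VOCABULARY) for q in QUESTIONS}
    score = teacher_score(child)
    for _ in range(generations):
        # The random element: mutate one piece of behaviour at random.
        candidate = dict(child)
        candidate[random.choice(QUESTIONS)] = random.choice(VOCABULARY)
        new_score = teacher_score(candidate)
        # The teacher's judgement plays the part of natural selection:
        # keep changes that satisfy the teacher at least as well as before.
        if new_score >= score:
            child, score = candidate, new_score
    return child

if __name__ == "__main__":
    print(educate())   # typically agrees with the teacher on every question after enough 'education'
```

Note that the teacher here only ever sees answers, not the machine’s insides, matching the point above that the teacher of a learning machine is largely ignorant of what is going on within it.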

Conclusion

The words of Alan Turing:

“We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried.

We can only see a short distance ahead, but we can see plenty there that needs to be done.”

Read the Original Paper

Sidenotes

[1] The well-known ‘Turing Test’ was not actually proposed by Turing himself; it is an interpretation (also called the ‘Standard Interpretation’) of ‘The Imitation Game’.

[2] On first look this seems like a measured, calculated prediction (no doubt typical of Turing’s character). Are we there yet? Not quite.



