English philosopher-mathematician Alan Turing proposed the Turing Test to address the question “Can machines think?” in his 1950 paper “Computing Machinery and Intelligence.” Two years later, in a BBC radio broadcast, Turing considered a similar problem: whether a jury could put questions to a computer such that the computer’s responses would convince them it was really a person.
In “Computing Machinery and Intelligence,” Turing described the first version of the Turing Test (paraphrased):
Suppose that three people take part in a game: a man (A), a woman (B), and an interrogator (C), who can be of either sex. C is in a different room than A and B. C must identify which of the two is the man and which is the woman. C can address questions to A and B, but C only receives the responses through a teleprinter. A must deceive C, while B must help C.
“What will happen when a machine takes the part of A in this game?”
The machine can play this “imitation game,” and though it may be at a disadvantage, it can play well enough that the game is a fair test. Turing suggested that “child machines” be built and taught so that they grow to communicate in natural language at the level of adult humans.
In “Computing Machinery and Intelligence,” Turing considered the ways machines might think and addressed objections that others could raise. He closely examined the meaning of “thinking” and laid the groundwork that future researchers would use in their notion of a machine “thinking.” First, Turing considered the objection that a machine can’t think because thinking is a function of the soul God gave to man; he responded that creating thinking machines would not take away from God’s power. Then he addressed the objection that, because thinking machines could be detrimental to mankind, we shouldn’t believe they can think; Turing replied that this confuses what can be with what should be. Turing continued that mankind can be wrong in our mathematical research and that, therefore, the mathematical limits we derive for computers don’t limit how computers can think.
He continued to answer the objection that a computer can’t think on the grounds that it can’t have conscious experiences or understanding, since we only observe the computer’s behavior. Turing addressed this concern by replying that we likewise only know the conscious experiences of other people by observing their behavior. Turing addressed the argument that a computer will never do a certain thing, such as fall in love or tell right from wrong, by responding that many of these claims rest on assumptions: it’s possible, for example, to program a computer to make a mistake, or to have it report on its internal processes as though they were its own thoughts. In the 1840s, the mathematician Lady Lovelace had raised the objection that computers can’t be original because they can only do what they are programmed to do. Turing replied that computers do surprise humans, and that Lovelace was limited by the scientific knowledge of her time.
Neuroscience researchers had shown that the human brain does not, in many cases, behave the way a computer behaves, and some raised the objection that a computer therefore can’t mimic the nervous system. Turing didn’t deny this claim but instead argued that computer scientists can simulate the nervous system to a sensible level of accuracy. He then addressed the argument that computers, unlike humans, behave predictably, replying that rules of conduct are not the same as laws of behavior, and that there may be predictable laws governing how humans behave as well. Finally, Turing gave the benefit of the doubt to the possibility that extrasensory perception might change how humans perceive the world, but replied that, if such a thing were to exist, we could design Turing tests so that mind-reading wouldn’t affect them.