Seventy years ago this week, English mathematician Alan Turing gave a speech to the London Mathematical Society that’s as fresh in concept today as it was on Feb. 20, 1947. The Bletchley Park alumnus—having played a central role in cracking the Enigma code, a key breakthrough in Allied efforts to defeat the Germans—had turned his attention to the field of digital computers and what we now know as artificial intelligence.
After describing the functions and designs of the “digital computing machine,” Turing grew increasingly philosophical in his presentation, posing questions about the relationship between man and machine:
“It is also true that the intention in constructing these machines in the first instance is to treat them as slaves, giving them only jobs which have been thought out in detail, jobs such that the user of the machine fully understands what in principle is going on all the time. Up till the present machines have only been used in this way. But is it necessary that they should always be used in such a manner?”
Turing went on to posit that man could create a machine that improved itself over time, modifying its initial instruction tables to ultimately operate more efficiently. “When this happens I feel that one is obliged to regard the machine as showing intelligence,” Turing said.
In 1950, Turing introduced a tool for determining whether a machine was intelligent—the Turing Test, or an “imitation game” (hence the title of the Benedict Cumberbatch film). In a Turing Test, a human judge unwittingly interacts with a machine. If the machine’s responses can convince the judge that he or she is conversing with another human, the machine possesses “intelligence.” No machine was widely reported to have passed the Turing Test until 2014, when a computer program successfully impersonated a 13-year-old boy.
Tech Time Warp is a weekly feature that looks back at interesting moments and milestones in tech history.