You may have heard that a computer passed the Turing Test over the weekend. There have been all sorts of media freakouts over this, including claims that this is the first time a computer has passed the Turing Test (it isn't). So what's hype, what isn't, and why is the Turing Test so important?
So what is the Turing Test?
It’s pretty simple, actually: Can we teach a computer to converse with humans well enough that they think there’s a person on the other end of the line? It was proposed by Alan Turing as a replacement for the thorny and ambiguous question of whether what computers do can be construed as actual thought. If the computer can fool at least a third of its judges, it passes the test.
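In practice, chatbots built to play this imitation game have historically leaned on simple keyword matching: fire a canned reply when a pattern hits, and deflect when nothing does. Here's a minimal, hypothetical sketch of that approach (the rules and replies are invented for illustration, not Goostman's actual code):

```python
import re

# Hypothetical keyword rules: (pattern to match in the judge's message, canned reply).
# Real programs use thousands of these, plus state tracking; this is the bare idea.
RULES = [
    (re.compile(r"\bhow old\b", re.I), "I am thirteen years old."),
    (re.compile(r"\bwhere\b.*\bfrom\b", re.I), "I live in Odessa, in Ukraine."),
    (re.compile(r"\b(robot|computer)\b", re.I),
     "Ha! Why do you ask such strange questions?"),
]

# When no rule matches, deflect -- blaming the language barrier is exactly the
# trick the thirteen-year-old-Ukrainian-boy persona is designed to enable.
DEFLECT = "Sorry, my English is not so good. Can you ask something else?"

def reply(message: str) -> str:
    """Return the first matching canned reply, or a deflection."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return DEFLECT
```

The point of the sketch is how shallow the mechanism can be: nothing here understands the conversation, it just pattern-matches and changes the subject.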
That… sounds pretty simple.
In theory, it should be. In practice, well, listen to this sad robot try to convince you it’s a human woman. That’s a good example of how hard the Turing Test is to clear.
So, the Turing Test is a silver bullet for computing?
Not so much. It’s really more Turing’s attempt to make the discussions we have about computers better reflect the science involved. It sounds smart-assed to say that human behavior and intelligent behavior aren’t always the same thing, but it’s undeniably true.
Similarly, it’s not about the internal processes of the computer, but how the computer appears to a human, something Turing himself noted in the paper proposing the test. Basically, the Turing Test boils down to asking which computer has the most convincing Halloween costume.
A third problem is humans themselves. For example, the program that passed the test this weekend, Eugene Goostman, was designed to emulate a thirteen-year-old Ukrainian boy, and the test was judged by English academics. Russian-speaking academics might have spotted misused slang more easily, and would have been less likely to chalk up odd statements or behaviors to the language barrier.
So why is passing the Turing Test so important?
Really, the Turing Test is less about computers and more about us. Think about the emails you send during the day: the slang you use, the in-jokes you make, the unspoken meaning behind the words. Simulating that is incredibly difficult even for other humans. Creating a program that can, however clumsily, simulate that means humanity has a better grasp of itself.
What will this mean for us, long-term?
Mostly that customer service chatbots will actually be useful. The Turing Test is of limited practicality for “real world” uses, despite the current panic about how computers are going to steal our credit cards. Remember, it’s limited to text; the computer doesn’t have to synthesize or recognize speech, both of which are systems that, while surprisingly advanced, are still kind of flawed. Just ask anybody who’s used a Kinect recently.
Things really aren’t going to get interesting until we’ve refined these programs to the point where they can fool most of us. That’s still a long time away… but hey, no reason we can’t kill time making jokes about HAL until then, right?