The computer or robot that becomes self-aware and decides to kill everybody is a tried-and-true science-fiction premise. As mankind continually attempts to create and perfect artificial intelligence (probably in response to the fact that they just won’t stop making Terminator movies), it has also set benchmark tests to determine whether this has actually happened or whether it’s just a decent simulation. The most famous of these is the Turing Test, in which a conversation with a computer is indistinguishable from a conversation with a real human, and once in a while someone claims to have actually cracked it.
One of the more recent examples of these benchmark tests, and the one apparently solved by a robot in the Rensselaer Polytechnic Institute’s Artificial Intelligence and Reasoning Lab, is based on the “Three Wise Men” puzzle. Researchers set up three NAO robots and presented them with this scenario: two of the robots were given a “pill” that would prevent them from speaking (in other words, they had their speech volume turned down), and the third was “given” a “placebo.” The test was to see whether the robots could determine which of them got the placebo.
“I don’t know… Sorry! I know now!”
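The logic behind that exchange is simpler than it sounds: every robot tries to answer “I don’t know,” but only the placebo robot actually hears its own voice, which is the clue it needs. A toy sketch of that inference (an illustrative simulation, not the Rensselaer lab’s actual code; the class and phrasing here are invented for the example) might look like this:

```python
# Toy simulation of the "wise men" self-awareness test.
# Hypothetical model, not the code used in the actual experiment.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted        # "took the pill" = speech disabled
        self.knows_answer = False

    def try_to_speak(self, phrase):
        # A muted robot produces no sound; an unmuted one hears itself.
        return None if self.muted else phrase

    def answer(self):
        # Step 1: no robot knows which pill it got, so each tries to say so.
        heard = self.try_to_speak("I don't know")
        # Step 2: hearing its OWN voice tells the robot its speech works,
        # so it must be the one that received the placebo.
        if heard is not None:
            self.knows_answer = True
            return "Sorry, I know now! I got the placebo."
        return None

robots = [Robot("A", muted=True), Robot("B", muted=True), Robot("C", muted=False)]
responses = [r.answer() for r in robots]
# Only the un-muted (placebo) robot ends up answering.
```

The point the sketch makes is that the robot isn’t introspecting in any deep sense; it’s connecting an external observation (its own audible voice) back to a fact about itself.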
So, what does this mean? Well, not as much as you might think. These robots aren’t actually self-aware (that we know of, anyway…); they simply passed a test for self-awareness. What it does mean is that the technology has advanced to the point where it can reason about itself and solve problems. That kind of reasoning could be genuinely beneficial when robots are sent to perform tasks that humans can’t, such as space exploration and deep-sea operations.
Still, we should probably keep an eye on them. Just in case.