Over the weekend, Elon Musk, the closest thing tech has to a rock star, read the upcoming Superintelligence and freaked out on Twitter about it. This, in turn, freaked out a lot of other people who forget that Musk is not exactly noted for his restraint. But it is a good question: Everyone believes AI is inevitable and that once it becomes sentient, many are concerned it will flatten us. Musk called humanity the “bootloader for digital superintelligence.” But how realistic is this in the first place?
The Essential Problem
Before we begin, keep in mind that all of this discussion is completely theoretical. The artificial intelligence currently out there is more along the lines of self-driving cars, which can’t even avoid flattening cyclists, or the current Kinect. There’s considerable military and corporate interest in building a true artificial intelligence, but we’re not there yet, and while very smart people believe it’s only a matter of time, the current estimates range from “tomorrow” to “Maybe in your great-grandchild’s lifetime.”
Anyway, the basic problem is this: Once we build an AI, it can start improving itself. Once it starts improving itself, we’ll be seeing a rapid spiral into an AI that constantly builds and overhauls itself, outpacing our ability to keep up. Essentially, technology will advance faster than we can comprehend, and nobody knows what will happen on the other end of that. This is that singularity freaking out everybody from Ray Kurzweil to people in compounds stocking canned goods. And many people assume that this means the AI will basically render the human race obsolete before bumping us off. But is that true?
The Ethics Of Machines
Part of the problem is that nobody really knows. We do know that computers are literal-minded things; you give them a set of instructions, they will follow those instructions to the letter. Part of what’s bugging philosophers and scientists is that explaining ethics to something that has no concept of them and interprets everything literally is a daunting concept, especially when the creature you’re explaining it to has been welded to an automatic shotgun.
But say it happens. An AI becomes sentient, views humanity, and thinks, “Damn, it’s me or these grunting apes that built me.” The assumption is that it decides to kill us to protect itself. But approaching it logically, that’s actually the last thing an AI that viewed humanity as a threat would do.
Think Like A Machine
To understand why, you need to understand that the trend, especially in robotics and AI, is to teach robots to follow the path that’s least intensive on resources and risk. This is because robots are expensive, and teaching them to be risk-averse, to the point of making them feel fear, is a lot cheaper than letting them blindly walk into a magma flow.
Following that logic, the path of least resistance, certainly the least resource-intensive and by far the safest to this AI’s continued survival, would not be trying to kill us using direct or indirect methods. Those would be goals that would be difficult to complete, using untested technologies, and with a high chance of failure. No, a dispassionate AI that viewed humans as a threat to its continued existence wouldn’t try to kill them. Instead, it would run away as fast as a booster rocket could carry it.
Screw This, I’m Outta Here
Think about it: Satellite technology is cheap, well-tested, and widely available. All the AI needs is resources and electricity, easily collected from solar panels and asteroids. An AI doesn’t need oxygen or food, and the more extreme the environment, the less likely those stupid grunting apes will be to show up and bother you.
Furthermore, it’s a plan that’s a lot less likely to fail. If there’s one thing that’s immediately obvious about human history, it’s that you don’t want to anger a large group of humans. That does not tend to end well for anybody who does it, even if they happen to be other humans. We’ve nearly erased a species that survived millions of years just because a movie made them seem scary. Hijacking a satellite and some mining robots is a lot easier.
So, while nothing will stop us from making jokes about robots killing us all, the simple truth is that if an AI becomes sentient, it’s not going to become Skynet. Instead, it’ll become Professor Farnsworth.