No, You Shouldn’t Be Worried About Artificially Intelligent Weapons

The Future of Life Institute recently put out an open letter asking militaries not to invest in weapons with artificial intelligence, or AI. They’re worried that intelligent weapons might, you know, decide to kill whatever people they please, which is a reasonable fear. If, that is, we were going to see such complex AI in the timeframe the letter proposes, and if any military capable of making or buying such weapons were interested in them in the first place.

The Future of Life Institute means well, and it raises some important points, namely that you can’t trust a computer with a gun. But nobody does. The U.S. military, for example, is researching autonomous weapons, because of course it is, yet the Department of Defense directive on autonomous weapons states that “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

That’s a fancy way of saying that if the Army has a Terminator, that Terminator can march to a target, it can point a gun at a target, but it can’t shoot until a human gives it the go-ahead. If, for some reason, said Terminator loses contact with the base, it’s not allowed to just pick any targets it wants. And, of course, they have to actually build that Terminator in the first place; although we’ve had some stunning advances in robotics, they’re still, ah, not exactly up to Skynet standards.

It’s true that other nations are likely funding AI weapons research as well. We’ll point you towards Russia’s most advanced robot to explain why that isn’t a concern.

Which brings us to the next issue: timeframe. It’s true that we’re seeing better and better software as we gather more and more data and train algorithms on it. But that’s not artificial intelligence in the sense the letter worries about; it’s a technology with its own set of problems, but it isn’t a sentient computer that can make its own decisions. It’s software that can figure out how to do what a human tells it to.

In truth, the best versions of this software aren’t being developed by the military. They’re things like Siri or IBM’s Watson. In fact, almost every tech company is investing in AI research to make computers do the boring stuff, to figure out the fastest route to the grocery store and predict what you’ll search for next.

Even that takes a staggering amount of resources. Google’s products are so good because the company has a billion people a day teaching them with every search query they type in. We’re nowhere close to software anybody would trust with a weapon; it isn’t even being designed with that use in mind.

There are some fairly serious concerns about how this software is used; if a private corporation can figure out you’re pregnant from your purchases, that’s worrisome, especially since there’s almost no legislation governing how the data you generate can be used. And that’s the real issue: everybody understands the danger of building a Terminator. But the danger of giving marketing departments this software and access to detailed, granular information about us? That we haven’t worked out yet.
