Self-driving cars, long the province of Google and science fiction, are increasingly popping up on city streets. Uber, for example, is testing self-driving cars in Pittsburgh, while rumors of Apple entering the automotive industry are swirling more intensely than usual. Google, meanwhile, is adding safety features to its fleet of self-driving vehicles. It seems, like it or not, that robots are hitting the road, and that’s going to change how you drive, both for the better and for the worse.
Robots Are Better Drivers…
Self-driving cars are the culmination of a tricky problem roboticists have been struggling with for decades. Robots are incredibly linear in how they operate: They assess an environment, make a plan, and then execute that plan step by step. This is hard enough in a controlled environment or even a relatively empty space, but even a quiet suburb is packed with information that constantly forces changes in plan, from kids on bikes to jerks in convertibles. In many ways, probing the heart of a volcano is an easier job for a robot than driving your commute to work.
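That assess-plan-execute cycle can be sketched in a few lines of code. To be clear, this is a toy illustration, not anything from a real autonomy stack; every name and number here is made up to show the pattern, with the world reduced to numbered positions along a road:

```python
# Toy sense-plan-act loop: the "linear" robot control pattern.
# All names and the position-based world model are illustrative only.

def sense(world):
    """Assess the environment: snapshot obstacle positions."""
    return set(world)

def plan(obstacles, goal):
    """Make a step-by-step plan toward the goal, routing around obstacles."""
    return [pos for pos in range(goal + 1) if pos not in obstacles]

def act(steps):
    """Execute the plan one step at a time."""
    return [f"move to {pos}" for pos in steps]

# A kid on a bike appears at position 2, so the plan must route around it.
world = {2}
actions = act(plan(sense(world), goal=4))
print(actions)  # position 2 is skipped
```

The catch, as the paragraph above notes, is that the real world doesn't hold still: the moment the kid on the bike moves, the robot has to sense and plan all over again, which is exactly what makes a busy street so much harder than a volcano.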
Self-driving cars solve this with a suite of sensors, usually a range-finding technology such as LIDAR or sonar paired with a stereo camera, combined with GPS and a powerful algorithm built up slowly over thousands upon thousands of hours of driving. Upcoming innovations will only improve them, most notably vehicle-to-vehicle (V2V) communication, which the U.S. government is aiming to make standard before 2020.
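To get a feel for why a car carries more than one kind of sensor, here's a deliberately simplified sketch of fusing two readings into a single braking decision. The thresholds and function names are invented for this example; production systems are vastly more sophisticated:

```python
# Illustrative sensor-fusion sketch: combine a LIDAR-style range reading
# with a camera detection confidence. Thresholds are made up.

def should_brake(lidar_range_m, camera_confidence, stop_distance_m=10.0):
    """Brake if either sensor alone reports a near obstacle,
    or if both sensors weakly agree that one is present."""
    lidar_says_near = lidar_range_m < stop_distance_m
    camera_is_sure = camera_confidence > 0.8
    weak_agreement = (lidar_range_m < 2 * stop_distance_m
                      and camera_confidence > 0.5)
    return lidar_says_near or camera_is_sure or weak_agreement

print(should_brake(25.0, 0.2))  # clear road: no braking
print(should_brake(8.0, 0.1))   # LIDAR alone sees something close: brake
print(should_brake(15.0, 0.6))  # both sensors weakly agree: brake
```

The point of the third case is the whole reason for pairing sensors: two individually uncertain readings can still add up to a confident decision.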
In theory, at least, this means that your average self-driving car is a better driver than most humans. After all, robots don’t text while driving, don’t spill coffee on themselves at 65 mph on the freeway, and don’t cut across five lanes of traffic a half-mile before their exit. Self-driving cars don’t get lost and aren’t stuck with directions like “Turn left at the pond, you’ll know it when you see it.” Once we reach a surprisingly tiny critical mass of self-driving cars, traffic jams could become a thing of the past, as cars plan out alternate routes and drive us to work and play in an orderly fashion. The problem, though, is that this technology is being rushed onto the market, and it’s an open question how robots and humans will share the road.
…But They Can’t Drive Like Humans
Tesla recently debuted self-driving features, and almost immediately, things went wrong. A driver in Utah bashed out his windshield on an overhanging trailer, and a California woman claimed her Autopilot features didn’t engage when they should have, causing her to rear-end another driver.
In both cases, Tesla insists it was driver error. The former was a case of a driver misusing Tesla’s “Summon” feature, and the latter happened because Autopilot disengages when you tap the brakes. What Tesla can’t get away from, though, is that if a human had been driving the car, neither of these accidents would likely have happened.
Even the most advanced self-driving cars are far from infallible. Google’s self-driving car recently collided with a bus because the human behind the wheel didn’t take over, assuming the bus would slow down. The car made the same assumption, because its algorithm “knew” the probability of the bus slowing down was high, and the result was a low-speed crash.
The problem with any algorithm is that its direction is ultimately dictated, to some degree, by a human, whether through the data it’s fed or through the goals it was designed to pursue in the first place. As a result, human flaws get translated into the algorithm. For example, Microsoft’s artificial Millennial Tay became a racist nutjob within hours of unprotected contact with Twitter.