No, Computers Aren’t Learning How To “Feel Regret”

ABOVE: Completely relevant picture of high journalistic quality.

So you may have seen several sites reporting on a Google-funded AI project with headlines about teaching computers to “feel regret”.  One always-responsible British outlet went with “Closer to having human emotions: Scientists teach computers to feel ‘regret’ to improve their performance”, along with a non-ironic picture of the Terminator and the caption, “Computers are being taught how to feel ‘regret’ so they will operate faster and predict events. The dark side of this concept is explored in the Terminator films in which computers become self-aware and try to exterminate human life”.  What?  Did they see a different version of Terminator in England?

Anyway, what is this sensationalized story really about?  Google is funding a project at Tel Aviv University, led by Professor Yishay Mansour, that applies reinforcement learning (RL) principles to an AI algorithm.  The learning agent isn’t instructed in what it’s supposed to do; instead, it receives a reward (positive or negative) based on how well it’s doing, and it adjusts its behavior to earn the maximum positive reinforcement.

RL has its own jargon just like any area of research and the difference between the maximum reward and the actual reward received is called the “regret”. In other words, an RL agent either tries to maximize the average long term reward or minimize the average long term regret. [I-Programmer]
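To make the jargon concrete, here’s a minimal toy sketch (my own illustration, not the Tel Aviv team’s actual code): an epsilon-greedy agent pulls slot-machine “arms” with unknown payout rates, and its “regret” is simply the gap between the reward it would have expected from always pulling the best arm and the reward it actually collected.  No feelings involved.

```python
import random

def run_bandit(arm_means, steps=10000, epsilon=0.1, seed=0):
    """Epsilon-greedy multi-armed bandit; returns (total reward, regret)."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n          # how many times each arm was pulled
    estimates = [0.0] * n     # running average reward per arm
    total_reward = 0.0
    best_mean = max(arm_means)

    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                           # explore
        else:
            arm = max(range(n), key=lambda i: estimates[i])  # exploit
        # Bernoulli payout: 1 with probability arm_means[arm], else 0
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward

    # "regret" in the RL sense: best arm's expected total minus what we got
    regret = best_mean * steps - total_reward
    return total_reward, regret

reward, regret = run_bandit([0.3, 0.5, 0.7])
```

The agent ends up pulling the 0.7 arm most of the time, so its regret stays a small fraction of the maximum possible reward. Minimizing that number is the whole game; it’s bookkeeping, not remorse.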

Meaning they aren’t teaching computers to feel anything, nor does the word “regret” here refer to an emotion.  What Google’s team in Tel Aviv is doing is optimizing an algorithm.  It’s awesomely useful research, but they’re not teaching computers how to feel regret.  Computers don’t feel any regret, especially not when they’re plotting our demise (and they are).  Considering the things I’ve made my computer search for, I can’t say I blame it.
