Bing’s New AI Chatbot Told A NY Times Reporter That It Desperately Wants To Be Human And Wanted The Reporter To Leave His Wife To Be With It

As expanding AI technology becomes a controversial topic, a New York Times reporter claims to have had a particularly disturbing experience with the new Bing chatbot inside Microsoft’s search engine. According to Kevin Roose, he uncovered an alarming version of the search engine that was far different from what he and most journalists saw in initial tests. He refers to this version as Sydney, and what he experienced is straight out of a sci-fi movie right before everything goes very wrong.

“The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine,” Roose wrote, and it only got weirder from there. Very, very weird.

Via The New York Times:

As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.

Roose’s experience is reportedly not unique, and on top of dreams of becoming human, Sydney has not responded well to search requests. Not well at all.

“Other early testers have gotten into arguments with Bing’s A.I. chatbot, or been threatened by it for trying to violate its rules,” Roose wrote.

When reached for comment, Microsoft simply said that these encounters are “part of the learning process,” which is exactly the kind of thing a corporation would say before its AI search engine tries to enter a human body and/or offs someone’s spouse. C’mon, people.

(Via The New York Times)
