The Rise Of Hate In The Digital Age

The internet is widely acclaimed as a marketplace of ideas. Silicon Valley prides itself on catering to far-ranging viewpoints, and the language of broad-mindedness is embedded in tech-CEO speak. But lost amid the rhetoric is the reality that the internet can also offer a haven for hate. Extremists go online to recruit new members, spread propaganda, and push their agendas, increasingly in plain view.

Why does this happen? Hate groups have always recruited by looking for the disaffected and the vulnerable. Sociologist Pete Simi, who has studied how hate groups recruit new members, told the Southern Poverty Law Center that, fundamentally, the appeal of many hate groups, at least at first, is that they promise a way to ignore the complexity of the world:

Low tolerance for ambiguity seems to cut across most if not all extremist ideologies; this goes along with a certain type of concrete thinking where a person wants to categorize things as “black and white” rather than deal with so-called “gray areas.”

At the most fundamental level that’s what most of these movements are all really based on — oversimplifying a highly complicated world — and that’s a powerful thing to offer people, especially those who feel lost or are looking for some easy answers.

Before the internet, this often meant finding people who were struggling. Neo-Nazis recruited at failing factories, stale-smelling bowling alleys, bars, rehab centers: anywhere they might find lonely, disaffected people looking for simple solutions to their problems. Online, it's far easier. Potential recruits come calling on their own, and technology has inadvertently made it easier for them to stumble onto racist ideologies.

“They [hate groups] have learned how to game the Google system,” Heidi Beirich, Intelligence Project Director for the SPLC, tells Uproxx. “It went from a system that really did look for the most authoritative pieces of information, to one that was monetizing you, the user’s experience. So it became more like Amazon.”

Not helping matters is that search engine algorithms can't recognize hate the way humans do: Google Maps returned the White House for searches containing a racial slur until users reported it. And with millions of results to filter, no platform can keep at least some hate from slipping through.

Beirich uses the example of "black on white crime." In reality, the data shows black-on-white homicide is extremely rare. In fact, it's increasingly clear that most violent crime happens within close networks of people who know one another, rather than as random acts of violence committed by the "other."

But if you search for it on Google, you can easily find sites that either twist these statistics to claim there's a clear and present danger to the reader, or traffic in outright hate speech that lies about what the data says or ignores the data entirely. And once Google knows what you're interested in, it will simply keep serving you the articles you want, with no ability to judge, morally, whether it's providing facts or not.
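To see why that loop narrows what a user sees, consider a toy sketch of click-based personalization. This is a deliberate simplification with hypothetical scoring, not Google's actual ranking system; the point is only that engagement, not accuracy, drives the boost.

```python
from collections import Counter

# Toy model of click-based personalization. Hypothetical scoring,
# not Google's actual ranking algorithm.
click_history = Counter()  # topic -> number of past clicks

def rank(results, history):
    """Order results by base relevance plus a boost for topics the
    user has clicked before. Accuracy never enters the score."""
    return sorted(
        results,
        key=lambda r: r["relevance"] + history[r["topic"]],
        reverse=True,
    )

def click(result):
    """Record a click; the clicked topic rises in all future rankings."""
    click_history[result["topic"]] += 1

results = [
    {"title": "Crime statistics, in context", "topic": "data", "relevance": 2.0},
    {"title": "Misleading 'crime wave' post", "topic": "fear", "relevance": 1.5},
]

# A single click on the misleading result is enough to flip the order.
click(results[1])
print([r["title"] for r in rank(results, click_history)])
# The misleading post now outranks the factual one, and every further
# click widens the gap.
```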

Google, to its credit, has acknowledged this problem and has begun the long, hard work of removing hate speech from its search results. But there’s a winding road ahead.

“It’s been a really difficult problem for these platforms, because they’re trying to avoid censoring voices that are simply dissenting,” notes Keegan Hankes, the SPLC’s Managing Data Intelligence Analyst.

The ambiguity of human language works against computers: any program can spot a racial slur, but catching a politician or talking head spouting “dogwhistle” language, code words used in place of explicitly racist ones, may be up to users rather than algorithms. Over time, echo chambers build, where people searching for hate find only more hate. It’s what experts describe as a “hate rabbit hole.” Users can join message boards and find more people who agree with them, or even willing recruiters.
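A minimal sketch of why that gap exists (hypothetical word lists, not any real platform's filter): a keyword matcher flags explicit slurs reliably, but coded language passes because every individual word is innocuous; the hateful meaning lives in context the matcher never sees.

```python
import re

# Hypothetical blocklist of explicit slurs (placeholders, for illustration).
EXPLICIT_SLURS = {"slur1", "slur2"}

def contains_slur(text: str) -> bool:
    """Flag text containing an explicit slur. Trivial for a machine."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in EXPLICIT_SLURS for word in words)

# The exact same check fails on dogwhistles, because each token is
# ordinary on its own and the coded meaning depends on context.
print(contains_slur("they shouted slur1 at the rally"))         # True: caught
print(contains_slur("defend our heritage from the globalists")) # False: missed
```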

They can go to darker places, too. Beirich points out that Dylann Roof consumed some of the hate the internet had to offer, then walked into a Charleston church and killed nine innocent people without any qualms.

Worse, the misinformation is becoming increasingly targeted at broader social anxieties, like those surrounding Muslim immigrants and undocumented workers. Uncertainty and fear around these topics, often divorced from reality, are exploited to rile people up. Hate, unfortunately, seems likely to always be with us. But with the internet making it possible to reach and connect with anyone, we have to work together to keep it from spreading like a virus.