Although Facebook says it is cracking down on extremism through its automated systems, a recent exposé based on a whistleblower’s complaint showed that the big tech company was itself generating content that extremists’ followers are using to recruit and network. Remember those animated celebratory videos Facebook generates about your friendships or content? Facebook was making the same thing for Islamic State terrorists, neo-Nazis and the like. Moreover, the whistleblower says the impression of success Facebook claims in fighting extremists online is greatly exaggerated.
Can big tech companies deliver on clamping down on extremism?
The internet has completely revolutionized politics. Long gone are the days when Walter Cronkite told America, “That’s the way it is.” Now anyone with a smartphone can start sharing information and broadcasting from anywhere in the world.
On the one hand, this has been incredible. The amazing achievement of all this technology should not be lost in our worries about how to address extremism. We are able to learn more, and learn faster than at any other time in human history. On the other hand, seriously bad actors are able to get their messages out just as swiftly.
How are big tech companies responding?
Big Tech companies are under a lot of political pressure to remove extremism from their sites. The European Parliament in April passed a law demanding that tech companies take down extremist content within one hour of receiving an order from the government to do so.
President Trump summoned tech leaders in August to the White House to discuss how they plan to combat extremism. Trump said he wants to work with social media companies “to develop tools that can detect mass shooters before they strike,” according to Politico.
This followed a call in May from House Democrats demanding major tech companies release their budgets to fight extremism.
“We must recognize that the internet has provided a dangerous avenue to radicalize disturbed minds and perform demented acts,” Trump said. “We must shine light on the dark recesses of the internet and stop mass murders before they start.”
“The conversation focused on how technology can be leveraged to identify potential threats, to provide help to individuals exhibiting potentially violent behavior and to combat domestic terror,” White House spokesman Judd Deere said, according to IRJ.
Easier said than done.
Why It’s So Hard
Big tech companies are caught in a bind. There is certainly a large amount of white nationalist hate sloshing around online. There is also plenty of Islamist terrorist content.
Left-wing organizations accuse the tech companies of not taking enough action to combat white supremacy and Right-wing hate. The New Republic, for example, excoriated Twitter in particular, saying, “The flailing micro-blogging company not only allows leading white supremacists like David Duke and Richard Spencer to use its platform but has also, inexplicably, verified Spencer, thereby elevating his status.”
Right-wing organizations accuse the tech companies of not taking enough action to combat Islamist extremism, or of unfairly targeting mainstream conservative voices using strategies intended to target genuine extremists.
In 2019 Alex Jones, Laura Loomer, Gavin McInnes and Tommy Robinson collaborated to produce “You Can’t Watch This,” a film focusing on the censorship of Right-wing and fringe figures.
Tech companies are keenly aware that the electoral map is constantly fluctuating. First and foremost they are a business, and they want to stay afloat. They say their incentive is to collaborate with the government to ensure they are not supporting violence but don’t want to get drawn into partisan politics. Yet many conservative commentators have been outright banned or shadow banned.
The Technical Challenge
It’s not just a political issue. There are many technical difficulties in taking down extremist content with which companies have to wrestle.
The first is the scale of the issue. There are millions of accounts online, and it is trivially easy to set up a new one. Even banning accounts doesn’t necessarily keep the users offline. As the internet becomes more encrypted, extremist networks are becoming more agile and can stay ahead of lumbering censors.
One tactic big tech companies have used is machine-learning algorithms which target certain words and phrases. Even that approach is losing effectiveness, since extremists are adept at modifying language to fly under the radar – while still communicating hardline messages to other extremists.
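To see why word-and-phrase targeting is so easy to evade, consider a deliberately simplified sketch of a keyword filter. This is not any platform’s actual system; the blocklist terms are placeholders, and real systems use far more sophisticated models – but they face the same underlying problem.

```python
import re

# A naive blocklist filter, as a simplified sketch -- not any platform's
# real system. The blocklisted terms here are placeholders.
BLOCKLIST = {"attack", "bomb"}

def flag(text: str) -> bool:
    """Return True if any blocklisted word appears as a whole word."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in BLOCKLIST for word in words)

print(flag("plan the attack at dawn"))   # True: exact keyword is caught
print(flag("plan the att4ck at dawn"))   # False: one swapped character evades it
```

A single character substitution defeats the match while leaving the message perfectly legible to a human reader – which is why platforms keep chasing new coded vocabulary.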
Hash technology enables companies to identify a specific video or piece of audio, such as footage from a terrorist attack or an ISIS propaganda movie. That digital signature can then identify the clip wherever and however it is used.
But it’s not always possible to identify, especially when you factor in remixes, edits, references and other original forms of content creation.
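The limitation described above can be illustrated with the simplest form of hash matching: an exact cryptographic digest. A sketch, assuming a toy database of known-bad hashes – production systems use perceptual hashes that tolerate some re-encoding, but even those can be defeated by heavier edits and remixes.

```python
import hashlib

# Exact-match hashing sketch: any re-encode or edit changes the digest,
# so the clip no longer matches the database. (Real systems use
# perceptual hashes, which tolerate some edits -- but not all.)
original = b"...bytes of a known propaganda clip..."   # placeholder bytes
edited = original + b"\x00"                            # a trivial one-byte change

known_bad = {hashlib.sha256(original).hexdigest()}     # toy hash database

def is_known(clip: bytes) -> bool:
    return hashlib.sha256(clip).hexdigest() in known_bad

print(is_known(original))  # True: an exact copy is caught
print(is_known(edited))    # False: a single changed byte defeats the match
```

This is why shared industry hash databases work well against verbatim re-uploads of a known clip but struggle with edited or remixed versions of the same material.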
Experts, including former Facebook Chief Security Officer Alex Stamos, have told the House Homeland Security Committee that artificial intelligence in its current form cannot remove all extremist content from the internet.
In May, Facebook, Twitter, Google, Amazon and Microsoft signed a joint pledge to work together to combat extremism, along with 18 governments including the United Kingdom, Canada and France. The so-called Christchurch Call was a response to the mosque shootings in Christchurch, New Zealand, which killed 51 people in March. The U.S. declined to be involved.
Yet it is far from clear that the big tech companies will do the right thing. In 2017, Facebook met with the government in Pakistan to discuss helping it take down blasphemous content. While governments with positive intentions can pressure tech companies to make healthy decisions to ensure national security, repressive governments can use the same pressuring techniques to push these companies to help them surveil, monitor and oppress their citizens.
Big Tech companies now have to make a choice. How far will they go in working to take down extremist content? And how will they balance that mission with protecting freedom of speech?