Is Big Tech Really Fighting Extremism?

(Photo: Sean Gallup/Getty Images)

Is big tech doing enough to counter extremism online? The Senate Committee on Commerce, Science and Transportation heard testimony on this question from representatives of tech giants Facebook, Twitter and YouTube on January 17, 2018.

Facebook’s Head of Public Policy and Counterterrorism Monika Bickert, YouTube’s Senior Policy Counsel Juniper Downs and Twitter’s Director of Public Policy and Philanthropy Carlos Monje testified, joined by Clint Watts, a senior fellow at the Foreign Policy Research Institute.

The witnesses stressed the efforts their companies are making to tackle extremism online and highlighted their successes in doing so, most notably removing extremist content from their platforms.

Committee Chairman Sen. John Thune (R-SD) told the committee in an opening statement:

“The companies that our witnesses represent have a very difficult task: preserving the environment of openness upon which their platforms have thrived, while seeking to responsibly manage and thwart the actions of those who would use their services for evil.”

Watch the whole testimony on C-SPAN.

“We share your concerns about terrorists’ use of the internet,” Facebook’s Monika Bickert told the committee. “That’s why we remove terrorist content as soon as we become aware of it.” She praised Facebook’s counterterrorism team, which includes former academics, intelligence and law enforcement officials and prosecutors with experience in counterterrorism, as well as engineers who improve the technology.

“It’s important to maximize free expression, while keeping people safe online,” she said. “More than 99% of ISIS and Al Qaeda propaganda that we remove from our service is content that we identify ourselves.”

Despite the benefits of machine learning and automated algorithms, she also stressed the need for human intervention, highlighting the role of the 7,500 reviewers who work at Facebook. Along with 11 other companies, Facebook also participates in a shared database of “hashes” (unique digital fingerprints) so that companies can find and remove extremist content as fast as possible.
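
For readers curious about the mechanics, the sketch below shows in simplified form how such a shared fingerprint database could work. It is illustrative only: the testimony does not describe the actual scheme, the industry database is understood to use perceptual hashes that survive minor edits rather than the exact-match SHA-256 digests used here, and all names in the sketch are hypothetical.

```python
import hashlib

# Hypothetical shared store of fingerprints for files that a
# participating company has already flagged and removed.
shared_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a digest identifying this exact file.

    A real system would use a perceptual hash so that re-encoded
    or lightly edited copies of a video still match.
    """
    return hashlib.sha256(data).hexdigest()

def flag_content(data: bytes) -> None:
    """One company contributes a removed file's fingerprint."""
    shared_hashes.add(fingerprint(data))

def is_known_extremist_content(data: bytes) -> bool:
    """Another company checks a new upload against the shared set."""
    return fingerprint(data) in shared_hashes

# Example: a file flagged on one platform is recognized when the
# identical bytes are uploaded to another.
flag_content(b"example video bytes")
print(is_known_extremist_content(b"example video bytes"))  # True
```

The design point is simple: companies share only the fingerprints, not the content itself, so each platform can recognize known material without exchanging the underlying files.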

Juniper Downs, speaking for YouTube, stressed the partnerships YouTube has with NGOs to tackle hate speech and extremism. She also spoke of the key role played by technology, telling Senators that “machine learning is helping our reviewers remove nearly five times as many extremist videos as they were before. Today 98% of the videos we removed for violent extremism were identified by our algorithms.”

But she also emphasized the role of counter-speech. “Our ‘Creators for Change’ program supports YouTube creators who are tackling issues like extremism and hate by building empathy and acting as positive role models,” she told the committee.

Huge amounts of content have been removed.

“Since June we removed 160,000 videos and terminated 30,000 channels for violent extremism,” Downs said.

Twitter’s Carlos Monje emphasized similar points to the other two tech representatives. “We spot 90% of terrorist accounts before anyone else does, and we stop 75% of those accounts before they can spread any of their deplorable ideology.”

He also spoke about the importance of collaboration. “Because this is a shared challenge, our industry has established the Global Internet Forum to Counter Terrorism, which is focused on learning and collaboration, technical cooperation and research.” As part of the forum, Twitter “engaged with 68 smaller companies over the past several months to share best practices and learnings, and we plan to grow on that work.”

Twitter also promotes counter-narratives. “We work with respected organizations to empower credible, non-governmental voices,” he said.

During the question-and-answer portion of the hearing, Clint Watts called on the government to provide more clarity on what constitutes extremism.

“In terms of domestic extremism, I side with the social media companies in the sense that it’s difficult to know where to fall, because there is not good leadership from the U.S. government on what a domestic extremism group is,” Watts said.

“I don’t like the social media companies having to decide what is free speech vs. violent speech, or extreme speech vs. the norm. It puts them in a terrible position,” he said.

Although they touted their anti-extremism credentials, the tech companies are not always on the right side. Facebook cooperates with the government of Pakistan to shut down blasphemous content. Rather than promoting free speech in the places that need it most, this panders to religious extremists who want to shut down all criticism of religion. Isn’t encouraging radical Islam abroad a threat to U.S. national security?

In the past, freethinking pages like Atheist Republic have faced targeted campaigns on Facebook by Islamic extremists seeking to shut them down, using mass reporting of the pages as hate speech to get them silenced. What is Facebook doing to protect critical thinkers from Muslim backgrounds who are on the front lines challenging extremism by opening religion up to critique? Ex-Muslims of North America wrote to Facebook last year asking the company to take action. Has anything been done?

Those critical of radical Islam have also been censored on YouTube under the guise of preventing hate speech, yet jihadists seem able to spread their material freely. Why?


Elliot Friedland

Elliot Friedland is a research fellow at Clarion Project.
