
Content Moderation Tools to Stop Extremism

Daniel Byman
Thursday, September 22, 2022, 6:01 AM

Technology companies are more active than ever in trying to stop terrorists, white supremacists, conspiracy theorists, and other hateful individuals, organizations, and movements from exploiting their platforms, but government and public pressure to do more is growing. This paper presents a range of content moderation options for technology companies, discussing how they work in practice, their advantages, and their limits and risks.

Published by The Lawfare Institute in Cooperation With Brookings

Technology companies are more active than ever in trying to stop terrorists, white supremacists, conspiracy theorists, and other hateful individuals, organizations, and movements from exploiting their platforms, but government and public pressure to do more is growing. If companies decide to act more aggressively, what can they do? Much of the debate centers on whether to remove offensive content or leave it up, ignoring the many options in between. This paper presents a range of options for technology companies, discussing how each works in practice, its advantages, and its limits and risks. It offers a primer on the choices available and then examines the trade-offs and limits that shape each approach.

Broadly speaking, the actions companies can take fall into three categories. First, they can remove content outright, deleting individual posts and deplatforming users or even entire communities. Second, they can try to reshape distribution, reducing the visibility of offensive posts, downranking (or at least not promoting) certain types of content such as vaccine misinformation, and adding warning labels, thereby limiting engagement with material that nonetheless stays on their platforms. Finally, companies can try to reshape the dialogue on their platforms, empowering moderators and users in ways that make offensive content less likely to spread.
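To make the taxonomy concrete, the minimal sketch below shows how a moderation pipeline might map a post onto these three families of interventions. It is purely illustrative and not drawn from the paper or from any platform's actual systems; the Post class, the choose_action function, the violation_score field, and the thresholds are all assumptions for the sake of the example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    """The three broad families of interventions described above, plus no action."""
    REMOVE = auto()                # delete the post, deplatform the user or community
    RESHAPE_DISTRIBUTION = auto()  # downrank, stop recommending, or add a warning label
    RESHAPE_DIALOGUE = auto()      # leave visible; route to moderators or prompt users
    NO_ACTION = auto()


@dataclass
class Post:
    text: str
    violation_score: float  # hypothetical 0-1 severity score from a classifier or human review


def choose_action(post: Post,
                  remove_threshold: float = 0.9,
                  reduce_threshold: float = 0.6,
                  dialogue_threshold: float = 0.3) -> Action:
    """Map a severity score onto one of the three broad categories.

    The thresholds and the single-score model are placeholders; real platforms
    combine many signals, written policies, and human review steps.
    """
    if post.violation_score >= remove_threshold:
        return Action.REMOVE
    if post.violation_score >= reduce_threshold:
        return Action.RESHAPE_DISTRIBUTION
    if post.violation_score >= dialogue_threshold:
        return Action.RESHAPE_DIALOGUE
    return Action.NO_ACTION


if __name__ == "__main__":
    borderline = Post(text="example post", violation_score=0.7)
    print(choose_action(borderline))  # Action.RESHAPE_DISTRIBUTION
```

The point of the sketch is only to show that the choice is not binary: between removal and inaction sit interventions that leave content up while reducing its reach or changing how users encounter it.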

Tensions and new problems will emerge from these efforts. The question of censoring speech will remain even if certain content stays up but is not amplified or is otherwise limited in reach. Companies also have incentives to remove too much content (and, in rarer cases, too little) in order to avoid criticism. Process transparency, a weakness for most companies, remains vital and should be greatly expanded so that users, lawmakers, researchers, and others can better judge the effectiveness of company efforts. Finally, some toxic users will simply go elsewhere, spreading their hate on more obscure platforms. Despite these limits and trade-offs, the options presented in this paper provide a helpful menu that companies can use to tailor their approaches and offer a more vibrant and less toxic user experience.

The paper is also available here.


Daniel Byman is a professor at Georgetown University, Lawfare's Foreign Policy Essay editor, and a senior fellow at the Center for Strategic & International Studies.
