
Online Platforms’ Responses to Terrorism

Brian Fishman
Tuesday, November 14, 2023, 2:08 PM

Managing terrorism is always difficult for platforms, but the situation in Israel and Gaza is complex in ways that exceed any previous circumstance. 

"Page not found" (ElReySilver, https://tinyurl.com/5es2k3ph; CC BY-SA 4.0 DEED, https://creativecommons.org/licenses/by-sa/4.0/deed.en)

Published by The Lawfare Institute
in Cooperation With
Brookings

The current war between Israel and Hamas began with Hamas's terrorist attack on Oct. 7 and is currently playing out in densely populated Gaza. The violence is real, visceral, and, for both combatants and civilians, often final. Hamas has had years to prepare the physical battlespace of Gaza, famously building networks of tunnels and weapons caches; Israel has trained troops and advanced weaponry that enable it to maneuver across and strike physical space with a precision and force Hamas cannot match. Both sets of combatants engage online as well, but unlike the physical arena, where local combatants and civilians have constructed the battlespace, the digital arena is built by companies far from the physical fighting.

Those platform companies balance competing interests when addressing political violence, including terrorism: honoring victims, preventing or mitigating future violence, abiding by legal requirements, limiting harmful experiences for their users, and empowering the communication necessary to understand an often brutal and violent world. These imperatives do not always align. No company will balance them perfectly—that should not be the expectation—but companies that prepared for moments like this will likely manage better than those that did not.

These tensions are not new, but the current Israel-Hamas conflict is testing companies in novel ways. Platforms began to aggressively address terrorist material online in response to the Islamic State's surge of digital activity from 2013 to 2017. But the Islamic State, despite producing a slew of creative propaganda, was always far more politically isolated than Hamas. The Christchurch attack in 2019 spurred an internet-wide scramble to remove copies of the terrorist's murderous livestream. The incident powerfully illustrated that a network of terrorist supporters can amplify the impact of an attack even if they were not involved in planning it. But Hamas's capacity for both violence and propaganda far outstrips that lone actor. The confluence of political and policy complications with operational challenges makes the current situation uniquely difficult for platforms.

The potential challenges facing digital platforms are too numerous to list comprehensively: livestreamed murder; hate speech and incitement targeting Israelis and Palestinians in the conflict zone and Jews and Muslims further afield; disinformation propagated by combatants and their supporters; and innumerable instances of rancorous political speech, much of which merits protection and some of which violates any number of platform rules on promotion of violence, gore, and celebrating terrorism.

Platforms should take a hard line against terrorism and war crimes, but they should not whitewash this moment. Technology firms generally ought to create safe experiences online, but the physical world is not safe. Hamas’s brutality on Oct. 7 should and will indelibly impact the group’s viability as a political organization moving forward. But the full scope of the violence on Oct. 7 will not be known on social media, which ultimately will scrub much of it away. Meanwhile, Israel’s attack on Hamas positions in Gaza exacerbates an already-terrible humanitarian situation. Every political action, including war, carries pros and cons—and in war those negative consequences are often reflected in the kind of soul-splitting imagery and video that makes the cost of both terrorism and war real. Much of that imagery also happens to violate the rules of most platforms.

To be clear, platforms should remove this material because it is so often used to incite more violence. But that decision comes at the cost of insulating those of us far from the battlefield from the horrors suffered by men, women, and children much closer to the fighting. At a moment when empathy for civilian victims of this violence is too often wanting, that separation is particularly notable. And so, even as policymakers of all sorts consider how to manage this material, they should remember the admonishment of William Tecumseh Sherman, who from his grave reminds contemporary soldiers, policymakers, and technologists that “[i]t is only those who have neither fired a shot nor heard the shrieks and groans of the wounded who cry aloud for blood, for vengeance, for desolation. War is hell.”

Contextualizing Hamas’s Attack Online

Extremists' use of digital tools is not new. American white supremacists built bulletin board systems on Commodore 64s and Apple IIes in the early 1980s. Many terrorist organizations, including Hamas, established websites in the 1990s. The prominent white supremacist forum Stormfront was first hosted on Beverly Hills Internet, the earliest incarnation of the website-hosting platform GeoCities. Ansar al-Islam, a Kurdish jihadi group in Iraq, hosted multilingual websites on Yahoo-owned GeoCities in the early 2000s.

Early internet service providers did take action against some extremist sites, but for the most part, Silicon Valley responded to terrorist groups and hate actors very slowly. Only when the Islamic State's recruitment of Westerners generated significantly more political pressure on platforms did they begin to address violent extremism more seriously, for example, by announcing major investments in personnel and launching collaborative efforts to address terrorism. As a result, large companies are better situated today to manage online content of Hamas's atrocities and the ongoing impact of war in Israel and Gaza than they were 10 or even five years ago. Additionally, many platforms have internal processes for crisis management, have developed or purchased tools and intelligence, and work with cross-industry bodies to facilitate information-sharing, all of which positions these companies to better detect and address this type of content.

Yet Hamas poses a different challenge to online platforms than that of the Islamic State. Hamas has a long history of using the internet to propagandize and muster support, and it is deeply embedded in Gaza, governing the territory for more than 15 years. Its roots on the ground, and online, are extremely deep. The Oct. 7 attack was also particularly complex. Unlike the singular video produced during the Christchurch attack, there are innumerable videos from Oct. 7 produced by terrorists, CCTV cameras, and even recovery officials. The single Christchurch video was also edited extensively and re-uploaded by the terrorist’s supporters (and some of his critics); Hamas has much more raw material to utilize. 

Moreover, the war in Gaza is far from over. At the time of writing, Israel has been bombarding northern Gaza for weeks and has sent ground troops into the densely populated territory. This violence will undoubtedly produce more civilian casualties and raises the potential for war crimes committed not just by Hamas but also by Israeli forces. At the same time, Hezbollah attacks on Israel’s northern border have proliferated, as have reports of settler violence in the West Bank. 

Online, both sides will try to use digital platforms to win the information war. Hamas fighters have threatened to kill hostages on camera and might even go so far as to demand that platforms leave their propaganda online or else they will commit further atrocities. Israel will likely make legal requests that test platforms' typical calculations for responding to government requests. Given the extraordinary stakes and emotions on both sides, partisans will not be satisfied even if platforms manage to be consistent and clear about their rules. Instead, as they do even when the stakes are much, much lower, partisans will likely spread disinformation and pressure platforms to make decisions that they believe will achieve their political ends, even if those decisions are inconsistent or unprincipled.

Platforms Choose Their Own Rules Regarding Terrorism Online

In most circumstances, the First Amendment of the U.S. Constitution conveys responsibility for determining acceptable speech on platforms to the platforms themselves. But rules regarding terrorism are somewhat anomalous because U.S. sanctions law seemingly requires that platforms remove digital accounts that are managed by or in the service of designated terrorist organizations, of which Hamas is one. Nonetheless, platforms often build policies to address terrorism that go beyond legal requirements. Of course, these approaches have different pros and cons. 

As mentioned above, sanctions enacted under the United States' International Emergency Economic Powers Act prohibit platforms from knowingly allowing U.S.-designated terrorist organizations to operate on their platforms. That usually manifests as the removal of accounts managed by designated terrorists and those that provide those entities direct support, such as fundraising. These prohibitions apply to Hamas. Large U.S.-based platforms tend to abide by U.S. law, despite the varying nationalities of their users and despite a lack of clarity about how these rules are enforced. Moreover, not every platform is U.S. based. And some international bodies, most notably the United Nations, do not sanction Hamas. So, while it is true that legal frameworks shape platform approaches toward terrorism, it is also true that platforms choose how to interpret regulations and what risks to accept regarding compliance. As a result, platform approaches to terrorist material differ significantly.

The European Union will increasingly impact how platforms approach terrorist content online. Both the Digital Services Act and the Terrorist Content Online Regulation give European policymakers leverage over platforms by enabling severe penalties for noncompliance. It is not yet clear how this authority will be used regarding terrorist content. European Commissioner for the Internal Market Thierry Breton sent a series of letters to tech companies reminding them of a responsibility to remove terrorist content but was notably imprecise about the content at issue or the measures that the EU wants platforms to take.

Of course, responsible platforms do not want to facilitate harm or violence and often set their own rules that both reflect and expand on legal requirements. Rules regarding terrorism manifest in three broad policy categories: actors, behavior, and content.

Platform prohibitions on terrorist groups are actor-level restrictions. Partly as a matter of compliance, these often mean that terrorists may not use a platform for any purpose. This means everything from no recruitment and no coordinating violence to no organizing food drives and no pictures of puppies. What content the actor posts does not matter. Such policies closely align internal guidelines with sanctions compliance. 

While actor-level restrictions are aggressive, they are not foolproof. Hamas's leadership is brutal, not stupid. The group has adapted to pressure from various platforms, both by shifting toward platforms such as Telegram, where enforcement is lax, and by developing proxy networks that obscure users' connections to Hamas. Like other groups proscribed online, Hamas complicates enforcement by distributing its activities across platforms—advertising on one, fundraising or communicating on another.

Covert networks and front groups pose practical and political challenges for platforms—these are often known as behavioral challenges. During my time working at Facebook (now Meta), the company sometimes removed networks—often branded as media outlets—covertly linked to Hamas or other terrorist groups, only to face excoriation from activists who either did not care about such links or could not see them using public information. Platforms may find it difficult to share their entire rationale for such decisions because they are based on nonpublic information. In these scenarios, the actor and their content appear neutral on the surface, but "sub rosa" behavior—for example, content in messaging groups, shared infrastructure or IP addresses, or even off-platform activity—indicates affiliation with a banned organization.

Sometimes, the shoe is on the other foot. Governments sometimes point to groups or individuals that they claim operate on behalf of terrorist organizations without providing significant justification. Responsible platforms are skeptical of such claims, given the pressure that some governments exert on platforms to suppress political opposition. This leads to vexing scenarios in which governments, aware of links between front groups and terrorist organizations through means they do not share, cannot substantiate conclusions to understandably wary digital platforms that require a higher standard of evidence to take action. 

Platform rules prohibiting terrorist-related content operate a bit differently than actor-level prohibitions. Content rules can be quite complex. For starters, they generally apply to everyone, not just known terrorist actors. And, although they prohibit support for terrorism, they usually allow newsworthy discussion (including counterspeech) regarding a group and its activities. That means the same propaganda video may be allowed in one context and banned in another. Platforms often find that such distinctions are difficult to make consistently at scale; and, even in the best scenarios, such deliberations require slower decision-making. One workaround is that platforms may use other policies to remove terrorist material. When I worked at Facebook, the platform often utilized policies prohibiting gore or violence to remove Islamic State videos depicting executions because those policies prohibited such material in all circumstances. That allowed for more automation and quicker action against such material. This approach matters in the present crisis, not simply in the context of known terrorist groups such as Hamas and Hezbollah, but also regarding settler violence in the West Bank. Using policies prohibiting gore and the celebration or encouragement of violence by terrorists and their supporters may allow for faster decision-making, but it will also sweep in content produced and shared by bystanders and journalists. Such removals do not necessarily mean that a platform has deemed such material supportive of terrorism.

Enforcing Platform Rules in Crisis

Operational considerations profoundly impact how platform policies actually manifest, especially in a crisis. Platforms utilize a range of operational efforts to manage political violence and drive decisions across the entire “Trust & Safety” decision spectrum, ranging from structured decisions at scale to unstructured investigations of complex, adversarial networks. Large platforms will likely spin up teams of national security veterans to manage the crisis in real time; smaller platforms will often struggle to identify key individuals who can be spared from regular work to deal with the particulars of the crisis. 

Teams of reviewers who make large volumes of decisions are often critical in a crisis. These teams often include people who speak relevant languages and can understand rapidly evolving linguistic shifts. But such teams face their own challenges in crisis moments. For starters, employees and contractors may be impacted by the conflict themselves or need to care for loved ones. Large companies know, as well, that geopolitical conflicts are often reflected inside companies and outsourced review teams. Visceral political debates can be disruptive and, in worst-case scenarios, lead to explicit efforts to support partisans. Cultural and linguistic expertise is critical to understanding a local conflict, but if a close understanding of local circumstances leads to an individual's inability to subsume their personal views to policy, it can be problematic.

Larger platforms often build investigative teams that can identify networks of accounts working in concert, including those that deliberately obscure their association with the problematic group or network. This is useful for uncovering terrorist-related accounts but also for surfacing disinformation networks. These teams typically require different training, tools, and oversight than scaled review teams. Another advantage of these teams is that they are nimble. Highly trained investigators can often pivot quickly to understand the tactics and methods of evolving threat actors. That matters in a dynamic conflict where adversaries deliberately seek to avoid platform restrictions.

Automation is also critical. Simple automation, such as hash matching and keyword matching, is used to surface known media or language referencing or discussing terrorist entities or events. Such capabilities are valuable, even though they have critical limitations. For example, companies will often build simple keyword-matching systems to identify references to terrorist groups. But these systems do not catch novel threats and require constant upkeep to be useful, because adversarial actors obfuscate words from automated detection with misspellings, spacing, and other symbol substitutions. They also require follow-up determinations to assess whether the flagged language actually reflects a policy violation.
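To make those limitations concrete, here is a minimal Python sketch of the kind of keyword matching described above. Everything in it is illustrative: the term list, the substitution table, and the function names are hypothetical rather than any platform's real configuration, and production systems are far more elaborate. The point is simply that normalization catches only the simplest obfuscation, and that a match is a signal for review rather than a verdict.

```python
import re

# Hypothetical character-substitution table and watchlist; real systems maintain
# far larger, constantly updated versions of both.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})
WATCHED_TERMS = {"examplebrigade", "examplefront"}  # stand-ins for banned-entity names

def normalize(text: str) -> str:
    """Lowercase, undo common character swaps, and strip spaces and symbols."""
    text = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[^a-z]", "", text)

def flag_for_review(post_text: str) -> bool:
    """Return True if the post references a watched term after normalization.

    A hit is only a signal: it still needs a follow-up determination, because
    news reporting or counterspeech will match the same terms.
    """
    normalized = normalize(post_text)
    return any(term in normalized for term in WATCHED_TERMS)

print(flag_for_review("Join the Ex@mple Brig@de today"))        # True despite obfuscation
print(flag_for_review("A report on the Example Brigade raid"))  # True, but context decides the outcome
```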

More sophisticated artificial intelligence (AI) detection can improve on matching systems, both evolving more quickly and acting more independently. But AI has significant error rates, in terms of both under- and over-enforcement. This matters during a crisis, both because acrimonious political language touches on the same issues as extremist speech and because the most acute real-world threats, which are relatively rare, may be less likely to be caught by an algorithm. What's more, adversaries tend to shift operations more quickly than even sophisticated AI adjusts. For example, generative AI offers new mechanisms for algorithmic steganography—burying human-readable words in images in ways that an algorithm struggles to detect. Such shifts require either manual fine-tuning in real time or a shift toward more human operations.

Platforms also run intelligence-driven operations to identify threats to their platform as they occur on other platforms. During my tenure at Facebook, the team developed a program to collect Islamic State propaganda from Telegram, review it per platform policies, and prepare hashes of violating material, often before that content had even been posted on Facebook. This allowed the team to identify Islamic State propaganda with a high degree of certainty as soon as it was posted. Not every intelligence-driven operation needs to run so efficiently, but such techniques are useful because nefarious actors often prepare their activities on one platform before executing them on another. 
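The mechanics of such a hash-based pipeline can be illustrated with a short, simplified sketch. The class and function names here are invented for illustration, and exact SHA-256 hashing stands in for the perceptual hashing (for example, PDQ for images) that production systems typically use so that re-encoded or lightly edited copies still match.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Return a stable digest for a piece of media."""
    return hashlib.sha256(media_bytes).hexdigest()

class HashBank:
    """Holds fingerprints of media already reviewed and judged violating."""

    def __init__(self) -> None:
        self._known: set[str] = set()

    def add_reviewed_media(self, media_bytes: bytes) -> None:
        # Called after review confirms the material violates policy.
        self._known.add(fingerprint(media_bytes))

    def matches(self, media_bytes: bytes) -> bool:
        # Checked at upload time, before the content is widely distributed.
        return fingerprint(media_bytes) in self._known

# An intelligence team ingests propaganda collected on another service and banks its hash...
bank = HashBank()
bank.add_reviewed_media(b"<bytes of a video collected off-platform>")

# ...so an identical upload can be actioned the moment it appears on-platform.
print(bank.matches(b"<bytes of a video collected off-platform>"))  # True
```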

Political violence involves dedicated actors in the real world. When such organizations and individuals lose accounts online, they will often adapt their procedures and return to a platform. Platforms need automated systems both to remove accounts that violate policies repeatedly and to identify new accounts that correspond to previously removed actors.
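A rough sketch of that idea follows. The signal names (hashed contact point, device fingerprint, payment instrument) are assumptions made for illustration, not any platform's actual schema; in practice such systems weigh many more signals, and automated matches are typically paired with review before enforcement.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AccountSignals:
    email_hash: str          # hashed contact point
    device_id: str           # hashed device or browser fingerprint
    payment_hash: str = ""   # optional hashed payment instrument

@dataclass
class RecidivismIndex:
    banned_signals: set = field(default_factory=set)

    def record_removal(self, signals: AccountSignals) -> None:
        """Retain durable signals from an account removed under terrorism policies."""
        self.banned_signals.update(
            s for s in (signals.email_hash, signals.device_id, signals.payment_hash) if s
        )

    def looks_like_return(self, signals: AccountSignals) -> bool:
        """Flag a new account that shares any durable signal with a prior removal."""
        return bool(
            {signals.email_hash, signals.device_id, signals.payment_hash} & self.banned_signals
        )

index = RecidivismIndex()
index.record_removal(AccountSignals(email_hash="h1", device_id="d1"))
print(index.looks_like_return(AccountSignals(email_hash="h9", device_id="d1")))  # True: shared device signal
```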

A platform’s principles are more important than its standard operating procedures. In crisis, platforms may aim to speed up enforcement by simplifying their enforcement posture. After the Christchurch attack, the terrorist’s livestream was downloaded, spliced, and re-uploaded millions of times. Facebook made the decision to remove all media derived from the video, even posts using stills that did not depict violence and were used to condemn the attack. Facebook simply could not review every post fast enough. It was the right decision to suppress an active propaganda campaign by supporters of a violent white supremacist, but it came at the cost of removing a significant amount of reporting and counterspeech. In a recent blog post describing its approach to the Oct. 7 Hamas attack, Meta suggests it is making similar compromises in the current conflict and is therefore not applying strikes on accounts that violate some platform rules, presumably because of an increased risk of false positives.

Cross-Industry Resources Can Help Platforms

Terrorist content is fundamentally a cross-platform problem, which is a challenge for defenders because most efforts to counter this material occur on individual platforms. Still, there have been important efforts to build mechanisms for cross-platform collaboration. The most important is the Global Internet Forum to Counter Terrorism (GIFCT), which provides training, enforcement resources, and collaboration mechanisms for digital platforms. GIFCT is best known for its hash-sharing database: a mechanism for sharing digital fingerprints of known terrorist propaganda, manifestos, and URLs where noxious material is shared. 

GIFCT also has protocols for communicating among member platforms during a crisis. These range from relatively simple situation updates and facilitating bilateral conversations between platforms to the Content Incident Protocol (CIP), which enables cross-industry hash sharing and regular updates to cross-sector partners such as governments and nongovernmental organizations (NGOs). 

The CIP also allows for hash sharing about terrorists outside GIFCT’s standard guidelines, which limits collaboration to those groups and entities listed on the United Nations Consolidated Sanctions list. The United Nations, unlike the United States and the European Union, does not sanction Hamas. As of this writing (Nov. 14), GIFCT has activated its Incident Protocol but not the CIP process; this means the coalition is communicating regularly, monitoring for viral material that meets the criteria for declaring a CIP, but is not yet actively sharing hashes of material related to the current crisis.

Platforms have access to other resources as well. The Christchurch Call brings together governments, NGOs, and platforms to develop collective responses to terrorist abuse of the internet. NGOs such as Tech Against Terrorism can provide guidance to platforms building initial policies around terrorist material. Alphabet’s Jigsaw division and Tech Against Terrorism recently announced Altitude, a tool to help smaller platforms remove terrorist content. And, increasingly, a wide range of vendors provide consulting services, intelligence, and tooling to improve trust and safety operations generally and counterterrorism efforts specifically. (Full disclosure: I am a co-founder and the chief strategy officer of Cinder, which provides a tooling platform to trust and safety teams.)

Practical Conclusions 

The ongoing crisis will inevitably produce significant surprises, but there are scenarios that platforms should plan on. Hamas has already utilized video of one hostage and may repeat this tactic. It remains unclear how the Israeli attack in Gaza will play out or how other regional parties will respond. Lebanese Hezbollah is more capable than Hamas militarily and on a messaging level. If Hezbollah enters the conflict, platforms should expect even more imagery of violence and increased cyber operations. Hamas is likely to use Telegram as a base for propaganda operations unless that space is disrupted. They will use X (formerly Twitter) because, these days, it is easier to exploit than better-governed platforms. Most importantly, they will shift operations as necessary, which means platforms should stay alert even if they do not see significant content on their platform in the next few days.

One of the truisms of terrorist groups online is that they operate across platforms. That means platforms should build multilateral and bilateral relationships with key partners now, ahead of the next crisis. GIFCT is an important forum for such engagement; even platforms that are not members should plug in. Similarly, the Christchurch Call is an important forum for multistakeholder discussion. These forums are not silver bullets, but they are mechanisms for platforms to learn from the successes and failures of others, and thereby reduce the odds of repeating those failures.

Platforms will be forced to make extremely difficult choices defining the line between acceptable, even if controversial, political speech, and incitement or celebration of terrorism. Disinformation will proliferate, both online and off. Urban warfare will inevitably kill large numbers of civilians, leading to claims of war crimes, claims that platforms are whitewashing suffering, and disinformation of all sorts about the circumstances and propriety of such violence. Platforms, like traditional media organizations, will struggle to separate truth from fiction—a problem exacerbated by the relative lack of journalists in the war zone and communication difficulties from Gaza. Companies should be frank with their users about the difficulty of those decisions and aim to explain the principles they use to adjudicate difficult choices. This is particularly important if a platform takes extraordinary trust and safety measures to keep the platform functional and limit the risk of real-world harm.

This is a complex crisis that calls for thoughtful engagement between platforms and responsible governments that may have insight into events on the ground and forthcoming disinformation campaigns. In the United States, however, such engagement has been complicated by Missouri v. Biden, which aims to limit government jawboning of platforms but has had a broader chilling effect on engagement between the U.S. government and platforms. This is not the place to adjudicate the very complex legal questions in that case, but a crisis like this illustrates why policymakers must identify legal and appropriate mechanisms for the government to share information with platforms. Because such sharing has been curtailed, platforms are likely to be less capable of managing the present complexities, including disinformation aimed at stoking social antipathy and violence inside the United States.

It is important to note that, as a practical matter, platform rules regarding terrorism generally favor the states that those terrorist groups are fighting. In the current conflict, that means platforms will generally remove accounts run by Hamas and allow those run by Israel, including its military. Some readers may find that unfair, but Hamas is a non-state actor that has targeted civilians with political intent for decades, and the discrepancy is not specific to this conflict. It reflects U.S. law, long-standing definitions of terrorism, the reality that the international system is built around states, and the presumption that states have the right to use military force in some circumstances. The variance between state and non-state actors, however, does suggest that platforms should develop policies that address war crimes, regardless of the actor that commits them. Hamas's murder and kidnapping of civilians on Oct. 7 (not to mention its track record of deliberately bombing civilian infrastructure) would qualify, but so too would well-documented, intentional abuses of civilians by state military units, including Israel's. Such a policy would be nearly impossible for a company to adjudicate and apply, and partisan activists would inevitably aim to manipulate it. Such concerns understandably deter platforms from building such a policy. Nonetheless, companies should explore this concept, including the development of suitably high evidentiary standards.

This crisis illustrates, again, that trust and safety teams and tools are not simply “nice-to-haves.” Digital space is part of the battlespace; managing real-world geopolitical crises responsibly is what both users and regulators expect of platforms. The current crisis will test those trust and safety teams like very few moments before. Despite significant cuts in trust and safety efforts at many platforms in recent years, most are better prepared to manage a crisis of this kind than they were a decade ago. The truth is that they have so many decisions to make that they will do both a lot of things right and a lot of things wrong. But those that prepared for this moment will perform better than those that did not. Hopefully, they will all learn something and prepare for the future, because this crisis may very well get far worse and it is inevitable that this will not be the last geopolitical crisis to spill into digital space. 


Brian Fishman is a co-founder of Cinder, an integrated software platform for Trust and Safety. Previously, he served as a Policy Director at Facebook and the Director of Research for the Combating Terrorism Center at West Point. Fishman is the author of The Master Plan: ISIS, al-Qaeda, and the Jihadi Strategy for Final Victory (Yale University Press, 2016).
