
Anti-Censorship Legislation: A Flawed Attempt to Address a Legitimate Problem

Daveed Gartenstein-Ross, Madison Urban, Cody Wilson
Wednesday, July 27, 2022, 8:01 AM

Could new Texas and Florida content moderation laws promote extremist activity online?

Social media icons on an iPhone screen. (Stacey MacNaught, https://flic.kr/p/Y69SeU; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/).

Published by The Lawfare Institute in Cooperation With Brookings

Editor’s Note: The authors have received support from Meta for their work examining the intersection of technology and national security.

Malign actors of many kinds have used social media to spread terrorist content, engage in hate speech and harassment, and disseminate misinformation and disinformation. Platforms have increasingly been called upon to monitor the content they host, but the expansion of content moderation has not been without controversy. The debate about content moderation and the role of Big Tech had long been simmering, but it reached a boiling point following Donald Trump’s ban from Twitter and Facebook in January 2021, when those companies determined that Trump’s presence on their platforms posed too many risks after his supporters stormed the U.S. Capitol on Jan. 6.

Citing Trump’s ban and other controversies, the Florida and Texas state legislatures passed “anti-censorship” laws—Florida’s Senate Bill (SB) 7072 and Texas’s House Bill (HB) 20—that sought to constrain companies’ ability to remove content. Indeed, it seems clear that tech companies have not always drawn the correct lines in their moderation efforts. One apparent misfire, for example, was the suppression of an October 2020 New York Post article about Hunter Biden’s laptop, the contents of which other publications have since largely confirmed to be authentic.

But in the area of content moderation, perfection is unattainable, and demanding it is harmful. Despite past misfires and those that will inevitably occur in the future, SB 7072 and HB 20 go too far in their attempts to redraw the lines. In this post, we provide an overview of these two laws and examine the likely impact of anti-censorship legislation on the removal of terrorist and extremist content, hate speech and harassment, and misinformation and disinformation. While we acknowledge that companies have made content moderation errors, we highlight the need for flexibility and innovation on the part of tech companies to respond in real time to the presence of dangerous content. Laws like SB 7072 and HB 20 tip the balance precariously in favor of malign actors by weakening social media companies’ response playbooks and creating loopholes ripe for exploitation.

Florida Senate Bill 7072

When Florida’s SB 7072 passed in May 2021, Gov. Ron DeSantis heralded it as a way to “hold Big Tech accountable.” The bill focuses on ensuring that social media companies are “consistent” in their moderation and provides specific protections for journalistic enterprises and political candidates. 

Six provisions of SB 7072 are particularly prone to exploitation by malign actors. In brief, they are:

  • The “consistent manner” provision. Section 501.2041(2)(b) mandates that social media companies must “apply censorship, deplatforming, and shadow banning standards in a consistent manner.”
  • Making tech companies’ content moderation playbooks public. Section 501.2041(2)(a) requires platforms to publish their deplatforming standards and provide “detailed definitions” of the standards they use “for determining how to censor, deplatform, and shadow ban.”
  • Temporal limitations on the promulgation of new rules. Section 501.2041(2)(c) prevents social media companies from creating new rules related to content moderation more than once every 30 days.
  • The journalistic safe harbor provision. Section 501.2041(2)(j) prohibits “action[s] to censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast.”
  • The safe harbor provision for political candidates. Section 106.072 protects registered political candidates from being deplatformed for the duration of their candidacy.
  • The statutory cause of action. Section 501.2041(6) allows Florida residents to bring legal action against social media companies for inconsistent “censorship, deplatforming, and shadow banning standards” or for “censoring, shadow banning, or deplatforming” a user without prior notice. This cause of action entitles a victorious plaintiff to several remedies, including up to $100,000 in statutory damages per proven claim.

All six of these provisions present significant challenges for effective content moderation efforts. It is unclear whether the law’s potential conflicts with Section 230 of the 1996 Communications Decency Act would result in the nullification of certain provisions of Florida’s SB 7072. As such, we examine the law as it is written rather than interpreting how Section 230 might limit its scope.

After the passage of SB 7072, tech industry trade associations NetChoice and the Computer & Communications Industry Association (CCIA) filed suit in the U.S. District Court for the Northern District of Florida in May 2021. They argued, among other things, that the Florida bill infringed on platforms’ free speech protections. The court issued a preliminary injunction in June 2021 that was upheld by the U.S. Court of Appeals for the Eleventh Circuit.

SB 7072 is plagued by loopholes that malign actors could exploit. We examine the potential for this exploitation in the same order in which we presented the six relevant provisions, beginning with the “consistent manner” provision.

If enforced, this provision would clearly illuminate the line that must be crossed for the activities of trolls, harassers, and stalkers to result in post deletion or user suspension. Accounts coordinating with one another could thus constantly probe where the line lies. For example, in harassing a specific target, one account might say, “You should kill yourself.” A second asks, “Have you thought about killing yourself?” A third says, “You definitely shouldn’t kill yourself.” The attacker waits to see which posts are pulled down and which accounts are banned, and then knows how far harassment can be pushed.

Alternatively, the consistent manner provision could be used as a legal hook by various actors, including conspiracy theorists and foreign states’ disinformation operations. Such actors would have a cause of action that would at the least burn up a tech company’s time and resources whenever a legitimate outlet like the New York Times or the Wall Street Journal misreports breaking news. The suit could argue that the tech company’s treatment of the Times or the Journal was not consistent with its treatment of conspiracy theorists and disinformation operators. New situations and events emerge constantly, and maintaining consistency across all of them, particularly in a manner that will satisfy a court, can be nearly impossible.

The consistency standard intersects in important ways with the second provision we highlighted—making companies’ content moderation playbooks public. The combination of being forced to litigate exactly what consistency means and the requirement to publish the specifics of content moderation standards would force social media companies to reveal their playbooks for preventing misuse of their platforms. Malicious actors could then access, study, and exploit these playbooks. An actor could figure out how to stay just barely within the lines of a company’s policies while still circulating destructive content.

For example, imagine a neo-Nazi in a small town in Florida who wants to target the local Jewish population. He decides to fly a drone just outside the property of each Jewish resident and stream video footage from the drones to Facebook. There is unlikely to be a policy that explicitly covers this situation, since the victims’ property rights are not being violated, nor is their personal information necessarily being leaked online.

In response, if Meta sought to prohibit this activity, it would have to create a new rule specifically banning it (to satisfy the provision of the law requiring the publication of content moderation standards). Yet the process of promulgating new rules would likely take time. 

Further, this situation would potentially intersect with the third problematic provision that we highlighted: By the terms of the Florida law, the company would be unable to promulgate the new rule if it had already made a rule change in the previous 30 days. Thus, SB 7072 turns what would have been a relatively easy fix under the old system—which provided Meta with flexibility—into a cumbersome process with real potential for destructive consequences for the victims. And if the neo-Nazi streamer were to successfully tie his harassment to a journalistic enterprise (the fourth problematic provision we highlighted), SB 7072 would prevent the content from being pulled down at all.

An extremist group styling its work as journalism is far from a theoretical problem. Amaq News Agency, one of the Islamic State’s premier propaganda outlets, has long claimed to report objectively on Islamic State activities rather than cheerleading for the group. Other terrorist groups have operated or experimented with their own journalistic outlets, including al-Qaeda, FARC (prior to its 2021 removal from the United States’ foreign terrorist organization list), Hamas, and Hezbollah. While media outlets that are clearly part of designated terrorist organizations could be deplatformed, terrorist groups have experience setting up front groups that could evade enforcement. Notably, there are plenty of websites propagating extremist content that are not tied to designated terrorist organizations. The neo-Nazi website Stormfront, for example, might qualify as a journalistic enterprise.

Under SB 7072, the path of least resistance for tech companies would be to allow faux journalistic enterprises, all of which are potential plaintiffs, to propagandize for terrorist groups under the pretext that they are not connected to the militant organizations. This could generate a significant increase in the circulation of terrorist propaganda and recruitment content.

Similarly, the protection that the law affords to political candidates could be exploited by malign actors. Under existing Florida election laws, the threshold for qualifying as a political candidate is low. For example, extremists of any stripe would only need to register as a candidate seeking election via write-in with the Florida Division of Elections or a local election supervisor—which merely requires prospective candidates to submit a notarized document indicating they meet the qualification criteria. While the qualification criteria depend on the specific office that a candidate might run for, suffice it to say that the criteria are far from rigorous. To qualify as a write-in candidate for state representative, for example, an individual must simply have lived in Florida for two years, be a resident of the district where they are running, and be at least 21 years old. With the submission of that document, social media companies would be legally barred from removing posts from that “candidate” for the duration of their candidacy. This “candidate” could even purchase advertising that social media companies would be forced to host.

The last provision we discuss is vitally important to the challenges SB 7072 poses to content moderation. SB 7072 provides a cause of action, giving Floridians and the state’s attorney general the ability to bring legal action for violations of the prior notice and consistent manner provisions. This cause of action would produce enormous legal headaches for tech companies that seek to productively moderate harmful content. Even if a company managed to prevail every time, lawsuits could be brought over each piece of content removed, leading to substantial legal costs and serving as a drain on the company’s time.

Some observers might find false comfort in SB 7072’s statement that it shall not disrupt content moderation of material that violates state or federal laws. However, the categories of illegal material, even material related to illicit ends such as terrorism, are vastly outnumbered by the categories of legal material. Most harmful content related to terrorism, hate speech, harassment, or misinformation and disinformation is not illegal. Hate speech, for example, is perfectly legal, as is recruiting for violent extremist groups so long as those groups have not been designated as terrorist organizations. This renders the statutory carve-out enabling the removal of illegal material less impactful than one might hope.

In essence, SB 7072 provides a huge disincentive for companies to engage in any but the most basic content moderation. This disincentive comes in the form of statutorily imposed inflexibility in content moderation, increased ability for individuals to sue tech companies for their content moderation decisions, and penalties designed to raise the stakes in litigation.

Texas House Bill 20

When signing Texas’s HB 20 into law in late 2021, Gov. Greg Abbott claimed the bill would “protect Texans from wrongful censorship on social media platforms,” primarily by preventing social media companies with more than 50 million monthly active users in the United States from banning content and users based on their political viewpoints. This section provides an overview of key HB 20 provisions that are vulnerable to exploitation.

HB 20 covers more ground than does SB 7072. HB 20’s premise is that “social media platforms function as common carriers, are affected with a public interest, are central public forums for public debate, and have enjoyed governmental support in the United States; and social media platforms with the largest number of users are common carriers by virtue of their market dominance.” If social media platforms are indeed treated as common carriers, that treatment would introduce new legal obstacles to deplatforming individuals and to conducting content moderation more broadly.

Section 120.052 requires the creation of an acceptable use policy that clearly outlines what content is permitted, platforms’ compliance policies, and instructions for reporting content that violates these policies. It also requires a biannual report detailing actions taken to enforce content moderation policies. Some of these elements are similar to the terms of service or end-user license agreements already in place and to the transparency reports that many platforms already produce. However, those policies and reports are not currently subject to legal oversight or to a legal process of discovery. HB 20 would change that.

Section 143A.002, entitled “Censorship Prohibited,” serves as perhaps the core component of HB 20. The section states that “a social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on: the viewpoint of the user or another person; the viewpoint represented in the user’s expression or another person’s expression; or a user’s geographic location in this state or any part of this state.” However, Section 143A.006(a) includes several carve-outs that allow platforms to restrict expression, including when it “directly incites criminal activity or consists of specific threats of violence targeted against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge” or is unlawful. 

Like SB 7072, HB 20 allows users to take legal action against social media companies for violations of the law. Additionally, Texas’s attorney general can receive reports from users about alleged violations of the law and may file for injunctive relief.

In September 2021, NetChoice and CCIA filed suit challenging HB 20 in the U.S. District Court for the Western District of Texas. The court issued a preliminary injunction on Dec. 1, 2021, which the State of Texas appealed. After the U.S. Court of Appeals for the Fifth Circuit stayed the injunction, an emergency application went to the Supreme Court, which vacated the stay and left the district court’s injunction in place. A decision on the merits of the case is pending.

HB 20 is purportedly intended to “protect Texans from wrongful social media censorship” by restricting social media companies’ ability to moderate content based on political viewpoint. But the law’s imprecision, loopholes, and other shortcomings would allow exploitation by malign actors.

Many of the same loopholes discussed above with respect to SB 7072—particularly those related to SB 7072’s “consistent manner” provision—apply to HB 20. Like SB 7072, HB 20 reveals companies’ content moderation playbooks to everyone, including bad actors. The concerns we raised about bad actors testing how far destructive behavior can be pushed through fake accounts or coordinated probing also apply. In essence, HB 20 creates similar inflexibility in responding to bad actors’ use of social media platforms.

Yet HB 20 also presents some different loopholes. HB 20’s ban on the removal of content based on viewpoint limits the ability of a platform to remove toxic material. For example, does the takedown of pro-jihadist content constitute censorship on the basis of viewpoint? While many jihadist groups are proscribed terrorist organizations, posts that are objectively pro-jihadist are, in most cases, perfectly legal, albeit controversial. While recruiting for the Islamic State or fundraising for a proscribed terrorist group might be illegal, many people openly and legally express their enthusiasm for jihadism. Under HB 20, there is a strong legal argument that their content could no longer be moderated, and, given the potential for a lawsuit, a tech company may well believe that it is in its best interest to err on the side of leaving pro-jihadist content in place.

Further, in the absence of illegal activity, isn’t any violent extremist cause, or any instance of hate speech, simply a viewpoint for legal purposes? The district court gave a nod to this exact challenge when it favorably quoted from plaintiff NetChoice’s brief: “Using YouTube as an example, hate speech is necessarily ‘viewpoint’-based, as abhorrent as those viewpoints may be. And removing such hate speech and assessing penalties against users for submitting that content is ‘censor[ship]’ as defined by H.B. 20.”

HB 20 includes provisions that appear intended to address this issue, but they fall short. For example, one may read Section 143A.006(a)(3), quoted above, as an anti-hate speech provision, as it allows companies to remove speech that “consists of specific threats of violence” against various protected classes. But because the provision is crafted so narrowly, it fails to allow moderation of most kinds of hate speech. Under HB 20, for a company to remove hate speech, that speech must consist of “specific threats of violence” (emphasis added). Most online hate speech and harassment, however, does not contain explicit threats of violence, let alone specific ones.

On the whole, HB 20 imposes significant inflexibility in content moderation with insufficient caveats. 

Outlook: The Dark Side of Anti-Censorship Legislation

The national debate about content moderation is fierce because fundamental principles are at stake: freedom of speech and political expression, personal safety, and the role of major corporations in regulating platforms that have become central to the way Americans communicate their ideas. There are legitimate concerns about how social media companies have applied their policies in practice. But the anti-censorship laws discussed in this post are ill advised because they sharply limit social media companies’ potential responses to the evolving tactics of malign actors and create several loopholes that individuals and groups seeking to promote harmful content can readily exploit. 

Consistency in content moderation and lack of discrimination based on political viewpoints are worthy goals, but the application of these principles in the Florida and Texas laws is problematic. It is unclear precisely how SB 7072’s stringent consistency requirement or HB 20’s viewpoint protections—among the laws’ other provisions—would impact the proliferation of extremism, hatred, and misinformation and disinformation, but the potential for significant harm is clear. 

While these laws represent the leading edge of the anti-censorship movement, state legislatures across the country have considered more than a dozen bills since 2020 that target social media “censorship” and seek to place more restrictions on content moderation. The efforts to reform content moderation are playing out in a highly uncertain legal environment. Ensuring that social media spaces do not become safe havens for those who seek to spread toxic content—rather than just share provocative opinions—is a goal that all platforms must have the ability to pursue. Anti-censorship legislation as currently conceived risks becoming a boon to extremists and other malign actors, allowing them to bust open the informational floodgates that have already failed to contain them.


Daveed Gartenstein-Ross is a scholar, author, practitioner, and entrepreneur who is the founder and chief executive officer of Valens Global and leads a project on domestic extremism for the Foundation for Defense of Democracies (FDD). Gartenstein-Ross is the author or volume editor of over 30 books and monographs, most recently “Enemies Near and Far: How Jihadist Groups Strategize, Plot, and Learn.”
Madison Urban is an analyst at Valens Global and supports the Foundation for Defense of Democracies' project on domestic extremism.
Cody Wilson holds a master's degree in global studies and international relations, with a concentration in conflict resolution, from Northeastern University. Mr. Wilson previously earned a bachelor's degree in political science, with a concentration in international relations, from the University of California, Los Angeles. As an undergraduate, he completed a capstone research project focused on Iran's nuclear capabilities, and in graduate school he completed two additional significant research projects on the recruitment of women by ISIS and on the drivers of, and policy responses to, the decades-long conflict in Somalia. In his capacity as an analyst at Valens Global, Mr. Wilson contributed to reports used in federal litigation, helped execute eight successful wargames, designed a cybersecurity-focused tabletop exercise, and produced reports for the U.S. government.
