
The Unfortunate Irony of Meta’s EU Troubles and the Case of TikTok

Ania Zolyniak
Monday, June 26, 2023, 9:00 AM

If EU concerns about Meta sound like déjà vu to U.S. ears, they should.

A cell phone on a table. (https://unsplash.com/photos/Xh3k8-vfl8s)


On Jan. 4, Meta was fined 390 million euros (approximately $414 million) for illegally forcing European Union users to accept personalized ads on Facebook and Instagram. The fine was issued by the Irish Data Protection Commission (DPC), which serves as Meta’s main regulator in Europe (the company’s European operations are headquartered in Ireland for tax purposes). The commission gave Meta three months to comply with EU data privacy protections under the General Data Protection Regulation (GDPR) after finding that the company could not use the law’s contractual necessity clause as a workaround for collecting and processing users’ digital activity for targeted advertising. Essentially, Meta couldn’t rely on its notoriously opaque terms of service agreements to justify targeted ads for EU users. Rather, Facebook and Instagram had to provide users with a way to opt out of having their digital activity collected and used to tailor the advertisements they saw on the apps. This will likely have significant financial implications for the company: In 2022, Meta made $113.64 billion in advertising revenue, almost a quarter of which came from Europe. At the time the decision was issued, The New York Times reported that it jeopardized 5 to 7 percent of Meta’s overall advertising revenue.

On May 22, the DPC fined Meta again, this time for breaking EU privacy laws when transferring EU user data from Europe to the United States. In addition to the record $1.3 billion fine, the commission ordered Meta to suspend all transfers of personal data belonging to EU users and users within the European Economic Area to the United States. In this case, the DPC’s main concern was whether Facebook specifically provided enough “appropriate safeguards” when transferring EU user data to the United States, which has a much more relaxed data privacy regulatory environment, particularly when it comes to U.S. foreign intelligence collection programs. The case stems from a 2020 decision issued by the European Court of Justice that struck down the Privacy Shield—a 2016 U.S.-EU agreement that allowed businesses in both jurisdictions to more easily transfer data across the Atlantic—and is the latest development in a decade-old campaign to protect European citizens’ data from U.S. surveillance. Indeed, the commission explicitly noted that its decision “exposes a situation whereby any internet platforms falling within the definition of an electronic communications service provider subject to the U.S. Foreign Intelligence Surveillance Act 702 PRISM program may equally fall foul” of GDPR safeguards for data transfers.

Fear of a foreign government gathering intelligence on citizens through a social media platform is surely familiar to the United States. Indeed, the concerns regarding foreign surveillance inherent in the commission’s May 22 decision against Meta seem to echo those of U.S. lawmakers vis-à-vis the Chinese Communist Party (CCP) and TikTok. Despite causing some inconveniences for multinational companies handling data across international borders, Europe’s response to these concerns through instruments like the GDPR offers U.S. officials a more practical model for realistically and effectively addressing related fears involving U.S. citizens and national security than porous, one-off bans.

The Case of TikTok

In calling to order the House Energy and Commerce Committee hearing on March 23, Rep. Cathy McMorris Rodgers (R-Wash.) told TikTok CEO Shou Chew that he had been invited to testify before the committee “because the American people need the truth about the threat TikTok poses to [their] national and personal security.” In the committee’s telling, Chew was there because “TikTok surveils us all,” a charge members tied to alleged “internal records reveal[ing] … a backdoor for China to access user data.”

In his written testimony, Chew claimed that ByteDance, TikTok’s parent company, “is not an agent of China” and that “there is no way for the Chinese government to access [U.S. user data] or compel access to it.” However, an ex-ByteDance executive, who had filed a wrongful dismissal suit against the company in February, alleged that the CCP was able to access U.S. TikTok user data through a “backdoor channel in the code.” The concern regarding CCP access to U.S. user data stems from two Chinese national security laws. The first, the 2014 Counter-Espionage Law, states that “when the state security organ investigates and understands the situation of espionage and collects relevant evidence, the relevant organizations and individuals shall provide it truthfully and may not refuse.” Under the second, the 2017 National Intelligence Law, “any organization or citizen shall support, assist and cooperate with the state intelligence work in accordance with the law.”

In response to concerns about EU citizens’ privacy and the security of their information, which largely came to the fore after Edward Snowden’s 2013 leaks, EU lawmakers drafted comprehensive legislation that erected safeguards for Europeans’ data in the EU and beyond. Rather than seeking a similar path toward thorough privacy protections for Americans, the U.S. government has instead zeroed in on a designated “boogeyman” for all digital data collection and security concerns: a proposed national ban on a single application used by over 150 million Americans.

In 2020, then-President Trump attempted to ban TikTok via executive order—he was unsuccessful. In May, Montana banned TikTok from operating in the state in the interest of protecting “Montanans’ personal, private, and sensitive data and information from intelligence gathering by the Chinese Communist Party.” It appears, however, that Montana’s ban may not work out as the state intended. Ironically, if TikTok attempts to comply with the ban, it would have to start collecting precise user location data to determine in which state users are accessing the app. According to the Council on Foreign Relations’ Tarah Wheeler, doing so would create “a surveillance state that includes fine-grained location data and the ability to monitor and read people’s phones—the exact mirror of the Chinese surveillance state they’re afraid of to begin with.”

President Biden, for his part, issued his own executive order concerning data privacy and protection in October of last year: Executive Order 14086. The order provides a new framework for safeguarding personal data … transferred from Europe to the United States. It primarily addresses European concerns regarding data improperly obtained through U.S. signals intelligence activities (from which U.S. citizens are, at least in principle, already supposed to be protected) rather than commercial data collection. Nevertheless, it lays out a clearer structure for review, requirements, and redress that could be recycled and tailored into legislation, an approach that would do more to secure American citizens’ data privacy than chasing after problem apps with bans.

Why a Ban Just Won’t Cut It

If protecting user data is indeed a critical matter of national security, then lawmakers should treat it as such. Rather than seeking legal justifications (and technological conjurations) to try to ban a single platform used by millions of Americans, U.S. policymakers could pursue comprehensive legislation that provides internet users explicit rights over the collection, processing, use, and movement of their data, as well as legal recourse for abuses. And they wouldn’t have to start from scratch: Multiple countries—be it Australia, Canada, South Africa, or the members of the EU—and even some U.S. states offer convenient case studies for discerning which kinds of provisions would be most desirable for protecting U.S. citizens’ data across multiple platforms and applications. TikTok’s data is not and should not be Americans’ only security concern regarding social media: The very nature of the internet, which is diffuse and open by design; the sheer amount of information generated and collected across digital platforms; and the lack of comprehensive federal data regulation put all U.S. digital data at risk.

With or without a TikTok ban, Americans’ digital data is still up for sale. The Biden administration’s warning to TikTok earlier this year that it may face a national ban if ByteDance fails to sell its stake in the U.S. version of the app is reminiscent of the Committee on Foreign Investment in the United States’ 2019 decision requiring the Chinese owners of Grindr, a dating app, to sell it back to a U.S. firm. As Justin Sherman argued in a 2022 Lawfare piece, although the owners conceded and sold the application to San Vicente Acquisition, a low-profile investment group, doing so did nothing to prevent Grindr from legally selling its data to governments through data brokers or from sharing user data with third parties, including through a Chinese software development kit. Thus, even if TikTok acquiesces to the administration’s demands and sells its stake, such divestment wouldn’t necessarily prevent the Chinese or another foreign government from obtaining U.S. user data through the open market.

In addition to the data up for sale, there is the potluck of U.S. citizens’ information available for free. Facebook’s Mark Zuckerberg told Congress that the company had ended its policy of granting applications created on the platform unrestricted access to user data in May 2015. In 2018, however, Facebook (now Meta) revealed that it had allowed apps developed by Mail.ru Group, a Russian technology conglomerate with ties to the Kremlin, to operate under the more permissive pre-May 2015 rules for two weeks beyond the designated cut-off date. Doing so allowed the apps to collect data by delving deep into profiles and tracking activity unbeknownst to Facebook users, who, given the lack of U.S. privacy regulations, were left exposed to and unprotected from such abuse. Facebook declined to comment on its determinations regarding what Mail.ru may have done with the data. Its reason? Confidentiality and privacy concerns between the company and app developers.

Foreign adversaries are not the only ones accessing U.S. data: In 2017, engineers working for the athletic social networking app Strava created and published a heat map of anonymized user training data. After reviewing the map, an Australian grad student posted his revelations about the data on Twitter in 2018: His analysis of the visualization revealed—and mapped—the locations of forward-deployed U.S. bases (as well as military forces of other countries) and an undisclosed CIA site in Djibouti.

In her comparative assessment of data risks in the United States, Canada, and Germany, Susan Ariel Aaronson, a senior fellow at the Centre for International Governance Innovation, examined the case of FaceApp, which took the United States by storm after its release in 2017. At the time of the report’s writing (April 2020), about 80 million users had downloaded FaceApp, an image editing application developed by the Russian company Wireless Lab that went viral for its “old” face filter. Upon downloading the app, users granted FaceApp “a fully paid, royalty-free, perpetual, irrevocable, worldwide, non-exclusive, and fully sublicensable right and license to use, reproduce, perform, display, distribute, adapt, modify, re-format, create derivative works of, and otherwise commercially or non-commercially exploit in any manner, any and all Feedback, and to sublicense the foregoing rights, in connection with the operation and maintenance of the Services and/or FaceApp’s business.” Stripped of terms-and-conditions legalese: Users essentially signed away their rights to their data and allowed FaceApp to do whatever it wanted with it.

In 2019, the FBI reviewed the app as part of a larger national security investigation into Russian-made software, concluding that the app was “a potential counterintelligence threat based on the data [it] collects, its privacy and terms of use [policies], and the legal mechanism available to the Government of Russia that permit access to data within Russia’s borders.” It remains unclear whether the app is indeed an arm of the Russian government; however, Aaronson points out that the company’s terms of service “give it great power to control the information it collects” and that the company plans to continue selling that information. She also points to U.S. companies such as Clearview AI that are “scraping the web and selling personal profiles to police authorities in both democratic and repressive states.” According to Aaronson, “America’s failure to enact clear personal data protection rules has enabled firms to obtain and monetize personal data for a wide range of current and future purposes.”

Yes, the United States could try to force the sale of FaceApp or threaten to ban it. But waiting to take decisive action only after a new app pops up in stores, becomes wildly popular across the country, collects and stores vast quantities of user data, and is assessed by the FBI to be a national security risk is not a national data security strategy: It is a Sisyphean game of whack-a-mole.

If personal and national security concerns are not enough to cajole congressional support and action, it is also worth mentioning that a clearly articulated national framework of data regulation in the United States would have advantages for business interests by providing companies with a more consistent standard. As Robert D. Williams of the Brookings Institution and Yale Law School notes, such a standard would reduce compliance costs and mitigate inefficiencies arising from the adoption of different regulatory schemes by individual U.S. states. It would also promote the harmonization of the U.S. operational data environment “with those of other major economies, easing trade concerns and promoting American technology in Europe and beyond.”

A Reality Check

While the GDPR may fuel Meta’s ire, it should be the object of envy for the American social media consumer. Even China has its own version of the GDPR, the 2021 Personal Information Protection Law (PIPL), with data collection consent requirements and protections for data transfers similar to those at issue in the aforementioned cases involving Meta’s European operations. Of course, the differences between political realities in China and those in the EU cannot and should not be ignored. Despite its similarities on paper to the GDPR, the PIPL’s de facto execution will depend wholly on how the CCP elects to implement its provisions in furtherance of the party’s interests, including its increasing desire to exert expanded surveillance and control over its population’s digital activities. Still, the larger point remains: As comprehensive national data protection laws become the norm, the United States’ lack of an overarching, nationwide legal architecture for protecting U.S. internet user data that regulates private companies rather than banning problematic platforms retroactively—and only after a critical threat is detected—sticks out like a sore thumb. The United States’ lateness to the game, however, offers U.S. policymakers a vast repository of models from which they could pick and choose in crafting harmonizing legislation that protects U.S. netizens and reduces the aforementioned private-sector costs of doing business in a fractured global data policy landscape. Many countries that have followed the EU’s lead in enacting data privacy laws have borrowed from the GDPR, but there are serious concerns that ought to be deliberated and debated regarding challenges to its implementation and lacunae that continue to threaten user privacy despite the legal regime’s stringency. U.S. policymakers thus have the advantage of being able to learn from the successes and failures of the GDPR and other countries’ policies. Importantly, however, that doesn’t mean that U.S. legislators can simply continue to kick the can down the road, relying on ineffective and perforated bans while making zero progress on comprehensive legislation.

To comply with the DPC’s January ruling, Meta changed its model to permit users to opt out of targeted ads—but only in Europe. European Instagram and Facebook users now have a mechanism, as cumbersome as it may be, to reclaim greater control over their digital information. As an American Facebook user, all I can say is that it must be nice. But at least now the CCP might not be able to get Montana Gov. Greg Gianforte’s data—at least not for free.


Ania Zolyniak is a research associate with the Council on Foreign Relations' Center for Preventive Action.
