
Addressing the Security Risks of AI

Jim Dempsey
Tuesday, April 11, 2023, 8:16 AM

AI’s vulnerability to adversarial attack is not futuristic, and there are reasonable measures that should be taken now to address the risk.



In recent weeks, there have been urgent warnings about the risks of rapid developments in artificial intelligence (AI). The current obsession is with large language models (LLMs) such as GPT-4, the generative AI system that Microsoft has incorporated into its Bing search engine. However, despite all the concerns about LLMs hallucinating and trying to break up marriages (the former quite real, the latter more on the amusing side), little has been written lately about the vulnerability of many AI-based systems to adversarial attack. A new Stanford and Georgetown report offers stark reminders that the security risks for AI-based systems are real. Moreover, the report—which I signed, along with 16 others from policy research, law, industry, and government—recommends immediately achievable actions that developers and policymakers can take to address the issue.

The report starts from the premise that AI systems, especially those based on machine learning techniques, are remarkably vulnerable to a range of attacks. My Stanford colleague Andy Grotto and I wrote about this in 2021. Drawing on the research of others, we outlined how evasion, data poisoning, and exploitation of traditional software flaws could deceive, manipulate, and compromise AI systems, to the point of rendering them ineffective. We were by no means the first to sound the alarm: Technologists in 2018 surveyed the landscape of potential security threats from malicious uses of AI, and Andrew Lohn at Georgetown warned in 2020 that machine learning’s vulnerabilities are pervasive.
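To make the evasion category concrete, the sketch below shows the basic mechanics of a fast gradient sign method (FGSM) attack, one of the simplest adversarial-example techniques from the academic literature the report draws on. The model, tensors, and parameter values are placeholders of my own choosing, not anything taken from the report; the point is only to illustrate how small, deliberately crafted input perturbations can flip a classifier's output.

```python
# Minimal, illustrative sketch of an evasion attack (FGSM) on an image classifier.
# All names and values here are hypothetical placeholders, not from the report.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, image, true_label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` crafted to be misclassified.

    model:      any differentiable torch classifier (placeholder)
    image:      tensor of shape (1, C, H, W), pixel values in [0, 1]
    true_label: tensor holding the correct class index, shape (1,)
    epsilon:    maximum per-pixel perturbation (the attacker's "budget")
    """
    image = image.clone().detach().requires_grad_(True)

    # Compute the model's loss on the correct label.
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()

    # Nudge every pixel in the direction that increases the loss,
    # then clamp so the change stays small and visually subtle.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Even this toy version captures why the problem is hard to patch: the perturbation exploits the learned model itself rather than a coding flaw, so conventional bug-fixing does not remove the vulnerability.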

Most of that work cited academic studies, as opposed to attacks in the wild. So when Stanford and Georgetown convened a group of experts last summer for a workshop that informed our new report, I specifically asked if there was any doubt that real-world implementations of AI were vulnerable to malicious compromise. Or was this merely a theoretical concern? Uniformly, participants from industry and government—those developing and using AI—agreed that the problem was real. Some pointed out that there are so many vulnerabilities in digital systems that the AI in those systems is not yet an attack vector of choice, but all agreed that, as AI models are incorporated into a wider range of use cases, attacks targeting those models will become more frequent. Moreover, all agreed the time to begin addressing the problem is now, as new systems are being designed and new deployments are occurring. (It would have been better to start years ago, before AI technologies were deployed across a wide range of commercial and government contexts, but now is second best.)

Our workshop participants and the report authors were a cautious crew. At a time of AI hype both utopian and dystopian, no one was interested in even whispering “wolf” about the dangers posed by vulnerable AI. But the massive acceleration in AI development and deployment in the past 10 months certainly heightens the urgency of our call for immediate action. 

As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. The understanding of how to secure AI systems, we concluded, lags far behind their widespread adoption. Many AI products are deployed without institutions fully understanding the security risks they pose. Organizations building or deploying AI models should incorporate AI security concerns into their cybersecurity functions using a risk management framework that addresses security throughout the AI system life cycle. It will be necessary to grapple with the ways in which AI vulnerabilities differ from traditional cybersecurity bugs, but the starting point is to treat AI security as a subset of cybersecurity and to begin applying vulnerability management practices to AI-based features. (Andy Grotto and I have vigorously argued against siloing AI security in its own governance and policy vertical.)

Our report also recommends more collaboration between cybersecurity practitioners, machine learning engineers, and adversarial machine learning researchers. Assessing AI vulnerabilities requires technical expertise that is distinct from the skill set of cybersecurity practitioners, so organizations should be wary of repurposing existing security teams without additional training and resources. We also note that AI security researchers and practitioners should consult with those addressing AI bias. AI fairness researchers have extensively studied how poor data, design choices, and risk decisions can produce biased outcomes. Since AI vulnerabilities may be more analogous to algorithmic bias than to traditional software vulnerabilities, it is important to cultivate greater engagement between the two communities.

Another major recommendation calls for establishing some form of information sharing among AI developers and users. Right now, even if vulnerabilities are identified or malicious attacks are observed, this information is rarely transmitted to others, whether peer organizations, other companies in the supply chain, end users, or government or civil society observers. Bureaucratic, policy, and cultural barriers currently inhibit such sharing. This means that a compromise is likely to go unnoticed until long after attackers have successfully exploited it. To avoid this outcome, we recommend that organizations developing AI models monitor for potential attacks on AI systems, create—formally or informally—a trusted forum for sharing incident information on a protected basis, and improve transparency.

Regarding the legal framework for AI security, we hesitated to recommend any sweeping legislative action. It’s not clear what an AI security law would look like or how it would differ from the growing body of law addressing cybersecurity in general. Instead, we recommend that government agencies with authority over cybersecurity clarify how AI security concerns fit into their existing regulatory structures. Agencies ranging from the Federal Trade Commission to the Department of Justice to the New York State Department of Financial Services have already explained how AI bias fits within existing anti-discrimination laws. They and others should do the same for AI security.

We also note that there are other steps government should take within existing authorities: Public efforts to promote AI research should more heavily emphasize AI security, including through funding open-source tooling that can promote more secure AI development. And government should provide testbeds or enable audits for assessing the security of AI models.

Elon Musk and thousands of other entrepreneurs, scientists, and tech policy experts have called for a global six-month pause in the training of AI systems more powerful than GPT-4. Such a pause is highly improbable: China is not likely to sign on, and those pushing the boundaries of AI in the United States are too entranced by what they perceive as AI’s benefits to slow down, even as they admit the risks. What is achievable is the incremental but urgent incorporation of AI into cybersecurity governance structures. AI developers and users should heed the culture shift recommendations of the Georgetown-Stanford report, and regulators should start insisting that AI vulnerabilities be addressed within the maturing legal framework for cybersecurity.


Jim Dempsey is a lecturer at the UC Berkeley Law School and a senior policy advisor at the Stanford Cyber Policy Center. From 2012-2017, he served as a part-time member of the Privacy and Civil Liberties Oversight Board. He is the author of Cybersecurity Law Fundamentals (IAPP, 2021).
