Time to Act: Building the Technical and Institutional Foundations for AI Assurance
AI assurance requires agreement among governments that systems are behaving appropriately. Existing international standards institutions can help.
Rational Security: The "Regulatory Cage Match" Edition
This week, Alan, Quinta, and Scott were joined by Lawfare cyber fellow Eugenia Lostri to tackle some of the overlooked national security stories that have been percolating over the past few weeks.
Human Subjects Protection in the Era of Deepfakes
The unique risks posed by deepfakes require special consideration for the Defense Department’s use of the technology.
ChinaTalk: Emergency Pod: AI Executive Order!
Hacking and Cybersecurity: Class 7, Encryption
The seventh class of Lawfare's live course on hacking and cybersecurity is now available to the public.
The Cyberlaw Podcast: Fancy Bear Goes Phishing
ChinaTalk: Can AI Be Governed?
Intentional Damage to Submarine Cable Systems by States
Two legal regimes—the law of the sea and the law on the use of force—can apply to damage caused by states to submarine cables during peacetime.
ChinaTalk: PLA Purges + Taiwan War Risk
The Cyberlaw Podcast: Administration Fails Forward on China Chip Exports
The Lawfare Podcast: Rules for Civilian Hackers in War, with Tilman Rodenhäuser and Mauro Vignati
Thanks to advances in digital technologies, it is now easier than ever for civilians to get involved in military cyber operations.
TechTank: A Conversation with Congresswoman Yvette Clarke (D-NY) on AI Accountability and Election Integrity