Autonomous AI and US National Security: A Double-Edged Sword
DARPA Deputy Director Matt Turek on Managing Autonomous AI Risk in Critical Systems

As AI agents perform tasks autonomously in both virtual and physical environments, the potential for errors to occur at unprecedented speed and scale increases significantly.
While human oversight has long been central to cybersecurity, defenses now need to operate independently because of the speed at which AI agents can make decisions, said Matt Turek of the Defense Advanced Research Projects Agency. DARPA seeks to reduce risk by developing mathematical models that ensure AI systems act reliably under various conditions and by implementing defenses against adversarial attacks (see: DARPA Picks 7 Small Businesses for AI Cyber Challenge).
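The adversarial attacks Turek refers to exploit the fact that small, targeted input changes can flip a model's decision. This is not DARPA's technique or code; it is a minimal sketch of the general idea using a hypothetical linear classifier with made-up weights, perturbed in the style of the fast gradient sign method:

```python
import numpy as np

# Hypothetical toy linear classifier: score = w . x + b; positive score -> class 1.
# The weights and input below are invented purely for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.4, 0.1, 0.3])  # input the model classifies as class 1

def predict(x):
    return int(w @ x + b > 0)

# FGSM-style perturbation: nudge each feature against the sign of the
# score's gradient. For a linear model that gradient is simply w.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(predict(x))      # → 1 (original prediction)
print(predict(x_adv))  # → 0 (small shift flips the decision)
```

The perturbation is bounded by `eps` per feature, so the adversarial input can remain close to the original while still changing the output; defending against this class of attack is one of the problems formal guarantees aim to address.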
"Systems might take actions for you, whether that is in the virtual world or whether that is in the real world," Turek said. "Some of the challenges associated with that is that those agents may make mistakes at speed and scale that we're just not prepared to handle. Things can go wrong at speed and scale, and there's potential national security implications for that."
In this video interview with Information Security Media Group at Black Hat 2024, Turek discussed:
- The national security risks posed by autonomous AI agents acting without human oversight;
- The challenges around building provably secure AI systems with limited formal verification;
- Approaches to detecting AI-generated media and defending against adversarial attacks on machine learning.
Turek provides technical leadership to envision, create and transition capabilities that ensure enduring information advantage for the U.S. and its allies. His research interests include computer vision, machine learning and artificial intelligence, and their application to problems with significant societal impact. Before joining DARPA, Turek led a team at Kitware that developed computer vision technologies.