Our Research

Securing AI

The SEI supports the U.S. Department of Defense (DoD) in ensuring that AI systems are robust, mitigating their vulnerabilities and protecting them against threats.

To realize the advantages of AI-enabled systems, the DoD must secure those systems against a new set of vulnerabilities that stem from fundamental characteristics of AI models: how they are trained and how they operate. Securing AI-enabled systems requires characterizing novel AI vulnerabilities and attacks and developing AI security measures.

AI security presents a multipronged challenge. First, attacks against AI-enabled systems can take many forms: adversaries could inject malicious samples into training datasets, optimize adversarial inputs that cause malicious model outputs, extract private information about training data, or recover protected model information. Second, understanding system risk requires defining the threat model, including system details, mission goals, and the adversary's knowledge and capabilities, which in turn demands detailed information about both the system and the adversary. Finally, AI techniques, adversarial attacks, and defenses all advance rapidly, so countering those attacks requires a dynamic development environment. To meet the challenge of AI security for the DoD, the SEI maintains deep expertise in the state of the art as well as an understanding of critical missions.
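To make one of these attack forms concrete, the sketch below shows an evasion attack in the style of the fast gradient sign method (FGSM), a well-known technique for optimizing adversarial inputs. It assumes a PyTorch image classifier; the model, images, and labels are hypothetical placeholders, not any SEI or DoD system.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Return x plus a small perturbation that raises the loss on
        the true label y, pushing the model toward misclassification."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step each pixel in the direction that most increases the loss.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]

    # Hypothetical usage: the perturbed batch looks nearly unchanged to
    # a human but is far more likely to be misclassified by the model.
    # x_adv = fgsm_perturb(classifier, images, labels)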

In the absence of established secure AI practices, AI systems are vulnerable to adversarial attacks that directly manipulate model behavior: causing surveillance systems to fail to identify a target, overwhelming automated threat recognition (ATR) systems with malicious false detections, causing signals intelligence (SIGINT) systems to misidentify a signal, or leading an LLM-based battle management system to suggest ineffective strategies. Falling behind in characterizing AI security vulnerabilities and developing defenses will leave DoD AI systems exposed to malicious manipulation, which can lead to loss of assets and mission failure.

The SEI is a leader in securing AI-enabled systems, bridging the gap between cutting-edge academic research in the fast-moving field of AI and the mission needs of the DoD and Intelligence Community. As experts in adversarial machine learning (ML), we develop and characterize new adversarial techniques to understand threats, identify how these threats affect DoD mission success, and develop strategies for protecting AI systems.

Characterizing, Defending Against, and Responding to Adversarial Capabilities

As the attack surface of AI systems expands, our Secure AI Lab discovers unique vulnerabilities in AI models and data, evaluates their impact on model performance, builds tools to test for AI vulnerabilities, and develops defenses against attacks. This work helps DoD stakeholders stay ahead of the threats that would have the greatest impact on DoD missions.

The SEI works closely with DoD mission partners as well as academic collaborators at Carnegie Mellon University to respond to immediate security concerns and to develop guidelines and protocols that can prevent future incidents.

Building on our capabilities in cyber response, the SEI established the AI Security Incident Response Team (AISIRT). The AISIRT is a collaborative effort that draws not only on the SEI's technical expertise but also on our vast partnership network, which includes software vendors such as Google and Microsoft, AI and ML vendors, and DoD and academic organizations.

In another example of research on securing AI, SEI researchers created a novel method of detecting trojans in convolutional neural network (CNN) image models. Their method, Feature Embeddings Using Diffusion (FEUD), won second place in an IEEE CNN interpretability competition in May 2024.

Additional Resources

The Latest from the SEI Blog

Data Poisoning in AI Models: The Case for Chain of Custody Controls

Blog Post

This post explores data poisoning, which occurs when training data is modified to influence the performance of a model, and proposes cryptographic chain of custody as a mitigation; a minimal sketch of that idea follows.

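As a minimal illustration of the chain-of-custody idea, the sketch below records a cryptographic digest of every training file at ingest and re-verifies the dataset before each training run. The file layout and manifest format are assumptions for illustration, not the specific scheme proposed in the post.

    import hashlib
    import json
    import pathlib

    def build_manifest(data_dir):
        """Record a SHA-256 digest for every file in the dataset."""
        return {
            str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(pathlib.Path(data_dir).rglob("*"))
            if p.is_file()
        }

    def verify_manifest(data_dir, manifest):
        """Return files whose contents no longer match the manifest,
        e.g., samples altered by a poisoning attempt."""
        current = build_manifest(data_dir)
        return [f for f, digest in manifest.items() if current.get(f) != digest]

    # Hash the dataset once at ingest, store the manifest out of band,
    # then re-verify before each training run.
    manifest = build_manifest("training_data/")
    pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))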

Protecting AI from the Outside In: The Case for Coordinated Vulnerability Disclosure

Blog Post

This post highlights lessons learned from applying the coordinated vulnerability disclosure (CVD) process to reported vulnerabilities in AI and ML systems.


The Latest from the Digital Library

Ensuring the Safety and Security of AI Systems

Webcast

In this webcast, SEI researchers explain how System Theoretic Process Analysis helps organizations build stronger assurances about the safety and security of complex systems, including those that incorporate AI.


Ensuring the Safety and Security of AI Systems

Presentation

In this presentation, SEI researchers explain how System Theoretic Process Analysis helps organizations build stronger assurances about the safety and security of complex systems, including those that incorporate AI.


Explore Our Securing AI Projects