
AISIRT Ensures the Safety of AI Systems

Created November 2023

The advance of AI promises enormous potential, but it also introduces new and dangerous risks. To fill the need for a capability that can identify, analyze, and respond to the threats, vulnerabilities, and incidents that emerge from the ongoing advances in artificial intelligence (AI) and machine learning (ML), the SEI developed the first Artificial Intelligence Security Incident Response Team (AISIRT).

AI Creates New Capabilities, but Also New Risks

The emergence of AI has created a new class of software techniques that offer unprecedented capabilities for solving difficult problems that directly affect the economic health, societal well-being, and security of the nation. These techniques can perform feats that once seemed unattainable for software, such as finding patterns in complex data that humans cannot detect on their own or enabling a single individual to swiftly complete tasks that once required entire teams.

The safe and effective adoption and use of these capabilities, however, is not guaranteed. Improper development, implementation, or use of AI can have disastrous consequences, especially given its widespread use in sectors such as critical infrastructure and defense. Several large-scale AI and ML vulnerabilities have already had far-reaching impacts and implications, and such events are likely to proliferate as AI rapidly evolves and more organizations adopt it.

To provide the U.S. with a capability for addressing the risks introduced by the rapid growth and widespread use of AI, the SEI formed a first-of-its-kind AISIRT.

AISIRT: A Collaboration Between the SEI and Carnegie Mellon University

The AISIRT is part of Carnegie Mellon University's (CMU's) coordinated effort to advance AI, and it brings together SEI researchers and CMU faculty, staff, and students. CMU is the leading academic and research institution in the disciplines of computer science and engineering, AI engineering, and cybersecurity.

AISIRT Provides Protection as AI Evolves

To establish the AISIRT, the SEI leveraged its expertise in cybersecurity and AI as well as its 35-year track record of building cyber incident response capabilities and teams across the globe. The goal of the AISIRT is to lead a community-focused research and development effort that ensures the safe and effective development and use of AI technologies as they continue to evolve and grow.

The challenges of effectively monitoring AI systems include determining when they are operating out of tolerance, whether they have been subjected to external tampering or attack, where defects in need of correction occur, and how to diagnose and respond to suspected or known problems. In addition, response capabilities require successful community and team building with both national and international organizations. The SEI delivers solutions to these challenges through its technical expertise and through its vast partnership network, which includes major software vendors such as Google and Microsoft, many AI and ML vendors, and organizations across the military, government, industry, and academia.
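To make the first of those challenges concrete, the sketch below shows one way a monitoring pipeline might flag a model that is operating out of tolerance. It is a minimal, hypothetical illustration, not an AISIRT tool: the function name, threshold, and synthetic data are assumptions, and the drift check shown (a two-sample Kolmogorov-Smirnov comparison of live model confidence scores against a deployment-time baseline) is only one of many signals a response team might track.

import numpy as np
from scipy.stats import ks_2samp

def out_of_tolerance(baseline_scores, live_scores, alpha=0.01):
    """Flag drift between a baseline sample of model confidence scores
    (captured at deployment) and a recent window of live scores, using a
    two-sample Kolmogorov-Smirnov test. Returns True when the two
    distributions differ at significance level alpha."""
    statistic, p_value = ks_2samp(baseline_scores, live_scores)
    return p_value < alpha

# Synthetic demonstration data (hypothetical, for illustration only):
# a healthy baseline skewed toward high confidence, and a live window
# whose scores have shifted, as they might under data drift or tampering.
rng = np.random.default_rng(0)
baseline = rng.beta(8, 2, size=5000)
live = rng.beta(4, 4, size=500)

if out_of_tolerance(baseline, live):
    print("ALERT: live model outputs have drifted from the baseline; investigate.")

In practice, a statistical test like this would be one indicator among many, combined with provenance checks, adversarial-input detection, and human review before a team declares an incident.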

Built on these foundations at the SEI, the AISIRT fills an immediate need to ensure that AI is safe, contributes to the growth of our nation, and continues to evolve in an ethical, equitable, inclusive, and responsible way.

Learn More

Best Practices and Lessons Learned in Standing Up an AISIRT

September 12, 2024 Podcast
Lauren McIlvenny

In the wake of widespread adoption of AI practices in critical infrastructure, best practices and lessons learned in standing up an AI Security Incident Response Team...


The Challenge of Adversarial Machine Learning

May 15, 2023 Blog Post
Matt Churilla, Nathan M. VanHoudnos, Robert W. Beveridge

This SEI Blog post examines how machine learning systems can be subverted through adversarial machine learning, the motivations of adversaries, and what researchers are doing to mitigate their...


Managing Vulnerabilities in Machine Learning and Artificial Intelligence Systems

June 10, 2021 Podcast
Nathan M. VanHoudnos, Jonathan Spring, Allen D. Householder

Allen Householder, Jonathan Spring, and Nathan VanHoudnos discuss how to manage vulnerabilities in AI/ML systems...


Adversarial ML Threat Matrix: Adversarial Tactics, Techniques, and Common Knowledge of Machine Learning

October 22, 2020 Blog Post
Jonathan Spring

This SEI Blog post introduces the Adversarial ML Threat Matrix, a list of tactics to exploit machine learning models, and guidance on defense against...


On Managing Vulnerabilities in AI/ML Systems

October 01, 2020 Conference Paper
Jonathan Spring, Allen D. Householder, April Galyardt, Nathan M. VanHoudnos

This paper explores how the current paradigm of vulnerability management might adapt to include machine learning systems...
