
SEI Coauthors Responsible AI Guidelines

Article

December 1, 2021—The Department of Defense’s (DoD) Defense Innovation Unit (DIU) recently released a report on responsible artificial intelligence (AI) coauthored with the SEI. The report, Responsible AI Guidelines in Practice, offers a framework for building AI systems developed under DIU programs in a way that aligns with the DoD AI Ethical Principles.

"As DIU fields and scales commercial technology, we are building on the DoD's commitment to responsible AI,” explained DIU’s technical director of AI and machine learning, Jared Dunnmon. “With the human-centered AI researchers at the SEI, we developed the Responsible AI Guidelines. As it facilitates agreements between DoD partners and commercial vendors, this framework enables DIU to stimulate, structure, and document a process of building AI capabilities that aligns with the DoD AI Ethical Principles on DIU programs. Our collaboration with the SEI has been a significant part of the success in the effort to put responsible AI into action."

Dunnmon and two other DIU authors wrote the report with the SEI AI Division’s Alex Van Deusen, a design researcher, and Carol Smith, a senior research scientist in human-machine interaction. “Human-centered AI is now recognized as important to successful AI systems,” Smith said, “but guidance is needed for how to implement AI systems that are responsible and ethical. Our work developing the DIU Responsible AI Guidelines is providing actionable guidance to do this important work.”

Human-centered AI is one of the three pillars of AI engineering, an emergent discipline led by the SEI and focused on developing tools, systems, and processes to enable the application of AI in real-world contexts. The SEI AI Division proposed the three pillars, which also include scalable AI and robust and secure AI, to guide its approach to AI engineering.

In early 2020, the DoD defined five ethical principles for the use of AI systems: responsible, equitable, traceable, reliable, and governable. DIU then began collaborating with the SEI’s AI researchers to operationalize those ethical principles in DIU’s own commercial prototyping and acquisition programs. The lessons learned from this effort, along with best practices from government, industry, academia, and nonprofits, culminated in the newly released report.

“These guidelines are a way we can embed responsible practices into the entire AI workflow, from planning, to development, to deployment,” said Van Deusen. “Our goal was to establish a process that is reliable, replicable, and scalable across the DIU and expandable to other DoD organizations.”

Download Responsible AI Guidelines in Practice and learn more about the initiative on the DIU website. Hear Smith, Van Deusen, and Dunnmon discuss their human-centered AI work during the 2021 SEI Research Review.