All Projects
Explainable AI: Why Did the Robot Do That?
To help human users trust their robot team members in critical situations, we develop tools that allow autonomous systems to explain their behavior.
Learn More -
Verifying Distributed, Adaptive Real-Time (DART) Systems
Distributed, adaptive real-time (DART) systems must satisfy safety-critical requirements. We developed a method to verify DART systems and generate assured code.
Learn More -
Multi-Agent Decentralized Planning for Adversarial Robotic Teams
We created multi-agent planning techniques, middleware, and algorithms that enable a single user to manage fleets of unmanned aircraft systems (UASs) in real-world environments with changing adversaries.
Learn More -
QUELCE: Quantifying Uncertainty in Early Lifecycle Cost Estimation
Costs for large new systems are hard to estimate. We developed a method to quantify uncertainty and increase confidence in a program's cost estimate.
Learn More -
Automated Code Repair
Finding security flaws in source code is daunting; fixing them is an even greater challenge. Our researchers are creating tools that repair bugs automatically or prompt developers for the information needed to make effective repairs.
Learn More -
Using Automation to Prioritize Alerts from Static Analysis Tools
The new CERT method for validating and repairing defects found by static analysis tools helps auditors and coders address more alerts with less effort.
Learn More -
Improving Verification with Parallel Software Model Checking
Current methods for software model checking can take too much time. We develop software model checking (SMC) algorithms that execute many operations in parallel to improve scalability.
Learn More -
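The entry above describes parallelizing software model checking. As a minimal conceptual sketch of parallel state-space exploration (not the SEI's algorithm), the Python program below expands each breadth-first frontier of a toy transition system in parallel and checks a safety property on every reachable state; the model, `successors`, and `is_safe` are illustrative only.

```python
# Conceptual sketch: level-synchronous parallel exploration of a state space.
# The transition system and safety property are toy examples.
from concurrent.futures import ProcessPoolExecutor

def successors(state):
    """Toy transition relation: two independently incrementing bounded counters."""
    x, y = state
    nexts = set()
    if x < 10:
        nexts.add((x + 1, y))
    if y < 10:
        nexts.add((x, y + 1))
    return nexts

def is_safe(state):
    """Toy safety property: the counters never both reach their bound."""
    x, y = state
    return not (x == 10 and y == 10)

def parallel_reachability(initial, workers=4):
    visited = {initial}
    frontier = [initial]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        while frontier:
            # Check the property on the current frontier in parallel.
            for state, ok in zip(frontier, pool.map(is_safe, frontier)):
                if not ok:
                    return state  # counterexample state
            # Expand all frontier states in parallel, then merge sequentially.
            next_frontier = set()
            for succs in pool.map(successors, frontier):
                next_frontier |= succs
            frontier = list(next_frontier - visited)
            visited |= next_frontier
    return None  # no violation among reachable states

if __name__ == "__main__":
    bad = parallel_reachability((0, 0))
    if bad is not None:
        print("safety property violated in state", bad)
    else:
        print("safety property holds on all reachable states")
```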
Design Pattern Recovery from Malware Binaries
The U.S. Department of Defense (DoD) and industry face many malware problems. CERT researchers automate malware analysis capabilities, including those focused on malware family evolution and similarity.
Learn More -
Supporting the U.S. Army's Joint Multi-Role Technology Demonstrator Effort
We build and analyze virtual software systems to find problems early in development, before a system is built. Early discovery reduces cost and certification time.
Learn More -
Automating Vulnerability Discovery in Critical Applications
CERT researchers develop automated tools that discover and mitigate software vulnerabilities, then transfer those tools to researchers, procurement specialists, and software vendors.
Learn More -
Converting a Navy Weapon System from a 32- to a 64-Bit Architecture
The SEI provided an independent assessment of the risks of migrating a weapons control system deployed by the U.S. Navy from one architecture to another.
Learn More -
GraphBLAS: A Programming Specification for Graph Analysis
The GraphBLAS Forum is a worldwide consortium of researchers developing a programming specification that expresses graph analysis as operations on sparse matrices, simplifying the development of graph algorithms.
Learn More -
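As a rough illustration of the GraphBLAS idea referenced in the entry above, the sketch below runs a breadth-first search by repeatedly multiplying a graph's adjacency matrix by a frontier vector. It uses SciPy rather than a GraphBLAS implementation, and the small graph is made up for the example.

```python
# Illustration of the GraphBLAS idea with SciPy: BFS as repeated
# sparse matrix-vector products over an adjacency matrix.
import numpy as np
from scipy.sparse import csr_matrix

# Adjacency matrix of a small directed graph: edge i -> j means A[i, j] = 1.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
rows, cols = zip(*edges)
n = 5
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

def bfs_levels(A, source):
    """Return the BFS level of each vertex (-1 if unreachable)."""
    n = A.shape[0]
    levels = np.full(n, -1)
    levels[source] = 0
    frontier = np.zeros(n)
    frontier[source] = 1.0
    level = 0
    while frontier.any():
        level += 1
        # One BFS step: vertices reachable from the frontier in one hop.
        # In GraphBLAS this is a masked matrix-vector multiply over a semiring.
        reached = np.asarray(A.T @ frontier) > 0
        new = reached & (levels < 0)   # keep only unvisited vertices
        levels[new] = level
        frontier = new.astype(float)
    return levels

print(bfs_levels(A, source=0))   # e.g. [0 1 1 2 3]
```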
Positive Incentives for Reducing Insider Threat
Insiders present unique challenges to cybersecurity. We research insider threats and develop tools to analyze threat indicators in sociotechnical networks.
Learn More -
Managing Technical Debt with Data-Driven Analysis
Most software projects carry technical debt. We develop tools and techniques that identify technical debt and give you the complete view you need to manage it.
Learn More -
A Tool Set to Support Big Data Systems Acquisition
We offer an approach that reduces risk and simplifies the selection of big data technologies when you acquire and develop big data systems.
Learn More -
Helping Government Realize the Agile Advantage
We develop a wealth of resources to help the U.S. Department of Defense (DoD) and federal agencies make informed decisions about using Agile and lean approaches to achieve their goals.
Learn More -
Security-Aware Acquisition
The techniques developed by CERT researchers help you evaluate and manage cyber risk in today’s complex software supply chains.
Learn More -
System and Platform Evaluation
CERT researchers develop and perform advanced penetration testing and cyber vulnerability assessments of organizations' systems and platforms.
Learn More -
Empirical Research Office
We increase the capability delivered for every dollar the U.S. Department of Defense (DoD) invests in software systems by improving the use of data in decision making.
Learn More -
Digital Forensics: Advancing Solutions for Today's Escalating Cybercrime
As cybercrime proliferates, CERT researchers help law enforcement investigators process digital evidence by providing courses, methodologies, tools, skills, and experience.
Learn More -
Acquiring Systems, Not Just Software
The U.S. Department of Defense (DoD) and federal agencies are increasingly acquiring software-intensive systems instead of building them with internal resources. However, acquisition programs frequently have difficulty identifying the critical software acquisition activities, deliverables, risks, and opportunities.
Learn More -
USPS Case Study
The SEI teamed with the U.S. Postal Service to help it improve its cybersecurity and resilience and collaborated on a program to develop a strong cybersecurity workforce.
Learn More -
Cyber Lightning Case Study
The SEI hosted Cyber Lightning, a three-day joint training exercise involving Air National Guard and Air Force Reserve units from western Pennsylvania and eastern Ohio.
Learn More -
SEI Hosts Crisis Simulation Exercise for Cyber Intelligence Research Consortium
In SEI crisis simulation exercises, participants use scenarios that present fictitious malicious actors and environmental factors based on real-world events.
Learn More -
Runtime Assurance for Big Data Systems
To help assure runtime performance in big data systems, we designed a reference architecture to automatically generate and insert monitors and aggregate metric streams.
Learn More -
Smart Grid Maturity Model (SGMM)
The smart grid is a constantly evolving infrastructure of digital technology and power industry practices for improving the management of electricity generation, transmission, and distribution. The Smart Grid Maturity Model (SGMM) helps utilities plan their smart grid journeys.
Learn More -
Developing Tomorrow’s Solutions for Improving Cyber Simulations
The SEI CERT Division develops tools that virtualize systems to deliver high-quality training and validate user performance, ensuring cyber teams are ready to face ever-evolving threats and challenges.
Learn More -
Training Army Analysts to Use the Big Data Platform
ARCYBER is teaming with the SEI CERT Division to create training capabilities that help Army analysts develop the necessary skills for using its Big Data Platform.
Learn More -
Cyber Intelligence Study
The practice of cyber intelligence helps organizations protect their assets, know their risks, and recognize opportunities. In 2018, the SEI conducted a cyber intelligence study on behalf of the United States Office of the Director of National Intelligence (ODNI). Our task was to understand how organizations perform the work of cyber intelligence throughout the United States.
Learn More -
Delivering Real-World Experience with Cyber Simulations
The SEI CERT Division develops simulations that offer cyber operators a way to get the experience they need to perform at elite levels.
Learn More -
Architecture Analysis and Design Language (AADL)
Software for mission- and safety-critical systems, such as avionics systems in aircraft, is growing larger and more expensive. The Architecture Analysis and Design Language (AADL) addresses common problems in the development of these systems, such as mismatched assumptions about the physical system, computer hardware, software, and their interactions that can result in system problems detected too late in the development lifecycle.
Learn More -
AI Trust Lab: Engineering for Trustworthy AI
The SEI’s Trust Lab advances the development of trustworthy AI through accelerated research and collaboration. We develop frameworks, tools, and guidelines driven by trustworthy, human-centered, and responsible AI engineering practices.
Learn More -
Learning Patterns by Observing Behavior with Inverse Reinforcement Learning
The Software Engineering Institute (SEI) uses inverse reinforcement learning (IRL), an area of machine learning, to more efficiently and effectively teach novices how to perform expert tasks, achieve robotic control, and support activity-based intelligence.
Learn More -
xView 2 Challenge
The xView 2 Challenge applied computer vision and machine learning to analyze electro-optical satellite imagery before and after natural disasters to assess building damage. The competition’s sponsor was the Department of Defense’s Defense Innovation Unit (DIU). This technology is being used to assess building damage from wildfires in Australia and the United States.
Learn More -
Train, But Verify
Attacks on machine learning (ML) systems can make them learn the wrong thing, do the wrong thing, or reveal sensitive information. Train, But Verify protects ML systems by training them to withstand two of these threats at the same time and verifying them against realistic threat models.
Learn More -
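One widely used defense against attacks like those named in the entry above is adversarial training: perturbing inputs in the direction that most increases the loss and training on the perturbed examples. The NumPy sketch below shows that general idea (FGSM-style perturbations on a logistic-regression model); it illustrates adversarial training in general, not the project's specific defenses or threat models, and the data is synthetic.

```python
# Generic adversarial training sketch (FGSM-style) on logistic regression.
# Synthetic data; illustrates the technique, not the project's method.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(5)
eps, lr = 0.1, 0.1
for step in range(500):
    p = sigmoid(X @ w)
    # Gradient of the cross-entropy loss with respect to the inputs.
    grad_x = (p - y)[:, None] * w[None, :]
    # FGSM-style perturbation: move each input to increase the loss.
    X_adv = X + eps * np.sign(grad_x)
    # Train on the perturbed batch instead of the clean one.
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

print("adversarially trained weights:", np.round(w, 2))
```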
Community Guidance to Prevent Common Coding Errors
The SEI leads a community initiative to establish secure coding practices that prevent coding errors and that are reliable, usable, and effective.
Learn More -
Knowing When You Don’t Know: Engineering AI Systems in an Uncertain World
This project is benchmarking methods for quantifying uncertainty in machine learning (ML) models. It is also developing techniques to identify the causes of uncertainty, rectify them, and efficiently update ML models to reduce uncertainty in their predictions.
Learn More -
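One common way to quantify the uncertainty described above is to train an ensemble of models and measure how much its members disagree: high predictive entropy signals inputs the system is unsure about. The sketch below illustrates that idea with made-up softmax outputs; it is a generic technique, not the project's benchmarked methods.

```python
# Ensemble-based uncertainty sketch: average class probabilities from
# several models and use predictive entropy as an uncertainty score.
import numpy as np

def predictive_entropy(member_probs):
    """member_probs: array of shape (n_members, n_classes) for one input."""
    mean_p = member_probs.mean(axis=0)
    return -np.sum(mean_p * np.log(mean_p + 1e-12))

# Hypothetical softmax outputs from a 3-member ensemble for two inputs.
confident_input = np.array([[0.97, 0.02, 0.01],
                            [0.95, 0.04, 0.01],
                            [0.96, 0.03, 0.01]])
uncertain_input = np.array([[0.70, 0.20, 0.10],
                            [0.15, 0.75, 0.10],
                            [0.30, 0.30, 0.40]])

print("low uncertainty: ", round(predictive_entropy(confident_input), 3))
print("high uncertainty:", round(predictive_entropy(uncertain_input), 3))
```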
AI Engineering: A National Initiative
The SEI is taking the initiative to develop an AI engineering discipline that will lay the groundwork for establishing the practices, processes, and knowledge to build new generations of AI solutions.
Learn More -
Applying Causal Learning to Improve Software Cost Estimation and Project Control
SEI researchers have applied causal learning to help the Department of Defense identify factors that increase software costs and to provide guidance to control them.
Learn More -
Characterizing and Detecting Mismatch in ML-Enabled Systems
The development of machine learning-enabled systems typically involves three separate workflows with three different perspectives: those of data scientists, software engineers, and operations staff. The mismatches that arise among their assumptions can result in failed systems. We developed a set of machine-readable descriptors for elements of ML-enabled systems to make stakeholder assumptions explicit and prevent mismatch.
Learn More -
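The descriptors mentioned in the entry above are machine-readable statements of stakeholder assumptions. The sketch below is a hypothetical, simplified example of that idea (not the project's actual descriptor format): each workflow declares what it assumes about the model's inputs and outputs, and a small check flags disagreements before integration.

```python
# Hypothetical, simplified mismatch descriptors; not the project's actual format.
from dataclasses import dataclass

@dataclass
class ModelDescriptor:
    """Assumptions made by the data-science workflow about the trained model."""
    name: str
    input_features: dict            # feature name -> dtype the model expects
    output: str                     # e.g., "probability", "class_label"
    training_data_version: str

@dataclass
class ServingDescriptor:
    """Assumptions made by the operations workflow about the serving environment."""
    provided_features: dict         # feature name -> dtype produced upstream
    expected_output: str

def find_mismatches(model: ModelDescriptor, serving: ServingDescriptor) -> list:
    """Compare explicit assumptions and report any disagreements."""
    problems = []
    for feat, dtype in model.input_features.items():
        if feat not in serving.provided_features:
            problems.append(f"missing feature at serving time: {feat}")
        elif serving.provided_features[feat] != dtype:
            problems.append(f"dtype mismatch for {feat}: "
                            f"{serving.provided_features[feat]} vs {dtype}")
    if model.output != serving.expected_output:
        problems.append(f"output mismatch: {model.output} vs {serving.expected_output}")
    return problems

model = ModelDescriptor("fraud-scorer", {"amount": "float32", "country": "str"},
                        "probability", "2024-06")
serving = ServingDescriptor({"amount": "float64", "country": "str"}, "probability")
print(find_mismatches(model, serving))   # flags the float32/float64 disagreement
```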
Architecting the Future of Software Engineering: A National Agenda for Software Engineering Research & Development
This study identifies the technologies and areas of research that are most critical for enabling future software systems. The technology roadmap that resulted from this work is intended to guide the research efforts of the software engineering community toward future systems that are safe, predictable, and evolvable.
Learn More -
Untangling the Knot: Enabling Rapid Software Evolution
Our automated refactoring solution recommends ways to refactor existing software, significantly increasing the efficiency of software evolution.
Learn More -
Juneberry
Juneberry automates the training, evaluation, and comparison of multiple ML models against multiple datasets. This makes the process of verifying and validating ML models more consistent and rigorous, which reduces errors, improves reproducibility, and facilitates integration.
Learn More -
An Innovative Approach to Internet of Things (IoT) Security at the Edge
Internet of Things (IoT) devices can provide useful capabilities, but many have known security vulnerabilities that have been exploited by malicious actors. The SEI KalKi security platform leverages software-defined networking (SDN) and network function virtualization (NFV) to enable secure integration of IoT devices into Department of Defense (DoD) networks, even devices that are not fully trusted or configurable.
Learn More -
Tactical Cloudlets: Bringing the Cloud to the Tactical Edge
Making cloud computing resources available to military personnel and first responders in the field presents accessibility and security challenges. Tactical cloudlets provide secure, reliable, and timely access to cloud resources to help military and emergency personnel carry out their mission at the tactical edge despite unreliable connectivity to the cloud.
Learn More -
Connecting Securely to IoT Devices in Edge Environments
The SEI developed new layers of security and functionality so that field personnel can securely access IoT devices in edge environments.
Learn More -
Creating a Roadmap that Supports a Secure Move to the Cloud for the Army
The SEI helped establish a roadmap for each phase of the product lifecycle so that the Army Evaluation Center (AEC) can update its operational test and evaluation (OT&E) activities and support the Army's move to the cloud.
Learn More -
AI Workforce Development
The SEI is advancing the professional discipline of AI engineering through the latest academic advancements at Carnegie Mellon University.
Learn More -
Predicting Changing Conditions in Production Machine Learning Systems
The inference quality of deployed machine learning (ML) models degrades over time due to differences between training and production data, typically referred to as drift. The SEI developed a process and toolset for drift behavior analysis to better understand how models will react to drift before they are deployed and detect drift at runtime due to changing conditions.
Learn More -
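A common way to detect the drift described in the entry above is to compare the distribution of each feature in recent production data against the training data, for example with a two-sample Kolmogorov-Smirnov test. The sketch below shows that general approach on synthetic data with hypothetical feature names; it is a generic check, not the SEI's drift-analysis toolset.

```python
# Generic feature-drift check: compare training and production samples
# with a two-sample Kolmogorov-Smirnov test per feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training = {"latency_ms": rng.normal(100, 10, 5000),
            "payload_kb": rng.normal(4, 1, 5000)}
# Synthetic production data in which one feature has shifted.
production = {"latency_ms": rng.normal(130, 10, 1000),
              "payload_kb": rng.normal(4, 1, 1000)}

def drift_report(training, production, alpha=0.01):
    report = {}
    for feature in training:
        stat, p_value = ks_2samp(training[feature], production[feature])
        report[feature] = {"ks_stat": round(stat, 3),
                           "drifted": p_value < alpha}
    return report

print(drift_report(training, production))
# latency_ms should be flagged as drifted; payload_kb should not.
```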
DevSecOps Platform Independent Model (PIM)
The DevSecOps Platform Independent Model (PIM) enables organizations to implement DevSecOps in a secure, safe, and sustainable way and to fully realize the benefits of DevSecOps principles, practices, and tools.
Learn More -
Artificial Intelligence Engineering Body of Knowledge
AI engineering focuses on developing tools, systems, and processes that enable the application of artificial intelligence in real-world contexts. The body of knowledge will standardize this emerging discipline and guide practitioners in implementing AI systems.
Learn More -
AISIRT Ensures the Safety of AI Systems
The SEI created an Artificial Intelligence Security Incident Response Team (AISIRT) to ensure that organizations develop, adopt, and use AI effectively and safely, safeguarding the security of the nation.
Learn More -
The Advanced Computing Lab
The Advanced Computing Lab has extensive expertise in software performance optimization on diverse hardware architectures and in hardware and system design for software-based systems.
Learn More -
Automated Repair of Static Analysis Alerts (Redemption of False Positives)
The SEI Redemption tool automatically repairs code associated with static analysis alerts and is designed to be extensible. Currently, it repairs uninitialized-memory, null-pointer, and other C/C++ weaknesses.
Learn More -
Automating Container Minimization for the Edge
The SEI's Container Minimization Tool prunes and deduplicates files to reduce storage waste and software vulnerabilities in the resource-limited environment of the tactical edge.
Learn More
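The file deduplication mentioned in the entry above can be illustrated by hashing file contents and keeping one copy per distinct hash. The Python sketch below shows that idea in its simplest form; it is a conceptual illustration only, not the SEI Container Minimization Tool.

```python
# Conceptual sketch of file deduplication by content hash; not the
# SEI Container Minimization Tool.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicate_files(root):
    """Group files under `root` by the SHA-256 of their contents."""
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    # Every group with more than one file is a candidate for deduplication.
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

if __name__ == "__main__":
    for digest, paths in find_duplicate_files(".").items():
        print(f"{len(paths)} copies with hash {digest[:12]}...:")
        for p in paths:
            print("   ", p)
```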