2022 Research Review / DAY 2

Safety Analysis and Fault Detection Isolation and Recovery Synthesis for Time-Sensitive Cyber-Physical Systems

The operational complexity of cyber-physical systems (CPS) forces new autonomous features into day-to-day systems, such as vehicles and factories, a phenomenon termed increasingly autonomous CPS (IA-CPS) [Alves 2018]. IA-CPS have a complex architecture that weaves together hardware, AI-enabled functions or decision-making processes, human operators, and software. They are time sensitive and substitute high-frequency real-time algorithms for human actions. In such systems, conjunctions of faults and their timed propagation can cause fatal incidents, such as those involving autonomous cars. In those cases, the safety mechanisms either failed to prevent a fault or actually caused the incident.

This situation creates concerns for future DoD programs: These systems must not only detect failures and recover once, but also reconfigure multiple times, autonomously, as they adapt to different situations without human intervention.
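
The repeated, autonomous reconfiguration described here can be pictured as a small fault detection, isolation, and recovery (FDIR) loop. The Python sketch below is a minimal illustration only; the mode names, inputs, and recovery policy are assumptions made for this sketch and are not part of the SAFIR tooling.

```python
from enum import Enum, auto

# Hypothetical FDIR (fault detection, isolation, and recovery) cycle.
# Mode names and the reconfiguration policy are assumptions for this sketch.

class Mode(Enum):
    NOMINAL = auto()    # all units healthy
    DEGRADED = auto()   # reconfigured onto a redundant unit
    SAFE_STOP = auto()  # minimal-risk condition, no recovery option left

def fdir_step(mode: Mode, fault_detected: bool, backup_available: bool) -> Mode:
    """One autonomous reconfiguration step: detect, isolate, recover."""
    if not fault_detected:
        return mode
    if backup_available:
        return Mode.DEGRADED   # isolate the faulty unit, switch to the backup
    return Mode.SAFE_STOP      # no backup left: fall back to a safe state

# The system may have to reconfigure several times as successive faults occur.
mode = Mode.NOMINAL
for fault, backup in [(False, True), (True, True), (True, False)]:
    mode = fdir_step(mode, fault, backup)
    print(mode.name)   # NOMINAL, then DEGRADED, then SAFE_STOP
```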

SAFIR addresses safety analysis of time-sensitive CPS in both its theoretical and practical dimensions.

Dr. Jerome Hugues
Senior Architecture Researcher

Implementing the DoD’s AI vision requires advances in safety analysis and fault detection, isolation, and recovery synthesis (SAFIR) to (1) model and analyze dynamic reconfiguration and fault propagation due to fault sequences and (2) enforce safe reconfiguration. In its first two years, SAFIR has investigated the properties a CPS architecture must demonstrate to integrate autonomy functions and fulfill safety objectives, and how to integrate those properties into a model-based systems engineering (MBSE) practice:

  • SAFIR improved the IA-CPS systems engineering body of knowledge, focusing on safety mechanisms from design through verification and validation (V&V), using model-based engineering (MBE) techniques.
  • SAFIR delivered an updated taxonomy to express fault models of IA-CPS and derive efficient detection mechanisms. Together with Georgia Tech, we explored techniques to detect tampering with sensor data, whether caused by faults or cyber attacks, the conditions under which such attacks are detectable, and the possibility of deriving a controller for an IA-CPS in the presence of timing errors (see the detection sketch after this list).
  • SAFIR delivered formally backed reasoning and simulation capabilities for IA-CPS architectures by mechanizing the SAE AADL language using the Coq theorem prover, expanding V&V capabilities for MBE tooling.
  • SAFIR defined and implemented the Architecture-Supported Audit Processor (ASAP): a tool that generates a number of safety-specific system views that deeply integrate a system’s architecture and arguments.
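
As a concrete but simplified picture of residual-based detection of faulty or tampered sensor data, the Python sketch below flags a sensor when its readings deviate persistently from a model prediction. The threshold, persistence window, and data are assumptions for illustration; they are not the detection conditions derived with Georgia Tech.

```python
# Minimal residual-based detector sketch (illustrative assumption, not the
# project's detection technique): flag a sensor when its reading drifts from a
# simple model prediction for several consecutive samples.

def detect_anomaly(readings, predictions, threshold=0.5, persistence=3):
    """Return the index at which a fault/tampering alarm is raised, or None."""
    consecutive = 0
    for i, (y, y_hat) in enumerate(zip(readings, predictions)):
        residual = abs(y - y_hat)
        consecutive = consecutive + 1 if residual > threshold else 0
        if consecutive >= persistence:   # persistent deviation, not a transient
            return i
    return None

# Example: a sensor that starts reporting biased values at sample 5.
readings    = [1.0, 1.1, 0.9, 1.0, 1.0, 2.1, 2.2, 2.3, 2.2]
predictions = [1.0] * len(readings)
print(detect_anomaly(readings, predictions))  # -> 7
```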

SAFIR addresses safety analysis of time-sensitive CPS in both its theoretical and practical dimensions, and it contributes to the SEI’s line of research on artificial intelligence and autonomy. At the end of the second year, SAFIR has established the theoretical foundation to perform safety evaluations in the context of time-dependent failure conditions.

SAFIR expands MBSE with mathematically grounded techniques to analyze the architecture of AI-based cyber-physical systems—such as drones—and derive an argument on their safety.

In Context

This FY2021–23 project

  • builds on SEI expertise in MBSE, safety analysis, and the AADL language and extends past contributions from Integrated Safety and Security Engineering (ISSE) and TwinOps
  • aligns with the CMU SEI technical objective to bring capabilities through software that make new missions possible or improve the likelihood of success of existing ones and to be trustworthy in construction and implementation
  • aligns with the CMU SEI technical objective to be resilient in the face of operational uncertainties, including known and yet-unseen adversary capabilities

Mentioned in this Article

[Alves 2018]
Alves, E. E.; Bhatt, D.; Hall, B.; Driscoll, K.; Murugesan, A.; & Rushby, J. Considerations in Assuring Safety of Increasingly Autonomous Systems. Technical Report NASA/CR-2018-220080, NF1676L-30426. NASA. 2018.
