
Observational Human–AI (OHAI): A Defender Attribution Framework for Distinguishing Human vs. AI Threats

White Paper
OHAI is a proposed framework for providing probabilistic human‑to‑autonomous attribution in cyber incidents.
Publisher

Software Engineering Institute

Abstract

Artificial Intelligence (AI) is collapsing the cost, time, and skill barriers that once separated casual intruders from advanced persistent threats. For cyber defenders, the resulting evidence stream, rapid and adaptive, blurs the line between human and machine-enabled operations, undermining classic attribution methods and incident-response playbooks. This white paper introduces the Observational Human–AI (OHAI) Attribution Framework, a five-stage cycle of Triage, Classify, Analyze, Profile, and Report that uses at-rest and in-flight inspection to assign probabilistic confidence across a spectrum of attacker archetypes, from fully human operators to fully autonomous AI attacks. We catalog observable AI indicators and demonstrate practical application through two publicly documented AI-enabled incidents. OHAI supplies defenders with definitions, analytic heuristics, and automation-ready data fields, enabling faster discrimination of AI-driven threats, sharper predictive analytics, and more resilient human–machine defensive teaming. By operationalizing attribution, the framework aims to reduce the misallocation of response resources, improve early-warning fidelity, and inform future toolchains that will operate against autonomous adversaries.
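
The abstract refers to automation-ready data fields and probabilistic confidence assigned across attacker archetypes. The minimal Python sketch below illustrates what one such attribution record might look like; the field names, archetype labels, and example values are assumptions for illustration only, not the framework's actual schema.

```python
# Illustrative sketch only: field names and archetype labels are assumptions,
# not the canonical OHAI data schema.
from dataclasses import dataclass, field

# Hypothetical points along the human-to-autonomous spectrum.
ARCHETYPES = ["human", "ai_assisted_human", "human_supervised_ai", "autonomous_ai"]

@dataclass
class OHAIAttributionRecord:
    incident_id: str
    stage: str                                            # Triage, Classify, Analyze, Profile, or Report
    indicators: list[str] = field(default_factory=list)   # observable AI indicators noted so far
    # Probabilistic confidence across the archetype spectrum (values sum to 1.0).
    archetype_confidence: dict[str, float] = field(
        default_factory=lambda: {a: 1.0 / len(ARCHETYPES) for a in ARCHETYPES}
    )

# Example: an analyst records in-flight observations during the Analyze stage.
record = OHAIAttributionRecord(
    incident_id="INC-2025-0042",
    stage="Analyze",
    indicators=["machine-speed lateral movement", "templated phishing text"],
    archetype_confidence={
        "human": 0.10,
        "ai_assisted_human": 0.45,
        "human_supervised_ai": 0.30,
        "autonomous_ai": 0.15,
    },
)
print(record.stage, max(record.archetype_confidence, key=record.archetype_confidence.get))
```

A structured record of this kind is one way the framework's outputs could feed downstream automation, since each stage can update the confidence distribution as new indicators are observed.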