SoK: Bridging Research and Practice in LLM Agent Security

White Paper
This systematic review synthesizes academic surveys, grey literature sources, and real-world case studies on securing LLM agents.
Publisher

Software Engineering Institute

DOI (Digital Object Identifier)
10.1184/R1/30610928

Abstract

Large Language Model (LLM) agents are rapidly transitioning from research prototypes to deployed systems, raising new and urgent security challenges. Unlike static chatbots, LLM agents interact with external tools, data, and services, creating pathways to real-world harm even during early stages of development. Existing guidance on securing agents is fragmented, creating obstacles for developers and organizations looking to build secure systems. To clarify the security landscape, we conduct a systematic review covering academic surveys, grey literature sources, and real-world case studies. We then (i) categorize the known threats to LLM agents and analyze key attack surfaces, (ii) construct a taxonomy of actionable security best practices encompassing the full LLM agent development lifecycle, highlighting gaps in the security landscape, and (iii) evaluate the adoption of these recommendations in practice. Together, these contributions establish a framework for developing comprehensive risk-mitigation strategies. Our synthesis promotes standardization, surfaces gaps in current practice, and establishes a foundation for future work toward secure LLM agents.