A Cloud Security Alliance survey has found that most enterprises are running undiscovered AI agents, with nearly two-thirds reporting related security incidents in the past year.
The report, commissioned by US-based Token Security and conducted online in January 2026, draws on responses from 418 IT and security professionals across organisations of varying sizes and geographies. Titled Autonomous but Not Controlled: AI Agent Incidents Now Common in Enterprises, the study examines how organisations are managing (or failing to manage) the growing volume of autonomous agents deployed across enterprise environments.
Incidents with tangible business consequences
Among organisations that reported AI agent-related incidents, 61% experienced data exposure, 43% faced operational disruption, and 35% reported financial losses. None of these respondents indicated zero material business impact, pointing to a broad pattern of harm rather than isolated cases.
In addition, the study highlights a notable gap between perceived and actual visibility. While 68% of respondents expressed confidence in their ability to monitor AI agents, the same cohort reported discovering previously unknown agents, with 41% saying this occurred multiple times. Shadow agents were most commonly found in internal automation or scripting environments (51%), LLM platforms including custom tools and plugins (47%), SaaS tools with built-in automation (40%), and developer-built workflows (40%).
Governance gaps and lifecycle risk
A key finding centres on AI agent decommissioning. Only 21% of organisations surveyed have formal processes for retiring agents once they are no longer needed. The report describes this as 'retirement debt', a condition in which agents persist beyond their intended use, retaining permissions and credentials that expose organisations to ongoing risk.
Autonomy models vary considerably across the sample. A majority (53%) operate agents autonomously for low-risk tasks while requiring human review for higher-risk actions. A further 24% rely on human-in-the-loop models for most tasks, and just 13% report fully autonomous deployments. When agents exceed their defined scope, 38% of respondents require human approval, 24% require the action to be logged, and only 11% automatically block it.
The survey also identifies action risk and human authorisation as the primary signals shaping AI agent governance. Context-aware controls are expected to grow in importance, with 79% of respondents rating them as important or very important over the next two years. In parallel, 66% report having established guardrails defining agent boundaries.
As a result of the incidents documented, organisations are prioritising risk management (29%), monitoring (28%), and permission control (19%), a shift that the report frames as a transition from agent discovery towards behavioural governance at scale.
The findings carry broader implications for enterprise security teams at a time when autonomous AI deployments are accelerating. The combination of poor decommissioning practices, inconsistent oversight models, and widespread shadow deployment suggests that governance frameworks have not kept pace with operational adoption.