When AI Becomes an Admin: Lessons for CISOs from the BodySnatcher Vulnerability

The BodySnatcher incident is best read as a warning shot rather than simply an “AI flaw in ServiceNow.” It shows what happens when agentic AI is introduced into a critical workflow platform without being fully brought under identity and access governance. For CISOs, the core issue is clear: when AI agents can create accounts, assign roles, or drive operational workflows, they have to be treated as first‑class identities within your control model, not as side features of an application.

In other words, this incident sits squarely in the CISO remit: it is less about “AI gone wrong” and more about identity, authorization, and governance being out of step with how powerful these agents have become.

 

What Actually Happened

BodySnatcher (tracked as CVE‑2025‑12420) is a critical authentication and authorization vulnerability in ServiceNow’s Now Assist AI Agents and Virtual Agent API. AppOmni disclosed it in late 2025, and ServiceNow has since patched it.

Seen through a control lens, several assumptions broke at once.

Identity Fabric Fragmentation

The ServiceNow Virtual Agent stack suffered from a breakdown in identity propagation. By exploiting a flaw in how the agent managed session tokens, an actor could pivot from an unauthenticated ‘guest’ state to a fully impersonated user. This effectively created an identity silo where the agent operated outside the visibility and enforcement of the enterprise’s primary SSO, MFA, and conditional access policies.

Service Account Over-Privileging

The incident highlights a critical governance gap: the escalation of intent. Agentic workflows, often granted broad service-account permissions to streamline ITSM tasks, became a vehicle for unauthorized administrative actions. This underscores the risk of ‘Agentic Over-Privilege,’ where the AI is granted rights that exceed the security context of the user it is supposed to be assisting.

Unintended API Exposure

What was designed as a human-centric interface was essentially an undocumented API surface. By invoking these agent capabilities programmatically, attackers bypassed the ‘friction’ of the user interface. For security leaders, this serves as a warning that AI agents are not just UI enhancements; they are high-risk API endpoints that require the same Zero Trust scrutiny as any public-facing gateway.

Shift from Interactive UI to Headless Automation Surface

Security guardrails were built on the assumption of benign, human-in-the-loop interaction via the chat interface. However, because these capabilities were exposed via programmatically accessible APIs, attackers could bypass the intended UI logic. This transformed the agent from a helpful assistant into a headless automation surface, allowing for high-velocity, scripted exploits that functioned entirely outside the visibility of traditional user-behavior monitoring.

The result was that an external actor with tenant details and an email address could act as an internal user, exercise privileged workflows, and create persistent backdoor admin access without going through normal login and approval paths. Because ServiceNow often sits at the center of ITSM, HR, security operations, and customer support, that exposure has platform and supply‑chain impact, not just single‑application impact.

The aim is not to single out one vendor. The pattern is familiar: AI features are being added quickly to core SaaS platforms, while some of the underlying identity and authorization assumptions remain static. The challenge now is to assume this pattern exists elsewhere and ensure the identity governance and administration layer can absorb it.

 

A Practical Lens for CISOs

Many CISOs are already contending with three overlapping pressures: rapid AI adoption, complex SaaS ecosystems, and rising expectations from boards and regulators. BodySnatcher sits where all three meet.

A practical way to frame it is:

  • Agentic AI is an identity surface. Any agent that can act in your environment should be modeled, governed, and monitored like a privileged human identity – ideally within a risk‑aware identity management approach.

  • Vendor patches are necessary but not sufficient. They close specific vulnerabilities, but they do not automatically extend Segregation of Duties, lifecycle, and risk monitoring to AI agents across your broader estate.

  • Governance has to be enterprise‑wide. ITSM, ERP, CRM, HR, and security tools now all embed AI; treating each AI feature as an isolated item makes it hard to reason about aggregate risk, which is exactly why converged identity and access management is becoming a board topic.

With that lens, the question changes from “What went wrong here?” to “What guardrails are needed so the next AI feature, in any platform, lands on a safer foundation?”

 

A Three‑Tier Plan After BodySnatcher

This plan is about getting out of the dark quickly and closing the most easily abused gaps, without derailing useful AI projects. To keep the response manageable, it helps to frame it in three tiers: immediate stabilization, medium‑term governance, and longer‑term architecture.

Tier 1: Next 90 days – establish visibility and remove obvious weaknesses

The priority is understanding exposure and closing straightforward gaps without blocking legitimate AI usage.

  • Inventory where agentic AI is already in use. Map built‑in agents, chatbots with action capabilities, and external AI integrations across ITSM, ERP, CRM, HR, and security tooling.

  • Eliminate static and shared credentials for AI and chat integrations. Identify any static, shared, or tenant‑wide secrets and replace them with per‑agent identities and modern secrets management with rotation and revocation.

  • Bring action‑capable agents into your identity store. Ensure any agent that can perform actions (not just answer questions) is represented as an identity that can appear in access reviews and joiner/mover/leaver processes, ideally within your central IGA platform; a minimal sketch of what such per‑agent identity records might look like follows this list.
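
As an illustration of the two points above, the sketch below models each agent as its own identity record with an accountable owner, a narrow business scope, and a per‑agent secret on a rotation deadline. All class, field, and agent names are hypothetical; a real deployment would delegate secret storage and rotation to the platform’s secrets manager rather than hold them in application code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import secrets

ROTATION_PERIOD = timedelta(days=30)  # example policy: rotate agent secrets monthly

@dataclass
class AgentIdentity:
    """One identity record per AI agent -- never a shared 'external agent' account."""
    agent_id: str    # unique, appears in access reviews and joiner/mover/leaver processes
    owner: str       # accountable human owner
    scopes: list     # business-level actions the agent may perform
    secret: str = field(default="", repr=False)
    rotate_by: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def rotate_secret(self) -> None:
        """Issue a fresh per-agent secret and push the rotation deadline forward."""
        self.secret = secrets.token_urlsafe(32)
        self.rotate_by = datetime.now(timezone.utc) + ROTATION_PERIOD

# Example: two agents, each with its own identity and narrow scope.
itsm_agent = AgentIdentity("itsm-triage-agent", owner="it-service-owner",
                           scopes=["create_it_incident"])
hr_agent = AgentIdentity("hr-status-agent", owner="hr-ops-owner",
                         scopes=["read_hr_ticket_status"])

for agent in (itsm_agent, hr_agent):
    agent.rotate_secret()   # no static or tenant-wide secrets shared across agents
```

The point of the structure is less the code than the record it implies: every action‑capable agent has an owner, a scope, and a credential that can be rotated and revoked on its own, so it can be reviewed and disabled like any other privileged account.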

These steps reduce “unknowns” and the easiest abuse paths, while giving a clearer view of where agentic AI already touches critical workflows.

 


Tier 2: 3–12 months – bring AI fully under identity governance

Once basic visibility is established, the next step is to fold AI agents into the same governance disciplines already used for human and service accounts.

  • Define least‑privilege scopes for each agent. For every agent, document the specific business actions it needs (for example, “create IT incident,” “read HR ticket status”) and remove broad “create anywhere” or “admin‑equivalent” capabilities.

  • Extend Segregation of Duties rules to cover AI roles. Update SoD rules and libraries to include AI roles as well as human ones; for example, an agent that can open a change or ticket should not also be able to approve, close, or override it. A short illustrative check appears after this list.

  • Normalize logging and monitoring of AI behaviour. Ensure logs clearly distinguish between human, service account, and AI‑initiated actions and feed them into SIEM and UEBA tooling for anomaly detection.
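
To make the SoD point concrete, here is a minimal sketch of an SoD library extended to agent roles; the rule pairs, identities, and entitlements are purely illustrative. The same check that flags a human with conflicting duties flags an over‑privileged agent.

```python
# Hypothetical SoD rules expressed as pairs of conflicting business actions.
SOD_RULES = [
    ("open_change", "approve_change"),
    ("create_ticket", "close_ticket"),
    ("raise_payment", "approve_payment"),
]

# Entitlements per identity -- human, service account, or AI agent alike.
ENTITLEMENTS = {
    "alice (human)": {"open_change"},
    "change-mgmt-agent (ai)": {"open_change", "approve_change"},  # over-privileged
}

def sod_violations(identity: str) -> list[tuple[str, str]]:
    """Return every SoD rule the identity's combined entitlements violate."""
    granted = ENTITLEMENTS.get(identity, set())
    return [(a, b) for a, b in SOD_RULES if a in granted and b in granted]

for identity in ENTITLEMENTS:
    for conflict in sod_violations(identity):
        print(f"SoD violation for {identity}: {conflict[0]} + {conflict[1]}")
```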

At this stage, the goal is for agents to be formally represented within the identity graph and Segregation of Duties model, and for their risk to be assessed with the same confidence as privileged human access.

 

Tier 3: 12–24 months – move toward an agent‑native identity architecture

Longer term, many organizations will move beyond patching and tuning toward more structural changes in how identity and AI interact. Industry work, such as CSA’s agentic AI IAM guidance, points in this direction.

  • Adopt an “agent‑as‑principal” model. Shift to patterns where agents obtain time‑bound, delegated “on‑behalf‑of” privileges with dynamically evaluated policies, rather than broad, long‑lived platform tokens (see the sketch after this list).

  • Integrate AI use‑case reviews into existing risk processes. Run new AI workflows through security and risk design reviews similar to high‑risk code changes or privileged role designs, including threat modeling and rollback plans.

  • Align with emerging AI governance frameworks. Map controls to frameworks such as NIST AI RMF profiles and sector‑specific guidance that already treat AI as part of the critical control environment, including identity and access.
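
A minimal sketch of the “agent‑as‑principal” pattern, using only illustrative names: instead of a long‑lived platform token, the agent receives a short‑lived grant that records who it is acting for and which single action it may take, and every action is checked against that grant. In practice the grant would be issued by the identity provider after dynamic policy evaluation; the sketch only shows the shape of the control.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class DelegatedGrant:
    """Short-lived, on-behalf-of authority issued to an agent for one narrow scope."""
    agent_id: str
    on_behalf_of: str      # the human user whose security context applies
    scope: str             # single business action, e.g. "create_it_incident"
    expires_at: datetime

def issue_grant(agent_id: str, user: str, scope: str,
                ttl: timedelta = timedelta(minutes=5)) -> DelegatedGrant:
    # In a real deployment, policy is evaluated dynamically here
    # (user entitlements, device posture, risk score) before anything is issued.
    return DelegatedGrant(agent_id, user, scope,
                          datetime.now(timezone.utc) + ttl)

def authorize(grant: DelegatedGrant, requested_scope: str) -> bool:
    """Allow an agent action only while the delegated grant is valid and in scope."""
    return (grant.scope == requested_scope
            and datetime.now(timezone.utc) < grant.expires_at)

grant = issue_grant("itsm-triage-agent", user="alice", scope="create_it_incident")
assert authorize(grant, "create_it_incident")      # permitted, time-bound
assert not authorize(grant, "grant_admin_role")    # outside the delegated scope
```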

The aim is to make identity‑governed AI the default design choice, instead of something layered on after deployment.

 

Control Objectives to Anchor On

Underpinning these tiers are a few control objectives that can guide design and prioritization.

  • Treat AI agents as first‑class identities. Each agent should have its own identity, lifecycle, and accountable owner; no shared “external agent” identities and no anonymous workflows for high‑impact actions. This is the same mindset shift described in SafePaaS’s risk‑based identity governance work.

  • Scope and constrain agent capabilities. Capabilities should be defined in business terms and limited to specific tables, workflows, or transaction types, avoiding generic “create anywhere” or “admin” privileges.

  • Include agents in Segregation of Duties and risk modeling. SoD rules and risk simulations should explicitly consider agent roles and ask, “What happens if this agent is misconfigured or abused?”

  • Use dynamic, non‑static authentication and delegation. Replace platform‑wide or tenant‑wide static secrets with per‑agent secrets, rotation, and scoped delegation that match the defined privileges.

  • Ensure telemetry and explainability. Logs and analytics should allow security teams and auditors to answer who (or what) did what, where, and with what downstream effect.
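
As an illustration of the telemetry objective, the hypothetical event below shows the minimum fields that let a SIEM or an auditor separate AI‑initiated actions from human and service‑account activity and trace them back to a delegating user. The field names are illustrative, not a specific product schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit event for an AI-initiated action; field names are illustrative.
event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "actor_type": "ai_agent",            # vs. "human" or "service_account"
    "actor_id": "itsm-triage-agent",
    "on_behalf_of": "alice",             # the delegating user, if any
    "action": "create_it_incident",
    "target": "incident/INC0012345",
    "grant_id": "dg-7f3a",               # ties the action to a delegated grant
    "result": "success",
}
print(json.dumps(event, indent=2))       # ship to SIEM/UEBA as a structured record
```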

 

Role of an Identity Governance Platform

An enterprise governance platform can’t prevent every vendor‑side defect, but it can materially reduce how far a defect can be exploited and how long exploitation goes undetected. The useful question is what capabilities this layer should offer, regardless of which product is in the spotlight.

A modern identity governance platform should help you:

  • Build a unified view of humans, service accounts, bots, and AI agents so access reviews, certifications, and risk scoring apply consistently across all actors.

  • Run preventive access risk analysis on AI use cases before deployment, by simulating the entitlements and Segregation of Duties conflicts an agent will introduce across ERP, ITSM, CRM, and HR systems.

  • Continuously monitor entitlements and activity to detect when agents deviate from expected patterns or violate defined policies, independent of the underlying SaaS vendor – aligned to the AI in IGA guidance.

  • Provide audit‑ready evidence that AI‑driven workflows are governed: documented policies, access decisions, and review histories for agents as well as human users.

SafePaaS operates in this governance layer: it ingests application entitlements and activity from any business‑critical system, applies SoD and risk analytics, and supports governance workflows that can now be extended to AI agents as they become part of core business processes. The objective is to apply familiar identity controls to a new class of actors, without slowing responsible AI adoption.

 


A Constructive Path Forward

BodySnatcher highlights a set of gaps that many organizations could encounter as AI embeds deeper into critical workflows. The constructive response is neither to halt AI nor to assume vendors can fully absorb the risk, but to intentionally bring AI agents into the identity and access governance practices that are already proven.

By taking a phased, identity‑first approach – establishing visibility, applying governance, and evolving architecture – CISOs can make AI‑driven automation an asset rather than an unmanaged extension of the attack surface.
