Identity is still the only control surface security truly owns, yet AI has quietly punched a hole straight through it, one that 92% of organizations can't even see.

The 92% blind spot AI quietly opened in your identity program

For years, identity has been the closest thing to a reliable control surface. It stayed in place as users and identities moved from on‑prem to SaaS to cloud, even as devices and networks changed underneath. Now AI has carved out a gap big enough to undermine that entire control model.

Recent industry research and customer assessments consistently show that most organizations lack centralized visibility into AI and non-human identities, and many lack confidence in detecting misuse. That’s not about speculative AI models or hypothetical artificial general intelligence; it’s about very real, very present identities acting inside your ERP, finance, HR, CRM, and data platforms that many organizations currently have limited governance controls for.

Picture an LLM‑powered agent wired into your stack with a single API key. It may access customer data from the CRM, update records in the ERP, and interact with collaboration tools—all under a generic service account with no clear owner, no certification history, and no segregation-of-duties review. Or a “temporary” service account created for an AI integration that outlives the project by years, quietly retaining broad access to finance or HR data with no lifecycle tied to HR or IT workflows.
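
To make that concrete, here is a minimal sketch of what such an identity might look like once you pull it into a single record. Every name, scope, and field below is hypothetical; the pattern of broad access alongside empty ownership and review fields is the point.

```python
# Purely illustrative: a hypothetical ungoverned AI integration identity, flattened
# into one record. Account name, scopes, and field names are all made up.
orphaned_agent = {
    "identity": "automation@integration-svc",   # generic service account, not a named person
    "credential": {"type": "api_key", "created": "2021-03-14", "expires": None},
    "entitlements": {
        "crm": ["read:customer_records"],
        "erp": ["write:sales_orders", "update:master_data"],
        "collab": ["post:channels", "read:files"],
    },
    "owner": None,            # no accountable owner
    "last_certified": None,   # never part of an access review
    "sod_reviewed": False,    # no segregation-of-duties check
}
```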

In both cases, the failure isn’t “AI risk” in the abstract. The failure is painfully common: no visibility, no ownership, and no policy‑based control over non‑human identities—AI agents, service accounts, API keys, machine identities—that now outnumber humans in many environments.

Identity has stopped being just human, yet most IAM and IGA stacks still treat only employees and contractors as first‑class.

 

Why the tools you already have can’t see your AI

It’s tempting to assume this is just another IAM configuration problem. Turn a few knobs, add some integrations, and those AI agents will fall neatly into line. Unfortunately, that’s not how any of this works.

AI agents do not authenticate like human identities. They often act through shared credentials, long‑lived tokens, or embedded secrets rather than individual named accounts. In your logs and SSO dashboards, they appear as faceless automation identities such as “automation@integration‑svc”, not as an accountable user who approved access to critical systems like SAP.

Meanwhile, non‑human identities have proliferated; in many environments they already outnumber humans by an order of magnitude or more. These identities typically don’t show up in HR systems, never complete training, and rarely appear in traditional access reviews—yet they often have the broadest and most durable access to your crown‑jewel systems and data.

The lifecycle story is broken as well. Human accounts flow through joiner–mover–leaver processes: job change in HR, access change in IAM, and offboarding when employment ends. AI and machine identities are spun up in projects, toggled on inside SaaS, created ad hoc by developers, and almost never retired in a timely, governed way.
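
Here is a rough sketch of that lifecycle gap in code: given a flattened inventory, flag the non-human identities that show no lifecycle signal at all. The field names and the 90-day credential threshold are assumptions for illustration, not a prescription.

```python
from datetime import date, timedelta

MAX_CREDENTIAL_AGE = timedelta(days=90)  # assumed rotation policy, purely illustrative

def stale_non_human_identities(identities, today=None):
    """Flag non-human identities with no owner, no HR linkage, or an aged credential."""
    today = today or date.today()
    flagged = []
    for ident in identities:
        if ident["type"] == "human":
            continue  # humans flow through joiner-mover-leaver; machines usually don't
        no_owner = ident.get("owner") is None
        no_hr_link = ident.get("linked_hr_record") is None
        aged_credential = (today - ident["credential_created"]) > MAX_CREDENTIAL_AGE
        if no_owner or no_hr_link or aged_credential:
            flagged.append(ident["identity"])
    return flagged

print(stale_non_human_identities([
    {"identity": "automation@integration-svc", "type": "service_account",
     "owner": None, "linked_hr_record": None, "credential_created": date(2021, 3, 14)},
]))  # -> ['automation@integration-svc']
```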

And then there’s the fragmentation. AI agents don’t respect platform boundaries. A single agent might read from ERP, enrich with CRM context, query a data warehouse, and write back into finance or case management—all orchestrated through APIs and integration layers your IAM tools see only in pieces. No single IAM product is building you a consolidated view of “this agent, in this use case, can do these things to these datasets across these systems.”
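
Below is a minimal sketch of the consolidation step that is missing today: merging per-system entitlement exports into one view of what a single agent can do, and where. The export shape and system names are assumptions for illustration.

```python
from collections import defaultdict

def consolidate(agent_id, per_system_exports):
    """Build one cross-system view of an agent's effective permissions."""
    view = defaultdict(list)
    for system, entries in per_system_exports.items():
        for entry in entries:
            if entry["identity"] == agent_id:
                view[system].extend(entry["actions"])
    return dict(view)

exports = {
    "erp": [{"identity": "invoice-agent", "actions": ["read:vendor_invoices"]}],
    "crm": [{"identity": "invoice-agent", "actions": ["read:accounts"]}],
    "warehouse": [{"identity": "invoice-agent", "actions": ["query:finance_mart"]}],
    "finance": [{"identity": "invoice-agent", "actions": ["write:payment_proposals"]}],
}
print(consolidate("invoice-agent", exports))
```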

So when a board member or regulator asks, “Who owns our AI agents touching SAP, Salesforce, and finance data, and what policy governs them?”, the honest answer in most organizations is still: we don’t really know.

 

From federated identity to federated governance for AI identities

Federation is a familiar idea in identity. Federated identity allows users to authenticate across domains using a trusted identity provider, rather than requiring separate credentials for every system. It primarily answered the question: “Who is getting in?”

The AI era is forcing a different question: not only who is getting in, but whether they should be there, what they’re allowed to do while they are there, and how you prove control afterwards.

This is where federated governance becomes essential. It acts as a control layer that sits above your IAM, PAM, and application settings, unifying everything that can act in your environment—humans, machines, agents—into a single, policy‑driven picture.

In practical terms, federated governance means:

  • Consuming identity and access data from multiple IAM, ERP, SaaS, data, and AI systems.
  • Normalizing humans and non‑human identities—AI agents, bots, service accounts, API keys, machine identities—into a common model with owners, purpose, and risk attributes.
  • Applying consistent policies (SoD, least privilege, critical access, AI‑specific rules) and driving remediation back into those domains when something falls out of bounds.
  • Extending coverage to applications that live outside your ERP‑native and IGA tooling (for example, systems that SAP GRC or a traditional IGA suite such as SailPoint never touches), so the same policies reach them too.

Federated governance is a control plane that sits above individual IAM, IGA, GRC, and PAM systems, unifying policy, visibility, and evidence for every identity—human or non‑human—across ERP, SaaS, data, and AI platforms.

If federated identity management lets AI agents in, federated governance decides whether they should be there, what they can do, and how you prove it to auditors, boards, and regulators.

 

How a federated governance layer actually changes how you operate

It’s easy to talk about “control planes” in the abstract, but the value shows up when you look at what changes in day‑to‑day operations once that control plane is in place.

The first shift is simple but significant: you get a complete inventory of everything that can act in your environment. A federated governance layer aggregates identities from directories and IAM tools, from SAP and Oracle, from Salesforce and ServiceNow, from AI platforms and orchestration frameworks. Human identities, AI copilots, MCP‑connected agents, service accounts, API keys, machine identities—they all land in a single catalog.

But the catalog is only the first step. Each entry is enriched so it looks and behaves like a first‑class identity:

  • An accountable owner.
  • A clearly defined business purpose and use case.
  • System context: which applications, environments, and MCP servers it touches.
  • Risk attributes: SoD conflicts, critical transactions, and data classifications involved.

An AI agent pushing invoices through finance no longer looks like a vague “integration user”; it looks like a recognizable, reviewable identity, just like a financial analyst role.
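
As a rough sketch, an enriched entry might look something like the record below. The field names and values are illustrative assumptions, not a product schema.

```python
from dataclasses import dataclass, field

@dataclass
class GovernedIdentity:
    """One catalog entry for anything that can act: human, agent, or service account."""
    identity: str
    identity_type: str              # "human", "ai_agent", "service_account", ...
    owner: str                      # accountable person
    business_purpose: str
    systems: list = field(default_factory=list)
    risk_attributes: dict = field(default_factory=dict)

invoice_agent = GovernedIdentity(
    identity="invoice-agent",
    identity_type="ai_agent",
    owner="jane.doe@example.com",
    business_purpose="Automated invoice posting for EMEA entities",
    systems=["erp:prod", "crm:prod", "mcp:finance-tools"],
    risk_attributes={
        "sod_conflicts": [],
        "critical_transactions": ["post_invoice"],
        "data_classes": ["financial"],
    },
)
```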

Once that foundation is in place, policy takes over. Instead of chasing exceptions in spreadsheets, you define rules that cut across systems and identity types. For example:

  • No AI identity with write access to general ledger accounts without an explicit owner, SoD review, and time‑bound justification.
  • No agent with simultaneous access to vendor bank accounts and vendor creation workflows in production.
  • No “headless” service account with access to both HR master data and off‑cycle payment processes.

The governance layer uses those policies, plus attributes, to auto‑classify and risk‑score identities: long‑lived tokens, cross‑system access paths, toxic combinations, or identities with no owner surface as high‑risk by default.
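
Here is a hedged sketch of what those rules and that classification can look like as policy-as-code, assuming catalog records that carry entitlement strings and ownership fields. The rule names mirror the examples above; every field and entitlement name is an assumption for illustration.

```python
def evaluate(identity):
    """Check one catalog record against cross-system rules and classify its risk."""
    violations = []
    ents = set(identity.get("entitlements", []))

    # No AI identity writes to the general ledger without an owner, an SoD review,
    # and a time-bound justification.
    if "write:general_ledger" in ents and not (
        identity.get("owner") and identity.get("sod_reviewed") and identity.get("access_expires")
    ):
        violations.append("gl_write_without_owner_sod_and_expiry")

    # No agent holds vendor bank account access and vendor creation workflows at once.
    if {"access:vendor_bank_accounts", "workflow:vendor_creation"}.issubset(ents):
        violations.append("toxic_combination_vendor_bank_and_creation")

    # No headless service account touches both HR master data and off-cycle payments.
    headless = identity.get("identity_type") == "service_account" and not identity.get("owner")
    if headless and {"read:hr_master_data", "run:off_cycle_payments"}.issubset(ents):
        violations.append("headless_access_to_hr_and_off_cycle_payments")

    # Auto-classification: unowned identities or any violation surface as high risk.
    risk = "high" if violations or not identity.get("owner") else "standard"
    return risk, violations

print(evaluate({
    "identity": "invoice-agent", "identity_type": "ai_agent", "owner": None,
    "entitlements": ["write:general_ledger", "read:vendor_invoices"],
}))  # -> ('high', ['gl_write_without_owner_sod_and_expiry'])
```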

Reviews begin to look different as well. Instead of separate campaigns for “users” here and “service accounts” there (with AI consistently out of scope), business owners see people and non‑human identities in the same pane of glass. Each AI agent comes with context: where it runs, what datasets it touches, what actions it can take, and which SoD or data‑policy exceptions apply. Certifications stop being a human‑only routine and become a complete view of who and what is operating in your critical systems.

Most importantly, policies now have teeth. When an identity fails certification or violates a rule, the federated governance layer doesn’t just log an issue; it orchestrates the fix: revoking a role in SAP, rotating an API key, narrowing a scope in a SaaS app, or decommissioning a stale agent. Continuous discovery ensures new AI agents, integrations, and service accounts are pulled under policy as soon as they appear, not at your next annual audit.
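
As a sketch of that closed loop, the connector stubs below stand in for whatever your IAM, ERP, and secrets tooling actually exposes; the function names are hypothetical placeholders, not a real API.

```python
def revoke_erp_role(identity, role):
    print(f"[erp] revoking role '{role}' from {identity}")        # stub connector

def rotate_api_key(identity):
    print(f"[secrets] rotating API key for {identity}")           # stub connector

def open_owner_assignment_task(identity):
    print(f"[workflow] routing {identity} to the data owner")     # stub connector

REMEDIATIONS = {
    "gl_write_without_owner_sod_and_expiry": lambda i: revoke_erp_role(i, "gl_posting"),
    "toxic_combination_vendor_bank_and_creation": lambda i: revoke_erp_role(i, "vendor_bank_maintenance"),
    "headless_access_to_hr_and_off_cycle_payments": lambda i: rotate_api_key(i),
    "no_owner": lambda i: open_owner_assignment_task(i),
}

def remediate(identity, violations):
    """Drive each policy violation to a concrete fix instead of just logging it."""
    for violation in violations:
        action = REMEDIATIONS.get(violation)
        if action:
            action(identity)
        else:
            print(f"[review] manual follow-up for {identity}: {violation}")

remediate("invoice-agent", ["gl_write_without_owner_sod_and_expiry", "no_owner"])
```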

In practice, that means the story changes from “we found a rogue AI agent in the audit” to “the system flagged a new agent accessing finance data with no owner; we routed it to the data owner, and because they didn’t approve, its access was automatically revoked in SAP and the underlying IAM tool.”

 

Turning AI from a black box into a board‑ready metric

Once AI identities sit inside the same governance fabric as your human identities, your conversations with stakeholders evolve quickly.

At the board level, you can stop talking about AI solely in terms of opportunity and experiment count. Instead, you can show a concise, metrics‑driven picture:

  • Total AI and non‑human identity count.
  • Percentage with named owners and a defined business purpose.
  • Number of high‑risk AI identities with SoD violations or critical access.
  • Time‑to‑remediation for AI‑related policy breaches.

AI stops being a mysterious innovation topic and becomes an identity‑governance metric you can defend, trend, and improve.
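
Those numbers can come straight out of the same catalog that drives policy. A minimal sketch, assuming the catalog shape used in the earlier examples:

```python
def board_metrics(catalog):
    """Summarize AI and non-human identity governance from the unified catalog."""
    nhi = [i for i in catalog if i["identity_type"] != "human"]
    owned = [i for i in nhi if i.get("owner") and i.get("business_purpose")]
    high_risk = [i for i in nhi if i.get("risk") == "high"]
    remediation_days = [i["days_to_remediate"] for i in nhi if "days_to_remediate" in i]
    return {
        "ai_and_nhi_count": len(nhi),
        "pct_with_owner_and_purpose": round(100 * len(owned) / len(nhi), 1) if nhi else 0.0,
        "high_risk_count": len(high_risk),
        "avg_days_to_remediate": sum(remediation_days) / len(remediation_days) if remediation_days else None,
    }

print(board_metrics([
    {"identity": "invoice-agent", "identity_type": "ai_agent", "owner": "jane.doe@example.com",
     "business_purpose": "Invoice posting", "risk": "standard", "days_to_remediate": 3},
    {"identity": "automation@integration-svc", "identity_type": "service_account",
     "owner": None, "risk": "high"},
]))
```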

For audit and compliance, the days of “AI is out of scope for now” are over. When auditors ask how AI interacts with SAP, Oracle, or other crown‑jewel systems, you can produce certified lists of AI agents and service accounts that include entitlements, owners, last review dates, and remediation history. AI access no longer sits in a black‑box exception bucket; it runs through the same SoD and ITGC frameworks you already use for high‑risk human roles.

Operationally, you gain the ability to say yes to more AI use cases without rolling the dice. There’s a repeatable pattern: catalog the identity, classify the risk, bind it to policy, certify it, monitor it. Instead of “we’re not sure who owns the risk,” the message becomes, “we have a governed path for AI in ERP and SaaS.”

From a platform perspective, this is what it means to turn your identity program into the AI control plane for ERP, finance, and SaaS—without ripping and replacing the IAM stack you already invested in.

 

The objections you’ll hear—and how to answer them

Two objections surface in almost every roadmap and budget conversation around this topic.

The first: “Can’t we just extend our IAM tool?” It’s true that many IAM products can authenticate AI agents, manage secrets, and enforce MFA for service accounts. But they are not designed to model agents with ownership, business context, SoD patterns, data classifications, and lifecycle workflows across ERP and SaaS systems. Nor do they typically include a federated policy engine that can enforce cross‑system rules and feed closed‑loop remediation back into applications. Extending IAM is necessary—but without a governance layer above it, IAM simply enforces whatever fragmented, inconsistent decisions exist underneath.

The second objection: “Isn’t this just more complexity?” The uncomfortable truth is that the complexity already exists—just in hidden form. Long‑lived service accounts, shadow AI tools, and embedded tokens scattered across dozens of systems are already increasing your blast radius; you’re just not seeing them in one place.

A federated governance layer doesn’t add complexity so much as surface and organize it. It centralizes policy and evidence while allowing different teams and platforms to continue using the tools that make sense for them. You get a single, coherent answer to “who or what did what, to which data, where, and under which policy?” even as AI vendors, MCP servers, and SaaS integrations change over time.

 

A simple test: do you actually have a federated governance layer?

You don’t need a multi‑month project to see whether this gap exists in your environment. You can start with a few uncomfortable questions.

Inventory where AI and non‑human identities already touch your crown‑jewel systems—ERP, finance, HR, CRM, data warehouses, collaboration platforms. Then ask yourself:

  • Can we produce a single, up‑to‑date list of those identities?
  • Do we know their owners, business purpose, and entitlements?
  • Are they covered by the same policies, SoD rules, and certifications as our high‑risk human roles?
  • Could we explain and evidence all of that to a regulator tomorrow?

If the answer is anything but a confident yes, you don’t just have an AI problem—you have a federated governance problem.

From there, the logical next step is to formalize a short readiness assessment, workshop, or checklist focused on “AI identity visibility and federated governance maturity.” Use it to map your current AI and non‑human identity inventory, highlight blind spots around ownership and policy, and prioritize integrations into a federated governance control plane that covers both human and non‑human identities across ERP, finance, and SaaS.

AI isn’t going to wait for your control framework to catch up. But with federated governance in place, you at least decide the rules: every human, machine, and agent operates inside the risk boundaries you define—not outside them.

 

If you’re already seeing these gaps in your own environment, let’s make them visible and fixable together—book a demo or working session with our team and see federated governance for AI identities in action. From there, we can map your AI and non‑human identities, quantify the risk, and design a control plane that fits how your business actually runs.
