Boards have spent years asking if people have too much access. In 2026, the harder question is whether your AI agents do — and whether you can prove that to regulators, auditors, and your board.
New AI and cybersecurity rules, from the EU AI Act to SEC cyber disclosure requirements, are turning AI agents into SOX-relevant internal control risks when they influence financial processes, access, or data flows that underpin financial reporting. As these rules move from headlines to enforcement deadlines, “AI policy” is rapidly becoming something auditors can test through controls and evidence, and regulators can investigate.
## Force #1 – AI regulation moves from abstract to auditable
For years, AI policy has been dominated by ethics frameworks and voluntary guidance. The EU AI Act changes that by introducing a binding, risk‑based regime with specific duties for providers and deployers of high‑risk AI systems. High‑risk AI systems, such as those that influence hiring, credit, or critical infrastructure, must meet requirements for risk assessment and mitigation, data governance, logging, documentation, transparency for deployers, human oversight, and security.
By August 2026, most obligations for high-risk AI systems under the EU AI Act become enforceable, including ongoing risk management, technical documentation, logging, and post-market monitoring. For internal audit, this reshapes what needs to be tested. When AI systems influence financial reporting, HR decisions, procurement approvals, or security controls, auditors will need traceable logic, clear data lineage, and hard evidence of who or what is allowed to act — not just model cards and DPIAs.
That shifts the key question from “Is the model safe?” to “Do we have identity‑centric lifecycle and access controls around how AI is allowed to act inside ERP, HCM, CRM, and other business‑critical systems?” Organizations need to know which agents exist, which identities they assume, which systems they can access, and how their access evolves over time.
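To make that concrete, the sketch below shows one way an entry in such an agent inventory might be structured, along with a simple check for agents that lack an accountable owner or a recent review. It is a minimal illustration in Python, not any particular IGA product's data model; the agent names, systems, entitlements, and review windows are all invented.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AgentIdentity:
    """One entry in a unified inventory of AI agents and service accounts (illustrative)."""
    agent_id: str                  # stable identifier for the agent itself
    runs_as: str                   # the account or identity the agent assumes at runtime
    systems: list[str]             # business systems it can reach (ERP, HCM, CRM, ...)
    business_owner: str            # accountable human owner
    purpose: str                   # why the agent exists, in business terms
    risk_level: str                # e.g. "high" if it touches financial reporting
    entitlements: list[str] = field(default_factory=list)
    last_reviewed: date | None = None

# Invented example entry; system and role names are placeholders.
inventory = [
    AgentIdentity(
        agent_id="ap-invoice-copilot",
        runs_as="svc_ap_bot",
        systems=["SAP", "Coupa"],
        business_owner="jane.doe@example.com",
        purpose="Draft and route AP invoice postings for human approval",
        risk_level="high",
        entitlements=["AP_INVOICE_CREATE"],
        last_reviewed=date(2026, 1, 15),
    ),
]

def unowned_or_stale(entries, today: date, max_age_days: int = 90):
    """Flag agents with no accountable owner or no recent access review."""
    return [
        e for e in entries
        if not e.business_owner
        or e.last_reviewed is None
        or (today - e.last_reviewed).days > max_age_days
    ]
```

Even a simple structure like this answers the questions auditors will ask first: which agents exist, who owns them, and when their access was last reviewed.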
## Force #2 – SEC and SOX pull cyber and identity into the boardroom
In the US, the SEC’s cybersecurity disclosure rules require public companies to promptly disclose material cyber incidents and provide detailed information on cybersecurity risk management, strategy, and governance. Material incidents must be reported on Form 8‑K within four business days of being deemed material, with disclosures covering the nature, scope, timing, and impact on financial condition and operations.
This effectively pulls identity failures, especially in financial systems, into the scope of securities regulation. If an incident involves misused identities, over‑privileged accounts, or opaque automation in core applications, boards will have to explain not only what happened, but how they governed the underlying cyber and access risks — on top of unchanged SOX obligations around internal control over financial reporting.
Now add AI. AI‑assisted changes to roles, entitlements, and access policies in financial applications can become control deficiencies if they are unclear or unmonitored. When agents can request, grant, or route access, the classic question “Who approved this?” becomes “Which autonomous process did what, under which policy, and can we prove it end‑to‑end?” That proof depends on trustworthy identity lifecycle management and auditable access control.
## Force #3 – AI agents and non‑human identities outnumber people
Analysts report that machine and non-human identities already outnumber human users in many enterprises, a trend expected to accelerate as AI agents proliferate across business processes. Each employee may depend on several agents that can log into applications, trigger transactions, and handle sensitive data at machine speed, while traditional IAM models struggle because they were built for relatively static human accounts.
This shift shows up in two ways. First, “shadow AI”: line‑of‑business teams plug copilots, workflow bots, and connectors into SAP, Oracle, Workday, Salesforce, and custom apps without centralized identity governance, often reusing generic service accounts or hard‑coded tokens. Second, agents quickly accumulate powerful permissions as organizations experiment and stack automations without consistent joiner‑mover‑leaver discipline.
The fraud and error risk is obvious. When bots can execute journal postings, change vendor master data, approve transactions, or bypass manual checks, identity lifecycle and least‑privilege controls must apply just as strictly to non‑human identities as they do to humans. That means clear ownership, formally approved roles, and revocation paths for agents — not just for people.
## What “good” looks like in 2026 AI identity governance
Against this backdrop, a mature AI identity governance posture in 2026 has several defining characteristics:
- Unified inventory of human and non‑human identities. Organizations maintain a consolidated view of users, service accounts, bots, and AI agents across ERP, HCM, CRM, data platforms, and custom applications, with clear mapping to business owners and risk levels.
- Policy‑driven identity and access lifecycle management. Joiner‑mover‑leaver and role change processes apply uniformly to agents and service accounts: no unmanaged tokens, no “forever” entitlements, and no AI workflows outside formal policies.
- Continuous monitoring of high‑risk access. Controls detect high‑risk access combinations, unusual access escalation, and abnormal behavior patterns, whether actions originate from a person or an AI identity.
- End‑to‑end audit trails. Every access grant, policy change, and sensitive transaction is traceable back to a human decision or an authorized agent under a defined governance framework, with logs preserved to meet EU AI Act, SOX, and SEC expectations.
- Board‑level reporting that speaks to risk, not just technology. Identity and AI risks are translated into financial and operational exposure, allowing boards to answer investor and regulator questions with confidence rather than anecdotes.
In concrete terms, this means treating AI agents as a new class of workforce identity: they are onboarded with clear purposes and scoped access, continuously monitored for drift and misuse, and cleanly deprovisioned when no longer needed.
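The "continuously monitored" part can, in principle, be as simple as evaluating the same high‑risk access rules against every identity, whether it belongs to a person or an agent. The sketch below assumes an invented segregation‑of‑duties rule set and role names; real conflict rules would come from the organization's own control framework, not a hard‑coded list.

```python
# Invented segregation-of-duties conflicts; real rule sets come from the
# organization's control framework.
TOXIC_PAIRS = {
    frozenset({"VENDOR_MASTER_CHANGE", "AP_PAYMENT_RELEASE"}),
    frozenset({"JOURNAL_POST", "JOURNAL_APPROVE"}),
}

# Human and agent identities are evaluated with the same rules.
identities = {
    "sam.lee":            {"type": "human", "roles": {"JOURNAL_POST"}},
    "ap-invoice-copilot": {"type": "agent", "roles": {"VENDOR_MASTER_CHANGE",
                                                      "AP_PAYMENT_RELEASE"}},
}

def sod_violations(identities):
    """Flag any identity, human or agent, holding a toxic combination of roles."""
    findings = []
    for name, ident in identities.items():
        for pair in TOXIC_PAIRS:
            if pair <= ident["roles"]:
                findings.append((name, ident["type"], sorted(pair)))
    return findings

for name, kind, pair in sod_violations(identities):
    print(f"{kind} identity '{name}' holds conflicting roles: {pair}")
```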
## The emerging playbook for AI identity governance
What emerges from these forces is a common playbook. Organizations that are moving fastest are:
- Making identity — not networks or endpoints — the primary control plane for AI, including non‑human identities embedded deep in business processes.
- Extending lifecycle management, access governance, and control monitoring capabilities they already apply to ERP and other critical systems to cover AI agents and service accounts with the same rigor.
- Using continuous control monitoring and analytics to surface high‑risk access conditions, privilege creep, and anomalous activity tied to AI, and feeding that back into both security operations and audit.
- Building cross‑system policies and evidence packages that simultaneously answer SOX, SEC, and EU AI Act questions, rather than treating each regulation as a separate project; a sketch of what that shared evidence index might look like follows just below.
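To illustrate the last point, the sketch below indexes a single piece of evidence, a quarterly review of agent access in ERP, against the questions each regime asks of it, so the same artifact is collected once and reused. The control ID, artifact path, and regulatory mappings are invented for illustration and are not a compliance mapping anyone should rely on as written.

```python
# One evidence entry, indexed so the same artifact answers multiple regimes.
# Control IDs, clause descriptions, and file paths are invented for illustration.
evidence_catalog = [
    {
        "control": "AI-IAM-01: quarterly access review of AI agents in ERP",
        "artifacts": ["reviews/2026-Q1-agent-access-review.pdf"],
        "answers": {
            "SOX": "ITGC access-review control over financial applications",
            "SEC": "governance disclosure on management of cyber and identity risk",
            "EU AI Act": "human oversight and logging duties for high-risk systems",
        },
    },
]

def evidence_for(regime: str):
    """Pull every control whose evidence speaks to a given regulatory question."""
    return [e for e in evidence_catalog if regime in e["answers"]]

print([e["control"] for e in evidence_for("EU AI Act")])
```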
By contrast, many programs still bolt AI on top of fragmented, manual identity processes: separate tools for each application, spreadsheet‑driven reviews, ticket queues for every access change, and point solutions for “AI security” that sit off to the side. That model costs leaders time (managers rubber‑stamping requests they don’t understand), erodes trust (no single, defensible answer to “who has access to what, and why?”), and wastes money as the same access issues resurface in every audit cycle.
The common thread is convergence: AI, identity, and new compliance obligations are no longer separate conversations; they define a single control problem that lives where critical business applications and data reside. Ultimately, the organizations that treat AI identities as a core part of their identity governance program — with real lifecycle management, monitoring, and accountability — will be best positioned when regulators and auditors inevitably ask, “How do you know your AI isn’t your biggest insider?”
Move from talking about AI risk to proving you can control it. Request a demo to surface every non‑human identity in your ERP and finance systems and experience what governed, auditable AI access looks like in your environment.