Service accounts were a solved problem. Agentic identity broke the solution.
The identity governance gap your NHI policy didn’t anticipate
14 min read | May 2026
Every CISO has a service account governance process. It covers provisioning, rotation, least privilege, and audit. That process was built for identities with fixed behavior and narrow purpose. Agents have neither. The existing process has no mechanism to detect the difference, and most teams won’t find that out until an auditor asks for the permission scope of an identity that decided its own scope at runtime.
Key Insights
Service account governance was made tractable by one assumption: the identity’s actions could be enumerated in advance. Agents break that assumption structurally, not incidentally.
“Treat agents like service accounts” fails specifically at the authorization layer because just-in-time and context-aware controls require knowing intent, and agents don’t declare intent the way a batch job does.
Ephemeral agent identity sprawl is not a scaling problem. It is a lifecycle problem: the provisioning and deprovisioning logic that made service accounts manageable does not map to identities that can be reprompted into a new task without reprovisioning.
The natural language manipulation surface is a new attack vector against identity governance that service account policy was not designed to defend.
A credible NHI governance framework for agents requires four controls that existing service account policy cannot provide: runtime scope validation, chained action authorization, identity lifecycle triggers tied to task completion, and audit trails that capture why an action was taken, not just that it was.
The Sharp Reframe
What people think: non-human identity governance is a mature discipline. Service account policy covers provisioning, least privilege, rotation, and deprovisioning. Applying it to AI agents is a configuration exercise, not an architectural gap.
What actually happens: the tractability of service account governance depended entirely on a property agents don’t have. A service account does one thing. Its permissions map to that thing. The set of actions it will ever take can be enumerated before it is provisioned. That enumeration is the foundation of every control in the governance process: least privilege is defined against a known action set, audit alerts are scoped to known behaviors, deprovisioning is triggered by retirement of a known function.
An agent’s action set is not known in advance. It is determined at runtime by a reasoning process operating on natural language instructions. The same agent identity, with the same permissions, will take entirely different action sequences depending on what it is asked to do. The authorization model built for service accounts assumes a fixed mapping from identity to action. The agent breaks that mapping by design.
The hidden antagonist is not the agent technology. It is the governance assumption that identity scope can be defined at provisioning time. For service accounts, that assumption was true and productive. For agents, it is false and dangerous, because every control downstream of that assumption inherits the flaw.
What Everyone Is Doing
Security and IAM teams extending their governance processes to cover AI agents are doing what they were trained to do. They apply existing NHI policy. They create service account entries for agent identities. They scope permissions to the relevant systems. They add the agent to the rotation schedule. They document the owner. Some add a label in the identity store to distinguish agent identities from traditional service accounts.
All of this is reasonable. None of it surfaces the authorization gap.
Least privilege scoping assumes enumerable actions. Least privilege for a service account means: identify every action this identity needs to perform, grant only those permissions. For an agent, identifying every action it needs to perform requires knowing every task it will ever be given. Most deployments do not scope at the task level. They scope at the system level: this agent has read access to the knowledge base and write access to the ticketing system. That is not least privilege. That is system-level access granted to an identity whose action scope is determined at runtime by whoever prompts it.
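To make the difference concrete, here is a minimal sketch in Python of the two checks side by side. All the names (SYSTEM_GRANT, TaskScope, authorized_by_scope) are illustrative, not any particular product’s API; the point is what a system-level grant can and cannot express.

```python
from dataclasses import dataclass
from fnmatch import fnmatch

# System-level grant: what most deployments record at provisioning time.
# It names systems, not actions on resources.
SYSTEM_GRANT = {"knowledge_base": {"read"}, "ticketing": {"read", "write"}}

@dataclass(frozen=True)
class TaskScope:
    """Action-level scope for one task: (system, action, resource pattern) triples."""
    task_id: str
    allowed: frozenset  # of (system, action, resource_pattern)

# Hypothetical scope, authored and reviewed per task rather than per identity.
TRIAGE_SCOPE = TaskScope(
    task_id="ticket-triage-v1",
    allowed=frozenset({
        ("knowledge_base", "read", "kb/support/*"),
        ("ticketing", "write", "tickets/*/comments"),
    }),
)

def permitted_by_grant(system: str, action: str) -> bool:
    # The only question a system-level grant can answer.
    return action in SYSTEM_GRANT.get(system, set())

def authorized_by_scope(scope: TaskScope, system: str, action: str, resource: str) -> bool:
    # The question least privilege actually requires for runtime-determined behavior.
    return any(
        system == s and action == a and fnmatch(resource, pattern)
        for s, a, pattern in scope.allowed
    )

# A write the grant permits but the task never needed:
assert permitted_by_grant("ticketing", "write")
assert not authorized_by_scope(TRIAGE_SCOPE, "ticketing", "write", "tickets/42/priority")
```

The grant answers “can this identity ever write to ticketing?” The scope answers “is this write part of the task that was reviewed?” Least privilege lives in the second question.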
Rotation schedules address credential exposure, not behavioral scope. Rotating an agent’s credentials on a 90-day cycle addresses the risk that the credential is compromised. It does not address the risk that the agent’s behavioral scope has expanded beyond what was intended at provisioning. The credential is fresh. The permission accumulation from six months of uncheckpointed sessions is not rotated with it.
Audit alerts are tuned to known-bad patterns. Service account audit alerts fire when an identity accesses a system it has never accessed before, or takes an action outside its documented behavior. For agents, the documented behavior is “whatever the task requires.” The alert that would catch a service account doing something unexpected cannot be tuned to catch an agent doing something unexpected, because the baseline for “expected” does not exist in the same form.
Deprovisioning is tied to function retirement, not task completion. A service account is deprovisioned when the function it supports is retired. An agent identity is used for a task, then potentially reprompted for a different task without reprovisioning. The deprovisioning trigger that makes service account lifecycle management tractable does not fire for agent identities because the identity outlives the task scope it was originally validated against.
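A sketch of the missing trigger, assuming an identity store that can react to task events; LifecycleState and on_task_event are hypothetical names for illustration:

```python
from dataclasses import dataclass
from enum import Enum, auto

class LifecycleState(Enum):
    ACTIVE = auto()
    EXPIRED = auto()

@dataclass
class AgentIdentity:
    identity_id: str
    provisioned_task: str  # the task scope this identity was validated against
    state: LifecycleState = LifecycleState.ACTIVE

def on_task_event(identity: AgentIdentity, event: str, task_id: str) -> AgentIdentity:
    """Deprovisioning keyed to task scope, not function retirement."""
    if event == "task_completed" and task_id == identity.provisioned_task:
        identity.state = LifecycleState.EXPIRED  # normal end of life
    elif event == "task_assigned" and task_id != identity.provisioned_task:
        identity.state = LifecycleState.EXPIRED  # force a reprovisioning review
    return identity

agent = AgentIdentity("agent-7f3a", "ticket-triage-v1")
on_task_event(agent, "task_assigned", "quarterly-reporting")  # reprompted into a new task
assert agent.state is LifecycleState.EXPIRED
```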
Rav’s core insight: the governance process works. The model it was built for has changed. The process has not caught up, and the gap between them is not a configuration problem. It is an architectural one.
The Moment I Saw It
I was three hours into an NHI governance audit at a regulated financial institution. The team had done solid work. Their service account inventory was current. Their rotation schedules were enforced. Their deprovisioning process had clear ownership. They had extended the same framework to cover their AI agent deployments, and on paper it looked complete.
We were working through the authorization section when I asked the question that reframed the review.
“What is the scope of permissions this agent was granted, and how was that scope validated against what it actually needs to do?”
The IAM lead pulled up the provisioning record. “It was provisioned with the standard read-write service account template. We scoped it to the relevant systems, but we didn’t enumerate the actions because the agent decides that at runtime.”
I followed up: “So when you validated least privilege at provisioning, what did you validate against?”
A pause. “We validated that it only had access to the systems relevant to its use case. We didn’t validate at the action level because we couldn’t enumerate the actions in advance.”
Then the CISO asked the question that closed the room: “If this identity takes an action that causes a problem and we need to demonstrate to the regulator that the authorization was appropriate, what evidence do we produce?”
The answer was the provisioning record. Which showed system-level access. Not task-level scope. Not a validated action set. Not a record of what the identity actually did and why it was authorized to do it.
What the team had built was operationally sound by the standards of the framework they were applying. The provisioning was documented. The ownership was clear. The rotation was scheduled. The gap was not in the execution of the process. It was in the process itself.
The authorization model for service accounts works because you can define, in advance, what the identity will do. The moment you introduce an identity whose action set is determined at runtime by natural language instructions, that definition is no longer possible at provisioning time. The control that everything else depends on cannot be applied. The team had extended a complete governance framework to an identity type it was not designed for, and the extension looked correct right up until someone asked for the action-level authorization evidence.
This was not unique to one system, and that was the uncomfortable part. Every regulated deployment I had reviewed where AI agent identities had been brought under existing NHI policy had the same gap: system-level access documented, action-level scope undefined, and no control between the two.
Why This Is Different
Most people think NHI governance for agents is a scaling challenge: more identities, same process, more automation required. The comparison misses the structural break.
Traditional service account:
Fixed behavior: the identity performs a defined, repeatable set of actions
Enumerable scope: every action can be listed before provisioning
Predictable paths: access to system A means actions of type X, always
Human-defined intent: a developer wrote the code that determines what the identity does
Lifecycle tied to function: when the function is retired, the identity is deprovisioned
Audit is pattern-matching: known actions, known systems, deviations are detectable
AI agent identity:
Dynamic behavior: the identity performs different action sequences depending on runtime instructions
Non-enumerable scope: the action set is determined by the task, which changes with each prompt
Unpredictable paths: access to system A means whatever the agent reasons it needs to do there
Natural language intent: the instruction surface is manipulable by anyone who can prompt the agent
Lifecycle decoupled from task: the identity persists across tasks that were never individually scoped
Audit requires intent: knowing what action was taken is insufficient without knowing why the agent took it
The fundamental shift is this: service account governance is tractable because the identity’s behavior is a function of code, and code can be reviewed before deployment. Agent behavior is a function of a reasoning process operating on runtime input. That reasoning process cannot be reviewed at provisioning time. The control that makes least privilege possible, enumerating the action set in advance, does not exist for agents. Every governance framework built on top of that control inherits the gap.
This matters differently in regulated environments than in standard enterprise deployments. In a regulated environment, authorization is not just an access control question. It is an evidence question. Can you demonstrate, after the fact, that every action taken by this identity was within the scope of what it was authorized to do? For a service account, the answer is yes: compare the action log to the provisioned permissions. For an agent, the action log shows what was done. It does not show whether the reasoning that produced that action was within the scope the organization intended when it provisioned the identity.
The Framework
Layer L1: Entry points
What it breaks: Agent identities enter the system without the fixed behavioral profile that entry-point controls rely on to verify scope, making caller verification a check on credential validity rather than a check on authorized behavior.
Example: An agent is provisioned with valid credentials and passes every entry-point control. It is then prompted to perform a task that was never scoped at provisioning. The entry-point controls confirm the identity is valid. They do not confirm the action the identity is about to take is within the scope of what was authorized. The credential check and the authorization check are decoupled, and the authorization check for runtime-determined behavior does not exist.
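One way to recouple the two checks is to refuse admission unless the identity arrives bound to a reviewed task scope, not just with a valid credential. A minimal sketch, assuming a gateway that sees every inbound agent request; verify_credential and SCOPE_REGISTRY are stand-ins for whatever your stack actually provides:

```python
# Hypothetical registry of reviewed task scopes, keyed by task id.
# Each entry records which identities were validated against that scope.
SCOPE_REGISTRY = {
    "ticket-triage-v1": {"bound_identities": {"agent-7f3a"}},
}

def verify_credential(token: str) -> bool:
    # Stand-in for the credential check every deployment already has.
    return token == "valid-token"

def admit(identity_id: str, token: str, task_id: str | None) -> bool:
    """Entry-point gate: credential validity AND task-scope binding."""
    if not verify_credential(token):
        return False  # the existing control
    if task_id is None:
        return False  # no declared task, nothing to authorize against
    scope = SCOPE_REGISTRY.get(task_id)
    # The missing half: was this identity ever validated for this task?
    return scope is not None and identity_id in scope["bound_identities"]

assert admit("agent-7f3a", "valid-token", "ticket-triage-v1")
assert not admit("agent-7f3a", "valid-token", None)                # credential ok, no scope
assert not admit("agent-9c01", "valid-token", "ticket-triage-v1")  # never bound to the task
```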
Layer L3: Authorization and trust
What it breaks: Just-in-time and context-aware authorization controls require a declared intent to evaluate against. Agents don’t declare intent at the action level; they determine it through reasoning at runtime, making the authorization gate either absent or operating on system-level access that is too coarse to reflect actual scope.
Example: An agent with read-write access to a financial data system is given a task that requires it to retrieve account records and write a summary to an internal report. Halfway through the task, a prompt injection in a retrieved document redirects it to write the account records to an external endpoint. The authorization layer checks: does this identity have write access to external endpoints? No. The call fails. But the authorization layer never had an opportunity to check: was writing to an external endpoint within the scope of the task this identity was provisioned to perform? That question was never formalized as a control, because the task scope was never defined at a level the authorization layer could evaluate.
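The control missing from that scenario is a per-action gate between the agent’s reasoning and its tools, one that evaluates each call against the task’s reviewed scope before anything executes. A sketch, reusing the authorized_by_scope helper from the earlier least-privilege example:

```python
class ScopeViolation(Exception):
    """Raised when a tool call falls outside the declared task scope."""

def scoped_tool_call(scope, tools, system, action, resource, payload):
    """Runtime scope validation: the gate sits between reasoning and execution.

    Every call carries the task it claims to serve, and is evaluated against
    that task's reviewed scope before the tool runs.
    """
    if not authorized_by_scope(scope, system, action, resource):
        raise ScopeViolation(
            f"{system}:{action} on {resource} is outside task {scope.task_id}"
        )
    return tools[(system, action)](resource, payload)
```

The significant property is that the gate does not care where the instruction came from. A call redirected by an injected document fails the same scope check as any other out-of-scope call.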
Threat And Playbook Map
Inherited privilege escalation
The agent inherits the full permission set of the identity it was granted at provisioning. As it executes chained actions across a session, it operates with permissions that were validated against a general use case, not against the specific actions it is taking. The gap between provisioned scope and actual runtime scope is invisible to standard access controls.
How this plays out in a real system:
Entry: an agent is provisioned with read-write access to several internal systems, validated at the system level against a documented use case. No action-level scope is defined because the action set is determined at runtime.
Escalation: the agent executes a multi-step task, reading from a knowledge base, querying a data system, writing outputs to a collaboration tool. Each individual action is within the provisioned system access. The chain of actions produces an outcome that no human authorization process ever reviewed: data consolidated from multiple restricted systems, written to a broadly accessible location.
The miss: the authorization layer checks each action against provisioned permissions. Each check passes. There is no control that evaluates the chained action sequence against the intended task scope. The gap between “permitted at the system level” and “authorized for this specific action chain” is not modeled.
Impact: the agent has effectively escalated its operational scope beyond what any individual provisioning decision intended, not through a vulnerability, but through the legitimate exercise of permissions that were scoped too coarsely for runtime task execution.
Audit blind spot: the access log shows every action was permitted. It does not show that the combination of permitted actions produced an outcome outside the intended scope. The escalation is visible only in retrospect, if someone traces the full action chain and compares it to the original use case documentation.
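The missing control is straightforward to describe and absent from most stacks: something that evaluates the session’s action chain in aggregate. A toy sketch of one such check, using hypothetical sensitivity labels, that flags exactly the pattern above:

```python
# Hypothetical data classification; in practice this comes from your labeling scheme.
SENSITIVITY = {"finance_db": "restricted", "hr_db": "restricted", "wiki": "public"}

def chain_violations(action_chain):
    """Chained action authorization: evaluate the sequence, not each call."""
    restricted_read = False
    violations = []
    for step, (system, action, resource) in enumerate(action_chain):
        if action == "read" and SENSITIVITY.get(system) == "restricted":
            restricted_read = True
        if action == "write" and SENSITIVITY.get(system) == "public" and restricted_read:
            violations.append(
                (step, f"restricted data may reach public sink {system}:{resource}")
            )
    return violations

chain = [
    ("finance_db", "read", "accounts/q3"),
    ("hr_db", "read", "headcount/2026"),
    ("wiki", "write", "pages/summary"),  # each call permitted; the chain is the problem
]
assert chain_violations(chain)
```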
Playbooks that surface the gaps:
Tool permissions: surfaces whether the agent’s permissions are scoped at the action level or only at the system level, and whether a mechanism exists to validate runtime action scope against provisioned intent
Agent governance: surfaces whether the agent’s use case is documented at a level of specificity that supports action-level authorization review, and who owns the scope definition
Persistent authorization without revalidation
The agent was authorized at provisioning time. That authorization persists across sessions, task changes, and context shifts with no revalidation trigger. The identity the organization reviews quarterly is not the same behavioral entity it authorized initially, because the tasks it performs have evolved and the authorization has not.
How this plays out in a real system:
Entry: an agent identity is provisioned for a specific use case, reviewed, and approved. The review is thorough. The use case is well-documented. The permissions are appropriate for the documented purpose.
Escalation: over the following months, the agent is used for additional tasks that were not part of the original use case. Each new task is operationally convenient. None triggers a reprovisioning review because the identity already exists with appropriate system access.
The miss: the rotation schedule fires. The credentials are rotated. The governance record is updated. The use case documentation is not reviewed against the tasks the agent is currently performing, because the rotation process is a credential hygiene process, not a scope revalidation process.
Impact: the agent identity has accumulated a behavioral scope across its operational history that was never collectively authorized. The provisioning record reflects the original use case. The actual usage pattern reflects everything the identity has been used for since. The gap between the two is not surfaced by any standard governance process.
Audit blind spot: the governance record shows an identity with documented ownership, current credentials, and an approved use case. The audit trail shows actions taken across a much broader behavioral scope. The question “was each of these actions authorized under the original provisioning decision?” cannot be answered from the governance record, because the governance record was never updated to reflect scope expansion.
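A sketch of the revalidation trigger this process lacks: at review time, compare observed task usage against the provisioned use case and force a reauthorization event on any drift. The function name and record shape are illustrative:

```python
from datetime import datetime, timezone

def check_scope_drift(identity_id, provisioned_tasks, observed_tasks, review_queue):
    """Revalidation trigger tied to task scope change, not credential rotation."""
    drift = set(observed_tasks) - set(provisioned_tasks)
    if drift:
        review_queue.append({
            "identity": identity_id,
            "unreviewed_tasks": sorted(drift),
            "raised_at": datetime.now(timezone.utc).isoformat(),
            "action": "suspend pending scope reauthorization",
        })
    return not drift

queue = []
ok = check_scope_drift(
    "agent-7f3a",
    provisioned_tasks=["ticket-triage"],
    observed_tasks=["ticket-triage", "refund-processing"],  # drift accumulated quietly
    review_queue=queue,
)
assert not ok and queue[0]["unreviewed_tasks"] == ["refund-processing"]
```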
Playbooks that surface the gaps:
Delegation graph: surfaces whether the agent’s authorization chain is explicitly modeled, whether scope expansion events trigger a reauthorization review, and whether the current use case matches the provisioned purpose
Agent governance: surfaces whether there is a revalidation trigger tied to task scope change, not just credential rotation, and who is accountable for maintaining alignment between the governance record and the actual behavioral scope
Why Reviews Miss This
Traditional NHI governance reviews are built to verify that the process was followed: provisioning was documented, permissions were scoped, rotation is scheduled, ownership is assigned. They are not built to verify that the process is adequate for the identity type being governed.
What they check:
Identity inventory completeness: is every non-human identity documented?
Provisioning documentation: is the use case recorded, is ownership assigned?
Permission scope: does the identity have access only to the systems it needs?
Rotation compliance: are credentials being rotated on schedule?
Deprovisioning coverage: are retired identities being removed?
Why this fails:
Inventory completeness checks whether agent identities are in the register. It does not check whether the governance model applied to those identities is appropriate for how they behave. An agent in a service account register, governed by service account policy, will pass an inventory completeness check while having no action-level authorization controls whatsoever.
Permission scope review validates system-level access. For service accounts, that is sufficient because system access maps predictably to action scope. For agents, system access is a necessary but not sufficient description of authorization. An agent with read access to a data system and write access to an output system has a permission scope that looks narrow. The action chains it can execute within that scope are not narrow. The review process does not model the difference.
Rotation compliance addresses credential hygiene. It says nothing about whether the identity’s behavioral scope has drifted from its authorized purpose since the last rotation. The credential is fresh. The scope drift is not reset.
What the standard lens sees: a service account register with documented agent identities, scoped permissions, and a rotation schedule.
What the architectural lens reveals: identities whose action set is determined at runtime, governed by a process that assumes fixed behavior, with no control between system-level access and task-level authorization.
Why This Matters
Regulatory authorization evidence is missing at the action level
In financial services and other regulated sectors, authorization is an evidence requirement, not just an access control. When a regulator asks whether an action taken by an AI agent was authorized, the answer must be traceable to a documented authorization decision. System-level provisioning documentation does not support that trace for runtime-determined actions. The evidence gap is structural.
Natural language is a manipulation surface with no service account equivalent
Service accounts receive instructions through code. Code is reviewed before deployment. An agent receives instructions through natural language at runtime. Natural language can be manipulated through prompt injection, indirect instruction, or context poisoning in ways that redirect the agent’s action selection without triggering any access control. Service account governance has no model for an identity whose instruction surface is manipulable at runtime. The controls were never designed for it.
Ephemeral identity sprawl breaks lifecycle assumptions
Service account governance worked at scale because the identity population was stable. Identities were provisioned deliberately, used for documented purposes, and deprovisioned through a managed process. Agent deployments generate ephemeral identities at a rate that overwhelms manual lifecycle management. The service account process assumes each identity has a defined function and a clear deprovisioning trigger; neither assumption holds when the identity population scales with task volume rather than system count.
Scope drift is undetectable without action-level monitoring
A service account that starts accessing systems outside its documented scope triggers an alert. An agent that starts performing actions outside its intended task scope does not, because the intended task scope was never formalized at a level the monitoring system can evaluate against. Scope drift in agent identities is structurally invisible to governance processes that only model system-level access.
When you map agent identity scope explicitly: you identify the gap between system-level provisioning and action-level authorization, you add task scope documentation as a governance requirement, you design revalidation triggers tied to task change rather than credential rotation, and you build the authorization evidence chain that regulators will ask for.
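For the evidence chain in particular, the audit record has to carry more than identity and action. A sketch of what an intent-aware audit event could record; the field names are illustrative, and the reasoning summary would come from whatever trace your agent framework exposes:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(identity_id, task_id, system, action, resource,
                 instruction_source, matched_scope_rule, reasoning_summary):
    """Intent-aware audit event: why the action happened, not just that it did."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "identity": identity_id,
        "task": task_id,
        "action": f"{system}:{action}:{resource}",
        # Where the directing input came from: user prompt, retrieved document,
        # upstream agent. This is the field that makes injection visible later.
        "instruction_source": instruction_source,
        # The scope rule that authorized the call: the trace a regulator can follow.
        "authorized_by": matched_scope_rule,
        "reasoning_summary": reasoning_summary,
    }
    event["integrity"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    return event
```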
Objection Handling
You might ask: isn’t this solved by zero trust architecture? If every action requires verification regardless of identity, the fixed-scope assumption doesn’t matter because every call is evaluated on its merits.
Yes, partly. A well-implemented zero trust architecture moves authorization closer to the action level and reduces reliance on perimeter-based trust assumptions. For agent identities, this is a meaningful improvement over traditional service account controls.
But zero trust evaluates whether the identity is permitted to take the action it is requesting. It does not evaluate whether the action is within the scope of the task the identity was authorized to perform. An agent with legitimate access to a data system can make a series of individually permitted calls that, in aggregate, constitute a scope expansion no human reviewer authorized. Zero trust passes each call. Nothing in a standard zero trust implementation evaluates the chain.
The deeper gap is the intent layer. Zero trust can verify that the identity has permission. It cannot verify that the action is within the intended purpose of the identity at the moment of execution. For service accounts, that question was answered at provisioning time by defining the action set. For agents, the question cannot be answered at provisioning time and most zero trust implementations do not have a mechanism to answer it at runtime.
Zero trust is a necessary component of agent identity governance. It is not sufficient without a task-scope model that gives the authorization layer something to evaluate runtime intent against.
The Reality Check
The governance gap is not that agents have identities. It is that the identity infrastructure built for service accounts cannot model, authorize, or audit autonomous decision chains.
The four-layer NHI framework for agent identities surfaces this at four points:
Action-level scope definition: can the organization define, at a level the authorization layer can evaluate, what actions this identity is permitted to take on a given task?
Chained action authorization: is there a control that evaluates the sequence of actions an agent takes, not just each action individually, against the intended task scope?
Task-completion lifecycle triggers: does the identity lifecycle include a revalidation event tied to task scope change, not just credential rotation?
Intent-aware audit trail: does the audit trail capture why the agent took each action, not just that it did, in a form that supports authorization evidence for regulatory review?
Most governance reviews ask:
“Is the identity in the register?”
“Are permissions scoped to the relevant systems?”
“Is the rotation schedule being followed?”
Those are necessary. Not sufficient.
The questions that surface the NHI governance gap are:
“Can you show me the action-level authorization for what this agent did last Tuesday?”
“What is the revalidation trigger if this agent starts being used for a task it was never provisioned for?”
“If a regulator asks whether this action was authorized, what evidence do you produce beyond the provisioning record?”
This is the part most architectures never get reviewed on.
References
NIST (2024). “NIST SP 800-207A: A Zero Trust Architecture Model for Access Control in Cloud-Native Applications.” https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207A.pdf
OWASP (2025). “OWASP Agentic Security Initiative: Top 10 for Agentic AI.” https://owasp.org/www-project-top-10-for-large-language-model-applications/
CyberArk (2024). “2024 Identity Security Threat Landscape Report.” https://www.cyberark.com/identity-security-threat-landscape/
NIST (2023). “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” https://airc.nist.gov/RMF
Saltzer, J. & Schroeder, M. (1975). “The Protection of Information in Computer Systems.” Proceedings of the IEEE 63(9). https://dl.acm.org/doi/10.1145/361011.361067
UK FCA (2024). “AI Update: Principles for the Responsible Development and Deployment of AI.” https://www.fca.org.uk/publications/feedback-statements/fs24-1-artificial-intelligence
European Parliament (2024). “EU AI Act, Article 9: Risk Management System.” https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
Anthropic (2024). “Responsible Scaling Policy.” https://www.anthropic.com/responsible-scaling-policy
EBA (2024). “Report on the Use of Artificial Intelligence in the Banking Sector.” https://www.eba.europa.eu/regulation-and-policy/innovation-and-fintech/artificial-intelligence
Perez, E. et al. (2022). “Ignore Previous Prompt: Attack Techniques For Language Models.” NeurIPS 2022. https://arxiv.org/abs/2211.09527
CISA (2023). “Identity and Access Management: Developer and Vendor Challenges.” https://www.cisa.gov/resources-tools/resources/identity-and-access-management-developer-and-vendor-challenges
👉 AI Agent Posture Playbooks: 30+ structured assessments to map where your agent controls were built for humans, not agents. Self-directed. No vendor cycle.
👉 Read the agentic security news. Instantly analyze the threat vector, see if it applies to your setup, and find the gaps with our interactive playbook. All free.
👉 Follow me on LinkedIn | X | Substack for weekly analysis of real agent failures, control gaps, and what the frameworks are and are not catching.