The owner is the identity: reframing NHI

Shoham Danino

The Accountability Gap in Non-Human Identity Security

Most security teams today can tell you how many non-human identities exist in their environment. Very few can tell you who owns them. That gap—between what is visible and who is accountable—is where breaches persist, remediations stall, and risk accumulates quietly over time. Understanding why this gap exists, and how to close it, is one of the most important problems in enterprise security right now.

The category of "non-human identities" - NHIs - covers API keys, service accounts, OAuth tokens, bots, and increasingly, AI agents. By some estimates, NHIs now outnumber human identities in enterprise environments by ten to one or more. The security industry has responded by building an entire product category around NHI discovery and management: tools that scan your environment, catalog what credentials exist, flag the ones that are over-privileged or dormant, and score them by risk.

This is useful. But the industry is treating the symptom, not the source. It is cataloging the "what" while ignoring the "who," which is precisely where most organizations struggle.

The framing issue

When you treat an NHI as the unit of security concern, you're implicitly treating it as a self-contained object, something with properties you can measure and a risk score you can assign. That model works well enough for some purposes. But it breaks down when you try to actually remediate a finding.

Here is a concrete example. Suppose your NHI security tool flags an OAuth integration with broad access to Salesforce, Google Drive, and Slack. It was authorized 14 months ago by an employee who has since left the company. The token is still active, and the scopes are clearly excessive. What do you do with that information?

The instinct is to revoke it. But blind revocation often breaks production environments, and even that concern is secondary to a more fundamental problem. To make a safe revocation decision, you need answers to two questions. First, what are the potential ripple effects: does this credential support a live business process, and what breaks if it disappears? Second, who is the responsible stakeholder: who has the authority and context to make that call? In this example, no team claims the integration. No one can explain why it was approved or whether the use case still exists. Security can see the risk. What it cannot find is the accountable owner. And without that owner, the finding is difficult to remediate.
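The two-question check above can be sketched as a small decision function. This is a minimal illustration, not a real product workflow: the inputs (a last-used timestamp and an owner lookup) are assumptions, and any real implementation would pull them from usage logs and an ownership inventory.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical dormancy threshold; the right window is an organizational choice.
DORMANCY_WINDOW = timedelta(days=90)

def revocation_decision(last_used_at, owner_email):
    """Return 'escalate', 'propose_revocation', or 'review_scopes'."""
    if owner_email is None:
        # No accountable owner: never revoke blind; find the owner first.
        return "escalate"
    dormant = (last_used_at is None or
               datetime.now(timezone.utc) - last_used_at > DORMANCY_WINDOW)
    if dormant:
        # The owner confirms the use case is gone before anything is revoked.
        return "propose_revocation"
    # The credential is live: ask the owner whether the scopes still fit.
    return "review_scopes"
```

Note that every branch with an owner still routes through a human: the sketch automates triage, not the revocation itself.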

So the questions you need to ask are not just what this credential can access, but who created it, who is responsible for it now, and who should make the remediation decision. Those are questions that credential metadata doesn't answer well. And they lead to the more fundamental question that matters most: who owns this?

Ownership as the missing primitive

When CISOs describe their NHI problem to us, they consistently surface the same core concern. They don't know who owns the credential. They have no one to call.

This problem reveals a structural flaw in how NHI security has been framed.

An NHI is a delegation. Someone created it, for a purpose, in a context, and with some set of expectations about how it would be used and maintained. That human or team - the owner - is the actual unit of accountability. The credential is an artifact of their decisions.

If you want to remediate a security finding on an NHI, you need the owner. You need to ask them whether the credential is still required, whether the permissions are still appropriate, whether the use case it was created for still exists. Without the owner, you're making guesses. With the owner, you can take action.

This reframing - from NHI to what we call the NHIO, the Non-Human Identity Owner (or to be as clear as possible, the human owner of a non-human identity) - is the core of how we think about the problem at Surf AI. The credential only makes sense in relation to the person or team responsible for it.
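One way to make the NHIO framing concrete is to imagine what a per-credential ownership record might contain. The field names below are illustrative, not a real Surf AI schema; the point is that the credential and its accountable human travel together.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NhioRecord:
    credential_id: str
    purpose: str                        # why the credential exists
    created_by: str                     # who originally delegated it
    owner: Optional[str] = None         # accountable human today, e.g. an email
    owning_team: Optional[str] = None   # fallback when the individual leaves

    def is_orphaned(self) -> bool:
        # Orphaned: no human and no team is accountable for the credential.
        return self.owner is None and self.owning_team is None
```

The `owning_team` fallback matters: individual ownership is fragile, so accountability should degrade to a team before it degrades to nothing.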

Why this hasn't been solved

The reason most NHI tools don't surface owner information isn't that people haven't thought of it; it's that the problem is genuinely hard. Credentials are often created with minimal documentation, authorized in one team, used by another, and inherited by no one explicitly. The person who created an API key two years ago may have left the company. A service account established by a contractor might have no clear internal owner. OAuth integrations are often authorized by individual employees and may never have been formally reviewed by a security team.

This is what we mean by the accountability gap. It's not just that the credential exists and hasn't been secured — it's that the organizational context required to make a good decision about it has been lost or was never established.

One implication of this is that NHI security can't be solved purely by scanning and cataloging. You have to build the ownership layer. You have to find, establish, and maintain the relationship between each credential and the human or team accountable for it. That's an organizational process, not just a technical one.

The lifecycle problem

There is a related issue worth addressing: NHIs have no lifecycle by default. You create a credential, and it exists until someone explicitly removes it. There's no equivalent of the HR process that provisions access when someone joins and deprovisions it when they leave.

Human identity management has largely solved this. When an employee is terminated, their accounts are deactivated through a defined process. Their access is revoked. But the NHIs they created - or the NHIs owned by the teams they were part of - often survive them. Because there's no lifecycle event that triggers a review, those credentials persist indefinitely, even after the accountable owner is gone.

The right mental model is that NHIs should carry the same lifecycle properties as the organizational structures they serve. When the owner of a credential changes roles or leaves the company, the credentials under their ownership should come up for review. When a project ends, the credentials created for it should be decommissioned. This requires treating ownership as a live relationship, not a static attribute.
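The departure-triggered review described above can be sketched in a few lines. The inventory mapping is an assumed input from the ownership layer; a real system would consume an HR offboarding event rather than a function call.

```python
def credentials_for_review(departed_owner, inventory):
    """Return credential ids that need reassignment or decommissioning
    when their accountable owner leaves."""
    return sorted(cred_id for cred_id, owner in inventory.items()
                  if owner == departed_owner)

# Illustrative inventory: credential id -> accountable owner.
inventory = {
    "sf-oauth-123": "alice@example.com",
    "gh-deploy-key-7": "bob@example.com",
    "slack-bot-2": "alice@example.com",
}
```

The output is a review queue, not an automatic revocation list: the decision to decommission still belongs to whoever inherits ownership.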

The AI agent problem

One more dimension of this is becoming increasingly important and changes the stakes considerably.

Until recently, NHI security was primarily about static credentials - keys and tokens that exist in your environment and can be inventoried. The risk is real but bounded: a stale credential can be misused, but it isn't making autonomous decisions.

AI agents are categorically different. They act. They make decisions. They may call external APIs, modify data, or take actions whose consequences weren't fully anticipated when the agent was deployed. And they have permissions — permissions that were granted by someone, at some point, with some set of intentions.

When an AI agent does something unexpected - and they will - the first governance question is not only "what happened?" but "who authorized this?" Who granted this agent its permissions? What did they intend? What was the scope of their authorization?

Without ownership attribution, answering those questions is extremely difficult. With it, you have an audit trail that leads to a human who can explain the intent and correct the underlying problem.
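A sketch of grant-time attribution makes the audit trail concrete: every scope an agent holds carries the human who authorized it and their stated intent, so "who authorized this?" stays answerable after the fact. The record shapes are illustrative assumptions.

```python
def record_grant(audit_log, agent_id, scope, granted_by, intent):
    """Append one permission grant, attributed to an accountable human."""
    audit_log.append({
        "agent_id": agent_id,
        "scope": scope,
        "granted_by": granted_by,  # the accountable human
        "intent": intent,          # what they meant to enable
    })

def who_authorized(audit_log, agent_id, scope):
    """Answer the first governance question for a given agent and scope."""
    return [entry["granted_by"] for entry in audit_log
            if entry["agent_id"] == agent_id and entry["scope"] == scope]
```

An empty answer from `who_authorized` is itself a finding: a scope with no attributable grantor is exactly the accountability gap the article describes.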

The urgency of the ownership question is only going to increase as AI agents become more capable and more widely deployed. The organizations that establish clear NHIO frameworks now will be substantially better positioned to manage the accountability challenges that come with agents acting autonomously at scale.

What this means in practice

Effective NHI security requires two things working together. First, the discovery layer: understanding what credentials exist in your environment, what they have access to, and how they're being used. The NHI security market has invested heavily in this, and the tooling is reasonably mature.

Second, the ownership layer: a systematic mapping of credentials to the humans and teams responsible for them, combined with workflows that route remediation tasks to those owners. This is what enables a security team to act on a finding rather than simply observe it.
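The routing workflow the ownership layer enables can be sketched simply: mapped findings become tasks assigned to their owner, while unmapped ones go to an ownership-triage queue instead of sitting unactioned. Queue names and record shapes are illustrative.

```python
def route_finding(finding, owners):
    """Turn a security finding into an assigned remediation task."""
    owner = owners.get(finding["credential_id"])
    if owner is not None:
        return {"queue": "remediation", "assignee": owner,
                "finding_id": finding["id"]}
    # No mapped owner: establishing ownership is itself the first task.
    return {"queue": "ownership-triage", "assignee": None,
            "finding_id": finding["id"]}
```

The key design choice is that a missing owner never silently drops a finding; it changes which queue the work lands in.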

At Surf AI, we've built our platform around the insight that the hardest part of security operations is fixing problems, not finding them. Fixing them requires the right human, with the right context, taking the right action. The ownership layer is what makes that possible.

Shoham Danino is a Senior Data Scientist at Surf AI with more than a decade of intelligence and security research experience, including nearly a decade in Unit 8200 and close to four years at Zscaler, where he rose to Research Director.

