Agentic Security

OAuth Does Its Job.
AI Agents Need a Different Job Done.

OAuth 2.0 was not designed to govern autonomous AI agents — but that is not a criticism. It was designed to solve a different problem, and it solves it well. The question is what happens in the gap it was never meant to fill.

The standard take in agentic security circles right now is that OAuth and SAML are broken for AI agents. I have made versions of that argument myself. But I want to be more precise — because "broken" implies you should replace them, and that is the wrong conclusion.

OAuth 2.0 does exactly what it was designed to do. It authenticates agents. It governs resource access. It provides a mature, well-understood, widely deployed foundation that the enterprise has spent a decade building trust in. None of that should be thrown away.

The problem is not that OAuth is broken. The problem is that OAuth answers one question, and autonomous AI agents require a second question answered, a question OAuth was never architected to address.

The core argument

OAuth 2.0 tells you who the agent is and what resources it can access. IPP proves that every action in the chain stayed within what the human authorized — mathematically, without a central server, across organizational boundaries. You need both.


Two Different Questions

OAuth asks: Is this entity permitted to access this resource? It answers that question with tokens, scopes, and authorization servers. It does this well. Billions of API calls run on it every day.

The question autonomous AI agents raise is different: What did the human actually authorize this agent to accomplish — and can I prove, for any action taken anywhere in the delegation chain, that it stayed within those original bounds?

These look similar. They are not. The first question is about access. The second is about intent. Access tells you whether a door was unlocked. Intent tells you whether walking through it was authorized, by whom, under what constraints, traceable to a specific person, verifiable by any observer without calling a central server.

A valid OAuth token tells you the agent had permission. It does not tell you whether the action honored the human's intent — or whether the next agent in the chain honored it — or whether the one after that stayed within bounds no one ever formally stated.


Where OAuth Excels — and Where the Gap Opens

OAuth 2.0 was finalized in 2012 for a world in which a human sits at the center of every meaningful exchange. A human sees the consent screen. A human clicks Allow. A token is issued. The scope is coarse-grained but sufficient: read email, write calendar, access profile.

This model works beautifully for its intended context. The problems emerge at the boundary where OAuth was never designed to operate: multi-agent delegation chains.

When a human instructs an orchestrator agent to "optimize our cash positions and move idle balances over ten million dollars into short-term treasuries," that instruction contains a specific goal, a quantitative bound, and implicit prohibitions. None of that is expressible in an OAuth scope field. The orchestrator spawns sub-agents. Those sub-agents may spawn further agents. At every step, the scope of the original human instruction can silently expand. OAuth provides no mechanism to prevent it. This is not a bug in OAuth. It is a consequence of solving a different problem.
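To make the gap concrete, here is a minimal, hypothetical sketch. Neither the scope string nor the instruction structure below comes from a real token or from the IPP spec; both are illustrative stand-ins for what each layer can carry.

```python
# Illustrative only: these values are invented for comparison,
# not taken from any real deployment.

# What OAuth can express: a coarse, flat scope string.
oauth_scope = "treasury:read treasury:write"

# What the human actually said: a goal, a quantitative bound,
# and implicit prohibitions.
human_instruction = {
    "goal": "Move idle balances over $10M into short-term treasuries",
    "quantitative_bounds": {"min_idle_balance": 10_000_000},
    "prohibited_actions": ["equity_purchase", "account_closure"],
}

# Any holder of the token may perform any treasury:write action.
# Nothing in the scope string carries the bound or the prohibitions,
# so nothing downstream can check them.
for constraint in ("min_idle_balance", "equity_purchase"):
    assert constraint not in oauth_scope
```

The orchestrator and every sub-agent inherit only the scope string; the bound and the prohibitions exist nowhere in the credential chain.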

- 82:1 ratio of machine identities to human identities in the average enterprise
- TODO: quote the Security Considerations text of the most recent IETF agentic auth framework, where the authorization layer is explicitly deferred
- June 2026: Colorado AI Act "reasonable care" standard takes effect; widely adopted standards become legal evidence of compliance

The Stack You Actually Need

Think of the agentic security problem as a four-layer stack. Each layer answers a distinct question. No single protocol answers all four. The mistake enterprises make is expecting OAuth, which handles Layer 2 exceptionally well, to stretch upward and cover Layer 3 too.

The Agentic Security Stack

Layer 4: Regulatory Compliance. EU AI Act, NIST AI RMF, Colorado AI Act. Requires verifiable evidence, not just access logs.
Layer 3: Intent Provenance (IPP). What was authorized? Did every action stay within bounds? Mathematically verifiable; no central server required.
Layer 2: Resource Authorization. OAuth 2.0, AuthZEN. What resources may this agent access? This is where OAuth lives and thrives.
Layer 1: Workload Identity. WIMSE, SPIFFE, AIMS. Who is this agent? Authentication and cryptographic identity binding.

Layer 3 — intent provenance — is the gap. No existing standard fills it. The most recent comprehensive IETF agentic authentication framework explicitly defers its security model. Mastercard independently built a Layer 3 solution for payments and open-sourced it in March 2026 with Google, Fiserv, and IBM as partners. The need is confirmed at Fortune 500 scale. The payments vertical is not waiting for a general-purpose standard.

IPP is that general-purpose Layer 3 standard, designed to sit above WIMSE and SPIFFE, alongside OAuth 2.0, covering every enterprise domain, not just payments.


What Makes This Different From OAuth Extensions

Several teams are building OAuth 2.0 extensions for the agentic authorization problem. These are legitimate and important, especially for organizations that need backward compatibility with existing infrastructure. IPP is not competing with that work — it is designed to complement it.

The specific constraint that matters at Layer 3: every OAuth-based delegation validation requires a central authorization server. When Agent A at your company delegates to Agent B at a partner firm which delegates to Agent C at a cloud provider, requiring all three organizations to share an authorization server is often contractually impossible — and always a single point of failure.

IPP's Narrowing Invariant is verifiable by any party using only public keys resolvable from Decentralized Identifiers. No central server. No shared infrastructure. No pre-registration of agents. Any counterparty, auditor, or regulator can verify the complete delegation chain independently.
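The decentralized part of that claim can be sketched with an append-only, hash-linked chain. This is a simplification under stated assumptions: the record and field names are invented, and real IPP links would additionally carry signatures verifiable with keys resolved from the issuer's DID. What the sketch shows is the property that matters here: any party can check the whole chain with nothing but the chain itself.

```python
import hashlib
import json

def link_digest(record: dict, prev_digest: str) -> str:
    # Canonical-JSON digest binding each delegation record to its
    # predecessor. (A real IPP link would also be signed; signature
    # verification with DID-resolved keys is omitted in this sketch.)
    payload = json.dumps(record, sort_keys=True) + prev_digest
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(chain: list[dict]) -> bool:
    # Walk the chain using only the chain itself: no authorization
    # server, no shared infrastructure, no pre-registration.
    prev = ""
    for link in chain:
        if link["digest"] != link_digest(link["record"], prev):
            return False
        prev = link["digest"]
    return True

# Build a two-link delegation chain, then tamper with the first record.
chain = []
prev = ""
for record in ({"agent": "A", "scope": ["x", "y"]},
               {"agent": "B", "scope": ["x"]}):
    digest = link_digest(record, prev)
    chain.append({"record": record, "digest": digest})
    prev = digest

assert verify_chain(chain)              # intact chain verifies
chain[0]["record"]["scope"].append("z")
assert not verify_chain(chain)          # any tampering is detected
```

Because each digest folds in its predecessor, expanding a record anywhere in the chain invalidates every subsequent link, which is what makes offline, third-party verification possible.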

The Narrowing Invariant

For any derived token D issued from parent token P, six conditions must hold simultaneously: expiration, delegation depth, intent domain, resource scope, quantitative bounds, and prohibited actions. Each must be equal to or more restrictive than in the parent. Violation of any single condition is cryptographically detectable without a central server. Scope can only narrow, never expand, at every step in the chain. This is a mathematical guarantee, not a policy control.
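The invariant can be sketched as a pure predicate over parent and child. This is an illustrative simplification, not the normative check: field names are assumed, wildcard matching in resource scopes is omitted, and the spec's canonical encodings are ignored.

```python
def narrows(parent: dict, child: dict) -> bool:
    """True iff every condition in the derived token is equal to or
    more restrictive than in its parent (simplified sketch)."""
    return (
        child["expires"] <= parent["expires"]                      # 1. expiration
        and child["depth"] <= parent["depth"] - 1                  # 2. delegation depth
        and child["domain"].startswith(parent["domain"])           # 3. intent domain
        and set(child["resources"]) <= set(parent["resources"])    # 4. resource scope
        and all(child["bounds"].get(k, 0) <= v                     # 5. quantitative bounds
                for k, v in parent["bounds"].items())
        and set(child["prohibited"]) >= set(parent["prohibited"])  # 6. prohibited actions
    )

parent = {
    "expires": 800, "depth": 3, "domain": "financial.treasury",
    "resources": {"acct:1", "acct:2"}, "bounds": {"max_tx": 50_000_000},
    "prohibited": {"equity_purchase"},
}
child = dict(parent, depth=2, domain="financial.treasury.read_only",
             resources={"acct:1"}, bounds={"max_tx": 10_000_000},
             prohibited={"equity_purchase", "account_closure"})

assert narrows(parent, child)                                   # valid narrowing
assert not narrows(parent, dict(child, bounds={"max_tx": 60_000_000}))  # bound expanded
```

Note that a single failed comparison anywhere in the conjunction is enough to reject the derived token, which is why scope can silently expand under OAuth but not here.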


What Each Protocol Covers

The table below is not "OAuth vs. IPP." It is a map of which layer each protocol is designed to address. The right answer for most enterprises is all four columns working together.

| Capability                             | OAuth 2.0               | Agentic JWT             | IPP                  |
|----------------------------------------|-------------------------|-------------------------|----------------------|
| Agent authentication                   | Strong                  | Strong                  | Not its job          |
| Resource access control                | Core                    | Extended                | Complementary        |
| Machine-enforceable intent             | Not supported           | Partial (JWT claims)    | Core primitive       |
| Formal scope-narrowing constraint      | Not supported           | Not defined             | Narrowing Invariant  |
| Decentralized chain verification       | Requires central server | Requires central server | No central server    |
| In-token provenance record             | Not supported           | Not supported           | Append-only chain    |
| Cross-org delegation (no shared infra) | Requires federation     | Requires shared AS      | DID-based, any party |
| Works with existing OAuth infra        | Native                  | Drop-in                 | Additive layer       |

The last row matters. IPP does not require you to replace OAuth. You add it as a layer above your existing OAuth infrastructure. Agents still authenticate with OAuth. They still get resource access tokens via OAuth. They additionally carry a Bounded Intent Token that proves every action honored the human's original intent — mathematically, at every step, verifiable by anyone.


What This Looks Like in Practice

Your existing agent code does not change. You add three things: a root intent token at task creation time, provenance recording before each action, and narrowed sub-tokens when delegating to child agents.

python · ipp-sdk · github.com/khsovereign/ipp-sdk-python
# Your OAuth infrastructure is unchanged.
# IPP adds intent provenance as an additive layer on top.
from ipp_sdk import IntentToken, Principal  # import path assumed; see the SDK repo

# 1. Human principal creates a root intent token — once per task
token = IntentToken.create(
    principal=Principal.from_env(),
    natural_language="Optimize cash positions across subsidiaries",
    domain="financial.treasury",
    resource_scope=["subsidiary:*", "account_type:cash"],
    quantitative_bounds={"max_single_transaction": 50_000_000},
    prohibited_actions=["equity_purchase", "account_closure"],
    delegation_depth=3,
    expires_in="8h"
)

# 2. Agent records every action before executing it
token.record_action(
    action_type="financial.treasury.transfer",
    summary="Wire $12M from Acme East to Treasury Account #7892"
)

# 3. Sub-agents get derived tokens — scope narrows, never expands
# NarrowingInvariantError raised immediately if any condition is violated
sub_token = token.derive(
    agent_id="treasury-query-agent-001",
    domain="financial.treasury.read_only",      # subdomain ✓
    resource_scope=["subsidiary:ACME-EAST"],           # subset ✓
    additional_prohibited=["*_write", "*_transfer"]  # superset ✓
)

# 4. Any party verifies the complete chain — no central server required
IntentToken.verify_chain(sub_token)

The result: your OAuth logs show the access was permitted. Your IPP provenance chain proves the intent was honored — at every step, by every agent, verifiable by any auditor, counterparty, or regulator without calling your infrastructure.


Why This Matters Before June 2026

The Colorado AI Act's "reasonable care" standard takes effect June 30, 2026. The EU AI Act's human oversight requirements follow. NIST's AI RMF Govern function requires organizational accountability for AI agent actions. All three frameworks ask the same underlying question: can you prove that a specific AI action was authorized by a specific human, under documented constraints, traceable through the complete delegation chain?

OAuth logs answer the first half. They do not answer the second. Regulators are not asking "was this agent authenticated?" They are asking "was this action authorized, by whom, under what bounded intent, with what constraints, at every step?"

The Compliance Gap

OAuth tells you an agent had a valid token. It cannot tell you whose instruction authorized the specific action, what the original intent was, whether intermediate agents honored those constraints, or who bears accountability when something goes wrong. Regulators are beginning to require exactly this. IPP is the technical answer.
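When an auditor asks those questions, the answers can be assembled mechanically from the chain itself. A hypothetical sketch follows; the chain structure and field names are assumed for illustration (loosely mirroring the SDK example earlier), not the normative IPP format.

```python
def audit_record(chain: list[dict]) -> dict:
    # Collapse a verified provenance chain into the answers a
    # regulator asks for: who authorized, under what intent, with
    # what constraints, and through which agents.
    root = chain[0]
    return {
        "authorized_by": root["principal"],
        "original_intent": root["natural_language"],
        "constraints": {
            "bounds": root["quantitative_bounds"],
            "prohibited": root["prohibited_actions"],
        },
        "delegation_path": [link["agent_id"] for link in chain[1:]],
    }

chain = [
    {"principal": "amanda@example.com",
     "natural_language": "Optimize cash positions across subsidiaries",
     "quantitative_bounds": {"max_single_transaction": 50_000_000},
     "prohibited_actions": ["equity_purchase"]},
    {"agent_id": "treasury-orchestrator-001"},
    {"agent_id": "treasury-query-agent-001"},
]
record = audit_record(chain)
assert record["authorized_by"] == "amanda@example.com"
assert record["delegation_path"] == ["treasury-orchestrator-001",
                                     "treasury-query-agent-001"]
```

The point is not the helper itself but that every field it reads is carried in the tokens, so the artifact exists whether or not anyone thought to log it.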


Three Questions Worth Asking Your Team This Week

  1. If an AI agent took an unexpected action today (a transfer, a record modification, an external communication), could you produce a cryptographically verifiable record of what human authorized it, under what bounded intent, at every step in the delegation chain?
  2. For agents that operate across organizational boundaries — vendors, partners, cloud providers — do you have a verification mechanism that does not require all parties to share authorization infrastructure?
  3. When your compliance team asks you to demonstrate reasonable care for a high-risk AI deployment, what is the technical artifact that answers that question?

If the answer to any of those is "not yet," the good news is that the infrastructure exists, it is open, and it is designed to layer cleanly on top of what you already have.

The IPP Specification

The Intent Provenance Protocol is published as an open standard at ipp.khsovereign.com/spec/v0.1 and filed as IETF Internet-Draft draft-haberkamp-ipp-00. Python SDK at github.com/khsovereign/ipp-sdk-python. U.S. Patent Pending 64/013,188. Free to implement under CC BY 4.0.

Amanda Haberkamp Author of the Intent Provenance Protocol. Building the open cryptographic standard for AI agent accountability.
ipp.khsovereign.com    amanda@khsovereign.com Get involved with KH Sovereign →

Read the IPP Specification

Protocol design, Narrowing Invariant, Genesis Seal, and SDK documentation.

View Specification →