Human-centric IAM is failing: Agentic AI requires a new identity control plane

The race to deploy agentic AI is on. Across the enterprise, systems that can plan, take action, and collaborate across business applications promise unprecedented efficiency. But in the rush to automate, a critical component is being overlooked: scalable security. We’re building a workforce of digital workers without giving them a secure way to log in, access data, and do their jobs without creating catastrophic risk.

The fundamental problem is that traditional identity and access management (IAM), designed for humans, breaks down at the agentic scale. Controls like static roles, long-lived passwords, and one-time approvals are useless when non-human identities can outnumber human identities ten to one. To harness the power of agentic AI, identity must evolve from a simple login gatekeeper to the dynamic control plane for your entire AI operation.

“The fastest path to responsible AI is to avoid real data. Use synthetic data to prove value and then earn the right to touch the real thing.” — Shawn Kanungo, keynote speaker and innovation strategist; bestselling author of The Bold Ones

Why your people-centric IAM is a sitting duck

Agentic AI doesn’t just use software; it behaves like a user. It authenticates to systems, assumes roles and calls APIs. Treating these agents as mere features of an application invites invisible privileges and untraceable actions. A single agent with too much privilege can exfiltrate data or trigger faulty business processes at machine speed without anyone noticing until it’s too late.

The static nature of legacy IAM is its core vulnerability. You cannot predefine a fixed role for an agent whose duties and data access requirements may change on a daily basis. The only way to keep access decisions accurate is to move policy enforcement from a one-time assignment to a continuous, runtime evaluation.

Prove value before production data

Kanungo’s guidance offers a practical ramp. Start with synthetic or masked datasets to validate agent workflows, scopes, and guardrails. Once your policies, logs, and break-glass paths hold up in this sandbox, you can promote agents to real data with confidence and clear audit evidence.

Building an identity-centric business model for AI

Securing this new workforce requires a change in mindset. Every AI agent should be treated as a first-class citizen within your identity ecosystem.

First, each agent needs a unique, verifiable identity. This is not just a technical ID; it must be associated with a human owner, a specific business use, and a software bill of materials (SBOM). The era of shared service accounts is over; they are the equivalent of giving a master key to an anonymous crowd.
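As a minimal sketch of this idea (the record fields and the `sbom://` reference scheme are illustrative assumptions, not a standard), an agent identity could bind each workload to a human owner, a business purpose, and an SBOM:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentIdentity:
    """A first-class identity for one agent workload (illustrative fields)."""
    agent_id: str          # unique, verifiable identifier for this workload
    human_owner: str       # the accountable person behind the agent
    business_purpose: str  # the specific business use it is approved for
    sbom_ref: str          # pointer to the agent's software bill of materials

    def validate(self) -> None:
        # A shared or anonymous identity is rejected outright.
        if not all([self.agent_id, self.human_owner,
                    self.business_purpose, self.sbom_ref]):
            raise ValueError("agent identity must bind an owner, purpose, and SBOM")


billing_agent = AgentIdentity(
    agent_id="agent-billing-042",
    human_owner="owner@example.com",
    business_purpose="invoice-reconciliation",
    sbom_ref="sbom://agents/billing/1.4.2",
)
billing_agent.validate()
```

The point of the `validate` step is that an identity missing any of these bindings never enters the ecosystem in the first place.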

Second, replace set-and-forget roles with session-based, risk-aware permissions. Access should be granted just in time, limited to the immediate task and the minimum necessary data set, and then automatically revoked when the task is completed. Think of it as giving an agent a key to a single room for one meeting, rather than the master key to the entire building.

Three pillars of a scalable agent security architecture

Context-aware authorization is key. Authorization can no longer be a simple yes or no at the door. It has to be an ongoing conversation. Systems must evaluate context in real time. Is the agent’s digital posture confirmed? Does it ask for data typical of its purpose? Does this access occur during a normal operational window? This dynamic evaluation enables both safety and speed.
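The three questions above can be sketched as a single runtime check; the signal names and the operational window are illustrative assumptions, not a real policy engine's API:

```python
def authorize(request: dict, posture_ok: bool) -> bool:
    """Continuous, context-aware authorization: each check maps to one of
    the questions in the text. Signals and thresholds are illustrative."""
    if not posture_ok:                                      # posture confirmed?
        return False
    if request["scope"] not in request["purpose_scopes"]:   # typical of its purpose?
        return False
    return 6 <= request["hour_utc"] < 20                    # normal operational window?


req = {"scope": "read:invoices",
       "purpose_scopes": {"read:invoices"},
       "hour_utc": 10}
authorize(req, posture_ok=True)   # passes all three context checks
```

Because the checks run on every request rather than once at login, a change in posture or an off-hours request flips the decision immediately.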

Targeted data access at the edge. The last line of defense is the data layer itself. By embedding policy enforcement directly in the data query engine, you can enforce row- and column-level security based on the agent’s stated purpose. A customer service agent should be automatically blocked from running a query designed for financial analysis. Purpose limitation ensures that data is not merely accessible to an authorized identity, but used as intended.
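A minimal sketch of purpose-based column filtering at the query layer; the policy table and column names are illustrative assumptions:

```python
# Illustrative policy: which columns each declared purpose may see.
COLUMN_POLICY = {
    "customer-support":   {"customer_id", "name", "ticket_status"},
    "financial-analysis": {"customer_id", "balance", "credit_limit"},
}


def filter_for_purpose(rows: list[dict], purpose: str) -> list[dict]:
    """Enforce column-level security based on the agent's stated purpose.
    An unknown purpose gets an empty view, not a default-allow."""
    allowed = COLUMN_POLICY.get(purpose, set())
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]


rows = [{"customer_id": 1, "name": "Ada", "balance": 900, "ticket_status": "open"}]
filter_for_purpose(rows, "customer-support")
# the support purpose never sees `balance`, even with a valid identity
```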

Tamper-proof evidence by default. In a world of autonomous actions, auditability is non-negotiable. Every access decision, data query, and API call must be immutably recorded, capturing the who, what, where, and why. Chain logs together so they are tamper-evident and replayable for auditors or incident responders, creating a clear story of each agent’s activity.
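One common way to make such logs tamper-evident is hash chaining, sketched here in a generic form (not any specific product's log format):

```python
import hashlib
import json


def append_event(log: list[dict], event: dict) -> None:
    """Append an access event whose hash covers the previous entry's hash,
    so altering any earlier record breaks every hash after it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})


def verify(log: list[dict]) -> bool:
    """Replay the chain; any tampering or reordering makes this return False."""
    prev = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Replayability falls out of the same structure: the event bodies, read in chain order, are the agent's activity story.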

A practical step-by-step plan to get you started

Start with an identity inventory. Catalog all non-human identities and service accounts. You will likely uncover shared credentials and overprovisioned access. Start issuing unique identities for each agent workload.
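As one small example of what an inventory enables, you can flag service accounts used by more than one workload, the sharing the text warns about (the log fields here are assumptions for illustration):

```python
def flag_shared_accounts(access_log: list[dict]) -> set[str]:
    """Return accounts that more than one workload authenticates with;
    each hit is a candidate for splitting into unique agent identities."""
    workloads_by_account: dict[str, set[str]] = {}
    for entry in access_log:
        workloads_by_account.setdefault(entry["account"], set()).add(entry["workload"])
    return {acct for acct, wls in workloads_by_account.items() if len(wls) > 1}


sample = [
    {"account": "svc-db",   "workload": "agent-billing"},
    {"account": "svc-db",   "workload": "agent-support"},
    {"account": "svc-mail", "workload": "agent-notify"},
]
flag_shared_accounts(sample)   # flags the account two agents share
```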

Pilot a just-in-time access platform. Implement a tool that issues short-lived credentials for a specific project. This proves the concept and demonstrates the operational benefits.

Mandate short-lived credentials. Issue tokens that expire in minutes, not months. Find and remove static API keys and secrets from code and configuration.
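A rough sketch of hunting for hard-coded secrets in configuration text; the patterns are simple illustrative heuristics, not an exhaustive scanner:

```python
import re

# Illustrative heuristics for long-lived secrets left in code or config.
SECRET_PATTERNS = [
    re.compile(r"api[_-]?key\s*[:=]\s*\S{16,}", re.IGNORECASE),
    re.compile(r"(secret|password)\s*[:=]\s*\S{8,}", re.IGNORECASE),
]


def find_static_secrets(text: str) -> list[str]:
    """Return lines that look like hard-coded credentials; each hit is a
    candidate for replacement with a short-lived, automatically issued token."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits
```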

Set up a synthetic data sandbox. First, validate agent workflows, scopes, prompts, and policies on synthetic or masked data. Only promote to real data after audits, logs, and outbound policies are in place.
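A minimal sketch of deterministic masking for such a sandbox, assuming hash-based pseudonyms are acceptable for the workflows being validated:

```python
import hashlib


def mask_record(record: dict, sensitive: set[str]) -> dict:
    """Replace sensitive fields with deterministic pseudonyms so agent
    workflows, joins, and policies can be validated without real values."""
    def pseudonym(value: str) -> str:
        # Deterministic: the same input always maps to the same pseudonym,
        # so records still join and deduplicate correctly in the sandbox.
        return "anon-" + hashlib.sha256(value.encode()).hexdigest()[:10]

    return {k: (pseudonym(str(v)) if k in sensitive else v)
            for k, v in record.items()}


customer = {"name": "Ada Lovelace", "plan": "pro", "region": "EMEA"}
mask_record(customer, sensitive={"name"})   # only `name` is pseudonymized
```

Because the mapping is deterministic, agent behavior observed in the sandbox carries over meaningfully when the same workflows are later promoted to real data.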

Conduct an agent-incident tabletop exercise. Practice responses to a leaked credential, a prompt injection, or a tool-privilege escalation. Prove that you can revoke access, rotate credentials, and isolate an agent in minutes.

The bottom line

You can’t manage an agentic, AI-driven future with human-era identity tools. The organizations that will win recognize identity as the central nervous system for AI operations. Make identity the control plane, move authorization to runtime, tie data access to a purpose, and prove value on synthetic data before touching the real thing. If you do, you can scale to a million agents without increasing your breach risk.

Michelle Buckner is a former NASA Information System Security Officer (ISSO).
