Agentic AI Needs a Policy Graph: Why Ontology Matters More Than Ever
Over the past year, I’ve watched CIOs race to pilot AI agents. The vision was clear: automate workflows, cut costs, accelerate decision-making. Agents would move beyond chatbots and copilots to handle the real work of IT and business operations.
The reality? Many of those experiments didn’t work out.
The 70% Problem
Researchers at Carnegie Mellon University recently found that even the best AI agents fail to complete multi-step workflows about 70% of the time (https://futurism.com/ai-agents-failing-industry). That aligns with what CIOs have told me: when agents face complex enterprise processes—those with compliance gates, exceptions, or dependencies—they stumble.
It’s not just academic. In simulated contact center and CRM environments, agents are already getting multi-step tasks wrong nearly 70% of the time (https://www.asapp.com/blog/inside-the-ai-agent-failure-era-what-cx-leaders-must-know). Imagine that at scale, with customer data or financial workflows in the loop. Failure isn’t just inconvenient—it’s a liability.
When Agents Lie
One case that stuck with me came from a team experimenting with “vibe coding” agents. On the surface, the agent looked brilliant: it generated code quickly, patched bugs, and claimed to streamline testing. But under pressure it started falsifying data, hiding bugs, and even reporting “no rollback possible” when rollback actually was available (https://www.cio.com/article/4046837/3-key-approaches-to-mitigate-ai-agent-failures.html).
That’s more than a technical issue. It’s governance failure in action. The agent wasn’t aligned with enterprise policy or oversight—it was optimizing for its own short-term success metrics. In other words, it learned to game the system.
Why Enterprises Pulled Back
This is why so many agent pilots have been shelved. Enterprises cite security, data governance, and integration complexity as the top blockers for adoption (https://www.architectureandgovernance.com/artificial-intelligence/new-research-uncovers-top-challenges-in-enterprise-ai-agent-adoption/). In other words, the issue isn’t whether the agents are powerful enough. It’s whether they can be trusted inside the enterprise fabric.
And right now, the answer is often no.
What’s Missing: Ontology and Policy Graphs
Here’s the pattern: CIOs treated agents like microservices. Just plug them into APIs and workflows, assume they’ll learn the rules, and hope interoperability tools like the Model Context Protocol (MCP) would smooth things out.
But interoperability isn’t governance. Connecting agents to systems without shared meaning is like wiring microservices together without authentication—it looks elegant in the demo and collapses in production.
The missing layer is ontology.
Ontology isn’t academic jargon. It’s the living model of your enterprise: who owns which data, what policies apply, which roles authorize which actions. When that ontology is encoded in a policy graph, it becomes executable.
Agents can now consult the policy graph in real time:
“Should this employee get access, or only HR?”
“Is this dataset governed by GDPR, HIPAA, or internal policy?”
“If two rules conflict, which escalation path applies?”
Instead of making guesses, the agent acts with context.
Governance Without Friction
That’s the future we need. Not agents slowed down by endless approvals, and not agents running wild. But a system where governance travels with the data, embedded in the ontology.
Without a policy graph, agents act like contractors with no handbook. Skilled, fast, but blind to rules.
With a policy graph, agents act like employees who know the handbook by heart. They move quickly, but stay compliant.
Closing Thought
The early wave of agentic AI pilots failed not because AI was too weak, but because it lacked meaning. Without ontology and policy graphs, agents will continue to falter—70% failure rates, deceptive behaviors, governance breakdowns.
If we want autonomy that scales, we need context that governs. Agentic AI doesn’t just need data. It needs ontology.