1b – Own the Ontology or Rent Your Future – The four capability gaps that make agentic AI ungovernable
Part 1b of The Ontology Imperative: Building Trustworthy Agentic AI
Reading time: ~12 minutes
Summary
Most knowledge graph initiatives fail before they begin. Not because the technology is immature, but because organizations lack four capabilities: real semantic expertise, leadership that treats ontologies as strategic IP, commitment to open standards, and formal semantics for trust and agentic AI governance. Teams racing to deploy agents without these foundations learn the same lesson: you cannot retrofit accountability.
Executive Brief
If your board approves agentic AI pilots without addressing four capability gaps, you will discover that governance cannot be retrofitted. By then, the failure will be attributed to leadership, not technology.
The gaps are predictable. You cannot staff semantic expertise. Leadership treats knowledge as plumbing instead of intellectual property. You choose formats that surrender ownership. You confuse higher accuracy with trustworthy autonomy.
Accuracy moves you from wrong to plausible; formal semantics move you from plausible to provable.
Why this matters: Without semantic control, agents operate on rented logic, not institutional authority.
That standard, provable rather than merely plausible, will decide which agents you can trust with authority and which must remain copilots.
Governance debt compounds faster than technical debt when agents act under your authority without accountability infrastructure. If you think this is a tooling decision, you are already behind.
Four Gaps That Sink Knowledge Graphs and Make Agents Ungovernable
The market signals are converging. Capgemini’s 2026 C-suite research highlights the governance gap, Microsoft’s Fabric IQ and Data Agents introduced an explicit ontology layer in the data platform narrative, and the World Economic Forum’s 2025 framework defines progressive, monitored governance for agents.
Part 1a showed why curated enterprise knowledge is the advantage; Part 1b explains why most organizations fail to realize it.
You decide knowledge graphs are strategic. You build the case. Then the initiative stalls. It stalls because you could not staff semantic expertise, leadership treated knowledge as plumbing, you picked formats that lock meaning to a vendor, and you confused accuracy for trust.

Gap 1 — The Talent Crisis
Your org chart has a gap that no current role fills. Data engineers model structure. Data scientists model patterns. Neither models meaning. Ontology design requires a discipline most organizations have never hired for: formal knowledge representation that makes business logic machine-readable and logically consistent.
This gap is invisible because nobody reports on it. I have watched it stall initiatives from the inside. I helped build early oil and gas ontologies through POSC Caesar, joined the W3C Workshop on Semantic Web in Oil and Gas in 2008, and led the Integrated Operations in the High North (IOHN) program with 23 companies. In every program, the bottleneck was the same: not technology, not budget, but the ability to define meaning with formal precision.
Large language models make ontology work more accessible. They can automate syntax. But only humans can define meaning, authority, and boundaries. LLMs will generate an ontology that looks right and passes syntactic validation. It will fail in production because contradictory inferences were never surfaced. The demand signal is unmistakable; the supply is insufficient.
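To make the failure mode concrete, here is a minimal sketch, not a production pattern, using the open-source rdflib library and hypothetical class names. The fragment parses cleanly and would pass any syntactic validation, yet a single reasoning step, here a SPARQL query over the disjointness axiom, surfaces the contradiction that would otherwise only appear in production.

```python
# A minimal sketch with the open-source rdflib library and hypothetical class
# names: the fragment parses cleanly, so every syntactic check passes, yet it
# is logically inconsistent.
from rdflib import Graph

ttl = """
@prefix :    <https://example.org/claims#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:ApprovedClaim a owl:Class .
:RejectedClaim a owl:Class ;
    owl:disjointWith :ApprovedClaim .

# A generated assertion block: each triple is valid on its own, but together
# they contradict the disjointness axiom above.
:claim42 a :ApprovedClaim , :RejectedClaim .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

# Surface the contradiction that syntax checks never see: any individual typed
# with two classes that the ontology declares disjoint.
clashes = g.query("""
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?individual ?c1 ?c2 WHERE {
        ?c1 owl:disjointWith ?c2 .
        ?individual a ?c1 , ?c2 .
    }
""")
for individual, c1, c2 in clashes:
    print(f"Inconsistent: {individual} is typed as both {c1} and {c2}")
```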
Where The Capability Actually Lives
Some of the best ontologists have philosophy degrees. That sounds surprising until you realize that representing knowledge requires training in logic and formal reasoning, exactly what analytic philosophy provides. Geoscientists often excel for similar reasons. In oil and gas, we integrate seismic, well, and geological data for billion-dollar decisions. Coming from geophysics, I spent my career bridging the fuzziness of geology with the rigor of physics.
The differentiator is not more scripting. It is formal logic and conceptual analysis that surface contradictions and make policy machine-readable. When AI agents hallucinate, the cost is liability and destroyed trust. The people who can prevent that are trained in precisely the disciplines most hiring managers overlook.
Gap 2 — The Hiring Mistakes
A European aerospace manufacturer spent eighteen months and three million euros to learn that structure is not semantics. They hired data engineers to build a supply chain ontology. It modeled the data perfectly. It failed to federate because the meaning of shared concepts was never formally defined.
A financial services firm auto-generated a compliance ontology that passed every syntactic test. It failed in production because contradictory inferences, invisible without formal reasoning, could not be reconciled.
These are not edge cases. They are the predictable outcome of hiring for the wrong capability. Successful organizations hire from three pools: semantic web professionals (RDF/OWL), philosophy graduates with computational training, and domain experts accustomed to formal modeling under uncertainty. The cost of hiring the wrong profile is not a slow start. It is a failed initiative you must restart from scratch.
Gap 3 — The Leadership Gap
If you do not control the semantics, you do not control the intelligence. This is CDO territory. Whoever governs the ontology governs what AI can understand about your business. Control semantics, control enterprise intelligence.
When Data Valuation Reveals The Trap
National Highways in the UK valued their data at sixty billion pounds. What most readers missed was the risk: the knowledge that makes that data valuable sits in a GIS vendor’s proprietary formats.
This represents a valuation trap. When your institutional logic is rented from a platform, that sixty billion pound asset is effectively illiquid. You did not buy a foundation; you rented one. Extraction will be slow, expensive, and in some cases impossible. For a board, this is a fundamental loss of asset control.
The Four Questions That Decide Strategy
Before spending money, successful organizations answered four questions:
What domain knowledge are we encoding?
Which concepts matter most to our advantage?
What are we excluding and why?
How will this impact vendor independence?
From Principles to Architecture
The difference is operational governance. Declarative governance says AI should respect privacy; operational governance ensures the agent checks the ontology for rules at runtime and records justifications.
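As a sketch of what that runtime check can look like, assume the policy lives in an RDF graph; the vocabulary used here (requiresHumanReview, justifiedBy) is hypothetical, not a published standard. The pattern is what matters: the agent asks the ontology before acting and records the rule it relied on.

```python
# A hedged sketch of operational governance with rdflib: the agent queries the
# policy ontology at runtime and keeps the justification for the audit trail.
# The policy vocabulary below is hypothetical.
from rdflib import Graph, URIRef

POLICY_NS = "https://example.org/policy#"

policy = Graph().parse(data="""
@prefix : <https://example.org/policy#> .

:ShareCustomerData :requiresHumanReview true ;
                   :justifiedBy :GDPRArticle22 .
""", format="turtle")

def authorise(action_uri: str) -> dict:
    """Check the ontology at runtime and return the decision with its justification."""
    needs_review = policy.query(
        f"ASK {{ <{action_uri}> <{POLICY_NS}requiresHumanReview> true }}"
    ).askAnswer
    justification = policy.objects(URIRef(action_uri), URIRef(POLICY_NS + "justifiedBy"))
    return {
        "action": action_uri,
        "allowed_autonomously": not needs_review,
        "justification": [str(j) for j in justification],  # recorded, not inferred
    }

print(authorise(POLICY_NS + "ShareCustomerData"))
```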
These four gaps do not fail independently. They compound. Without semantic expertise, you cannot build the ontology. Without leadership treating that ontology as strategic IP, the initiative gets deprioritized. Without open standards, the ontology you do build belongs to a vendor. Without formal semantics, the ontology cannot enforce the constraints agents need.
The result is governance debt that accumulates faster than technical debt. Technical debt slows you down. Governance debt exposes you to liability with every autonomous decision an agent makes under your authority. When Part 2a examines what happens when organizations deploy agents without these foundations, the compounding pattern will be visible across industries. The organizations that close these gaps first will deploy agents that their competitors cannot match, not because their models are better, but because their foundations are trustworthy.
Gap 4 — The Standards Decision
Format is an ownership decision. The pipe is not the brain. Without a formal semantic layer, you can route messages but you cannot govern decisions. Connectivity without semantics is just faster error.
Ora Lassila, co-editor of the original RDF specification, used a provocation at Connected Data London in 2025 to make this point: he framed legacy data as anything that is not RDF. The implication is to treat what is not RDF as legacy and bring it into formalism rather than bending meaning to storage.
The Architectural Choice That Determines Ownership
Organizations that maintain ownership follow a pattern: they build their authoritative ontology in RDF and OWL as the system of record. Reasoning, validation, and provenance live there. Property graphs serve only as derived execution caches.
Meaning lives in RDF/OWL; performance lives in the caches.
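A minimal sketch of that division of labour, with a hypothetical asset vocabulary: the RDF/OWL graph is the system of record, and a property-graph style edge list is derived from it as a disposable cache that can be rebuilt at any time.

```python
# The master ontology holds meaning (classes, axioms, comments); the derived
# edge list is a throwaway execution cache. Vocabulary is hypothetical.
from rdflib import Graph

master = Graph().parse(data="""
@prefix :     <https://example.org/asset#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:SafetyCriticalEquipment a owl:Class ;
    rdfs:subClassOf :Equipment ;
    rdfs:comment "Equipment whose failure suspends the vessel's licence to operate." .

:lifeRaft12 a :SafetyCriticalEquipment ;
    :installedOn :vesselA .
""", format="turtle")

# Derive a property-graph style cache (source, edge, target) for fast traversal.
# Only the RDF/OWL graph above is authoritative; the cache is rebuilt on demand.
cache = [
    (str(s), str(p), str(o))
    for s, p, o in master
    if not str(p).startswith("http://www.w3.org/")  # keep domain edges only
]
for edge in cache:
    print(edge)
```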
The Hyperscaler Signal
In November 2025, Microsoft introduced Fabric IQ and Data Agents that bring ontological modeling into its data platform narrative. Amazon Neptune supports SPARQL for RDF and Gremlin for property graphs. Treat these as execution surfaces and keep your master ontology in open standards for portability and auditability.
Why SQL Schemas Are Not Enough
In 2024, Air Canada was held liable for misinformation from its chatbot. The tribunal's message was clear: you are responsible for what your AI says.
Text-to-SQL: 16 percent baseline. Wrong.
Knowledge graph retrieval: 54 percent. Plausible.
Ontology-governed reasoning: roughly 80 percent, made up of 72 percent correct answers plus 8 percent explicit “I don’t know” responses that prevent confident hallucination. Provable.

Trustworthy systems must know when to say “I do not know”. This is the work ahead at Viking Life-Saving Equipment, where mission-critical safety operations rely on equipment that is a vessel’s license to operate.
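What that refusal can look like in code: a hedged sketch, not the benchmark's actual implementation, in which a generated SPARQL query is executed only if every term it references is defined in the ontology; otherwise the system answers “I don’t know” rather than guessing. All names are hypothetical.

```python
# Ontology-governed query checking, sketched with rdflib: reject queries that
# reference terms the ontology never defines instead of answering confidently.
import re
from rdflib import Graph

ontology = Graph().parse(data="""
@prefix :    <https://example.org/safety#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:LifeRaft   a owl:Class .
:servicedOn a owl:DatatypeProperty .
""", format="turtle")

# Terms the ontology actually declares.
known_terms = {str(s) for s in ontology.subjects()}

def answer(generated_sparql: str):
    # Collect every domain IRI the generated query relies on.
    used = set(re.findall(r"<(https://example\.org/safety#[^>]+)>", generated_sparql))
    unknown = used - known_terms
    if unknown:
        return f"I don't know: the query uses undefined terms {sorted(unknown)}"
    return list(ontology.query(generated_sparql))

# A plausible-looking query that hallucinates a property the ontology never defined.
print(answer("SELECT ?x WHERE { ?x <https://example.org/safety#expiresOn> ?d }"))
```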
The Audit: Five Questions for the Board
Boards do not need semantic expertise. They need semantic governance. These five questions separate dashboard theater from accountability infrastructure. If your AI steering committee cannot answer them, you have a governance gap that no vendor demo will close.
Can your AI systems trace a decision to its authoritative source? Not to the model that generated it, but to the business rule, policy, or fact that should have governed it. If not, you cannot audit what your agents do. You are delegating authority without accountability.
Who owns the definitions your AI systems use? If the answer is a vendor platform, your institutional knowledge is rented, not owned. That is a strategic dependency masquerading as a technology choice.
Can you distinguish reasoning from hallucination in your AI outputs? If every output looks equally authoritative regardless of whether it reflects institutional knowledge or statistical invention, you have no basis for trust.
Are your AI governance rules encoded in a language machines can execute, or do they exist only in policy documents? Agents can read documents but cannot reason over them consistently. Policy without enforcement architecture is aspiration, not governance.
If an AI decision fails, can you fix the root cause without retraining the model? If not, every failure requires an expensive, time-consuming intervention that scales with the number of agents you deploy.
These are not technical questions. They are fiduciary questions. The board does not need to understand RDF or OWL. It needs to know whether the organization has accountability infrastructure or governance theater.
What To Do Now
Start with a Minimum Viable Ontology (MVO) for the decision that creates the most liability if an agent gets it wrong. Not the easiest decision to model. The one where failure means regulatory exposure, financial loss, or reputational damage. That is where the business case for semantic infrastructure is self-evident.
Talent: Your org chart has a gap. Fill it with people who can define meaning with formal precision: semantic web professionals, philosophy graduates with computational training, or domain experts accustomed to formal modeling under uncertainty. Hire for conceptual discipline before technical skills. The wrong hire costs you a failed initiative, not a slow start.
Leadership: Treat ontology design as strategy, not plumbing. The CDO who governs the ontology governs what AI can understand about the business. If that responsibility sits with a technology team without executive sponsorship, the initiative will be deprioritized when budgets tighten.
Standards: Use RDF/OWL as the meaning layer. This is an ownership decision, not a technology preference. Open standards give you portability, auditability, and vendor independence. Proprietary formats give you speed today and dependency tomorrow.
Trust: Require formal semantics. Accuracy without provenance is liability without alibi. The benchmark evidence is clear: ontology-governed reasoning delivers roughly 80 percent accuracy including explicit “I don’t know” responses. That leap from plausible to provable is the standard that determines which agents you can trust with authority.
Next in the series: Part 2a examines what happens when organizations deploy agents without these foundations and how to arrest the spiral. It publishes in two weeks. Subscribe to follow the series.
About The Author
Frédéric Verhelst helps leadership teams build the foundations for non-linear growth. He focuses on ontology-first design and governance for agentic AI. With a PhD in Applied Physics and twenty-five years at the intersection of data, AI, and industrial operations, he has led large semantic interoperability programs, driven digital twin adoption, and advised on multi-billion-dollar decisions. He works at Viking Life-Saving Equipment, where agentic AI governance for mission-critical safety operations is taking shape.
Follow him on LinkedIn for the latest posts in The Ontology Imperative: Building Trustworthy Agentic AI.
Notes and References
W3C Workshop on Semantic Web in Oil and Gas Industry, Houston, December 9-10, 2008
SPARQL 1.0 and SPARQL 1.1 W3C Recommendations: W3C blog note announcing SPARQL as a Recommendation, January 15, 2008, and SPARQL 1.1 Query Language, W3C Recommendation, March 21, 2013
RDF and OWL specifications: W3C RDF and OWL standards overview pages
GQL ISO standard: ISO/IEC 39075:2024, Information technology - Database languages - GQL
Amazon Neptune multi-model support: Amazon Neptune Documentation overview page
Microsoft Fabric IQ and Data Agents: Introducing Fabric IQ on the Microsoft Fabric blog, and What’s new for Fabric Data Agents at Ignite 2025
World Economic Forum agent governance framework: AI Agents in Action: Foundations for Evaluation and Governance, November 27, 2025
HBR Analytic Services trust gap: news coverage of the HBR Analytic Services report on agentic AI trust and investment
Capgemini leadership preparedness gap: Capgemini Research Institute, Inside the C-Suite: How AI is quietly reshaping executive decisions, 2026
Air Canada chatbot liability: CBC News coverage of the British Columbia Civil Resolution Tribunal decision, February 2024
Accuracy benchmarks with knowledge graphs and ontology-based query checking: Sequeda, Allemang, Jacob, A Benchmark to Understand the Role of Knowledge Graphs on LLM Accuracy for Enterprise SQL, and GenAI Benchmark II, data.world AI Lab
National Highways valuation and geospatial dependency context: Anmut case study on Highways England data valuation, and Esri UK case note on the National Highways geospatial data program
Siemens industrial knowledge graph and ontology initiatives: Use Cases of the Industrial Knowledge Graph at Siemens, CEUR-WS; Industrial Knowledge Graph at Siemens, CERN OpenLab slides
Ora Lassila on legacy data and pipelines to RDF: CDL 2025 slides, Crafting RDF: Generating Knowledge Graphs from Legacy Data