AI is outpacing your data platform - what CIOs need to understand
Tue, 12th May 2026
After more than two decades designing enterprise data and analytics platforms, I have seen a few major technology shifts reshape how organisations think about data. The move from on-premises to cloud was one. The move from traditional analytics to modern data platforms was another. Now, we are in the middle of a much faster shift: from generative AI experimentation to agentic AI in production.
That was the clearest signal I took from the 2026 Microsoft MVP Summit in Seattle this March. Across the sessions I attended, and in conversations with Microsoft product teams and the Most Valuable Professional (MVP) community, this theme was consistent. AI is moving beyond generative use cases and into systems that can act, orchestrate, monitor, optimise and operate across business processes. The question is no longer whether to adopt AI, but how quickly it can be operationalised responsibly and at scale.
For me, four key themes emerged from the summit. The urgency of enterprise agentic AI readiness, the need to rethink data foundations around governance and accountability, the role of trust as a baseline requirement for AI to operate, and the importance of CIOs actively pressure testing their readiness and taking an intentional approach.
Agentic AI is scaling fast - New Zealand's pivot from pilot to production
After a week with global data engineers, one thing is certain: the "Year of the Pilot" is over.
Many New Zealand organisations are still operating in proof-of-concept mode when it comes to agentic AI. They are experimenting, testing use cases, and working through questions of security, privacy and risk. Whilst this is important, the USA and other leading markets have moved beyond experimentation and are rapidly building agentic AI into production environments, deploying it at scale and governing it as part of core business operations.
Across the Microsoft product ecosystem, agentic capabilities are no longer a future roadmap item; they are live, operational and evolving at "AI speed". For internationally connected economies like New Zealand, the opportunity is to move from cautious exploration to deliberate, well-paced execution. We have the chance to be "strategic fast-followers", learning from early global scaling pains to build more resilient systems from the outset.
Where Agentic AI readiness meets reality
The hurdle for New Zealand organisations isn't a lack of data; it's a lack of agent-ready data. To bridge the gap between a successful demo and a production-grade agent, business leaders must address five critical areas:
Unlocking unstructured data - High-value data and context hidden in emails, PDFs, documents, and meeting transcripts is often siloed. If that data is not governed and retrievable, agents lose institutional memory and cannot be effective.
Defining a retrieval strategy - Storing data is not enough. Production-grade agents require a clear strategy for how they search, prioritise, and use that data.
Moving beyond "gold layers" - While clean, structured data is essential for reporting, agents require broader "contextual data." Relying solely on highly curated tables limits an agent's ability to understand the "why" behind the numbers.
Addressing the data debt - Agents act as a magnifying glass for data quality. Duplicates and inconsistencies that humans might overlook are amplified by agents, making data integrity more critical than ever.
Upskilling for agent orchestration - There is growing demand for expertise in designing, orchestrating and managing how agents interact with complex data ecosystems.
Real readiness looks like data that is accessible, governed and context-rich, supported by clear retrieval strategies and teams skilled in orchestration, so that agents can act with accuracy and confidence.
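To make the retrieval-strategy point concrete, here is a minimal sketch, assuming a hypothetical document store where each item carries its own permissions and a freshness rank. The keyword-overlap scoring stands in for the vector search a production agent would use; all names (`Document`, `retrieve`) are illustrative, not any particular product's API.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A unit of unstructured content an agent may retrieve (hypothetical)."""
    doc_id: str
    text: str
    source: str                      # e.g. "email", "pdf", "transcript"
    allowed_roles: set = field(default_factory=set)
    freshness_rank: int = 0          # higher = more recent / more authoritative

def retrieve(query: str, docs: list, agent_role: str, top_k: int = 3) -> list:
    """Filter by permission FIRST, then rank by relevance and freshness.

    A real system would use vector similarity; naive keyword overlap
    stands in here so the sketch stays self-contained.
    """
    permitted = [d for d in docs if agent_role in d.allowed_roles]
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(d.text.lower().split())), d.freshness_rank, d)
        for d in permitted
    ]
    scored.sort(key=lambda s: (s[0], s[1]), reverse=True)
    return [d for score, _, d in scored[:top_k] if score > 0]
```

The ordering matters: permission filtering happens before relevance ranking, so an agent can never "see" a document it is not authorised to act on, no matter how relevant it scores.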
Agentic AI readiness requires a rethink of data foundation design
Agentic AI fundamentally changes the role of the data platform from supporting decisions to enabling action. Data is no longer prepared solely for human consumption through reports and dashboards. It must be continuously available, AI-agent readable, and ready to be acted on in real time. This shifts the data foundation design focus from batch processing and structured reporting layers to platforms that can support real-time retrieval, context, and execution across both structured and unstructured data.
To support this shift, data foundations must evolve in the following ways:
- Real-time, low-latency access to support continuous decision-making
- Explicit contextual metadata, so agents can interpret data without human judgement
- Clear sources of truth, removing ambiguity across systems and datasets
- Defined retrieval frameworks, enabling agents to consistently find and prioritise the right information
- Observability across agent interactions, including full visibility into what data is accessed and how decisions are made by the agents
- New orchestration capability layers, combining data engineering with AI orchestration to manage agent behaviour
Trust, the new operating boundary
Trust is the minimum viable condition for agentic AI to operate, because as organisations shift from human-led decisions to agent-driven actions, the tolerance for imperfect data disappears. In traditional environments, inconsistencies, gaps, and errors can be identified and corrected by human intervention before action is taken. In an agentic model, those same issues are no longer contained; they are amplified. Manual workarounds become automated patterns, and inconsistencies propagate at speed and scale.
To meet the minimum trust conditions, the following are critical:
- Verifiable data lineage, so agent decisions and errors can be traced and understood.
- Clear, dynamic permission structures, ensuring agents only access and act on authorised data.
- Embedded governance and quality guardrails, enabling intervention when data quality drops or conditions change.
If you don't govern it, you can't effectively AI it.
When organisations strike the right balance between speed, control, and trust, they unlock the full value of agentic AI by enabling teams to act with confidence and focus on innovation rather than validation. In my experience this is achieved through a "trust but verify" approach. Teams are empowered with self-service capabilities, supported by strong governance, clear data ownership, and continuous monitoring. This shift in data foundations is exactly what platforms like Microsoft Fabric are trying to enable.
Fabric simplifies tooling - it does not simplify thinking
At the MVP Summit, the scale and direction of Microsoft's investment in unified data platforms, particularly through Microsoft Fabric, was clear. Microsoft is solving the fragmentation of the "modern data stack" by establishing a single, integrated environment that brings together data engineers, data scientists, and analysts into a common operating model. By creating a unified entry point across data personas, Microsoft is addressing one of the most persistent challenges in enterprise data: fragmentation across teams, tools, and workflows. This is being reinforced through platforms like Azure AI Foundry, which extend that unified model into AI development and agent orchestration, connecting data, models, and applications within a single ecosystem.
However, the platform does not remove the complexity of the data estate. The ease of use, speed of deployment, and accessibility of Fabric can create a false sense of simplicity.
Key risks to manage:
- Workspace sprawl - Teams independently creating workspaces and solutions without a unified design or governance model
- Recreating silos within a unified platform - Fragmentation persists as teams define their own data structures, logic, and interpretations
- The lift-and-shift trap - Migrating existing data and processes into Fabric without addressing underlying data quality and architectural issues
- Skipping upfront architectural planning - Deferring foundational design (e.g. medallion or data mesh approaches), leading to rework and inefficiency
- Weak governance and lineage - No clear strategy for data ownership, traceability, or data loss prevention, increasing the risk of data issues or leaks
- A false sense of readiness - Ease of use drives rapid initial progress that masks deeper structural issues, which surface later at scale

A deliberate architecture and governance model is required to avoid costly rework.
CIOs considering Fabric should ask these questions before they implement:
- What is the goal of the data platform?
- Are you trying to break down silos between engineers and analysts?
- Fabric is a unified architecture built on a shared pool of compute resources - are you ready for shared enterprise capacity, or are you still thinking in departmental pools?
- Do you have a unified data strategy for the whole company?
- If your goal is to power AI agents using Fabric, are you prepared to maintain data quality continuously?

Unified does not mean simplified. Leaders must do the thinking and the plumbing before implementing Fabric.
What CIOs should pressure-test now
If an organisation wants to be genuinely AI-ready over the next 12 months, I recommend pressure-testing three specific areas immediately.
Unstructured data governance. Can your organisation govern, retrieve and use the knowledge trapped in documents, transcripts, emails and other unstructured sources? More importantly, can an agent find the right version, respect permissions and understand context without manual help?
Semantic consistency. If different departments define "total revenue" differently, which definition should an agent use? Does the platform understand context well enough to apply the right logic in the right scenario? These semantic models also need to be tested with AI agents, not just dashboards. What works for human interpretation may produce inconsistent or conflicting outputs when used by an AI agent.
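One way to picture a fix for the semantic-consistency problem is a governed metric registry: the agent asks the semantic layer for the definition that applies in its context, rather than choosing between conflicting versions itself. The metric names and SQL fragments below are invented purely for illustration.

```python
# Hypothetical semantic layer: one metric name, context-dependent definitions.
METRIC_DEFINITIONS = {
    ("total_revenue", "finance"): "SUM(invoiced_amount) - SUM(credit_notes)",
    ("total_revenue", "sales"):   "SUM(closed_won_deal_value)",
}

# The governed default: which context is the organisation's source of truth.
DEFAULT_CONTEXT = {"total_revenue": "finance"}

def resolve_metric(metric, context=None):
    """Return the governed definition for a metric. When no context is
    given, fall back to the default rather than letting an agent guess
    between conflicting departmental versions."""
    ctx = context or DEFAULT_CONTEXT[metric]
    return METRIC_DEFINITIONS[(metric, ctx)]
```

The design choice worth noting is the explicit default: an agent with no departmental context gets the organisation's agreed source of truth, and any other definition must be requested deliberately.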
The hidden manual workarounds. Where are people patching reports, correcting data, reconciling numbers or applying business logic outside the platform? These manual "band-aids" may be the clearest sign of where AI agents will struggle.
These are not just technical checks. They are leadership questions.
Data foundations are never finished
With over seven years of experience as a Microsoft MVP, I often describe data engineering as building a house, rather than delivering a finished product. You build it, move in, and think it is done. But once you start living in it, new needs emerge. You want a garden, maybe a pool, and suddenly there is ongoing maintenance. You need to weed, prune, and keep improving. It is never truly finished.
The same is true for modern data platforms, including those built on Microsoft technologies. While tools like Microsoft Fabric are advancing rapidly and enabling agentic AI at scale, they do not remove the need for strong, continuously evolving data foundations. Agentic AI raises the stakes. Data is no longer supporting decisions, it is driving them. That means any gaps in quality, governance, or design are not contained, they are amplified.
Agentic AI readiness, therefore, is not achieved through platform adoption alone. It is built through ongoing discipline and clear human thinking. As systems become more autonomous, the role of people does not disappear. It becomes more important. We the humans are the ones who define the architecture, set the guardrails, and determine what "good" looks like. Without that clarity of thinking, even the most advanced platforms will scale inconsistency. With it, organisations can scale AI with confidence.