The disruption question most boards ask is: How much will AI change things and how fast? A good question, but not the first one for enterprise operating models. The first question is organisational: Is your enterprise configured to support AI at scale, or only at the edge?
For most large organisations, the honest answer is…the edge. Two years of experimentation have produced genuine value: content generation in marketing, predictive modelling in supply chain, copilot tools in legal and finance. But what they have not produced is compounding, enterprise-wide intelligence. The reason is not the quality of the tools. It is the absence of the enterprise design to connect them.
Why AI belongs in a different category
Enterprise resource planning offers two reference points worth distinguishing. S/4HANA migrations and the broader shift to cloud-hosted ERP were, for most organisations, infrastructure changes: they did not require rethinking decision rights, functional coordination or operating logic. The original ERP wave of the 1990s and 2000s, however, was different: organisations had to redesign processes and commit to structural change before the software delivered its potential. AI is likely to be more disruptive than either, for two reasons:
- The structural parallel. The enterprises that captured most value from ERP were not those with the most capable software; they were those that committed to the structural prerequisites: standardised data, redesigned processes and shared information flows across functions. Those that treated ERP as a functional upgrade found it reflected their fragmentation rather than resolving it.
- The more radical implication. Whereas ERP's founding logic rests on a single premise (that coherent enterprise operation requires data consolidated into one integrated model), AI challenges that premise directly: intelligent systems can interrogate distributed, disparate data in real time without consolidating it first. If the need for rigid system integration weakens, so does the architecture built around it. Several AI-native platforms are already making this argument commercially. The question leaders now face is not only how to layer AI onto an ERP-centred design but whether that design remains the right foundation at all.
In both cases, the key insight is the same: AI's value compounds when intelligence is connected and governed; it fragments when it is siloed.
The operating model gap
September 2024 research by McKinsey & Co. on scaling generative AI confirms that the operating model is the binding constraint, not the technology. Most organisations have moved from experimentation to deployment without addressing the structural conditions that determine what deployment produces.
The gap shows up in three consistent patterns:
- Absence of common standards. AI operates across human decisions, processes and technology systems. Where these do not work to shared definitions and rules, AI amplifies inconsistency rather than resolving it. If the functions feeding a model cannot agree on what a "customer", a "product" or a "cost centre" means, no model can compensate. Data fragmentation is a consequence: incompatible inputs produce unreliable outputs regardless of model quality.
- Late governance. Pilots succeed, expand, attract scrutiny and are partially rolled back while governance catches up. The cost is not only the rollback; it is the erosion of organisational confidence that makes the next deployment harder to approve.
- Infrastructure duplication. Teams building AI capability independently accumulate inconsistent tooling and incompatible outputs. Model sprawl is the AI equivalent of shadow IT, with equivalent risk.
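The definitional problem is concrete enough to sketch in a few lines. In this hypothetical example (all record fields, statuses and rules are illustrative, not drawn from any named system), two functions answer "how many customers do we have?" under their own local definitions and get different numbers; a shared enterprise standard is what makes the question answerable at all:

```python
# Hypothetical data: each function holds its own view of "customer".
crm_records = [
    {"id": 1, "status": "active"},
    {"id": 2, "status": "prospect"},   # CRM treats prospects as customers
    {"id": 3, "status": "churned"},
]
billing_records = [
    {"id": 1, "paid_invoices": 4},
    {"id": 3, "paid_invoices": 2},     # billing counts anyone ever invoiced
    {"id": 4, "paid_invoices": 1},     # id 4 is unknown to CRM entirely
]

def crm_customer_count(records):
    """CRM's local rule: anyone not churned is a customer."""
    return sum(1 for r in records if r["status"] != "churned")

def billing_customer_count(records):
    """Billing's local rule: anyone with at least one paid invoice."""
    return sum(1 for r in records if r["paid_invoices"] > 0)

def shared_standard_count(crm, billing):
    """Enterprise standard: active in CRM AND has a paid invoice."""
    billed = {r["id"] for r in billing if r["paid_invoices"] > 0}
    return sum(1 for r in crm if r["status"] == "active" and r["id"] in billed)

print(crm_customer_count(crm_records))                        # 2
print(billing_customer_count(billing_records))                # 3
print(shared_standard_count(crm_records, billing_records))    # 1
```

Any model trained or prompted on the first two views inherits their disagreement; no amount of model quality recovers a definition the organisation never agreed.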
AWS describes the structural choice as between centralised, decentralised and hybrid AI operating models, each with distinct trade-offs between agility and control. Neither extreme scales cleanly. A recent McKinsey Global Survey found that 65 per cent of companies now use gen AI regularly—twice as many as the previous year—yet only 31 per cent of gen AI high-performers and 11 per cent of other organisations have adopted a component-based development model, one of the foundational enablers of scalable deployment. The evidence suggests that many enterprises are scaling AI activity faster than they are redesigning the structural conditions required to support it.
Three structural models
Three patterns are emerging across enterprise practice:
- Centralised AI hub. Characterised by strong governance and clear accountability, but still producing bottlenecks. A single central function cannot match the pace at which business units want to deploy; innovation slows precisely when the technology is moving fastest.
- Federated or embedded. Fast and domain-specific, but producing federated risk: duplication, inconsistency and accumulation of exposure over which no one has full visibility.
- Hybrid platform with distributed execution. Governance, data standards and core tooling at the centre; domain-specific application at the edge. The enterprise defines the guardrails; business units operate within them.
"Most organisations have moved from experimentation to early deployment without addressing the structural conditions that determine what deployment produces."
The third model reflects the same logic that governs how large organisations manage legal, financial and technology risk across decentralised operating structures. What is new is applying it to intelligence, at the pace AI demands.
The Intelligent Enterprise Spine
What the hybrid model requires in practice is a shared organisational backbone: the Intelligent Enterprise Spine. Not a single system or a single team, but a set of connected capabilities that form one essential element of the enterprise operating model, enabling intelligence to move coherently across it.
Its five components, in order of foundational dependency, are:
- Enterprise data governance. Without shared data standards and clear ownership, AI cannot scale reliably regardless of model quality. This is an organisational and executive question, not a technical one: the people whose decisions shape what data exists, in what form, with what governance, are business unit leaders, CFOs and COOs.
- Model lifecycle management. The organisational capability to procure, deploy and retire AI models in a governed, consistent way. Without it, portfolios of models accumulate from different vendors with no visibility of what each is doing or who is accountable for its outputs.
- Access control and policy frameworks. Who can use which AI capability, in which contexts, with what oversight. Microsoft's internal Copilot deployment built this before wide rollout: role-based access, usage monitoring and policy enforcement across business units. Governance is not a constraint on deployment; it is the precondition for sustaining it.
- Shared development capability. Common platforms that allow domain teams to build on a consistent foundation without rebuilding infrastructure from scratch for each use case.
- Monitoring and audit. Ongoing visibility into what AI systems are doing, where outputs are being used and where risk is accumulating. Without this, organisations are deploying intelligence they cannot observe.
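The access-control and monitoring components reduce, at their smallest, to a single discipline: every use of an AI capability is checked against policy and recorded, whether allowed or denied. A minimal sketch, with hypothetical roles, capability names and policy rules:

```python
from datetime import datetime, timezone

# Illustrative policy: which roles may use which AI capabilities.
POLICY = {
    "marketing_analyst": {"content_generation"},
    "finance_controller": {"forecasting", "anomaly_detection"},
}

audit_log = []  # the monitoring component: every decision is recorded

def authorise(role, capability):
    """Return True if the role may use the capability; log either way."""
    allowed = capability in POLICY.get(role, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "capability": capability,
        "allowed": allowed,
    })
    return allowed

print(authorise("marketing_analyst", "content_generation"))  # True
print(authorise("marketing_analyst", "forecasting"))         # False
print(len(audit_log))                                        # 2
```

The point of logging denials as well as grants is that the audit trail answers not only "what happened?" but "what was attempted?", which is where accumulating risk first becomes visible.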
"Governance is not a constraint on AI deployment. It is the precondition for scaling it safely enough to sustain."
Schneider Electric illustrates the spine in practice. Appointing a Chief AI Officer in 2021 and building a hub of 250 AI specialists, the company created centralised standards and governance with domain-level deployment across global business units. By 2023, it could scale AI use cases across a complex global enterprise because the structural conditions were in place. The hub did not own every AI decision. It owned the backbone that made every decision traceable.
Procter & Gamble followed a comparable path with its internal generative AI tool, "chatPG": enterprise-wide infrastructure on a common data foundation, with access governed centrally and use cases built at business-unit level.
What changes structurally
The spine has predictable implications for three parts of the enterprise:
- The corporate centre acquires a new mandate. Its role shifts to stewardship of the backbone: data standards, governance frameworks and audit. Business units retain operational latitude within those standards. This requires new executive-level capabilities (AI literacy, data governance expertise and policy design), not just technology leadership.
- GBS faces a strategic fork. GBS built its model on consolidating transactional processes, such as accounts payable, payroll and HR administration, where scale drove cost. Generative AI automates many of these directly. But the governance, standardisation and cross-functional coordination GBS has historically provided are precisely what the spine requires. KPMG's 2024 research on GBS evolution identifies AI-enabled transformation as the defining challenge for shared services leaders, with the most advanced organisations already repositioning around integrated enterprise capability rather than transactional efficiency. GBS can become the spine, or be replaced by it.
- Accountability sharpens as autonomy increases. As AI becomes embedded in operational workflows, the critical question shifts from who builds and uses it to who is accountable when it produces a wrong output, a biased result or a consequential error. Role-based controls and audit trails are not bureaucratic caution; they are the structural conditions for identifying, investigating and correcting failures.
The agentic intensification
The shift from generative to agentic AI—systems capable of executing multi-step workflows across enterprise systems with degrees of autonomy—makes the spine considerably more urgent.
An agent resolving a customer complaint may simultaneously access CRM, billing, inventory and contract management. Without shared data, governed access and audit capability, there is no reliable way to know what it did, why it did it, or how to correct it.
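What "knowing what it did and why" requires is structurally simple: every action an agent takes is written to a trace before it executes, with the system touched, the operation performed and the agent's stated reason. A hypothetical sketch (system names and actions are illustrative):

```python
trace = []  # one reviewable record per agent action

def agent_step(system, action, reason):
    """Record one agent action with its justification before executing it."""
    entry = {"step": len(trace) + 1, "system": system,
             "action": action, "reason": reason}
    trace.append(entry)
    return entry

# A complaint-resolution run across several enterprise systems:
agent_step("crm", "read_history", "establish prior contact with customer")
agent_step("billing", "inspect_invoice", "complaint cites an overcharge")
agent_step("billing", "issue_credit", "invoice exceeds contracted rate")

# The audit question: where did the agent change state, and on what grounds?
writes = [e for e in trace if e["action"] == "issue_credit"]
print(len(trace))           # 3
print(writes[0]["system"])  # billing
print(writes[0]["reason"])  # invoice exceeds contracted rate
```

Without a trace of this kind, the only record of the agent's reasoning is the outcome itself, which is exactly the condition under which errors cannot be investigated or corrected.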
UiPath's oversight framework for autonomous agents, Salesforce's agentic enterprise framing and Microsoft's governance architecture for agent connectivity reflect a convergent position across the enterprise technology landscape:
"Agentic AI without structural governance is risk accumulation at machine speed."
Palantir's AI Platform illustrates the architecture at its most developed: an organisational ontology layer that represents the enterprise as a connected set of objects, relationships and actions, enabling AI systems to operate across domains within defined boundaries of authority. The principle it demonstrates is the one that matters: the spine defines not just what AI systems can access, but what they are authorised to do with that access.
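The access/authority distinction can be made concrete in a few lines. This is a generic sketch of the principle, not any vendor's API: an ontology-style layer pairs each object type with the actions an AI system is permitted to take on it, so that being able to read an object and being authorised to act on it are separate questions.

```python
# Hypothetical ontology fragment: object types, readability, permitted actions.
ONTOLOGY = {
    "customer": {"readable": True, "actions": {"update_contact"}},
    "contract": {"readable": True, "actions": set()},  # read-only for agents
}

def can_read(obj_type):
    """Access question: may an AI system see this object at all?"""
    return ONTOLOGY.get(obj_type, {}).get("readable", False)

def can_act(obj_type, action):
    """Authority question: may it take this action on the object?"""
    return action in ONTOLOGY.get(obj_type, {}).get("actions", set())

print(can_read("contract"))              # True  (the agent may access it)
print(can_act("contract", "terminate"))  # False (but may not act on it)
```

Collapsing these two questions into one, as access-only controls do, is how agents end up authorised by accident.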
What this asks of leaders
The question is not whether to build the Intelligent Enterprise Spine. Organisations that scale AI without one will build it reactively, at greater cost, with more disruption, and after avoidable failures. The question is whether to build it deliberately.
Four shifts are required:
- Treat data governance as a strategic priority, not a technical function. Data quality determines AI quality at scale. The executives whose decisions shape enterprise data are business unit leaders, CFOs and COOs. Delegating this to the technology function is the most reliable way to get an answer that arrives too late and is framed too narrowly.
- Establish the governance model before tools multiply. Most organisations are making procurement decisions faster than governance decisions. Establishing early who approves AI deployments, monitors outputs and is accountable when something fails is not a constraint on pace. It is the condition that makes pace sustainable.
- Invest in the interface layer before expanding the tool layer. The binding constraint in most enterprises is not access to AI capability. It is the number of people who can work effectively at the boundary between the shared infrastructure and the domains it serves. Building this interface capability before expanding tooling produces compounding returns; the reverse produces compounding complexity.
- Measure outcomes, not deployments. Tools deployed, pilots completed and efficiency targets hit are incomplete proxies for genuine capability. The measures that matter: are decisions better? Are risks surfaced earlier? Does the organisation iterate faster? Building those into how AI investment is evaluated separates enterprises that build capability from those that accumulate tools.
The Intelligent Enterprise Spine is not a technology initiative. It is an organisational design decision about how the enterprise coordinates intelligence, distributes authority and maintains accountability as AI becomes embedded in its core.
The disruption is real. Its scale will depend less on the sophistication of the tools deployed than on the coherence of the organisation around them. The enterprises that lead will not be those that deployed AI fastest. They will be those that built the structural conditions under which intelligence compounds.