
Enterprise Operating Model Design

Enterprise design in the AI era

What actually changes, what doesn't

10 min read

Enterprise AI investment is accelerating rapidly. IBM Institute for Business Value research, covering 2,000 executives across 33 geographies, found that AI investment is expected to surge approximately 150% between 2025 and 2030. Yet 79% of those same executives say AI will significantly contribute to their revenue by 2030, while only 24% can clearly identify where that revenue will come from.1 The gap between ambition and clarity is striking. And it is not explained by the technology.

The explanation lies in enterprise design. Most enterprises are deploying AI into existing structures: adding tools to existing processes, automating tasks within existing functions, and embedding models into existing workflows. The result is productivity gains, often substantial, within a design built for a different era of work. Productivity without transformation.

The enterprises pulling ahead are doing something different. They are not inserting AI into an existing operating model. They are redesigning how work is executed, how decisions are made and how accountability flows, using AI as the mechanism for execution while keeping human judgement where it uniquely matters. This is the distinction that separates AI-enabled organisations from AI-first ones.

Most enterprises are targeting productivity gains. The ones pulling ahead are targeting transformation.

AI is moving into the operational backbone

In early deployment cycles, most enterprise AI was applied at the margins: content generation, data summarisation, customer-facing chatbots and internal search. Useful, but peripheral. The newer wave is different. Agentic AI, capable of taking multi-step actions, making bounded decisions, and completing complex tasks with limited human intervention, is now being deployed in processes that determine financial integrity, regulatory exposure, and operational performance.

Accounts payable, record-to-report, financial crime monitoring, underwriting support and procurement sourcing: these are control-heavy workflows where errors carry material consequences. HFS Research describes the shift directly: autonomy is moving into the control layer, where working capital, compliance and audit outcomes are determined.2 The emerging standard is machine-processed, human-validated: AI handles repeatable processing and bounded decisions; humans shift toward oversight, exception management and judgement. (See also Agentic-led GBS: the future enterprise platform.)

This changes the metrics that matter. When AI sits in the control layer, performance can no longer be measured in productivity terms alone. Straight-through rates, exception thresholds, cycle-time compression, control effectiveness at scale: these become the relevant indicators. The transition from AI-enabled to AI-operated is not incremental. It requires a different kind of enterprise.

The real constraint is enterprise design

Enterprises that treat this as an automation upgrade are discovering a consistent problem: the technology performs, but the outcomes disappoint.

The reason, invariably, is not the AI. It is the structure in which the AI has been embedded. HFS Research estimates that Global 2000 enterprises carry approximately $10 trillion in accumulated debt across process, data, people and technology—the compounded legacy of short-term fixes, fragmented systems, unclear ownership and inconsistent data.2 When agentic AI is embedded into workflows carrying this debt, the weaknesses surface immediately: exception spikes, governance friction and data quality failures. As one executive at a global energy company put it: "There is no artificial intelligence without process intelligence."

IBM IBV research confirms the scale of the design gap: 68% of executives surveyed worry their AI efforts will fail due to a lack of integration with core business activities.1 The concern is not about technology capability. It is about the gap between AI adoption (adding tools to existing processes) and AI-first design, where the operating model is rebuilt around what AI can now do.

Layering intelligent tools onto legacy workflows does not produce an intelligent enterprise. It produces an expensive legacy workflow with an AI layer on top. Genuine transformation requires redesigning how work is structured, how decisions are distributed and how accountability is maintained, before, not after, AI is deployed at scale.

The AI enterprise execution model

The design challenge has five distinct layers. Each layer differs in what it requires and in how much it changes as AI becomes the primary execution mechanism. Together they form a framework that structures and bounds the AI-first enterprise and focuses assessment, planning and investment:

Layer 1: Strategy (Status: Stable)
What the enterprise is trying to achieve: customer value, differentiated capability, commercial advantage. AI amplifies strategy; it does not replace it. The competitive edge comes from encoding the organisation's specific business logic and proprietary data into AI, not from using generic tools more widely.

Layer 2: Human judgement and governance (Status: Human-held)
Risk tolerance, regulatory posture, ethical boundaries, automation limits and accountability. These remain fundamentally human. AI expands leadership responsibility rather than removing it. Humans remain the ultimate authority layer: the layer that answers for outcomes when the machine gets something wrong.

Layer 3: Decision architecture (Status: Design gap)
The explicit design of which decisions machines make, which humans retain, where escalation occurs and how machine outputs are validated. This is where most enterprises are currently weakest. AI simultaneously increases both decision speed and decision volume. Without explicit decision architecture, automation fragments and accountability dissolves.

Layer 4: Human–AI execution system (Status: Design gap)
The most visible transformation layer. Execution shifts from human-performed tasks to machine-processed workflows with human validation at defined checkpoints. The enterprise operates through hybrid sequences: human → machine → human → machine. This requires deliberate redesign of roles, workflows and performance expectations. The organisation becomes a centaur enterprise.

Layer 5: Intelligent enterprise spine (Status: Foundation)
The integrated operational backbone connecting data, workflows, AI models and enterprise systems. AI does not scale through isolated tools. It requires architectural coherence across the whole execution layer. Global Business Services (GBS) and Enterprise Platforms organisations and shared service centres often play a central role in building and sustaining this spine.

Many enterprises are currently investing heavily in aspects of Layer 5, the technology and integration infrastructure. This is necessary, but not sufficient. The real challenges, and the greatest source of unrealised value, sit in Layers 3 and 4: decision architecture and human–AI execution redesign. Most organisations have not yet answered, at the level of individual processes, the questions those layers pose.

Most enterprises are investing in the spine. The performance gap sits in decision architecture and execution design.

Five structural shifts

Five things change substantially as enterprises move from AI-enabled to what Microsoft calls the Frontier Firm.3 Each shift maps directly to a layer in the execution model and requires an explicit design response.

How work is executed. The dominant model moves from human-performed tasks to machine-processed workflows with human validation at defined checkpoints. This is a redistribution of execution authority: AI handles scale, speed and consistency; humans handle exception, judgement and accountability. The transition is not automatic. It requires redesigning workflows from the ground up, not automating the workflows that already exist.
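The machine-processed, human-validated pattern described above can be sketched in code. The sketch below is purely illustrative: the confidence threshold, automation limit and queue names are hypothetical placeholders, and in practice such bounds would be set per process by the governance layer, not hard-coded.

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    item_id: str
    outcome: str                 # "auto_approved" or "pending_human_review"
    reasons: list = field(default_factory=list)

# Hypothetical policy values; real bounds come from governance, not code.
CONFIDENCE_THRESHOLD = 0.95
AMOUNT_LIMIT = 10_000

def route(item_id: str, confidence: float, amount: float) -> Result:
    """Complete an item straight through only when every bound is
    satisfied; otherwise escalate it to a human checkpoint with
    explicit reasons, so exceptions stay reviewable."""
    reasons = []
    if confidence < CONFIDENCE_THRESHOLD:
        reasons.append(f"confidence {confidence:.2f} below threshold")
    if amount > AMOUNT_LIMIT:
        reasons.append(f"amount {amount} above automation limit")
    if reasons:
        return Result(item_id, "pending_human_review", reasons)
    return Result(item_id, "auto_approved")
```

The design point is that the escalation path carries reasons with it: the human checkpoint receives not just the exception, but why the machine declined to act, which is what makes exception management a judgement task rather than rework.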

How decisions are made. AI increases both the volume and the speed of decisions flowing through an organisation. Without an explicit decision architecture—the deliberate design of what machines decide, what humans decide and how the two interact—accountability dissolves. IBM IBV found that 55% of executives believe competitive advantage in 2030 will depend more on speed of execution than on making perfect decisions.1 Speed without decision architecture is not agility. It is accountability risk at scale.

How the operational backbone is structured. The enterprise system of record, built over decades as a repository, must become an active execution layer: processing, routing, flagging and validating in real time. Building this intelligent enterprise spine requires integration across data, AI models, workflows and enterprise platforms—not individual tool deployments, but architectural coherence. Organisations that achieve this coherence anticipate substantially stronger outcomes: IBM IBV research found that organisations focused on integrating AI across products, services and workflows, and on using purpose-built rather than generic models, project 54% productivity improvements by 2030, compared with an average expectation of 42%.1

What humans do. Every significant AI deployment changes not just how much work people do, but what kind of work they do. The emerging pattern is consistent: human roles concentrate at three points. First, the synthesis of AI outputs into decisions that require accountability (see The human side of human–AI teams). Second, the governance and oversight of machine-processed workflows at scale. Third, the judgement calls that sit outside AI's reliable capability zone: where context, relationships and institutional knowledge cannot be delegated. IBM IBV found that 74% of executives expect AI to redefine leadership roles and two-thirds expect AI to create entirely new ones.1

How governance works. In a machine-processed enterprise, governance cannot remain a periodic review function applied after decisions are made. It must be embedded in the execution layer itself: as defined escalation thresholds, as audit-ready decision logs, as real-time monitoring of AI output quality and as clear accountability structures that answer, in advance, what happens when a machine-assisted decision turns out to be wrong. Governance designed into architecture is more effective and less costly than governance retrofitted after failure.
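Governance embedded in the execution layer, rather than applied after the fact, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the decorator name, the in-memory audit log standing in for an append-only store, the escalation threshold and the toy scoring logic are all hypothetical.

```python
import json
import time
from typing import Callable, Tuple

AUDIT_LOG = []                 # stand-in for an append-only, audit-ready store
ESCALATION_THRESHOLD = 0.90    # hypothetical quality bound set by governance

def governed(decide: Callable[[dict], Tuple[str, float]]) -> Callable[[dict], dict]:
    """Wrap a machine decision so governance executes with it: every
    call is logged, and low-confidence outputs are escalated to a
    human rather than acted on."""
    def wrapper(case: dict) -> dict:
        decision, confidence = decide(case)
        record = {
            "ts": time.time(),
            "case_id": case["id"],
            "decision": decision,
            "confidence": confidence,
            "escalated": confidence < ESCALATION_THRESHOLD,
        }
        AUDIT_LOG.append(json.dumps(record))   # audit-ready decision log
        if record["escalated"]:
            return {"status": "escalated_to_human", "case_id": case["id"]}
        return {"status": "executed", "decision": decision}
    return wrapper

@governed
def screen_transaction(case: dict) -> Tuple[str, float]:
    # Toy scoring standing in for a real model call.
    score = 0.99 if case["amount"] < 1_000 else 0.70
    return ("clear", score)
```

The point of the sketch is the ordering: the log entry and the escalation check happen inside the decision path, not in a periodic review afterwards, so the answer to "what happens when the machine is wrong" exists before the first wrong decision.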

What changes far less than people think

A great deal of what executives are being told must change does not, in fact, need to do so. The core of an enterprise—its strategy, its accountability structures, its values, its culture—is not a casualty of the AI transition. It is the anchor.

Strategy remains the first layer precisely because it determines what AI should be optimised toward. IBM IBV found that 57% of executives expect competitive advantage to come primarily from the sophistication of their AI models by 2030,1 but the sophistication that matters is not raw capability. It is how well the AI reflects the organisation's specific business logic, customer relationships and proprietary knowledge. Generic AI applied to a generic operating model produces generic results. The differentiator is the organisation's IP, encoded into AI that knows its specific business.

Human accountability does not diminish; it sharpens. As AI takes on more execution, the humans who remain accountable for outcomes carry a larger, not smaller, responsibility. What changes is not who is accountable, but what they must understand and how they must exercise judgement in an AI-mediated environment.

Governance, risk frameworks and regulatory compliance endure. What changes is where they sit. They can no longer be bolt-on functions applied over decisions that have already been made. In an AI-operated control layer, they must be embedded at the point of execution.

And the process knowledge that encodes decades of domain expertise—industry logic, client relationship patterns, exception handling built from experience—retains its value precisely because AI cannot replicate what it has not seen. As HFS Research notes, process depth is the differentiator, not agent capability.2 The organisations that win will be those that encode their proprietary process intelligence into AI, not those that apply the most powerful generic models to legacy operations.

The structural gap most enterprises face

The gap between the current state and the AI-first enterprise is, for most organisations, not primarily a technology gap. It is a design gap at Layers 3 and 4. The IBM IBV data is specific about the consequences: AI-first organisations—those that redesign their operating models rather than simply adopting tools—anticipate 70% greater improvement in productivity, 74% greater reductions in process cycle times, and 67% greater improvements in project delivery times by 2030, compared to their peers.1

The compounding effect is significant. Organisations leaning into AI-first operations do not just perform better; they iterate faster. In the time it takes a conventionally structured organisation to complete one development, testing and delivery cycle, AI-first organisations complete multiple, each one generating data that improves the next. The gap between the two is not static; it widens with every cycle.

Most enterprise AI deployments to date have focused on Layer 5: investing in platforms, data infrastructure, AI tools, and integration architecture. The result is organisations with capable infrastructure and fragmented impact: AI tools in use across the enterprise, but no coherent design for how decisions flow, who is accountable, where human judgement intervenes and how governance is maintained at machine speed. The technology is ready. The design is not.

The leadership task: five design questions

Five questions define the enterprise design agenda for the AI era. They are architectural in nature, but they require answers at the executive level because the answers determine what the organisation becomes, not just what technology it uses.

Where does machine execution end and human judgement begin? Not in the abstract, but in each core process. For every workflow considered for agentic deployment, the boundary between machine-processed and human-validated must be designed explicitly rather than assumed. This is the foundational decision of Layer 4. It cannot be deferred to implementation.

What decisions are we handing to machines, and who is accountable? Decision authorities must be defined as clearly in an AI-operated organisation as they are in any other governance structure. The absence of explicit decision architecture is not neutrality; it is an accountability vacuum. As IBM IBV research notes, the organisations that govern AI well treat governance as a competitive differentiator, not a compliance cost: they can move faster and take smarter risks precisely because they have defined accountability in advance.

Where is our enterprise debt, and what does it constrain? HFS Research estimates $10 trillion in accumulated process, data, people and technology debt across Forbes Global 2000 enterprises.2 When agentic AI is deployed into workflows carrying such debt, the weaknesses surface as scaling constraints: exception spikes, data quality failures, and governance gaps that were invisible at lower volumes. Debt reduction is not a preliminary project. It is a prerequisite for agentic scale.

What does the intelligent enterprise spine look like for our operations? The integration architecture that connects data, workflows, AI models and enterprise systems is not a technology procurement decision. It is an operating model decision that reflects the organisation's specific processes, governance requirements, and risk posture. For many enterprises, the Global Business Services (GBS) and Enterprise Platforms or shared services organisation is positioned to become the intelligent enterprise spine: the layer that connects AI execution to business outcomes at scale. (See also Agentic-led GBS: the future enterprise platform.)

How are we building the human capability this requires? Every AI deployment changes what people do, not just how much of it they do. The six psychological and practical disciplines that determine whether human–AI collaboration produces superior performance (see The human side of human–AI teams) must be developed alongside the operating model redesign, not after it. Capability development that runs ahead of tool deployment builds the human foundation that makes AI investment consequential.

The question is not whether to deploy AI. It is whether the enterprise is designed to make it consequential.

The enterprises that close the gap between AI adoption and AI-first design will not be those with the most powerful technology. They will be those whose leadership had the discipline to answer these five questions before they became urgent, and the organisational capability to act on the answers.

References

  1. IBM Institute for Business Value (2026). The Enterprise in 2030: Engineered for Perpetual Innovation. In partnership with Oxford Economics; survey of 2,000 executives, Q3–Q4 2025, across 33 geographies and 23 industries. ibm.com/ibv
  2. HFS Research (2026). Genpact FOCUS 2026: An Operating Model Reset for Agentic AI. Dana Daher & Hridika Biswas. hfsresearch.com
  3. Microsoft WorkLab (2025). 2025: The Year the Frontier Firm is Born. Work Trend Index Annual Report. microsoft.com/worklab
  4. Morichella Associates (2026). The Human Side of Human–AI Teams. Human–AI Organisation Design, Part II. morichella.com/perspectives
