The transformation playbook that served organisations through ERP implementation, digital channel build-out and cloud migration carries a defining assumption: that the target state is knowable, and the task is to plan toward it. Define the future operating model. Map the gap. Build the change programme. Execute sequentially or in structured waves. Govern progress against the plan.
That assumption breaks with agentic AI. The target operating model is not knowable in advance. It emerges through deployment. Organisations that apply the familiar playbook without adjustment, whether by deploying broadly on enterprise vendor platforms or by commissioning a diagnostic and designing the architecture before moving, will generate a specific and expensive form of structural risk: lock-in to a model defined before the organisation understood what it actually needed.
The question this creates for executive teams is precise. If the destination cannot be fully designed in advance, how do you govern a transformation that must, by definition, be discovered through execution?
Why the traditional model does not apply
Conventional transformation logic is well suited to problems with a known destination. Cloud migration has one: the workloads being moved exist, and the architecture they will occupy can be specified. ERP implementation has one: the processes the system will govern can be mapped, the data it will use can be defined, the governance it requires can be designed.
Agentic AI does not have this property. It is not simply a new technology layer. It changes how decisions are made, how work is executed and how value is created. Those changes cannot be fully specified in advance because they depend on how the technology performs in the specific context of the enterprise.
The capabilities available are developing faster than organisations can consume them: Gartner recorded a 1,445 per cent increase in client enquiries about multi-agent AI systems in 2024 alone.1 The operating model implications (how work gets done, which decisions humans retain, how workflows are redesigned) depend on deployment experience that most organisations do not yet have. The value case for a specific use case cannot be reliably pre-specified, because the performance of AI systems in operational context cannot be reliably predicted from vendor demonstrations or peer benchmarks.
The structural consequence is clear. Organisations designing their AI operating model comprehensively before deployment are not being rigorous. They are being premature. And organisations deploying broadly before understanding what they need are not being bold. They are accumulating a new form of technical and operational legacy at pace.
What the data actually shows
The deployment picture is not uniform, and the divergence is instructive.
McKinsey's 2024 State of AI report found that 88 per cent of organisations had deployed AI in at least one business function.2 But MIT's Center for Information Systems Research, in a study of 721 organisations, found that only seven per cent had scaled AI enterprise-wide, and that a small group of high performers, roughly six per cent of the total, were generating more than 80 per cent of observed value.3 BCG's parallel analysis put the proportion of organisations successfully scaling AI at four per cent.4
The gap is not primarily about access to technology or investment volume. McKinsey's analysis found that AI-enabled transformations require, on average, 2.8 times more workflow redesign than conventional technology deployments.5 The organisations failing to scale have, in most cases, deployed capability into workflows that were not redesigned to use it. Tools have proliferated; operating models have not changed. The result is AI activity without AI value.
88% of organisations have deployed AI. 4% have scaled it.
The gap is not technology. It is operating model design.
The implication is not that deployment should wait. It is that deployment and operating model redesign must proceed together: not sequentially, and not with the redesign defined in advance of the learning that only deployment generates.
What vendors are selling, and why incentives diverge
The market for AI transformation support has consolidated around two broadly competing narratives, and neither is neutral.
The first is a scale narrative. Salesforce's Agentforce, Accenture's AI Refinery, and comparable enterprise platforms have built substantial commercial infrastructure around the proposition that the right response is broad deployment at pace: enterprise-wide agent networks, unified data infrastructure, and coordinated rollout across functions and geographies. The scale narrative is commercially rational for its proponents. Enterprise platform revenue is driven by breadth of deployment. The faster organisations commit to a platform architecture, the more difficult and expensive subsequent reconfiguration becomes.
The second narrative is closer to directed experimentation. Palantir's AIP Bootcamp model (intensive, use-case-specific engagements designed to identify and deliver value in specific operational domains within defined timeframes) reflects a different logic: that value is discovered through deployment in bounded, high-stakes contexts, not planned across the enterprise in advance.6 IBM's Garage methodology and Google's AI-native co-creation model operate from similar premises.
The distinction matters because it reflects a genuine disagreement about the relationship between planning and learning in AI deployment. Neither narrative is straightforwardly wrong. But both are shaped by the commercial incentives of the organisations promoting them. Executive teams evaluating AI transformation support should ask a direct question: does this provider's commercial model benefit from early lock-in, or from demonstrated value in bounded domains?
The risk of outsourcing understanding
There is a specific organisational risk that does not feature prominently in vendor narratives: deploying AI solutions at scale before the organisation has developed sufficient internal understanding to govern what it has deployed.
The new form of legacy is not outdated technology.
It is operating logic embedded before the organisation understood what it needed.
Research from MIT's Center for Information Systems Research (CISR) identifies a consistent characteristic of high-performing organisations: they treat AI deployment as an organisational learning activity, not a technology installation.3 They build internal capability alongside external deployment. They retain ownership of the design logic (the decisions about how AI is integrated into workflows, how human oversight is structured, and how performance is measured) rather than delegating it to vendors or systems integrators.
McKinsey's work on capability building in AI transformation is consistent: organisations that develop internal AI fluency alongside deployment (embedding data scientists in business units, building operational leaders who understand AI performance characteristics, and creating governance mechanisms that surface learning continuously) achieve materially better outcomes than those that treat AI as a capability to be installed and managed externally.5
The risk of outsourcing understanding is not merely dependency. It is the accumulation of embedded workflow logic that the organisation cannot interrogate, adjust or explain: a new form of legacy that sits inside operating model design rather than technology infrastructure, and is correspondingly harder to identify and more expensive to replace.
The required approach: stable intent, adaptive execution
What the evidence supports is neither comprehensive planning before deployment nor broad deployment without design. It is a specific architecture of intent and adaptation: a form of managed disequilibrium that holds intent stable while allowing execution to adapt continuously.
The elements that must be held stable are the ones that do not require deployment experience to define. Strategic intent, meaning the domains in which AI should generate value and the performance standards it must meet, can and should be defined before deployment begins. Governance architecture (the decision rights, risk thresholds and accountability frameworks set out in Speed by design: Governance and the architecture of AI transformation) must be established before material operational exposure is committed. Organisational learning ownership (the internal responsibility for making sense of deployment experience) must sit with the enterprise, not with vendors or partners.
The elements that must be allowed to evolve are the ones that cannot be defined without deployment experience. The specific operating model (how work is divided between humans and AI systems, how processes are redesigned, how oversight is structured) will be shaped by what deployment reveals. Workflow design will change. The human-AI division of labour will shift as capability and confidence develop. The scale and sequencing of further deployment will be determined by what is actually learned, not by what was projected.
This is the difference between what Amazon distinguishes as one-way and two-way doors: decisions that are difficult or impossible to reverse, and decisions that can be revisited as understanding improves.7 The former require rigour before commitment. The latter should be made quickly, at the right level, with explicit learning logic built in.
Henry Mintzberg's distinction between deliberate and emergent strategy, first articulated in 1985, is directly applicable here. Effective strategy, Mintzberg argued, is neither fully planned nor fully improvised. It holds intent stable while allowing execution to adapt continuously to what the environment reveals.8 For agentic AI transformation, that insight is not an academic observation. It is the practical design principle.
What this looks like in practice
Five disciplines separate the four per cent of organisations scaling AI effectively from the majority deploying it without material impact.
- Define intent at enterprise level, before deployment begins. This is not a capability inventory or a use case map. It is a clear statement of the performance standards AI must meet, the domains in which it should generate value, and the governance boundaries within which deployment can proceed. JPMorgan Chase established this architecture (a top-down mandate, proprietary data infrastructure and a managed portfolio of governed use cases) before scaling AI across 60,000 employees. The result included a 95 per cent reduction in anti-money laundering false positives and a projected $2 billion in AI-generated value.9 The return was not despite the architecture. It was because of it.
- Deploy initially in high-value, bounded domains. The purpose is not minimum viable product thinking but structured exposure to real operating conditions: bounded deployment generates the specific learning that enterprise-level operating model design requires: what the technology actually does in context, where human oversight is essential, and what workflow redesign is needed to realise value. Investec's initial AI deployment focused on a specific, high-stakes operational domain (regulatory document processing) with defined performance criteria and explicit progression conditions. The learning that deployment generated informed subsequent operating model decisions across the business.10
- Instrument learning by design. Learning from AI deployment does not accumulate automatically. Organisations must build the measurement and feedback infrastructure to capture it: performance against defined criteria, workflow redesign experience, human override patterns and edge-case failures. Without instrumentation, learning is anecdotal. With it, deployment experience becomes the design input for the next operating model decision. This is the mechanism through which governance architecture stays current with operational reality.
- Make operating model evolution explicit. The assumption that AI deployment will automatically reshape the operating model is one of the most reliable mechanisms for ensuring that it does not. Operating model evolution must be a governed activity, with defined ownership, explicit criteria for progression, and a clear link between what deployment reveals and how operating decisions are adjusted. It is a leadership responsibility, not a technology outcome.
- Scale on defined criteria, not on pressure. The failure mode most prevalent among organisations that deploy without scaling is premature broadening: expanding AI deployment before the operating model conditions for scale have been established. BCG's research on the consistently scaling four per cent identifies one common factor: they make explicit decisions about what must be true before deployment expands, and they enforce those criteria against commercial and competitive pressure.
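The instrumentation and scaling disciplines above can be made concrete. The following is a minimal, purely illustrative sketch in Python: the event fields, thresholds and class names are assumptions introduced here for illustration, not drawn from any cited framework. It shows one way deployment events might be logged and a scaling decision gated on explicit, pre-defined criteria rather than on pressure.

```python
from dataclasses import dataclass, field

# Hypothetical record for one AI-assisted task outcome; the fields are
# illustrative, not taken from any specific vendor platform or framework.
@dataclass
class DeploymentEvent:
    task_id: str
    met_criteria: bool      # did the output meet the defined performance standard?
    human_override: bool    # did a human reviewer replace the AI output?
    edge_case: bool         # was the task flagged as outside expected conditions?

@dataclass
class LearningLog:
    events: list = field(default_factory=list)

    def record(self, event: DeploymentEvent) -> None:
        self.events.append(event)

    def override_rate(self) -> float:
        # Proportion of tasks where a human replaced the AI output.
        if not self.events:
            return 0.0
        return sum(e.human_override for e in self.events) / len(self.events)

    def ready_to_scale(self, max_override_rate: float, min_events: int) -> bool:
        # Explicit progression criteria: enough observed events, and an
        # override rate at or below the governed threshold.
        return (len(self.events) >= min_events
                and self.override_rate() <= max_override_rate)

log = LearningLog()
for i in range(10):
    log.record(DeploymentEvent(f"task-{i}", met_criteria=True,
                               human_override=(i % 5 == 0), edge_case=False))

print(log.override_rate())          # 0.2
print(log.ready_to_scale(0.1, 10))  # False: override rate exceeds the threshold
```

The design point, not the code, is what matters: the progression criteria are stated before deployment, and the decision to expand is a function of observed evidence, not of elapsed time or commercial pressure.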
What this asks of leaders
The shift is not in what leaders decide. It is in how they decide. The operating model for agentic AI transformation asks something specific and uncomfortable of each leadership role.
- For CEOs, it requires replacing the instinct to define the target state with the discipline of governing learning toward it. The value of CEO involvement is not directional clarity about the destination; that clarity will be incomplete, in ways that cannot be anticipated. It is architectural clarity about strategic intent, governance boundaries, and the pace of learning the organisation is designed to sustain.
- For COOs, it demands a new mode of operating model stewardship: not designing the future state comprehensively in advance but owning the logic by which the current state is continuously redesigned in response to what deployment reveals. This requires the ability to hold operational continuity and structural adaptation simultaneously (the two-systems challenge described in The Discipline of Structural Change: Why transformations stall before they can scale) with the additional variable that the adaptation is toward a destination being discovered, not a destination already defined.
- For CFOs, the investment framing must shift. The question is not the business case for a defined operating model, which cannot be reliably specified in advance. It is the governance of a portfolio of bounded deployments, each generating learning that informs the next investment decision. Return on AI investment is a compounding asset if it is governed that way, and a sunk cost if it is not.
- For Boards, the oversight question is whether the organisation is learning from AI deployment at the pace its competitive environment requires. Not whether the transformation programme is on plan (in this model, there is no fixed plan) but whether the learning infrastructure is functioning, whether operating model evolution is governed explicitly, and whether the organisation is accumulating institutional understanding faster than it is accumulating operational dependency.
Strategic consequence
The Perspectives in this series are complementary. Speed by design: Governance and the architecture of AI transformation establishes the governance architecture: the foundational design decisions that must precede material deployment. This article describes what runs within that architecture: not a defined target state, but a disciplined process of discovery. The governance guardrails identified in that article are what make adaptive execution safe. The adaptive execution described here is what keeps those guardrails relevant.
The organisations that will generate sustained value from agentic AI are not the ones with the most comprehensive transformation roadmaps. They are not the ones that deployed most broadly, most quickly, or on the most capable vendor platforms. They are the ones that developed the institutional capacity to learn from deployment and to use that learning to govern operating model evolution at pace.
The primary risk is not moving too slowly. It is locking into the wrong model too early, whether that lock-in comes from premature operating model design, from broad vendor platform deployment before the organisation understood what it needed, or from treating AI transformation as a one-time programme rather than an ongoing governance challenge.
In an environment where the capabilities are still developing, operating models are still forming, and performance data is still accumulating, the competitive advantage is not the best plan. It is the fastest, most disciplined learning.
Stable intent. Adaptive execution.
The loop between them is where the value is.
References
- Gartner (2024). What's New in Artificial Intelligence from the 2024 Gartner Hype Cycle. Gartner Inc. www.gartner.com/en/articles/what-s-new-in-artificial-intelligence-from-the-2024-gartner-hype-cycle
- McKinsey & Company (2024). The State of AI in 2024: GenAI adoption spikes and starts to generate value. McKinsey Global Survey. www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- Wixom, B. and Beath, C. (2024). Becoming an AI-Fueled Business. MIT Center for Information Systems Research, Research Briefing, Vol. XXIV, No. 6. cisr.mit.edu/publication/2024_0601_AIFueledBusiness_WixomBeath
- BCG (2024). Where's the Value in AI? Boston Consulting Group Henderson Institute. www.bcg.com/publications/2024/wheres-the-value-in-ai
- McKinsey & Company (2025). Superagency in the workplace: Empowering people to unlock AI's full potential. McKinsey Global Institute. www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential
- Palantir Technologies (2024). AIP Bootcamp. Palantir.com. www.palantir.com/aip/bootcamp/
- Bezos, J. (2016). 2015 Letter to Shareholders. Amazon.com, Inc. www.aboutamazon.com/news/company-news/2016-letter-to-shareholders
- Mintzberg, H. and Waters, J.A. (1985). Of Strategies, Deliberate and Emergent. Strategic Management Journal, 6(3), pp. 257–272. www.jstor.org/stable/2486186
- JPMorgan Chase & Co. (2024). 2023 Annual Report. JPMorgan Chase. www.jpmorganchase.com/ir/annual-report
- Investec plc (2024). How Investec is using AI in a responsible way. Investec Group. www.investec.com/en_gb/focus/investing/how-investec-is-using-ai-in-responsible-way.html