AI is here, and its importance for business and broader life will be on the rise for some time. Organisationally, the question most worth asking is not how much AI to adopt, but whether your organisation is designed for a durable partnership between humans and the systems they deploy.
Here, at the outset of AI integration into the enterprise, most organisations are approaching AI tactically: which tools to purchase, which processes to automate, how much headcount to rationalise. These concerns matter, but a more fundamental strategic question overshadows them: how to rapidly establish sustainable conditions for humans and AI to collaborate effectively and drive unprecedented growth. Without that foundation, organisations tend toward one of two failure modes: underusing AI and limiting its value, or over-relying on it in ways that quietly accumulate risk.
A 2023 study by Harvard Business School (HBS) and Boston Consulting Group (BCG)1 found that consultants using AI produced work rated significantly higher in quality, except when the AI was wrong, in which case the work was rated worse than that of those using no AI at all. The technology amplified performance in both directions. What determined the outcome was not the tool, but the quality of the human–AI relationship: whether the person using it understood enough to know when to push back.
The hybrid business—in which humans and AI operate alongside each other as a matter of course—is not a transition. It is the destination. This article offers a practical framework for designing your organisation to get the most from it: how to structure the collaboration, where to assign accountability, and what leaders need to do differently. These are the core concerns of organisation and governance design.
"The hybrid business is not a transition. It is the destination. This article offers a practical framework for designing your organisation to get the most from it."
The case for partnership
The evidence points clearly in one direction: the combination of human and AI, when well designed, consistently outperforms either working alone. The practical implication follows directly—the priority is not to deploy more AI, but to build better collaboration around the AI you already have.
The HBS/BCG study examined the effect of AI assistance on consultant performance across a range of complex tasks. AI-assisted consultants completed around 25% more tasks and produced outputs rated roughly 40% higher in quality for work within the AI's capabilities. But for tasks where the AI system was confidently wrong, those using it performed 19 percentage points worse than those working without it.2
More AI does not mean better outcomes. Better collaboration does.
The same pattern appears across sectors:
- In radiology, a 2026 international study found that AI detected 38% more pancreatic cancers than radiologists at equivalent specificity—not replacing clinical judgment but substantially extending what screening can find.3
- In legal research, AI accelerates case preparation dramatically but has produced documented instances of fabricated citations, resulting in judicial sanctions against the filing attorneys.4
- In software development, a large-scale Microsoft Research study found that developers using AI coding assistance completed tasks around 55% faster.5
In each case, the value of AI is conditional on the quality of the human alongside it. Ask not "What can this tool do?" but "How well are our people equipped to work with it?"
After Deep Blue defeated Kasparov in 1997, something unexpected emerged from the competitive chess world. In open freestyle formats where human–computer teams competed against each other, it was not the most powerful computers that won. It was human players who had developed the most effective working relationship with their machines—who knew when to trust the engine, when to override it, and when to ask it a different question. Kasparov described these teams as centaurs.6 The label has stayed because it describes something real: a form of capability that belongs to neither party alone. The goal for any organisation serious about AI is to build centaurs, not simply to accumulate tools.
What each partner brings
Building an effective human–AI partnership begins with a clear-eyed assessment of what each party contributes—not by drawing a boundary between "AI tasks" and "human tasks," but by understanding the distinct and complementary strengths each brings.
AI strengths lie in volume, speed, and pattern recognition within defined parameters: drafting, anomaly detection, scenario modelling, compliance checking, and scheduling optimisation. Automation here is not a concession but a sensible use of available capability.
But AI systems have characteristic blind spots that every leader must internalise. They optimise for stated objectives, often missing what was left unstated. Amazon's AI hiring tool, trained on ten years of male-dominated data, began systematically penalising applications from women.7 The system did exactly what it was built to do—it simply could not recognise that the objective itself was flawed. The lesson: AI optimises; humans must set and interrogate the objective.
Humans bring contextual interpretation, ethical judgment and the ability to challenge an objective rather than merely optimise for it. A human reading AI-produced recommendations can sense when a technically correct answer will land badly, notice when a stakeholder relationship changes what is being asked and absorb the downstream consequences (reputational, relational, legal) when a decision proves wrong.
The HBS/BCG study found that performance uplift was highest when consultants engaged actively with AI outputs—questioning, refining, redirecting—rather than accepting or rejecting them wholesale.8 Passive use of AI is not a safe default. It is the condition most likely to produce the worst outcomes.
"When mapping roles, ask not 'can AI do this?' but 'what does this work actually require—speed and pattern recognition, or judgment and accountability?'"
Designing for collaboration
Classify every AI-enabled process into one of three configurations—not as rigid categories, but as prompts for clear thinking about who does what and who owns what.
[Diagram: three collaboration configurations: AI-primary (human validates), interface roles (translate both ways), human-primary (AI informs)]
AI-primary processes—forecasting, scaled content production, financial analysis, compliance monitoring—place systems at the centre of execution while humans define parameters, review exceptions and validate outputs. Morgan Stanley's deployment of a GPT-4-based assistant to 16,000 financial advisors illustrates the model: the system handles research retrieval at speed; advisors use it to deepen client conversations.9 Apply this where volume is high, the task is well-defined and human review of exceptions is feasible.
Human-primary processes—strategy, stakeholder negotiation, regulatory positioning, ethical judgment—must remain with people, with AI providing analysis, modelling implications or surfacing relevant precedent. Apply this wherever the consequences of a wrong decision are significant and contextual judgment is essential.
Interface roles are the most important and most underdeveloped in most organisations—the people who translate AI outputs into human decisions, and human priorities into AI inputs. Invest in this capability before expanding AI tooling further. In most organisations, it is the binding constraint on the value AI can deliver.
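For readers who think in code, the three configurations above can be reduced to a purely illustrative decision sketch. The attribute names (`volume_high`, `judgment_essential`, and so on) are invented here to mirror the criteria stated in the text; they are not a formal methodology.

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    volume_high: bool               # many similar cases per period?
    well_defined: bool              # clear inputs, outputs, success criteria?
    exception_review_feasible: bool # can humans realistically review exceptions?
    consequences_significant: bool  # reputational, relational, legal stakes
    judgment_essential: bool        # contextual or ethical interpretation needed

def classify(p: Process) -> str:
    """Map a process to one of the three configurations described above."""
    if p.consequences_significant and p.judgment_essential:
        return "human-primary"    # AI informs; people decide
    if p.volume_high and p.well_defined and p.exception_review_feasible:
        return "ai-primary"       # humans set parameters and validate exceptions
    # Anything ambiguous is routed through an interface role for translation
    return "route-via-interface-role"

print(classify(Process("monthly forecasting", True, True, True, False, False)))
# ai-primary
```

The point of the sketch is the order of the checks: consequence and judgment questions are asked before volume and definition, which matches the article's insistence that human-primary work is identified first.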
Accountability as the architecture of trust
Accountability is not a governance formality—it is the structural condition that makes human–AI collaboration reliable. This is the foundation of effective governance design. When responsibility is unclear, partnership degrades, often invisibly, until something goes wrong.
Air France 447 (2009): automation disengaged mid-flight; pilots who had been monitoring rather than flying were unable to recover manual control.10 Not a failure of technology or competence, but of the human–automation relationship—accountability had never been clearly assigned for the conditions that eventually arose.
The 2018 Uber self-driving accident in Tempe, Arizona, reinforced the same point.11 Multiple layers of automation had been introduced, but active human oversight had not been maintained alongside them.
Before any AI-enabled process goes live, three things must be true:
1. Someone owns the objective the system is optimising for.
2. Someone is responsible for reviewing outputs against broader context on an ongoing basis.
3. Someone has the standing and information to intervene when something is wrong.
If any of these is unclear, the process is not ready to deploy.
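The three conditions above amount to a go/no-go gate, which can be sketched in a few lines of Python. The role labels and names are hypothetical, chosen only to make the check concrete.

```python
def deployment_ready(objective_owner, output_reviewer, intervener):
    """An AI-enabled process is ready only when all three accountability
    roles are filled by a named person (the three conditions above)."""
    roles = {
        "owns the objective": objective_owner,
        "reviews outputs in context": output_reviewer,
        "can intervene": intervener,
    }
    missing = [role for role, person in roles.items() if not person]
    return (len(missing) == 0, missing)

# Hypothetical example: the reviewer is named, but nobody can intervene.
ok, gaps = deployment_ready("A. Patel", "A. Patel", None)
# ok is False; gaps names the unfilled role
```

Note that one person may legitimately hold two of the roles; what the gate forbids is a role held by nobody.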
The most effective governance model combines central standards—tool approval, risk thresholds, audit and oversight—with genuine distributed authority for execution within those guardrails.
What this asks of leaders
Four concrete shifts in how leaders think and act:
1. Map before you deploy.
Chart where AI already sits across your decision and execution flows. For each instance, ask: was this positioning chosen or inherited? Is accountability clearly assigned? This reveals both where value is being left on the table and where risk is quietly accumulating.
2. Clarify ownership as a prerequisite, not an afterthought.
For every AI-enabled workflow, name the person responsible before the workflow goes live. Ownership means setting the objective, reviewing outputs, managing exceptions and answering for consequences. Where you cannot name that person, do not deploy.
3. Build interface capability first.
The limiting factor in most organisations is not access to AI tools; it is the number of people who can work effectively at the human–AI boundary. The HBS/BCG study found that the variance in outcomes between the best and worst human–AI pairings was greater than the variance between teams using AI and those not.12 The quality of the interface is the quality of the outcome.
4. Measure what matters.
Cost reduction is measurable and will attract attention, but it is an incomplete basis for evaluating AI. The more durable advantage lies in whether human–AI collaboration produces better decisions, surfaces risks earlier and enables faster iteration. Build those measures into how AI performance is evaluated.
References
- Dell'Acqua, F. et al. (2023). Navigating the Jagged Technological Frontier. HBS Working Paper No. 24-013. www.hbs.edu/faculty/Pages/item.aspx?num=64700
- Dell'Acqua et al. (2023), op. cit. www.hbs.edu/faculty/Pages/item.aspx?num=64700
- Alves, N. et al. (2026). PANORAMA study. The Lancet Oncology. www.thelancet.com/journals/lanonc/article/PIIS1470-2045(25)00567-4/abstract
- Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023). Reuters, 23 June 2023. www.reuters.com/legal/new-york-lawyers-sanctioned-using-chatgpt-cite-bogus-cases-2023-06-22/
- Peng, S. et al. (2023). The Impact of AI on Developer Productivity. arXiv:2302.06590. arxiv.org/abs/2302.06590
- Kasparov, G. (2010). The Chess Master and the Computer. New York Review of Books, 11 Feb. www.nybooks.com/articles/2010/02/11/the-chess-master-and-the-computer/
- Dastin, J. (2018). Amazon scraps AI recruiting tool that showed bias. Reuters, 10 Oct. www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
- Dell'Acqua et al. (2023), op. cit. www.hbs.edu/faculty/Pages/item.aspx?num=64700
- CNBC (2023). Morgan Stanley rolling out AI chatbot for 16,000 financial advisors. 18 Sep. www.cnbc.com/2023/09/18/morgan-stanley-chatgpt-financial-advisors.html
- Oliver, N., Calvard, T. & Potočnik, K. (2017). Flight AF447. Harvard Business Review, Sep. hbr.org/2017/09/the-tragic-crash-of-flight-af447-shows-the-unlikely-but-catastrophic-consequences-of-automation
- NTSB (2019). Uber self-driving vehicle collision, Tempe, Arizona. NTSB/HAR-19/03. spectrum.ieee.org/ntsb-investigation-into-deadly-uber-self-driving-car-crash-reveals-lax-attitude-toward-safety
- Dell'Acqua et al. (2023), op. cit. www.hbs.edu/faculty/Pages/item.aspx?num=64700