The diginomica network’s latest research lands a clear, uncomfortable verdict for enterprise technology teams: artificial intelligence is not primarily a technical deployment problem — it is a change management problem. Within an invitation‑only community of CIOs and CTOs, the report finds near‑ubiquitous experimentation with AI but a persistent gap between pilots and measurable business value. That gap is driven less by model architecture or API plumbing than by legacy data, poor governance, misaligned expectations across the C‑suite, and a failure to treat adoption as a sustained organizational program rather than a one‑off rollout.
Source: Diginomica, “the diginomica network research reveals change management, not tech, is biggest AI challenge. Can enterprises keep up?”
Background / Overview
The diginomica network research synthesizes discussions with senior IT leaders — a curated cohort of CIOs and CTOs running large, often global, enterprises. The headline figure reported inside that community is striking: 93% of network members say they have implemented some form of AI, from chatbots to advanced use cases like drug discovery. Yet the same community reports that AI often fails to meet the “elevated expectations” of boards and CEOs unless accompanied by rigorous change management, data work, and governance.
This is not a niche argument. Independent policy and industry studies cited in the same body of reporting show a broadly similar dynamic: many firms are experimenting, but far fewer have converted pilots into durable, auditable improvements in firm‑level productivity. That mismatch matters because it shapes procurement decisions, investor patience, and vendor viability across the AI stack.
Why change management, not tech, tops the list
The core claim: adoption is an organizational problem
CIOs in the diginomica network repeatedly framed AI as a fundamental rethinking of workflows and decision rights rather than a technology checkbox. The research quotes members saying that without sustained behaviour change, communications, and role redesign, a shiny Copilot or LLM integration will deliver only surface metrics — clicks, seats, or anecdotal wins — rather than true business outcomes. The observation is blunt: past technology waves (SaaS, cloud) frequently captured only 10–20% of potential value because organisations stopped at tool deployment and neglected follow‑through.
Why technical capability alone is insufficient
There are clear technical blockers — data quality, legacy integrations, model‑ready pipelines — but the painful insight from CIOs is that these are necessary conditions, not sufficient ones. An organisation can solve model hosting and latency yet still fail if frontline staff don’t change how they work, if KPIs aren’t redefined, or if governance leaves dangerous shadow uses unchecked. The diginomica playbook emphasizes treating adoption as a product: role‑based training, gate‑staged scaling, and human‑in‑the‑loop safeguards.
What the research actually reports (numbers to note)
- 93% of diginomica network members have implemented some form of AI. This reflects a high‑capability, early‑adopter cohort.
- Over half of respondents report that their initial AI efforts succeed only around half of the time, and a majority say outcomes frequently fall short of boardroom expectations. Expectation mismatch is a recurring theme.
- Some firms capture only 10–20% of the potential benefit from technology projects when change management is absent — an historical benchmark that CIOs fear repeating with AI.
Root causes: data, legacy systems, and organisational friction
Data quality is the de facto starting line
CIOs say the main technical barrier to successful AI adoption is poor data quality. Legacy systems, fragmented data sources, and inconsistent lineage make it hard to build model‑ready pipelines. The research and cross‑industry analysis both highlight the same refrain: without canonical data sources, feature stores, and clear ownership, models remain brittle and expensive experiments.
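The shape of that data work can be made concrete. Below is a minimal, illustrative sketch in plain Python (no particular data platform is assumed; the `Dataset` structure and `audit_datasets` helper are hypothetical, not a reference implementation) of the kind of gate CIOs describe: every dataset feeding a model should have a named owner, a canonical source of record, and recorded lineage before it enters a model‑ready pipeline.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Dataset:
    """Metadata a model-ready pipeline would expect for each input dataset."""
    name: str
    owner: str = ""              # accountable data owner (team or person)
    canonical_source: str = ""   # the agreed system of record, e.g. "ERP billing"
    lineage: List[str] = field(default_factory=list)  # upstream hops, oldest first

def audit_datasets(datasets: List[Dataset]) -> List[str]:
    """Return human-readable gaps that block a dataset from feeding a model."""
    gaps = []
    for ds in datasets:
        if not ds.owner:
            gaps.append(f"{ds.name}: no accountable owner")
        if not ds.canonical_source:
            gaps.append(f"{ds.name}: no canonical source of record")
        if not ds.lineage:
            gaps.append(f"{ds.name}: lineage not recorded")
    return gaps

# Example: a fragmented customer table fails the audit; a curated one passes.
findings = audit_datasets([
    Dataset(name="customer_master"),
    Dataset(name="orders_curated", owner="finance-data",
            canonical_source="ERP", lineage=["erp_extract", "orders_raw"]),
])
for finding in findings:
    print(finding)
```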
Legacy systems and integration drag
Many organisations have modernised selectively for resilience and continuity, but those upgrades were not always designed to enable AI‑first workflows. Connecting forecasting models, agentic systems, or copilots to transaction systems — CRM, ERP, billing — can require months of engineering and process redesign. That integration cost favours incremental pilots over transformational redesigns, creating a natural conservative bias.
Skills and talent shortages amplify the problem
The war for data engineers, ML engineers, and MLOps talent remains intense. Firms lack the specialist staff to move pilots into repeatable production, and many mid‑market companies cannot match the compensation or brand pull of hyperscalers and big tech. The result is that adoption becomes concentrated where talent, capital, and governance converge.
Leadership, governance and the C‑suite alignment problem
Misaligned expectations between CEOs and CIOs
CEOs increasingly treat AI as a lever to cut costs and accelerate productivity in a pressured macro environment. CIOs understand that pressure but also know that technology alone will not satisfy immediate ROI demands. The research describes a tension: boards want quick wins; IT leaders must temper hype and insist on staged plans, governance, and investment in people and data.
Confusion over terminology increases risk
Multiple senior executives reportedly conflate generative AI, agentic systems, and robotics. This sloppy language matters because it drives poor procurement choices, mismatched KPIs, and unrealistic timelines. Part of the CIO role has become decoding what leaders mean by “AI” and translating that into concrete risk/benefit assessments.
Boards and regulators must demand evidence, not demos
The diginomica research and independent policy reporting both urge boards to require KPIs, staged gating, and cost observability before authorising broad rollouts. Vendor demos and glossy case studies are inadequate substitutes for reproducible, auditable metrics tied to the business.
Learning from failure: why pilots that “fail” often aren’t dead ends
Rapid model evolution complicates POCs
Large Language Models and AI tools are improving fast. A proof‑of‑concept that fails today can succeed months later simply because the underlying models have improved. This dynamic creates friction with business stakeholders who expect consistency across trials. CIOs must design pilots and procurement with the timeline of model improvement in mind to avoid false negatives.
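One practical consequence is that evaluation harnesses should outlive individual pilots. The sketch below illustrates the idea under simple assumptions: keep the POC’s test cases and acceptance threshold fixed, then re‑score them as newer model versions become available, so a “failed” pilot can be revisited cheaply. The `score_case` callable and the model names are placeholders for whatever evaluation the pilot actually used.

```python
from typing import Callable, Dict, List

def rerun_poc(
    test_cases: List[dict],
    model_versions: List[str],
    score_case: Callable[[str, dict], float],  # returns a score in [0, 1]
    pass_threshold: float = 0.8,
) -> Dict[str, bool]:
    """Re-score the same POC test suite against each model version.

    A pilot that missed the threshold on an older model is not discarded;
    it is simply re-evaluated when a newer model version ships.
    """
    results = {}
    for version in model_versions:
        scores = [score_case(version, case) for case in test_cases]
        mean_score = sum(scores) / len(scores)
        results[version] = mean_score >= pass_threshold
    return results

# Placeholder scorer: in practice this would call the model and grade its output.
def fake_scorer(version: str, case: dict) -> float:
    return case["expected_score"] if version == "model-v2" else 0.5

verdicts = rerun_poc(
    test_cases=[{"expected_score": 0.9}, {"expected_score": 0.85}],
    model_versions=["model-v1", "model-v2"],
    score_case=fake_scorer,
)
print(verdicts)  # e.g. {'model-v1': False, 'model-v2': True}
```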
Failure modes are instructive
The research outlines two clear failure scenarios: (1) purchasing broad Copilot licences and rolling them out without workflow integration or training, and (2) running expensive pilots with no measurement framework or data plumbing, leaving results anecdotal. Both produce good usage metrics but no bottom‑line impact. Successful pilots are small, instrumented, and tied to business outcomes.
A practical playbook for CIOs and IT leaders (numbered roadmap)
1. Anchor pilots to measurable business outcomes. Define 2–4 high‑value use cases with explicit KPIs (time to revenue, error reduction, conversion uplift) and include control periods for causal measurement.
2. Harden data plumbing first. Audit canonical sources, fix lineage gaps, and create a model‑ready pipeline with versioned feature stores and clear data ownership.
3. Treat adoption as a product. Build role‑based training, instrument human‑in‑the‑loop review points, and operate adoption squads that measure and iterate.
4. Build pragmatic, operational governance. Create cross‑functional AI steering (legal, security, operations), standardise model docs, and establish SLAs that include explainability and rollback procedures.
5. Design for portability and cost observability. Separate data stores, vector storage, and model hosting. Implement inference chargeback, caps, and automated alerts to avoid runaway costs.
6. Phase scaling with gates. Move from pilot → bounded production → scaled production only after KPIs and operational readiness criteria are met. Avoid “forklift” rollouts of agentic systems (a minimal gate‑check sketch follows this list).
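To make the gating in steps 1 and 6 concrete, here is a minimal sketch in plain Python; the stage names, KPI names, and thresholds are hypothetical. The idea is simply that a pilot advances to the next stage only when its measured KPIs clear explicit thresholds and the operational readiness criteria are met.

```python
from dataclasses import dataclass
from typing import Dict

STAGES = ["pilot", "bounded_production", "scaled_production"]

@dataclass
class GateCriteria:
    kpi_thresholds: Dict[str, float]   # e.g. {"error_reduction_pct": 20.0}
    readiness_checks: Dict[str, bool]  # e.g. {"rollback_procedure": True}

def next_stage(current: str, kpis: Dict[str, float], criteria: GateCriteria) -> str:
    """Promote to the next stage only if every KPI threshold and readiness check passes."""
    kpis_ok = all(kpis.get(name, 0.0) >= target
                  for name, target in criteria.kpi_thresholds.items())
    ready = all(criteria.readiness_checks.values())
    idx = STAGES.index(current)
    if kpis_ok and ready and idx < len(STAGES) - 1:
        return STAGES[idx + 1]
    return current  # stay put until the gate is cleared

# Example: the pilot clears its KPI but rollback is not yet in place, so it does not advance.
stage = next_stage(
    "pilot",
    kpis={"error_reduction_pct": 27.0},
    criteria=GateCriteria(
        kpi_thresholds={"error_reduction_pct": 20.0},
        readiness_checks={"rollback_procedure": False, "cost_alerts": True},
    ),
)
print(stage)  # "pilot"
```

In practice the thresholds would come from the KPI definitions agreed in step 1, and the readiness checks from the governance and rollback criteria in step 4.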
Vendor implications and market risk
Vendors must change how they sell
The research signals a clear warning to AI vendors: selling point solutions to business lines without deep engagement with CIO/CTO functions is risky. If AI deployments repeatedly fail to deliver ROI, procurement budgets will shrink and investor patience will falter. Vendors will need to demonstrate measurable outcomes, technical portability, and contractual protections for data and compliance to remain credible.
Costing and procurement friction
Consumption‑based inference billing and seat licences can create unpredictable long‑term costs. CIOs are increasingly demanding observability tools, chargeback mechanisms, and contractual guarantees on data handling and non‑training clauses. Vendors that offer clear tools for observability and portability will be advantaged.
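The observability CIOs are asking for does not require exotic tooling; the essential pattern is to meter every inference call, attribute the spend to a consuming team, and alert when a budget cap is breached. The sketch below outlines that pattern in plain Python; the cost rates, team names, and alert mechanism are placeholder assumptions rather than any vendor’s actual billing model.

```python
from collections import defaultdict

class InferenceMeter:
    """Toy cost meter: attributes token spend to teams and enforces soft caps."""

    def __init__(self, cost_per_1k_tokens: float, monthly_cap_per_team: float):
        self.rate = cost_per_1k_tokens
        self.cap = monthly_cap_per_team
        self.spend = defaultdict(float)  # team -> accumulated cost this month

    def record(self, team: str, tokens: int) -> bool:
        """Record a call; return False (and alert) if the team's cap is exceeded."""
        self.spend[team] += (tokens / 1000) * self.rate
        if self.spend[team] > self.cap:
            print(f"ALERT: {team} exceeded its monthly inference cap "
                  f"({self.spend[team]:.2f} > {self.cap:.2f})")
            return False
        return True

    def chargeback_report(self) -> dict:
        """Per-team spend, suitable for a monthly chargeback statement."""
        return dict(self.spend)

meter = InferenceMeter(cost_per_1k_tokens=0.01, monthly_cap_per_team=50.0)
meter.record("claims-ops", tokens=120_000)     # within cap
meter.record("claims-ops", tokens=6_000_000)   # triggers the alert
print(meter.chargeback_report())
```

A real deployment would persist these records and feed them into existing FinOps tooling, but the chargeback‑plus‑cap pattern is the core ask.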
Can enterprises keep up? A sober prognosis
Enterprises can keep up with AI — but only with deliberate, sustained programs that couple technology with organisational change. The favourable scenario requires four conditions:
- Executive alignment on realistic timelines and measurable KPIs.
- Investment in data foundations and MLOps to make pilots repeatable.
- A productised approach to adoption that includes role redesign, training, and human‑in‑the‑loop processes.
- Pragmatic governance and cost controls that reduce regulatory and budgetary surprise.
Risks and unresolved areas (what to watch)
- Hallucination and provenance: Generative outputs require robust grounding and provenance to be trusted in decision workflows. Enterprises must bake evidence‑return and traceability into systems (a minimal sketch follows this list).
- Environmental and infrastructure costs: Large models increase compute and energy needs. TCO models must include sustainability and power considerations for scaled deployments.
- Talent concentration: Continued demand for specialised AI skills may centralise capability among a few firms, leaving gaps for mid‑market players. Workforce planning and reskilling will be critical.
- Vendor claims vs. reproducibility: Many vendor case studies need independent validation; procurement should insist on reproducible benchmarks and data on methodology.
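On the hallucination and provenance point above, one concrete pattern is to refuse to pass a generated answer into a decision workflow unless it carries citations back to retrievable sources. The sketch below is a hedged illustration of that contract; the `GroundedAnswer` structure and the acceptance rule are hypothetical, not a description of any particular product.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    source_id: str   # e.g. a document ID in the retrieval store
    excerpt: str     # the passage the claim is grounded in

@dataclass
class GroundedAnswer:
    text: str
    evidence: List[Evidence] = field(default_factory=list)

def accept_for_decision_workflow(answer: GroundedAnswer) -> bool:
    """Pass answers downstream only if they carry at least one piece of
    traceable evidence; otherwise route them to human review."""
    return len(answer.evidence) > 0

ungrounded = GroundedAnswer(text="The contract renews automatically in March.")
grounded = GroundedAnswer(
    text="The contract renews automatically in March.",
    evidence=[Evidence(source_id="contracts/acme-2024.pdf",
                       excerpt="...renews on 1 March unless notice is given...")],
)
print(accept_for_decision_workflow(ungrounded))  # False -> human review
print(accept_for_decision_workflow(grounded))    # True  -> eligible for automation
```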
Practical checklist for getting unstuck (one‑page action plan)
- Audit your data: map canonical sources, owners, retention rules, and lineage gaps.
- Choose high‑impact pilots with clear KPIs and control periods.
- Build an adoption squad: training, role redesign, and feedback loops.
- Enforce governance: per‑agent data restrictions, audit trails, and human‑in‑the‑loop gates (see the sketch after this checklist).
- Control costs: implement inference chargeback and consumption caps.
- Require vendor reproducibility and contract protections for data use.
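For the governance item above, the enforceable core is usually small: a per‑agent allow‑list of data domains, an append‑only audit record of every access attempt, and escalation to human review for anything outside policy. The following minimal sketch uses hypothetical agent and domain names and an in‑memory log purely for illustration.

```python
from datetime import datetime, timezone

# Hypothetical per-agent allow-lists of data domains.
AGENT_POLICIES = {
    "support-copilot": {"crm", "knowledge_base"},
    "finance-agent": {"billing", "erp"},
}

audit_log = []  # in practice: an append-only store, not an in-memory list

def request_access(agent: str, domain: str) -> bool:
    """Allow access only within the agent's policy; log every attempt either way."""
    allowed = domain in AGENT_POLICIES.get(agent, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "domain": domain,
        "allowed": allowed,
    })
    if not allowed:
        print(f"{agent} blocked from '{domain}': escalating to human review")
    return allowed

request_access("support-copilot", "crm")      # permitted and logged
request_access("support-copilot", "billing")  # blocked, logged, escalated
print(audit_log)
```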
Conclusion
The diginomica network research delivers a pragmatic, timely lesson: enterprise AI will be won or lost not in the model benchmarks or vendor demo rooms but in the daily practice of change management. CIOs and CTOs who combine clean data foundations, measured pilots, productised adoption, and operational governance will translate AI’s promise into measurable business outcomes. Those who treat AI as a point‑solution purchase risk repeating the same shallow adoption cycle that left earlier technology waves far short of their potential. The opportunity remains enormous, but capturing it is an organizational discipline as much as a technical one — and that is a challenge enterprises must treat as their central priority.
Source: Diginomica, “the diginomica network research reveals change management, not tech, is biggest AI challenge. Can enterprises keep up?”


