Reckitt’s move from isolated AI experiments to a truly operational, enterprise-grade program is a turning point for consumer-packaged-goods (CPG) firms wrestling with fragmented data, distributed teams, and lofty sustainability targets. The company rebuilt its data plumbing on Microsoft Azure, layered Copilot and Power BI-driven analytics on top, and then scaled agentic AI and SKU-level carbon analytics into daily workflows—delivering what vendor and partner accounts describe as major efficiency and measurement gains while deliberately maintaining human accountability for decisions.
Background
Reckitt is the maker of large global brands such as Lysol, Mucinex, Dettol and Durex. Like many multinational CPG companies, Reckitt confronted two linked problems: data and scale. Years of acquisitions, regional IT stacks and brand-specific reporting left consumer, sales, media and operations data fragmented and of inconsistent quality. Reckitt’s executive team concluded that layering generative AI on top of this fractured data would simply amplify errors and organizational friction, not fix them. Their alternative: rebuild the data layer first and treat generative AI as a decision‑acceleration technology rather than an autonomous decision-maker. The program has three visible pillars:
- A centralized, Azure‑based data platform that unifies cross-brand and cross-region data.
- Embedded analytics and assistant capabilities (Power BI + Copilot) that let non‑technical teams query and act on enterprise data in natural language.
- Agentic AI and specialized automation for targeted, repeatable workflows in marketing and sustainability (product‑level carbon accounting).
Data first: building an AI‑ready decision layer
Reckitt’s foundational decision—fix the data before scaling AI—is the most consequential practical choice many organizations face. The company consolidated roughly 35 integrated data sources onto Microsoft Azure and implemented canonical schemas and governance that make retrieval‑augmented generation (RAG) and Copilot grounding reliable at scale. That shift converts data from a messy obstacle into a reproducible input to downstream AI agents. Key technical decisions that enabled the rebuild:
- Centralized storage and unified schemas on Azure to ensure consistent entity definitions (product, channel, geography).
- A retrieval layer that attaches provenance, access control and confidence metadata to any artifact supplied to Copilot or other models.
- Tight governance and data‑quality tooling—lineage, schema validation and monitoring—so that automated insights can be traced and audited.
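The retrieval pattern described above can be sketched in miniature. This is a hypothetical illustration, not Reckitt's implementation: each artifact handed to a model carries provenance, access-control and confidence metadata, and a filter enforces permissions and a confidence floor before anything reaches the grounding step. All class, field and record names are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: wrap each retrieved artifact with provenance,
# access-control and confidence metadata before handing it to a model.
@dataclass
class GroundedArtifact:
    content: str
    source_system: str      # e.g. a canonical sales or media dataset
    lineage_id: str         # pointer back to the pipeline run that produced it
    allowed_roles: set = field(default_factory=set)
    confidence: float = 1.0
    retrieved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def filter_for_user(artifacts, user_roles, min_confidence=0.7):
    """Keep only artifacts the user may see that meet a confidence floor."""
    return [
        a for a in artifacts
        if a.allowed_roles & user_roles and a.confidence >= min_confidence
    ]

docs = [
    GroundedArtifact("Q3 sell-out by SKU", "sales_canonical", "run-42",
                     allowed_roles={"marketing", "insights"}, confidence=0.95),
    GroundedArtifact("Supplier cost sheet", "procurement_canonical", "run-17",
                     allowed_roles={"procurement"}, confidence=0.9),
]

# Only the artifact permissioned for the marketing role survives the filter.
visible = filter_for_user(docs, {"marketing"})
print(len(visible))  # 1
```

The design point is that access control and auditability are properties of the retrieval layer itself, so every model output can be traced back to permissioned, lineage-tagged sources.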
Embedding Copilot and Power BI: natural language for business analytics
Instead of delivering one‑off, centralized reports, Reckitt embedded Copilot functionality into Power BI to let marketers and insights teams query the unified dataset in natural language. The reported result: a substantial uplift in marketing efficiency and a dramatic reduction in time spent on routine analysis. Microsoft’s published case study with Reckitt reports a roughly 60% improvement in marketing efficiency, reductions of up to 90% in routine analytics time, and materially faster concept‑to‑decision cycles. Those figures came alongside internal validation that AI‑generated insights performed at least as well as traditional research outputs when benchmarked historically. Practically, this means:
- Marketers can ask conversational questions and get data‑grounded explanations rather than waiting for bespoke dashboards.
- The analytics team spends less time preparing routine reports and more time on interpretation and strategic analysis.
- The organization reduced friction between data producers and consumers by shifting to a human‑in‑the‑loop model where AI surfaces signals for human judgment rather than making autonomous calls.
Scaling AI across global marketing: narrow use cases, broad adoption
A frequently repeated pitfall in enterprise AI is “pilot fatigue”—many experiments that never scale because they weren’t designed from day one for governance, deployment and end‑user enablement. Reckitt took the opposite approach: identify high‑frequency, low‑value tasks through time‑and‑motion studies, then deploy agentic automation narrowly to remove the repetitive steps. The result was adoption at scale—more than 500 marketers globally using agentic assistants in daily workflows by 2025—and measured time savings reported in brand tracking and performance analysis. Public reporting credits 20%–40% time reductions in these areas. What made this scale possible:
- Targeted problem selection: automation for tasks that repeat often and have clear KPIs.
- Governance and training baked into the rollout: role‑based onboarding, champions networks and standards for human sign‑off.
- Framing AI as a capacity multiplier, explicitly avoiding headcount-reduction narratives and instead redeploying human effort toward insight, creativity and consumer understanding.
Product-level emissions: moving from averages to SKU precision
Reckitt’s most technically ambitious and arguably most strategic AI deployment sits in sustainability. Scope 3 emissions (the emissions embedded in supply chains and product use) typically represent the majority of a CPG company’s footprint, but they’re notoriously hard to measure at product granularity. Reckitt partnered with CO2 AI and Quantis to automate and scale product‑level footprinting across roughly 25,000 products, moving from a handful of representative averages to a SKU‑level model in under four months. The partners report a 75x improvement in footprint accuracy versus prior methods, enabling minute‑level calculation that once took months of manual modeling. Why SKU‑level data changes the game:
- It surfaces which ingredients, suppliers, or packaging choices drive emissions for specific SKUs.
- It enables targeted supplier engagement, packaging redesign and reformulation decisions with a quantified emissions impact.
- It turns sustainability strategy from headline targets into actionable, product‑level roadmaps that can be measured against net‑zero milestones (Reckitt’s 2030 and 2040 targets).
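The core mechanic of SKU-level footprinting is a roll-up: multiply each input's mass by an emission factor and sum across the bill of materials. The toy below illustrates only that mechanic; real platforms such as CO2 AI combine supplier activity data with vetted emission-factor databases, and every factor, ingredient and quantity here is invented for the example.

```python
# Illustrative only: toy emission factors in kg CO2e per kg of input.
# Real factors come from supplier data and scientific databases, not this dict.
EMISSION_FACTORS = {
    "surfactant": 2.1,
    "fragrance": 4.8,
    "pet_bottle": 3.3,
}

def sku_footprint(bill_of_materials):
    """Sum mass * emission factor over every input in a SKU's bill of materials."""
    return sum(
        mass_kg * EMISSION_FACTORS[ingredient]
        for ingredient, mass_kg in bill_of_materials.items()
    )

# A hypothetical SKU: masses in kg per unit sold.
sku = {"surfactant": 0.10, "fragrance": 0.01, "pet_bottle": 0.05}
print(round(sku_footprint(sku), 3))  # 0.10*2.1 + 0.01*4.8 + 0.05*3.3 = 0.423
```

Even in this toy form, the value of SKU granularity is visible: changing one line of the bill of materials (say, a lighter bottle) immediately changes a quantified, per-product number that can drive supplier and packaging decisions.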
Partners and ecosystem: Microsoft, TTEC Digital, CO2 AI, Quantis
Reckitt did not go it alone. The program relied on a set of external partners who provided technology, systems integration and domain expertise:
- Microsoft supplied the Azure data platform, Azure OpenAI capabilities and Copilot for Power BI—forming the backbone for the insights generator and analyst assistants. Microsoft’s customer story with Reckitt documents the marketing efficiency and task‑time reductions cited above.
- TTEC Digital supported the migration and modernization of marketing operations—moving workflows onto a modern martech stack and embedding AI‑driven insight generation across global campaign planning and execution. TTEC Digital’s own materials describe a broad modernization that aligns with the claims of unified campaign execution across many markets.
- CO2 AI and Quantis provided the domain‑specific platform and scientific methods to convert supplier activity data and emissions factors into SKU‑level carbon accounting at scale. Both partners publish case materials describing the 25,000‑SKU project and its outcomes.
Verifying the numbers: what’s corroborated and what needs caution
Several of Reckitt’s headline metrics are documented by multiple parties; others require cautious reading.
What is corroborated by more than one credible account:
- The Azure / Copilot + Power BI program and the approximate magnitude of efficiency gains and time reductions are described in Microsoft’s customer story and echoed in trade coverage. These accounts together show consistent direction and scale for the reported marketing productivity improvements.
- The product‑level emissions exercise—25,000 SKUs, four‑month turnaround, and the 75x accuracy improvement—is described in both CO2 AI’s customer materials and Quantis’ case study. The project scope and speed are therefore corroborated by at least two independent partners who participated directly.
- The global marketing modernization partnership with TTEC Digital and the movement to Dynamics 365 / unified campaign execution is reflected in TTEC Digital’s case communications and third‑party posts by participants.
What needs caution:
- Vendor and partner case studies are valuable but may emphasize best‑case metrics drawn from controlled pilots or internal benchmarks. Independent, academic or third‑party audits would further increase confidence—especially for high‑impact claims around absolute percentage improvements (for example, exactly 60% efficiency gains). Treat published percentages as strong indicators of impact, not immutable fact, until third‑party replication exists.
- The Digiday‑reported figures (20%–40% time savings) are directionally consistent with the company’s claims, but external press coverage can sometimes synthesize multiple internal measures into a headline range. Confirm local KPIs in any replication.
Governance, safety and the deliberate human‑in‑the‑loop design
Reckitt’s explicit decision not to let AI make autonomous enterprise decisions is a key operational control. Instead, AI surfaces correlations, anomalies and prioritised signals while humans retain final judgment and accountability. This design reduces the risk of automation bias and regulatory exposure in domains like product claims, advertising compliance and sustainability reporting. Essential governance measures observed in Reckitt’s approach:
- Role‑based access controls and tenant scoping for Copilot grounding to ensure outputs only draw from permissioned sources.
- Logging and audit trails for prompts, outputs and decision points—critical for traceability and internal/external audits.
- Human approval gates for any externally facing content or legally consequential outputs.
- Ongoing monitoring of model drift, confidence scores and operational KPIs to detect erosion of performance and surface needed retraining or data fixes.
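The approval-gate and audit-trail measures above can be sketched together. This is a hedged, minimal illustration of the human-in-the-loop pattern, not Reckitt's actual tooling: every AI draft is logged with a timestamp, and nothing externally facing is released until a named human approver is recorded.

```python
from datetime import datetime, timezone

# Minimal sketch of an approval gate with an audit trail. In production this
# would be a durable, queryable log, not an in-memory list.
AUDIT_LOG = []

def log_event(event_type, **payload):
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "type": event_type,
        **payload,
    })

def release_content(draft, approver=None):
    """Treat AI output as a draft until a human approver is recorded."""
    log_event("draft_generated", draft=draft)
    if approver is None:
        log_event("held_for_review", draft=draft)
        return None  # nothing ships without human sign-off
    log_event("approved", draft=draft, approver=approver)
    return draft

print(release_content("New product claim copy"))           # None: held for review
print(release_content("New product claim copy", "j.doe"))  # released with sign-off
print([e["type"] for e in AUDIT_LOG])
```

The same log feeds the monitoring loop: reviewers' hold/approve decisions over time are exactly the signal needed to detect drift in output quality and trigger retraining or data fixes.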
Organizational impact: changing jobs, not just headcount
Reckitt framed the rollout as a productivity and capacity program rather than a workforce reduction exercise. That framing is significant: by automating reporting and routine analysis, the company expects to redeploy human work toward interpretation, creativity and consumer understanding—areas where human judgment retains a comparative advantage. The company invested in training, role‑based onboarding and governance to accelerate adoption and reduce resistance. Operational effects reported or implied:
- Shorter concept‑development timelines and faster asset adaptation/localization for marketing.
- Reduced time burden on analysts for post‑campaign measurement, freeing them to engage in higher‑value tasks.
- Faster supplier and procurement engagement on emissions hotspots due to SKU‑level transparency—an action stream that moves sustainability from strategy into supply‑chain negotiations.
Technical sketch: how the pieces fit together
At a high level, Reckitt’s stack can be represented as:
- Data lake and canonical schemas on Microsoft Azure (OneLake/Fabric or equivalent patterns).
- ETL/data engineering pipelines that pull consumer, sales, media and operational data into the canonical layer, with lineage and validation.
- Retrieval layer and vector indices that attach provenance and access controls for Copilot and other models.
- Copilot for Power BI and custom “insights generator” logic that synthesizes multi‑source signals into marketer‑friendly outputs.
- Agent orchestration and domain‑specific pipelines for marketing automation and SKU‑level emissions modeling (the latter integrating CO2 AI’s platform and Quantis’ scientific methods).
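The layers above can be mimicked end-to-end in miniature. This is a deliberately naive sketch under stated assumptions: canonical records stand in for the data lake, keyword overlap stands in for a vector index, and a templated answer with provenance stands in for Copilot-style synthesis. All record text, ids and source names are invented.

```python
# Toy "canonical layer": records carry text plus provenance identifiers.
CANONICAL = [
    {"id": "sales-001", "text": "Dettol sell-out rose in region EMEA",
     "source": "sales_canonical"},
    {"id": "media-007", "text": "Durex campaign reach grew on social",
     "source": "media_canonical"},
]

def retrieve(query, k=1):
    """Rank records by shared keywords; a real vector index replaces this."""
    terms = set(query.lower().split())
    scored = sorted(
        CANONICAL,
        key=lambda r: len(terms & set(r["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def synthesize(query):
    """Ground a templated answer in the top record, citing its provenance."""
    hit = retrieve(query)[0]
    return f"{hit['text']} (source: {hit['source']}, id: {hit['id']})"

print(synthesize("How did Dettol sell-out change?"))
```

The point of the sketch is the shape, not the scoring: every synthesized answer carries a pointer back through the retrieval layer into the canonical data, which is what makes the outputs auditable.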
Practical lessons and recommendations for IT leaders
Reckitt’s story offers concrete takeaways for CIOs and CMOs planning similar moves:
- Start with data readiness: invest in canonical schemas, provenance and schema validation before wide agent deployment. Without this, LLM outputs amplify existing data defects.
- Narrow use cases first: choose high-frequency, repetitive tasks where time savings are easy to measure and quality controls can be put in place. Use time‑and‑motion studies to identify these candidates.
- Design governance from day one: build human‑in‑the‑loop workflows, enforce role‑based approvals and log prompts/outputs to create an audit trail.
- Partner for domain expertise: rely on specialist partners for complex domains (e.g., Quantis for emissions science) rather than attempting to re‑engineer scientific models in-house.
- Measure and publish KPIs: convert time saved into business outcomes (campaign throughput, time to market, supplier interventions) and validate vendor claims in your context.
A pragmatic rollout sequence:
- Audit data sources and build canonical entity models.
- Implement retrieval/provenance for model grounding.
- Pilot Copilot + Power BI in one region or brand for marketing analytics.
- Instrument results, iterate, and train users.
- Expand to agentic assistants with governance, then to adjacent domains (sustainability, supply chain).
Risks, unknowns and where to be skeptical
No mature enterprise AI program is risk‑free. The main risks to monitor include:
- Data sprawl and permission creep: AI can make existing permission problems worse if access boundaries aren’t tightened.
- Vendor lock‑in and consumption costs: heavy reliance on a single cloud + agent framework concentrates risk and may escalate bills as usage scales.
- Over‑trust in automation: even good systems produce low‑probability errors; a disciplined human review for high‑impact outputs remains essential.
- Regulatory scrutiny on sustainability claims: SKU‑level emissions numbers will attract attention from regulators, auditors and NGOs—transparency in methodology and external validation are critical.
- The precise, auditable methodology used to compute the “75x” accuracy gain in footprinting should be documented in an independent audit for maximum credibility. Public partner materials align on the improvement magnitude, but independent verification would strengthen the claim.
Why Reckitt matters as a model for other enterprises
Two attributes make Reckitt’s program notable beyond the CPG sector:
- The program ties agentic AI adoption to a platform‑level investment in data and governance. This data‑first, governance‑always posture turns AI from a marketing novelty into a durable operational capability.
- The sustainability use case proves that AI can accelerate compliance and corporate responsibility by converting slow, manual modeling into rapid, verifiable levers for supplier engagement and product redesign. SKU‑level emissions analytics is a game‑changer for companies that must show real decarbonization pathways.
Reckitt’s leap from pilots to daily, enterprise use of generative AI is not accidental. It rests on deliberate infrastructure choices, narrowly targeted agentic automation, partner orchestration, and a governance posture that keeps humans accountable. The measurable productivity gains and the ability to compute SKU‑level emissions at scale show how AI can shift both go‑to‑market velocity and sustainability rigor for large consumer firms. But the model is not plug‑and‑play: it requires sustained investment in data quality, operational governance and independent validation of headline claims before organizations can confidently translate pilot wins into durable enterprise advantage.
Source: PYMNTS.com, “Reckitt Moves Beyond AI Pilots to Daily Enterprise Use”