Satya Nadella’s public praise for Swiggy’s tech stack crystallizes a turning point in how on-demand delivery platforms are being redesigned around real‑time data and generative AI — and it offers a practical blueprint for other enterprises chasing low‑latency operational intelligence at scale. On December 11, 2025, Microsoft’s CEO highlighted Swiggy’s use of Microsoft Fabric and its Real‑Time Intelligence capabilities to process “billions of data points in near real time,” calling the deployment “a really great use case.” That endorsement pulls back the curtain on a substantive engineering shift inside one of India’s largest delivery networks: moving from batch refreshes and lagging dashboards to a streaming, event‑first architecture that integrates telemetry, inventory, routing, fraud detection, and conversational AI into a single operational surface.
Background
Swiggy is a major player in India’s on‑demand economy, operating food delivery, quick‑commerce (Instamart), and multiple adjacent services across hundreds of cities. Over the past year the company reported substantial volumes across its platform and has been experimenting with automation and AI to handle the inherent volatility of urban logistics: hyper‑local demand spikes, traffic disruptions, inventory churn at dark stores, coupon misuse, and the complexity of coordinating hundreds of thousands of delivery partners.
Microsoft’s response to these operational needs is Microsoft Fabric, a consolidated data and analytics platform that includes a Real‑Time Intelligence (RTI) workload designed for event‑driven scenarios. RTI brings together high‑throughput ingestion, indexed event storage, low‑latency queries, geospatial mapping, and an “activator” rule engine that can trigger alerts or automated actions. Swiggy’s public case — validated by both Microsoft’s customer story and multiple news reports — illustrates RTI in production: sensor‑level and application telemetry flow into Fabric, are indexed in eventhouses, analyzed with Kusto‑style queries, and then fed to downstream workflows and conversational agents powered by Azure OpenAI Service.
Overview: Microsoft Fabric and Real‑Time Intelligence
What Microsoft Fabric delivers for operational teams
Microsoft Fabric is presented as a unified platform that bridges data engineering, lakehouse analytics, and real‑time event processing. Its Real‑Time Intelligence workload specifically targets the gap between streaming telemetry and business action.
Key platform primitives that matter for operational use cases:
- Eventstreams — connectors and pipelines to ingest streaming sources (Kafka, Event Hubs, CDC feeds, IoT).
- Eventhouses — time‑partitioned, indexed stores optimized for high‑cardinality event queries.
- Activator — a rules and actions engine that triggers workflows, notifications, or API calls when patterns are detected.
- Maps & Geospatial — built‑in mapping and spatial layers for live operational visualization and routing awareness.
- OneLake integration — central catalog and sharing layer to combine streaming telemetry with historical lakehouse data.
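The eventhouse-plus-activator pattern these primitives describe can be sketched in miniature. The toy Python model below is purely illustrative — the class names and methods are hypothetical and do not reflect Fabric's actual API — but it shows the core idea: a time-partitioned, indexed event store queried by a rule engine that fires actions when a predicate matches fresh events.

```python
import time
from collections import defaultdict

class MiniEventhouse:
    """Toy indexed event store: events bucketed by (stream, minute)."""
    def __init__(self):
        self.index = defaultdict(list)

    def ingest(self, stream, event):
        bucket = int(event["ts"] // 60)          # one-minute time partition
        self.index[(stream, bucket)].append(event)

    def query(self, stream, since_ts):
        start = int(since_ts // 60)
        out = []
        for (s, bucket), events in self.index.items():
            if s == stream and bucket >= start:
                out.extend(e for e in events if e["ts"] >= since_ts)
        return out

class MiniActivator:
    """Toy rule engine: runs predicates over fresh events, fires actions."""
    def __init__(self, store):
        self.store, self.rules = store, []

    def add_rule(self, stream, predicate, action):
        self.rules.append((stream, predicate, action))

    def evaluate(self, since_ts):
        fired = []
        for stream, predicate, action in self.rules:
            events = self.store.query(stream, since_ts)
            if events and predicate(events):
                fired.append(action(events))
        return fired

# Wire up an out-of-stock rule over a hypothetical inventory stream.
store = MiniEventhouse()
act = MiniActivator(store)
act.add_rule(
    "inventory",
    predicate=lambda evs: any(e["stock"] == 0 for e in evs),
    action=lambda evs: "hide_item:" + [e["sku"] for e in evs if e["stock"] == 0][0],
)
now = time.time()
store.ingest("inventory", {"ts": now, "sku": "MILK-1L", "stock": 0})
print(act.evaluate(since_ts=now - 60))   # ['hide_item:MILK-1L']
```

The real platform adds durability, scale-out, and a query language on top, but the shape — index by time, query only the fresh window, act on matches — is the same.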
Why real‑time matters for food delivery
Delivery promises are short — often under 30 minutes for restaurant orders and under 10 minutes for ultra‑fast formats. In this world:
- A 5–10 minute dashboard lag can mean customers ordering items that have just gone out of stock.
- Traffic slowdowns or sudden cancellations ripple across tens of thousands of orders in minutes.
- Discount coupon misuse, if only detected after a batch refresh, can escalate into substantial financial leakage.
How Swiggy is using Fabric: features and implementations
Streaming the business: telemetry, orders, inventory, and road conditions
Swiggy ingests a wide range of streams into Fabric:
- Live order events with timestamps and status transitions.
- Rider telemetry including GPS traces and status updates.
- Dark‑store inventory and POS change streams (stock in/out).
- External signals like traffic data, weather, and road closures.
- Promotion and coupon redemption events.
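Joining streams like these reliably depends on a standardized event envelope with shared identifiers, which is also step 3 of the implementation guide later in this piece. A minimal sketch of such a schema — the field names here are illustrative assumptions, not Swiggy's actual format — might look like:

```python
from dataclasses import dataclass, field, asdict
import time
import uuid

@dataclass(frozen=True)
class OrderEvent:
    """Standardized envelope so joins across streams stay reliable."""
    order_id: str       # shared identity key across order, rider, and store streams
    status: str         # e.g. PLACED, PICKED_UP, DELIVERED (hypothetical values)
    store_id: str
    ts: float = field(default_factory=time.time)              # epoch seconds, UTC
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # dedup key
    schema_version: int = 1   # lets consumers evolve without breaking old readers

ev = OrderEvent(order_id="ORD-42", status="PLACED", store_id="DS-7")
payload = asdict(ev)          # plain dict, ready to serialize onto the stream
print(payload["order_id"], payload["schema_version"])
```

Carrying an explicit `event_id` and `schema_version` on every event makes downstream deduplication and gradual schema migration far less painful than retrofitting them later.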
Real examples of RTI in action
- Inventory updates: Stock levels at dark stores are reflected almost immediately on the customer app. If an item becomes scarce, the UI can show extended ETA or prevent orders for that item, reducing failed fulfillment and customer frustration.
- Coupon misuse detection: Fabric RTI allows Swiggy to spot anomalous spikes in the use of specific discount codes and to proactively pause or revoke them before losses escalate.
- Dynamic routing and rider allocation: When an area sees a sudden surge of orders, RTI dashboards surface the trend and the Activator can reassign riders or create micro‑hubs, improving match rates and delivery times.
- Operational dashboards: Teams now see near‑real‑time operational state rather than waiting for batch refreshes that once lagged five to ten minutes.
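The coupon-misuse case above is, at its core, sliding-window anomaly detection over redemption events. The following sketch (an illustrative approach, not Swiggy's disclosed method) flags a coupon when redemptions in a recent window exceed a multiple of its historical per-window baseline:

```python
from collections import deque

class CouponAnomalyDetector:
    """Flags a coupon when redemptions in a sliding window exceed
    a multiple of its historical per-window baseline."""
    def __init__(self, window_s=60, baseline=5, spike_factor=4.0):
        self.window_s = window_s
        self.threshold = baseline * spike_factor   # e.g. 5 * 4.0 = 20 per window
        self.events = {}                           # coupon -> deque of timestamps

    def record(self, coupon, ts):
        q = self.events.setdefault(coupon, deque())
        q.append(ts)
        while q and ts - q[0] > self.window_s:     # evict redemptions outside window
            q.popleft()
        return len(q) > self.threshold             # True => pause the coupon

det = CouponAnomalyDetector(window_s=60, baseline=5, spike_factor=4.0)
flagged = False
for i in range(25):                                # 25 redemptions in 25 seconds
    flagged = det.record("SAVE50", ts=float(i)) or flagged
print(flagged)                                     # True
```

In a streaming deployment this predicate would run inside the rule engine, with the flag triggering an automatic pause plus an alert for human review rather than an irreversible revocation.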
Conversational AI: Azure OpenAI Service and ‘Driver Dost’
Swiggy pairs RTI with generative AI using Azure OpenAI Service. Two concrete bot classes emerged:
- Customer support bots — automated handlers that answer common queries like “Where is my order?” with contextual, real‑time information pulled from streaming telemetry.
- Driver Dost — a tool for delivery partners offering onboarding assistance, earnings visibility, route guidance, and quick answers to operational questions.
Verified claims, numbers and quotes
- The CEO of Microsoft publicly recognized Swiggy’s deployment of Microsoft Fabric, describing it as a strong example of near‑real‑time data processing used to drive delivery innovations.
- Swiggy’s customer volume and operational scale — including daily order volumes, monthly transacting users, and the number of delivery partners and dark stores at the time of these reports — were provided in company and partner communications and widely reported by business outlets. These figures were presented by Swiggy and Microsoft as contextual metrics for the case study.
- Operational improvements described by Swiggy’s technology leadership include reducing dashboard lag (previously up to roughly 5–10 minutes) and shifting to near‑real‑time detection and update cycles, enabling faster inventory updates and coupon‑fraud detection.
Why this architecture works for high‑volume delivery platforms
Strengths and tangible benefits
- Latency reduction — moving intelligence from batch to stream lowers detection and response times from minutes to seconds, which is material in logistics.
- Operational visibility — indexed eventhouses and real‑time dashboards give operations teams a live view of the system’s state.
- Actionable automation — activator rules can automate routine interventions (pause coupon, alert store manager, reroute riders), reducing manual toil.
- Unified platform — consolidating telemetry, lakehouse analytics, and AI under one fabric reduces integration friction and duplication.
- Conversational interfaces — generative AI makes complex telemetry usable to non‑technical staff and to riders in the field.
- Scalability — cloud scale makes it feasible to ingest and query billions of events while retaining a single source of operational truth.
Business outcomes
- Fewer failed orders and fewer disappointed customers thanks to accurate live inventory and ETA messaging.
- Lower promotional leakage, since misused coupons are disabled or rescinded in real time rather than after a batch refresh.
- Reduced contact center costs during peaks because chatbots handle common queries.
- Faster operational learning loops — teams can run experiments (pricing, routing) and observe effects with minimal delay.
Critical analysis: risks, tradeoffs, and practical limitations
No architecture is a silver bullet. The same strengths that make a streaming RTI model powerful also introduce new operational and ethical challenges.
Vendor lock‑in and portability
- Building deep operational logic inside a single vendor’s platform can accelerate development but increases switching costs.
- Eventhouses, activator rules, and platform‑specific query languages may not map cleanly to competitor clouds or open‑source stacks.
Cost and operational complexity
- Real‑time ingestion and indexing at scale can be expensive if not carefully architected. High cardinality eventhouses, long retention windows, or excessive replication multiply cost.
- Teams need to manage data egress, streaming pipeline health, and failure modes that weren’t pertinent in a batch‑based setup.
Data privacy and residency
- Real‑time telemetry often includes sensitive metadata (customer locations, rider identity, transaction timestamps). Local data residency and privacy laws may constrain where and how this data is stored and processed.
- Integrating generative models with real‑time user data raises questions about PII leakage in model outputs.
Model governance and safety
- Generative AI that answers “Where is my order?” must avoid hallucination. When models are combined with streaming telemetry, the system must ensure outputs are grounded to live state.
- Incorrect or biased routing logic could systematically disadvantage certain neighborhoods or riders, with reputational and regulatory consequences.
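One common mitigation for the hallucination risk is to keep the generative model out of the factual path entirely: fetch the authoritative live state, check its freshness, and template the answer from verified fields only. The sketch below is a hypothetical guardrail pattern, not Swiggy's actual implementation; the lookup table stands in for a query against the indexed event store.

```python
import time

# Hypothetical live-state lookup; in production this would query
# the indexed event store, not an in-memory dict.
ORDER_STATE = {
    "ORD-42": {"status": "OUT_FOR_DELIVERY", "eta_min": 12,
               "updated_ts": time.time()},
}

MAX_STALENESS_S = 120   # refuse to answer from stale telemetry

def grounded_answer(order_id, now=None):
    """Answer 'where is my order?' only from verified live state."""
    now = now if now is not None else time.time()
    state = ORDER_STATE.get(order_id)
    if state is None:
        return "I can't find that order; routing you to a human agent."
    if now - state["updated_ts"] > MAX_STALENESS_S:
        return "Live tracking is briefly unavailable; please retry shortly."
    # Reply is templated from live fields only -- nothing for a model to invent.
    status = state["status"].replace("_", " ").lower()
    return f"Your order is {status}, about {state['eta_min']} min away."

print(grounded_answer("ORD-42"))
```

A model can still be used to paraphrase or translate the templated reply, but the facts — status, ETA, order existence — never originate from the model itself.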
Worker impact and fairness
- Real‑time systems often translate into tighter control of workforce behaviors: dynamic routing, micro‑incentives, and real‑time performance metrics can increase pressure on delivery partners.
- There’s a risk of introducing opaque decisioning that affects earnings or route assignments without clear appeal processes.
Resilience and single‑point failures
- Real‑time systems can create brittle dependencies: if eventstreams or activator rules fail, automated fallbacks must be well defined.
- Outages or misconfigurations could trigger mass order cancellations, wrongly revoked coupons, or inaccurate inventory signals.
Competitive context and alternative architectures
Several paths exist for companies seeking real‑time intelligence:
- Build on open‑source streaming stacks (Apache Kafka/KSQL, Flink, ClickHouse) combined with cloud object storage and bespoke action engines.
- Use other cloud provider offerings with comparable streaming/real‑time analytics services.
- Adopt hybrid architectures: real‑time for critical short‑lived telemetry, batch lakehouse for long‑term analytics and model training.
Practical guide: implementing a real‑time operational fabric (recommended steps)
- Inventory streams and use cases. Map every telemetry source and prioritize use cases that require sub‑minute reaction (inventory, order state, coupon redemption).
- Define SLOs and retention. Establish latency SLOs for ingestion, query, and action. Determine retention windows for hot vs. cold storage.
- Design event schema and identity model. Standardize event formats and IDs to make joins and aggregations reliable.
- Begin with a pilot. Start with a single geography and a narrow use case (e.g., coupon misuse detection) to validate pipelines and costs.
- Add activations. Implement rule‑based actions and observe false positive/negative rates. Allow manual overrides for early operations.
- Integrate conversational AI carefully. Ground model outputs to the latest state, apply filters for PII, and include confidence thresholds.
- Instrument for observability. Monitor stream lag, processing backpressure, error rates, activator triggers, and action latencies.
- Run resilience tests. Simulate failures and ensure graceful degradation and clear operator escalation paths.
- Iterate on governance. Implement RBAC, audit logs, and an approvals workflow for changes to activator rules and model updates.
- Scale and generalize. Expand to more geographies and use cases, while continuously measuring cost per event and operational impact.
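Steps 5 and 10 above — adding activations with manual overrides while watching blast radius — can be combined into one simple policy: actions below a configured impact limit auto-execute, while larger ones queue for human approval. The class below is an illustrative sketch of that policy (the names and thresholds are assumptions for the example):

```python
class GuardedActivator:
    """Rule actions below a blast-radius limit auto-execute;
    larger ones are queued for human approval instead."""
    def __init__(self, auto_limit):
        self.auto_limit = auto_limit
        self.executed = []
        self.pending = []

    def fire(self, action, affected_orders):
        if affected_orders <= self.auto_limit:
            self.executed.append(action)     # routine intervention: automate it
            return "executed"
        self.pending.append(action)          # wide-impact action: human safety stop
        return "awaiting_approval"

    def approve(self, action):
        self.pending.remove(action)
        self.executed.append(action)

guard = GuardedActivator(auto_limit=100)
print(guard.fire("pause_coupon:SAVE50", affected_orders=40))      # executed
print(guard.fire("disable_zone:BLR-EAST", affected_orders=5000))  # awaiting_approval
```

Tracking how often approved actions match what the rule proposed gives a natural measure of the false-positive rate before the limit is raised.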
Implementation pitfalls to avoid
- Doing full system migration in one monolithic cutover — instead, decouple incrementally.
- Forgetting to model backpressure — spikes can overwhelm downstream systems even if ingestion continues.
- Over‑automating without human‑in‑the‑loop controls — some decisions (e.g., market‑wide promotions) require a human safety stop.
- Ignoring explainability — operational teams need interpretable alerts, not just opaque model outputs.
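The backpressure pitfall deserves a concrete shape: a bounded buffer between ingestion and a slower downstream stage, with an explicit shed path when the buffer is full. This is a generic pattern sketch, not any specific platform's mechanism; real systems would replay shed events from a durable log.

```python
import queue

class BackpressuredStage:
    """Bounded buffer between ingestion and a slower downstream stage.
    When full, new events are shed to a dead-letter list instead of
    growing memory without bound."""
    def __init__(self, capacity):
        self.buf = queue.Queue(maxsize=capacity)
        self.dead_letter = []

    def offer(self, event):
        try:
            self.buf.put_nowait(event)
            return True
        except queue.Full:
            self.dead_letter.append(event)   # replayable later from the log
            return False

    def drain(self, n):
        out = []
        for _ in range(n):
            if self.buf.empty():
                break
            out.append(self.buf.get_nowait())
        return out

stage = BackpressuredStage(capacity=3)
accepted = [stage.offer(i) for i in range(5)]
print(accepted, stage.dead_letter)   # [True, True, True, False, False] [3, 4]
```

The key decision is what "shed" means for each stream: dropping a stale GPS ping is usually fine, while dropping an order-state transition is not, so capacity and shed policy should be set per stream.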
The ethical and regulatory lens
As companies embed AI into operational decisioning, regulators and public opinion increasingly scrutinize outcomes that affect workers and consumers. Important considerations include:
- Transparency — disclose the existence of algorithmic decisioning when it materially affects customers or partners.
- Auditability — retain immutable logs of decisions and the state that led to them.
- Non‑discrimination — test routing and allocation models for disparate impacts.
- Data minimization — send only the data required for the live decision and avoid unnecessary PII exposure to models.
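Data minimization in particular is easy to enforce mechanically: filter every payload against an allow-list of fields before it reaches a model prompt, so PII never leaves the operational store by default. The field names below are hypothetical examples of such a policy:

```python
# Hypothetical field policy: only these keys may reach the model prompt.
ALLOWED_FIELDS = {"order_id", "status", "eta_min", "items_count"}

def minimize_for_model(event):
    """Strip everything not needed for the live decision before any
    payload is passed to a generative model."""
    return {k: v for k, v in event.items() if k in ALLOWED_FIELDS}

raw = {
    "order_id": "ORD-42",
    "status": "PICKED_UP",
    "eta_min": 9,
    "items_count": 3,
    "customer_name": "A. Kumar",          # PII: never sent to the model
    "customer_phone": "+91-98xxxxxx10",   # PII: never sent to the model
    "rider_gps": (12.9716, 77.5946),      # sensitive telemetry: never sent
}
print(minimize_for_model(raw))
```

An allow-list fails closed: a newly added sensitive field is excluded by default, whereas a deny-list would silently leak it until someone remembers to update the policy.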
What this means for the industry
Swiggy’s deployment illustrates a broader market trajectory: operational systems are converging around low‑latency data fabrics and real‑time AI. The model is expected to expand beyond food delivery to logistics, ride hailing, retail micro‑fulfillment, and public safety. The combination of:
- near‑real‑time telemetry,
- indexed event storage,
- rule‑based activations,
- and conversational AI interfaces
forms a repeatable pattern that other high‑velocity industries can adapt to their own operational domains.
Conclusion
Satya Nadella’s public commendation of Swiggy is more than a PR soundbite — it signals how enterprise data platforms are maturing from retrospective analytics to proactive, event‑driven operational systems. Swiggy’s integration of Microsoft Fabric Real‑Time Intelligence and Azure OpenAI Service demonstrates a replicable pattern: consolidate data in motion, index it for instant queries, automate routine reactions, and present insights through natural language interfaces. The result is faster deliveries, fewer failed orders, and operational teams that can see and act in seconds.
At the same time, the move to streaming RTI raises difficult questions around cost, vendor dependency, privacy, and worker impact. Organizations pursuing similar transformations should balance speed with governance: start small, instrument extensively, protect sensitive data, and keep humans in the loop where outcomes affect livelihoods. When executed with those guardrails, real‑time fabric architectures deliver a meaningful competitive edge — and in Swiggy’s case, a ready example of how modern data platforms can reshape entire industries.
Source: NDTV Profit, “A Great Use Case: Microsoft CEO Lauds Swiggy For Utilising Its AI Platform”