MTN says it has completed the cloud migration of its Enterprise Value Analytics (EVA) platform to Microsoft Azure, a move the company and its partners describe as the largest telco cloud implementation in the Middle East and Africa and a practical blueprint for telco cloud modernization across the continent. The revamped EVA 3.0 environment is reported to run on Azure Databricks, use Delta Lake semantics for data durability, and be protected by Microsoft Defender. MTN says the platform processes roughly 22 billion records per day and orchestrates 800+ analytics workflows across 1,700+ data feeds, backed by a concentrated skills push that produced more than 1,350 Microsoft Azure certifications across MTN’s engineering teams. These claims appear in MTN- and partner-led coverage and have been repeated by multiple trade outlets as the rollout in South Africa completes and the company prepares to replicate the architecture across other MTN markets.
Background / Overview
MTN’s EVA platform has been central to the operator’s transformation agenda for years, moving from siloed analytics to a centralized, real-time analytics backbone that supports operations, customer experience, and product innovation. The reported migration — widely characterized as a re‑engineering rather than a straight lift-and-shift — adopts a modern lakehouse architecture: Azure Data Lake Storage for durable storage, Delta Lake for ACID-like table semantics and incremental processing, and Azure Databricks as the Spark-based compute fabric that runs ETL, streaming, and machine learning pipelines. Microsoft security tooling, notably Microsoft Defender, is framed as the protection and control layer for a platform that now serves as MTN’s analytics backbone in South Africa and a repeatable blueprint for other markets. Two independent trade outlets reporting on the announcement underscore the same headline points: the platform’s scale, the architecture choices, and MTN’s parallel investment in workforce capability. They describe EVA 3.0 as a high-throughput lakehouse that enables near-real-time detection of network issues, faster root-cause analysis, and more relevant, AI-driven customer experiences — benefits that are consistent with the capabilities of the technologies MTN selected.
What MTN says EVA 3.0 delivers
- Massive ingestion and scale: MTN reports EVA 3.0 processes on the order of tens of billions of records per day (published numbers point to ~22 billion daily records), handling more than 1,700 distinct telemetry and business feeds and running over 800 analytic workflows.
- Faster time-to-insight: The new architecture is intended to shorten detection-to-remediation timelines for network faults and service degradations via real-time streaming analytics and automated root-cause correlation.
- Operational automation (closed-loop): The platform is described as enabling “closed-loop automation” — detecting a problem, identifying the affected network elements, correlating with other data, and triggering remediation with minimal human intervention (a simplified, illustrative sketch of this pattern follows the list).
- Product personalization and revenue uplift: By combining network telemetry, OSS/BSS, CRM and behavioral signals at scale, MTN expects to design more relevant offers, improve retention, and open new enterprise-grade analytics services.
- Security and governance: Microsoft Defender and Azure governance controls are cited as the backbone for protecting sensitive telemetry and customer PII and for enabling responsible AI practices.
- Replicable blueprint: EVA 3.0 is positioned as a centralized reference architecture MTN can adapt across its 19 markets, lowering duplicate engineering effort and speeding deployments across operating companies.
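MTN has not published implementation details for the closed-loop capability above, so the following is only a minimal, hypothetical sketch of the detect-and-remediate step it describes. The endpoint, KPI, threshold and playbook action are all illustrative assumptions, not MTN’s design.

```python
# Purely illustrative: MTN has not disclosed how EVA 3.0 implements
# closed-loop automation. This sketch assumes a hypothetical remediation
# API (REMEDIATION_URL) exposed by the OSS layer and a single latency KPI.
import statistics
import requests

REMEDIATION_URL = "https://oss.example.internal/remediate"  # hypothetical endpoint


def check_cell_latency(cell_id, latency_samples_ms, threshold_ms=150.0):
    """Flag a cell whose median latency breaches a threshold and request remediation."""
    median_latency = statistics.median(latency_samples_ms)
    if median_latency <= threshold_ms:
        return

    payload = {
        "cell_id": cell_id,
        "metric": "latency_ms",
        "observed_median": median_latency,
        "action": "restart_baseband",  # hypothetical playbook action
    }
    # In a real deployment this call would sit behind correlation logic,
    # change-control checks and (for high-impact actions) human approval.
    requests.post(REMEDIATION_URL, json=payload, timeout=10)


# Example usage (would POST to the hypothetical endpoint):
# check_cell_latency("ZA-GP-0421", [120.0, 180.0, 210.0, 95.0, 160.0])
```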
Technical anatomy: architecture and chosen components
Core building blocks (as reported and inferred)
- Ingest layer: High-throughput collectors for streaming telemetry (network probes, packet and flow telemetry), call detail records, OSS/BSS events and application logs. Typical transport choices would include Kafka/Event Hubs and managed ingestion tools to handle both streaming and batch feeds.
- Storage: Azure Data Lake Storage Gen2 (ADLS Gen2) as the object store holding Delta Lake tables for ACID-like behaviour and efficient incremental processing.
- Compute: Azure Databricks running Apache Spark for distributed streaming and batch ETL/ELT, feature engineering, ML training and scoring. Databricks also delivers orchestration primitives and collaborative notebooks for data engineering and data science.
- Data semantics: Delta Lake for transactional table semantics, schema enforcement and time-travel; these features are central for reliable, incremental pipelines at telco scale.
- Security & governance: Microsoft Defender for workload protection and DLP, Azure Active Directory/Entra for identity and role-based access, and Unity Catalog or an equivalent for fine-grained data governance and lineage.
- Consumption: APIs, dashboards (Power BI or custom NOC consoles), feature stores/serving endpoints for model-driven actions, and integrations with OSS/BSS and CRM systems for operational and commercial use cases.
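MTN has not released pipeline code, but the lakehouse pattern these components form is well documented. The sketch below assumes a Delta-enabled Spark/Databricks environment, an Event Hubs Kafka-compatible endpoint and an ADLS Gen2 container (all names and the schema are placeholders) and shows how a single streaming telemetry feed could land in a Delta table with schema enforcement:

```python
# Minimal ingest sketch; endpoint, container names and the schema are
# illustrative placeholders, not MTN's actual configuration. Assumes a
# Databricks or Delta-enabled Spark environment (delta-spark installed).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("eva-style-ingest").getOrCreate()

# Hypothetical schema for one network-telemetry feed
telemetry_schema = StructType([
    StructField("cell_id", StringType()),
    StructField("event_time", LongType()),       # epoch milliseconds
    StructField("latency_ms", DoubleType()),
    StructField("dropped_packets", LongType()),
])

raw = (
    spark.readStream
    .format("kafka")  # Azure Event Hubs exposes a Kafka-compatible endpoint
    .option("kafka.bootstrap.servers", "my-eventhubs.servicebus.windows.net:9093")  # placeholder
    .option("subscribe", "network-telemetry")
    # SASL/auth options omitted for brevity
    .load()
)

parsed = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(from_json(col("json"), telemetry_schema).alias("r"))
    .select("r.*")
)

# Append into a Delta table on ADLS Gen2: each micro-batch commits as an
# ACID transaction and the declared schema is enforced on write.
(
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "abfss://eva@storageacct.dfs.core.windows.net/_chk/telemetry")  # placeholder
    .outputMode("append")
    .start("abfss://eva@storageacct.dfs.core.windows.net/delta/telemetry")  # placeholder
)
```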
Verification: what is independently confirmed — and what remains company-reported
Multiple independent outlets report the same high-level facts: MTN migrated EVA to Azure, the platform runs on Azure Databricks with Delta Lake semantics and Microsoft security tooling, and MTN positions the result as a group-scale blueprint. ITWeb Africa and CIO Africa both published pieces repeating the core numeric claims and architecture descriptions. These independent trade write-ups corroborate the press coverage pattern and confirm the migration’s significance in regional context. However, several headline metrics — notably the 22 billion daily records, 800+ workflows, 1,700+ data feeds, and the 1,350 Azure certifications — currently appear in company statements and partner summaries and were not accompanied by a public, third‑party audited technical case study at the time of reporting. Multiple analyses therefore treat those figures as MTN-reported operational metrics that require hands-on validation for procurement or engineering due diligence. Readers should treat these numeric claims as reported by MTN and expect formal benchmarks, SLAs, or independent audits to confirm throughput, latency percentiles, ingest reliability and unit economics.
Why the architecture choices make sense for telcos
- Scale for telemetry: Telcos ingest extremely high-velocity event streams; a Spark-based lakehouse (Databricks + Delta Lake on ADLS Gen2) is a well-understood pattern for combining streaming and batch workloads at scale while enabling ML lifecycle workflows.
- Unified governance: Unity Catalog and Azure-native identity controls enable centralized policy application across datasets and analytic artifacts, an operational necessity for multi-market telcos subject to varying privacy and telecom regulations.
- Faster model productization: Databricks’ integrated environment supports collaborative notebooks, model registries and production job orchestration — features that shorten the path from experimentation to production for predictive use cases like churn, fraud, and predictive maintenance.
- Operational flexibility: Managed services reduce infrastructure operations overhead so internal teams can focus on data quality, analytics logic and productization rather than VM maintenance or patching.
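On the model-productization point, Databricks bundles MLflow for experiment tracking and its model registry. None of MTN’s models or naming is public, so the following is only a generic sketch (the churn model, synthetic features and registry name are invented) of how a trained model gets logged and registered for promotion to production; registration assumes a registry-backed tracking server such as a Databricks workspace.

```python
# Generic MLflow tracking/registry sketch; the model, synthetic data and the
# registry name "eva_churn_model" are illustrative assumptions, not MTN's.
# Model registration assumes a registry-backed tracking URI (e.g. Databricks).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)

with mlflow.start_run(run_name="churn-baseline"):
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Log the artifact and register it in one step so it can be promoted
    # through staging/production gates in the registry.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="eva_churn_model",  # hypothetical name
    )
```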
Strengths and strategic upsides
- Operational speed and responsiveness: Real-time streaming analytics and integrated ML can reduce Mean Time To Repair (MTTR) and allow more proactive customer interventions.
- Repeatability across markets: A templated lakehouse reference architecture — when coupled with a Cloud Centre of Excellence — reduces duplication and accelerates rollouts in multiple countries.
- Human capital investment: MTN’s reported certification drive strengthens internal capabilities and reduces dependence on external integrators; the Cloud Centre of Excellence model institutionalizes best practices.
- Security posture: Using Azure-native controls (Defender, Entra, Unity Catalog) centralizes monitoring and policy enforcement across datasets and workloads, which is essential for regulatory compliance and customer trust.
- Platform for AI monetization: A governed, high-throughput data foundation positions MTN to deliver new AI-driven products to consumers and enterprise customers across Africa.
Material risks, trade-offs and open questions
Even well-executed projects at this scale carry real operational, financial and regulatory risks. The most important items to watch:
- Vendor lock-in / technical gravity: Deep integration with managed Azure services (Databricks managed runtime, Unity Catalog, Defender tooling) increases switching costs. Telcos that prioritize portability should adopt open formats (Delta Lake/Parquet), maintain export routines, and design decoupling layers for critical business functionality.
- Data sovereignty & compliance: Centralizing telemetry across multiple African jurisdictions raises cross-border transfer questions. While Azure provides regional data residency options, concrete contractual guarantees and locality-specific architectures are necessary to meet national telecom and data-protection laws.
- Operational complexity at telco scale: Processing billions of records daily requires mature SRE practices: observability, back-pressure, automated retries, chaos testing and incident runbooks. Without this maturity the platform risks pipeline degradation, inconsistent outputs, and hidden cost overruns.
- Cost governance: Cloud compute and storage can scale faster than budgets. Left unchecked, high-volume Spark workloads and long retention windows can produce runaway costs. Effective tagging, quotas, budget alerts, and autoscaling policies are essential.
- Security surface expansion: A larger cloud footprint increases exposure. Defender and related tooling reduce risk, but only if tuned and integrated into an active security operations model (SOC, threat hunting, regular red-team exercises).
- Unverified throughput claims: The headline numbers (records/day, workflows, feed counts) are company-reported; independent benchmarks and audited telemetry would materially strengthen the credibility of the claims. Prospective partners and vendors should insist on SLA-backed KPIs and proof-of-value pilots.
Practical checklist for telcos and CIOs planning similar migrations
- Define business outcomes first (MTTR reduction, churn improvement, new revenue from analytics) and map measurable KPIs.
- Start with a proof-of-value pilot that exercises peak ingestion, low-latency alerting, and an automated remediation workflow.
- Build a Cloud Centre of Excellence to enforce standards, manage service catalogs and drive skilling across teams.
- Adopt open data formats and document export procedures to maintain portability (Delta/Parquet/Avro); a minimal export sketch follows this checklist.
- Design for data residency from day one: region-aware tenancy, key management and contractual guarantees.
- Implement rigorous cost governance: tagging, quotas, chargeback and autoscale controls.
- Invest in SRE/observability: metrics, tracing, logs, synthetic tests and automated remediation runbooks.
- Formalize responsible AI guardrails: model registries, bias tests, reproducibility requirements and human-in-the-loop approvals.
- Insist on audit artifacts: throughput reports, penetration test results, compliance attestations and third-party validation where possible.
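As a concrete illustration of the portability item above: because a Delta table is Parquet data plus a transaction log, an export routine can simply re-materialize it to plain Parquet (or Avro) at a documented location. The paths, table and partition column below are placeholders, not MTN artifacts.

```python
# Illustrative export routine for portability; all paths and the partition
# column are placeholders. Assumes a Delta-enabled Spark environment.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-export").getOrCreate()

source = spark.read.format("delta").load(
    "abfss://eva@storageacct.dfs.core.windows.net/delta/telemetry"  # placeholder
)

# Re-materialize as plain Parquet with a documented partition scheme so the
# data remains readable by any engine, independent of the Delta runtime.
(
    source.write
    .mode("overwrite")
    .partitionBy("cell_id")
    .parquet("abfss://exports@storageacct.dfs.core.windows.net/parquet/telemetry")  # placeholder
)
```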
Organizational and skills implications
MTN’s emphasis on certifications — cited as more than 1,350 Azure certifications attained by its engineering ranks — underscores an often overlooked truth: technology choices alone do not create value. Platform engineering, SRE, data stewardship and governance practices require sustained, on-the-job experience. Certifications are a necessary indicator of capability but must be paired with structured rotations, on-call responsibilities, and an institutionalized platform operating model to preserve knowledge and reduce risk of staff churn.
Economics: CAPEX vs OPEX, and long-term unit economics
Cloud migrations shift spend from capital to operational expense. That change offers flexibility but also makes unit costs — cost per million events processed, cost per model training run, storage $/TB-month — central to long-term viability. Large telcos must invest in:
- Cost modeling for sustained throughput and peak bursts
- Retention policies for raw and intermediate data
- Compression and partitioning strategies to reduce storage and query costs
- Spot/low-priority compute strategies for non-critical training workloads
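To make the unit-economics point concrete, a back-of-envelope calculation is usually the starting artifact. Every price and volume in the sketch below is an assumption for illustration (only the 22 billion daily records figure echoes MTN’s reported number); the point is that cost per million events, rather than the monthly bill, is the metric worth tracking per workload.

```python
# Back-of-envelope unit-cost model; all prices and the storage footprint are
# assumptions for illustration, not MTN-reported or Azure list figures.
events_per_day = 22e9             # company-reported ingest volume
compute_cost_per_day = 9_000.0    # assumed daily Databricks + VM spend (USD)
storage_tb = 1_500.0              # assumed retained footprint after compression
storage_cost_per_tb_month = 20.0  # assumed blended ADLS price (USD/TB-month)

storage_cost_per_day = storage_tb * storage_cost_per_tb_month / 30
cost_per_million_events = (compute_cost_per_day + storage_cost_per_day) / (events_per_day / 1e6)

print(f"Cost per million events: ${cost_per_million_events:.4f}")
# Tagging and chargeback make it possible to compute this per workflow, which
# turns "the cloud bill grew" into an actionable engineering conversation.
```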
Responsible AI and governance
MTN frames EVA 3.0 as enabling responsible AI. That assertion matters because analytics-driven automation (rate plans, churn predictions, automated remediation) can materially affect customers and regulatory exposure. Practical steps to operationalize responsible AI include:
- Model registries and versioning
- Bias and fairness testing for production models
- Explainability measures and human-in-the-loop approval gates for high-impact decisions
- Monitoring and drift detection in production to trigger revalidation
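Drift detection is the most mechanical item on this list. One common approach (not necessarily the one MTN uses) is the population stability index, which compares a production feature distribution against its training baseline; a minimal NumPy sketch on synthetic data:

```python
# Population Stability Index (PSI) drift check; a common technique, shown
# here on synthetic data and not necessarily what EVA 3.0 uses in production.
import numpy as np


def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Compare a production feature distribution against its training baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    obs_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Floor the buckets to avoid division by zero / log(0)
    exp_pct = np.clip(exp_pct, 1e-6, None)
    obs_pct = np.clip(obs_pct, 1e-6, None)
    return float(np.sum((obs_pct - exp_pct) * np.log(obs_pct / exp_pct)))


baseline = np.random.normal(50, 10, 100_000)    # training-time feature sample
production = np.random.normal(58, 12, 100_000)  # drifted production sample

score = psi(baseline, production)
if score > 0.2:  # common rule of thumb: > 0.2 suggests significant drift
    print(f"PSI={score:.3f}: trigger revalidation / retraining review")
```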
Why this matters for Africa’s digital ecosystem
MTN’s EVA 3.0 migration is both symbolic and practical. Symbolically, it signals that African operators can adopt cloud-native lakehouse platforms at continental scale and operate them with internal talent investment. Practically, a repeatable reference architecture can accelerate the emergence of ISVs, analytics service providers, and enterprise products that rely on telco data — potentially driving new services that support financial inclusion, smart cities, and enterprise digitization. The initiative also raises important public policy questions about data sovereignty, local skills development and the balance between global hyperscalers and regional vendor ecosystems.
What to watch next
- Publication of audited performance benchmarks (latency percentiles, ingest reliability, cost per unit of telemetry).
- Release of detailed operational runbooks and governance artifacts demonstrating how data residency and law-enforcement access are handled per market.
- Third-party security attestations or penetration test results validating the Defender-backed controls at scale.
- Measurable business outcomes: MTTR improvements, NPS/CSAT lift, reduced churn, and revenue attributable to analytics-driven offers.
- How MTN adapts the reference blueprint for markets with strict data-residency laws or limited local Azure region availability.
Final assessment
MTN’s migration of EVA to Azure Databricks represents a credible, well-aligned choice for telco-scale analytics: the lakehouse pattern addresses the core technical needs (high-velocity ingestion, hybrid streaming and batch processing, ML lifecycle) and Azure Databricks/Delta Lake provide the capabilities to operationalize that pattern. The parallel investment in certifications and a Cloud Centre of Excellence is a pragmatic hedge against the operational risks that commonly plague migrations of this scale. That said, the most consequential claims — the exact daily record counts and workflow volumes — remain company-reported at present. Independent audits, bench tests, and operational artifacts would materially strengthen the credibility of the headline figures and help other operators evaluate the true unit economics of telco cloud modernization. Until those artifacts are published, vendors and buyers should treat the numbers as indicative rather than definitive, and require measurable SLAs and exit/portability clauses when negotiating large-scale cloud transformations.
MTN’s EVA 3.0 is a powerful case study in what is possible when hyperscaler platforms, modern lakehouse design, and a disciplined skilling program converge. The long-term value will depend on disciplined governance, cost control, transparent verification and the company’s ability to translate technical scale into tangible improvements in reliability, customer experience and new revenue streams across the continent.
Key documents and reporting consulted in preparing this analysis include MTN/Microsoft public statements and multiple independent trade reports that covered the migration and summarized technical choices and numeric claims. Where headline metrics have not yet been published in third-party technical case studies, those claims are highlighted here as company-reported and flagged for verification.
Source: News Ghana, "MTN Completes Africa's Largest Telco Cloud Migration With Microsoft"