Microsoft’s cloud and AI engines — the same infrastructure the company says it polices through terms of service — are now the focus of a renewed debate over corporate responsibility after leaked documents showed U.S. Immigration and Customs Enforcement (ICE) dramatically expanded its Azure footprint, and Microsoft workers and allied activists demanded the company cut ties with the agency.
Background
In mid‑February 2026, reporting led by The Guardian and partner outlets revealed that ICE’s data stored on Microsoft’s Azure cloud rose from roughly 400 terabytes in July 2025 to nearly 1,400 terabytes by January 2026 — a more than threefold increase in six months. The reporting, which relied on procurement records and leaked documents, also said ICE used Microsoft productivity suites and AI‑driven tools to search and analyse that data, and that the agency had bought virtual machines and vision/video analysis capabilities.

The revelations arrived against a fraught backdrop for Microsoft. In September 2025, the company publicly disclosed that it had “ceased and disabled a set of services” for a unit inside Israel’s Ministry of Defence after independent reporting alleged that those services were used to ingest and analyse mass quantities of intercepted Palestinian communications. Microsoft said the action was taken to enforce its terms of service, which prohibit using its products for mass surveillance of civilians.
Those two threads — Microsoft’s limited enforcement against an Israeli military unit, and now the agency use case in the United States — have been stitched together by worker‑led groups and activists under the banner No Azure for Apartheid and allied campaigns. They argue that the identical cloud and AI building blocks that can be used to power surveillance in one theatre are being deployed by enforcement agencies elsewhere with similar human‑cost consequences. PC Gamer published a direct statement from No Azure for Apartheid that echoed those concerns and reiterated long‑standing demands for Microsoft to sever ties with ICE.
What the leaked files say (and what they don’t)
Key claims from the reporting
- ICE’s Azure storage rose from ~400 TB to ~1,400 TB between July 2025 and January 2026.
- The procurement records link that shift to increased purchases of blob storage, virtual machines, and AI‑enabled video and image analysis tools on Azure.
- Microsoft’s public response framed its relationship as delivering “cloud‑based productivity and collaboration tools to DHS and ICE, delivered through our key partners,” and reiterated that its policies “do not allow our technology to be used for the mass surveillance of civilians.”
Important gaps and caveats
The leaked procurement materials are specific about capacity and product lines, but they do not by themselves prove how ICE is using particular data sets, nor do they show whether Azure-hosted systems are directly ingesting intercepted communications versus supporting administrative systems, detention logistics, or other operational software. The reporting notes this ambiguity, and Microsoft reiterates that it does not have visibility into the content of customer data where privacy protections apply. That distinction matters for legal and ethical analysis — and it must be stated plainly when weighing the strength of the allegations.

Who is No Azure for Apartheid and what are they demanding?
No Azure for Apartheid is a worker‑led movement originally formed inside Microsoft by employees and ex‑employees critical of the company’s contracts with the Israeli military and other security agencies. The group has staged public protests, internal petitions and occupation actions in Redmond and made repeated demands that Microsoft end relationships that, in the activists’ view, enable state violence and mass surveillance. Their most recent statement connects the ICE reporting to the group’s earlier activism and calls for Microsoft to cut ties with ICE — a demand the organization frames as consistent with earlier employee campaigns that began in 2018.

Their public framing emphasizes a moral equivalence: the same cloud and AI stacks that power large‑scale surveillance elsewhere are being repurposed to target migrant communities and other vulnerable populations in the U.S. The group also points to the opacity of government contracts and to the ways companies sometimes rely on resellers or partners, which activists say can mask the extent of involvement.
Microsoft’s official stance and the limits of corporate visibility
Microsoft’s public statements on both the Israeli military episode and the ICE revelations follow a consistent pattern:
- Reaffirmation of contractual terms and acceptable‑use policies that prohibit the mass surveillance of civilians.
- A claim that the company lacks direct visibility into customer content where privacy rules and customer controls apply.
- An argument that enforcement of acceptable‑use obligations is applied through audits and review processes; in some cases, Microsoft has said it disabled specific subscriptions when internal reviews found evidence of misuse.
Several open questions bear on how much visibility and leverage Microsoft actually has:
- What portion of the service relationship is direct vs. brokered through resellers and systems integrators? When third parties provision or manage services, the principal vendor’s operational visibility can diminish.
- Which Azure features are being used — customer‑managed storage, PaaS services, or managed services hosted and controlled by the vendor or its partners? Each model has different implications for auditability and contractual enforcement.
- How robust are Microsoft’s telemetry and contractual audit provisions for high‑risk public‑sector customers? Public statements reference reviews and disabling of subscriptions, but do not detail the technical or contractual mechanisms that would prevent misuse over time.
Technical anatomy: how Azure could be repurposed for surveillance and analysis
To evaluate the risk profile, we need to be concrete about the technologies named in the reporting and how they are commonly used:
- Blob storage (object storage) — Scales to petabytes and is typical for storing raw media (audio, video) and large CSV/JSON datasets. Large‑scale storage makes it feasible to aggregate long histories of communications or footage for later retrieval. Leaked procurement documents specifically reference Azure storage capacity increases.
- Virtual machines and compute instances — Provide the CPU/GPU cycles to run indexing, search, and analysis pipelines; these may host ingestion services, metadata extractors, or transcription workflows for audio files.
- AI‑driven vision and video analysis — Commercial cloud offerings now provide prebuilt or managed models that perform face detection, object tracking, OCR, and content indexing at scale. These can be chained into pipelines that turn raw footage into searchable metadata.
- Search and indexing services — When coupled with large object stores and compute, search tools enable investigators to query and cross‑reference people, phone numbers, locations and timestamps. AI tools can accelerate triage and target identification.
- Productivity and collaboration tools — These are used for case management, documentation, and workflows. While less technically dramatic, they can amplify operational capacity by making datasets and analysis results widely available inside an agency.
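The components above compose into a pipeline: raw media lands in object storage, a compute step extracts metadata, and an index makes that metadata searchable. The following toy sketch (plain Python, no Azure SDK; every name and the sample data are illustrative assumptions, not drawn from the leaked documents) shows the pattern in miniature:

```python
from collections import defaultdict

# Toy stand-ins for the three layers: in a real deployment these would be
# object storage, GPU-backed analysis models, and a managed search service.
blob_store = {}                  # object storage: blob name -> raw "media"
search_index = defaultdict(set)  # inverted index: term -> blob names

def ingest(blob_name, raw_media):
    """Store raw media, mimicking an upload to an object store."""
    blob_store[blob_name] = raw_media

def analyse(raw_media):
    """Stand-in for an AI vision/transcription step. Here it just
    tokenises text; a real pipeline would emit faces, plates, locations,
    and timestamps as searchable metadata."""
    return set(raw_media.lower().split())

def index_all():
    """Compute pass: run analysis over every stored blob and index results."""
    for name, media in blob_store.items():
        for term in analyse(media):
            search_index[term].add(name)

def query(term):
    """Cross-reference step: which blobs mention this term?"""
    return sorted(search_index.get(term.lower(), set()))

ingest("cam-014/2026-01-03.txt", "vehicle plate ABC123 seen at checkpoint")
ingest("cam-014/2026-01-04.txt", "same vehicle returned at 09:40")
index_all()
assert query("vehicle") == ["cam-014/2026-01-03.txt", "cam-014/2026-01-04.txt"]
```

The point of the sketch is the economics: once storage and analysis are cheap and chained, every stored item becomes retrievable by attribute, which is what turns an archive into a surveillance capability.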
Policy and legal context: what the law says (and what it doesn’t)
There is no single U.S. statute that expressly bans federal agencies from using private‑sector cloud or AI capabilities for investigative or enforcement purposes. Instead, federal procurement is governed by a patchwork of acquisition rules, privacy statutes (e.g., Privacy Act), criminal procedure protections, and internal agency policies. Oversight typically exists through Congress, IG offices, and courts — but the scale and speed of AI adoption often outruns those mechanisms.

Key governance weaknesses highlighted by the current situation include:
- Procurement opacity: Agencies can purchase cloud capacity through multiple contracting vehicles, and when partners or resellers are used, disclosure and public transparency are reduced. Leaked procurement documents often surface only when whistleblowers or investigative journalists obtain them.
- Lack of clear statutory limits for AI surveillance: Policymakers are only now debating the contours of acceptable AI use by law enforcement and immigration agencies. Until clear legal limits are set, corporate terms of service and internal agency policies become primary mitigants — and those are variable in their enforceability.
- Patchwork oversight: Office of Inspector General reviews, congressional inquiries, and litigation can reveal misuse after the fact, but they do not necessarily prevent real‑time harm. The Microsoft‑IMOD review showed the company can act defensively, but its action was reactive and limited in scope.
Corporate responsibility: what Microsoft can (and can’t) do
Microsoft’s response to the Israeli military reporting — disabling some subscriptions after an internal and external review — sets a precedent that the company can, in certain circumstances, restrict customers. But several factors constrain what a vendor can do:
- Contractual limits: Cloud contracts typically define services, SLAs, and acceptable use; enforcement requires evidence and due process. Broad, pre‑emptive blocks could provoke contract disputes or even national security concerns when sovereign customers are involved.
- Technical visibility: When customers control encryption keys and host workloads behind customer‑managed environments, vendor telemetry into content is limited. Microsoft’s claim that it “does not have visibility” into certain customer usage is a technical reality for many cloud deployments.
- Market competition and procurement incentives: Microsoft (and other hyperscalers) competes for public‑sector business. Companies that refuse controversial customers may cede market share to rivals willing to accept risk, unless consistent industry standards or regulation align incentives.
- Operational dependence and the ‘essential services’ argument: Vendors sometimes point to the cybersecurity or continuity services they provide as reasons to maintain overall relationships even while restricting certain products or subscriptions. Microsoft said its disabling action did not affect broader cybersecurity work for Israel, which suggests fine‑grained selective enforcement is technically possible — but politically and operationally sensitive.
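The technical-visibility constraint noted above can be made concrete. When a customer encrypts data with keys it alone holds before upload, the vendor stores only ciphertext and genuinely cannot read the content. A minimal stdlib-only sketch, using a toy XOR one-time pad purely for illustration (real deployments use vetted ciphers such as AES, never this):

```python
import os

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Toy one-time-pad XOR: illustrative only, NOT production cryptography.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR with the same key is its own inverse

# Customer side: the key never leaves the customer's environment.
customer_key = os.urandom(64)
record = b"case file: subject interview notes"
ciphertext = encrypt(record, customer_key)

# Vendor side: stores the ciphertext but, lacking customer_key,
# has no visibility into its content.
vendor_storage = {"blob-001": ciphertext}

# Only the key holder can recover the plaintext.
assert decrypt(vendor_storage["blob-001"], customer_key) == record
assert vendor_storage["blob-001"] != record
```

This is why "we do not have visibility" is a technical reality for customer-managed-key deployments, and also why vendor-side enforcement in those deployments must rely on contract terms and usage signals rather than content inspection.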
Civil society and worker leverage: tactics, leverage points, and limits
No Azure for Apartheid and allied campaigns have used multiple tactics to press Microsoft: internal petitions, public protests, sit‑ins, press campaigns, and public policy advocacy. Those tactics create reputational and operational pressure that can push companies to explain or change practices, as seen with the September 2025 action against IMOD subscriptions.

Where worker and activist leverage is strongest:
- Reputational risk: Sustained coverage and employee unrest can affect talent retention, investor relations, and customer perception. That creates incentives for public statements and reassessments.
- Operational friction: Protest actions and internal dissent can interfere with product roadmaps and morale, encouraging executives to engage.
- Policy advocacy: Activists can push for legislative or regulatory scrutiny, which changes the operating environment for vendors and customers.
Risks and tradeoffs: surveillance, mission creep, and dual‑use AI
The ICE‑Azure reporting raises several concrete risks:
- Mass indexing of vulnerable populations: The storage and AI stack that makes it feasible to index millions of calls or hours of footage turns routine bureaucratic tasks into broad surveillance opportunities with chilling civil‑liberties implications.
- Function creep: Administrative or efficiency tools (case management, translation, search) can be repurposed for targeting and enforcement escalation, especially if oversight is weak.
- Vendor lock‑in and concentration risk: Hyperscalers provide economies of scale that agencies prize; but concentration means a single supplier’s policy choices reverberate widely across government practice.
- Evasion and shadow procurement: Agencies can move workloads between providers, or use intermediaries to obscure operations — complicating transparency and accountability.
Practical recommendations: what stakeholders should do now
For Microsoft (and other hyperscalers)
- Publish clearer contractual guardrails for high‑risk public‑sector customers that include mandatory audit rights, transparency reporting, and independent review triggers when abuse is alleged.
- Enhance technical controls that enable vendor‑enforced constraints without undermining legitimate privacy protections, including fine‑grained service flags and conditional access tied to documented uses.
- Create an independent escalation mechanism: An externally supervised process (with civil‑society observers and technical experts) that can rapidly adjudicate claims of misuse and recommend proportional actions.
For policymakers and oversight bodies
- Mandate procurement transparency for law‑enforcement and immigration contracts that involve cloud or AI services, with redaction rules to protect sensitive operations but public disclosure of service categories and capacities.
- Set statutory limits on mass‑scale surveillance and create specific restrictions for AI‑enabled indexing of sensitive personal data.
- Fund independent audits of AI deployments and require audit trails that cannot be erased without authorization.
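One concrete form an audit trail that "cannot be erased without authorization" might take is a hash-chained, append-only log: each entry commits to the hash of the previous one, so any deletion or after-the-fact edit breaks verification. A minimal sketch (stdlib Python; the event schema is an assumption for illustration, not a reference to any agency system):

```python
import hashlib
import json

def append_entry(log, event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log) -> bool:
    """Recompute every hash; any edit or deletion breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"actor": "analyst-7", "action": "query", "target": "dataset-A"})
append_entry(log, {"actor": "analyst-7", "action": "export", "target": "dataset-A"})
assert verify(log)

log[0]["event"]["action"] = "read"   # attempt to rewrite history
assert not verify(log)               # tampering is detectable
```

A production system would anchor the chain's head hash with an external party (an inspector general, a court, or a transparency log) so that even an administrator with write access to the log cannot silently truncate it.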
For civil society and technologists
- Demand and support independent technical audits of suspicious deployments, and invest in investigative capacity that can parse procurement trails and configuration artifacts.
- Push for employee protections that enable responsible whistleblowing and internal reporting without retaliation.
The larger lesson: corporate power, public accountability, and a fast‑moving tech landscape
The ICE‑Azure episode is not only about a single vendor and a single agency. It is a case study in how modern cloud and AI platforms concentrate power and capability in the hands of a few private firms — and how that concentration reshapes the levers of governance. When procurement, contracts, and technical architectures converge in opaque ways, accountability becomes diffuse and slow, while capacity for harm grows rapidly.

Microsoft’s prior action in September 2025 to disable certain services used by an Israeli military unit demonstrated the company can, under pressure, restrict subscriptions — but it also revealed that such measures are reactive, often limited, and politically fraught. The new ICE reporting throws a mirror back at those events: the same features that sparked global protest over overseas use can, domestically, produce outcomes civil‑liberties advocates find equally alarming.
Conclusion
The leaked procurement records linking ICE’s surge in Azure usage to expanded storage, compute and AI services have refocused a broader debate about the ethics of cloud and AI supply chains. Worker groups such as No Azure for Apartheid are pushing Microsoft to make a hard choice: end relationships with agencies they argue are causing harm, or continue to service those customers and manage the mounting political and ethical cost. Microsoft’s public posture — invoking contractual terms and limited visibility into customer content — answers some questions but leaves others open about enforceability, transparency, and the technical levers that could prevent misuse.

This episode highlights a structural problem: tools that scale and lower the cost of complex analysis do not come with matched public governance. If the public, Congress, regulators, and technology companies do not close that gap with concrete rules, independent audits, and stronger procurement transparency, then the same platforms that bring enormous public benefit will continue to be repurposed in ways that can inflict real and lasting harm.
(Technical background and discussion also informed by internal discussion threads and community reporting collated from public forums and investigative threads.)
Source: Rock Paper Shotgun, “No Azure for Apartheid call on Microsoft to cut ties with ICE, amid reports of agency deepening reliance on company's cloud and AI”