Big Tech’s cloud stacks, commercial AI models, and social-media engines are no longer peripheral tools of war — they are integrated enablers that speed targeting, amplify state narratives, and complicate accountability in the Israel–Gaza conflict.
Overview
The last two years have exposed a complex web linking major U.S. technology companies to Israeli military operations: cloud contracts and managed services that provide storage and compute at scale; AI models and pipelines repurposed to transcribe, translate, prioritize, and score human targets; and social-platform policies and PR arrangements that affect how the conflict is narrated online. Multiple investigations and whistleblower accounts describe a spike in military use of commercial cloud and AI resources after October 7, 2023, the emergence of named scoring systems such as “Lavender” that produce prioritized lists, and substantial data volumes stored on commercial clouds — figures cited in public reporting to convey scale and operational intensity.
Those revelations revive uncomfortable historical analogies — corporations supplying tools and infrastructure that enable mass violence — and raise urgent technical, legal, and ethical questions about corporate responsibility, human oversight, and the opacity of algorithmic targeting. The case illustrates the modern dual-use dilemma: civilian‑facing products delivering undeniable efficiencies while being repurposed for lethal ends.
Background: history, scale, and the new industrial axis of war
From rails and code to cloud and models
History offers precedent for private-sector complicity in state violence: transportation networks and logistics firms were essential to the transatlantic slave trade; industrial suppliers profited under apartheid; and corporate technologies have been abused in prior genocidal contexts. Today's commercial cloud platforms and AI models are the 21st‑century equivalents: general‑purpose, widely distributed technologies that can be reconfigured to serve military intelligence and operational workflows. The strategic attraction is obvious: speed, scale, and flexibility without the capital and lead time to build equivalent in‑house infrastructure.
Data and compute at a scale that matters
Investigations point to dramatic increases in data storage and AI usage by the Israeli military following the October 2023 escalation. Reporting cites amounts of stored data on commercial infrastructure measured in petabytes and usage spikes orders of magnitude above pre‑conflict levels — metrics intended to show the real, operational heft these services provide. These are not mere backend conveniences; they are the computation and memory that enable near‑real‑time mass‑surveillance workflows.
The evidence: contracts, systems, and personnel ties
Major contracts and projects
- Microsoft’s long‑term commercial relationship with the Israeli government — including a reported multiyear contract — has been characterized in reporting as significant and operationally consequential, with multi‑million‑dollar figures often cited in investigative accounts.
- Project Nimbus — a joint cloud initiative involving Google and Amazon to supply Israeli government and defense customers with cloud services — receives attention as a direct infrastructure arrangement connecting commercial clouds to state operations.
Named tools and alleged scoring systems
Reporting and whistleblower accounts have named internal systems — often rendered as code‑names — which are claimed to produce prioritized targeting information. Two names that recur in reporting are “Lavender” and “Gospel,” described as algorithmic pipelines that ingest intercepted communications, biometric and identity data, relationship graphs, and other signals to rank individuals by a risk or “militancy” score. These systems reportedly output lists and scores intended to help triage scarce human analytical resources.
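To illustrate how a relationship graph becomes model input, the following Python sketch builds co-occurrence features from invented call metadata. Every name, field, and number here is hypothetical; nothing below is drawn from the reported systems, whose internals remain undisclosed.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical call-metadata records: (caller, callee) pairs.
# Reporting describes graphs built from intercepted communications;
# the data here is invented purely for illustration.
call_records = [
    ("person_a", "person_b"),
    ("person_a", "person_c"),
    ("person_b", "person_c"),
    ("person_c", "person_d"),
]

# Build an undirected co-communication graph as an adjacency set.
graph = defaultdict(set)
for caller, callee in call_records:
    graph[caller].add(callee)
    graph[callee].add(caller)

def graph_features(node: str) -> dict:
    """Toy graph features of the kind reporting describes as model inputs."""
    neighbors = graph[node]
    # Degree: how many distinct contacts this node has.
    degree = len(neighbors)
    # Clustering proxy: how interconnected the node's contacts are.
    links_among_neighbors = sum(
        1 for u, v in combinations(neighbors, 2) if v in graph[u]
    )
    return {"degree": degree, "neighbor_links": links_among_neighbors}

print(graph_features("person_a"))  # {'degree': 2, 'neighbor_links': 1}
```

The point of the sketch is how mundane the inputs are: ordinary metadata, aggregated at scale, yields the features that scoring systems are said to consume.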
Data volumes, compute, and human ties
The functional picture often drawn by reporting combines three operational elements:
- Massive ingestion and storage of surveillance and communications data on commercial clouds, quantified in petabytes.
- AI and ML models (commercial and bespoke) used to transcribe, translate, extract entities, map relationships, and score targets.
- Organizational ties — ex‑military officers joining corporate teams, or corporate employees embedded in military projects — that make rapid technical support and customization possible.
How algorithmic targeting is described to work
From interception to a score
The general pattern described in public reporting and whistleblower accounts is a multi‑stage pipeline:
- Data collection: drone imagery, checkpoint cameras, intercepted calls and messages, biometric captures, and administrative records are collected by intelligence units.
- Ingest and storage: these data are uploaded to cloud storage for indexing and batch or streaming analysis.
- Preprocessing: transcription and automatic translation convert audio to text; entity extraction identifies names, phone numbers, and locations.
- Feature construction: relationship graphs, family histories, movement patterns, and communication graphs are combined as features that feed models.
- Scoring and ranking: algorithms assign numeric scores (for example, a 0–100 scale is reported in some accounts) to prioritize targets for human analysts.
- Human review and operational decision: analysts — under time pressure and with imperfect context — review high‑scoring items and recommend or authorize kinetic action.
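The stages above can be condensed into a deliberately simplified Python sketch. The field names, stage stubs, weights, and the 0–100 clipping are all assumptions made for illustration; public reporting describes the pattern, not the mechanics, so this is a schematic reconstruction rather than any actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Intercept:
    """One ingested item; all fields are invented for illustration."""
    audio_id: str
    transcript: str = ""          # filled by speech-to-text
    entities: list = field(default_factory=list)
    features: dict = field(default_factory=dict)
    score: float = 0.0            # 0-100 scale, as some accounts describe

def transcribe_and_translate(item: Intercept) -> Intercept:
    # Stand-in for commercial speech-to-text / translation models.
    item.transcript = f"<translated text for {item.audio_id}>"
    return item

def extract_entities(item: Intercept) -> Intercept:
    # Stand-in for entity extraction (names, numbers, locations).
    item.entities = ["entity_1", "entity_2"]
    return item

def build_features(item: Intercept) -> Intercept:
    # Stand-in for relationship-graph and movement-pattern features.
    item.features = {"graph_degree": 3, "flagged_contacts": 1}
    return item

def score(item: Intercept, weights: dict) -> Intercept:
    # A linear weighting clipped to 0-100; real scoring mechanics
    # are not public, so this is purely schematic.
    raw = sum(weights.get(k, 0.0) * v for k, v in item.features.items())
    item.score = max(0.0, min(100.0, raw))
    return item

WEIGHTS = {"graph_degree": 10.0, "flagged_contacts": 25.0}
REVIEW_THRESHOLD = 50.0

items = [Intercept(audio_id=f"clip_{i}") for i in range(3)]
pipeline = [transcribe_and_translate, extract_entities, build_features]
for item in items:
    for stage in pipeline:
        item = stage(item)
    item = score(item, WEIGHTS)

# Only high-scoring items reach the human review queue, which is
# where the triage benefit and the over-reliance risk both live.
review_queue = [i for i in items if i.score >= REVIEW_THRESHOLD]
print([(i.audio_id, i.score) for i in review_queue])
```

Note the design consequence: every stage is a commodity component, so the pipeline inherits the error profile of each commercial model it strings together.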
The role of “hallucination” and translation errors
Commercial speech‑to‑text and translation models are not infallible. They can hallucinate — produce plausible but fabricated text — or mistranslate dialects or noisy audio. In warfare contexts, where nuance matters and mistakes can be fatal, those errors can have catastrophic downstream consequences. Multiple accounts describe plausible scenarios in which a mistranslation or a mis‑attributed relationship graph contributed to a wrongful targeting decision.
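A basic mitigation, sketched below in Python with invented data and thresholds, is to gate low-confidence machine output out of automated downstream analysis and route it to a human listener instead. The per-segment confidence field mirrors what many commercial speech-to-text APIs expose, but the policy shown is illustrative, not a description of deployed practice.

```python
# Hypothetical speech-to-text output: segments with model confidence.
# The numbers and the gating policy below are invented for illustration.
segments = [
    {"text": "meeting at the market tomorrow", "confidence": 0.93},
    {"text": "weapons cache in the basement", "confidence": 0.41},  # plausible hallucination
]

CONFIDENCE_FLOOR = 0.80

def gate_transcript(segments: list[dict]) -> tuple[list[str], list[dict]]:
    """Pass only high-confidence text downstream; route the rest to humans."""
    accepted, needs_human_review = [], []
    for seg in segments:
        if seg["confidence"] >= CONFIDENCE_FLOOR:
            accepted.append(seg["text"])
        else:
            # Low-confidence output is where hallucination and dialect
            # mistranslation risk concentrates; it should never flow
            # silently into entity extraction or scoring.
            needs_human_review.append(seg)
    return accepted, needs_human_review

accepted, flagged = gate_transcript(segments)
print(accepted)  # ['meeting at the market tomorrow']
print(flagged)   # the 0.41-confidence segment, held for human listening
```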
Strengths claimed by proponents — and why they matter
Advocates for using commercial cloud and AI in military operations emphasize clear operational benefits:
- Scale and speed: commercial clouds provide elastic compute that can process massive datasets quickly, enabling faster decision cycles in dense urban battlefields.
- Rapid deployment and iterative improvement: off‑the‑shelf models and managed services shorten the time from concept to operational deployment compared with bespoke in‑house builds.
- Resource efficiency: scoring systems help prioritize scarce human analytical time, focusing effort where it may be most needed.
Risks, failures, and accountability gaps
Technical failure modes with lethal consequences
- Translation and transcription errors: spoken dialects, noisy channels, and context‑dependent phrases regularly confound automated systems; their errors can propagate downstream as if they were verified intelligence.
- Data quality and biased inputs: incomplete or historically biased datasets — for example, law‑enforcement or administrative lists — can encode prejudiced priors into scores. High scores can thus reproduce or amplify discriminatory patterns.
- Over‑reliance on automation: if downstream workflows treat model outputs as authoritative due to time pressure, the “human‑in‑the‑loop” safeguard becomes perfunctory rather than substantive, as the toy model below illustrates.
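The last point can be made concrete with a toy capacity model: hold analyst time fixed, grow the machine-generated queue, and watch per-item scrutiny collapse. The numbers are invented; the structural squeeze is the point.

```python
# Toy model of a human review bottleneck. All numbers are invented;
# the structural point is that fixed analyst capacity plus a growing
# machine-generated queue squeezes per-item scrutiny toward zero.

SHIFT_MINUTES = 480          # one analyst shift
MEANINGFUL_REVIEW_MIN = 10   # assumed minimum for a substantive check

for queue_size in (20, 100, 500):
    minutes_per_item = SHIFT_MINUTES / queue_size
    substantive = minutes_per_item >= MEANINGFUL_REVIEW_MIN
    print(
        f"queue={queue_size:4d}  "
        f"minutes/item={minutes_per_item:5.1f}  "
        f"substantive review possible: {substantive}"
    )

# queue=  20 -> 24.0 min/item: review can be substantive
# queue= 100 ->  4.8 min/item: review is already perfunctory
# queue= 500 ->  1.0 min/item: the "human in the loop" is a rubber stamp
```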
Legal and moral opacity
Commercial providers often operate under confidentiality and proprietary constraints that limit external audit. When proprietary cloud logs, model weights, and data provenance are opaque, independent verification of whether a specific algorithm or dataset contributed to a harmful action becomes legally and technically difficult. That opacity complicates compliance with international humanitarian law obligations — for example, the duty to distinguish civilians from combatants and to take precautions to minimize civilian harm.
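One concrete ingredient of independent verification, sketched below in Python under generic assumptions, is a tamper-evident audit log in which each entry commits to its predecessor's hash. This is a standard technique, not a description of any provider's actual logging, which is precisely the opacity at issue.

```python
import hashlib
import json

# Minimal tamper-evident audit log: each entry commits to the previous
# entry's hash, so after-the-fact edits or deletions break the chain.

def append_entry(log: list[dict], event: dict) -> None:
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append({**body, "entry_hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != recomputed:
            return False
        prev_hash = entry["entry_hash"]
    return True

log: list[dict] = []
append_entry(log, {"action": "dataset_ingested", "dataset": "example_1"})
append_entry(log, {"action": "model_invoked", "model": "example_model"})
print(verify_chain(log))        # True
log[0]["event"]["dataset"] = "tampered"
print(verify_chain(log))        # False: the edit breaks the hash chain
```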
Corporate governance and the “downstream use” problem
Companies commonly rely on contractual terms that disclaim downstream liability, arguing that they sell neutral tools. But when a company provides bespoke configuration, dedicated support, personnel access, or policy changes that enable specific military workflows, the line between neutral supplier and active enabler blurs. Internal protests inside major tech firms over military contracts — including publicized staff actions and consequent disciplinary responses — underscore the reputational and governance stress this raises.
Labor resistance and corporate responses
Employee activism has become a salient part of this story. Worker groups at major firms organized protests and campaigns to demand limits on military-facing contracts, under named campaigns such as “No Azure for Apartheid,” with public vigils and occupations reported. Company reactions have varied: some firms opened internal reviews and issued public statements; others are reported to have disciplined or dismissed employees involved in protests. These flashpoints show internal ethical friction and the operational risk to corporate culture and recruiting when employees believe their employers contribute to harm.
Narrative control and the information battlefield
Beyond backend infrastructure and AI pipelines, social media platforms and search engines shape perception in real time. The contested terrain includes:
- Platform moderation policies that can suppress or amplify specific content streams.
- PR and advisory contracts that may be used to promote government messaging or counter criticism.
- Personnel hires and policy roles that influence how content is flagged and prioritized.
Caveat: some specific claims about individual corporate PR contracts or sums may be reported by individual outlets or activist organizations and remain contested or insufficiently corroborated in public records. Where a claim is reported by a single outlet without independent confirmation, it should be treated with caution pending further verification.
Legal, ethical, and policy implications
International humanitarian law (IHL) and algorithmic decision support
IHL demands distinction, proportionality, and precautions. Algorithmic systems that triage and recommend kinetic actions must be integrated into workflows that ensure these norms are respected. The central legal questions include:
- Who made the final decision to strike: the human operator, the analyst, or the algorithmic recommendation?
- Were feasible precautions taken to verify identity and to minimize civilian harm? The opacity of proprietary systems complicates such determinations.
- Can companies be held accountable under domestic or international law for providing systems that facilitate unlawful acts? Legal doctrine is still developing on the liability of vendors whose products are adapted or repurposed for misuse.
Corporate ethics and export controls
The dual‑use nature of modern AI and cloud services challenges traditional export‑control frameworks designed for physical weapons. Policymakers and regulators may need to update controls, procurement rules, and compliance obligations to reflect the real operational value of commercial software and cloud services in conflict scenarios.
Recommendations for accountability and risk mitigation
- Transparency and auditability: companies should publish redacted transparency reports about government and defense contracts, including independent third‑party audits of downstream uses where feasible.
- Strict “do no harm” clauses and enforceable contractual limitations: contracts with governments should include enforceable terms limiting the use of services for targeting that risks civilian harm.
- Human‑centered operational safeguards: preserve meaningful human decision authority with adequate time, context, and investigative resources to assess high‑risk automated recommendations.
- International norms and export rules: multilateral bodies should develop common standards that cover the export and deployment of high‑risk AI systems for military use.
- Worker voice and whistleblower protections: firms that supply sensitive capabilities should establish safe channels for employees to raise concerns and require independent review panels for contentious contracts.
What is verified, what is disputed, and what remains unproven
- Verifiable and corroborated: the surge in military use of commercial cloud and AI tools after October 2023, and broad involvement of several major technology providers in supplying cloud, compute, and managed services to Israeli government clients, are documented across multiple investigative reports.
- Corroborated operational patterns: the basic architecture — ingestion of intercepted communications, use of speech‑to‑text and translation models, feature construction, and algorithmic prioritization — is consistently described across sources.
- Reported system names and scoring details: accounts naming systems such as “Lavender” and “Gospel,” and describing score ranges or specific scoring mechanics, appear in investigative reporting and whistleblower accounts; multiple outlets reproduce these names, giving them some corroboration.
- Disputed or insufficiently corroborated claims: specific contract sums, alleged PR fees, or single‑source allegations about particular firm behavior (for example, precise dollar amounts for particular PR contracts) may appear in some outlets but lack independent public confirmation and should be treated cautiously until corroborated. These items are flagged for caution within the public record.
The wider stakes for the technology industry and society
The intersection of commercial tech and lethal state power calls for a reckoning on several levels. Corporations face reputational and legal risk when their technologies are linked to civilian harm. Governments face a policy gap if procurement systems and export controls do not reflect the operational reality that modern clouds and AI models are strategic infrastructure. And civil society faces new obstacles to accountability where private platforms mediate evidence, public narrative, and remedial processes.
For technologists and policy makers, the lesson is blunt: dual‑use software is not ethically neutral. The choices engineering teams, product managers, and executives make about partnerships, access controls, and transparency materially shape how modern conflict unfolds.
Conclusion
Commercial cloud platforms and general‑purpose AI models have become force multipliers in contemporary conflicts. They offer militaries unprecedented speed and scale, but their use also widens the gap between operational efficiency and humanitarian safety. Where commercial providers supply the compute, code, and policy levers that enable rapid targeting and narrative control, accountability must follow: through enforceable contractual safeguards, independent audits, worker protections, and updated international norms.
The debate is not academic. The technologies now at the heart of this controversy — storage measured in petabytes, speech‑to‑text models, relationship‑graph scoring systems, and global social platforms — are the infrastructure of modern life. When they are repurposed without robust oversight, the costs are paid in civilian lives and eroded trust. The urgency of reform is clear: regulation, corporate governance, and public scrutiny must evolve as fast as the technologies they seek to govern.
Source: Stop the War, “Big Tech makes war efficient, engineering algorithms to murder and PR machines to conceal it”