Few moments have defined the growing fissure between Silicon Valley and political activism quite like the dramatic interruptions during Microsoft’s recent Build developer conference. From the outset of Satya Nadella’s opening keynote, a new wave of direct-action protest burst onto the global stage—not from outside the tech industry, but from within its very ranks—marking an escalation in the ongoing debate over the ethical responsibilities of software giants when their tools intersect with global conflicts.
Microsoft Build 2025: A Tech Showcase Interrupted
This year’s Build conference was intended as a coronation of Microsoft’s growing dominance in cloud and AI. Instead, it became a crucible for dissent, with not one but multiple disruptions by protesters challenging Microsoft’s business ties to the Israeli military. The first protester to interrupt Nadella’s keynote was Joe Lopez, a firmware engineer and core organizer of the “No Azure for Apartheid” (NOAA) collective. Lopez’s interruption was pointed and explicit, demanding Nadella “show them how Microsoft is killing Palestinians,” an accusation that instantly ricocheted across social media and industry news feeds.

Within moments, a second protest threw the event further off script. This time, a former Google employee seized the stage, urging all tech workers to recognize tech’s complicity in what he described as “the Israeli genocide against Palestinians.” This refrain—casting technological infrastructure as a conduit for military operations and surveillance—has become the rallying cry of a cross-company movement that includes not only NOAA but also affiliates within Google and Amazon.
The tension did not dissipate after these high-profile outbursts. On the second day of Build, Jay Parikh, Microsoft’s Executive VP of CoreAI, found his session interrupted by a Palestinian tech worker pleading, “My people are suffering. Cut ties with Israel! No Azure for apartheid! Free, free Palestine!” NOAA later confirmed that it supported all protesters, ensuring the message—amplified through live tweets and viral video clips—spread through the developer community and beyond.
Core Claims: Microsoft’s Azure and Israeli Military Use
At the center of these protests is a single, combustible question: To what extent, if at all, is Microsoft’s Azure cloud and AI technology complicit in the Israeli military’s operations in Gaza, especially in light of catastrophic civilian casualties? Protest groups like NOAA and the wider BDS (Boycott, Divestment, and Sanctions) movement allege that Microsoft “knowingly provides Israel with technology, including artificial intelligence, that is deployed to facilitate grave human rights violations, war crimes, crimes against humanity (including apartheid), as well as genocide.” These accusations cite investigations, notably one led by The Guardian in partnership with the Israeli-Palestinian publication +972 Magazine, documenting the Israeli military’s urgent turn to Microsoft to meet new technological needs during the ongoing conflict.

The specifics are complex. Microsoft’s business with the Israeli Ministry of Defense (IMOD) primarily involves cloud services, by way of the Azure platform, as well as AI development tools. Critics point to public contracts and cloud modernization projects as evidence of a direct line from Redmond to the battlefield.
According to BDS, “Microsoft provides the Israeli military with Azure cloud and AI services crucial in empowering and accelerating Israel’s genocidal war on 2.3 million Palestinians in the illegally occupied Gaza Strip.” Given the sheer scale of Azure’s reach—including data storage, analytics, and machine learning tools—the concern is that any support, even for theoretically non-military functions, enables broader military logistics, intelligence processing, and target acquisition systems.
Microsoft’s Response: Denial and Limited Transparency
In a statement released just days before Build, Microsoft attempted to distance itself from direct complicity. The company asserted, “Based on our review, including both our internal assessments and external review, we have found no evidence that Microsoft’s Azure and AI technologies, or any of our other software, have been used to harm people or that IMOD has failed to comply with our terms of service or AI Code of Conduct.”

However, the statement also included a notable caveat: “Microsoft does not have visibility into how customers use our software on their own servers or other devices. This is typically the case for on-premise software. Nor do we have visibility into the IMOD’s government cloud operations, which are supported through contracts with cloud providers other than Microsoft. By definition, our reviews do not cover these situations.”
This qualification is significant. It is, in effect, an admission of the very opacity that protesters claim allows “plausible deniability.” Microsoft manages, audits, and, to a degree, controls what happens inside its own public cloud, but it relinquishes insight—by design—in on-premise or hybrid cloud arrangements. This lack of robust end-use monitoring is not unique to Microsoft; it is endemic in the industry. Nevertheless, it is precisely what has drawn so much ire, allowing companies to profit from contracts with powerful clients while abdicating real responsibility for downstream uses of their tools.
Internal Dissent and Retaliation
The protests at Build follow a pattern of internal activism and, notably, apparent retaliation. Less than two months ago, two Microsoft employees, Ibtihal Aboussad and Vaniya Agrawal, publicly interrupted Microsoft’s 50th anniversary event to denounce its work with IMOD in Gaza. Both were terminated shortly thereafter, a pattern consistent with tech industry responses to whistleblowing and protest across other firms like Google and Amazon.

After being ejected from Build, Joe Lopez reportedly sent an internal email to Microsoft staff, accusing leadership of lying about Azure’s role: "Leadership rejects our claims that Azure technology is being used to target or harm civilians in Gaza. Those of us who have been paying attention know that this is a bold-faced lie. Every byte of data that is stored on the cloud (much of it likely containing data obtained by illegal mass surveillance) can and will be used as justification to level cities and exterminate Palestinians." While the rhetoric is incendiary, the substance points to a legitimate gray area: large-scale data collection, storage, and analysis tools can be harnessed for both benign and harmful ends.
Context: The Gaza Conflict and Civilian Harm
All this unfolds against the catastrophic backdrop of Israel’s ongoing military campaign in Gaza—a campaign that began in October 2023 after a deadly Hamas-led incursion into southern Israel. According to reporting from The Guardian, The New York Times, and United Nations estimates, more than 53,000 Palestinians have been killed, with some figures far higher. Civilian infrastructure—hospitals, schools, water, and energy systems—has been ground to dust.

Human rights groups, including Human Rights Watch and Amnesty International, have repeatedly warned that advanced surveillance and AI-assisted targeting (allegedly enabled by cloud platforms like Azure) accelerate and amplify harm, especially when deployed in urban warfare scenarios. As yet, no public evidence has directly tied specific Azure-powered applications to a particular attack in Gaza. However, the BDS campaign and allied researchers have made a circumstantial case: that Microsoft’s business in Israel is so deeply woven into the country’s high-tech defense ecosystem—including “Project Nimbus,” a $1.2 billion cloud initiative spanning Google, Amazon, and other contractors—that complete ignorance of use cases is implausible.
Comparing Transparency: Microsoft vs. Its Peers
Microsoft’s review, and its statement that “we found no evidence our technology was used to harm people,” adopts virtually the same language seen in official communications from Google, Amazon, and Oracle when facing similar protests. All assert strict compliance with terms of service and external audits, while simultaneously conceding that once their technologies are deployed on government premises or in hybrid clouds, oversight ends.

By contrast, IBM and Salesforce, amid pressure on unrelated international contracts, have voluntarily conducted independent, externally verified ethical reviews, and in some cases curtailed business or imposed technical limitations on sensitive services. This sets a precedent, but one Microsoft has not fully emulated. Such transparency is not merely a PR maneuver; independent reviews, especially if conducted by respected human rights and technical experts, could help restore faith in corporate responsibility frameworks.
Tech Worker Solidarity: A New Phase
The intensity and visibility of this year’s Build disruptions suggest a new phase in tech worker activism. Unlike previous protests that relied on external advocacy groups, today’s dissent often comes from well-placed insiders with technical expertise. The “No Azure for Apartheid” movement is only the latest expression of frustrated staff—mirrored in Google’s “Drop Project Nimbus” campaign and Amazon’s recent worker walkouts.

This groundswell has proven effective in raising public awareness and, in some cases, forcing companies to clarify or even cancel contracts. Yet, as seen in the swift termination of Aboussad and Agrawal, retaliation remains commonplace, and there is little legal clarity about how far internal dissent can be tolerated.
Industry Risks: Reputation, Retention, and Regulation
For Microsoft, the risks extend well beyond headlines. Reliance on large government, defense, and intelligence contracts is a core driver for Azure’s growth, and the sheer scale of cloud operations means even minor disruptions can have amplifying effects throughout the global workforce. Just as critically, these controversies threaten Microsoft’s carefully cultivated image as a responsible steward in the burgeoning fields of AI ethics and cloud security.

Several notable risks emerge:
- Talent Retention: Top engineers increasingly factor corporate ethics into employment decisions. If self-described whistleblowers continue to be fired, Microsoft may face long-term attrition and suffer from negative brand associations among both new grads and experienced hires.
- Regulatory Scrutiny: The European Union, following its AI Act, and U.S. lawmakers probing defense tech, may impose stricter reporting and ethical auditing requirements on multinational firms.
- Client Trust: As enterprise customers become more ESG-focused, association with controversial contracts could lead to boycotts or loss of business, especially among international NGOs, universities, and private sector partners wary of reputational spillover.
- Shareholder Advocacy: Recent years have seen a surge in activist investors pressing for greater transparency on human rights and war-related contracts.
Opportunities for Meaningful Accountability
If Microsoft’s leadership is serious about restoring trust and quelling dissent, several options remain:

- Commission an Independent Review: A third-party audit by recognized human rights and security experts with a mandate for full transparency could address gaps left by internal assessments.
- Publish a Transparency Report: Details on the scale, scope, and oversight of government and defense contracts—modeled after existing transparency reports in cybersecurity—would empower stakeholders to scrutinize company claims.
- Empower an Ethics Committee: Expand the scope and authority of internal AI ethics boards to include binding recommendations regarding high-risk clients or use cases.
Critical Analysis: Strengths and Shortcomings
On the positive side, Microsoft’s public acknowledgment of its limited oversight and willingness to state its terms on the record sets it apart from some industry rivals. The company’s broad commitment to AI ethics, in theory, creates space for dialogue and reform. Yet, these steps are far from enough given the scale of the allegations.

The core weakness remains: without robust, verifiable transparency, corporate assurances amount to little. Without independent accountability, there is scant evidence that oversight can be enforced, particularly when profit incentives clash with ethical or human rights considerations.
Additionally, the swift firing of dissenters stands in stark contradiction to Microsoft’s public embrace of “growth mindset” and employee empowerment. If whistleblowers are silenced through termination rather than engaged, the cycle of protest and reputational risk will only intensify.
Conclusion: The Debate That Won’t Fade
The defining battles in technology are no longer over features or market share; today, they concern the moral compass of the companies that build the digital world. The protests at Microsoft Build 2025 are not an aberration but a harbinger—a sign that questions of ethics, human rights, and wartime complicity are inseparable from commercial ambition.

Whatever the outcome of further internal reviews, regulatory scrutiny, or ongoing activism, one thing is clear: the nexus between big tech and geopolitical conflict is under a new, combative spotlight. As Microsoft seeks to steer between transparency, profit, and employee activism, its approach to its own power—and its willingness to subject itself to meaningful outside scrutiny—will spell the difference between enduring trust and continued turmoil. The developer community, and the world at large, will be watching.
Source: PC Gamer, “Microsoft's Build conference interrupted by renewed protests over its ties with the Israeli military”