Microsoft’s historical role in the world of computing is nearly unrivaled, not simply due to the dominance of its operating systems, but also because of how its technology has become entangled in geopolitical debates. Few moments illustrate this nexus as clearly as a recent controversy sparked by renowned ambient composer Brian Eno, who publicly called out Microsoft over its connections to the Israeli government and the company’s role in modern surveillance and warfare technology. Eno’s accusations, amplified by his past partnership with Microsoft as the creator of the iconic Windows 95 startup chime, reignite deep questions about the responsibilities that global technology firms bear in conflict zones—and whether “business as usual” is ever defensible in such contexts.

[Image: High-tech data center servers connected globally with a digital map highlighting Israel.]
An Iconic Collaboration and a Modern Reckoning

In the mid-1990s, Brian Eno was an unlikely but ultimately pivotal figure in Microsoft’s branding, delivering the legendary six-second Windows 95 startup sound. For both parties, this collaboration was emblematic of a time when personal computing felt creative and humanized. But, as Eno himself now makes clear, the evolution of Microsoft’s business—from consumer software to vast, unseen cloud infrastructures supporting powerful governments—marks a radical shift.
Reflecting on his role, Eno said, “I gladly took on the project as a creative challenge,” underscoring that, at the time, he would never have believed Microsoft “could one day be implicated in the machinery of oppression and war.” The phrase is a sobering commentary on the company’s growth from a software innovator to a key backbone of state-level technology operations, which now include cloud services, artificial intelligence, and massive data capabilities used by governments worldwide.

Microsoft’s Services for Israel’s Ministry of Defence: Transparency or Obfuscation?

Central to Eno’s critique is Microsoft’s ongoing provision of “software, professional services, Azure cloud services, and Azure AI services, including language translation” to Israel’s Ministry of Defence (IMOD). This partnership, disclosed in a recent Microsoft blog post, is described by the company as “a standard commercial relationship.” The company claims that all such engagements are “bound by Microsoft’s terms of service and conditions of use,” requiring “core responsible AI practices” and “prohibit[ing] the use of our cloud and AI services in any manner that inflicts harm on individuals or organisations or affects individuals in any way that is prohibited by law.”
Crucially, Microsoft asserts that it has “found no evidence that Microsoft’s Azure and AI technologies… have been used to harm people or that IMOD has failed to comply with our terms of service or our AI Code of Conduct.” Yet, in a notable admission, the company also stated it “does not have visibility into how customers use our software on their own servers or other devices.” This caveat is significant, opening the door to a central critique: the lack of true oversight once technology leaves Microsoft’s direct control.
This standard disclaimer, intended to frame the arrangement as both typical and safe, has drawn ire from numerous quarters. Critics like Eno argue that providing advanced AI and cloud services to a government “engaged in systematic ethnic cleansing is not ‘business as usual’. It is complicity.” The test for Microsoft and its peers is not just whether they intend their technologies for peaceful ends, but whether they are willing to confront uncomfortable truths about their clients and partners.

The Question of Complicity: Moral and Legal Dimensions

Eno’s contention—“If you knowingly build systems that can enable war crimes, you inevitably become complicit in those crimes”—forces a re-examination of ethical boundaries in tech. His assertion is echoed by a growing body of academics, legal scholars, and human rights experts, who point out that the line between supplying “neutral tools” and direct involvement in violations of international law is increasingly blurred in the era of AI and cloud computing.
International law is itself evolving to grapple with this ambiguity. Under principles set by the United Nations, companies can be held responsible not merely for direct actions, but also for knowingly enabling state actors in scenarios where human rights abuses are likely. The United Nations Guiding Principles on Business and Human Rights emphasize both due diligence and remediation in supply chains and technology deployments. If a company’s platform serves as the infrastructure for surveillance, targeting, or the suppression of civilian populations, the case for “neutrality” becomes ever more tenuous.
Yet, there’s a counter-argument presented by defenders of companies like Microsoft: absent direct evidence of misuse, it’s inappropriate or even unfair to assume complicity. Microsoft’s official stance is predicated on the assertion that, based on its internal and external reviews, there is “no evidence” of harmful deployment. This mirrors statements by other major cloud vendors embroiled in similar controversies, from Amazon to Google.

Employee Dissent and Public Backlash

The matter is not merely theoretical. Discontent within Microsoft itself has surfaced, as demonstrated during a recent and widely publicized interruption of CEO Satya Nadella’s keynote by Joe Lopez, a company firmware engineer. Lopez shouted, “How about you show how Israeli war crimes are powered by Azure?”—a striking public protest that reflects broader internal dissatisfaction among tech workers regarding their employers’ military and intelligence contracts.
Such moments fit within a rising tide of technology worker activism, seen previously in high-profile walkouts at Google and Amazon over issues ranging from climate change to immigration enforcement. For Microsoft, whose “growth mindset” culture purports to value ethical leadership and responsible innovation, the stakes are reputational as well as operational.

The Limits of Corporate Policies: Responsible AI and Real-World Enforcement

Microsoft’s defense leans heavily on its Responsible AI standards, which ostensibly prevent misuse of its services. The company’s AI Code of Conduct emphasizes principles like human oversight, fairness, and accountability. But these frameworks face major obstacles in environments marked by secrecy, power asymmetries, and the military’s inherent lack of transparency.
More pointedly, Microsoft acknowledged the limitation: “We do not have visibility into how customers use our software on their own servers or devices.” Such statements are not unique. Cloud and platform providers uniformly warn that after deployment, client monitoring is limited if not impossible, particularly in secure or classified installations.
Consequently, the risk of “plausible deniability” becomes built into the business model. Technology providers can publicly champion ethical standards while relying on contractual and logistical barriers to genuine oversight. For critics, this is not just a practical limitation, but a deliberate abdication of ethical responsibility.

Israel, Microsoft, and the Geopolitics of Technology

For many analysts, the controversy surrounding Microsoft’s relationship with Israel’s Ministry of Defence encapsulates a broader trend: the deep intertwining of Western tech companies with militaries and intelligence agencies across the globe. Microsoft, Google, Amazon, and Oracle all maintain high-value contracts with the U.S. Department of Defense and its allies, supplying cloud and artificial intelligence systems that underpin modern warfare, surveillance, and military logistics.
Israel, as a close U.S. ally and leader in military technology, is a significant partner for these firms. Israeli government agencies, including the Ministry of Defence, have rapidly adopted advanced analytics and cloud infrastructure to expand intelligence gathering, operational planning, and automated decision-making—all processes potentially enabled, in part, by Microsoft’s platforms.
This aligns with research from the Carnegie Endowment for International Peace, which identifies a global “AI arms race,” in which both democratic and authoritarian governments are scrambling to leverage commercial technology for strategic advantage. The risks, as flagged by numerous human rights organizations, include the potential misuse of AI for population surveillance, targeted assassinations, automated border control, and predictive policing.

Human Rights Concerns in Palestine: Claims and Counterarguments

Eno’s denunciation resonates with a growing international outcry about civilian casualties and widespread destruction in Gaza and the West Bank during Israeli military operations. Major human rights groups—including Amnesty International and Human Rights Watch—have accused Israeli authorities of policies that amount to apartheid and, in some cases, war crimes. UN experts and multiple government commissions have echoed these concerns, calling for accountability from state and non-state actors alike.
These accusations have not gone unchallenged. The Israeli government categorically rejects claims of systematic ethnic cleansing or genocide, maintaining that its use of technology is targeted at combating terrorism and protecting its citizens. Supporters of Israel’s defense actions argue that such technology, including AI-driven surveillance and targeting systems, actually serves to minimize harm by improving the precision of military operations.
Against this backdrop, Microsoft frames its partnership with the IMOD as a lawful and responsible engagement, bounded by strict terms of use and regular reviews. Still, critics counter that no set of contractual promises or periodic audits can guarantee against misuse—especially when clients are sovereign nation-states.

Corporate Responsibility: Where Should the Line Be Drawn?

The explosion of cloud-based technology into the security, defense, and intelligence sectors creates an urgent need to revisit the notion of “vendor neutrality.” Industry watchers point out that service providers no longer sell discrete products; they maintain, update, and optimize vast, integrated infrastructures that implicitly shape how state clients exercise power.
There are growing calls—echoed by Eno—for technology giants to suspend services that might directly or indirectly support violations of international law. Some legal scholars have gone further, advocating for explicit “human rights due diligence” not just at the point of sale, but throughout the entire lifecycle of a government contract.
This would be a radical break from prevailing industry practices. Most large vendors argue—often convincingly from a legal perspective—that liability ends at the point of license transfer or deployment, and that sovereign states bear primary legal responsibility for their actions.
But international precedent is evolving. Notably, U.S. and European courts have sometimes held firms liable for willful blindness in export and arms control cases. The tech sector’s scale, complexity, and centrality in the machinery of modern state power may, over time, draw more intense legal and regulatory focus.

The Impact of Boycotts and Ethical Disassociation

Eno’s personal response—pledging to donate his Windows 95 chime fee to support Gaza—mirrors broader movements for divestment and ethical disassociation in technology and finance sectors. The Boycott, Divestment, Sanctions (BDS) movement has long targeted firms perceived as enabling Israeli military actions, and similar campaigns have affected Google, Amazon, and others.
Yet the results have been mixed. Despite increasing activism, major technology providers have, in many cases, doubled down on government and defense contracts, citing strategic importance and national security imperatives. At the same time, some companies have, quietly or otherwise, withdrawn from especially controversial programs under sustained pressure.
For Microsoft, the reputational calculus is complex. The company’s aggressive push for AI-first dominance and government contracts may yield immense financial rewards, but as controversies like this one multiply, the risk to its public image, employee morale, and even long-term market position rises. Trust is a hard-won asset—and in the context of human rights, it is easily lost.

Strengths: Transparency, Due Diligence, and Ethical Aspirations

To Microsoft’s credit, the company has proactively disclosed details about its relationship with the Israeli Ministry of Defence—something many competitors do not do. The existence of a public Responsible AI framework, and periodic third-party reviews, represents a material improvement over the opacity that historically characterized defense tech contracts.
Furthermore, Microsoft has—at times—walked away from especially problematic engagements. In 2020, for example, the company announced it would not sell facial recognition technology to U.S. police departments until federal regulation was in place, citing concerns about racial bias and civil liberties. This demonstrates a capacity for principled decision-making when public scrutiny is intense and the ethical stakes are clear.
Additionally, Microsoft invests heavily in philanthropic work and digital skills training in conflict-prone regions, providing a counter-narrative to the “amoral technocrat” critique that sometimes dogs the tech giants.

Risks: Technological Ambiguity, Limited Oversight, and Ethical Fatigue

Nevertheless, the limitations of Microsoft’s approach are significant. The “black box” nature of modern cloud and AI infrastructure—where providers have little or no ongoing insight into client use—creates conditions ripe for abuse. High-minded principles mean little if they cannot be monitored or enforced, especially when clients are powerful, often-secretive state agencies.
Moreover, the rapid pace of AI development outstrips the ability of law and regulation to keep up. Even well-intentioned companies can find their platforms appropriated or manipulated by bad actors. The ongoing controversies about the use of commercial AI tools in drone warfare, facial recognition, and predictive policing amplify these risks.
Perhaps most concerning is the phenomenon of “ethical fatigue.” As controversies proliferate and become more complex, the power of public outcry to force real change seems to diminish. Companies develop ever-more sophisticated crisis communications, while activists and employees cycle through new campaigns with limited tangible results.

The Future: Accountability in the Age of Geopolitical Tech

The debate unleashed by Eno’s challenge to Microsoft is not going away. As world events continue to spotlight the role of technology in conflict, the expectation that global firms act not merely as profit-seeking entities but as ethical stakeholders will grow. Governments may begin to demand more rigorous human rights risk assessments as a condition for public contracts. Investors and consumers may show less tolerance for companies that hide behind legal technicalities when their technologies fuel violence and repression.
Microsoft, for its part, faces a stark choice. It can double down on the logic that there are no “bad clients,” only bad actors, and hope the storm passes. Or it can take a more proactive stance, tightening due diligence, building in greater transparency, and—where the facts warrant it—walking away from business that cannot be ethically justified.
As Eno observed in his statement, the sounds and systems we build today become part of the world’s structure—sometimes for good, sometimes not. The challenge for Microsoft and its peers is to ensure that the infrastructure of the digital age uplifts human rights rather than abetting their erosion.

Conclusion: The Tech Reckoning Has Only Begun

The controversy ignited by Brian Eno’s denunciation of Microsoft’s ties to the Israeli military demonstrates that the technology sector is now a central player in questions of war, peace, and human dignity. As AI and cloud platforms grow ever more powerful, the ethical stakes rise alongside their commercial value.
For Windows enthusiasts and everyday users, it’s easy to forget that the systems we interact with daily are intricately woven into the machinery of global power. But as the space between consumer applications, cloud infrastructure, and military deployments collapses, the standards we demand of our technology providers must likewise evolve.
Will Microsoft—and the broader industry—rise to the challenge? The answer, as the past months have shown, will not be given in a press release or an internal code of conduct, but in the vigilance, courage, and accountability of those who build, use, and critique the technologies shaping our future.

Source: MusicTech “If you knowingly build systems that can enable war crimes, you inevitably become complicit in those crimes”: Brian Eno calls out Microsoft over ties to Israeli government

Microsoft’s role as a global technology powerhouse has placed it at the intersection of complex geopolitical debates, especially as artificial intelligence and cloud computing reshape national defense capabilities and raise hard ethical questions about providers’ responsibilities in conflict zones. Recent revelations have further sharpened this conversation, as Microsoft formally acknowledged its “standard commercial relationship” with the Israel Ministry of Defence (IMOD), shedding new light on the entanglement of big tech, government contracts, and international humanitarian concerns.

[Image: Digital globe highlighting Africa, Europe, and Asia with cloud computing network connections.]
Microsoft’s Relationship with the Israel Ministry of Defence

Earlier this year, an investigative report by the Associated Press ignited concerns about how Microsoft’s Azure cloud computing and AI technologies were being utilized by the Israeli military. The AP report alleged that Microsoft’s solutions were deployed for “transcribing, translating, and processing intelligence gathered through mass surveillance,” with such data potentially cross-checked with Israel’s “AI-enabled targeting systems.” These claims stoked apprehension among human rights advocates, technologists, and a substantial subset of Microsoft’s own employees, given the context of the ongoing conflict in Gaza and mounting civilian casualties.
In response, Microsoft conducted an internal review, assessing employee testimony and documentary evidence. The company concluded that there was “no evidence to date” that its technologies had been used to harm or target civilians during the conflict. It emphasized that its engagement with the IMOD follows established norms for commercial relationships, a framework under which it supplies software licenses, professional services, Azure cloud capacity, and AI services.

Scope of Microsoft’s Support and Oversight Concerns

Microsoft’s official statement details that the company, like other multinational tech providers, works with the IMOD to bolster Israel’s cybersecurity, particularly defending against external threats. The company revealed that it has “occasionally provided special access” to its technologies beyond the terms of standard agreements. Notably, Microsoft cited its provision of “emergency support to the Israeli government in the weeks following October 7, 2023, to help rescue hostages,” clarifying that these interventions were “limited in scope” and governed by “significant oversight,” with certain requests approved and others denied.
Yet, despite its oversight claims, Microsoft conceded a critical limitation: it does not possess visibility into how customers—including the IMOD—use its software once deployed on their own servers and devices. That means while Microsoft’s terms of service forbid use of its cloud and AI platforms “in any manner that inflicts harm on individuals or organizations,” the actual end use can fall beyond Microsoft’s direct oversight, particularly in sovereign, highly sensitive national defense environments. The company also clarified that Israel’s government cloud operations are generally supported under contracts with cloud providers other than Microsoft, making direct scrutiny even more challenging.

Accountability Amidst Limited Visibility

Microsoft’s recognition of its limited purview is not unique in the tech sector. It reflects a standard challenge in cloud computing: while providers can enforce certain legal agreements and limit access to sensitive features, much of what transpires on customer infrastructure is opaque by design. This architecture, while crucial for data privacy and sovereignty, can frustrate attempts at external ethical review or real-time enforcement of international law. Microsoft asserts that “militaries typically use their own proprietary software or applications” for “surveillance and operations,” and stresses that it “has not created or provided such software or solutions to the IMOD.”
Still, ethical oversight groups remain concerned, arguing that enabling foundational infrastructure for surveillance or kinetic military operations carries distinct moral risk—even if the provider does not directly author “kill chain” applications or targeting software. Given allegations that cloud-based AI such as Microsoft’s could enhance the scale, efficiency, or targeting mechanisms employed in the Gaza conflict, critics urge deeper transparency and, where appropriate, disengagement.

The Debate Inside and Outside Microsoft

The issue has triggered notable internal activism at Microsoft. The “No Azure for Apartheid” petition, launched by current and former employees, expressly calls for the company to publish the full findings of its investigation and to review or curtail its business with clients implicated in human rights abuses. Parallel to this, the Palestinian Boycott, Divestment, and Sanctions (BDS) movement recently advocated a boycott of not just Microsoft’s cloud services, but also its Xbox gaming platform, escalating public scrutiny on the tech giant’s dual commercial and ethical obligations.
Microsoft has responded by seeking to reaffirm its human rights commitments. The company states: “Our commitment to human rights guides how we engage in complex environments and how our technology is used. We share the profound concern over the loss of civilian life in both Israel and Gaza and have supported humanitarian assistance in both places. Based on everything we currently know, we believe Microsoft has abided by these commitments in Israel and Gaza.” The company has presented itself as adhering to “principles on a considered and careful basis,” balancing efforts to save hostages while honoring privacy and the rights of civilians.

Critical Analysis: Strengths and Merits of Microsoft’s Approach

Upholding Legal and Human Rights Standards

Microsoft’s clear articulation of its commercial relationship terms sets a benchmark for transparency in big tech procurement. Detailing both direct commercial contracts and exceptional “emergency” arrangements, the company provides at least partial visibility into how a major American cloud vendor interfaces with foreign military clients—a matter often shrouded in corporate secrecy.
Moreover, Microsoft’s explicit invocation of human rights commitments and its decision to withhold or deny some technology access requests in sensitive situations suggest the presence of active ethical review mechanisms. Though the company ultimately lacks operational visibility into government client activities, its documentation, interviews, and internal investigations represent a step toward accountability. By openly stating that the IMOD is bound by its service terms prohibiting harmful uses, Microsoft positions itself as respecting both the letter and the spirit of international norms.

Support for Humanitarian Assistance

In a region fraught with humanitarian crises, Microsoft’s assertion that it has supported humanitarian efforts in both Israel and Gaza signals a recognition that technological capacity must be coupled with moral responsibility. In the aftermath of violent incidents—like the October 7, 2023 attacks referenced in the company’s statement—facilitating technology to rescue hostages or rebuild civilian infrastructure can save lives and reduce suffering. If Microsoft’s support demonstrably contributed to such outcomes without abetting human rights violations, it offers a positive example of thoughtful corporate citizenship.

Setting Precedent in Tech Sector Accountability

The company’s measured responses, at least in the context of the information made available, illustrate the tension that all global cloud providers face when balancing profitability, regulatory compliance, and ethical stewardship. Microsoft’s admission that it cannot see or tightly control all downstream use of its services is refreshingly candid, and could set expectations for clearer end-user responsibility and shared accountability.

Weaknesses, Risks, and Uncertainties

Inherent Lack of Operational Transparency

Despite Microsoft’s review, its own admission of not having “visibility to the IMOD’s government cloud operations” is a key limitation. This systemic gap undermines the company’s ability to conclusively declare that its technologies are not misused. Commercial cloud environments are, by design, black boxes once deployed on customer premises or in government-controlled data centers. This lack of operational transparency makes external assurance or third-party ethical auditing almost impossible, and leaves the company reliant on contractual rather than technical enforcement methods.

Special Access and Oversight: Room for Ambiguity

The existence of “special access” arrangements—especially the emergency support provided after October 7—introduces an area of potential risk. While Microsoft asserts its oversight was “significant” and access was granted “on a limited basis,” the precise nature and extent of that oversight remain unclear. The ability to approve or deny requests is a positive control, but without independent auditing and transparency regarding rejected requests or the criteria used for such decisions, trust rests on Microsoft’s self-assessment.

Potential for Dual-Use and Escalation

Cloud AI technologies are fundamentally dual-use: tools designed for productivity, communication, or disaster response can also be repurposed for surveillance, data fusion, or even lethal autonomous targeting. Critics rightly note that even if Microsoft does not deploy custom targeting systems for militaries, providing underlying infrastructure effectively enables much of the military capability sought by modern armed forces. The fact that Israel, like many governments, is accelerating its move toward AI-driven intelligence analysis only magnifies these risks if checks and balances are insufficient.

Reputational and Business Continuity Risks

Mounting activist campaigns—both internal (No Azure for Apartheid) and external (BDS)—threaten Microsoft’s reputation, employee morale, and potentially the stickiness of government contracts. If significant numbers of employees or consumers become convinced that Microsoft’s stance is insufficiently robust, the company could face not just reputational damage, but also a possible chilling effect on future commercial agreements in other ethically sensitive regions.

Legislative and Compliance Pressures

Governments around the world, including the United States and members of the European Union, are tightening export controls and compliance requirements for dual-use technologies. The risk for Microsoft—and the cloud industry more broadly—is that failure to adequately address real or perceived abuses could trigger regulatory crackdowns, forced disengagements from lucrative markets, or legal liabilities under wartime conduct statutes and international humanitarian law.

Verifiability of Microsoft’s Claims

A critical review of the underlying claims is necessary. The Associated Press report cited by GamesIndustry.biz provides an independent source for the original allegations that triggered internal reviews and activism. Microsoft’s defense—its finding of “no evidence to date” of misuse in Gaza—rests on employee interviews and document reviews. However, without public release of the full investigation or any independent external oversight, the conclusions remain difficult for third parties to verify. The demand from the No Azure for Apartheid petition for public disclosure of findings is thus both understandable and, from a transparency perspective, justified.
The technical assertion that “militaries typically use their own proprietary software or applications for surveillance and operations… [and] Microsoft has not created or provided such software to the IMOD” aligns with known practices in military IT, where governments combine off-the-shelf and in-house developed solutions. This is corroborated by industry analysis and is consistent with how many state actors—especially those with advanced cyber capabilities like Israel—manage strategic systems.
Microsoft’s stance that it “does not have the ability to see how customers use [its] software on their own servers and devices” is an accurate, if problematic, reflection of current global cloud business models. Sovereign clouds are explicitly designed to shield sensitive data and operational details from foreign or commercial surveillance, privileging national security over vendor oversight. This reality, while strengthening privacy, exposes gaps in post-deployment accountability.

Moving Forward: What’s Next for Microsoft and Big Tech in Conflict Zones?

Demands for Transparency and Accountability

As petitions and calls for public release of investigative findings grow, Microsoft and peers must weigh the merits of increased transparency—possibly even allowing for third-party auditing of select engagements in high-risk regions. Such steps would not entirely close the accountability gap posed by sovereignty and technical opacity, but could offer greater public assurance and set a new industry standard.

Strengthening Contractual and Technical Controls

Microsoft’s example demonstrates the importance of robust contractual language—for instance, explicit prohibitions on using cloud and AI for harm, and emergency-use-only clauses. However, the future may see increasing demand for technical solutions: watermarking, advanced audit logs, or even “zero-knowledge” computation guarantees. It remains to be seen whether such measures would be adopted widely or whether governments would accept them, but they could offer ways for vendors to exercise oversight without breaching client sovereignty.
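To make the “advanced audit logs” idea concrete, below is a minimal sketch, in Python, of a hash-chained (tamper-evident) audit log—the kind of technical control a vendor and a government client could jointly maintain. It is an illustrative assumption only: the class name, event fields, and verification scheme are invented for this example and do not describe any vendor’s actual product.

```python
import hashlib
import json
import time


class HashChainedAuditLog:
    """Append-only log in which each entry commits to the hash of the
    previous entry, so any after-the-fact edit or deletion breaks the chain.
    Hypothetical sketch; not any vendor's real implementation."""

    GENESIS = "0" * 64  # placeholder hash standing in for the first entry's predecessor

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        """Record an event and return its chained hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        record = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
        # Hash the canonical JSON form of the record, chaining it to its predecessor.
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev_hash = self.GENESIS
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if body["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True


# Hypothetical usage: actor names, actions, and scopes are invented for illustration.
log = HashChainedAuditLog()
log.append({"actor": "client-a", "action": "model_inference", "scope": "translation"})
log.append({"actor": "client-a", "action": "data_export", "scope": "bulk"})
print(log.verify())  # True -- flips to False if any earlier entry is modified
```

Because every entry commits to the hash of the one before it, no past record can be quietly altered or deleted without breaking verification; depositing the latest hash with an independent auditor would make tampering detectable even if the log itself remained on sovereign infrastructure.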

Ongoing Ethical Evolution

For all tech giants, navigating commercial opportunities in a world beset by war, repression, and contested sovereignties will only become more complex. Balancing profit and principle, market access and ethical responsibility, will be the defining challenge for cloud providers in the coming decade. Microsoft’s current approach—characterized by partial transparency, acknowledged limitations, and a commitment to human rights—charts a middle path, but the pressure for more decisive action and public accountability continues to build.

Conclusion: Microsoft at the Crossroads

Microsoft’s acknowledgment of its commercial relationship with the Israel Ministry of Defence and its subsequent internal review underscore the profound responsibilities and operational challenges inherent in being a foundational infrastructure provider in an age of AI-powered warfare. While the company has taken measurable steps to assert ethical standards, legal compliance, and limited oversight, its own admission of operational blindness and the calls for fuller transparency reveal lingering vulnerabilities in the existing paradigm.
As conflicts like the one in Gaza continue to draw global outrage and spark activist campaigns, the moral dimensions of technology transcend traditional commercial logic. Consumers, employees, and civil society increasingly demand clarity, accountability, and verification—not just promises—about how products intended for productivity or digital transformation can, under circumstances beyond a vendor’s direct control, shape the course of human lives in both positive and devastating ways.
Microsoft’s navigation of these dilemmas will serve not only as a signal for the rest of the technology sector but as a live, global experiment in reconciling technological innovation with enduring commitments to humanity, law, and peace. The stakes—for the industry, for those affected by conflict, and for society’s relationship with its most powerful technologies—could not be higher.

Source: GamesIndustry.biz Microsoft acknowledges "standard commercial relationship" with Israel Ministry of Defence, conducts internal review of AI services
