The dominance of Big Tech in the digital marketplace has long been a point of contention among policymakers, technologists, and everyday users alike. Nowhere is this monopolistic tendency more evident than in the fast-growing field of artificial intelligence, where the rise of entities like OpenAI raises pressing questions about the future of innovation, user autonomy, and regulatory oversight. The European Union’s Digital Markets Act (DMA) was crafted to address precisely these problems: ensuring fair competition, limiting the power of so-called “gatekeepers,” and fostering a healthy digital ecosystem for all. Yet as the contours of the digital landscape shift, critical blind spots remain, most notably the lack of regulation tailored specifically to AI, which leaves powerful companies to assert near-total control with little accountability.

The Historical Context: Gatekeeping in Silicon Valley

In the annals of technological history, U.S.-based tech giants have repeatedly demonstrated a capacity to entrench themselves as vital gatekeepers. Legal and financial muscle, combined with strategic acquisitions, including the occasional “killer acquisition,” has enabled companies like Google, Microsoft, and Apple to control entire product ecosystems, often to the detriment of competition. The DMA, adopted in 2022, aims to curb such tendencies by imposing obligations on digital platforms with entrenched market positions. For example, Google Chrome must now prompt European users to choose a default search provider rather than assuming Google’s dominance by default, a move designed to promote diversity and reduce lock-in in search markets.
However, as Hartmut Gieselmann, an experienced editor at c’t, notes, regulation has lagged behind the lightning pace of AI development. While the DMA addresses browsers and search engines, AI remains conspicuously absent, giving rise to a fresh crop of gatekeepers. The reasons for this omission, chiefly lobbying pressure and the fear of stifling innovation, merit investigation, but so does the impact of this regulatory gap on global innovation and user rights.

OpenAI’s Ascent: The New AI Gatekeeper

OpenAI’s trajectory from a nonprofit research lab to the beating heart of world-changing AI deployments is remarkable and, for some observers, alarming. With its models integrated into Microsoft’s Copilot and, increasingly, into Apple’s device ecosystem via Apple Intelligence, OpenAI is poised to become the default AI assistant on over 95% of desktop computers worldwide, a figure corroborated by several financial and technology analysts tracking the deployment plans of both Microsoft and Apple. Apple’s deep integration, which automatically forwards complex Siri queries to ChatGPT, and Microsoft’s use of OpenAI models to power productivity features in Windows and other software together ensure that OpenAI’s reach dramatically outstrips even its nearest rivals.
What is more, this position grants OpenAI an unprecedented stream of user data: natural-language queries, contextual files, web requests, and even images. This data stream serves two purposes: it increases the utility and personalization of AI assistants for users, and it constitutes a potent training resource for OpenAI, supercharging further model improvement. Crucially, most users provide this data for free, with little or no visibility into how it is used to train future models. This unidirectional flow of value, in which users give away data but receive little transparency in return, lies at the heart of concerns about digital gatekeeping in the AI age.

The Impact on Competition: Europe’s AI Pioneers Left Out

One stark consequence of OpenAI’s dominance is the effective sidelining of alternative AI providers, especially those emerging from Europe or China. European developers like France’s Mistral, open-model efforts such as the Allen Institute for AI (AI2), and ambitious Chinese players such as DeepSeek find themselves at a structural disadvantage. Their models are rarely, if ever, integrated at the deep, OS-level hooks that Microsoft and Apple reserve for ChatGPT and its kin.
This is not simply a question of product quality; it is about access and integration. End users are locked into the default; there is currently no straightforward mechanism for integrating a European AI alternative in the way one might choose a different browser. As AI becomes more deeply woven into operating systems and productivity tools, the barrier to switching—or even experimenting with—competing models becomes greater. This dynamic risks not only entrenching existing leaders, but also limiting the capacity for truly innovative newcomers to reach scale.
It is unsurprising, then, that calls are growing within Europe for OpenAI to be added to the EU’s “gatekeeper list” under the DMA, subjecting it to new requirements for fairness, interoperability, and user choice. As with browsers and search engines, one proposed solution is the requirement for operating systems to present a choice of AI assistants at setup, rather than prompting users to accept a single, pre-installed solution by default.

Data, Power, and the Future of AI: Critical Analysis

The debate over AI gatekeeping is deeper than competition law and market structure—it is a question of who wields power in the digital age. If AI will, as many expect, underlie everything from education to healthcare, commerce to national security, then the centralization of this technology in the hands of a tiny handful of American companies raises profound questions.

Strengths of OpenAI’s Gatekeeper Model

Some will argue that OpenAI’s ascension, via partnerships with Microsoft and Apple, offers real user benefits. These collaborations provide seamless, high-quality AI experiences, with rapid deployment of powerful new features and efficiencies of scale. Centralization can, in theory, deliver more consistent privacy and security measures, and can accelerate innovation by concentrating resources and talent.
Moreover, it can be claimed that, at present, OpenAI’s technical leadership is a function of merit—its models outperform many alternatives across a range of tasks, as evidenced by independent benchmarking in natural language understanding, coding, and reasoning tests. Users, in the aggregate, appear to prefer these products, lending a degree of legitimacy to vendors’ choices to partner with OpenAI.

The Troubling Risks: Lock-In and Market Distortion

But the same features that benefit end-users can, over time, calcify the market. The lack of meaningful choice or transparency—not just for end-users, but for developers, enterprises, and even governments—poses serious risks. Here are some key dangers:
  • Training Data Capture: OpenAI’s access to vast and proprietary user data may drive a self-fulfilling cycle of improvement, widening the gap with competitors less able to attract or harvest data at such scale.
  • Reduced User Agency: Users cannot easily replace or supplement the default AI assistant with a competitor. This resembles the dominance once enjoyed by Internet Explorer, a monopoly ultimately curtailed by regulation and user pushback.
  • Potential for Abuse or Neglect: With limited competition, there’s less pressure to fix problems, innovate responsibly, or respect user interests, as alternatives are simply not present. This is especially worrying given the AI field’s rapid evolution and high stakes.
  • Stifling of Local Innovation: Smaller European or Asian providers may never scale, regardless of their innovations, unless they gain system-level access. This has consequences for digital sovereignty, economic growth, and balancing global AI influence.

Regulatory Options: Toward a Fairer AI Ecosystem

If Europe wishes to avoid repeating the market distortions of the past, bolder measures may be necessary. At a minimum, adding OpenAI to the DMA’s gatekeeper list would force compliance with the kind of rules already applied to browsers and search engines: mandatory user choice, interoperability, and the ability to integrate or switch to alternative AI models.
More ambitiously, some voices advocate for an “AI tax”—a levy on dominant commercial AI providers, with proceeds directed to supporting open-source and transparent AI alternatives developed within Europe. Such models would require more than just regulatory support; they would need funding, computational resources, and legal mandates for interoperability.
Additionally, mandates around data portability and transparency would empower users to understand—and, if they wish, restrict—the use of their contributions for model training. This could encourage best practices, while providing genuine alternatives for those who want them.
Finally, the promotion of open standards for AI integration at the operating system level would be a game-changer. Much as web browsers today can compete on a more equal footing due to interoperability requirements, so too could AI assistants—if OS vendors were required to support a standardized, open API for third-party AI integration.
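To make that last proposal concrete, here is a minimal sketch, in TypeScript, of what such a vendor-neutral integration contract could look like. It is purely illustrative: every name below (the interfaces, the registry, the fields) is an assumption made for this example, not part of any existing operating-system API.

```typescript
// Hypothetical sketch of an open, OS-level contract for AI assistants.
// All names are illustrative assumptions, not a real vendor API.

interface AssistantRequest {
  prompt: string;               // the user's natural-language query
  attachments?: Uint8Array[];   // optional files or images supplied as context
  locale: string;               // e.g. "de-DE", so providers can localize
}

interface AssistantResponse {
  text: string;                 // the assistant's reply
  retainedForTraining: boolean; // transparency flag: was this exchange kept?
}

// Every provider, large or small, implements the same contract.
interface AssistantProvider {
  readonly id: string;          // e.g. "org.mistral.le-chat" (hypothetical)
  readonly displayName: string;
  handle(request: AssistantRequest): Promise<AssistantResponse>;
}

// The OS routes system-level queries (from a voice assistant, the taskbar,
// and so on) to whichever registered provider the user selected at setup.
class AssistantRegistry {
  private providers = new Map<string, AssistantProvider>();
  private defaultId?: string;

  register(provider: AssistantProvider): void {
    this.providers.set(provider.id, provider);
  }

  setDefault(id: string): void {
    if (!this.providers.has(id)) {
      throw new Error(`Unknown provider: ${id}`);
    }
    this.defaultId = id;
  }

  async dispatch(request: AssistantRequest): Promise<AssistantResponse> {
    const provider = this.defaultId
      ? this.providers.get(this.defaultId)
      : undefined;
    if (!provider) {
      throw new Error("No default assistant selected");
    }
    return provider.handle(request);
  }
}
```

Under a contract like this, switching from one assistant to another becomes a single `setDefault` call behind a settings screen rather than a deep OS privilege reserved for a single partner, which is precisely the kind of interoperability the browser rules already enforce.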

Voices of Opposition: Balancing Innovation and Oversight

Not everyone is convinced that heavier regulation is the solution. Tech lobbyists, innovation advocates, and some industry insiders warn that overbearing rules could stifle progress, bog fast-moving teams down in red tape, and ultimately slow Europe’s adoption of the very technologies it hopes to shape. The U.S. model of light-touch, flexible oversight, they argue, has enabled the rapid ascent of platforms like OpenAI, whereas EU-based projects lag behind amid fragmented rules and bureaucratic hurdles.
The challenge for policymakers is to distinguish between regulation that protects against genuine harm and that which simply reflects protectionism or technophobia. The DMA, for all its ambition, has shown both the promise and pitfalls of European regulation—helping break some monopolies but struggling to predict or address the next technological inflection point.

User Autonomy: The Final Frontier

At present, users face a stark choice: accept the default AI solution, whether Copilot in Windows or ChatGPT integration on Apple devices, or attempt to switch it off entirely. There is little room for nuance or personal preference, and for those who value privacy, transparency, or independence, this is a severe limitation.
The current absence of truly open, user-selectable AI alternatives risks undermining trust in the technology, and, by extension, its uptake for socially vital roles. Without credible alternatives, even the best-intentioned providers can drift toward user exploitation or neglect, shielded by a lack of competition.

Conclusion: The Road Ahead for AI Regulation

The debate over AI regulation is not academic; it touches on the tools that will shape daily life for billions in the coming years. The risk is not just technological lock-in, but the steady erosion of user choice, democratic accountability, and local innovation. OpenAI’s position as a new “gatekeeper”—vital, efficient, and innovative as it may be—should be matched by a commitment to openness, fairness, and empowerment for users and would-be competitors alike.
Europe has the regulatory machinery and, increasingly, the political will to act. The DMA demonstrates what is possible, though its current limitations are now glaring. As AI becomes ever more central to the digital fabric of society, the choices made by lawmakers, companies, and end-users will carry lasting consequences.
For now, the best users can do is remain vigilant—demanding transparency, demanding choice, and, where possible, supporting efforts to decentralize and democratize the future of artificial intelligence. Only through sustained attention and thoughtful regulation can we ensure that the promise of AI is realized not just for the few, but for all.

Source: heise online, “Opinion on AI regulation: OpenAI becomes the gatekeeper”
 
