The tower of artificial intelligence, it seems, may have a very American lock on the front door. This, at least, was the sobering consensus coming out of a recent summit called by Germany’s national competition regulator, the Bundeskartellamt, where a cross-section of the European AI ecosystem gathered to probe one of the digital age’s defining questions: do US hyperscalers truly hold the keys to the AI kingdom, and if so, what are the consequences for innovation, competition, and European technological sovereignty?

Gathering the German AI Braintrust

Convened in June by the Bundeskartellamt, the meeting drew representatives from 14 of Germany’s most important tech businesses and industry associations. The group’s makeup reflected both the breadth and the fragility of the German, and by extension European, AI scene: established tech firms, budding startups, and specialized research outfits were all present. Their shared concern was clear: the growing centralization of AI development capability in the hands of just three American tech giants—Amazon Web Services (AWS), Google Cloud, and Microsoft Azure.
At issue was not only the raw scale and technical sophistication of these “hyperscalers,” but a unique confluence of factors—massive computing infrastructure, deep reserves of proprietary and user data, regulatory leverage, and bespoke AI hardware—that together create something more than simply “big cloud.” Instead, they amount to a form of infrastructural gatekeeping that, in the view of many European stakeholders, now stands as the chief obstacle to a level AI playing field.

The Hyperscaler Advantage: Hardware, Data, and Ecosystem Lock-In

To grasp the roots of this imbalance, one must understand the modern AI workflow. Training a foundation model—say, OpenAI’s GPT-3 or any comparable large language model—demands both staggering computational horsepower and huge, well-organized datasets. Estimates place the raw, pre-filtered dataset used for GPT-3 at about 45 terabytes, with the filtered training set still weighing in at an immense 570 gigabytes. This is just one example, but it encapsulates the broader point: large-scale AI no longer runs on clever code alone, but on who can marshal the most resources to amass, organize, and process data at scale.
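To put those numbers in perspective, a widely used back-of-envelope heuristic estimates training compute as roughly 6 × parameters × training tokens. The sketch below applies it to GPT-3-scale figures: the parameter and token counts come from the published GPT-3 work, while the per-GPU throughput is an assumed round number for illustration, not a vendor benchmark.

```python
# Rough, illustrative estimate of the compute behind a GPT-3-scale
# training run, using the common "6 * N * D" FLOPs heuristic.
# The GPU throughput below is an assumed figure, for illustration only.

PARAMS = 175e9      # GPT-3 parameter count (from the GPT-3 paper)
TOKENS = 300e9      # approximate training tokens (from the GPT-3 paper)
GPU_FLOPS = 1e14    # assumed sustained throughput per GPU (~100 TFLOP/s)

total_flops = 6 * PARAMS * TOKENS              # ~3.15e23 FLOPs
gpu_seconds = total_flops / GPU_FLOPS
gpu_years = gpu_seconds / (3600 * 24 * 365)

print(f"Total training compute: {total_flops:.2e} FLOPs")
print(f"Single-GPU wall time:   {gpu_years:,.0f} GPU-years")
```

Even under these generous assumptions, a single accelerator would need roughly a century of wall time; only operators fielding thousands of GPUs can compress that into weeks, and that is precisely the hyperscaler advantage.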
For European startups, universities, and even established firms, this turns access to hyperscaler clouds from an option into a necessity. Without their GPU clusters or their data pipes, even modest AI innovation becomes difficult; at industrial scale, it becomes almost impossible. The situation is further exacerbated by market shortages of GPU accelerators. In a climate where supply chain crunches already bite, the largest players—AWS, Google, Microsoft—can legally and practically corner the market for the GPUs vital to modern AI, forcing everyone else to “rent” at premium rates.
Cloud, meanwhile, isn’t just a resource—it’s a lock-in mechanism. As European industry figures have warned, the moment you build your AI software on one of the big three platforms, you begin incurring switching costs, lose bargaining power, and face the perennial risk of ecosystem dependency. As the Bundeskartellamt’s president Andreas Mundt put it, “The cross-market presence of big tech poses various risks to competition. It may lead to dependencies for smaller competitors, for example, in terms of access to cloud services and data, and lock-ins in specific ecosystems, among other things.”
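Those switching costs are not abstract. As a minimal sketch, consider just the data-egress fees incurred when moving a training corpus off one cloud; the per-gigabyte rate below is an assumed illustrative figure, not a quoted hyperscaler price.

```python
# Illustrative switching-cost calculation: egress fees alone for
# moving a large training corpus between clouds. The per-GB rate
# is an assumption for illustration, not a quoted hyperscaler price.

dataset_gb = 45_000          # e.g., a 45 TB raw corpus, as cited above
egress_per_gb_eur = 0.08     # assumed egress rate (EUR/GB)

egress_cost = dataset_gb * egress_per_gb_eur
print(f"One-time egress cost: EUR {egress_cost:,.0f}")  # ~EUR 3,600
```

Egress fees are only the visible tip: re-engineering pipelines built around proprietary managed services typically dwarfs the raw transfer bill.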

The Data Divide: Access as a Competitive Moat

A less technical but perhaps even more formidable moat is data itself. Hyperscalers don’t just own the means of computation; they command some of the world’s largest lakes of real-world behavioral and enterprise information. For AI, especially in applications such as language modeling, search, and recommender systems, more data typically means better results.
Panelists at the German meeting noted that priority access to such data sets isn’t just a question of compliance or cost, but of structural power: European players, often hampered by stricter data-protection regulation and lacking equivalent user bases, start every race already behind. Even the best open-source alternatives—be they model weights, training corpora, or academic infrastructure—struggle to match the breadth and freshness of the data engines powering Amazon, Microsoft, and Google.

Regulatory Lag: The Trans-Atlantic Power Imbalance

It isn’t only European policymakers who have sounded the alarm. The US Federal Trade Commission (FTC) also convened panels last year on whether America’s own AI ecosystem was being suffocated by big cloud incumbency. Panelists cited a shortage of GPU accelerators, making it “impossible” for many US hardware and software startups to scale new models on their own. “We found ourselves going back to the same three cloud vendors, time and again, for all our largest experiments,” lamented one startup CTO, echoing sentiments worldwide.
Any hope that US regulation might level this playing field, however, may be misplaced. The current political climate appears to be shifting in the opposite direction, as the Trump administration pushes for a sweeping ten-year ban on state-level regulations targeting the AI market. Such a move, if enacted, would all but guarantee Washington remains hands-off as hyperscalers cement their roles as infrastructure arbiters not just for America, but the world.
Meanwhile, European remedies—antitrust probes, GDPR enforcement, government-backed cloud alliances—struggle to keep pace with the scale and rapid iteration of US-based big tech. Even Google, itself facing antitrust scrutiny in Europe, has sounded alarms over Microsoft’s “anticompetitive licensing practices” that could further stifle meaningful competition within and beyond the boundaries of cloud AI.

Counting the Cost: Europe’s “Tax” on AI Productivity

Perhaps the most striking assessment of this dependency came not from a government regulator, but from Steve Brazier, Informa Fellow and longstanding tech industry analyst. By his reckoning, the average European office worker already pays, one way or another, a hidden “tax” of €100 a month to American companies for the privilege of using their productivity tools—a figure expected only to rise as AI becomes further entwined with daily workflow.
“And with the arrival of AI,” Brazier noted, “that €100 a month is simply going to go further up.” The implicit warning: without decisive intervention and incentives to nurture alternatives, every incremental improvement in AI’s power or reach merely deepens Europe’s technological subservience.
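The scale of that “tax” becomes clearer in aggregate. The sketch below is purely illustrative; the workforce figure is a hypothetical round number, not a statistic cited at the summit.

```python
# Illustrative aggregation of Brazier's EUR 100/month figure.
# The workforce size is a hypothetical round number, not sourced data.

monthly_tax_eur = 100            # Brazier's per-worker estimate
office_workers = 10_000_000      # hypothetical: 10 million office workers

annual_outflow = monthly_tax_eur * 12 * office_workers
print(f"Annual outflow: EUR {annual_outflow / 1e9:.0f} billion")  # EUR 12 billion
```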

The Stakes: Innovation, Autonomy, and the Future Path

It is tempting, amid such formidable obstacles, to view the AI market as a foregone conclusion—a game already won by those with the deepest pockets and the largest user bases. But the Bundeskartellamt meeting did not conclude with resignation; rather, it highlighted several reasons for hope, and a roadmap for action.

1. Potential for Abuse and Early Intervention

The regulator’s president, Andreas Mundt, underlined the urgency of “identifying the potential for abuse early on” while keeping markets open enough for true competition. This sentiment is echoed among competition authorities worldwide, as the risks of entrenched monopolies—higher costs, slower innovation, and the erosion of digital sovereignty—assume ever-greater significance in AI deployment.
While “no relevant cases” have yet materialized, scrutiny is intensifying. The mere existence of this regulatory dialogue is itself a warning shot: Europe, and perhaps others, will not remain passive if concentration of power begins to imperil broader economic or social objectives.

2. Diversification of Cloud and Open-Source Alternatives

A number of European firms and consortia are now investing in indigenous cloud solutions, AI labs, and large language models. Initiatives like GAIA-X—while criticized for slow progress—represent attempts to lay foundational infrastructure outside big-tech reach. Meanwhile, the open-source AI movement gathers steam, producing everything from open weights to fully OSS stacks built for European compliance.
While these alternatives may not rival US hyperscalers for pure scale, their value lies in transparency, interoperability, and local control. Each new system built outside the “big three” is a form of risk mitigation, both technological and geopolitical.
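As a concrete illustration of what the open-source route looks like in practice, the snippet below loads an openly licensed model through the Hugging Face transformers library and runs it entirely on local hardware, with no hyperscaler API in the loop. The model name is just one example of an open-weights release, not a recommendation from the summit.

```python
# Minimal sketch: running an open-weights language model locally,
# with no hyperscaler API in the loop. Requires `pip install
# transformers torch` and sufficient local memory; the model name
# is one example of an openly licensed release, swappable for any
# other open-weights model.

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weights model
)

output = generator(
    "European alternatives to hyperscaler AI infrastructure include",
    max_new_tokens=50,
)
print(output[0]["generated_text"])
```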

3. Public-Private Partnerships and Strategic Funding

Several participants urged European governments to expand targeted funding for cloud access and GPU hardware, especially for early-stage startups and research bodies. Strategic procurement (e.g., buying access to scarce chips or compute) could help level the playing field, at least in the short term.
More ambitiously, pan-European R&D funds could back joint development of foundational models and cloud services, giving European AI a fighting chance at autonomy while sidestepping duplication and local fiefdoms.

4. Regulatory Guardrails and Pro-Competition Interventions

Calls are mounting for Europe’s regulatory apparatus to move preemptively, rather than reactively, to prevent exclusionary practices. Enabling true portability of AI workloads, enforcing transparency in cloud pricing and switching costs, or mandating minimum interoperability standards could all act as checks on lock-in.
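Workload portability of the kind regulators envision already has partial technical answers. As a minimal sketch, assuming a firm trains in PyTorch, the snippet below exports a toy model to ONNX, an open interchange format whose artifacts can run on any cloud’s inference runtime or on premises; the toy network stands in for whatever a firm has actually trained.

```python
# Minimal sketch of workload portability: exporting a model to ONNX,
# an open interchange format runnable on any cloud or on-premises
# runtime. The toy model here stands in for a real trained network.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

dummy_input = torch.randn(1, 16)  # example input matching the model's shape
torch.onnx.export(
    model,
    dummy_input,
    "portable_model.onnx",  # artifact deployable on any ONNX runtime
    input_names=["features"],
    output_names=["logits"],
)
print("Exported portable_model.onnx")
```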
Additionally, closer scrutiny of data access and privacy rules—balancing protection with innovation—will be required to foster both competitive parity and public trust.

The Risks: Stagnation, Overregulation, and Global Fragmentation

If the warning signs go unheeded, the risks are substantial. Chief among them is the stifling of competition, as control of cloud infrastructure and datasets calcifies into effective monopolies. With three US firms as AI’s “gatekeepers,” entrepreneurs, academics, and even governments elsewhere could find themselves perpetually shut out of the most lucrative and transformative sectors of the digital economy.
Equally plausible is a scenario in which regulatory overreach or miscalibration ends up stifling the very innovation it sought to protect. Overly stringent privacy rules, data-localization demands, or administrative barriers might only fragment the global AI research landscape, leaving both Europe and smaller US competitors less able to compete at global scale.
Finally, the elephant in the room is geopolitics. With the US government moving protectively around its homegrown tech champions—and China pursuing its own distinct AI stack—the space for a pluralistic, interoperable international AI ecosystem may narrow.

Looking Ahead: Toward an Open, Competitive AI Market

The Bundeskartellamt’s probe represents a crucial moment of introspection for both Europe and the global AI community. Its central question—do US hyperscalers in fact hold the keys to the AI kingdom—demands more than just technical answers. It forces policymakers, entrepreneurs, and citizens alike to grapple with big-picture issues: Who determines the thresholds of innovation? Who sets the economic “rent” on progress? And, crucially, how can societies ensure that moonshots in machine intelligence do not become closed shops for the privileged few?
If the status quo persists, the risk is not only economic dependency on foreign platforms, but a digital ecosystem that increasingly privileges only those with the heft to pay the entry fee. Yet Europe—and any region that values autonomy—still has options. The coming years will show whether the Bundeskartellamt’s warning spurs a new era of competition, cooperation, and innovation, or merely confirms a global digital order locked behind American cloud doors.
The AI revolution, after all, is only as open as the market that shapes it. The debate in Germany is a timely reminder: keyholders matter. And right now, most of the keys are jingling in Silicon Valley pockets.

Source: theregister.com Germany asks if US hyperscalers hold keys to AI kingdom