The story of artificial intelligence over the past several years has become one of remarkable technical progress punctuated by a rapidly narrowing field of players. While AI startups and research labs abound, public consciousness and enterprise adoption have largely consolidated around just a handful of major platforms—primarily OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and, to a lesser degree, Meta AI. Despite the apparent diversity on the surface, the reality is that most organizations and individuals engaging with AI are doing so through products and infrastructure controlled by a small circle of powerful tech giants. This consolidation brings benefits in terms of accessibility and innovation, but it also raises serious questions about risk, resiliency, sustainability, and the long-term health of the AI industry. As enterprises embrace AI tools en masse, few are considering what could happen if these dominant providers stumble or withdraw support. The numbers simply aren’t adding up, and the stakes are higher than many realize.

The Big Four: Dominance by Design

At the heart of this dynamic lie the investments and brand power wielded by Microsoft, Google, Meta, and OpenAI. Combined, these companies have poured tens of billions into AI research, infrastructure, and talent. According to public disclosures and reporting by outlets like The New York Times and the Financial Times, Microsoft alone pledged $13 billion to OpenAI, while Google and Meta have significantly ramped up their AI spending in the wake of ChatGPT’s unprecedented commercial success. The sheer scale of these investments has allowed these companies to quickly roll out robust, enterprise-ready AI products—leaving smaller challengers with little chance to catch up, even when they boast comparable technical capabilities.
This outsized influence is reflected in survey data from recent studies. For instance, the South African Generative AI Roadmap 2025 survey, conducted by World Wide Worx, Dell Technologies, and Intel, found that 67 percent of business respondents had adopted generative AI, up steeply from 45 percent the year before. Yet, when asked which platforms they used, responses clustered around the familiar names: ChatGPT, Copilot, Gemini, and, to some extent, Meta AI. Lesser-known platforms, despite technical competence, struggled to gain traction even among technologically progressive organizations.
What’s more, the adoption of AI rarely translates to deep integration. As of the latest survey, 32 percent of businesses reported using AI tools informally, without any governance or structured oversight. This exposes organizations to a host of risks, from data privacy breaches to poor regulatory compliance, but also speaks to the reality that many are experimenting within the safe, well-marked boundaries established by the dominant platforms.

Concentration Risk: What Happens If a Giant Falls?

If history has taught the tech sector anything, it’s that industry concentration breeds systemic vulnerability. The financial collapse of a single large player or the sudden discontinuation of an essential service can have outsized ripple effects across entire ecosystems. The concentration of AI infrastructure in the hands of a few creates a modern variation of the “too big to fail” problem.
Consider OpenAI, which, per internal communications and third-party analysis reported in outlets like Business Insider and The Information, is projected to hit $10 billion in annual recurring revenue. This figure is impressive on its face, but less so when weighed against the estimated $8.5 billion the company spent to bring in $5 billion in revenue in 2024, resulting in roughly a $3.5 billion loss for the year. As a privately held firm, OpenAI is under no obligation to disclose its full financials, but industry insiders and analysts have repeatedly flagged its uncertain path to profitability. While Apple, Google, and Microsoft can cushion losses in AI with profits from diverse business lines, OpenAI lacks that safety net. A large-scale contraction in AI demand, or a spike in operating costs, could put its existence at risk.
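To see why analysts are uneasy, it helps to run those figures as a back-of-envelope calculation. The following is a minimal sketch using only the estimates cited above (third-party reported figures, not audited financials):

```python
# Back-of-envelope check on OpenAI's reported 2024 economics.
# Figures are the estimates cited above (via Business Insider and
# The Information), not audited numbers.
revenue_2024 = 5.0  # estimated 2024 revenue, $ billions
costs_2024 = 8.5    # estimated 2024 spend, $ billions

loss = costs_2024 - revenue_2024              # ~3.5
spend_per_dollar = costs_2024 / revenue_2024  # ~1.70

print(f"Estimated 2024 loss: ${loss:.1f}B")
print(f"Spent per $1 of revenue: ${spend_per_dollar:.2f}")
```

At roughly $1.70 spent for every dollar earned, even the projected jump to $10 billion in annual recurring revenue only closes the gap if costs grow far more slowly than revenue, which is far from guaranteed given hardware and energy trends.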
Meta, Google, and Microsoft possess more diversified portfolios, but even they are not immune to AI’s relentless hunger for resources. The cost of running state-of-the-art AI models is driven as much by hardware and energy as by research spending. Training and inference for advanced models require vast server farms equipped with cutting-edge Nvidia GPUs, all of which consume massive quantities of electricity around the clock.

Unsustainable Energy Demands

No discussion of AI’s future is complete without grappling with its appetite for power. According to a landmark International Energy Agency (IEA) report published this year, global electricity use by data centers (encompassing those dedicated to AI as well as general cloud services) is set to more than double by 2030—to 945 terawatt-hours (TWh). To put that in perspective, this is more than the total annual electricity consumption of Japan—one of the world’s largest economies.
The ramifications for the US and other advanced economies are stark. IEA analysts found that US data centers alone will account for nearly half of the country's overall growth in electricity demand over the next five years. By 2030, the US economy is expected to consume more power to process data (largely driven by AI) than it does to manufacture all energy-intensive industrial goods, including aluminum, steel, cement, and chemicals. Across the developed world, data center expansion is projected to drive over 20 percent of the growth in electricity demand, potentially reversing years of flat or declining consumption and returning these economies to a pace of demand growth not seen since before the cloud computing boom.
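To make those magnitudes concrete, here is a minimal sketch converting the IEA projection into comparative terms. The 945 TWh figure comes from the report cited above; Japan's annual consumption (roughly 940 TWh) is an approximation used purely for comparison:

```python
# Rough scale comparison for the IEA's 2030 data-center projection.
projected_2030_twh = 945  # IEA projection for global data centers, 2030
japan_annual_twh = 940    # approximate annual consumption, for comparison

# "More than double" implies today's data centers draw under half of that.
implied_current_ceiling_twh = projected_2030_twh / 2

# Continuous-draw equivalent: spread 945 TWh over the 8,760 hours in a year.
average_draw_gw = projected_2030_twh * 1000 / 8760  # TWh -> GWh, then / hours

print(f"2030 projection vs. Japan today: {projected_2030_twh / japan_annual_twh:.2f}x")
print(f"Implied current consumption: < {implied_current_ceiling_twh:.0f} TWh")
print(f"Average continuous draw in 2030: ~{average_draw_gw:.0f} GW")
```

An average draw on the order of 108 GW, running around the clock, is the equivalent of dozens of large power stations dedicated to data processing alone, which is why the build-out math for renewables looks so daunting.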
These figures should set off alarm bells. Many industry advocates tout renewable energy as a panacea, envisioning server farms seamlessly powered by solar, wind, or even hydrogen. In reality, scaling up renewables to match anticipated data center demand would require construction on a scale that few countries are prepared (or able) to undertake within the next five years. This is not a matter of swapping out one power source for another, but of generating an entirely new continent’s worth of clean energy to service the world’s insatiable digital appetite.

Tech Hype Cycle and the Risk of AI Overreach

For all the breathless optimism surrounding AI, the lessons of earlier technology hype cycles—most notably, blockchain and NFTs—should not be forgotten. These technologies exploded into public awareness with bold promises of revolutionizing finance, art, supply chains, and beyond. Flush with venture capital, hundreds of startups appeared almost overnight, only to see the air rapidly come out of the balloon as structural flaws, regulatory uncertainty, and unsustainable economics reasserted themselves.
AI, for all its practical value, is not immune to similar forces. Like blockchain, it currently operates in a regulatory vacuum: an environment of minimal government oversight, weak consumer protections, and few established standards or guardrails for ethical use. Even as AI transforms industries at breakneck speed, governments around the world are scrambling to assess its risks and devise suitable controls. In the United States, proposed federal legislation on AI has repeatedly stalled in Congress; the EU AI Act, while groundbreaking, remains in phased implementation, with much of its practical impact yet to be felt.
Lurking beneath the surface are further risks, including legal battles over copyright infringement. OpenAI, Google, and Meta all face high-stakes lawsuits from authors, publishers, and media companies alleging that their models ingested massive quantities of copyrighted material without permission. Should the courts side decisively with rights holders, the business models of AI’s biggest players could be upended virtually overnight. Until these legal questions are definitively resolved, every AI investment carries a degree of uncertainty that the industry would rather not acknowledge.

Lock-In and the Business Dependency Trap

Perhaps the most underappreciated risk facing businesses today is the potential for catastrophic disruption should their chosen AI vendor shut down or pivot. Portability is largely theoretical: in principle, another company could offer a functional equivalent or continuity of service, but in practice many AI tools are inextricably tied to their creator's proprietary infrastructure, model architectures, or data sets. It is an open question whether businesses relying on OpenAI, Copilot, or Gemini could easily transition to another platform in the event of insolvency, hostile regulation, or a paradigmatic technological shift.
In this sense, AI's current state mirrors the era of proprietary mainframes in the 1970s and 1980s: robust, efficient, but perilously dependent on the continued health of the platform vendor. When IBM, DEC, and other proprietary-systems vendors ran into trouble or retired product lines, their user bases were left scrambling for alternatives, a fate that could easily befall AI-dependent businesses today if current trends persist.
Anecdotal evidence from both tech journalism and recent enterprise surveys suggests that most businesses have not built fallback plans for AI discontinuity. As tools like Copilot and Gemini become embedded in workflows—replacing human labor, automating core processes, or generating essential content—the cost of disruption goes from inconvenient to existential.

Sustainability and the Mirage of Productivity Gains

Advocates are quick to argue that AI’s enormous up-front costs—both monetary and ecological—are justified by productivity improvements, better customer service, and new forms of competitive advantage. The South African Generative AI Roadmap 2025 survey, for instance, found that participating companies cited improving productivity and enhancing customer service as the main drivers for AI adoption. But as skeptics point out, the true net gains from AI remain unproven at scale, and many initial implementations have yielded—at best—modest efficiency improvements offset by unforeseen transition costs.
Further, if the incremental productivity realized by AI is overshadowed by resource shortages, regulatory red tape, legal liabilities, or platform collapse, the purported gains may never fully materialize. Indeed, the entire compact between AI providers and enterprise clients relies on the assumption that these businesses can count on long-term continuity and that the productivity dividend will outweigh both short- and long-run costs. If that promise falters, the AI experiment could rapidly transition from a competitive advantage to an operational liability.

The Looming Regulatory Reckoning

The lack of a consistent global framework for AI regulation is, in itself, a major risk factor. Without clear rules, companies operate in uncertainty, potentially opening themselves to compliance failures, privacy violations, or reputational damage as norms suddenly change. The situation recalls the unregulated, often chaotic days of early social media, where lack of oversight allowed damaging content and privacy abuses to go unchecked until governments belatedly intervened.
Though there are signs of momentum—such as the EU’s AI Act, isolated state and federal initiatives in the US, and China’s aggressive approach to algorithmic regulation—the process of legislating and implementing effective oversight remains halting and uneven. Meanwhile, companies continue to integrate AI into critical infrastructure, public services, and sensitive decision-making contexts without a clear sense of what new laws or penalties may appear in the next legislative session.
For enterprises making strategic bets on AI, this should be the flashing red light on the dashboard. A single lawsuit or enforcement action could render carefully structured AI programs noncompliant, leading to forced shutdowns or expensive overhauls at the drop of a hat. Businesses that fail to anticipate these shifts risk not just financial losses, but also reputational harm and the erosion of customer trust.

Legal Turbulence Threatens the Future

It is impossible to discuss the long-term prospects of contemporary AI without addressing the ever-mounting legal challenges. In the United States, OpenAI and Google have been sued by major authors, news publishers, and even record labels, all of whom allege that their content was scraped without permission to train powerful language models. While AI companies argue that fair use and transformative application shield them from liability, the courts have yet to provide a definitive judgment. Should plaintiffs prevail, monetary damages could stretch into the billions, and core model architectures may require fundamental redesign.
Meanwhile, many companies are left in the uncomfortable position of deploying AI tools whose training data and compliance posture remain ambiguous. In fields like healthcare, finance, and legal services, where regulatory requirements for data provenance and consumer protection are strict, the risk of inadvertently violating requirements is high. As regulatory attention intensifies and case law clarifies the rules of the game, future investments in AI may come entangled with new costs and constraints.

Critical Analysis: Strengths and Risks

It would be misleading to characterize the current state of AI as wholly fraught. The dominance of a few platforms undoubtedly brings upsides: rapid innovation, streamlined support, and economies of scale contribute to products that are more reliable and accessible than those built by fragmented, undercapitalized competitors. For many businesses, the opportunity to tap into world-leading language models and automation engines simply would not exist without the willingness of Big Tech to plow billions into research, hardware, and market development.
Yet, these same strengths create parallel vulnerabilities. The industry’s consolidation around four or five main players furthers an already worrisome dependency on private, often unaccountable entities whose long-term priorities may not align with those of their users. Should a single provider stumble (financially, legally, or technologically), the repercussions could be felt across continents.
Worse, the sector’s insatiable demand for electricity and hardware is fundamentally at odds with growing societal commitments to sustainability and climate action. Without major breakthroughs in efficiency or renewables, the current trajectory will soon be untenable.
Finally, the absence of robust regulatory guardrails and the ongoing specter of legal liability make any bet on AI riskier than it appears. Companies drawn by the productivity and innovation on offer must temper their optimism with caution, planning for the possibility of disruption, retrenchment, or even outright system failure.

The Future: Prudent Adoption and Open Questions

What does all of this mean for businesses considering or already implementing AI in 2025? Above all, it is a call for prudence, diversification, and contingency planning. Rather than racing blindly into the arms of Copilot, ChatGPT, or Gemini, organizations must conduct rigorous risk assessments—considering not only the direct costs and benefits but also the wider systemic factors that could imperil their operations.
This includes:
  • Developing robust fallback plans in case of sudden service discontinuity or adverse regulatory change.
  • Prioritizing AI deployments that are modular and, wherever possible, vendor-agnostic (a minimal sketch of this pattern follows this list).
  • Participating actively in policy discussions around AI regulation and standards, ensuring that their perspective is represented.
  • Monitoring the legal landscape for developments in copyright and data use that could impact their operations.
  • Investing in energy efficiency and sustainability as an operational imperative, not just a marketing point.
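On the modularity point above, one widely used pattern is to place a thin, provider-neutral interface between business logic and any single vendor's API, with an ordered fallback chain behind it. The sketch below is purely illustrative: every class and function name is hypothetical, and real adapters would wrap each vendor's official SDK internally.

```python
# Minimal sketch of a vendor-agnostic AI layer with fallback.
# All names here (CompletionProvider, PrimaryProvider, etc.) are
# hypothetical; each adapter would wrap a real vendor SDK.
from abc import ABC, abstractmethod


class CompletionProvider(ABC):
    """The only interface the rest of the codebase depends on."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class PrimaryProvider(CompletionProvider):
    """Stand-in for the preferred vendor's API."""

    def complete(self, prompt: str) -> str:
        # Replace with a real SDK call; stubbed so the sketch runs.
        return f"[primary] {prompt}"


class FallbackProvider(CompletionProvider):
    """A second vendor, a self-hosted model, or a non-AI code path."""

    def complete(self, prompt: str) -> str:
        return f"[fallback] {prompt}"


def complete_with_fallback(prompt: str,
                           providers: list[CompletionProvider]) -> str:
    """Try each configured provider in order; fail only if all fail."""
    last_error: Exception | None = None
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as err:  # outage, quota, shutdown, deprecation
            last_error = err
    raise RuntimeError("all AI providers unavailable") from last_error


if __name__ == "__main__":
    chain = [PrimaryProvider(), FallbackProvider()]
    print(complete_with_fallback("Summarise this contract.", chain))
```

Because the application only ever sees CompletionProvider, retiring or swapping a vendor becomes a configuration change rather than a rewrite, which is precisely the flexibility that matters if a dominant platform stumbles.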
The reality is that the all-eggs-in-one-basket scenario the industry currently faces is unsustainable and potentially perilous. Until a more mature, competitive, and responsible AI ecosystem emerges, caution is not only warranted—it is essential.
Businesses tempted to replace core functions or entire workflows with proprietary AI platforms should pause and ask themselves what will happen if, one morning, those platforms simply vanish from the landscape. It is a scenario that feels unthinkable now, but history is replete with similarly “impossible” failures.
The signs—the spiraling costs, legal headwinds, energy crunch, and regulatory uncertainty—suggest that the AI sector could be racing toward a reckoning. As always in technology, the brightest dawns risk being followed by the darkest nights. The challenge is to chart a course that leverages AI’s promise while remaining painfully aware of the fragility and unpredictability that underpin today’s gold rush. Only then can businesses, and the broader society, harness AI’s potential without falling prey to the all-too-human tendency to believe that today’s winners are forever inviolate.

Source: htxt.co.za All the AI eggs are in a handful of baskets and that's concerning - Hypertext
 
