The parallels between the dot‑com boom of the late 1990s and today’s AI surge are unmistakable: breathless narratives, new vanity metrics, and money piling into infrastructure and market share long before sustainable profits appear — but the differences matter just as much, and they determine whether history will repeat or merely rhyme.
Background
The late‑1990s internet era promised to remake commerce, media and finance. Success was measured in “eyeballs,” pageviews and clicks; scale was presumed to be the pathway to future profits. When that logic failed to convert into durable earnings, the Nasdaq plunged and many firms collapsed — yet the survivors rebuilt on sturdier foundations, ultimately reshaping the economy. The current AI cycle follows a similar arc in its early phase: investors now prize “tokens processed,” “inference demand” and model‑query volumes over immediate margins. That language shift masks a familiar tension between narrative‑driven valuation and unit economics.
This article pulls together what the dot‑com episode teaches us about AI today. It summarizes the core comparisons, verifies key facts about major players and market behaviour, and assesses the structural strengths and risks that make the present moment both different and dangerously familiar.
Where the comparisons are strongest
Metrics changed; incentives haven’t
In 2000, executives celebrated “reach” — getting users to visit pages, click banners and sign up. The rationale: build the audience now, monetise later. Today, the technical equivalents are scale measures such as tokens processed, inference throughput, model parameter counts and API call volumes. Those numbers show technical progress and user engagement, but they are not the same as unit economics.
Investors and engineering teams frequently treat raw compute and user growth as proxies for eventual monetisation. That creates incentives to prioritise engineering scale and feature breadth over margins per query or revenue per enterprise contract. Several internal analyses and industry briefs document the rise of these “vanity” AI metrics and warn that they can obscure weak monetisation.
Infrastructure spending has replaced advertising burn
Where dot‑com startups poured cash into customer acquisition and brand advertising, today’s AI companies are burning capital on GPUs, specialised data‑centre capacity, energy and the people who build models. That’s not a trivial shift: it changes not just accounting lines but the capital structure and competitive dynamics.
- Dot‑com burn: marketing, distribution, warehousing partnerships.
- AI burn: chips, cloud commitments, proprietary datasets, model tuning and safety engineering.
The economic difference is important because compute costs are continuing operating expenses tied to every additional query, whereas many marketing costs are one‑time acquisition expenses that can be amortised if customer lifetime value is strong. If model monetisation lags, running more queries deepens losses rather than diluting them.
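The contrast can be made concrete with a toy model. All figures below are hypothetical illustrations, not real vendor prices or company data; the point is only the sign of the relationship between scale and profit under the two cost structures.

```python
# Toy model contrasting one-time acquisition spend (dot-com era) with
# recurring per-query compute spend (AI era). All numbers are hypothetical.

def dotcom_style(users: int, cac: float, ltv: float) -> float:
    """Profit when the dominant cost is one-time customer acquisition:
    each user costs `cac` once and returns `ltv` over their lifetime."""
    return users * (ltv - cac)

def ai_style(queries: int, revenue_per_query: float, compute_per_query: float) -> float:
    """Profit when every additional query carries a marginal compute cost."""
    return queries * (revenue_per_query - compute_per_query)

# If lifetime value exceeds acquisition cost, adding users adds profit.
print(dotcom_style(1_000_000, cac=30.0, ltv=45.0))   # positive, grows with users

# If compute per query exceeds revenue per query, more usage deepens losses.
print(ai_style(10_000_000, revenue_per_query=0.002,
               compute_per_query=0.003))             # negative, worsens with scale
```

The asymmetry is the article’s point: in the first model growth dilutes a sunk cost, while in the second growth multiplies an ongoing one.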
Narrative as capital — again
Back in 2000, many securities were priced on “what could be” rather than “what was.” Analysts counted potential monetisable users as justification for lofty multiples. Today’s market uses phrases like “data advantage,” “inference demand” and “moat via proprietary fine‑tuning” to justify valuations. Those are plausible strategic advantages — but they remain forecasts, not cash flows. Where markets reward conviction more than evidence, the risk of a sharp re‑rating rises.
Key facts and verified figures
- The dot‑com bust was dramatic: the Nasdaq Composite lost roughly 78% from its March 2000 peak to its October 2002 trough. That correction wiped out trillions of dollars in market cap and left many public and private investors nursing heavy losses.
- eToys — emblematic of the marketing‑first approach — burned through IPO‑era capital and filed for bankruptcy in early 2001, becoming a cautionary case of scale without sustainable economics.
- Nvidia’s valuation rose to multi‑trillion levels during the AI frenzy, briefly reaching about $3.9–$4.0 trillion in mid‑2025 as markets priced it as the indispensable supplier of GPUs for generative AI training and inference. That meteoric rise shows how a single hardware vendor can capture outsized market capitalisation when its product sits at the center of a transformative technology stack.
- OpenAI — the private company behind ChatGPT — has reported rapidly growing revenues and material operating losses as it scales model training and deployment. Public filings and partner disclosures indicate the company’s revenue surged, while operating and accounting items tied to model training and financing commitments have produced billion‑dollar hits in short windows. Microsoft’s financial statements have at times reflected its share of OpenAI‑related losses under equity accounting, underscoring how private losses can spill into public financials for strategic partners. Some reported figures show heavy quarterly losses implied by Microsoft’s disclosures; these calculations depend on ownership percentages and accounting treatments and therefore vary across press reports and investor summaries — they should be treated with care rather than as a single canonical number.
- Independent deployment studies suggest a gap between experimentation and measurable financial benefit. A prominent synthesis found that a large share of generative‑AI pilots do not yet deliver meaningful P&L outcomes for adopters — a signal that adoption alone is not a guarantee of productivity gains or revenue lift. This echoes a critical dot‑com lesson: trials and hype do not equal sustainable business models.
Structural differences that matter
1) Profitable incumbents versus unproven startups
A critical difference between the two eras is the financial health of the companies anchoring the ecosystem. In 2000, many leading internet names were unprofitable and dependent on additional capital infusions. Today, a cluster of hyperscalers and hardware firms — Microsoft, Google (Alphabet), Amazon, Meta and Nvidia — are cash‑generative and able to fund multi‑year investment cycles. That reduces systemic banking risk and changes the likely shape of any correction: more of the contraction may be borne by venture‑backed startups and private investors, rather than triggering a broad financial crisis. Yet the concentration of power also raises competition and governance concerns.
2) Capital intensity is different, but so is resilience
AI’s capital intensity is real — billions flow into chips and data centers. But because much of this is equity‑financed capex rather than bank debt, contagion through the credit system is less likely. That lowers the probability of a system‑wide crash but increases the chance of a painful sectoral shakeout: many startups will fail, private valuations will compress, and talent and assets will consolidate toward winners.
3) Deeper social integration and higher public trust stakes
The internet reshaped transactions and media. AI intrudes more deeply — into how decisions are made and information is produced. A loss‑of‑faith moment (a high‑profile failure, dangerous misuse or a broad accuracy collapse) could erode public trust faster and more fundamentally than the e‑commerce correction did, slowing adoption and inviting regulatory pushback. That makes the stakes for safety, auditability and governance far higher.
Economics: tokens, queries and the hidden unit cost
Each additional AI query consumes compute, memory and often cross‑service orchestration in cloud data centres. While model providers offer tiered pricing (including free or low‑cost entry tiers), the marginal cost of high‑quality inference — especially for large, multimodal models or real‑time agents — can be substantial. Industry rate cards and independent pricing analyses show per‑token or per‑1,000‑token prices that vary widely by model, but the common pattern is clear: if your product’s revenue per interaction doesn’t exceed the marginal cost plus overhead, scale increases losses.
That reality reframes the growth playbook. Scale is only valuable when it improves margins — either by allowing higher pricing power, enabling higher‑margin enterprise contracts, or by significantly lowering per‑query compute through architectural innovation. Many current valuations assume that technical progress will drive costs down sufficiently fast, or that new monetisation models (subscription, outcomes pricing, enterprise contracts) will raise ARPU enough to offset running costs. Both assumptions are plausible but require validation through robust unit economics.
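A simple per‑interaction check captures the break‑even logic described above. The token counts and per‑1,000‑token prices below are placeholders invented for illustration, not any vendor’s actual rate card; the second call shows how a fall in compute cost can flip the margin from negative to positive.

```python
# Hypothetical unit-economics check for a token-priced AI product.
# All prices are illustrative placeholders, not real rate-card figures.

def margin_per_interaction(tokens: int,
                           price_per_1k_tokens: float,
                           cost_per_1k_tokens: float,
                           overhead: float) -> float:
    """Revenue minus marginal compute cost minus per-interaction overhead."""
    revenue = tokens / 1000 * price_per_1k_tokens
    compute = tokens / 1000 * cost_per_1k_tokens
    return revenue - compute - overhead

# At the assumed current compute cost, each interaction loses money...
print(margin_per_interaction(2000, price_per_1k_tokens=0.01,
                             cost_per_1k_tokens=0.012, overhead=0.001))

# ...while a roughly 50% compute-cost reduction makes it profitable.
print(margin_per_interaction(2000, price_per_1k_tokens=0.01,
                             cost_per_1k_tokens=0.006, overhead=0.001))
```

This is the validation the article calls for: before assuming scale helps, check whether the margin per interaction is positive under realistic, measured costs rather than projected ones.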
What happened to the dot‑com survivors — a cautionary tale
When the dot‑com bubble burst, even some profitable firms lost huge market value. Yahoo! and eBay, for instance, suffered steep share‑price declines despite surviving the shakeout and ultimately remaining relevant players in their niches. The lesson: being fundamentally valuable doesn’t immunise a firm from market panic and re‑rating. Positioning, diversification, and demonstrated cash‑flow paths matter more than ever. Today’s AI incumbents can survive short‑term shocks, but concentrated valuations mean steep headline corrections can still wipe out owner wealth and damage public confidence.
Policy and corporate takeaways — actionable guidance
- Demand measurable KPIs before scaling: Procurement teams should require pilots to define concrete outcomes (revenue lift, time saved with validated measurement, error‑rate reduction) and a timeline to demonstrate payback before green‑lighting broad rollouts. This is the practical antidote to narrative‑driven spending.
- Price for usage, not hype: Vendors and customers should adopt metered or outcome‑based pricing for high‑cost inference workloads. Flat‑fee, unlimited models tempt heavy usage that may erode economics and create surprise bills.
- Invest in governance and auditability: Given the deep integration of AI with decisions, companies must invest in model cards, provenance, and human‑in‑the‑loop checks before production launch. The reputational and regulatory costs of failure exceed the cost of governance.
- Policymakers should prioritise public‑benefit projects: When public money or subsidies are on the table, fund deployments that produce measurable productivity or social outcomes rather than top‑line prestige projects that primarily signal technological leadership. That approach maximises public return and reduces the risk of propping up vanity projects.
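The “price for usage, not hype” point can be sketched numerically. With hypothetical figures, a flat monthly fee leaves the vendor exposed to heavy users whose compute costs exceed the fee, whereas metered pricing keeps margin proportional to usage:

```python
# Illustration of flat-fee vs metered pricing for inference workloads.
# The fee, query volume and per-query cost are hypothetical examples.

def flat_fee_margin(monthly_fee: float, queries: int, cost_per_query: float) -> float:
    """Vendor margin on one customer under an unlimited flat-fee plan."""
    return monthly_fee - queries * cost_per_query

def metered_margin(price_per_query: float, queries: int, cost_per_query: float) -> float:
    """Vendor margin when each query is billed above its marginal cost."""
    return queries * (price_per_query - cost_per_query)

heavy_user_queries = 50_000
print(flat_fee_margin(20.0, heavy_user_queries, cost_per_query=0.001))   # negative: fee doesn't cover usage
print(metered_margin(0.002, heavy_user_queries, cost_per_query=0.001))   # positive: margin scales with usage
```

Under the flat fee, the heaviest users are the least profitable; under metering, usage and margin move together, which is the incentive alignment the guidance above argues for.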
Risks that could turn a correction into a crisis
- Concentration risk — A market correction that hits a handful of mega‑cap players simultaneously could trigger broader index declines and investor panic. Even if banks are insulated, pension and retail investors may face material wealth erosion.
- Model‑data exhaustion — Several industry studies warn that under current scaling, the stock of high‑quality human‑generated training data becomes constrained within a few years. If the next frontier of improvement requires prohibitively costly or proprietary data, the economics could worsen. This is an engineering and economic limit, not merely a narrative one.
- Regulatory backlash — High‑profile misuse, systemic bias, or safety incidents could cause governments to impose heavy restrictions, slowing deployment and raising costs. That risk is higher today given AI’s proximity to decision‑making.
- Energy and sustainability constraints — Large model training consumes significant electricity and attracts environmental scrutiny. Policy responses (carbon pricing, procurement rules) could materially alter the economics for heavy builders and consumers of models.
Where genuine opportunity still lives
Despite the risks, the long‑term strategic case for AI is not an illusion: when models are productised into features that demonstrably reduce costs, shorten cycle times, or unlock new revenue lines, the payoff is real. Examples include:
- Enterprise vertical models trained on proprietary corporate data that automate high‑value workflows (legal review, medical image triage).
- Chip and hardware innovation that lowers per‑inference cost materially.
- SaaS products that productise AI into measurable, repeatable outcomes with clear retention mechanics.
These are the durable moats that convert narrative into cash flow — the same kind of durable economics that, after a correction, allowed survivors of the dot‑com era to become dominant players in the following decades.
Final verdict — realism, not alarmism
History has a rhythm: technological revolutions spark excess capital, followed by painful corrections, then consolidation and durable value creation. The AI surge shares many superficial traits with the dot‑com era — hype metrics, large private valuations and speculative capital — but the context differs in ways that reduce some systemic risks while amplifying concentration and governance concerns.
Two practical lessons endure from the dot‑com aftermath:
- Scalability without profitability is not a business model. Growth that increases marginal cost faster than revenue deepens losses; investors and executives must insist on unit economics, not just scale metrics.
- Intangible assets deliver value only when converted to measurable cash flow or clear social benefit. Brand, data and algorithms are only assets if they produce sustained returns or measurable public value; otherwise they are storytelling dressed as balance‑sheet strength.
If those rules are followed — by investors, product teams, procurement officers and policymakers — the AI wave will create vast productivity gains and durable businesses. If not, the market may need another painful correction to separate genuine winners from speculative froth. Either way, the conversation should shift from dazzled headlines to disciplined metrics: ask for the ARPU, the marginal cost per query, the payback timeline and the governance guardrails. Those numbers are the best evidence we have that the present AI boom is building something real — not just a new vocabulary for an old mistake.

Conclusion: The AI era promises profound change, but the economic truth from 2000 endures — technology alone does not guarantee profit. Durable winners will be those who convert technical advantage into sustained, measurable value.
Source: NZ Herald
What 2000's dotcom crash can teach us about the AI boom