Wall Street’s latest tech swoon looks less like a fundamentals-driven correction and more like a panic attack: analysts from Bank of America and William Blair argue the sell-off is fear, not fundamentals — a rapid, narrative‑driven repricing triggered by a single product announcement and amplified by fragile market sentiment. The shockwave began when Anthropic’s agentic tool, Claude Cowork, and related integrations prompted investors to ask whether AI agents will automagically replace established software workflows. The immediate market reaction punished a broad swath of software names — even those with strong earnings — prompting veteran analysts to compare the move to January 2025’s DeepSeek panic. Yet behind the headlines, capital spending on cloud and data‑center infrastructure surged in 2025, and many of the same companies punished by the market are reporting robust fundamentals. The result: a classic technology‑era paradox where expectations about disruption move stocks faster than the disruption itself.
Background: what sparked the sell-off and why analysts are pushing back
Anthropic’s Claude Cowork and the agentic AI narrative
In early 2026, Anthropic and its partners elevated the conversation about agentic AI — systems that plan, execute, and manage multi‑step workflows with minimal human prompts. Claude Cowork is one such step toward assistants that don’t just answer questions but act across applications to complete clerical and workflow tasks. That capability, when combined with major platform plays (notably Microsoft’s recent Copilot Cowork integration), created a headline narrative: AI agents could bypass UI‑heavy software, eating the value of many traditional point solutions.
That narrative spread quickly across social feeds and trading desks. For investors conditioned by a multi‑year AI boom, the question became existential: will software as we know it survive the agentic wave?
The market response: indiscriminate selling, concentrated fear
The Nasdaq and software-heavy indices took losses as headlines and analyst notes circulated. What looked like selective revaluation — downward pressure on companies with UI/automation exposure — briefly became broad‑based selling across the software sector. High‑quality names with mission‑critical products and recurring revenue profiles suffered drops after earnings beats; semiconductor and infrastructure names also saw sharp moves on the fear that AI capex might flip from growth to contraction.
That is the context in which several sell‑side and independent equity research teams pushed back, arguing that the reaction was disproportionate to the actual economic consequences of the new agent tools.
Overview: the analyst case — why the sell-off is overblown
Two central claims from the bulls on the street
- The market is pricing inconsistent scenarios at once: it is simultaneously discounting severe weakness in AI capital spending and also assuming AI will instantaneously replace whole software business models. Those two outcomes — falling AI capex and instantaneous software obsolescence — can’t both happen at scale, analysts argue.
- Historical precedent suggests these panics are temporary. Bank of America and others point to the DeepSeek‑triggered rout in January 2025 as a close parallel: a surprise technological narrative caused a huge single‑day market hit, but it was followed by renewed spending, rising capex, and strong index performance thereafter.
William Blair’s diagnosis: a “sentiment problem”
William Blair distilled the picture into three core points:
- The sell‑off was driven by sentiment, not by a sudden deterioration in company fundamentals.
- The agentic AI announcements are meaningful but selective in impact — they threaten some point‑solutions and UI‑heavy tools more than platforms built around data, APIs, and integration.
- Winners will be defined by adaptation — companies that embrace AI as part of product architecture and governance will preserve or grow value; those that don’t will be at risk.
DeepSeek 2025: a cautionary case study in AI panic
What happened and why it matters
In January 2025 a Chinese model called DeepSeek (and the accompanying headlines about rapid model advances outside the expected incumbents) triggered a violent market reaction. Estimates of daily market‑cap losses varied across reports, but the episode erased hundreds of billions — with one commonly cited figure close to $1 trillion of headline market value at the rout’s apex — and created a narrative that the AI arms race had shifted in ways that threatened Western hyperscalers and chip vendors.
What happened next is instructive: capex did not collapse. Corporate cloud and data‑center spending accelerated through 2025 as hyperscalers and enterprises doubled down on infrastructure to preserve competitive position. Major indexes recovered and ended the year materially higher. Analysts now use DeepSeek as a reminder that technical surprises can trigger reflexive selling, but the underlying investment cycle — particularly infrastructure and cloud capex — often resumes or even accelerates.
The lesson for investors
The DeepSeek episode underlines a behavioral pattern: markets will sometimes sell first and ask questions later. If capital allocation decisions at the enterprise and hyperscaler level continue — and if cloud/infrastructure spending increases — the short‑term narrative will mean little to long‑term cash flows. That’s why several research teams flagged the 2026 sell‑off as an overreaction.
Why fundamentals still matter: cloud capex, hyperscalers, and run‑time economics
Investment evidence contradicts the “end of software” narrative
A core piece of the analysts’ pushback was simple: enterprises and hyperscalers are still spending to build AI infrastructure. Multiple trackers and investment banks raised global cloud capex forecasts for 2025 and 2026, with hyperscalers pushing heavy investment into GPUs, racks, and new data‑center capacity. While year‑over‑year growth rates differ by tracker and methodology, the consistent signal across independent data providers is strong growth in capex — not contraction.
Crucially, that spending creates demand for:
- GPUs, accelerators, and chips
- Rack and power capacity
- Networking and interconnect
- AI‑oriented cloud services and orchestration tooling
- Security, governance, and MLOps platforms
The economics of agentic AI: not free, not frictionless
There is an important economic nuance: creating AI‑enabled features can lower development costs, but running agentic AI is expensive. Inference on GPUs, model hosting, and orchestration carry runtime costs that are materially higher than traditional CPU queries. That dynamic favors companies that can:
- Absorb that complexity behind value‑added services and performance guarantees
- Orchestrate hybrid compute (CPU + accelerators) to optimize costs
- Integrate security, compliance, and audit requirements into their product stack
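The cost gap between CPU queries and agentic inference can be made concrete with a back‑of‑envelope model. Every figure below is an illustrative assumption for the sketch — per‑query costs, calls per task, and traffic are hypothetical, not measured prices:

```python
# Back-of-envelope comparison of serving costs for a CPU-served product
# versus the same traffic routed through agentic (GPU-inference) workflows.
# All numbers are hypothetical assumptions, not real prices.

CPU_QUERY_COST = 0.00002      # assumed cost per traditional CPU-served query ($)
GPU_INFERENCE_COST = 0.002    # assumed cost per single model inference call ($)
CALLS_PER_AGENT_TASK = 8      # agentic workflows chain multiple model calls

def monthly_cost(queries_per_month: int, cost_per_query: float) -> float:
    """Total serving cost for a month of traffic."""
    return queries_per_month * cost_per_query

traffic = 10_000_000  # queries per month

cpu_bill = monthly_cost(traffic, CPU_QUERY_COST)
agent_bill = monthly_cost(traffic, GPU_INFERENCE_COST * CALLS_PER_AGENT_TASK)

print(f"CPU-only bill: ${cpu_bill:,.0f}")
print(f"Agentic bill:  ${agent_bill:,.0f}")
print(f"Cost multiple: {agent_bill / cpu_bill:.0f}x")
```

Under these assumed inputs, the agentic bill comes out several hundred times larger than the CPU‑only bill, which is the point of the nuance above: whoever turns agentic features on must recover that run‑time cost through pricing, bundling, or orchestration efficiency.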
Which software categories are most exposed — and which are best positioned to win?
At‑risk categories
- UI‑centric, configuration‑heavy “nice‑to‑have” tools: Products that depend on manual screen navigation, point‑and‑click workflows, or heavy user configuration are the most vulnerable to AI agents that can automate interactions.
- Small point solutions lacking deep data moats: Vendors whose value is a single task without entrenchment in customer workflows or data pipelines risk displacement.
- Low‑trust, low‑security tools: Solutions that require sensitive data but cannot demonstrate enterprise‑grade governance will see slower adoption of agentic features.
Best positioned to benefit or survive
- API‑first, data‑centric platforms: Organizations built around structured data, integration, and scale will be easier to embed into agentic workflows; their value accrues to the platform and data layers.
- Companies with strong service/governance capabilities: Firms that can guarantee compliance, uptime, lineage, and accountability become the safest path for enterprises adopting AI.
- Infrastructure and MLOps providers: Vendors who make AI cheaper, faster, and more reliable (including cost governance, model routing, and observability) sit at the center of the AI plumbing and are natural beneficiaries.
- Security and privacy specialists: As agents access and move data, security — especially for regulated industries — becomes paramount.
The cognitive disconnect: how markets can price contradictory outcomes
One of the most damning observations from the sell‑side is that market price action implied two mutually exclusive outcomes at the same time: on the one hand, a collapse in AI capex and ROI; on the other, an inevitability that AI adoption will decimate business models immediately. Both cannot occur together. This cognitive mismatch is symptomatic of:
- Narrative momentum trumping granular company analysis
- Algorithmic selling and ETF rebalancing amplifying moves
- Short‑term traders reacting to headlines without parsing differing exposure across vendors
Practical guidance for investors, IT leaders, and CIOs
For investors: a framework to separate noise from signal
- Segment exposure: Map each company’s revenue into categories: mission‑critical platform, API/data services, point solutions, infrastructure, and services. Treat each bucket differently.
- Prioritize revenue quality: High recurring revenue, multi‑year contracts, and platform fees matter more in an agentic world because they imply stickiness and integration.
- Evaluate data and integration moats: Does the vendor own unique data, workflows, or integrations that agents would find hard to replicate?
- Stress test run‑time economics: Ask whether the product’s value accrues enough to offset higher inference/operational costs when AI features are turned on.
- Use tranches and optionality: In a volatile narrative cycle, staggered purchases or options strategies can mitigate timing risk.
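The segmentation step above can be sketched as a simple screen. The resilience weights and the sample revenue mix are purely illustrative assumptions chosen for the example, not calibrated estimates of any real company:

```python
# Sketch of the revenue-segmentation framework: weight each revenue bucket
# by an assumed resilience to agentic disruption, then score the mix.
# Weights are illustrative assumptions, not calibrated estimates.

RESILIENCE_WEIGHTS = {
    "mission_critical_platform": 1.0,  # stickiest, hardest to displace
    "api_data_services": 0.9,          # embeds into agentic workflows
    "infrastructure": 0.8,             # benefits from AI capex
    "services": 0.6,                   # partially automatable
    "point_solutions": 0.3,            # most exposed to agent displacement
}

def exposure_score(revenue_mix: dict) -> float:
    """Weighted resilience score for a revenue mix whose shares sum to 1.0."""
    assert abs(sum(revenue_mix.values()) - 1.0) < 1e-6, "shares must sum to 1"
    return sum(share * RESILIENCE_WEIGHTS[bucket]
               for bucket, share in revenue_mix.items())

# Hypothetical vendor: mostly platform revenue, small point-solution tail.
example = {
    "mission_critical_platform": 0.6,
    "api_data_services": 0.2,
    "point_solutions": 0.2,
}
print(f"resilience score: {exposure_score(example):.2f}")
```

The value of the exercise is not the score itself but the forced decomposition: two companies both labeled “software” can land far apart once their revenue is bucketed this way.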
For CIOs and IT leaders: adoption playbook
- Treat agentic features as projects, not drop‑in miracles. Pilot, measure ROI, and enforce governance.
- Prioritize reliability, auditability, and data lineage; these are the factors enterprise buyers will pay for.
- Expect hybrid deployments: low‑cost CPU or small‑model inference for cheap tasks, GPU inference for heavy lifting; plan for cost governance.
- Reassess vendor roadmaps: favor partners that provide explicit MLOps, security, and cost controls.
Risks analysts may be underestimating
Analyst pushback has merit, but it is not a blind endorsement of the status quo. Several structural and tactical risks could yet drive deep, persistent damage in some categories of software:
- Rapid commoditization in niche areas: For certain narrow tasks with low regulatory friction, agents could scale quickly and commoditize small vendor niches.
- Regulatory fragmentation: New rules on AI operations, privacy, or data residency could favor large incumbents and punish smaller players who can’t comply.
- Runway and capital intensity: Hyperscaler capex is large but not infinite — a macro shock or higher financing costs could slow spending abruptly, affecting smaller cloud providers and specialized vendors.
- Model reliability and consumer trust: A high‑profile safety or privacy failure tied to agentic features could slow enterprise adoption for quarters or longer.
- Concentration risk in infrastructure: If a handful of suppliers control key accelerator supply, price or supply shocks could reverberate across AI stacks.
A balanced verdict: overreaction in the short term, selective structural risk in the medium term
The best reading of the evidence is nuanced:
- In the short term, the sell‑off looks overblown. Headlines about agentic AI and product announcements can and did trigger reflexive selling that punished companies with different exposure profiles equally. Historical episodes — like DeepSeek in January 2025 — show that these narrative episodes can be volatile and temporary, and they frequently precede renewed investment cycles.
- In the medium term, agentic AI does present real structural change. The change is selective, not universal. Winners will be defined by data architecture, API orientation, governance, and the ability to manage the higher run‑time costs of agentic features.
- For investors and IT buyers, the critical task is to differentiate. Blanket labels like “software” are no longer diagnostically useful. Investors should focus on revenue durability, data moats, and integration depth. CIOs should balance innovation pilots with governance and cost controls.
What to watch next: catalysts that will settle the debate
- Hyperscaler capex announcements and guidance: If cloud spending continues to accelerate, that materially undercuts the panic narrative.
- Enterprise adoption metrics: Look for sustained increases in paid adoption of AI orchestration, MLOps, and security services.
- Earnings guidance vs. headlines: Monitor whether companies revise guidance lower because of revenue weakness, or whether short‑term sentiment diverges from durable bookings and retention metrics.
- Regulatory moves: New rules or enforcement actions on AI safety, provenance, and data use will reshape adoption timing and vendor economics.
- Real‑world deployments: Case studies that show measurable ROI from agentic workflows will slow down the fear cycle and re‑price the market positively.
Conclusion
The Feb‑March 2026 tech sell‑off reads, at first pass, like another chapter in a modern market pattern: a vivid technological headline ignites a simple narrative (“AI will replace X”), and markets — wired for speed and story — overreact. Bank of America, William Blair, and other analysts who called the move “fear, not fundamentals” point to a deeper truth: the real economic shift is messy, selective, and expensive to run.
Investors who base decisions on sweeping narratives risk buying into volatility and missing the structural winners. Those who dig into revenue composition, data moats, governance, and run‑time economics will be better positioned to separate panic-driven price moves from durable opportunity. The immediate sell‑off looks likely to be an overcorrection; whether it becomes a generational entry point or a painful value destruction depends on how quickly companies adapt and how the AI infrastructure cycle evolves. In markets shaped by stories, the prudent response is not denial of risk, but disciplined differentiation — because in the agentic era, nuance will pay.
Source: AOL.com Why Wall Street analysts see the tech sell-off as overblown, and fueled by 'fear, not fundamentals'