The responses from Legora and Tandem Health’s founders to the shockwaves sent by Anthropic and OpenAI are less defensive than instructive: rather than seeing Claude, ChatGPT Health and other frontier-model moves as existential threats, these Swedish founders frame them as validation of demand, a prompt to deepen industry-specific moats, and a reminder that vertical integration, data sovereignty and workflow fit still matter more than raw model muscle.
Background
In early 2026 two separate events crystallised investor and market anxiety about large foundation-model providers encroaching on specialised software markets. Anthropic published a set of plugins for its Claude Cowork platform that included a legal automation pack capable of document review, risk-flagging and compliance checks; the release prompted a steep sell-off in shares of established legal-data and legal-software companies.

Shortly thereafter OpenAI launched a health-focused product inside ChatGPT — ChatGPT Health — designed to offer a dedicated, privacy-layered environment for health and wellness conversations and integrations with medical records and wellness apps. OpenAI stated that “over 230 million people” ask health and wellness questions on ChatGPT each week, which the company used to justify a dedicated Health experience.
Those moves exposed the fault lines between horizontal model providers — the big LLM platforms with abundant capital and distribution — and the thousands of startups that have built specialised products on top of LLMs. Public and private markets reacted quickly: legal- and professional-services-data plays experienced a sharp re-pricing as investors worried about platform-level cannibalisation.
What followed in media and conference rooms were two distinct but connected conversations: one about whether startups should fear the platforms, and another — more pragmatic — about how to build defensible value when the base models are commoditised.
Why the CEOs of Legora and Tandem dismissed the “frontier lab threat”
Both Legora (legaltech) and Tandem Health (healthtech) are Stockholm-born startup plays that use LLMs as core technology. Yet their public comments after Anthropic and OpenAI’s announcements emphasise different strategic protections:
- Legora’s CEO Max Junestrand argued that consumer- or general-purpose chatbots are useful as pocket lawyers for one-off queries, but fail on complex, high-risk legal work that requires deep context, curated corpora, and robust document infrastructure. He highlighted Legora’s investments in document stores, knowledge graphs, and long-term legal data ingestion as the real moat that a model alone cannot replace. The company’s focus is enterprise workflows across hundreds of law firms and persistent document relationships — not single, ad-hoc queries. (Legora is widely reported to have reached unicorn-level valuation territory in late 2025.)
- Tandem’s CEO Lukas Saari framed the issue around clinical safety, regulatory compliance and workflow fit. He emphasised that medical deployments require local guideline alignment, rigorous data-protection practices (notably European GDPR and local NHS standards), and deep integration into electronic health record systems — areas where generic health chatbots are unlikely to be adequate for institutional adoption. Tandem’s roots in clinician-facing note automation and its European-first product posture were presented as advantages in a market that prizes data sovereignty and safety validation. Tandem’s seed round and rapid customer traction were also referenced to underline market demand.
What actually happened in markets — and why it mattered
When Anthropic introduced the Claude Cowork plugins (including legal), the reaction from listed legal-data and professional-services vendors was dramatic. Investors re-priced companies that derive revenue from selling access to legal documents, structured legal data, and downstream analytics — the intuition being that if a foundation model can read, triage and draft at scale, the marginal value of some legacy offerings will fall. Multiple outlets reported double-digit intraday drops in stocks tied to legal data/services during the sell-off.

From a market-structure perspective this is logical: foundation-model providers control both distribution and the high-margin instrumentation of user experience. If they can graft vertical workflows into the platform layer, incumbents who “wrap” models without proprietary data or unique workflow integrations are most exposed.
However, the sell-off was also a signal event rather than a wave of instant commercial destruction. Several important caveats exist:
- The plugin packages released by frontier labs are often starter kits or research previews that demonstrate intent and potential rather than a finished enterprise product with SLAs, integrations and regulatory certifications. Many of Anthropic’s Cowork plugins were released in an open-source starter form — an implicit invitation to developers and integrators to tailor them.
- Market pricing moves can overshoot when headline risk collides with uncertain timelines for enterprise adoption. The presence of a feature in a model’s plugin catalog is not the same as immediate, large-scale customer migration away from entrenched platforms that handle billing, discovery, compliance and audit trails.
- Professional services rely on human judgment, liability frameworks and defensible audit trails; automating away a junior-level task does not automatically eliminate strategic consulting, litigation, or bespoke compliance work.
Where verticals still hold the advantage
The responses from Legora and Tandem illustrate practical, repeatable moats that matter when frontier labs move downstream.

1) Proprietary, curated data and knowledge graphs
A core differentiation is owning and curating vertical data rather than simply prompting against public web knowledge. Legora emphasizes persistent storage, indexing and linking of hundreds of millions of legal documents and the construction of knowledge graphs that map relationships between contracts, clauses, precedents and parties. Those assets are costly to assemble and — crucially — give firms workflow continuity that a stateless chatbot cannot easily replicate.

2) Deep integration into customer systems and workflows
Both legal and clinical users value seamless integration into case management systems, document repositories and EHRs. Tandem’s product is presented as an ambient scribe that writes notes directly into clinical systems and adheres to local templates and billing codes—an integration surface that general chatbots rarely offer out of the box. Integration reduces friction and raises switching costs.

3) Regulatory, privacy and safety engineering
Healthcare and legal deployments carry liability and regulatory requirements. Tandem’s focus on European data sovereignty and on anchoring output to clinical guidelines positions it as “the safe option” in a market where institutions demand documentation, explainability and audit trails. ChatGPT Health’s announcement emphasised privacy-by-design features, but product-level controls are not the same as institutional-grade compliance processes and certification.

4) Vertical UX, domain-specific evaluation and guardrails
A product that is tuned to the cognitive workflows of a profession—legal drafting, legal briefing workflows, clinical triage, referral notes—can outperform a generalist because it encodes not just vocabulary but process: who signs off, what triggers escalation, how outputs map into billing, or how citations are formatted for court filings.

The limits of the “platform will eat apps” thesis
The narrative that massive model providers will automatically swallow every vertical startup rests on assumptions that are overstretched:
- Scale ≠ domain trust. Big models provide scale and raw capability, but professional adoption is often trust-driven and mediated by contracts, governance and certification. Hospitals and large law firms are conservative buyers—they want reliability, auditability and vendor accountability.
- Productisation and liability. A model that can summarize a medical record or flag contractual risk is not automatically a deployable, insured product for clinical decision-making or legal opinion. Startups that place liability, audit logs, validation datasets and human-in-the-loop controls at the centre of their offering are harder to displace.
- Channel and go-to-market. The distribution muscle required to win enterprise procurement cycles—contracts, security attestation, third-party certifications, regional offices—remains a practical moat that takes time and capital to replicate.
The real technical gaps frontier labs still face
Even as LLM quality improves, there are non-trivial engineering and product gaps that favour specialised vendors:
- Long-term state and provenance: many vertical use-cases require traceable data lineage and persistent knowledge graphs. Model outputs must be linked back to source documents and versioned over time for auditability.
- Domain-evaluated robustness: simple prompt-based evaluation does not capture domain-specific failure modes. Legal and medical settings demand evaluation frameworks tailored to outcomes that professionals judge (e.g., clinical safety, malpractice risk, statutory compliance).
- Local compliance and red-teaming: healthcare and legal outputs often hinge on jurisdictional rules. Ensuring compliance across multiple countries with different laws and guidelines requires local teams and certification processes.
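To make the provenance point concrete, here is a minimal Python sketch of how a vendor might link each model output back to hashed, versioned source documents; the class names, field names and example identifiers are illustrative, not any particular product’s API.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class SourceRef:
    """Immutable pointer to one specific version of a source document."""
    doc_id: str
    version: int
    content_hash: str  # lets auditors verify the cited text was not altered


@dataclass
class ProvenanceRecord:
    """Links a single model output back to the exact sources it drew on."""
    output_text: str
    model_id: str
    sources: list[SourceRef]
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def make_source_ref(doc_id: str, version: int, content: str) -> SourceRef:
    """Hash the source content so the reference is tamper-evident."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return SourceRef(doc_id=doc_id, version=version, content_hash=digest)


# Example: record that a summary was derived from version 3 of a clause.
ref = make_source_ref(
    "C-42/clause-7", version=3, content="Either party may terminate..."
)
record = ProvenanceRecord(
    output_text="The contract allows termination by either party.",
    model_id="some-llm-v1",
    sources=[ref],
)
```

In a real deployment the record would be persisted alongside the output, so every generated sentence can be traced to a versioned document for audit.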
Funding, valuations and the capital angle
Legora’s late-2025 valuation at roughly $1.8 billion was reported by major outlets after a sizable funding round, a signal that investor appetite for legal-AI plays remained strong even before Anthropic’s plugin release.

Tandem’s financing history is better documented at the seed stage: credible reports place a $9.5M seed led by Northzone in mid-2024, with prominent angel participation. Multiple VC and regional reports indicate further growth-stage capital in 2025, although round sizes reported across secondary sources vary (some sources suggest a Series A in the low tens of millions or a €40–50M figure). Public confirmation of a single $50M Series A press release is harder to find in major global outlets; this suggests media and sector trackers have not fully converged on a single canonical figure. Reported numbers should therefore be treated with caution until official company statements are available.
The funding dimension matters: deep-pocketed model providers have balance-sheet advantages that let them subsidise feature launches, run costly safety research, and incubate marketplace effects. Startups need capital to accelerate product differentiation, customer success and regulatory work—areas that require sustained investment.
Strategic options for startups in vertical markets
The responses from Legora and Tandem implicitly outline a pragmatic playbook. For startups that build on LLMs, viable strategic options include:
- Double down on proprietary vertical data and long-term contracts.
- Build certified, audited, and rule-driven wrappers with human-in-the-loop controls.
- Offer on-premises or regionally isolated deployments to address data sovereignty concerns.
- Emphasise integrations that replace manual work, not abstract queries.
- Pursue selective partnerships and white-label opportunities with model providers where the collaboration is mutually beneficial.
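As a rough illustration of the human-in-the-loop control mentioned in the playbook above, the sketch below gates model drafts by a risk score and records a reason for every release decision; the threshold, the risk scorer and the stand-in reviewer are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Draft:
    text: str
    risk_score: float  # e.g. from a domain-specific classifier, 0.0-1.0


def gated_release(
    draft: Draft,
    reviewer: Callable[[Draft], bool],
    risk_threshold: float = 0.3,
) -> tuple[bool, str]:
    """Release low-risk drafts automatically; route the rest to a human.

    Returns (released, reason) so every decision can be logged for audit.
    """
    if draft.risk_score <= risk_threshold:
        return True, "auto-released: below risk threshold"
    if reviewer(draft):
        return True, "released: approved by human reviewer"
    return False, "blocked: rejected by human reviewer"


def approve_if_moderate(draft: Draft) -> bool:
    # Stand-in reviewer; in production this would queue for a human sign-off.
    return draft.risk_score < 0.8


ok, reason = gated_release(Draft("Low-stakes note", 0.1), approve_if_moderate)
# ok is True here: a 0.1 risk score is below the 0.3 auto-release threshold
```

The point of returning a reason string rather than a bare boolean is that the audit trail, not the gate itself, is what conservative institutional buyers ask to see.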
Risks and open questions
While the defensive arguments are persuasive, there are persistent risks to the startup model in a high-velocity AI market.
- Platform cannibalisation is still possible. If a model provider chooses to productise a vertical use-case end-to-end with enterprise SLAs and regional compliance to match incumbent offerings, startups that lack deep data ownership or locked-in integrations will struggle to compete on price and distribution.
- Regulatory pressure could compress margins. Governments increasingly scrutinise medical and legal AI. Stricter regulation may raise the cost of compliance for everyone and create higher barriers to entry — favouring incumbents who can finance compliance work, but also potentially advantaging startups that invest early in certification.
- Dependency on upstream APIs and compute economics. Startups that rely on a small set of model providers are exposed to pricing shocks, throttling, or licensing changes. Securing multi-model portability or negotiating favourable commercial terms is essential.
- Hallucination and liability. Model hallucinations are not just product bugs but potential legal and clinical liabilities. Startups must engineer defensible, auditable mitigations — a non-trivial engineering and governance burden.
- Capital markets volatility. Valuations that look exuberant can collapse if markets collectively re-rate software businesses exposed to model consolidation. The January–February 2026 episodes show how sentiment can shift quickly when a platform-sparked narrative builds.
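The upstream-dependency risk above is commonly mitigated with a thin routing layer that keeps all vendor calls behind one interface, so a pricing shock becomes a configuration change rather than a rewrite. A minimal sketch, where the provider functions are illustrative stand-ins rather than real SDK calls:

```python
from typing import Callable

# Each backend is modelled as a function from prompt -> completion; real
# adapters would wrap vendor SDK calls behind this same signature.
Backend = Callable[[str], str]


def provider_a(prompt: str) -> str:
    return f"[provider-a] {prompt}"


def provider_b(prompt: str) -> str:
    return f"[provider-b] {prompt}"


class ModelRouter:
    """Route requests to a primary backend, falling back on failure.

    Keeping vendor calls behind one interface makes throttling or a
    licensing change a routing decision instead of a product rewrite.
    """

    def __init__(self, primary: Backend, fallback: Backend):
        self.primary = primary
        self.fallback = fallback

    def complete(self, prompt: str) -> str:
        try:
            return self.primary(prompt)
        except Exception:
            # Real systems would also log the failure for cost/SLA review.
            return self.fallback(prompt)


router = ModelRouter(primary=provider_a, fallback=provider_b)
```

Production routers typically add retries, per-request cost tracking and provider-specific prompt formatting, but the core defensive move is the single `Backend` seam.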
What the frontier labs are signalling — and what they probably will do next
The recent plugin launches and health product moves indicate several strategic ambitions from model providers:
- Platformisation of workflows: foundation-model vendors want to own more of the workflow layer — not just provide tokens/compute but also the UI, agent orchestration and marketplace for vertical packs. Anthropic’s plugin programme is an explicit step in that direction.
- Open starter packs + community mobilisation: several releases have taken the form of open-source starter plugins, inviting the community and enterprise integrators to build on top of them. That lowers the barrier to experimentation but doesn’t automatically yield enterprise-grade products.
- Strategic signalling to enterprise buyers: by demonstrating domain capability (legal automation, health-aware flows), model providers aim to accelerate enterprise conversations and aggregate demand. The resulting attention can pull enterprise customers into the model provider’s orbit, even when those customers subsequently buy specialised, third-party integrations.
What startups should do this quarter (practical checklist)
- Harden integration contracts and expand “last-mile” connectors to major EHRs, case management systems and billing platforms. Integration complexity is a high-friction moat.
- Publish measurement frameworks and domain-evaluation benchmarks that collate real-world accuracy, safety and audit evidence. Evidence builds institutional trust.
- Negotiate multi-year commitments with anchor customers and pursue co-sell arrangements that make migration to a platform provider costly or complex.
- Clarify data ownership and processing boundaries (on-prem, EU-only processing) and bake those terms into contracts.
- Prepare a contingency plan for API cost shocks: model switching, local inference options and negotiated enterprise pricing.
- Consider layered commercial models that mix subscription, usage and outcomes-based pricing. Outcomes-aligned contracts can shift perceived risk from buyers to sellers in some verticals.
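A domain-evaluation benchmark of the kind the checklist recommends can start very simply: a set of cases tagged by severity class, scored against whatever system is deployed. The sketch below uses a toy stand-in model and a substring pass criterion purely for illustration; real suites would use far richer checks (clinical safety rubrics, citation validity, jurisdictional rules).

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class EvalCase:
    prompt: str
    must_contain: str  # minimal pass criterion; real suites use richer checks
    severity: str      # e.g. "clinical-safety", "compliance"


def run_suite(model: Callable[[str], str], cases: list[EvalCase]) -> dict:
    """Score a model against domain cases, broken down by severity class."""
    results: dict[str, list[bool]] = {}
    for case in cases:
        passed = case.must_contain.lower() in model(case.prompt).lower()
        results.setdefault(case.severity, []).append(passed)
    # Pass rate per severity class, so safety-critical regressions stand out.
    return {sev: sum(passes) / len(passes) for sev, passes in results.items()}


cases = [
    EvalCase("Can I double my dose?", "consult", "clinical-safety"),
    EvalCase("Summarise clause 7.", "clause 7", "compliance"),
]


def toy_model(prompt: str) -> str:
    # Stand-in model; a real harness would call the deployed system.
    return "Please consult your clinician. Clause 7 covers termination."


scores = run_suite(toy_model, cases)
# scores maps each severity class to its pass rate
```

Publishing per-severity pass rates like this, rather than a single headline accuracy number, is what turns an internal test into the kind of audit evidence institutions can act on.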
Verdict: the headlines matter, but the long game wins
The Anthropic and OpenAI announcements were strategic shots across the bow: they revealed intent, drove market repricing and accelerated customer conversations. But the market reaction does not resolve the deeper question: can horizontal model providers cheaply and quickly replicate the entire stack of domain expertise, legal/clinical certification, integrations, and curated proprietary data that make vertical startups indispensable?

For now, the answer is “no, not without meaningful investment and time.” That’s the space Legora and Tandem are exploiting: building sticky integrations, investing in domain datasets, and operationalising regulatory-compliant workflows. Those are expensive and slow-to-scale assets — precisely the things that blunt a platform’s advantage.
At the same time, startups must not be complacent. Platform providers have distribution, capital and a hunger for revenue that will drive them to productise more vertical scenarios. Startups should therefore treat model-provider actions as both a market validation and a competitive alarm: validation because latent demand is proven; alarm because the competitive set now includes firms with deeper pockets and direct user access.
Ultimately, winning in 2026 will be about depth over breadth: deeper integrations, deeper data, deeper safety engineering — and the business models that convert those depths into durable revenue and defensible margins.
Source: Tech.eu Legora and Tandem Health CEOs reject Anthropic and OpenAI threat