Herbert Smith Freehills Kramer says it wants to prove that the era of BigLaw — with its complex partnership structures, global practices and conservative risk instincts — can also be AI-native: a firm built from the inside out to make generative AI part of everyday legal work rather than a series of pilots and experiments.
Background / Overview
The firm behind the push, Herbert Smith Freehills Kramer (branded in most coverage as HSF Kramer), is the product of a transatlantic merger that closed in mid‑2025 and immediately vaulted the combined firm into the top tier of global practice groups by size and revenue. Early reporting estimated roughly 2,700 lawyers and more than $2 billion in gross revenue for the combined business — the scale that both enables and complicates any attempt to redesign workflows firmwide.

HSF Kramer’s public positioning on AI has two concurrent threads. First, the firm is selling to clients and the market a narrative that it will be an AI‑enabled adviser — not just a user of vendor tools, but a firm that has baked AI into governance, delivery model design, client engagement and productization. Second, leadership is using the merger as a forcing function to standardize systems and deploy machine‑assisted workflows across geographies in a way legacy BigLaw shops have typically avoided. Coverage in trade press and legal outlets has highlighted both the ambition and the unique operational challenge of pulling this off at scale.
What “AI‑native” means (and why the phrase matters)
“AI‑native” is a deliberately loaded term. At its simplest, it conveys three overlapping ideas:
- AI is a default productivity layer embedded into daily drafting, research, matter intake and knowledge management.
- Workflows, training and partner compensation are redesigned to reward efficient, AI‑augmented outcomes rather than hours stacked around repetitive drafting.
- Governance, procurement and security are treated from day one as part of product design — not retrofitted after pilots reveal problems.
The concrete moves so far: tools, pilots and client signals
HSF Kramer has highlighted specific, measurable initiatives meant to show progress toward an AI‑native practice:
- A client‑facing diagnostic tool to map general counsel attitudes and maturity on generative AI — designed to let GCs benchmark where they sit on the adoption curve and to open matter discussions about acceptable uses. The tool, rolled out publicly in 2025, is an example of using productized intellectual property as a business development asset.
- Investments in internal digital governance teams and innovation directors tasked with aligning matter workflows, billing practices and technology procurement around a small number of sanctioned vendors and architectures. Coverage of the firm’s post‑merger integration highlights a leadership emphasis on operational integration to accelerate U.S. growth, with AI flagged as a core differentiator.
- A two‑track approach that peers in the market are now adopting: a firmwide productivity assistant layer (often based on mainstream productivity models such as enterprise Copilot products) plus matter‑specific legal AI platforms for research, due diligence and deal drafting. The pattern is visible across the industry and not unique to HSF Kramer.
Why this is different from the standard BigLaw playbook
Most large law firms historically take a conservative route: decentralized tech experiments, partner‑by‑partner tool choices, and pilots that rarely scale beyond single practices. HSF Kramer’s pitch is that a global integration — where systems, governance and procurement are harmonized early — can shorten the time from pilot to production and reduce the “shadow AI” problem that many firms face. Independent commentary and industry research have shown that firms that treat AI as a corporate platform — not a set of boutique pilots — tend to scale faster, but only if they implement strong governance.

This is both an operational and cultural challenge. Operationally, it requires central data lakes, clear export/import rules, and vendor contracts that prevent leakage of client‑sensitive material. Culturally, it requires rethinking incentives so that associates and partners are rewarded for supervising and validating AI outputs rather than hiding AI usage in the margins.
The upside: productivity, client retention and new product models
If the transformation succeeds, the potential upside is substantial:
- Faster turnaround and lower cost: Tasks such as first‑draft contracts, discovery triage and targeted research can shrink from hours to minutes when backed by retrieval‑augmented systems and human review.
- Better knowledge reuse: Consolidated matter corpora and vector search can surface precedent and prior‑work context more reliably across geographies.
- New commercial products: Firms that standardize AI‑augmented playbooks can productize certain services (e.g., subscription contract clinics, automated due diligence summaries), converting time‑based work into productized revenues.
The hard limits and risks — what can go wrong
The legal profession’s obligations make missteps particularly costly. The main risk vectors are:
- Confidentiality and client data leakage. Generative models trained or queried improperly can leak case facts, settlement terms or privileged communications. That risk is real for firms that allow unrestricted model access or sign vendor contracts without strict data use, retention and reverse‑engineering protections. Law departments and GCs explicitly demand contractual protections before they permit work to run through third‑party models.
- Hallucinations and factual inaccuracies. Language models can invent statutes, misstate cases or incorrectly summarize evidence — errors that matter enormously in motions, briefs and transactional disclosure. Academic work across legal‑AI research underscores that hallucination is not a solved problem, and mitigation requires retrieval‑augmented methods, provenance tracking and mandatory human verification in outward‑facing deliverables.
- Professional liability and supervision. Regulators and bar associations expect lawyers to supervise non‑lawyer tools that assist in the practice of law. That supervision must be demonstrable; opaque “agent” pipelines where models autonomously draft and send work without human oversight are a liability. Law firms have already seen malpractice scares and regulatory warnings when AI outputs were used without adequate verification.
- Vendor lock‑in and control of IP. Rapid vendor selection without strong procurement clauses can cede control of matter corpora, client data, and derivative IP to vendors or cloud providers. Firms must negotiate data sovereignty, model‑retraining clauses, and exit procedures to avoid creating brittle infrastructure that is expensive to unwind.
- Ethical and reputational risk. Delivering wrong legal advice under the banner of AI efficiency risks reputational damage that will reverberate far beyond a single botched brief. Clients expect accuracy; basic errors are not tolerated when they threaten legal outcomes.
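Several of these risks converge on one preventive control: stripping client‑identifying material before anything reaches a third‑party model. The snippet below is a toy redaction pass under that assumption; the patterns are illustrative only, and a production anonymizer would need named‑entity recognition, client‑name dictionaries and human review of edge cases.

```python
import re

# Illustrative patterns only — not a production anonymizer.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MATTER_NO": re.compile(r"\bM-\d{3,}\b"),
    "MONEY": re.compile(r"\$\d[\d,]*(?:\.\d+)?\b"),
}

def redact(text: str) -> str:
    """Replace sensitive tokens with typed placeholders before any model call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Settlement of $1,250,000 on matter M-4821; contact gc@client.com"))
# prints: Settlement of [MONEY] on matter [MATTER_NO]; contact [EMAIL]
```

Typed placeholders (rather than blanking) preserve enough structure for the model to reason about the document while keeping the specifics out of the vendor's hands.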
Governance playbook: what successful firms are doing now
Practices across firms that have moved beyond pilot stage converge on a similar playbook. Firms aiming to be AI‑native at scale should consider these sequential priorities:
- Executive sponsorship and a single accountable owner for firmwide AI strategy.
- A narrow set of sanctioned models and vendors — chosen by risk profile and contractual protections.
- Mandatory logging, versioning and provenance capture for every AI output used in client work.
- Training and certification programs for lawyers on prompting, validation and escalation practices.
- Clear procurement terms that protect client confidentiality, ownership of matter outputs and allow data deletion/portability.
- Human‑in‑the‑loop gates for any outward‑facing legal advice or court filings.
Architecture & vendor choices: the technical underpinnings
Becoming AI‑native is as much an IT program as an innovation initiative. The technical stack most firms are converging on includes:
- Secure document ingestion pipelines: OCR, normalization, entity extraction and metadata tagging to make matter corpora usable for retrieval systems.
- Vector stores and retrieval augmentation: To reduce hallucinations and tie responses to cited documents, firms use vector indexes and RAG patterns that bind model outputs to specific, auditable passages.
- Model orchestration layers: These route queries to the appropriate models (general productivity assistants vs. matter‑specific legal LLMs), mediate access, and enforce redaction or anonymization where necessary.
- Audit & compliance tooling: Immutable logs, query audits and model explainability artifacts for e‑discovery and malpractice defense.
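The retrieval‑augmentation pattern in this stack can be illustrated with a bare‑bones example. The scoring here is a toy bag‑of‑words overlap standing in for a real vector index, and the corpus, ids and function names are invented for illustration; the key idea is that the prompt binds the model to specific, citable passages.

```python
from collections import Counter

# Toy matter corpus: (citable passage id, text). A real system would use
# embeddings in a vector store; word overlap stands in for similarity here.
CORPUS = [
    ("doc-12#p3", "The indemnity is capped at the total fees paid under the agreement."),
    ("doc-07#p1", "Either party may terminate on ninety days written notice."),
    ("doc-12#p9", "Liability for indemnity excludes consequential damages."),
]

def score(query: str, passage: str) -> int:
    """Toy relevance score: shared word count between query and passage."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum(min(q[w], p[w]) for w in q)

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the top-k passages, each paired with its citable id."""
    ranked = sorted(CORPUS, key=lambda item: score(query, item[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Bind the model to auditable passages: answer only from [id] blocks."""
    context = "\n".join(f"[{pid}] {text}" for pid, text in retrieve(query))
    return (
        "Answer using ONLY the passages below and cite their ids.\n"
        f"{context}\nQuestion: {query}"
    )
```

Because every passage carries an id, the model's citations can later be checked against the audit log — the provenance property the stack description calls for.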
Talent, process and the human factor
Technology alone will not make a firm AI‑native. Human systems must change:
- Training: Lawyers need explicit instruction on how to ask models useful, auditable questions and how to validate responses. Certification programs and internal “AI academies” are becoming common.
- Process redesign: Billing and staffing models must align with shorter cycles for drafting and review. Firms that keep fee models strictly hourly risk misaligned incentives.
- Cross‑functional teams: Product managers, data engineers, privacy lawyers and practicing attorneys must co‑design solutions so legal risk and technical design are coherent.
Market and competitive implications for BigLaw
If HSF Kramer truly becomes AI‑native at scale it could reshape firm competition:
- Product differentiation: Firms that can consistently deliver faster, cheaper and auditable outputs will win more repeat business from cost‑sensitive in‑house legal teams.
- Recruiting & lateral market: Firms seen as AI leaders will attract talent that wants to work with modern tools and productized practices. Conversely, the firm’s internal governance choices (e.g., partner voting rules, lateral hiring policies) will shape who wants to join and how practice groups grow. Bloomberg Law’s coverage of HSF Kramer’s post‑merger integration highlighted operational steps to accelerate U.S. growth, including changes in hiring approvals.
- New business lines: Standardized AI workflows can be packaged into subscription or outcome‑based offerings, changing the revenue mix away from pure hourly billing.
Where HSF Kramer’s narrative is persuasive — and where it remains aspirational
The strengths in HSF Kramer’s approach are clear:
- Scale and resources. The combined firm has the balance sheet and global footprint to standardize key systems and to invest in durable governance.
- Client‑facing productization. The GC diagnostic and public messaging move beyond internal pilots toward client engagement, which is necessary for adoption across large corporate accounts.
- Organizational focus. Using the merger to accelerate alignment between technology, practice and commercial strategy is sensible — a rare example of a structural opportunity being used to drive tech strategy.
At the same time, several parts of the narrative remain aspirational:
- “AI‑native” is not a binary outcome. The reality on the ground is likely to be hybrid for years: some practices will be tightly governed and productive, others will lag behind. Public relations language often compresses a multi‑year transformation into a single soundbite.
- Proof points are still thin. Productized tools and pilots are necessary but not sufficient. The decisive test will be whether HSF Kramer can demonstrate measurable improvements in matter economics, error reduction and client satisfaction across multiple practice areas — evidence that is not yet public.
- Regulatory and malpractice exposure is unresolved. Bar rules and supervisory expectations differ by jurisdiction; a global firm must reconcile them. That’s an engineering, legal and cultural exercise that goes well beyond technology purchases.
Practical recommendations for other firms watching HSF Kramer
For firms contemplating their own journey toward AI‑native operations, the evidence and experience of early movers suggest a pragmatic path:
- Start with narrow, high‑value pilots that have clear ROI metrics and strong governance.
- Centralize procurement to enforce consistent contractual protections for client data and IP.
- Build mandatory audit trails and provenance capture into every production pipeline.
- Invest in training and change management; a technical solution without human adoption will fail.
- Publicize transparent client opt‑ins and contracts that reassure in‑house counsel and compliance teams.
Conclusion
HSF Kramer’s public commitment to being “AI‑native” is an important milestone in legal industry discourse because it reframes AI adoption as an organizational design problem rather than a gadget purchase. The firm’s scale gives it the resources to create standardized, auditable workflows and client products that could change market expectations about efficiency and delivery.

At the same time, the transformation is neither automatic nor risk‑free. Success will depend on a rigorous lockstep of procurement discipline, technical architecture, human supervision and transparent client engagement. The legal community would do well to watch HSF Kramer’s next concrete metrics — matters handled under AI‑governed workflows, client opt‑in rates, measured time‑to‑close reductions and, crucially, any incidents that test the firm’s governance.
If HSF Kramer can show reproducible business outcomes while keeping confidentiality, accuracy and professional responsibility intact, the firm will have built a useful template for BigLaw’s next chapter. If it fails to manage the known risks, the episode will serve as a sobering case study in the limits of technology‑led transformation when applied hastily to regulated professional services. Either way, the experiment is now one of the most consequential tests of whether BigLaw can be both large and truly AI‑native.
Source: Law360 HSF Kramer Wants To Show BigLaw Can Also Be AI-Native - Law360 Pulse
