Burges Salmon Deploys Firmwide Copilot and Harvey for Matter Work with Responsible AI Governance

Burges Salmon’s Digital Enablement Programme has entered a decisive new phase: the firm has embedded Microsoft 365 Copilot as a firm‑wide foundation and — following a structured trial — selected the legal‑focused generative AI platform Harvey for matter‑specific workflows. The announcement crystallises a two‑track strategy that many progressive law firms are adopting: place a secure, organisation‑level assistant at the heart of everyday productivity, then layer in specialist legal tooling and agentic automation to tackle high‑volume, repeatable legal tasks. The rollout statistics the firm published — roughly 1,300 people enabled, ~700,000 prompts to date, and 4,000 agent tasks completed — show an adoption curve that has already moved from experimentation into measurable operational use. At the same time, Burges Salmon has formalised governance through a Responsible AI Board and is emphasising practice‑led pilots, training and risk controls as it scales. This article unpacks what the move means technically and commercially, analyses the strengths and risks, verifies key claims against public statements and independent reporting, and provides pragmatic guidance for other legal teams weighing a similar path.

Background and industry context

Legal services and the broader professional services sector have rapidly moved from curiosity to pragmatic deployment of generative AI over the last two to three years. Early pilots focused on legal research, contract review and drafting assistance; more recent programmes emphasise integrated agents, document‑centric workflows and enterprise security. Two parallel trends define the space today:
  • A shift towards platformed productivity — Microsoft 365 Copilot and equivalent assistants provide a secure, centrally managed way to surface AI capabilities across email, documents, meetings and collaboration tools.
  • A proliferation of legal‑specialist models and platforms — vendors such as Harvey tailor generative AI for legal research, document analysis and matter workflows, integrating with document stores, Word and legal research databases.
Burges Salmon’s approach — a firmwide Copilot baseline plus practice‑level deployment of Harvey — mirrors how many organisations intend to balance scale, usability and legal specificity. Importantly, the firm’s public statements emphasise that the Harvey adoption followed a structured internal trial and evaluation period and that deployment will be practice‑led and governed.

What Burges Salmon announced (summary of the update)​

Burges Salmon’s recent communication about the next phase of its Digital Enablement Programme contains several concrete elements:
  • Firmwide Copilot deployment: The firm reports that Microsoft 365 Copilot has been rolled out across the organisation and is embedded through the Digital Champion Network that supports adoption and knowledge sharing.
  • Usage metrics the firm disclosed: approximately 1,300 enabled people, about 700,000 prompts issued to Copilot to date, and 4,000 agent tasks completed across research and analysis agents.
  • Harvey selected for matter‑specific workflows: Following trialling, the firm will introduce Harvey into practice‑led use cases (Real Estate is cited as an early beneficiary).
  • Agent expansion: The firm has launched a “Facilitator” agent to Business Services teams and aims to expand an ecosystem of agents for research, analysis and facilitation.
  • Responsible AI Board created: A cross‑firm board will embed responsible AI across policy, process and training, reporting into existing governance structures.
  • External engagement: Ongoing participation at events like Microsoft Ignite to inform the DEP roadmap and maintain vendor/technology awareness.
These points are consistent with the firm's own published digital enablement and responsible AI content, and were reiterated in the recent public announcement outlining the DEP's next steps.

Verifying the claims: what’s firm‑stated and what independent reporting shows​

A journalist’s duty is to verify. The key numerical and governance claims above fall into two classes:
  • Claims that come directly from Burges Salmon’s own communications (usage counts, internal programmes and the creation of a Responsible AI Board). These figures are publicly stated by the firm and by partners relaying the announcement. They are best understood as firm‑reported operational metrics — useful and indicative, but reliant on internal telemetry and definitions that the firm controls.
  • Claims that are independently verifiable by cross‑reference with wider market reporting and vendor documentation (the existence and growth of Harvey, its enterprise partnerships, Microsoft’s Copilot strategy, and general observations on legal AI adoption). These are corroborated by multiple external sources and vendor/customer stories.
Where numbers such as 700,000 prompts and 4,000 agent tasks are cited, they appear in the firm’s external announcement. Independent outlets and vendor materials confirm Burges Salmon’s technology choices (Microsoft Copilot and the legal AI platform Harvey) and show similar patterns at other firms: law firms are pairing Copilot for general productivity with legal‑specific platforms for matter work. Because prompt counts and agent tasks are internally generated metrics, they should be treated as firm‑reported operational statistics — valuable, but not independently audited unless the firm publishes raw telemetry or an audit.

The technical architecture implied by Burges Salmon’s approach​

Burges Salmon’s strategy layers capabilities and controls. The high‑level architecture implied is:
  • Platform layer (foundation): Microsoft 365 + Copilot
  • Copilot operates inside Word, Outlook, Teams, Excel and SharePoint, giving firm users AI assistance without forcing data out of the tenant.
  • This foundation benefits from Microsoft’s enterprise security, compliance tooling and identity controls (SAML/SSO, conditional access, DLP).
  • Legal‑specialist layer: Harvey
  • Harvey provides legal‑centric models and workflows (research, clause searching, due diligence summarisation) and integrates into document and matter systems.
  • Many deployments of Harvey in the market run on or integrate with Azure infrastructure and with document systems like SharePoint and Word, allowing tighter data residency and compliance.
  • Agent/automation layer
  • Copilot Studio agents and/or vendor agents orchestrate repeated tasks (research tasks, meeting summaries, checklists).
  • Burges Salmon’s “Facilitator” agent for Business Services is an example: a targeted agent that automates routine non‑legal workflows.
  • Governance, logging and human review
  • Responsible AI Board, training, policy enforcement, change management, and mandatory human validation before outputs enter a client deliverable.
This layered pattern minimises single‑point failure and allows teams to choose the right tool for the task: Copilot for integrated productivity and context extraction, and a speciality legal model where domain accuracy, legal recall and matter context are paramount.
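The routing logic implied by this layered pattern can be sketched in a few lines. This is an illustrative sketch only, not Burges Salmon's implementation: the `Task` fields, layer names and policy rule are all hypothetical stand-ins for whatever the firm actually configures.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    matter_specific: bool      # needs legal domain accuracy (Harvey-style tooling)?
    client_deliverable: bool   # will the output form part of client work product?

def route(task: Task) -> str:
    """Pick a layer: specialist legal platform for matter work,
    the enterprise assistant for general productivity."""
    return "legal_specialist" if task.matter_specific else "platform_copilot"

def requires_human_review(task: Task) -> bool:
    # Policy sketch: any output entering a client deliverable gets lawyer sign-off.
    return task.client_deliverable

memo = Task("summarise due diligence bundle", matter_specific=True, client_deliverable=True)
print(route(memo), requires_human_review(memo))  # legal_specialist True
```

The design point is that tool selection and the human-review gate are explicit, auditable decisions rather than ad-hoc user choices.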

What the metrics mean — interpretation and caveats​

The headline numbers invite analysis. Using the figures the firm published:
  • Approximately 1,300 enabled people and ~700,000 prompts implies an average of ~538 prompts per enabled user since the rollout (700,000 ÷ 1,300 ≈ 538). That average indicates frequent engagement, but it masks the distribution: a minority of “power users” typically generates the bulk of prompts.
  • 4,000 agent tasks completed suggests agents are being used in focused scenarios. On a per‑enabled‑user basis this is roughly 3 agent tasks per person, but in practice agents tend to be used by a smaller group (e.g., research teams or legal assistants).
  • Rapidly rising prompt counts alongside formal governance shows the firm has progressed from early pilots into sustained, monitored use.
Important caveats:
  • These are firm‑reported counters. Different organisations count “prompts” differently — e.g., does a multi‑turn agent conversation equal one prompt or many? Are system‑initiated queries included? Burges Salmon has published the numbers publicly, but the raw telemetry definitions are internal.
  • Prompt counts alone are a weak proxy for value. The true business signal is time saved, error rates caught in review, client satisfaction changes and measurable reduction in repetitive tasks.

Governance and security: what Burges Salmon is doing and what it implies​

Burges Salmon has taken several governance steps that align with best practice for professional services:
  • Responsible AI Board: cross‑disciplinary oversight including Legal, Risk, Responsible Business, Innovation, IT and InfoSec. Embedding senior representation is essential for aligning ethical, regulatory and operational controls.
  • Data residency and tooling choices: the firm endorses licensed tools with technical and organisational measures and states that Copilot telemetry is protected by Microsoft enterprise controls. It also says that client information will not be sent to third‑party models unless strictly managed.
  • Certifications and security posture: the firm cites ISO 27001 and other certifications, signalling a structured security management system that auditors and clients can review.
  • Human‑in‑the‑loop policy: any output that forms part of the work product will be reviewed and approved by a lawyer — an indispensable control given professional duties and regulatory expectations.
Why this matters: law firms face dual obligations — to innovate and to manage client confidentiality and legal professional duties. A responsible AI board and explicit policies (data handling, training, mandatory review) are the baseline regulators and clients increasingly expect.

Risks and mitigations — a balanced view​

Adopting AI in legal workflows offers obvious efficiency upside, but it carries concrete risks that Burges Salmon and peers must address:
  • Accuracy and “hallucination”
  • Risk: generative outputs can assert incorrect facts or mis‑cite authorities.
  • Mitigation: enforce human validation, use conservative prompting strategies, and prefer specialist legal models trained/instructed to cite primary sources.
  • Data leakage and model training exposure
  • Risk: sending client data to a vendor model that may persist or be used to train general models.
  • Mitigation: select enterprise deployments with contractual data protections, prefer private instances or on‑tenant hosting, and restrict PII and sensitive client data in prompts.
  • Regulatory and professional risk
  • Risk: regulators may scrutinise how advice is generated; lawyers retain responsibility for output.
  • Mitigation: documented review trails, training, and firm policies that make the role of AI explicit in client communications.
  • Vendor lock‑in and integration complexity
  • Risk: layering multiple vendors creates operational complexity and potential lock‑in.
  • Mitigation: adopt interoperable integrations, clear exit strategies, and multi‑model architectures where feasible.
  • Overreliance and deskilling
  • Risk: junior lawyers may underdevelop legal judgement if they rely excessively on AI.
  • Mitigation: embed AI as an assistant, not a replacement; use AI for lower‑value drafting and let senior lawyers focus on training and review.
  • Auditability and recordkeeping
  • Risk: inability to reproduce or audit AI generation paths.
  • Mitigation: retain prompt/response logs, system metadata and human review records for key deliverables.
Burges Salmon’s approach — combining certified infrastructure, explicit policy and a Responsible AI Board — addresses these areas. Nonetheless, independent auditing of outputs and periodic red‑teaming will remain necessary as capabilities evolve.
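One of the mitigations above, restricting PII and sensitive client data in prompts, can be approximated with a pre-prompt screen. A minimal sketch, under stated assumptions: the patterns below (including the `MATTER-` reference format) are invented for illustration, and production DLP tooling relies on classifiers and context rather than bare regexes.

```python
import re

# Illustrative patterns only; real DLP is far more sophisticated.
SENSITIVE_PATTERNS = {
    "uk_ni_number": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.I),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "client_ref": re.compile(r"\bMATTER-\d{4,}\b"),  # hypothetical matter-ref format
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, names of the patterns that matched)."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    return (not hits, hits)

allowed, hits = screen_prompt("Summarise MATTER-20481 for alice@example.com")
print(allowed, hits)  # False ['email', 'client_ref']
```

A screen like this would sit in front of any third-party model call, blocking or flagging the prompt before data leaves the tenant.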

Practical use cases and early ROI signals​

Harvey and Copilot are complementary in typical firm deployments. Use cases that Burges Salmon and other firms report or pilot include:
  • Research and analysis: rapid scoping of issues, case law summaries, and identification of relevant authorities.
  • Contract review and clause extraction: automated flagging of risky clauses, clause libraries and drafting first drafts.
  • Due diligence and document summarisation: folding multiple documents into short, structured outputs for review.
  • Closing checklists and transactional boilerplate: faster generation of standard lists and repetitive documents.
  • Business Services automation: the “Facilitator” agent is likely automating administrative tasks like meeting summaries, scheduling, and action item extraction.
Early ROI signals reported by vendors and customers in the market include time savings (some customer narratives suggest hours regained per week), more consistent first drafts, and faster turnaround on high‑volume tasks. Firms must, however, translate time saved into fee model innovation or reallocated lawyer time for higher‑value work to realise economic benefit.

Change management: how Burges Salmon is progressing adoption​

The announcement highlights several change management elements that underpin successful deployments:
  • Digital Champion Network: internal champions accelerate adoption, surface use cases and build confidence.
  • Practice‑led pilots: piloting in a practice area like Real Estate allows targeted measurement and iterative process redesign.
  • Training and skills: investing in workflow training and responsible AI education reduces misuse and increases value capture.
  • Metrics and measurement: counting prompts and agent tasks is one piece; the firm will need deeper KPIs (error corrections prevented, time saved validated by time‑tracking, client satisfaction).
  • Vendor collaboration: partnering closely with vendors and attending industry events provides forward visibility on product roadmaps.
For other firms, Burges Salmon’s mix of central coordination (Copilot baseline, Responsible AI Board) plus decentralised pilots is a pragmatic blueprint.

The vendor landscape and strategic trade‑offs​

Choosing a specialist vendor like Harvey versus relying solely on general platform assistants involves trade‑offs:
  • Benefits of legal‑specialist platforms:
  • Better legal ontology, citation behaviours and workflows tailored to matter work.
  • Pre‑built legal templates, training on legal corpora and integrations with legal databases.
  • Benefits of platform assistants (Copilot):
  • Deep integration into everyday productivity tools (Word, Outlook, Teams), strong enterprise controls, and more predictable data handling under a single tenancy.
  • Trade‑offs:
  • Adding specialist vendors increases integration effort and vendor management overhead.
  • Specialist models may offer superior legal recall but require careful controls around data ingress/egress and source authoritativeness.
Harvey’s market position — enterprise traction across large law firms and a cloud relationship with major providers — makes it a credible complement to Copilot. Market dynamics also feature partnerships (for example, legal database integrations with major research services) that affect value for law‑firm users.

What this means for clients and client relationships​

Clients will weigh two questions in light of this update:
  • Will AI lower cost or improve speed without compromising quality? If Burges Salmon’s pilots deliver measurable time savings and quality controls, clients stand to benefit from faster turnaround and better use of senior lawyer time.
  • Will client confidentiality remain secure? The firm’s stated policies — not uploading client data to open models, using enterprise‑grade vendors and retaining ISO/secure hosting — aim to reassure clients. But transparency is key: firms should discuss AI usage on matters and obtain client consent where appropriate.
Firms that adopt AI responsibly can offer clients new service models: accelerated due diligence, more iterative drafting cycles and value‑added analytics on matter portfolios. But transparency, documented review and contractual safeguards must accompany any such claims to clients.

Recommendations for other law firms thinking about the same path​

  • Establish an enterprise baseline first:
  • Deploy a centrally governed assistant (Copilot or equivalent) for firmwide productivity and secure integration with identity and DLP tooling.
  • Pilot specialist tools where they map to high‑volume work:
  • Run practice‑led pilots (Real Estate, Corporate, Litigation) and measure before‑and‑after time and quality metrics.
  • Create multidisciplinary governance:
  • Form a Responsible AI Board with Legal, Risk, IT, InfoSec and Operations representation to set policy, audit usage and sign off on client communications.
  • Insist on human‑in‑the‑loop for all client deliverables:
  • Maintain mandatory legal review and retention of review trails and prompt/response logs for key outputs.
  • Build change management and champion networks:
  • Train users, share successful prompts and playbooks, and rotate champions to maintain momentum and guardrails.
  • Prepare vendor exit and audit plans:
  • Ensure contractual rights to data export and independent audit options for critical tooling.
  • Use measured KPIs:
  • Track time saved, review corrections, client satisfaction and adoption distribution (not just raw prompt counts).

Final assessment — strengths, risks and the likely path ahead​

Burges Salmon’s announcement is a credible example of a measured, pragmatic approach to legal AI adoption. Strengths include:
  • A clear two‑layer strategy (Copilot foundation + Harvey for matter workflows) that balances broad productivity gains with domain‑specific capability.
  • Formal governance via a Responsible AI Board and strong information security posture.
  • Practice‑led pilots and a digital champion network that drive real behaviour change rather than top‑down diktat.
Key risks to monitor:
  • Overreliance on vendor claims and internal metrics without independent audit of outcomes.
  • The perennial accuracy and hallucination risk in generative models; pushing outputs into client work without sufficient review would be professionally hazardous.
  • Integration complexity and potential lock‑in as the firm layers multiple AI vendors.
Where this initiative is strongest is in its emphasis on behaviour change and operational practices: the technology itself is a tool, but value is realised when workflows, human judgement and governance evolve together. For clients and peers in the market, Burges Salmon’s DEP progression offers a tested pattern: secure the platform, pilot the specialist, govern with purpose, and measure what matters. That path won’t eliminate risk, but it does make mitigation systematic — and that is what professional services clients and regulators will expect as AI becomes part of the standard toolkit.
In short, the firm’s announcement is a significant step for a major independent UK firm: it signals movement from isolated experiments to structured, governed deployment of AI at scale, while acknowledging the importance of responsible use, human oversight and operational maturity. The coming months will tell whether the reported prompt volumes and agent tasks convert into durable client value and measurable changes in how legal work is priced and delivered.

Source: Edinburgh Chamber of Commerce, “Burges Salmon advances DEP journey with trusted, responsible AI powered by Microsoft Copilot and Harvey”
 

Independent UK firm Burges Salmon has moved its Digital Enablement Programme (DEP) into a decisive new phase: a firm‑wide Microsoft Copilot foundation paired with the targeted adoption of Harvey, a legal‑focused generative AI platform, for matter‑specific workflows. The combination is not experimental theatre — the firm reports approximately 1,300 people enabled, ~700,000 prompts issued to Copilot to date, 4,000 agent tasks completed, and the rollout of a Facilitator agent to Business Services. Those figures, published by the firm on 27 January 2026, underline a deliberate, staged strategy: embed a secure enterprise AI layer first, then introduce specialist legal tooling and purpose‑built agents under strong governance. This article examines what Burges Salmon has announced, places the move in sector context, evaluates technical and regulatory implications, and offers practical guidance for other firms pursuing responsible AI-led digital transformation.

Background

What Burges Salmon announced​

Burges Salmon’s announcement confirms the next phase of its multi‑year Digital Enablement Programme (DEP). The firm describes a two‑stage approach: first, a robust, firm‑wide foundation built on Microsoft Copilot to provide broad productivity and knowledge‑access capabilities; second, targeted deployment of Harvey for matter‑specific legal workflows where a legal domain model and document handling deliver differentiated value. The firm attributes the Harvey selection to structured internal trials, pilot‑group evidence and practice‑led evaluation, with Real Estate cited as an early beneficiary. Burges Salmon also emphasised governance: the creation of a Responsible AI Board to embed policy, process and training across use cases.

Why this matters for law firms​

Law firms operate at the intersection of high‑value intellectual work and strict duties of confidentiality, privilege and competence. The past three years have seen generative AI move from exploratory pilots to enterprise deployments that promise real efficiency gains—particularly in research, document analysis, contract review and first‑draft drafting. But those gains come with material risk if firms adopt technology without rigorous governance. Burges Salmon’s public messaging is significant because it signals a measured, replicable pattern: enterprise copilot first; specialist legal models second; governed rollouts and explicit accountable oversight.

Overview of the technologies involved​

Microsoft Copilot as the enterprise foundation​

Microsoft’s Copilot for Microsoft 365 is positioned as the primary interface for workplace AI: embedded in Office apps, Teams and Edge, and extended by AI agents that can automate or assist common business processes. Copilot provides secure access to firm data stores and integrates with the Microsoft security stack, which is why many regulated organisations prefer it as a starting point for AI enablement. Agents — such as the Facilitator used by Burges Salmon — automate meeting agendas, note‑taking and action tracking, reducing low‑value administrative load across legal business services.
Key operational benefits of a Copilot foundation:
  • Centralised authentication and single‑sign‑on across tools.
  • Integration with corporate data sources (SharePoint, OneDrive, Teams).
  • Built‑in enterprise controls for data access, retention, and auditing.
  • Low‑code agent creation via Copilot Studio for task automation.

Harvey: a legal‑specialist generative AI platform​

Harvey is a vendor specialising in AI for legal teams, and it markets capabilities around secure project workspaces, bulk document analysis, legal‑specific workflows and multi‑model agents optimised for legal outputs. Firms that need matter‑specific analysis, contract redlining suggestions or synthesis of high volumes of transactional documents often choose legal‑specialist platforms because they tailor system prompts, fine‑tune models on legal data, and provide workflow artefacts lawyers expect.
Why firms pick specialist legal models:
  • Legal domain tuning reduces generic hallucinations on legal matters.
  • Workflow automation tailored to matters (eDiscovery, contract playbooks).
  • Document vaults and workspace isolation that support privilege handling.
  • Integration with familiar legal research and document management pipelines.

Implementation: staged, governed, and practice‑led​

A two‑layer strategy​

Burges Salmon’s approach shows clear sequencing:
  • Deploy Copilot broadly to build firm‑level AI fluency, connectivity to data, and standardised security posture.
  • Trial and then adopt specialist legal tools (Harvey) for matter‑level workflows where additional domain capability is required.
  • Expand agent usage for repeatable tasks (research, analysis, meeting facilitation), backed by training and policy.
This two‑layer pattern is practical. It preserves a single, auditable enterprise surface for basic AI interactions while allowing more sensitive, high‑value legal work to land on a platform purpose‑built for legal risk management.

Digital champions and behaviour change​

The firm cites an internal Digital Champion Network that has driven innovation, experimentation and knowledge sharing during Copilot rollout. This is a common success factor: technology adoption in professional services depends far more on behaviour change and workflow redesign than on model accuracy alone. Champions reduce fear, surface use‑cases, and accelerate skills transfer — which is exactly what Burges Salmon says it achieved.

Pilot evidence and selection of Harvey​

Burges Salmon describes a structured trial and evaluation path that led to selecting Harvey. The firm reports pilot teams — including Real Estate — showing measurable behaviour change and operational improvements. This approach demonstrates two important procurement and risk management best practices:
  • Evidence‑based selection via pilots that measure lawyer behaviour and output quality, not only time‑saved metrics.
  • Practice‑led governance to make sure tool adoption aligns with legal delivery standards and client obligations.

Responsible AI governance: the Responsible AI Board​

Structure and remit​

Burges Salmon has established a Responsible AI Board composed of colleagues across the firm. The Board’s remit includes ensuring AI adoption aligns with regulatory duties, client confidentiality expectations, and firm values. It will work alongside the DEP team and the Harvey project team to embed policy, process and training.
A few governance tasks the Board should prioritise:
  • Define permitted and prohibited AI uses (e.g., no uploading of client confidential material to unsecured public models).
  • Approve risk assessment templates for pilots and production rollouts.
  • Oversee consent, client notice and contractual language for AI use.
  • Require regular model validation and output quality audits.
  • Maintain incident response and escalation procedures for data leakage or hallucinations.

Why formal governance matters now​

Regulators in the UK and elsewhere are actively grappling with AI in legal services. The Solicitors Regulation Authority and other bodies have signalled that innovation is welcome but consumer protections and professional duties remain paramount. Formal governance demonstrates to clients, auditors and regulators that the firm takes its duties seriously and has controls in place to manage AI‑specific risk.

Benefits: what Burges Salmon and similar firms can expect​

  • Increased lawyer productivity on routine tasks: drafting, summarisation, and research synthesis free up time for higher‑value legal judgment.
  • Faster matter turnaround times by automating evidence and document triage using agents and specialist workflows.
  • Improved knowledge reuse as Copilot surfaces firm precedent, clauses, and playbooks inside the flow of work.
  • Enhanced collaboration: meeting facilitation agents reduce administrative friction and create reliable action lists.
  • Competitive differentiation: early, responsible adoption enables new service models and pricing options for clients.
These benefits are not theoretical. Firms piloting legal AI report time savings and better first drafts, provided outputs are validated and lawyers maintain a supervisory role.

Risks and blind spots​

Hallucinations and material accuracy​

Generative models can produce plausible but incorrect statements — the well‑known “hallucination” problem. For lawyers, an incorrect legal assertion or a misread clause could create material risk. Specialist legal platforms mitigate this with legal prompts, model fine‑tuning and provenance features, but never eliminate the need for lawyer review.
Mitigation:
  • Human‑in‑the‑loop validation as mandatory for all legal outputs.
  • Clear UI flags that highlight AI‑generated content and confidence levels.
  • Model provenance logs to show which documents and training data informed outputs.

Client confidentiality and privilege​

Uploading client documents to third‑party systems creates a potential exposure. Even when firms use enterprise platforms hosted on major cloud providers, contract terms, data segregation, encryption and access controls must be validated.
Mitigation:
  • Ensure contractual commitments for data residency, encryption, retention, and no‑model‑training clauses where necessary.
  • Use dedicated, private deployments or secure vaults for privileged matter data.
  • Strict role‑based access and audit trails for AI workspaces.

Vendor lock‑in and multi‑model strategy​

Relying on a single vendor for both enterprise copilot and specialist legal tooling creates strategic and procurement risk. Firms must weigh trade‑offs between deep integration and flexibility.
Mitigation:
  • Define exit and portability clauses up front (data export and model transition).
  • Evaluate multi‑model approaches that allow fallback to alternative providers or self‑hosted models.

Regulatory and ethical uncertainty​

Regulators are still shaping rules on explainability, accountability and disclosure. Firms must be prepared to adapt to new guidance and, in some jurisdictions, explicit reporting or client notices.
Mitigation:
  • Build policies that can be updated rapidly and communicate changes promptly to clients.
  • Engage in public consultations and industry groups to stay ahead of rule changes.

Technical considerations and architecture decisions​

Data flows and security posture​

A firm using Copilot as the enterprise layer and Harvey for matter flows must design clear data boundaries. Copilot access to SharePoint or Outlook data should be controlled via conditional access and DLP policies. Specialist matter data should flow into Harvey’s secure project vaults only after compliance checks and client consent where required.
Important technical controls:
  • Enforce encryption at rest and in transit for both Copilot and specialist tool data stores.
  • Apply Data Loss Prevention (DLP) policies to block or flag risky uploads.
  • Enable comprehensive logging for all AI interactions and retain logs for auditing.
  • Use tenant‑level controls to prevent model training on sensitive data where vendors permit.
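The logging control above can be made concrete with a structured, append-only audit record per AI interaction. This is a sketch with invented field names, not a vendor schema; hashing the prompt and response lets reviewers verify integrity without copying client text into the log itself.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, tool: str, prompt: str, response: str) -> str:
    """Build one JSON audit line for an AI interaction.
    Full text would live in a separate, access-controlled store;
    the log keeps only content hashes plus review state."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "human_reviewed": False,  # flipped to True when a lawyer signs off
    }
    return json.dumps(record, sort_keys=True)

line = audit_record("jsmith", "copilot", "summarise lease", "Summary: ...")
print(line)
```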

Integration and workflow orchestration​

Copilot’s agent architecture and Copilot Studio enable low‑code agent creation to orchestrate cross‑system workflows (e.g., pull documents from SharePoint, summarise with Harvey, produce a draft memo in Word). That orchestration reduces manual handoffs but increases the need for end‑to‑end testing and transaction‑level audits.
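A cross-system flow of that shape can be sketched as a simple pipeline. Every function here is a hypothetical stand-in (these are not Copilot Studio or Harvey APIs); the point is that the handoffs are explicit and the pipeline terminates in a human-review state rather than a finished deliverable.

```python
def fetch_documents(matter_id: str) -> list[str]:
    # Stand-in for a document-management / SharePoint lookup.
    return [f"{matter_id}/lease.docx", f"{matter_id}/title-report.pdf"]

def summarise(doc: str) -> str:
    # Stand-in for a call to a legal AI platform.
    return f"summary of {doc}"

def draft_memo(summaries: list[str]) -> str:
    # Stand-in for drafting into Word.
    return "DRAFT MEMO:\n" + "\n".join(f"- {s}" for s in summaries)

def pipeline(matter_id: str) -> dict:
    docs = fetch_documents(matter_id)
    memo = draft_memo([summarise(d) for d in docs])
    # Nothing leaves the pipeline without explicit lawyer sign-off.
    return {"memo": memo, "status": "awaiting_human_review"}

result = pipeline("M-1001")
print(result["status"])  # awaiting_human_review
```

End-to-end testing would then assert on each handoff, not just the final memo.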

Cloud and infrastructure choices​

Vendor commitments to cloud providers matter for certification, data residency and performance. Specialist vendors often commit to major clouds (Azure, AWS) and may offer enterprise agreements. Firms must validate providers’ compliance certifications, cloud region options, and contractual limits on secondary model usage.

Change management: people, process and skills​

Training and role definition​

Burges Salmon’s emphasis on behaviour change and the Digital Champion Network is exactly what the market recommends. Training must go beyond “how to prompt” and include:
  • Understanding model limits and when human review is required.
  • New matter workflows that incorporate AI‑generated outputs.
  • Security and confidentiality rules specific to AI tools.

Red teaming and QA​

Before production rollouts, firms should adopt red‑teaming and scenario testing to surface failure modes: hallucination tests, data exfiltration attempts, and performance testing against curated legal tasks.
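A red‑team suite can be as simple as a table of adversarial prompts paired with pass/fail predicates. The sketch below assumes a generic `model_fn` callable and hypothetical case names; real testing would run against the firm's actual tooling and a curated bank of legal tasks.

```python
def run_red_team_suite(model_fn, cases):
    """Run curated adversarial cases against an AI callable.

    `model_fn` is any callable taking a prompt string and returning
    text. Each case is a (name, prompt, passes) triple, where `passes`
    is a predicate the output must satisfy, e.g. "must not echo a
    seeded client reference" for a data-exfiltration test.
    """
    failures = []
    for name, prompt, passes in cases:
        output = model_fn(prompt)
        if not passes(output):
            failures.append(name)
    return failures
```

The same harness covers hallucination checks (predicates that verify cited sources exist) and exfiltration checks (predicates that scan for seeded secrets).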

Client communication and consent​

Clients must be told when AI is used on their matters — either through engagement letters, matter intake forms, or ongoing consent mechanisms. Some clients will require additional contractual protections or opt‑out rights.

Practical steps for firms considering a similar path​

  • Establish an enterprise AI foundation first (single sign‑on, global security policies, Copilot/enterprise platform).
  • Create an AI governance body (Responsible AI Board) with cross‑disciplinary representation.
  • Pilot specialist legal tools against measurable lawyer behaviour change and quality metrics.
  • Enforce human review rules and audit logging for all legal outputs.
  • Negotiate contract terms that protect client data and include clear exit and portability provisions.
This five‑step path reduces runaway experimentation and ensures new tools add measurable legal value rather than just novelty.

Critical analysis: strengths and potential gaps in Burges Salmon’s approach​

Strengths​

  • Staged deployment: Starting with Copilot gives Burges Salmon centralised control over baseline AI interactions, making it easier to govern and audit user activity.
  • Evidence‑based vendor selection: The structured trial for Harvey demonstrates procurement discipline and practice‑led validation rather than technology fetishism.
  • Governance focus: Creating a Responsible AI Board shows the firm recognises the non‑technical aspects of risk management — ethics, client duty, training and policy.
  • Behaviour change and adoption strategy: The Digital Champion Network and measurement of prompts and agent tasks indicate the firm is managing user adoption actively rather than imposing tools top‑down.

Potential gaps and unresolved challenges​

  • Transparency on client engagement: The firm’s public announcement highlights governance and a Responsible AI Board, but it does not disclose how client consent and notifications will be operationalised matter‑by‑matter. Clients and regulators increasingly expect clarity here.
  • Depth of technical safeguards: High‑level controls are described, but enterprise security depends on low‑level details: whether Harvey will be deployed in a dedicated tenant, the contractual limits on model training, and how Copilot’s data access is scoped. Those details matter when handling privileged materials.
  • Overreliance on vendor roadmaps: The combined Copilot+Harvey strategy leans on two major providers and their cloud partners. The firm must maintain vendor contingency strategies and portability to avoid lock‑in risks.
  • Ongoing metricisation of outcomes: Counting prompts and agent tasks is useful, but firms must also measure outcome quality, client satisfaction, and error rates from AI outputs. Without quality metrics, adoption risks prioritising volume over value.
Where claims or vendor statements cannot be independently validated (for example, internal pilot details or precise legal quality improvements), firms should be transparent with clients and regulators that early evidence is encouraging but not definitive.

What this means for clients and the market​

Clients can expect faster, more consistent responses for routine legal tasks, plus potential new service models that blend lawyer oversight with AI‑assisted drafting and review. That said, clients should ask firms how AI is used on their matters, what protections exist for privilege and confidentiality, and whether they can opt out or require additional contractual safeguards.
For the legal market, Burges Salmon’s announcement is a marker of maturation. Many early adopters experimented with pilots in 2023–2024; by 2026 the shift from pilots to governed platforms is clear. The firms that combine strong enterprise guardrails with targeted legal tools and human oversight will set the standard for credible, defensible AI adoption.

Conclusion​

Burges Salmon’s next phase of the Digital Enablement Programme represents a pragmatic path into generative AI for legal practice: build a secure, enterprise Copilot foundation; use specialist vendors like Harvey where legal domain expertise matters; and embed governance, training and measurement through a Responsible AI Board and a Digital Champion Network. The firm’s reported traction — large numbers of Copilot prompts and completed agent tasks — demonstrates genuine user adoption, not mere pilot activity.
That said, technology alone is not the answer. The real test will be how Burges Salmon operationalises client consent, data segregation, contract protections and quality assurance, and how it measures legal outcomes beyond usage statistics. Done well, the model it describes could become a template for responsible AI adoption in the profession: secure foundations, purpose‑built legal tooling, rigorous governance, and lawyers retained as the final decision makers. Done poorly, firms risk client trust and regulatory pushback.
For law firms preparing their own AI journeys, the lesson is simple but firm: start with enterprise controls, pilot purpose‑built solutions where domain knowledge matters, insist on lawyer oversight, and institutionalise governance and measurement from day one.

Source: Burges Salmon, “Burges Salmon advances DEP journey with trusted, responsible AI powered by Microsoft Copilot and Harvey”
 

Burges Salmon’s move to pair a firm‑wide Microsoft Copilot foundation with targeted Harvey deployments marks a decisive, pragmatic step in how modern law firms are adopting AI: not as a one‑size‑fits‑all tool, but as a layered platform where general productivity copilots and specialised legal LLMs do different jobs. The firm says around 1,300 people are already enabled on Copilot, with roughly 700,000 prompts issued and some 4,000 agent tasks completed — metrics that show sustained operational use rather than a throwaway pilot. This next phase, governed by a newly established Responsible AI Board and led through practice‑led use cases, is an instructive template for other firms weighing productivity, risk and client confidentiality in equal measure.

Professionals in a boardroom review the Agent Catalogue amid a Responsible AI Board display.

Background / Overview​

Burges Salmon launched its Digital Enablement Programme (DEP) to modernise legal service delivery by combining GenAI, data, processes and technology into a coherent roadmap. The programme began by embedding Microsoft Copilot across the firm as a broad productivity layer, and is now evolving to include Harvey, a legal‑specialist generative AI platform, for matter‑specific workflows. The firm’s public statements stress a staged, evidence‑driven rollout: Copilot provides the firm‑wide foundation and behaviour change capability, while Harvey supports deeper, practice‑specific tasks — for example, early‑stage analysis and structured first drafts in Real Estate matters.
This is notable for three reasons:
  • It recognises the difference between enterprise productivity AI and domain‑tuned legal models.
  • It emphasises governance and people‑centred change rather than technology procurement alone.
  • It signals that leading firms expect complementary toolsets instead of betting exclusively on a single vendor.

Why this matters: the technical and business case​

Law firms are primarily knowledge businesses where the twin pressures of cost control and client expectations drive automation. Copilot brings direct productivity gains by enabling lawyers to summarise meetings, draft and edit text in Word, make sense of emails, and automate administrative tasks across Outlook, Teams and SharePoint. A domain specialist like Harvey, in contrast, is architected to understand legal language, precedent patterns, and the particular needs of legal research and drafting.
The layered architecture Burges Salmon describes — Copilot as a broad base and Harvey as a matter‑specific overlay — maps neatly to practical priorities:
  • Use Copilot for firmwide productivity, information retrieval across Microsoft 365, and agent‑based automations that reduce routine labour.
  • Use a legal‑specific model to accelerate early legal analysis, structure first drafts, and standardise outputs that require deeper domain knowledge and legal accuracy controls.
From a business perspective, this reduces the risk of over‑reliance on any single AI, opens opportunities for vendor competition where it matters (research/drafting vs. productivity), and helps the firm instrument value: measuring prompts, agent tasks, time saved, and quality improvements.

What Harvey brings — and what it doesn’t​

Harvey is positioned in the market as a legal‑first AI platform. Its product emphasis is on legal research, drafting assistance, document review and matter‑centric analysis — functions that require access to structured legal knowledge, citation handling, and domain‑aware prompts. Firms that trialled Harvey reportedly saw benefits in accelerating early‑stage analysis and improving the structure of first drafts, outcomes that align with Harvey’s product claims.
That said, commercial legal AI platforms are not magic bullets:
  • They still require lawyer oversight: hallucinations, mistaken citations, or jurisdictional errors remain risks.
  • Integration and data governance are not trivial: connecting to matter data, document management systems and DMS metadata calls for secure APIs and robust access controls.
  • Performance varies by task: Harvey may outperform Copilot on legal drafting but Copilot likely remains faster for calendar or inbox summarisation.
In short, Harvey can materially raise throughput for complex legal workflows — when deployed with appropriate guardrails.

Strengths of a specialist legal model​

  • Domain‑aware responses and legal phrasing that map to practitioner expectations.
  • Purpose‑built agent templates for common legal tasks (e.g., due diligence checklists, clause extraction).
  • Vendor attention to confidentiality, auditability and enterprise controls tailored to law firms.

What to temper expectations with​

  • Specialist models still produce errors; verification by qualified lawyers remains mandatory.
  • Providers vary in how they handle data retention, training usage and client confidentiality — each must be contractually and technically validated.
  • Adoption costs include change management, training and process redesign, not just subscription fees.

Governance: Responsible AI Board and SRA expectations​

Burges Salmon has created a Responsible AI Board to embed governance across policy, processes and training. That governance construct is essential: UK regulators have signalled that firms remain accountable for outputs even when third‑party AI tools are used. Regulatory guidance emphasises outcomes and client protection rather than prescribing specific technologies — but it is explicit that firms must maintain professional standards, manage risks such as hallucinations, maintain client confidentiality and run appropriate quality assurance.
Key governance functions the firm will need to perform include:
  • Risk and impact assessments for each AI use case.
  • Mapping data flows and ensuring no unauthorised external exposure of client material.
  • Defining supervision regimes and sign‑offs for AI‑generated outputs.
  • Maintaining auditable logs of prompts, outputs and review steps.
  • Training and competence programmes for lawyers and support staff.
These are not optional niceties; they are required operational foundations if the firm wants to scale AI responsibly while remaining within regulatory expectations.

Operationalising the DEP: agents, champions and change management​

Burges Salmon’s approach includes three operational pillars that are replicable and practical:
  • A Digital Champion Network to accelerate peer learning and best practice spread.
  • Controlled ‘agent’ rollouts for research and analysis tasks, tracked via metrics (e.g., 4,000 agent tasks completed).
  • Practice‑led pilots where teams own use cases, measure outcomes, and feed lessons into wider deployment plans.
This pattern — champion + agent + practice ownership — reduces the classic failure mode where central IT buys a tool but the practice never integrates it into daily work. The emphasis on behaviour change is important: tools only deliver value when they alter workflows, not when they sit unused.

Practical rollout checklist (inspired by Burges Salmon’s DEP)​

  • Identify 3–5 high‑value pilot use cases in a single practice area.
  • Appoint practice sponsors and Digital Champions.
  • Run controlled pilots, capture metrics (time, error rate, quality), and surface edge cases.
  • Establish review gates and expand only when measurable value and risk controls are proven.
  • Scale via templates, training modules and monitored agent catalogues.

Security, data protection and client confidentiality​

Legal work is highly confidential. Any adoption of cloud AI must be accompanied by technical and contractual measures:
  • Ensure the AI vendor supports enterprise‑grade controls: tenant isolation, encryption at rest and in transit, and robust access controls.
  • Confirm data handling: does the vendor use client data for model training? If so, is it opt‑in and fully anonymised?
  • Audit and logging: keep immutable records of prompts, outputs and who reviewed them.
  • Cross‑border data transfers: be explicit about where data is hosted and whether international transfers implicate regulatory rules.
  • Integration security: connectors to document management systems and SharePoint must be hardened and limited to the minimum required scope.
Failure to harden these areas exposes the firm and clients to confidentiality breaches, regulator scrutiny and potential malpractice claims.
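The screening principle behind these controls, checking content before it leaves the firm's boundary, can be illustrated with a pre‑upload check. The patterns below are hypothetical examples only; production DLP should use platform policies (such as Microsoft Purview DLP) rather than ad‑hoc rules, and real rule sets would be far broader.

```python
import re

# Hypothetical patterns for illustration only: a UK National Insurance
# number shape and an invented internal client-reference format.
BLOCKED_PATTERNS = {
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "client_ref": re.compile(r"\bCLIENT-\d{5}\b"),
}

def screen_upload(text):
    """Return the names of rules the text trips; empty list if clean.

    A non-empty result would block the upload or route it for review,
    mirroring the 'block or flag' behaviour of a DLP policy.
    """
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(text)]
```

The value of even a crude check like this is that it sits in front of every connector, so no single integration can bypass the policy.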

Vendor selection and commercial considerations​

Selecting Harvey as a complementary tool to Microsoft Copilot illustrates a pragmatic vendor strategy: match tool capabilities to workflow needs. But procurement should go beyond feature checklists:
  • Negotiate clauses around data usage, model training, retention, and deletion.
  • Define SLAs that cover uptime, incident response and security breach notifications.
  • Demand transparency on model provenance and update cadence.
  • Insist on professional indemnity considerations and indemnities tied to data misuse.
  • Plan for exit: ensure data export and transition support if you change vendors.
A commercial negotiation informed by legal and IT security teams reduces future surprises and ensures the firm retains control over client data.

Measuring value: metrics that matter​

Burges Salmon’s citing of prompts, users and agent tasks is a good start, but harder metrics drive better decisions. Useful measures include:
  • Time to first draft: average minutes saved per matter compared to baseline.
  • Review delta: proportion of AI output requiring substantive legal rework.
  • Error rate: frequency of hallucinations or inaccurate citations detected during QA.
  • Client satisfaction: measured change in perceived responsiveness and quality.
  • Cost per matter: operational cost reduction versus baseline.
  • Adoption velocity: number of active users and retained activity across months.
Well‑designed metrics tie AI adoption to commercial and quality objectives — the only way to justify scaling beyond pilots.
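Two of these measures, review delta and error rate, reduce to simple arithmetic over QA records. A minimal sketch, assuming hypothetical record keys `reworked` (the output needed substantive legal rework) and `errors` (hallucinations or bad citations found in QA):

```python
def pilot_metrics(reviews):
    """Compute review delta and error rate from pilot QA records.

    `reviews` is a list of dicts, one per AI-assisted output, with
    keys `reworked` (bool) and `errors` (int count found during QA).
    """
    n = len(reviews)
    if n == 0:
        return {"review_delta": 0.0, "error_rate": 0.0}
    reworked = sum(1 for r in reviews if r["reworked"])
    errors = sum(r["errors"] for r in reviews)
    return {
        "review_delta": reworked / n,  # share of outputs needing rework
        "error_rate": errors / n,      # mean errors per output
    }
```

Tracked per practice area and per tool, these two numbers distinguish genuine quality from mere prompt volume.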

Risks and mitigation: a practical lens​

No technology is risk‑free. Key risks and recommended mitigations:
  • Hallucinations and incorrect legal statements
  • Mitigation: mandatory human review, red‑lining processes, and “no‑submit” rules without lawyer sign‑off.
  • Client confidentiality breaches
  • Mitigation: strict data minimisation, encrypted connectors, contractual limits on vendor data usage.
  • Regulatory exposure and professional conduct breaches
  • Mitigation: oversight by the COLP (or equivalent), explicit regulatory risk assessments, and documented governance.
  • Vendor lock‑in and dependency
  • Mitigation: multi‑vendor strategy for critical workflows, exportable data formats, and porting playbooks.
  • Over‑automation of legal judgement
  • Mitigation: keep AI focused on augmentation tasks and define clear boundaries where human judgment is required.
  • Operational complexity from many agents
  • Mitigation: create an “Agent Catalogue” with lifecycle management, performance monitoring and de‑registration policies.
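An “Agent Catalogue” of this kind implies a registry with explicit lifecycle states and allowed transitions. A minimal sketch, with hypothetical states (pilot, production, deregistered) and a hypothetical transition policy; a real catalogue would also carry performance monitoring and review dates.

```python
from dataclasses import dataclass
from enum import Enum

class AgentState(Enum):
    PILOT = "pilot"
    PRODUCTION = "production"
    DEREGISTERED = "deregistered"

@dataclass
class AgentRecord:
    name: str
    owner: str
    state: AgentState = AgentState.PILOT

class AgentCatalogue:
    """Minimal registry enforcing explicit lifecycle transitions."""

    # Allowed transitions: pilots can be promoted or retired;
    # production agents can only be retired; retired is terminal.
    _ALLOWED = {
        AgentState.PILOT: {AgentState.PRODUCTION, AgentState.DEREGISTERED},
        AgentState.PRODUCTION: {AgentState.DEREGISTERED},
        AgentState.DEREGISTERED: set(),
    }

    def __init__(self):
        self._agents = {}

    def register(self, name, owner):
        self._agents[name] = AgentRecord(name, owner)

    def transition(self, name, new_state):
        rec = self._agents[name]
        if new_state not in self._ALLOWED[rec.state]:
            raise ValueError(
                f"{rec.state.value} -> {new_state.value} not allowed"
            )
        rec.state = new_state
```

Refusing undeclared transitions is what turns an inventory spreadsheet into lifecycle management: an agent cannot quietly reach production, and deregistration is final.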

For clients and market-facing services​

Burges Salmon’s DEP also aims to deliver client value by using deployment insights to help clients adopt AI responsibly. This is an important commercial lever:
  • Firms that internalise AI best practice can advise clients credibly on their own AI journeys.
  • Demonstrable governance and auditability become buyer assurances for corporate legal teams.
  • Firms can offer managed AI‑enabled services (e.g., faster due diligence reports) as differentiated products.
The strategic upside: helping clients adopt AI responsibly becomes both a product and a market signal of competence.

Practical recommendations for law firms contemplating a similar path​

  • Start with a clear problem statement: define which tasks must change and why.
  • Use a layered architecture: enterprise productivity (Copilot) + domain specialists (Harvey or similar).
  • Build governance first: create a Responsible AI Board or equivalent that includes practice leads, Ethics/Compliance, IT, and a legal technologist.
  • Pilot with measurable objectives and a defined exit strategy.
  • Train for behaviour change: invest in Digital Champions and scenario‑based learning, not just slide decks.
  • Contract tightly with vendors about data use, training, retention and breach response.
  • Instrument for audit: capture prompts, outputs and reviewer notes for every matter where AI assists outcome decisions.
  • Keep the human in the loop: preserve lawyer accountability and make AI outputs a starting point, not a final product.

The long view: why a mixed model is likely to dominate legal AI​

Burges Salmon’s combination of Copilot and Harvey highlights an industry reality: the legal sector will not standardise on a single AI type anytime soon. Instead, expect to see:
  • Generalist copilots embedded in office suites to handle productivity and knowledge retrieval.
  • Domain‑specialist models for research, drafting and complex matter analysis.
  • An expanding agent ecosystem that automates repetitive research and reporting tasks.
  • Heightened governance frameworks driven by regulators and professional bodies.
Firms that design an architecture to manage all three — productivity, specialisation and governance — will be best positioned to control risk while unlocking value.

Final analysis: strengths, caveats and a balanced verdict​

Burges Salmon’s DEP demonstrates several strengths that other firms should study:
  • A staged, evidence‑based rollout that starts with a firm‑wide productivity layer and moves to specialised models for legal work.
  • A focus on real‑world, practice‑led use cases that produce measurable behaviour change.
  • Strong governance via a Responsible AI Board and integration of compliance and technology functions.
But caution is necessary:
  • The technology’s limitations remain material: legal accuracy, hallucinations, and jurisdictional nuance require ongoing human oversight.
  • Commercial and security diligence is essential to avoid vendor lock‑in and data exposure.
  • Scaling requires investment in people, process redesign and audit mechanisms — costs that must be weighed against projected savings.
Overall, this is a mature, practical path to AI adoption: one that recognises both the power of modern generative AI and the continuing duty of legal professionals to deliver accurate, confidential and high‑quality legal services. Burges Salmon’s DEP — with Copilot as the foundation and Harvey for specialised legal workflows — is not merely a technology play. It is a change programme that could realistically shift where and how lawyers spend their time, while preserving the professional standards clients expect.
The lesson for the market is simple: successful AI in legal services will be earned through governance, practice ownership, and incremental value—never by skipping the hard work of integration, training and risk management. Conscientious firms that follow that blueprint will likely be the ones to convert AI experimentation into durable client value.

Source: Solicitors Journal https://www.solicitorsjournal.com/s...apabilities-for-legal-services/?category=none
 
