EMMA AI: Mott MacDonald Governance-First AI for Infrastructure

  • Thread Author
Mott MacDonald is embedding artificial intelligence into the very fabric of civil engineering work—turning blueprints into databacked decisions, creating an enterprise assistant that indexes decades of technical know‑how, and using computer vision and language models to make infrastructure safer, cheaper and faster to design, build and maintain.

Background​

Mott MacDonald is a global, employee‑owned engineering and consultancy firm whose work underpins water systems, transport networks and energy infrastructure worldwide. The firm has positioned AI not as a fringe experiment but as a core capability layered over its existing technical practices, with an ambition to improve operational safety, accelerate decisions and preserve institutional knowledge.
This transformation rests on a strategic partnership with Microsoft and a concentrated investment in Microsoft Azure technologies—most notably Azure AI Foundry, Azure AI Search, and Microsoft 365 Copilot—combined with an in‑house enterprise assistant known as EMMA (Every Mott MacDonald Answer). The initiative covers both field‑facing engineering use cases (computer vision for asset inspection, flood and water‑quality modelling) and internal productivity (searchable knowledge, consent management and compliance automation).

Why this matters: AI meets infrastructure engineering​

Infrastructure projects produce enormous volumes of operational and project data: sensor readings, inspection imagery, design standards, stakeholder responses and historical lessons learned. Turning that data into reliable answers is the central productivity and safety problem Mott MacDonald is addressing.
  • AI lets teams surface relevant standards, subject‑matter experts and previous project precedents in seconds rather than hours.
  • Computer vision automates routine inspection tasks and prioritises high‑risk defects for human engineers.
  • Language models accelerate stakeholder engagement processes by classifying and explaining thousands of public responses during major projects.
Those real‑world wins convert directly into reduced cost, faster delivery and improved safety—outcomes that matter to clients and the communities who rely on infrastructure.

EMMA: an enterprise assistant built for engineering​

What EMMA does​

EMMA is Mott MacDonald’s internal AI assistant built to index and synthesise company documents, project records and policies so staff can find answers in natural language. Unlike an ungoverned consumer chatbot, EMMA is designed to surface authoritative, auditable answers tied to specific corporate sources. Employees use it to locate procedures, identify experts, check compliance points and speed up routine workflows across the business.

Architecture and platform choices​

EMMA is built on Microsoft platform primitives that preserve enterprise identity, access controls and regulatory needs:
  • Microsoft 365 (SharePoint and OneDrive) as the content layer where documents and project artefacts live.
  • Microsoft Graph for delegated access and least‑privilege authorization so EMMA only shows what the querying user is entitled to see.
  • Azure AI Search and vector‑based retrieval for semantic indexing and retrieval‑augmented generation (RAG).
  • Azure AI Foundry and the Responses API to host agents, manage model endpoints and orchestrate workflows.
  • Observability and analytics stored in enterprise databases (for example, Azure Database for PostgreSQL) to monitor usage, failure modes and hallucination rates.
Those choices let Mott MacDonald scale EMMA while remaining compliant with data residency and audit requirements.

Why governed retrieval matters​

EMMA’s designers emphasise retrieval over freeform generation: answers are grounded in retrieved evidence snippets that link back to SharePoint or OneDrive, reducing the risk of hallucination and making outputs auditable. That approach also preserves project confidentiality because access is mediated by existing Microsoft identity and access patterns rather than by a generic bridge to a public LLM.

Field applications: AI beyond paperwork​

Mott MacDonald is pairing domain expertise with AI to operate in physical spaces, not just office workflows.

Computer vision for asset health​

Vision models analyse imagery from bridges, dams and motorways to detect structural cracks, defects and other signs of deterioration. Automating the triage of inspection images reduces the need for frequent manual inspections, speeds up intervention and focuses scarce specialist time where it matters most. This capability ties into condition monitoring programmes and predictive maintenance pipelines.

Environmental and operational risk modelling​

Engineers at the firm use large datasets to model floods, predict water quality and assess interdependent climate risks for transport networks. These predictive models combine historical sensor data, hydrological models and machine learning to improve early warnings and resilience planning. Mott MacDonald has deployed such assessments in collaboration with major transport authorities to understand cascading climate impacts across networked systems.

Tree health and public safety​

A niche but vital example is using vision models to identify ash dieback and other tree diseases that make roadside trees hazardous. Early automated detection informs proactive tree management and reduces risks of collapse onto roads and railways. These kinds of targeted use cases show how narrow AI systems can deliver immediate safety value.

Language models for consent management​

Large language models are applied to classify and summarise thousands of public consultation responses during large projects (for example, new rail lines). Where teams previously read and categorised responses manually, AI now provides quick, consistent classifications plus explanations that stakeholders find easier to accept. That combination of speed and explainability improves decision timelines without sacrificing transparency.

Responsible AI: engineering standards applied to AI​

Safety and governance are non‑negotiable​

The engineering sector demands high standards of safety and traceability, and Mott MacDonald applies those expectations to AI deployment. The firm runs adversarial testing (AI red‑teaming), uses Azure AI Content Safety controls, and embeds monitoring to block unsafe outputs and detect prompt‑injection or inference failures. These operational controls are designed to mitigate risks where wrong outputs could affect technical decisions or public safety.

Practical governance over prohibition​

The organisation recognises that forbidding AI drives shadow usage. Instead, it tries to find the “sweet spot” between tight governance and practical adoption by creating safe, documented channels—like EMMA and Copilot integrations—so staff have sanctioned, auditable tools. That balance is central to maintaining oversight while unlocking productivity gains.

Explainability and auditability​

Because outputs are retrieval‑grounded and logged, Mott MacDonald can show clients and auditors the provenance of decisions—what documents informed an answer, which model produced a summary, and who validated the result. This traceability has both operational and commercial benefits; in some cases it has helped the firm win tendered work by demonstrating transparent AI governance.

Building an AI‑confident culture​

Training, adoption and citizen development​

Mott MacDonald pairs AI literacy training with practical, project‑level examples to address worker concerns about job displacement and sustainability. The company has enabled thousands of staff to become “citizen developers” using Azure‑based low‑code tools inside a governed digital workspace—an early step toward democratizing automation while keeping a central governance backstop. Reported adoption numbers indicate strong internal momentum.

Human‑in‑the‑loop workflows​

Critical engineering decisions continue to require human oversight. AI is used to augment human judgement—flagging anomalies, summarising evidence, and prioritising interventions—while subject‑matter experts retain final accountability. That model aligns with best practices for safety‑critical industries where responsibility and professional judgement cannot be outsourced to an algorithm.

Technical anatomy: how the stack fits together​

Mott MacDonald’s implementation exemplifies a pragmatic, enterprise‑grade AI architecture:
  • Data and documents remain in Microsoft 365 stores; indexing uses Azure AI Search and vectorization for semantic retrieval.
  • Microsoft Graph enforces delegated access so the assistant respects existing tenancy permissions and least‑privilege access.
  • Azure AI Foundry provides the model runtime, orchestration and catalog for routing requests to different model endpoints (balancing cost, latency and accuracy).
  • The Responses API and agent runtimes orchestrate tool invocation, evidence citation and conversation state.
  • Governance features include content safety filters, red‑teaming, OpenTelemetry traces and analytics stored in PostgreSQL for continuous monitoring.
This approach trades custom, brittle infrastructure for managed platform services that already include identity integration, compliance tooling and operational SLAs—reducing time to value for engineering teams.

Strengths: where Mott MacDonald’s approach stands out​

  • Domain‑grounded AI: EMMA and field models are tuned for engineering data and workflows, not generic chat—this increases relevance and reduces error modes common in unguided LLM deployments.
  • Governance baked in: Using Microsoft Graph, SharePoint and Azure controls preserves existing access constraints and audit trails, addressing compliance and data‑residency concerns for global projects.
  • Practical safety engineering: Red‑teaming, content safety filters and observability make the deployment maintainable and defensible in safety‑sensitive contexts.
  • Rapid internal adoption: Low‑code workspaces and Copilot integrations provide accessible entry points for non‑technical staff to participate in automation, multiplying the organisation’s innovation capacity.

Risks and cautionary points​

No AI deployment is without tradeoffs. Key risks and mitigation strategies that emerge from the implementation are:
  • Vendor lock‑in and platform dependence: Deep integration with Microsoft 365 and Azure AI Foundry can accelerate delivery but raises long‑term portability and negotiation risks. Organisations should document escape hatches and contractual commitments.
  • Hallucination and over‑trust: Even retrieval‑grounded systems can synthesise incorrect conclusions if the underlying documents are incomplete or mislabelled. Continuous evaluation, human‑in‑the‑loop signoffs and evidence citation standards are essential.
  • Data quality and metadata hygiene: Semantic search and RAG depend on clean metadata and well‑structured content. Large consulting firms must invest in content curation to avoid garbage‑in, garbage‑out scenarios.
  • Operational cost and carbon footprint: Running large models and maintaining vector indexes at scale has compute and cost implications. Architectural choices—model selection, endpoint routing and query caching—should be tuned to balance accuracy and efficiency.
  • Workforce impact and skills gaps: While automation frees engineers from repetitive tasks, it also changes work content. Sustained training and role redesign are required to capture productivity gains without displacing expertise.
Where claims about platform-scale metrics or future roadmap items appear in vendor messaging, they should be treated cautiously until confirmed through public product documentation or contractual details. Some product‑level availability and adoption numbers can vary over time, so independent verification is recommended for procurement decisions.

Practical lessons and recommendations for engineering firms​

  • Start with clear, business‑critical problems that require engineering judgement (knowledge search, inspection triage, consent classification) rather than chasing general AI hype.
  • Architect retrieval first: build reliable indexing, metadata and access controls before exposing generative models to users. Ground answers in evidence.
  • Use managed platform building blocks (identity, model catalogues, safety filters) to reduce plumbing work, but define exit strategies to manage vendor concentration risks.
  • Make governance pragmatic: restrict unsafe actions, but provide sanctioned, easy‑to‑use channels to prevent shadow IT. Combine policy with adoption and training.
  • Measure continuously: log interactions, track hallucination rates, review red‑team findings and use metrics to refine retrieval, prompts and workflows.

The bigger picture: what this signals for the industry​

Mott MacDonald’s program exemplifies a maturing pattern in which professional services and engineering firms:
  • Treat AI as an augmentation layer over institutional knowledge, not a replacement.
  • Use enterprise platform services to deliver production‑grade agents with governance and observability.
  • Combine narrow, mission‑specific models (vision for inspections, ML for hydrology) with general‑purpose assistants (Copilot, EMMA) to support different aspects of the business.
These practices suggest that AI’s highest‑value role in engineering is integrative: connecting dispersed data, amplifying domain expertise, and improving decision velocity while retaining human accountability.

Conclusion​

Mott MacDonald’s work shows how a global engineering consultancy can sensibly embed AI into safety‑critical, knowledge‑intensive operations. By building EMMA as a governed, evidence‑grounded enterprise assistant, applying computer vision to asset inspection, and using language models to scale stakeholder engagement, the firm has created a repeatable template for combining cloud platform services with professional engineering standards. Their approach balances opportunity and risk: it leverages Microsoft’s Azure AI Foundry and Microsoft 365 primitives to accelerate delivery while investing heavily in governance, red‑teaming and explainability to keep control over outcomes.
For engineering organisations weighing AI, the lesson is clear: start with the problems only your engineers can define, ground AI in curated company knowledge, and design for continuous oversight. When implemented this way, AI doesn’t replace engineering judgement—it multiplies it.

Source: Microsoft UK Stories How Mott MacDonald is using AI to engineer a smarter world