Governance First AI Platform Drives Scale: Access Evo and ISO 42001

When a major UK software vendor set out to accelerate its generational shift into AI, it did something many organisations still talk about but few execute: it put governance first and used it to scale. The Access Group’s Access Evo platform — built on Azure API Management, Azure AI Search and orchestration tooling — packages governance as a repeatable developer foundation rather than an afterthought, enabling more than 50 AI-enhanced product launches in a year and ISO 42001 certification while supporting millions of users.

Figure: GenAI Gateway for Azure API Management links AI Copilot, data store, telemetry, and user metrics.

Background​

The Access Group is a large, UK-headquartered business management software company that serves tens of thousands of organisations across verticals such as care, hospitality, recruitment and finance. As it moved from bespoke, application-by-application AI experiments to a company-wide strategy, Access created Access Evo — a governed AI platform designed to centralize routing, observability, cost controls, and access enforcement for all AI interactions. The platform’s control plane is built around Azure API Management, which the company uses as a GenAI Gateway to enforce uniform policies and telemetry across product lines. This isn’t a marketing construct: Microsoft’s customer story documents the concrete outcomes Access reported — more than 50 AI-powered products in the first year on the platform, ISO/IEC 42001 certification (the international AI management system standard), and growth to 2.2 million users with stated ambitions to scale further. These claims reflect an operational pattern increasingly visible in high-maturity Azure deployments: centralize the governance primitives and let product teams build on top.

Overview: Governance as a growth engine​

Access Evo reframes governance from a compliance chore into an engineering productivity layer. The GenAI Gateway implemented in Azure API Management standardizes:
  • Routing of model requests and multi-model selection logic.
  • Observability — traces, telemetry and latency metrics.
  • Cost controls and token-level usage tracking for chargeback.
  • Access enforcement — permissions, role checks and data scoping.
  • Semantic caching and efficient retrieval to lower latency and cost.
That combination gave Access a repeatable template for developers. Instead of each product team building guardrails, observability pipelines and quota controls, the organisation built them once and exposed them as platform primitives. The result: three-fold benefits — speed, predictability, and traceability — which in turn made certification (ISO 42001) achievable and meaningful.
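To make the "platform primitives" idea concrete, here is a minimal Python sketch of a gateway that performs an access check, calls a model, and records token usage and latency for chargeback. All class names, fields and the crude token estimate are illustrative assumptions, not Access's actual implementation:

```python
import time
from dataclasses import dataclass


@dataclass
class GatewayRequest:
    user: str
    roles: set      # roles asserted by the identity provider
    product: str    # product/feature for cost attribution
    prompt: str


@dataclass
class UsageRecord:
    product: str
    tokens: int
    latency_ms: float


class GenAIGateway:
    """Toy control plane: access enforcement, model call, usage telemetry."""

    def __init__(self, allowed_roles: set):
        self.allowed_roles = allowed_roles
        self.usage = []  # FinOps records, one per handled request

    def handle(self, req: GatewayRequest) -> str:
        if not (req.roles & self.allowed_roles):  # access enforcement first
            raise PermissionError(f"{req.user} lacks an allowed role")
        start = time.perf_counter()
        answer = self._call_model(req.prompt)     # routing would pick a model here
        latency_ms = (time.perf_counter() - start) * 1000
        tokens = len(req.prompt.split()) + len(answer.split())  # crude estimate
        self.usage.append(UsageRecord(req.product, tokens, latency_ms))
        return answer

    def _call_model(self, prompt: str) -> str:
        return f"echo: {prompt}"  # stand-in for a real model endpoint
```

Because every request passes through `handle`, the permission check and the usage record cannot be skipped by an individual product team — which is the essence of the single-control-plane pattern.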

Why API Management matters for GenAI at scale​

API Management is a familiar component in enterprise architecture, but its role in generative AI platforms is evolving. Instead of just exposing REST endpoints or applying simple quotas, the gateway now:
  • Acts as the authoritative gate for model access and tool invocation.
  • Provides policy enforcement surfaces (e.g., latency, cost, moderation).
  • Enables detailed telemetry and audit trails for each AI interaction.
  • Facilitates cost attribution down to teams or features, feeding FinOps.
These patterns mirror broader industry guidance around agent and AI management: identity primitives (Entra), data governance (Purview), network isolation, and telemetry must all line up with the gateway to create an auditable, operable AI surface. Forum and architectural analyses of Azure-native AI projects reinforce that multi-layered control planes — identity + API governance + observability — are the practical path to safe, large-scale AI adoption.
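Gateway-level quotas of the kind listed above can be approximated in a few lines. The sketch below implements a per-caller sliding-window token budget; the 60-second window and the limits are arbitrary, and a real deployment would lean on the gateway's built-in token-limit policies rather than application code:

```python
import time
from collections import defaultdict


class TokenQuota:
    """Per-caller sliding-window token budget (illustrative numbers)."""

    def __init__(self, tokens_per_minute: int):
        self.limit = tokens_per_minute
        self.windows = defaultdict(list)  # caller -> [(timestamp, tokens)]

    def allow(self, caller: str, tokens: int, now=None) -> bool:
        """Return True and record usage if the request fits the budget."""
        now = time.time() if now is None else now
        # Keep only usage from the last 60 seconds.
        window = [(t, n) for t, n in self.windows[caller] if now - t < 60]
        used = sum(n for _, n in window)
        if used + tokens > self.limit:
            self.windows[caller] = window
            return False
        window.append((now, tokens))
        self.windows[caller] = window
        return True
```

Attaching this check at the gateway, keyed by team or feature, is what makes cost attribution and chargeback enforceable rather than advisory.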

Technical anatomy of Access Evo (what we can infer)​

Access’s published narrative names several components explicitly and implies others through the patterns it describes. The key elements are:
  • GenAI Gateway (Azure API Management) — centralized routing and policy enforcement for every AI call. This enforces latency, cost and access rules before requests reach model endpoints.
  • Model orchestration and reasoning layer — Semantic Kernel and orchestration code that composes retrieval + generation flows and invokes the appropriate model depending on cost, fidelity and context.
  • Retrieval layer (Azure AI Search / vector search) — indexed corporate documents and metadata feed retrieval-augmented-generation (RAG) flows, ensuring answers are grounded in trusted sources and subject to permission scoping.
  • Data stores (Azure SQL Database and others) — structured business data (finance, HR, sales) is queried directly for analytics copilot scenarios. Azure SQL Database is explicitly mentioned for Analytics Copilot.
  • Centralized telemetry and FinOps — token-level usage, latency metrics and cost dashboards to track spend by product and charge back appropriately.
This architecture follows established production patterns for enterprise assistants and agentic systems on Azure: preserve delegated authorization, route through a single control plane, and keep tracing and auditability in the critical path. Independent architectural writeups of enterprise Azure AI projects show the same building blocks — Foundry, Entra identity patterns, and RAG with vector stores — all integrated into a governance-first pipeline.
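A permission-scoped RAG flow of the kind implied here can be sketched in plain Python. The ACL filter and naive term-overlap ranking below stand in for Azure AI Search's security trimming and vector retrieval; all names are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_roles: set  # ACL attached to the indexed document


def retrieve(query: str, docs, user_roles: set, k: int = 2):
    """Filter by permissions first, then rank by naive term overlap."""
    visible = [d for d in docs if d.allowed_roles & user_roles]
    terms = set(query.lower().split())
    ranked = sorted(visible,
                    key=lambda d: -len(terms & set(d.text.lower().split())))
    return ranked[:k]


def answer_with_provenance(query: str, docs, user_roles: set):
    """Ground the answer in retrieved text and return source IDs with it."""
    hits = retrieve(query, docs, user_roles)
    context = " | ".join(d.text for d in hits)
    return context, [d.doc_id for d in hits]
```

The ordering matters: scoping happens before ranking, so a document the user may not see never reaches the model context, and provenance travels with every answer for later audit.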

Two practical examples: Docs and Analytics copilots​

Access highlights two flagship experiences that show how governance and productivity combine:

Access Evo Docs (Document and Policy Copilot)​

Built on Access’s secure document storage and Azure AI Search, Docs lets employees find policies, procedures and HR information instantly through conversational queries. The assistant respects permission boundaries, returns grounded answers from indexed documents, and reduces the friction of hunting through shared drives. Customer anecdotes show a shift from people emailing HR to self-serve interactions, speeding outcomes and reducing bottlenecks.

Access Evo Analytics (Analytics Copilot)​

Analytics Copilot queries structured datasets (sitting in Azure SQL Database and other systems), generates charts, and answers successive follow-ups in natural language. It’s explicitly aimed at non-BI users — store managers, HR leaders — who can now ask conversational questions like “Show me revenue by product for last quarter” and receive instant visualizations. The agent continues the conversation, enabling drill-downs without requiring technical BI training.

Both experiences route through the GenAI Gateway, meaning the same performance SLAs, cost controls and observability apply to these user-facing agents as to any other AI-powered feature. That single-control-plane model simplifies auditing and risk management.
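One way to keep access rules authoritative in an analytics copilot is to scope the data server-side before any generated query runs. The sketch below uses an in-memory SQLite table as a stand-in for Azure SQL Database; the schema, data and function names are invented for illustration:

```python
import sqlite3


def setup_demo_db() -> sqlite3.Connection:
    """In-memory stand-in for a structured store such as Azure SQL Database."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, product TEXT, revenue REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
        ("uk", "payroll", 120.0),
        ("uk", "hr", 80.0),
        ("eu", "payroll", 50.0),
    ])
    return conn


def revenue_by_product(conn, user_region: str) -> dict:
    """Answer 'show me revenue by product' scoped to the caller's region.
    The scoping predicate is applied as a bound parameter server-side,
    never trusted to model-generated text."""
    rows = conn.execute(
        "SELECT product, SUM(revenue) FROM sales WHERE region = ? "
        "GROUP BY product ORDER BY product",
        (user_region,),
    ).fetchall()
    return dict(rows)
```

Whatever SQL the copilot generates, the row-level scope comes from the caller's identity, which mirrors the "access enforcement at the gateway" principle described above.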

ISO/IEC 42001: what certification means in practice​

ISO/IEC 42001 is the international management-system standard for AI — essentially the AI equivalent of ISO 9001-style management frameworks adapted to AI risk and governance. It specifies requirements for establishing, implementing, maintaining and improving an Artificial Intelligence Management System (AIMS), with an emphasis on transparency, risk assessment, continuous improvement and accountability. For a vendor like Access, achieving ISO 42001 certification signals:
  • A documented, auditable management system for AI lifecycle governance.
  • Management commitment, defined policies, and structured risk assessments.
  • Operational controls for data provenance, model validation and monitoring.
  • Processes for continual improvement and internal audits.
Certification is not a one-click badge: it requires documented procedures, evidence of operational controls, and surveillance audits. Organisations using ISO 42001-aligned controls are better positioned to satisfy regulatory and procurement requirements, particularly in regulated sectors such as healthcare and public services. Independent certification bodies and advisory pages explain that the process often involves staged audits, surveillance and ongoing evidence of effective control operation. Caveat: the standard shows that a company has processes in place; it does not guarantee perfect outputs or eliminate AI-specific risks such as hallucination or data leakage. Certification reduces risk and raises the bar, but it should be considered one part of a comprehensive governance program.

What Access achieved — measurable outcomes and claims​

According to the Microsoft customer story, Access’s measurable outcomes include:
  • Launching more than 50 AI-enhanced products in one year across its product portfolio.
  • Achieving ISO/IEC 42001 certification, demonstrating a formal, auditable AI management system.
  • Scaling Access Evo to 2.2 million users, with stated plans to reach 5 million worldwide.
Microsoft’s write-up frames these numbers as the result of embedding governance into the development foundation.
Independent verification: the ISO standard and certification process are documented on ISO’s site, while Access’s own public pages and press/social posts corroborate claims about the Access Evo rebranding and ISO 42001 certification. These back up the central narrative even as specific user counts or growth targets remain company-declared projections. Caveat: while certification and product counts can be validated through vendor communications and ISO’s published standard, some operational metrics (e.g., internal cost-per-token breakdowns, exact telemetry data) are proprietary and not independently audited in public sources. Treat company-stated future targets (like 5 million users) as ambitions, not audited historical facts.

Critical analysis — strengths, trade-offs, and risks​

No enterprise rollout is risk-free. Access Evo’s approach is solid, but implementers and customers should weigh strengths against potential blind spots.

Strengths​

  • Repeatability and developer velocity. A single gateway and policy model reduces duplicated work and speeds delivery. Product teams can ship with governance embedded instead of retrofitting controls.
  • Operational observability and FinOps. Token-level tracking and unified telemetry let organisations monitor performance, attribute cost, and control model spend proactively — a practical and often neglected discipline in AI at scale.
  • Standards-based assurance. ISO 42001 certification gives a documented management framework and a procurement playbook for regulated customers — which matters in healthcare, public sector and finance.
  • User-facing impact. Grounded copilots for policies and analytics reduce friction for non-technical staff, improving adoption and business value quickly.

Trade-offs and design decisions to watch​

  • Centralization vs. autonomy. A single gateway can become a bottleneck unless it is architected for scale and redundancy. Organisations must design capacity, rate limits and failover carefully.
  • Model routing complexity. Multi-model strategies (cheap models for routine tasks, higher-cost models for corner cases) require dynamic routing logic and ongoing evaluation to avoid cost surprises.
  • Third-party dependency. Heavy reliance on platform primitives (Azure API Management, Azure AI Search, Semantic Kernel) reduces implementation friction but increases platform lock-in risk — a commercial as well as technical consideration.
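A multi-model routing policy of the kind flagged above can start as a simple rule table. The model names, word-count threshold and budget tiers below are purely illustrative; production routers would add continuous evaluation of cost and output quality to avoid the "cost surprises" noted in the text:

```python
def route_model(prompt: str, budget_tier: str) -> str:
    """Choose a model endpoint from rough task complexity and caller budget.
    Thresholds and model names are invented for illustration."""
    complex_task = len(prompt.split()) > 40 or "analyse" in prompt.lower()
    if complex_task and budget_tier == "premium":
        return "large-reasoning-model"   # highest cost, hardest tasks only
    if complex_task:
        return "mid-tier-model"          # capped spend for complex requests
    return "small-fast-model"            # routine traffic stays cheap
```

Even this toy version shows why routing needs ongoing evaluation: the heuristic deciding "complex" is itself a policy that drifts as prompts and models change.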

Risks and hazards​

  • Data leakage and privacy: Even with gateway controls, retrieval and model outputs must be carefully scoped. Policies should forbid accidental context leakage, and telemetry must capture provenance.
  • Hallucinations and misuse: Grounding with RAG and document indexing reduces hallucinations but does not eliminate them. Human-in-the-loop gating for high-risk outputs remains essential.
  • Operational sprawl and governance drift: As more product teams adopt the platform, governance rules must be enforced centrally; otherwise policy drift and “shadow AI” can re-emerge. Establish an AI governance board and controls-as-code to keep the platform consistent.
  • Audit and compliance complexity: Certification is a helpful signal, but customers and regulators will still expect evidence from production telemetry and testing — not just policy documents. Maintain immutable logs, red-team results and release artifacts to pass audits.

Practical guidance for Windows and Azure practitioners​

For IT leaders and platform engineers looking to apply these lessons, a pragmatic checklist distilled from Access Evo and broader Azure patterns:
  • Define the control plane
      ◦ Standardize on a gateway (Azure API Management or equivalent) for model and tool access.
      ◦ Implement policy templates (latency, retry, cost, moderation) as reusable assets.
  • Enforce delegated authorization
      ◦ Use Azure AD/Entra delegated access and preserve least-privilege semantics for document retrieval and database queries.
  • Ground outputs with RAG
      ◦ Index authoritative corpora with Azure AI Search or a vector store, and record provenance links in responses.
  • Build FinOps and telemetry in from day one
      ◦ Tag requests to attribute costs, export traces (OpenTelemetry), and create dashboards for developers and finance owners.
  • Automate compliance evidence
      ◦ Store prompts, model versions, tool calls and action logs in an immutable store to satisfy audits and incident investigations.
  • Pilot, red-team, then scale
      ◦ Run narrow, measurable pilots; conduct adversarial testing; add human-in-the-loop thresholds before granting full write/transaction permissions.
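The "immutable store" step can be prototyped with a hash chain: each log entry commits to its predecessor, so any after-the-fact edit is detectable at audit time. This is a sketch under obvious simplifying assumptions, not a substitute for a managed append-only store:

```python
import hashlib
import json


class AuditLog:
    """Append-only, hash-chained log of AI interactions (illustrative)."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def record(self, event: dict) -> str:
        """Append an event (prompt, model version, tool calls, ...)."""
        payload = json.dumps({"prev": self._last_hash, "event": event},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"hash": digest, "event": event})
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry breaks verification."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps({"prev": prev, "event": entry["event"]},
                                 sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Recording the chain head externally (e.g., in a separate system of record) is what turns "detectable" tampering into "provable" integrity during an audit.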

Where governance can go next: agentic ecosystems and API governance​

As organisations move from single assistants to fleets of agents and agentic automation, the role of API management will broaden. Emerging patterns include:
  • Catalog-driven tool and skill discovery for agents.
  • Fine-grained per-agent identity, lifecycle and permissioning.
  • Event-native gateways that handle asynchronous workflows and agent-to-agent choreography.
  • Policy layers that span both synchronous API calls and event streams.
Access Evo’s approach — treating governance as an enabler rather than a drag — aligns with this direction. The practical challenge is operational: instrumenting and governing hundreds of agents without creating unmanageable complexity is non-trivial and requires investment in platform engineering and governance tooling.

Conclusion​

Access’s Access Evo story is a modern enterprise playbook: embed governance at the infrastructure level, make it reusable, and let product teams build on a predictable, observable foundation. The result is not just lower risk, but faster delivery — 50+ AI features in a year and ISO 42001 certification are concrete outcomes that show governance can be a growth engine rather than a brake.
That said, certification and platformization do not remove operational responsibility. Hallucinations, data privacy, and sprawl remain real hazards that demand continuous monitoring, red-teaming, and process discipline. For organisations planning their own journey, the clear takeaway is pragmatic: invest in a single, well-instrumented control plane, align identity and data governance, bake FinOps into the stack, and treat standards like ISO 42001 as a framework for continuous improvement — not a final destination. By shifting governance from bolt-on to built-in, Access demonstrates a replicable model: trust becomes infrastructure, and infrastructure becomes speed.

Source: Microsoft The Access Group scales AI with trust built in, using Azure API Management | Microsoft Customer Stories
 
