Intersys today published a freely downloadable AI in the Workplace: Governance Policy Template aimed squarely at insurers, MGAs, brokers and market service providers — a pragmatic, role-based policy pack that sets out mandatory staff training, data-redaction controls, centralised account management, and an explicit prohibition on using personal AI accounts for company data.
Source: FinTech Global, "Intersys unveils new AI governance policy for insurers"
Background
The insurance sector has rapidly adopted generative AI across underwriting, claims handling and customer service since the breakthrough of large language models in 2022, but many firms remain exposed to shadow AI and poor controls that can lead to data leakage, regulatory breaches, and operational disruption. Intersys’ template arrives as a practical control baseline that organisations can drop into internal governance frameworks while regulators and industry bodies continue to clarify how existing rules apply to AI. Large and small insurance firms are wrestling with both opportunistic and systemic risks as AI is embedded into decision-making workflows. Regulators such as the FCA have emphasised that existing rules — including the Senior Managers and Certification Regime (SM&CR), model risk-management expectations and the Consumer Duty — already apply to AI, which means insurers must adapt governance, not wait for bespoke AI-only laws.
What Intersys has released: a practical snapshot
Core components of the policy template
- 10 Commandments for AI Safety — clear behavioural rules for employees on what can and cannot be shared with AI tools.
- Company AI Policy Template — a modular policy document organisations can customise and adopt, covering permitted tools, approval processes and training requirements.
- Controls checklist — practical controls such as centralised account management, disabling contribution-to-training toggles, and mandatory redaction of sensitive fields before submission to external models.
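To make the redaction control concrete, the sketch below shows one way a pre-submission filter might work. It is illustrative only: the patterns, field names and placeholder format are assumptions, not part of the Intersys template, and a production deployment would use a vetted DLP or redaction library tuned to local data-protection rules.

```python
import re

# Hypothetical patterns for illustration; real rules would be far
# broader and jurisdiction-specific.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "POLICY_NO": re.compile(r"\bPOL-\d{8}\b"),  # assumed policy-number format
}

def redact(text: str) -> str:
    """Replace sensitive fields with typed placeholders before the
    text is submitted to any external AI model."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise claim for jane.doe@example.com, policy POL-12345678."
print(redact(prompt))
# -> Summarise claim for [EMAIL], policy [POLICY_NO].
```

A filter like this would typically sit in an enterprise gateway in front of sanctioned tools, so redaction happens centrally rather than relying on each employee remembering the rule.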
Executive framing and commentary
Intersys’ leadership frames the template as an immediate mitigation measure: mandatory training, a ban on personal AI accounts for company data, and oversight of approved tools are presented as minimum governance essentials for insurers. Quotes from Intersys’ executives underline the twin message of opportunity and risk — AI can boost underwriting and claims efficiency, but it demands quick governance action to protect policyholder data and commercial sensitivity.
Why insurers need an immediately usable AI governance template
Shadow AI and measurable breaches
Independent breach research indicates that unsanctioned AI use — commonly called shadow AI — is already implicated in a material share of incidents. Recent industry reporting summarising large-scale breach surveys shows that roughly one in five breaches had a shadow-AI component, which added hundreds of thousands of dollars to remediation costs on average. For organisations that handle high volumes of personal and regulated data, this exposure is acute.
Regulatory pressure without bespoke AI rules
Regulators have signalled a technology-neutral approach that holds firms accountable under existing frameworks rather than issuing prescriptive AI-only laws. That means insurers must interpret how rules on fairness, model governance, data protection, and consumer outcomes apply when AI is used. Practical policy templates give firms a defensible baseline of controls that can be harmonised with compliance teams and model governance units.
Operational concentration of risk
Insurers frequently process sensitive personal data, proprietary pricing models, and claims details. When staff use generative AI tools without controls, they risk exposing policyholder data or commercial actuarial inputs to external vendor models that retain inputs for training or are vulnerable to misconfiguration. A simple, enforceable corporate policy can materially reduce that attack surface and demonstrate reasonable steps to auditors and regulators.
What the template does well — notable strengths
1. Actionable and role-based language
The template’s strength is its practicality. Rather than abstract principles, it provides role-based rules and a short, actionable “10 Commandments” list employees can easily follow. This reduces the common gap between high-level policy and day-to-day behaviour. Practicality increases adoption speed and helps compliance teams demonstrate rapid remediation.
2. Focus on immediate technical mitigations
Controls such as centralised account management, toggling off “use my inputs for model training”, and explicit redaction guidance are concrete steps organisations can implement quickly. These mitigation measures are low-friction and address the most likely vectors for data leakage from employee queries to public models.
3. Free and shareable: lowers the adoption barrier
By distributing the template free of charge, Intersys removes a common barrier — time and budget — that stops smaller insurers and MGAs from building initial policies. For sectors where third-party ecosystems (brokers, MGAs, vendors) play a large role, an open template helps raise the floor of baseline controls across the market.
4. Aligns with regulatory expectations
Because UK regulators and many international supervisors are signalling that existing rules apply to AI usage, a policy that maps to operational resilience, model governance, and data protection expectations is defensible. The template’s emphasis on oversight and training dovetails with regulator messaging around accountability.
What the template does not solve — gaps and limitations
A policy plays an important role, but it is only one element of a mature AI governance program. Organisations should be clear-eyed about what a template cannot deliver.
1. It’s not a substitute for model lifecycle governance
The template helps govern user behaviour and third-party tool access, but it does not replace a full model risk-management programme for internally developed or high-impact third-party models. Model validation, continuous monitoring, retraining controls, explainability and fairness testing are specialist functions usually owned by model risk or data science teams. Firms need to complement a workplace policy with stringent model-lifecycle controls. This limitation is material where AI models drive pricing, claims triage or any decision with a material customer impact.
2. Vendor and API risk require deeper contractual and technical work
A corporate prohibition on personal accounts reduces risk, but many organisations rely on vendor-hosted or API-based models. Contracts, SLAs and technical controls (VPC endpoints, data-at-rest policies, encryption, and logging) are needed to secure those integrations. The template advises on behaviour — not on renegotiating third-party terms or implementing secure integration patterns.
3. Jurisdictional and legal edge cases
The template offers UK-centric pragmatic advice (including references to toggling off data contribution options), but insurers operating across the EU, US and APAC must map policy elements to local data-protection regimes and sectoral rules (for example, GDPR, the EU’s AI Act obligations for high-risk systems, or US state-level data laws). Some claims made in commentary around uptake and job titles have secondary sourcing and should be validated locally before being used as compliance evidence. Where claims cannot be independently verified, firms should treat them as directional rather than definitive.
4. Enforcement and culture change are hard
A policy is only effective if it is enforced and embedded in culture. Without consistent training programmes, incident playbooks and technical blockers (e.g., network-level controls and sanctioned-tool lists enforced by IT), the policy risks becoming window-dressing. The template’s “no training, no access” rule is clear, but implementing it at scale is a non-trivial operational change for large insurers.
Regulatory and market context: what the policy enables insurers to demonstrate
Aligning policy to regulator expectations
UK regulators have been explicit: AI use is subject to existing regulatory tools (SM&CR, Consumer Duty, model risk governance). Practical measures that demonstrate reasonable steps — clear policies, staff training, oversight, and technical controls — will help firms evidence compliance in audits and supervisory interactions. The Intersys template provides a documented starting point firms can map to their supervisory evidence packs.
Demonstrating reasonable steps against shadow-AI exposure
Independent breach research shows a measurable cost to shadow-AI incidents; documenting immediate mitigations helps firms show they acted to reduce foreseeable harm. That matters both for regulators and for cyber insurers assessing underwriting risk. A simple corporate prohibition on personal account use plus redaction rules can materially reduce the most common leakage routes.
Practical implementation roadmap for insurers
The policy template is most valuable when used as a component of a time-bound remediation programme. The following roadmap converts the template into a sequenced, implementable plan.
- Conduct a rapid inventory (0–2 weeks)
- Identify high-risk workflows where AI is already used (underwriting models, claims triage, customer support scripts).
- Map which teams use public AI tools and the data types they handle.
- Apply the template as a baseline (weeks 1–3)
- Publish the adapted AI policy to targeted teams, focusing first on high-risk units.
- Lock down centralised account management and audit logging for sanctioned AI tools.
- Technical mitigations (weeks 2–8)
- Enforce network segmentation for AI integrations; use enterprise API gateways and private endpoints where possible.
- Disable vendor options that reuse customer inputs for model training unless contractually secured.
- Role-based training and certification (weeks 2–12)
- Mandate role-specific training and require completion prior to access, as the template recommends.
- Integrate AI behaviour modules into annual compliance training.
- Model governance integration (month 1–ongoing)
- Ensure any high-impact model is subject to model validation, fairness testing and ongoing monitoring.
- Extend risk assessments to third-party model suppliers and include contractual clauses for data use and incident response.
- Continuous audit and improvement (quarterly)
- Run shadow-AI discovery scans and periodic audits to detect unapproved tooling.
- Update the policy and controls as vendor features and regulatory guidance evolve.
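The quarterly shadow-AI discovery step above can be sketched in a few lines. This is a minimal illustration rather than a method from the Intersys template: the domain list, log format and sanctioned-account names are all assumptions, and a real programme would feed from the firm's proxy, DNS or CASB logs.

```python
# Hypothetical watchlist of public AI services and of service accounts
# that are permitted to reach them via the approved enterprise tool.
KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
SANCTIONED_USERS = {"uw-team-svc"}

def find_shadow_ai(log_lines):
    """Yield (user, domain) pairs for unsanctioned hits on public AI
    services, parsed from simple 'user domain' proxy-log lines."""
    for line in log_lines:
        user, _, domain = line.strip().partition(" ")
        if domain in KNOWN_AI_DOMAINS and user not in SANCTIONED_USERS:
            yield user, domain

logs = [
    "jsmith chat.openai.com",
    "uw-team-svc chat.openai.com",  # sanctioned enterprise account
    "akhan claude.ai",
]
print(list(find_shadow_ai(logs)))
# -> [('jsmith', 'chat.openai.com'), ('akhan', 'claude.ai')]
```

Hits from a scan like this would feed the periodic audit: each unsanctioned user becomes a training or access-control action item rather than, in the first instance, a disciplinary one.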
Roadblocks insurers should anticipate
- Legacy systems and decentralised operations make rapid centralisation of AI accounts and tooling challenging. Expect delays in consolidating access control and inventorying API integrations.
- Vendor contracts may restrict the granular protections firms want (for example, some vendors’ terms limit control over retention of inputs). Addressing those clauses can be a slow procurement negotiation.
- Senior accountability needs clarity. Regulators expect clear senior-manager responsibility for model governance; assigning that in large organisations requires change-management work.
Industry implications: vendors, brokers and the cyber market
- Vendors will feel upward pressure to offer enterprise controls (training toggles, private endpoints, audit logs) as insurers standardise policy baselines. Expect commoditisation of features previously marketed as premium.
- Brokers and MGAs that rely on third-party tools will need to harmonise policies across contracts and data flows; market-wide adoption of a common template reduces systemic weak links.
- Cyber insurers will underwrite AI-related cyber risk with greater scrutiny. Improvements in first-line controls — as articulated in the Intersys template — can reduce premiums and improve loss ratios where insurers can demonstrate effective mitigation.
Critical assessment: strengths, risks and verdict
Intersys’ template is a useful, pragmatic contribution to a fast-moving problem set. Its strengths are immediacy, clarity and operational focus — it tells employees what to do and gives compliance teams a starting point that can be adapted. For smaller insurers and MGAs, this reduces the initial friction of policy design and helps uplift baseline cyber hygiene quickly. However, a few caveats matter for any insurer treating the template as a silver bullet:
- The template is not a substitute for robust model lifecycle governance or vendor contract remediation. Firms should not conflate user-level controls with model validation. Model risk management remains a specialist discipline requiring evidence, testing and monitoring beyond a workplace policy.
- Cross-border legal differences mean the template must be tailored rather than adopted verbatim in multi-jurisdictional organisations. Legal teams should map the policy to GDPR, the EU AI Act (where relevant), and any US state-level data constraints.
- The policy’s effectiveness depends on enforcement, tooling and corporate culture. Without technical blockers and continuous discovery, shadow AI can persist despite written rules.
Action checklist for boards and risk committees
- Approve and publish the adapted AI workplace policy within 30 days.
- Require senior-manager sign-off on AI risk allocation under the SM&CR or equivalent frameworks.
- Mandate completion of role-based AI training for all staff with access to regulated customer data prior to granting tool access.
- Commission a model and third-party risk gap assessment within 60 days to identify high-impact AI dependencies.
- Ensure incident-response runbooks include AI-related scenarios in the next tabletop exercise.
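The “no training, no access” requirement in the checklist above can be expressed as a simple gate in an access-provisioning workflow. This is a hypothetical sketch: the record store and the 12-month validity window are assumptions for illustration, not requirements from the template.

```python
from datetime import date, timedelta

# Assumed validity window; firms would set this per their training policy.
TRAINING_VALIDITY = timedelta(days=365)

def may_access_ai_tool(user: str, completions: dict, today: date) -> bool:
    """Grant AI-tool access only if the user has completed role-based
    training within the validity window ("no training, no access")."""
    completed = completions.get(user)
    return completed is not None and today - completed <= TRAINING_VALIDITY

records = {"jsmith": date(2025, 1, 10)}
print(may_access_ai_tool("jsmith", records, date(2025, 6, 1)))  # -> True
print(may_access_ai_tool("akhan", records, date(2025, 6, 1)))   # -> False
```

Wiring a check like this into the identity-and-access layer turns the policy clause into an enforced control rather than a documented expectation.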
Final thoughts
The release of Intersys’ AI governance template is both pragmatic and timely: it equips insurers with a usable baseline at a moment when uncontrolled adoption of generative AI has demonstrable operational and financial consequences. But organisations should not mistake a downloadable policy for a complete governance programme. The template performs a crucial first-line function — narrowing immediate exposure and making compliance posture demonstrable — but insurers must pair it with model lifecycle controls, vendor contract remediation, technical integrations and a sustained culture-change programme to manage the medium- and long-term risks of enterprise AI. Caution: certain market-level claims about hiring trends or specific percentages quoted in vendor commentary are directional and should be independently verified against primary data before being relied on for governance decisions. The template is a pragmatic step; comprehensive, auditable AI governance remains an enterprise-wide objective that goes well beyond a single document.