The Society of Radiographers has quietly but deliberately reshaped its privacy and technology policy to govern staff use of artificial intelligence, introduce a new learning management system, and formalise a cautious — yet progressive — approach to digital tools that touch member data. The changes make three things clear: staff must use approved AI tools (Microsoft Copilot is named as an example), personal data entered into AI systems must be minimised and human‑checked, and a new online learning platform, RAD Academy, will collect and process member learning records to support continuing professional development. These updates were highlighted for discussion at the SoR Reps’ Summit on Tuesday 11 November and mark a significant step in how a major professional body for radiography intends to balance innovation with information governance and professional accountability.
Source: SoR Updated privacy policy provides guidance on SoR’s use of AI | SoR
Background
Why this matters now
Professional membership bodies are under intensifying scrutiny over how they adopt and govern AI. Radiography, a field that routinely handles sensitive personal health information and regulatory records, is especially exposed. The Society’s updated privacy policy lands against a backdrop of rapid enterprise adoption of generative AI, growing regulatory pressure on data processors, and public debate over privacy, safety, and accuracy of AI outputs. The Society’s explicit naming of Microsoft Copilot and the rollout of “RAD Academy” frames its approach: embrace tools that improve efficiency and learning, but place rules, training, and oversight around them so member data and professional decisions remain protected.
The role of the Society
The Society of Radiographers (SoR), together with its charitable arm the College of Radiographers, represents radiography professionals across the UK. Its policies and guidance shape continuing professional development, workplace standards, and member services — including digital platforms and communications that often store personal and professional data. Changes to the Society’s privacy posture therefore ripple through member services, staff workflows, and — crucially — patient information handling in professional settings.
What the updated privacy policy says (practical summary)
- Staff are restricted to secure, approved AI tools for work‑related tasks; Microsoft Copilot is explicitly mentioned as an approved example.
- Use of AI is limited to assisting with administrative and analytical tasks that reference information already held within Society and College systems (e.g., drafting, summarising, retrieving or analysing internal records).
- Any personal data entered into an AI system must be necessary for the task, handled securely, and checked by a human before being used in communications or decisions.
- A new learning management system, RAD Academy, will collect standard learning records (name, email, role, progress and assessment results) to provide, manage and improve the learning experience.
- Staff usage will follow internal guidance and training, implying mandatory staff education and operational rules for tool use.
- The Society frames the changes under a commitment to responsible innovation, emphasising safety, transparency, and benefit to members and the wider professional community.
The building blocks: Microsoft Copilot and enterprise AI controls
What is Microsoft Copilot in the enterprise context
Microsoft Copilot is an integrated generative AI assistant embedded across Microsoft 365 applications. When deployed in enterprise settings, Copilot can assist with drafting emails, summarising documents, extracting information, and automating routine tasks. Enterprise deployments can be configured with additional governance controls intended to limit data leakage and confine AI operations to organisational data boundaries.
Controls available and practical limits
Organisations implementing Copilot can use controls such as:
- Data loss prevention (DLP) policies to block sensitive content from being typed into or transmitted to generative AI surfaces.
- Browser and endpoint protections that filter or restrict access to unsanctioned AI web apps (reducing “shadow AI”).
- Tenant-level configuration that keeps Copilot prompts and context within an organisation’s controlled cloud environment (subject to data residency options and vendor contracts).
- Authentication integration (SSO/MFA) and role‑based access control to ensure only authorised staff use Copilot features.
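To make the DLP idea in the first bullet concrete, a pre‑submission screen might reject prompts containing obvious identifiers before they reach an AI surface. The following Python sketch is purely illustrative: the `screen_prompt` helper and its patterns are hypothetical, not part of Microsoft Purview or any SoR tooling, and a real deployment would rely on tenant‑level DLP policies rather than ad‑hoc regexes.

```python
import re

# Hypothetical "do not enter" patterns for a pre-submission check; a real
# deployment would use Purview DLP policies rather than local regexes.
BLOCKED_PATTERNS = {
    "NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked data types detected in a prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

violations = screen_prompt("Summarise the CPD record for jo.bloggs@nhs.net")
if violations:
    print("Blocked before submission:", ", ".join(violations))
    # prints: Blocked before submission: email address
```

The deny‑list here is deliberately short; the policy gap discussed later (no published definition of sensitive data) is exactly what makes such a list hard to write well.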
Learning management systems (LMS) and RAD Academy: what to expect
Data collected and why
The Society’s statement about RAD Academy mirrors common LMS practice: to deliver personalised learning and CPD tracking, an LMS needs identifiers and progress data. Typical data points include:
- User identity and contact: name, email, professional role
- Authentication metadata: login times, IP addresses (for security/audit)
- Learning records: enrolments, course completions, quiz and assessment results, certificates
- Activity logs and training analytics to improve course design and compliance reporting
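Data minimisation in an LMS can be expressed in the schema itself: the record type holds only the fields it needs and nothing else. This Python sketch is hypothetical — RAD Academy’s real data model is not published — and simply mirrors the data points listed above.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical minimal schema mirroring the data points listed above;
# RAD Academy's actual data model has not been published.
@dataclass(frozen=True)
class LearningRecord:
    name: str
    email: str
    role: str                                  # professional role
    enrolments: tuple[str, ...] = ()           # course identifiers only
    completions: dict[str, datetime] = field(default_factory=dict)
    assessment_results: dict[str, float] = field(default_factory=dict)

record = LearningRecord(
    name="A. Example",
    email="a.example@example.org",
    role="diagnostic radiographer",
    enrolments=("RAD101",),
)
```

A deliberately narrow schema like this makes purpose limitation auditable: any new field (e.g. free‑text notes) has to be justified before it can exist.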
Key privacy and security obligations for an LMS in health and professional settings
- Data minimisation: collect only what is necessary to deliver learning and compliance functions.
- Purpose limitation and transparency: be explicit with users about what is collected, why, and how long it will be retained.
- Encryption at rest and in transit: use TLS/SSL for transmission and AES‑based encryption for stored data.
- Access controls and audit logging: limit access by role and log all administrative actions.
- Data portability and deletion: enable users and the organisation to export and remove records when required by law or policy.
- Vendor and hosting scrutiny: ensure third‑party LMS providers meet data residency, contractual and technical safeguards appropriate to regulated or professional data.
Strengths of the Society’s approach
- Explicit tool approvals: Naming approved tools reduces ambiguity and helps eliminate unsafe "shadow AI" use. A policy that directs staff to sanctioned tools is a practical way to enforce governance.
- Human‑in‑the‑loop requirement: Requiring human review of AI outputs before they influence communications or decisions aligns with good practice in regulated professions.
- Training and internal guidance: Mandated training recognises that technical controls alone are insufficient; staff behaviours are the most common failure point.
- Proactive LMS rollout: Centralising learning in RAD Academy provides an opportunity to standardise CPD, offer role‑based curricula, and embed privacy by design.
- Framing as "responsible innovation": The policy sets a tone of measured adoption rather than unquestioning tech enthusiasm, which is appropriate for a profession tied to patient care.
Risks, gaps and areas needing clarity
- Overreliance on a single vendor example: Mentioning Microsoft Copilot as an example of an approved tool is practical, but it risks complacency. Enterprise Copilot controls vary by tenant configuration and geography; the Society must avoid assuming a “one‑size‑fits‑all” safety posture.
- Unclear boundaries for sensitive data: The policy requires that "personal data entered must be necessary", but it does not explicitly define categories such as personal identifiers, health data, or sensitive professional records. Without concrete definitions and examples, staff may inadvertently expose higher‑risk data to AI systems.
- Retention and data residency questions: The policy does not publish specifics on how long learning records are stored or where AI interaction data (prompts, outputs, logs) are processed or retained. These are material items for legal compliance and member confidence.
- Third‑party LMS governance: It is not stated whether RAD Academy is hosted internally or provided by a vendor. Data controller vs processor roles must be explicit, and contracts must enforce encryption, breach notification, and audit rights.
- Auditability and incident response: The policy should clarify logging, monitoring, and post‑incident obligations. If Copilot is used to draft communications that later cause harm or leak data, the ability to trace actions and remediate quickly is essential.
- Cross-border regulatory complexity: Members, staff, and vendors may operate across jurisdictions. The policy does not detail how international data transfers or cross‑border processing will be handled.
Practical recommendations — a checklist for SoR and similar professional bodies
- Define "sensitive data" in operational terms
- List examples (names plus NHS/membership numbers, clinical identifiers, PHI, staff disciplinary records).
- Provide explicit "do not enter" examples for AI prompts.
- Publish a DPIA (Data Protection Impact Assessment)
- Conduct and share a DPIA for both Copilot usage and RAD Academy, summarising key risks and mitigations.
- Lock down technical controls
- Enforce tenant-level Copilot settings that keep prompts in‑tenant and disable web search where appropriate.
- Apply Purview DLP and endpoint protections to prevent sensitive data leakage.
- Define retention and deletion policies
- Publish retention timelines for RAD Academy learning records and for any AI interaction logs.
- Provide an export and erasure process for members.
- Vendor and contract hardening
- Ensure LMS vendor is contractually required to support encryption, audits, breach notification, and data locality controls.
- Insist on disclosure of subprocessors and prior notice of changes.
- Strengthen access controls
- Use SSO and MFA for RAD Academy access.
- Implement role‑based access control to restrict who can view assessment results or personally identifiable learning data.
- Embed governance and training
- Provide role‑based training (administrators, clinical staff, comms teams).
- Maintain an AI use register that documents approved use cases and exceptions.
- Test and audit
- Schedule regular security assessments and privacy audits for RAD Academy and AI configurations.
- Run tabletop exercises for AI-related data incidents.
- Communicate transparently to members
- Publish a simple privacy notice for RAD Academy and an FAQ explaining what data is collected, why, and members’ rights.
- Plan for continuous review
- Commit to reviewing AI controls and vendor capabilities on a fixed cadence (e.g., every six months), as enterprise AI landscapes evolve rapidly.
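Several checklist items — notably the role‑based access control point — can be illustrated with a deny‑by‑default permission check. The roles and permission names in this Python sketch are hypothetical, not RAD Academy’s actual configuration.

```python
# Hypothetical role-to-permission map; deny by default.
# RAD Academy's actual roles and permissions are not published.
PERMISSIONS = {
    "lms_admin": {"view_results", "export_results", "manage_courses"},
    "course_tutor": {"view_results"},
    "member": set(),  # members see only their own data via a separate path
}

def can(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in PERMISSIONS.get(role, set())
```

Unknown roles fall through to an empty permission set, so a misconfigured account fails closed rather than open — the safer default for assessment data.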
Governance instruments that should accompany the policy
Human oversight standards
- Role‑specific checklists for reviewers who must validate AI outputs (what to check, red flags for hallucination or bias).
- Requirement to retain original AI prompt and human corrections for audit (subject to privacy and retention policies).
Approval and exceptions process
- A formalised route to request and record exceptions to the "approved tools" list, including documented risk assessments.
Recordkeeping for AI-assisted decisions
- Maintain an “AI decision log” where outputs that materially influence member communications, professional advice or regulatory outcomes are recorded and justified.
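An AI decision log of the kind described above can be as simple as an append‑only JSON‑lines file recording the prompt, the output, the human reviewer and their justification. The field names in this Python sketch are illustrative, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(path: str, prompt: str, output: str,
                    reviewer: str, justification: str) -> dict:
    """Append one AI-assisted decision to a JSON-lines audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,            # retained per the oversight standard above
        "output": output,
        "reviewer": reviewer,        # the human who validated the output
        "justification": justification,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Because each entry is one self‑contained line, the log can be grepped during an audit and truncated in line with whatever retention policy applies to AI interaction data.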
Transparency for members
- Simple, accessible explanations of what RAD Academy collects and how members’ learning records are used. Provide clear opt‑out alternatives where feasible.
Legal and regulatory considerations
The Society operates primarily under UK data protection law and, where applicable, GDPR principles. This imposes clear obligations:
- Lawful basis and purpose limitation: Personal data must be collected for specified purposes (education, CPD tracking) and not repurposed without legal basis.
- Data subject rights: Members have rights to access, correct, export and request deletion of personal data.
- Accountability: The Society must document processing activities, DPIAs and risk mitigation.
- Cross‑border transfers: If RAD Academy or Copilot interactions are processed outside the UK/EEA, appropriate transfer mechanisms must be in place.
How members and local employers should respond
- Assume no AI output is authoritative: continue to treat AI as an assistant, not as a substitute for clinical judgement or regulated decision‑making.
- Review local policies: where SoR guidance intersects with employer systems and patient data, local NHS trust or employer policies may impose stricter controls.
- Check consent and disclosure: if learner data will be used for research or aggregated analytics, explicit, informed member consent (or lawful basis) is required.
- Request clarity: members should ask SoR for published retention schedules, vendor contracts’ high‑level privacy commitments, and the Society’s DPIA summary.
A balanced verdict: progressive with caveats
The Society of Radiographers has taken a pragmatic route: it acknowledges the potential of AI to reduce administrative friction and support learning, while insisting on human oversight and targeted controls. Naming an enterprise tool and launching a central LMS are decisive steps that will help standardise practice and reduce unsafe ad‑hoc tool use. However, the policy as published leaves several operational details unspecified. The most consequential omissions concern the definitions of sensitive data, retention policies for both LMS records and AI prompts, and the vendor governance posture for RAD Academy. These are not small administrative items; they are the locus of legal, ethical and reputational risk. A policy that sets a responsible tone must be followed by concrete, public‑facing technical and contractual details that give members confidence their professional data — and by extension patient information — will remain protected.
Practical next steps SoR should publish publicly (recommended timeline)
- Within 30 days: Publish a short privacy notice for RAD Academy and a one‑page AI user guide that defines forbidden data types for Copilot.
- Within 60 days: Produce and share a DPIA executive summary for both Copilot use and RAD Academy.
- Within 90 days: Finalise vendor contracts and publish a vendor assurance statement confirming encryption, audit rights, and data residency commitments.
- Ongoing (every 6 months): Review and publish an AI control report summarising incidents (if any), policy changes, and new approved tools.
Conclusion
The Society’s updated privacy policy marks a credible, cautious step toward integrating AI and modern learning platforms into professional services for radiography. The policy’s strengths lie in explicit tool approvals, a human‑in‑the‑loop requirement, and the launch of a structured learning environment. To convert these intentions into durable protections, the Society must now close important operational gaps: define sensitive data, publish retention and residency commitments, formalise DPIAs, and embed vendor assurances. Done correctly, these measures will protect members and patients while unlocking the clear productivity and learning benefits AI promises. Done incompletely, they risk exposure to preventable data incidents and erode member trust at a moment when both transparency and rigour are essential.