King’s College London Spends £35K on Copilot as AI Misuse Expulsions Rise

King’s College London has quietly spent £35,013 on Microsoft Copilot licences while university disciplinary records show AI use has become a live integrity issue — with 10 students expelled since September 2022 for cases in which AI misuse was cited as a major factor.

Background

King’s disclosure, obtained by journalists via Freedom of Information requests and published by a student news outlet, states that the university bought 80 tenant seats as part of an Enrollment for Education Solutions (EES) renewal in September 2024 for £28,396, and later purchased 13 ad‑hoc licences for £6,617, a combined total of £35,013.

The same reporting aggregates the disciplinary picture: between September 2022 and the present, AI was mentioned in 55 cases that reached the misconduct committee, 42 of those cases led to some sanction, and 10 resulted in expulsion. In the 2024/25 academic year alone, 32 students were investigated, 17 disciplined and five expelled for AI‑related reasons.

These figures land against a fast‑moving sector debate. National and international bodies are warning campuses to treat Copilot‑style assistants with caution because of unresolved privacy and accuracy concerns, and the sector ombudsman has explicitly reminded universities of the limitations of automated AI‑detection tools when adjudicating misconduct.

Overview: what the numbers mean​

Short, concrete takeaways:
  • King’s procurement cost for Copilot seats is modest in absolute terms for a large Russell Group institution, but it signals an institutional decision to adopt Microsoft’s assistant within part of its administrative/educational estate.
  • The disciplinary data show AI usage is already material to academic integrity processes; expulsions have occurred and dozens of students have faced sanctions.
  • The national context is volatile: privacy watchdogs and education consortia have warned against or advised caution around Copilot deployments, and ombudsman decisions stress careful evidential standards before penalising students based primarily on detector output.
Each of those facts is verifiable in public reporting and institutional material; where underlying records remain closed for privacy reasons, the FOI‑sourced journalism stands as the authoritative public record of those returns at the time of publication.

King’s procurement: scale, scope and likely rationale​

What King’s bought and why it matters​

King’s purchased 80 Copilot licences as part of the education agreement renewal and added 13 licences on an ad‑hoc basis. The line items published in the FOI response show this was an enterprise/education seat allocation rather than a consumer promotion, and the headline cost of roughly £35k is concentrated in per‑seat licensing rather than systems integration or bespoke development.

For IT leaders this pattern is familiar: buying a modest cluster of tenant licences is a conservative procurement step that lets institutions pilot the productivity and administration benefits of Copilot (email summarisation, meeting notes, document drafting assistance, automated admin tasks) without provisioning campus‑wide exposure or paying for thousands of student seats. Internal pilots often anchor larger later buys if the tool proves operationally valuable, and rollout documents from other campuses show a common playbook: start with staff/admin seats, couple the rollout with DLP and SSO controls, and expand once governance and pedagogy are aligned.

Budget perspective​

By higher‑education procurement standards, £35k for 93 seats in total (80 + 13) is not a transformative capital line. The cost signal, however, is political: it tells faculty and students that the institution views Copilot as part of its operational toolkit. That matters because the technical and contractual terms of tenant Copilot licences differ materially from the consumer Copilot seats students may obtain through personal Microsoft accounts, especially on data use and contractual protections. SURF, the Dutch educational ICT cooperative, has repeatedly warned institutions to be careful about Copilot’s privacy footprint and contractual transparency.
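As a quick sanity check on the published line items, the implied per‑seat prices can be worked out directly from the FOI figures. The arithmetic below is illustrative only; it uses the reported totals and says nothing about King’s actual contract terms.

```python
# Illustrative per-seat arithmetic from the FOI-reported line items.
renewal_cost, renewal_seats = 28_396, 80   # EES renewal, September 2024
adhoc_cost, adhoc_seats = 6_617, 13        # later ad-hoc licence purchases

total_cost = renewal_cost + adhoc_cost     # £35,013
total_seats = renewal_seats + adhoc_seats  # 93

print(f"Renewal: £{renewal_cost / renewal_seats:,.0f} per seat")  # ~£355
print(f"Ad hoc:  £{adhoc_cost / adhoc_seats:,.0f} per seat")      # ~£509
print(f"Blended: £{total_cost / total_seats:,.0f} per seat")      # ~£376
```

On those figures the ad‑hoc seats cost roughly 40% more per head than the renewal allocation, which would be consistent with purchases made outside agreement pricing, though the FOI response does not state the reason for the difference.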

Disciplinary outcomes: expulsions, investigations, and patterns​

What the FOI reporting reveals​

The reporting may be summarised as follows: since September 2022, AI has been explicitly mentioned in 55 misconduct committee cases at King’s; 42 of those cases resulted in disciplinary action; and 10 expulsions have been recorded where AI misuse was cited as a major reason. The most recent academic year (2024/25) alone included 32 investigations, 17 sanctions and five expulsions. Those numbers are significant in that they show institutional enforcement is already happening, not merely policy statements.

Comparative context​

Media reporting from other outlets shows King’s is not alone: national investigations have found expulsions linked to AI misuse at a handful of UK universities, and sector‑level analysis suggests enforcement is uneven across institutions. For example, investigative reporting into Russell Group universities showed only a small proportion of students faced penalties for AI misuse nationally, but several universities, including King’s, did confirm expulsions connected to AI. That unevenness points to divergent local policy and evidential practice.

What the numbers do — and do not — prove​

  • They prove that sanctions are occurring. The FOI‑derived totals are concrete evidence that disciplinary machinery is engaging with generative AI use.
  • They do not prove uniform fairness or consistent evidential thresholds. Public ombudsman rulings and sector commentary warn that automatic reliance on detection tools or single indicators is legally and ethically risky; each case must be examined on its facts and process.

Students’ experience: mixed use, convenience and concern​

Roar’s student survey data accompanying the FOI reporting indicate patterns many educators already recognise:
  • 60% of surveyed students use AI to summarise readings.
  • 57% use it for idea generation and assignment planning.
  • 32% use it to help write essays, a smaller but still significant share.
These figures illustrate a practical reality: students treat generative assistants as study aids first (summaries, brainstorming) and as content generators second (essay drafting). That blend explains both the pedagogical promise and the integrity risk: when a tool moves from scaffolding to substitution, the institution’s assessment design and supervision must change to preserve the value of the qualification. Independent user surveys and campus pilots elsewhere echo the same pattern: students reach for AI when under time pressure, and a minority will attempt to use it in ways that cross academic rules.

King’s official stance and the student union response​

King’s policy​

King’s public guidance states that generative AI can be used only at the discretion of module convenors; copy‑and‑paste into submissions is forbidden; and any use must be acknowledged. The College’s Academic Board recognises that student use is likely to continue regardless of policy, and that staff must undergo a mindset shift to adapt assessment and pedagogy. This framing is typical of the sector: a restrictions‑plus‑education approach rather than an outright ban.

KCLSU (students’ union) and the AI Manifesto​

The student union at King’s has been drafting an AI Manifesto to clarify the Union and University position. The Union has raised two concerns: that policy language is too unclear, leaving students open to being wrongly accused, and that detection tools are imperfect and can lead to unfair outcomes. That tension between institutional enforcement and student due process is where many campus disputes are playing out nationally.

Detection technology and the ombudsman’s warning​

The technical limits of detection​

Automated AI detectors are improving but remain fallible. Peer‑reviewed studies and sector testing show detectors struggle with hybrid texts (human‑edited AI outputs), short answers, translated or non‑native English wording, and deliberate obfuscation. Several universities have paused or disabled Turnitin’s AI indicator after evidence of false positives, and the Office of the Independent Adjudicator has explicitly cautioned providers against over‑reliance on these tools as conclusive evidence.
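A simple base‑rate calculation shows why a detector flag on its own is weak evidence. The numbers below are hypothetical assumptions, not figures from Turnitin, King’s or the OIA: an assumed 90% detection rate, a 2% false‑positive rate, and a cohort in which 10% of submissions actually involve undisclosed AI use. The structure of the argument holds for any plausible values.

```python
# Hypothetical illustration of why detector flags need corroboration.
prevalence = 0.10            # assumed share of submissions with undisclosed AI use
sensitivity = 0.90           # assumed chance the detector flags a genuine case
false_positive_rate = 0.02   # assumed chance it flags honest work

cohort = 1_000  # submissions
true_flags = prevalence * sensitivity * cohort                  # 90 genuine cases flagged
false_flags = (1 - prevalence) * false_positive_rate * cohort   # 18 honest students flagged

precision = true_flags / (true_flags + false_flags)
print(f"{true_flags + false_flags:.0f} flags in total; {false_flags:.0f} "
      f"({1 - precision:.0%}) would point at students who did nothing wrong")
```

Even on these optimistic assumptions, roughly one flag in six lands on an innocent student, which is why the ombudsman treats detector output as a lead to investigate rather than proof.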

Procedural fairness and burden of proof​

The OIA’s case summaries emphasise that the burden is on the provider to prove misconduct; detectors can be an investigatory lead but not the sole basis for sanctions. Universities must weigh detection results against process evidence such as draft histories, viva voce (oral) defences, access logs, and corroborating material before imposing severe penalties like expulsion. This is a legal and reputational constraint that all institutions must build into their misconduct procedures.

Privacy, governance and the SURF advisory​

SURF’s DPIA and the privacy question​

SURF’s Data Protection Impact Assessment into Microsoft 365 Copilot concluded that the tool posed privacy risks to educational institutions, primarily because of a lack of transparency about what personal data is collected, how long it is retained, and who can access it. SURF advised educational and research institutions not to use Microsoft 365 Copilot “for the time being” in December 2024, and has since moved to a more cautious “use with restrictions” posture as Microsoft provided mitigation information. That advisory places a governance burden on any institution that deploys Copilot: legal teams, procurement and IT must ensure contract terms, retention windows, non‑training guarantees and DLP controls are explicit and effective.

What procurement must cover​

For IT and procurement professionals, the takeaways are practical:
  • Require contractual clarity on whether prompts and uploaded files will be used to further train public models.
  • Negotiate retention windows, deletion rights, audit/export access and data localisation where required.
  • Enforce strict SSO, SCIM and role‑based admin controls so tenant seats cannot be misused and so forensic logging remains feasible; institutional pilots and guidance documents emphasise these controls as essential. A minimal licence‑audit sketch follows this list.
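As one concrete illustration of the forensic‑logging point, the sketch below shows how an IT team might enumerate which accounts hold Copilot seats in a Microsoft 365 tenant via Microsoft Graph. It is a minimal sketch under stated assumptions, not a description of King’s configuration: the app registration, the Graph permission (User.Read.All, application type) and the Copilot SKU identifier must all be confirmed against your own tenant, and the placeholder values are hypothetical.

```python
# Minimal sketch: list accounts holding Copilot seats via Microsoft Graph.
# Assumes an app registration granted the User.Read.All application permission.
# The SKU match string is a placeholder; confirm the real skuPartNumber in your
# tenant via GET /subscribedSkus before relying on it.
import msal
import requests

TENANT_ID = "<tenant-id>"        # hypothetical placeholders
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-secret>"
COPILOT_SKU_HINT = "COPILOT"     # matched against skuPartNumber, case-insensitive

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}

# Resolve the tenant's Copilot SKU id(s) from the subscribed SKUs.
skus = requests.get("https://graph.microsoft.com/v1.0/subscribedSkus", headers=headers).json()
copilot_sku_ids = {
    s["skuId"] for s in skus.get("value", [])
    if COPILOT_SKU_HINT in s["skuPartNumber"].upper()
}

# Page through users and report who holds one of those SKUs.
url = "https://graph.microsoft.com/v1.0/users?$select=displayName,userPrincipalName,assignedLicenses"
while url:
    page = requests.get(url, headers=headers).json()
    for user in page.get("value", []):
        if any(lic["skuId"] in copilot_sku_ids for lic in user.get("assignedLicenses", [])):
            print(user["userPrincipalName"])
    url = page.get("@odata.nextLink")
```

Run periodically and paired with SSO sign‑in logs, an audit like this gives governance boards a defensible record of who had access to the tool and when, subject to the assumptions noted above.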

Strengths of institutional Copilot adoption​

Adopting Copilot in a controlled way can deliver measurable benefits:
  • Productivity gains for staff: automated drafting, meeting summarisation and email triage save time across administrative functions.
  • Pedagogical scaffolding: Copilot can be used to generate formative exercises, summary notes and revision prompts that, when supervised, support student learning.
  • Controlled exposure: a limited allocation of institutional seats gives staff and researchers privacy‑governed access to the technology for research and training, building institutional capability responsibly. Institutional playbooks from multiple deployments suggest these are realistic, immediate gains, provided governance and pedagogy keep pace.

Risks and downsides​

  • Privacy and contractual opacity: Copilot’s data handling and telemetry practices have been flagged by external auditors; institutions must not assume vendor statements are sufficient without written contractual guarantees. SURF’s DPIA remains a cautionary benchmark.
  • Academic integrity and assessment design: rapid student adoption of AI tools risks deskilling when assessments continue to reward polished final output rather than authentic process; redesigns toward staged submissions, process logs and oral assessments are necessary steps.
  • Detection and due process: detectors do not provide courtroom‑grade proof. The ombudsman’s summaries show that over‑reliance on a black‑box indicator leads to unsafe findings, appeals, and reputational damage.
  • Equity and access: widespread student use of consumer AI can widen gaps, because students with better devices and paid subscriptions gain advantages unless institutions address parity. Pilots and rollout plans elsewhere consistently emphasise device parity and opt‑out pathways as mitigations.
  • Vendor lock‑in and recurring cost: a small initial spend can be a precursor to multi‑year commercial dependence, and without clear exit and data‑export provisions, migration costs escalate. Procurement checklists recommend negotiating audit/export rights and explicit non‑training terms.

Practical recommendations for universities and IT leaders​

  • Treat AI procurement as cross‑functional — procurement, legal, IT, academic leads, and student reps must be at the table.
  • Require explicit non‑training clauses, retention periods, and export rights in contracts with vendors. SURF’s DPIA should be a model for contract‑level questions.
  • Start small: pilot Copilot with staff/admin seats first, collect telemetry, pedagogical outcomes, and any integrity incidents over a semester before scaling.
  • Redesign assessments: move to staged submissions, process logs, oral vivas and portfolios where authentic process is required. Teach prompt literacy and demand disclosure of AI assistance.
  • Avoid punitive-first responses: ensure investigations do not rest solely on an AI detector’s score; secure drafts, timestamps, and viva evidence before escalating to major sanctions. OIA guidance is explicit on this point.
  • Communicate clearly with students: publish plain‑language AI guidance, explain how detections are used in investigations, and provide support for students to learn to use AI responsibly.

Why King’s figures matter beyond the headline​

King’s spending figure alone is not a scandal — it is a governance decision that reflects the university’s view of productivity tooling. The more consequential story lies in the disciplinary totals: expulsions and dozens of sanctions indicate that policy and practice have not simply been passive. Where institutions adopt AI tools, they must simultaneously mature assessment design, investigatory practice and procurement safeguards. The national sector trends — public advisories from SURF, ombudsman rulings, and universities disabling or revising detector use — show exactly how messy and policy‑sensitive this transition is.

Final analysis — balancing opportunity and institutional risk​

Institutional Copilot adoption brings real operational benefits but imposes a governance tax. The benefits are highest when:
  • The rollout is limited and instrumented,
  • Contracts are explicit about data use and training,
  • Faculty redesign assessments, and
  • Students are taught how to use AI responsibly and required to disclose its use.
The risks are acute where procurement outpaces policy, detectors are treated as conclusive evidence, or contracts fail to protect student and research data. The ombudsman rulings and SURF advisories should be read as early‑warning signals: they mean the sector must slow procurement rollouts that lack legal and pedagogical guardrails. Universities that manage the transition well will do three things: negotiate strict contractual protections, redesign assessment to foreground process over polished final prose, and treat detection outputs as investigatory leads rather than verdicts. Those are technical, legal and cultural fixes, and none of them is optional if higher education is to preserve the credibility of degrees while responsibly integrating AI into everyday teaching and administration.

King’s College London’s newly reported Copilot spending and the FOI‑revealed disciplinary statistics are a microcosm of a sector pulled between productivity gains and integrity obligations. The lessons here are operational: procurement must be paired with contract safeguards; pedagogy must be redesigned to measure learning process, not an artefact that AI can replicate; and institutions must build investigatory procedures that respect student rights, are evidence‑based, and avoid over‑reliance on imperfect detection tools. The future of credible assessment depends on getting those tradeoffs right.
Source: Roar News, “Exclusive: King's Spent £35k on Copilot AI as 10 Students Expelled for AI Misuse Since 2022”
 
