Liverpool John Moores University has enrolled 134 staff in a new AI Academy delivered in partnership with training provider Multiverse, kicking off an institution-wide push to embed generative AI into everyday workflows with the explicit aim of reclaiming administrative time for teaching, research, and student support.
Background
Liverpool John Moores University (LJMU) says the AI Academy will bring academics and professional services staff together under a single upskilling programme designed to build AI capability at scale, strengthen data handling, and promote a consistent approach to responsible AI across the institution. The initial intake — 134 employees drawn from across faculties and support services — will start on a sequence of Multiverse programmes that include practical productivity training, advanced leadership pathways, and fellowship-level work in machine learning and data-driven process redesign.
This announcement marks a shift from small, local pilots toward coordinated, institution-level deployment strategies. Rather than treating AI as the preserve of a few innovation teams, LJMU is explicitly positioning AI as operational infrastructure: a set of tools and processes intended to sit alongside established systems and to be used across teaching, assessment, administration, and research workflows.
What the programme covers — a practical overview
The academy curriculum is structured as a ladder of learning, beginning with hands-on, tool-focused modules and progressing to strategy and leadership training. Key components reported by the university and the training partner include:
- Level 3: AI-Powered Productivity — practical use of generative AI in day-to-day work, including working with platforms such as Microsoft 365 Copilot and large-model assistants like Gemini.
- Additional Level 3 or equivalent modules: AI for Business Value and AI & Machine Learning Fellowship — focused on process redesign, ethical use, and converting data into actionable insights.
- Level 5: AI Strategy and Leadership — aimed at managers and teams responsible for guiding AI implementation across units and balancing technical, operational and ethical trade-offs.
The headline efficiency claim — what it actually says
LJMU projects that staff who complete the training and adopt AI workflows could recover an average of 4.5 hours of work per week. The university highlights several routine activities where AI-driven automation and assistance are expected to save time:
- automating reports and routine data pulls
- streamlining document preparation and drafting
- improving triage of enquiries and case work
- reducing manual note-taking through AI-assisted transcription and summarisation
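The first of those activities, automating a routine report, can be sketched in a few lines. The data shape and field names below are invented for illustration, not drawn from LJMU's systems:

```python
# Hypothetical sketch of a routine weekly report of the kind the
# programme targets: counting support enquiries by category.
from collections import Counter
from datetime import date

def weekly_enquiry_report(enquiries: list[dict]) -> str:
    """Summarise enquiries by category into a short plain-text report."""
    counts = Counter(e["category"] for e in enquiries)
    lines = [f"Enquiry summary for week of {date.today().isoformat()}"]
    for category, n in counts.most_common():
        lines.append(f"- {category}: {n}")
    lines.append(f"Total: {sum(counts.values())}")
    return "\n".join(lines)

sample = [
    {"category": "timetabling"},
    {"category": "IT access"},
    {"category": "timetabling"},
]
print(weekly_enquiry_report(sample))
```

Even a small script like this replaces a weekly manual tally; the point of the training is to make this kind of automation routine rather than exceptional.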
Why LJMU is doing this now: operational pressures and strategic goals
Several environmental and strategic factors explain the timing and scale of LJMU’s move:
- Financial pressure across higher education is real; institutions are looking for scalable efficiency gains without resorting to cuts that damage core teaching and student support services.
- Student expectations have shifted: learners expect timely feedback, flexible contact hours, and modern digital experiences. Reclaiming administrative time to increase staff-student engagement is a tangible way to respond to those expectations.
- The maturity of generative AI tools and their integration into mainstream productivity suites (for example, Microsoft 365 Copilot and other large-model assistants) makes practical deployment feasible in ways it wasn’t two years ago.
- Risk management and governance concerns are maturing. Rather than banning or constraining AI use, many universities now prefer controlled, upskilling-led rollouts that embed policy, ethics, and technical controls.
Multiverse as the delivery partner — strengths and caveats
Multiverse is an applied learning and apprenticeship provider that has expanded into corporate upskilling, including AI, data and digital competencies. The vendor emphasises on‑the‑job learning, cohort models, and measurable outcomes.
Notable strengths of the vendor model include:
- Experience delivering applied apprenticeships and workplace learning at scale across sectors.
- A productised curriculum that can be deployed consistently across cohorts and locations.
- Practical training oriented toward measurable workplace outcomes rather than theory alone.
Caveats to weigh against those strengths:
- Vendor-reported apprenticeship counts and metrics vary across different communications; prospective partners should validate the vendor’s delivery capacity and alumni outcomes with independent references.
- Like many fast-growing training firms in the AI space, Multiverse has seen rapid expansion alongside significant investment — institutions should assess partner financial sustainability and long-term support commitments.
- The value of any training programme hinges on organisational change management: training professionals is necessary but not sufficient. Universities must also adapt processes, integrate tooling with existing systems, and measure real-world outcomes.
Responsible AI: governance, policy and data protection
Training staff to use generative AI creates immediate governance questions that must be answered before wide deployment:
- Data protection and compliance: Any use of AI that ingests student, staff, or research data must comply with applicable data protection frameworks (for UK institutions, the UK GDPR and related guidance). This means defining allowed data classes, retention policies, and whether models operate on-premises, via trusted enterprise channels, or on public cloud endpoints.
- Model choice and vendor lock-in: Integration with productivity layers like Microsoft 365 Copilot or cloud model providers introduces vendor dependencies. Procurement should evaluate portability, exit strategies, and whether fine-tuning or private deployments will be required for sensitive workloads.
- Accuracy and hallucination risk: Generative models can produce confidently worded errors. Workflows that use AI for automated communications, assessment feedback, or decision-support must include human review steps and clear accountability.
- Academic integrity and assessment: The presence of generative tools changes the assessment landscape. Universities need updated academic integrity policies, explicit expectations for AI use in coursework, and aligned assessment design that tests higher-order skills less susceptible to automation.
- Equity and access: Not all staff and students will adopt tools at the same rate. Training programmes must be inclusive and accompanied by accessibility measures so benefits do not accrue unevenly across cohorts or departments.
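As a concrete illustration of the data-protection point, a deployment might gate prompts before they reach any public endpoint. The sketch below uses crude regex patterns as a stand-in for a real data-classification or DLP service; the patterns and routing labels are assumptions for illustration only:

```python
import re

# Crude personal-data patterns (illustrative only; a real deployment would
# use a proper data-classification / DLP service, not regexes).
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"\b\d{8}\b"),                # 8-digit ID-like numbers
]

def route_prompt(prompt: str) -> str:
    """Decide which class of model endpoint a prompt may be sent to."""
    if any(p.search(prompt) for p in PII_PATTERNS):
        return "enterprise-only"  # keep regulated data off public endpoints
    return "public-allowed"

print(route_prompt("Summarise attendance for student 20481133"))
print(route_prompt("Draft an agenda for the faculty away day"))
```

The design point is that the gate sits in front of every model call, so policy is enforced technically rather than left to individual judgement.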
Measuring success — what to track and how to prove ROI
Institutions that want to replicate LJMU’s approach should define measurable success criteria from day one. Key indicators include:
- Baseline workload and time-use metrics segmented by role (teaching time, admin time, student contact hours).
- Changes to student-facing KPIs (timeliness of feedback, student satisfaction scores, progression/retention signals).
- Process efficiency metrics (time-to-complete routine tasks, number of manual touches per workflow).
- Quality and safety indicators (error rates in automated reports, incidence of data protection breaches, audit logs).
- Uptake and confidence measures (percentage of staff using AI tools, self-reported confidence and competence).
To convert those indicators into defensible ROI claims:
- Conduct a baseline time-and-motion or workload audit across representative teams before training starts.
- Run a controlled pilot cohort where outputs and time savings are tracked against matched controls.
- Use a mix of quantitative metrics (time saved, number of automated tasks) and qualitative evidence (staff surveys, case studies).
- Publish a transparent evaluation that shows methodology and assumptions behind headline figures like projected hours saved.
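A minimal version of the pilot-versus-control comparison might look like the following; the weekly admin-hours figures are invented for illustration, not real LJMU data:

```python
# Sketch of the pilot-versus-control evaluation described above:
# compare mean weekly admin hours of a trained cohort against
# matched untrained teams, using illustrative figures.
from statistics import mean

def hours_saved(pilot: list[float], control: list[float]) -> float:
    """Estimate weekly hours saved as the difference in mean admin time."""
    return mean(control) - mean(pilot)

pilot_hours = [11.0, 12.5, 10.0, 11.5]    # trained cohort, post-rollout
control_hours = [15.5, 16.0, 14.5, 16.0]  # matched untrained teams

saving = hours_saved(pilot_hours, control_hours)
print(f"Estimated hours saved per week: {saving:.1f}")
```

A real evaluation would add significance testing and matched sampling, but even this framing forces the institution to state a baseline rather than quote a headline projection.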
Operational risks and mitigations
Adopting AI at scale brings practical risks. Below are common issues universities will encounter and suggested mitigations.
Risk: Sensitive data used in model prompts or training
- Mitigation: Implement strict data classification, prevent ingestion of personal data into public models, and prefer enterprise or on-prem model deployments for regulated data.
Risk: Vendor lock-in through deep integration with a single provider
- Mitigation: Maintain interoperability standards, insist on exportable artefacts (e.g., audit logs, prompt histories), and negotiate contractual exit clauses.
Risk: Time savings absorbed into other work rather than reaching students
- Mitigation: Tie efficiency targets to student-facing metrics and earmark time savings for defined student support activities.
Risk: Staff anxiety that automation will displace roles
- Mitigation: Frame training as augmentation, not replacement; reskill staff toward higher-value tasks such as student engagement and complex problem-solving.
Risk: Erroneous or biased outputs influencing decisions
- Mitigation: Introduce human-in-the-loop review for all outputs used in decision-making; run regular bias and safety testing.
Risk: Fragmented, inconsistent practice across faculties and units
- Mitigation: Adopt a centralised policy and a federated delivery model — central governance with devolved implementation — to ensure consistent standards.
A practical checklist for universities planning an AI Academy
- Conduct a thorough baseline audit of staff time use and process bottlenecks.
- Define clear, measurable goals tied both to productivity and to student outcomes.
- Select training partners who commit to outcome measurement and who provide evidence of workplace impact.
- Build a cross-functional governance body: IT, legal, academic staff, student representatives, and data protection officers.
- Specify data handling rules up front and align tool selection with compliance requirements.
- Pilot with a representative cohort and publish transparent evaluation results.
- Invest in integration: ensure AI tools are usable inside existing systems rather than as standalone stovepipes.
- Create feedback loops: collect staff and student input, monitor usage patterns, and iterate on policy and training.
- Plan for long-term support: ongoing upskilling, model updates, and a roadmap for scaling.
The academic integrity question — how to redesign assessment and feedback
Generative AI changes what it means to test learning. Universities adopting AI broadly must rethink assessments in ways that:
- Emphasise process, reflection, and in-person or proctored demonstration of complex skills.
- Use authentic assessment tasks linked to real-world problems where AI can be part of the toolkit but cannot produce entire submissions without substantial student input.
- Integrate AI-awareness into curricula so students can demonstrate competence in using AI responsibly and critically.
- Train markers and examiners to identify AI-produced artefacts and to use AI themselves for consistent, evidence-based feedback — subject to human oversight.
Vendor sustainability and market context — a sober look at partners
Providers such as Multiverse have scaled rapidly to meet enterprise demand for AI and digital skills. That growth brings benefits — tested curricula, implementation experience, employer networks — but also business risks: fast-growing firms can be subject to operating losses, strategic pivots, and market consolidation.
Decision-makers should evaluate partners on:
- Track record of delivering similar programmes in higher education.
- Demonstrable learner outcomes and employer testimonials.
- Financial and operational stability to ensure multi-year programme delivery.
- Depth of content expertise — not just educational design but also domain-specific AI safety and governance.
Broader sector implications: from experimentation to infrastructure
LJMU’s approach typifies a broader sector trend: universities moving from experimentation to treating AI as part of core infrastructure. When a university invests in training cohorts across both academic and professional services, it signals a systemic adoption model. That has three structural implications:
- Procurement: Universities will need procurement frameworks that account for AI-specific requirements (model provenance, explainability, data residency).
- Staffing: New job profiles will emerge — AI ops, model auditors, prompt engineers in professional services — while existing roles will shift toward orchestration and oversight.
- Education mission: Institutions must balance the benefits of operational AI with the responsibility to prepare students for a world where AI tools are ubiquitous.
Recommendations: pragmatic steps for implementation
For university leaders considering an AI Academy, these steps increase the chance of beneficial, sustainable outcomes:
- Start with business-critical workflows where risks are low and ROI is measurable (e.g., scheduling, routine reporting, transcription).
- Build robust consent, logging, and audit infrastructure to trace how data flows through AI systems.
- Make governance visible: publish a clear policy that describes acceptable AI use, escalation routes, and sanctions for misuse.
- Invest in instructor training: equip those who will teach colleagues with both technical knowledge and organisational change skills.
- Treat time-savings as conditional: require units to demonstrate how saved time will be reallocated to teaching or student support.
- Run a two-phase rollout: intensive pilot with rigorous evaluation, then scaled deployment conditioned on meeting predefined metrics.
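The consent, logging, and audit recommendation above can be sketched as a thin wrapper around model calls. Both `audited_call` and the stub model are hypothetical stand-ins, not any vendor's API:

```python
# Minimal audit-logging sketch: wrap each model invocation so that
# prompt, response, and data classification are recorded for later review.
import json
import time

AUDIT_LOG: list[dict] = []

def audited_call(prompt: str, classification: str, model_call) -> str:
    """Invoke a model while appending a structured audit record."""
    response = model_call(prompt)
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "classification": classification,
        "prompt": prompt,
        "response": response,
    })
    return response

# Usage with a stub model for illustration:
reply = audited_call("Draft a meeting summary", "internal",
                     model_call=lambda p: f"[draft based on: {p}]")
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

Routing every call through one wrapper is what makes the audit trail complete; ad-hoc tool use outside the wrapper is exactly what the governance body would need to discourage.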
Conclusion
Liverpool John Moores University’s AI Academy is a substantial bet on workforce-centred adoption of generative AI: rather than confining AI to research labs or technology pilots, the institution is aiming to build capability across the university. If successful, that approach promises measurable efficiency gains and the reallocation of staff time toward higher-value work that benefits students.
However, the promise of reclaimed time and improved productivity is not automatic. Realising those benefits requires rigorous measurement, strong governance, careful data practices, and a strategic approach to procurement and vendor management. Headline projections — like the reported average of 4.5 hours saved per week — are useful as targets, but institutions must produce transparent baselines and independent evaluations to convert projections into credible institutional policy.
For IT leaders and university executives planning similar programmes, the central lesson is this: treat AI adoption as an organisational transformation, not a training exercise. Only by aligning policy, procurement, technical controls, and pedagogy can universities both harness the productivity of generative AI and protect the educational mission they exist to serve.
Source: "LJMU launches AI Academy with Multiverse for 134 staff", EdTech Innovation Hub (ETIH EdTech News)