Aston University and the University of Leeds have been awarded £3.4 million to lead a four‑year national network — the Artificial Intelligence Researcher Development Network Plus (AI.RDN+) — that will map how publicly available generative AI tools are being used across doctoral research and produce practical guidance, training and a living portal for supervisors, examiners and research‑enabling teams.

Background

Publicly available generative AI tools such as ChatGPT and Microsoft Copilot have become ubiquitous in academic workflows, promising productivity gains but also raising complex challenges around governance, data privacy, reproducibility and assessment. Universities are moving from blanket bans toward managed adoption models — offering centrally vetted AI workspaces, issuing usage guidance, and piloting role‑specific training — and AI.RDN+ is explicitly positioned within this shift to address the distinctive needs of doctoral research.
AI.RDN+ is led by Professor Phil Mizen of Aston University with academic leads from the University of Leeds, including Dr Hosam Al‑Samarraie and Professor Arunangsu Chatterjee, and draws on the Midlands Innovation cluster (eight research‑intensive institutions) and the Yorkshire Universities consortium (12 institutions). The network’s work will also be supported by sector bodies including Jisc, Vitae, the UK Council for Graduate Education and the National Centre for Universities and Business.

Why this matters: doctoral research is not the same as undergraduate teaching​

Doctoral study differs from undergraduate education in several crucial ways: longer project timelines, bespoke and often sensitive datasets, custom analytical pipelines, closer supervisor–candidate relationships, and a wider range of research outputs (theses, code, datasets, unique archival material). The consequences of introducing consumer-facing generative AI into these workflows are therefore distinct and often more serious. AI.RDN+ targets these specific fault lines.
Key distinctions that make doctoral contexts unique:
  • Data sensitivity: doctoral projects frequently process human subjects data or proprietary partner data where leaking prompts or content to public models could be legally and ethically damaging.
  • Reproducibility and provenance: AI‑assisted literature reviews, code generation, and drafting complicate the audit trail of research decisions unless prompts, model versions and post‑processing are logged (a minimal provenance-record sketch follows this list).
  • Supervision and assessment: supervisors and examiners need clear, consistent policies on disclosure, attribution and acceptable use — areas where undergraduate policy work does not translate directly.
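To make the logging point concrete, here is a minimal sketch (in Python) of the kind of provenance record a doctoral workflow might keep for each AI-assisted step. It is illustrative only: the field names are hypothetical rather than an AI.RDN+ specification, and institutions would adapt the fields to their own data-sensitivity requirements.

# Illustrative provenance record for an AI-assisted research step.
# Field names are hypothetical, not an AI.RDN+ specification.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIProvenanceRecord:
    tool: str              # e.g. "ChatGPT", "Microsoft Copilot"
    model_version: str     # model identifier reported by the vendor
    prompt: str            # the prompt as submitted, or a redacted copy
    output_use: str        # what the output was used for
    post_processing: str   # human edits, checks or verification applied
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example: a literature-triage query recorded alongside the thesis materials.
record = AIProvenanceRecord(
    tool="ChatGPT",
    model_version="gpt-4o (web interface)",
    prompt="Summarise the main methodological debates in papers A, B and C",
    output_use="Draft summary for the literature review chapter",
    post_processing="Rewritten by the candidate; all citations checked against the originals",
)

Keeping such records alongside the thesis materials is what turns "the candidate used AI" from an unverifiable claim into an auditable part of the research record.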

What AI.RDN+ will do (scope and outputs)​

AI.RDN+ is funded through the Research England Development Fund as a four‑year programme. The network has a practical, evidence‑driven remit and aims to combine large‑scale consultation with mixed‑methods research across the doctoral ecosystem. Planned deliverables include:
  • A living resource base that catalogs publicly available AI tools and recommends use cases and risk mitigations for doctoral workflows.
  • Large‑scale surveys and qualitative research with doctoral researchers, supervisors, examiners and research‑support staff to map uptake, attitudes and practices.
  • Co‑created training and professional development modules targeted to distinct roles (PhD researchers, supervisors, research‑IT and professional services).
  • An AI.RDN+ portal to publish guidance, case studies of practice, templates (procurement, DPIAs), and training assets.
These deliverables focus on practical institutional needs: procurement and contract language, Data Protection Impact Assessment (DPIA) templates, logging and provenance recommendations, and low‑barrier microtraining modules.

Leadership, partners and scale​

AI.RDN+ brings together regional research partnerships to achieve breadth and transferability in its findings. The leadership team and institutional network give the project two important advantages:
  • Scale: by coordinating the Midlands Innovation group and the Yorkshire Universities consortium, AI.RDN+ gains access to a broad base of doctoral researchers across disciplines and institution types — enabling more generalisable findings and multi‑site pilots.
  • Sector pathways: named sector partners — Jisc, Vitae, UK Council for Graduate Education, and the National Centre for Universities and Business — create direct routes for the project’s outputs to influence national researcher development infrastructure and practice.
Key named leads include Professor Phil Mizen (Aston), Dr Hosam Al‑Samarraie and Professor Arunangsu Chatterjee (Leeds), with additional academic collaborators listed from both universities and the regional networks. This mix provides methodological expertise across education, social science, information science and research‑support practice.

Strengths and opportunities: what AI.RDN+ can realistically deliver​

AI.RDN+ is well designed to address gaps in current university AI policy work. Its most notable strengths include:
  • Targeted focus on the doctoral lifecycle. Most institutional AI work to date has centred on undergraduate assessment integrity; AI.RDN+ deliberately focuses on supervision, thesis production, examiner practice and long‑term data stewardship. This is a critical and under‑served space.
  • Evidence‑based, co‑creative approach. The network plans to co‑create guidance with stakeholders rather than imposing top‑down rules. Practical case studies, pilots and modular resources increase the likelihood of adoption.
  • Practical, operational outputs. By committing to procurement templates, DPIA checklists, logging guidance and microtraining, AI.RDN+ can move from diagnosis to actionable institutional tools. These products are high‑value to IT, research‑IT and doctoral schools.
  • Sector integration and scalability. Links with national bodies offer the potential for AI.RDN+ outputs to be adopted widely beyond the initial consortium.

Risks, blind spots and limits to watch​

No research programme operates in a vacuum; AI.RDN+ faces several structural risks that could blunt impact unless mitigated.
  • Vendor assurances are contractual, not technical guarantees
    Vendors often provide contractual assurances (for example, that enterprise Copilot or ChatGPT Edu deployments will not feed prompts back into public model training). These assurances are only as secure as the contracts and enforcement mechanisms that back them; they are not technical or cryptographic guarantees. AI.RDN+ must emphasize procurement best practice and legal checks, not treat vendor claims as absolute.
  • Detection tools are unreliable and dangerous as the sole enforcement mechanism
    Automated AI‑detection tools produce false positives and negatives; punitive policies grounded in unreliable detectors risk unfair sanctions. Examiner and integrity guidance must instead focus on transparency, redesign of assessment, and reproducibility practices.
  • Rapid model evolution undermines static guidance
    Models, vendor terms and regulatory frameworks change quickly. A four‑year funded window is useful but not definitive; AI.RDN+ must commit to living resources and handover arrangements so guidance remains current after the grant ends.
  • Uneven capacity across institutions
    Smaller or resource‑constrained departments may struggle to implement recommendations that require procurement, monitoring or staffing. AI.RDN+ should prioritize low‑effort templates, microtraining and adaptable tools that lower adoption barriers.
  • Environmental and cost implications
    Widespread inference use has real cost and carbon consequences. Transparency from vendors on energy consumption and lifecycle impacts should be part of the guidance AI.RDN+ produces.

Recommendations for university IT, research‑IT and doctoral schools (actionable checklist)​

The WindowsForum audience needs concrete steps. The following checklist condenses practical priorities that AI.RDN+ is well placed to validate and amplify.
  • Governance and procurement
    Require explicit contractual clauses preventing the use of institutional prompts/data for public model training unless authorised; include deletion, audit and retention clauses and breach notification provisions.
    Conduct DPIAs for AI services that may process research participants’ data.
  • Controlled access and logging
    Prefer enterprise or campus‑provisioned AI workspaces (licensed Copilot, ChatGPT Edu, or on‑prem/open‑source stacks) over unvetted consumer accounts.
    Implement central logging and metadata capture for AI interactions used in research workflows to preserve provenance (prompt, model, version, timestamp, and any post‑processing).
  • Redaction and input hygiene
    Produce standard redaction templates and short training to remove PII and avoid pasting raw sensitive or proprietary data into public chatbots (a minimal redaction sketch follows this checklist).
  • Supervision, reproducibility and examiner practice
    Treat AI outputs as research artefacts: require disclosure of tool names, prompt approaches and human edits in methods sections or thesis acknowledgements where relevant.
    Provide examiners with checklists for interpreting AI‑assisted materials without assuming misconduct.
  • Training and workforce development
    Deliver role‑targeted, low‑burden modules: prompt literacy for PhD researchers; assessment and attribution guidance for supervisors; procurement and DPIA training for research‑enabling staff. Make modules modular and reusable.
  • Pilot, measure and iterate
    Start with small pilots that measure time savings, error rates and researcher confidence (e.g., literature triage, code assist in computational labs) and scale only proven practices.
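To illustrate the redaction and input-hygiene item above, the following minimal Python sketch shows the kind of pre-submission scrub a redaction template might describe. The patterns are examples chosen for illustration only; a real template would need project-specific identifiers and human review before anything is pasted into a public chatbot.

# Illustrative input-hygiene sketch: strip common identifiers before text is
# sent to a public chatbot. Patterns are examples only, not exhaustive.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d[\d\s-]{8,}\d\b"),                      # simple digit-run pattern
    "POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"), # rough UK postcode shape
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before sending text out."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Interviewee: jane.doe@example.ac.uk, 0113 496 0000, LS2 9JT"))

Automated scrubbing of this kind reduces, but does not remove, the risk of leaking sensitive material; the short training mentioned above should still teach researchers to review what they submit.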

Editorial analysis: what success looks like for AI.RDN+​

For AI.RDN+ to deliver durable value, the project must move beyond descriptive reporting to deliver tangible, transferable assets that institutions can adopt with low friction. Concrete markers of success should include:
  • Reproducible evidence of doctoral AI usage: published instruments, anonymised aggregate data and methodology so others can validate and build on the findings.
  • Practical procurement and DPIA templates: legally usable language and checklists that IT and legal teams can drop into vendor negotiations.
  • Role‑specific training modules and evaluation frameworks: microlearning assets with assessment frameworks showing whether training measurably improves appropriate practice and research quality.
  • An actively maintained public portal: a living repository that is updated as models, vendor terms and regulations change, ideally with clear handover to sector bodies to maintain currency post‑grant.
If AI.RDN+ achieves these outputs and partners with national sector bodies to maintain them, the network could become a durable national resource shaping responsible AI adoption in doctoral research.

Technical specifics to verify and cautionary flags​

Several specific technical or contractual claims commonly arise in campus conversations; AI.RDN+ should explicitly verify these claims and communicate them clearly.
  • “Enterprise Copilot / ChatGPT Edu ensures prompts are not used for model training.”
    Caution: vendor statements are contractual assurances; they should be confirmed in contract clauses with audit and deletion rights, not treated as technical proof. AI.RDN+ should prioritise exemplar contract wording and enforcement checks.
  • “AI detection tools reliably identify machine‑generated text.”
    Caution: current detectors produce false positives and false negatives. Guidance should avoid recommending punitive policies that rely solely on detectors; AI.RDN+ should recommend transparency, redesigned assessment and reproducibility practices instead.
  • “Logging every prompt is straightforward and low‑cost.”
    Caution: systematic logging with metadata capture (model version, timestamp, prompt hash) has storage, privacy and governance implications. DPIAs and access‑control procedures are necessary to avoid creating new compliance risks. AI.RDN+ should provide pragmatic logging designs tailored to data sensitivity (a minimal hashing sketch appears at the end of this section).
Where claims cannot be independently established from public statements or where vendors’ contractual terms are confidential, the project should flag those as unverifiable without access to contracts or audits, and recommend institution‑level procurement and audit strategies.
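On the logging claim above, one pragmatic middle ground, offered here as an assumption rather than anything AI.RDN+ has specified, is to store a hash of the prompt plus metadata instead of the raw prompt text, so the log itself carries less sensitive content while still supporting provenance checks.

# Minimal sketch: record a prompt hash and metadata instead of the raw prompt,
# reducing the sensitivity of the provenance log itself (illustrative only).
import hashlib
import json
from datetime import datetime, timezone

def provenance_entry(prompt: str, model_version: str) -> str:
    entry = {
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "model_version": model_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry)

print(provenance_entry("Summarise interview transcript 12", "example-model-2025-01"))

The trade-off is that a hash only proves which prompt was used if the original prompt is retained somewhere the researcher controls; DPIA and access-control decisions still apply to that retained copy.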

How AI.RDN+ fits into broader sector activity​

AI.RDN+ is not an isolated initiative; it joins other Research England‑backed and university‑led efforts to shape AI practices in research. The sector trend is clear: centrally managed offerings (enterprise AI workspaces), strong procurement safeguards, and literacy modules are emerging as the pragmatic alternative to blanket bans. AI.RDN+ adds value by focusing on the doctoral lifecycle and by combining regional clusters to scale pilots and share best practice across institutions.
This networked, sector‑integrated approach increases the probability that well‑designed outputs will be adopted nationally — provided the project commits to living resources and measurable evaluations.

Practical next steps for IT teams (short list)​

  • Review current contracts for any campus AI service and flag language about data usage, model training, deletion and audit rights. If missing, engage procurement to demand these clauses.
  • Build a pilot controlled AI workspace for research teams handling sensitive data and implement logging that captures prompt metadata and model version (a minimal logging-wrapper sketch follows this list). Test redaction workflows.
  • Draft a short examiner guidance note that focuses on transparency and reproducibility rather than automated detection. Share with doctoral schools for comment.
  • Prepare low‑effort microtraining modules (15–30 minutes) for supervisors and researchers that cover prompt hygiene, provenance capture and disclosure expectations.
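As a companion to the pilot-workspace item above, the sketch below shows one way a research-IT team might wrap calls to a vetted AI service so that prompt metadata and model version are captured automatically. call_model is a hypothetical placeholder, not a real vendor API, and the append-only log file would itself need access controls and a DPIA.

# Hypothetical wrapper: capture prompt metadata and model version around an AI call.
# call_model() is a stand-in for whatever vetted service the institution provisions.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_provenance.jsonl")  # append-only log; protect with access controls

def call_model(prompt: str) -> dict:
    """Placeholder for the institution's approved AI service client."""
    return {"text": "...", "model_version": "example-model-2025-01"}

def logged_call(prompt: str, user_id: str) -> str:
    response = call_model(prompt)
    entry = {
        "user": user_id,
        "prompt": prompt,  # or a redacted/hashed copy for sensitive projects
        "model_version": response["model_version"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return response["text"]

# Usage: text = logged_call("Summarise this public abstract ...", user_id="phd-candidate-01")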

Conclusion​

AI.RDN+ represents a timely, pragmatic and well‑resourced attempt to bring empirical evidence, sector coordination and practical tools to the pressing question of how publicly available generative AI tools should be used in doctoral research. The project’s strengths — regional scale, sector partnerships and a focus on operational outputs — make it a promising candidate to produce durable resources for universities. However, the initiative will only realise its potential if it treats vendor assurances critically, avoids over‑reliance on fragile detection tools, prioritises living resources and handover mechanisms, and produces low‑barrier templates that smaller institutions can adopt.
For technical teams and doctoral schools, the immediate priorities are clear: secure contractual protections, build controlled AI workspaces where necessary, implement pragmatic logging and provenance practices, design role‑targeted microtraining, and adopt transparency and assessment redesign principles that recognise AI as assistive rather than authoritative. If AI.RDN+ can deliver evidence, templates and maintainable resources on these fronts, it will have made a decisive contribution to the safe, ethical and practical adoption of AI in doctoral research.

Source: BusinessCloud, "Universities to lead £3.4m project on AI in doctoral research"
 
