A joint Aston University–University of Leeds partnership has secured a £3.4 million Research England Development Fund award to build the Artificial Intelligence Researcher Development Network Plus (AI.RDN+). The four‑year programme will map how publicly available AI tools (from ChatGPT to Microsoft Copilot and other consumer-facing agents) are being used across doctoral research, consult PhD researchers and research staff, and produce a shared portal of guidance, case studies, training and best-practice resources for supervisors, examiners and research‑enabling teams.

Futuristic scene around a glowing AI.RDN+ hub linking researchers at Midlands Innovation and Yorkshire Universities.

Background / Overview

Publicly available generative AI tools have been adopted at speed in academia, promising productivity gains, faster literature triage, and accessible writing support — but they also bring complex governance, privacy, integrity and sustainability challenges. Universities worldwide are shifting from outright bans to managed adoption: offering centrally vetted or enterprise-grade AI workspaces, publishing usage guidance, and piloting pedagogically framed applications while warning against punitive use of unreliable detectors and unchecked deployments. These patterns frame why AI.RDN+ is both timely and potentially influential.
AI.RDN+ — led within the consortium by Professor Phil Mizen (Aston University) together with academic leads from Leeds — intends to focus specifically on the doctoral ecosystem: doctoral researchers (PhD candidates), supervisors, examiners and the staff who enable research (e.g., technicians, professional services, research-development units). The network will survey current uptake, identify risk and innovation areas, co‑create guidance and training, and publish its outputs via a dedicated AI.RDN+ portal. The project also brings together regional research partnerships: the eight Midlands Innovation universities and the 12 institutions in the Yorkshire Universities consortium.

What AI.RDN+ will (explicitly) do​

Project scope and timeline​

  • Funding: £3.4 million awarded from the Research England Development Fund for a four‑year programme.
  • Geographic and institutional scope: Lead institutions Aston University and University of Leeds, plus the Midlands Innovation cluster (eight research-intensive universities) and the Yorkshire Universities group (12 institutions).
  • Core activities:
    • Large‑scale consultation and mixed-methods research with doctoral researchers, supervisors, examiners and research‑support staff.
    • A living resource base cataloguing publicly available AI tools and recommended uses in doctoral workflows.
    • Creation and piloting of training and professional-development modules for researchers and supervisors.
    • Development of an AI.RDN+ portal to publish guidance, case studies of good practice, and training assets.

Leadership and partners​

The network will be led by a cross‑institutional team (including Professor Phil Mizen at Aston and academic leads at Leeds). It will draw on a range of sector bodies and collaborators (for example Jisc, Vitae, the UK Council for Graduate Education and the National Centre for Universities and Business are cited as partners or supporters in project descriptions), which positions AI.RDN+ to feed outputs into national researcher‑development infrastructure.

Why this matters for doctoral research​

Doctoral work is distinct from undergraduate study in key ways: long project horizons, greater dependence on bespoke data and analysis pipelines, close supervision relationships, and a spectrum of research outputs (code, data, proprietary methods, unique archives). These differences mean that the introduction of public generative AI tools raises research‑specific questions that are not well covered by undergraduate-focused AI policies:
  • Data sensitivity and IP: doctoral datasets can be sensitive (human subjects, proprietary datasets, industry collaborations). Using consumer AI services without contractual safeguards risks data exposure.
  • Reproducibility and provenance: AI‑assisted literature summaries or code generation complicate traceability of research decisions and reproducibility if outputs are not logged and versioned.
  • Supervisor/examiner practice: supervisors and external examiners need consistent guidance on disclosure, attribution and acceptable AI use during thesis preparation and assessment.
  • Career training: doctoral researchers must develop practical AI literacy — not only how to use tools but how to evaluate outputs, redact and manage sensitive inputs, and understand contractual/vendor constraints.
AI.RDN+ explicitly positions itself to map these gaps and provide sector‑wide resources to address them.

Strengths and opportunities: what AI.RDN+ can deliver​

1. Scale and coordination across regional clusters​

By combining the Midlands Innovation and Yorkshire Universities networks, AI.RDN+ can draw on thousands of doctoral researchers and wide disciplinary diversity. This breadth supports robust, generalisable findings and the ability to pilot interventions across multiple institutional contexts.

2. Focused research on the doctoral lifecycle​

Most prior AI policy work has emphasised undergraduate integrity; AI.RDN+ specifically targets the doctoral lifecycle — supervision, examination, data stewardship and research‑enabling services — promising resources tailored to the unique needs of doctoral candidates and their supervisors.

3. Sector integration and pathways to practice​

The project is explicitly linked to national sector bodies and research‑development organisations (examples named in project summaries), which improves the chances that outputs (guidance, training modules, procurement recommendations) will be adopted beyond the immediate consortium.

4. Evidence‑based guidance and case studies​

Rather than prescribing single policies, AI.RDN+ promises to collect empirical evidence and co‑create guidance with stakeholders — a pragmatic approach that increases adoption and local relevance. The plan to publish case studies of best practice will make guidance actionable for supervisors and research support teams.

Risks, blind spots and limits to watch​

Data governance and vendor assurances are contractual, not absolute​

Many organisations rely on vendor claims (for example that enterprise Copilot or ChatGPT Edu will not feed campus prompts back into public model training). These are contractual assurances and must be verified and enforced; they are not technical guarantees that remove all risk. AI.RDN+'s guidance must emphasise procurement and legal checks, not only technical literacy.

Detection, integrity and false security​

Automated AI‑detection tools remain unreliable. Several institutional guidance documents and sector reports caution against punitive use of detectors because of false positives and negatives. Any examiner guidance or integrity policy developed by AI.RDN+ should avoid over‑reliance on detectors and instead recommend redesigning assessment and transparency practices.

Potential for uneven access and skills gaps​

If training and secure access are not equitably provisioned, doctoral researchers in resource‑poor labs or partner institutions may be excluded from the very tools and training intended to help them. AI.RDN+ must emphasise equitable roll‑out and low‑barrier resources.

Environmental and compute cost considerations​

Large‑scale usage of inference services can have material carbon and cost impacts. Institutions must weigh sustainability implications and seek vendor transparency on energy and emissions for model hosting. AI.RDN+ outputs should include lifecycle and sustainability guidance for research units considering wide deployment.

The limits of a four‑year project​

A four‑year funded window is substantial, but the ecosystem (models, vendors, regulation) evolves quickly. AI.RDN+ should design outputs as living resources—maintained by partners or sector bodies after initial funding ends—otherwise guidance will rapidly become outdated.

Practical guidance and recommendations for institutions (actionable checklist)​

The WindowsForum readership is technical and operationally minded; the following practical checklist distils what central IT, research‑IT teams, research‑development units and doctoral schools should consider now. These are pragmatic steps AI.RDN+ should validate and amplify.
  • Governance and procurement
    • Require vendor contractual clauses that explicitly prohibit use of institutional prompts/data for public model training unless explicitly authorised.
    • Obtain deletion, audit and retention clauses. Ensure SLAs include data residency and breach notification obligations.
    • Conduct Data Protection Impact Assessments (DPIAs) for AI services that process research participants’ data.
  • Controlled access and logging
    • Prefer enterprise or campus‑provisioned AI workspaces (e.g., licensed Copilot, ChatGPT Edu, or campus-hosted open-source stacks) instead of unvetted consumer accounts.
    • Implement central logging and metadata capture for AI interactions used in research workflows to preserve provenance and enable reproducibility (see the logging sketch after this list).
  • Redaction and input hygiene
    • Produce and require standard redaction templates and short training for researchers: remove PII, anonymise sensitive variables, and avoid pasting raw proprietary data into public tools.
  • Research reproducibility
    • Treat AI outputs as research artefacts: capture prompts, model version, date/time, and any post‑processing so examiners and auditors can assess provenance.
  • Supervision and examiner guidance
    • Issue explicit expectations on disclosure of AI use in theses and publications (e.g., a Methods subsection describing tool names, prompt approach and human edits).
    • Provide examiner checklists for interpreting AI‑assisted outputs without assuming misconduct.
  • Training and workforce development
    • Deliver short, role‑targeted modules: (a) PhD researchers — prompt literacy and evaluation of model outputs; (b) Supervisors — guidance on assessment, attribution and redaction; (c) Research‑enabling staff — procurement and DPIAs.
    • Make modules low‑burden and accessible (microlearning, recorded webinars, templates).
  • Pilot, evaluate, iterate
    • Start with small pilots (e.g., literature triage, code-assist in computational disciplines), measure outcomes (time savings, error rates, student/researcher satisfaction), and only scale proven practices.
  • Equity and inclusion
    • Ensure training and access are distributed equitably across faculties and partner institutions; monitor for disproportionate impacts (e.g., on language and disability).
  • Sustainability
    • Track inference usage and demand transparency from vendors on energy consumption; consider hybrid models (on‑premises hosting for sensitive workloads).
  • Communication and transparency
    • Provide clear, public-facing guidance and a central "AI hub" portal where students and staff can find policy, FAQs, training and recommended tools. AI.RDN+'s planned portal model is a strong example to emulate.
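
To make the logging and reproducibility items above concrete, the sketch below shows one way a research group might capture AI interactions as provenance artefacts. It is a minimal illustration under stated assumptions, not AI.RDN+'s design: the log_ai_interaction helper, field names and file path are invented for this example, and a real deployment would sit behind institutional access controls and a DPIA.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical append-only log file; in practice this would live behind
# institutional access controls rather than on a researcher's laptop.
LOG_PATH = Path("ai_provenance_log.jsonl")

def log_ai_interaction(prompt: str, response: str, model: str, model_version: str,
                       post_processing: str = "none") -> dict:
    """Record one AI interaction as a provenance artefact (one JSON object per line)."""
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "model_version": model_version,
        "prompt": prompt,                      # raw prompt; redact before logging if sensitive
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
        "post_processing": post_processing,    # e.g. "manually edited", "checked against sources"
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record

# Example usage: a doctoral researcher logging a literature-triage query.
if __name__ == "__main__":
    entry = log_ai_interaction(
        prompt="Summarise recent work on federated learning for health data.",
        response="(model output here)",
        model="example-assistant",
        model_version="2025-06",
        post_processing="summary manually checked against cited papers",
    )
    print(entry["timestamp_utc"], entry["response_sha256"][:12])
```

Storing a hash of the response rather than the full output keeps the log compact while still letting an examiner confirm that a retained draft matches what the model produced; institutions handling sensitive prompts would extend this with redaction and access controls.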

What AI.RDN+ needs to do well to be credible​

  • Methodological rigour: publish survey instruments and anonymised aggregate data so findings are reproducible and usable by other institutions.
  • Cross‑disciplinary balance: ensure STEM, social sciences, humanities and creative arts are all included; AI impacts and acceptable practices vary across fields.
  • Living guidance: build resources in modular formats so they can be updated as model versions and vendor terms change.
  • Procurement and legal templates: produce exemplar contract language and DPIA checklists that institutions can adapt — this is high‑value practical output.
  • Measurable outcomes: beyond surveys, produce evaluation frameworks that show whether training reduces misuse, improves research quality, or increases researcher confidence.
If AI.RDN+ commits to these practical outputs, the project can move beyond descriptive research into durable institutional change.

How doctoral researchers should approach AI tools today​

  • Treat AI outputs as assistive rather than authoritative. Use models to accelerate literature triage, draft notes or generate ideas — but always check citations, track provenance, and perform independent validation.
  • Keep a prompt log and annotate drafts with the origin of sections generated by AI; disclose use in thesis acknowledgements or a Methods section where relevant.
  • Ask supervisors early about local expectations and whether the research group has a departmental AI policy.
  • Protect participant and proprietary data: avoid pasting raw sensitive data into public chatbots.

How this ties into broader sector activity​

AI.RDN+ is one of several Research England–funded and university‑led efforts seeking to shape research AI practice. Similar initiatives emphasise secure provisioning (enterprise Copilot, ChatGPT Edu), central AI hubs, and literacy modules in multiple countries. The sector consensus emerging in case studies and guidance materials is that centrally managed offerings plus strong training and procurement safeguards are the pragmatic route away from blanket bans. AI.RDN+ adds value by centring doctoral research and by combining regional clusters to scale pilots and share best practice.

Editorial assessment: strengths, caveats and likely impact​

AI.RDN+’s greatest immediate strength is its focus and scale: doctoral research has been comparatively neglected in the AI policy conversation, and a targeted, evidence‑based programme backed by Research England can close that gap. The project’s network approach (Midlands + Yorkshire) and stated links with sector bodies increase the likelihood that practical outputs will propagate across the UK research ecosystem.
However, the programme will face several critical tests:
  • Translating findings into enforceable change: guidance is valuable, but procurement and legal frameworks determine what institutions can safely provide. AI.RDN+ must include legal/procurement expertise and exemplar contract language to move practice forward.
  • Keeping pace with rapid model evolution: models, vendor terms and national regulation will shift quickly; AI.RDN+ must commit to creating living resources and to handover arrangements that keep materials up to date post‑funding.
  • Sector uptake and resourcing: smaller universities or departments may lack capacity to implement recommended changes even with good guidance. The network should provide low‑effort templates and modular microtraining to lower the barrier to adoption.
If it meets those challenges, AI.RDN+ can become a durable national resource shaping how doctoral research responsibly adopts AI.

Final verdict and practical takeaways​

The Research England investment in AI.RDN+ is a sensible, pragmatic response to a pressing gap: doctoral researchers, supervisors and examiners must be equipped with practical guidance for responsibly using rapidly evolving public AI tools. The project’s design—empirical consultation, resource curation, training development and a public portal—matches the problem. For universities and technology teams, the immediate priorities are to secure enterprise‑grade provisioning where data sensitivity demands it, to adopt procurement clauses that limit vendor training use of institutional prompts, to implement robust logging/provenance practices, and to provide accessible microtraining for supervisors and researchers.
AI.RDN+ should be judged on its outputs: legal/procurement templates, reproducible evidence of doctoral AI usage, practical training modules, and an actively maintained portal that translates research into institutionally actionable policy. If the programme delivers these, it will create substantial practical value for the UK research ecosystem — and provide a model that other higher‑education systems can adapt.

AI in higher education is no longer theoretical; AI.RDN+ is an early institutional attempt to move from ad‑hoc practice to structured, evidence‑based support for the research community. Its success will depend not only on research outputs, but on building tools and templates that legal, procurement and IT teams can implement immediately — and on keeping guidance current as models and vendor terms evolve.

Source: EdTech Innovation Hub Aston University and University of Leeds win £3.4 million for AI tools researcher development network — EdTech Innovation Hub
 

Aston University and the University of Leeds have been awarded £3.4 million to lead a four‑year national network — the Artificial Intelligence Researcher Development Network Plus (AI.RDN+) — that will map how publicly available generative AI tools are being used across doctoral research and produce practical guidance, training and a living portal for supervisors, examiners and research‑enabling teams.

A team in a futuristic command room analyzes a holographic AI network across the UK.

Background

Publicly available generative AI tools such as ChatGPT and Microsoft Copilot have become ubiquitous in academic workflows, promising productivity gains but also raising complex challenges around governance, data privacy, reproducibility and assessment. Universities are moving from blanket bans toward managed adoption models — offering centrally vetted AI workspaces, issuing usage guidance, and piloting role‑specific training — and AI.RDN+ is explicitly positioned within this shift to address the distinctive needs of doctoral research.
AI.RDN+ is led by Professor Phil Mizen of Aston University with academic leads from the University of Leeds, including Dr Hosam Al‑Samarraie and Professor Arunangsu Chatterjee, and draws on the Midlands Innovation cluster (eight research‑intensive institutions) and the Yorkshire Universities consortium (12 institutions). The network’s work will also be supported by sector bodies including Jisc, Vitae, the UK Council for Graduate Education and the National Centre for Universities and Business.

Why this matters: doctoral research is not the same as undergraduate teaching​

Doctoral study differs from undergraduate education in several crucial ways: longer project timelines, bespoke (and often sensitive) datasets, custom analytical pipelines, closer supervisor–candidate relationships, and a wider range of research outputs (theses, code, datasets, unique archival material). Introducing consumer-facing generative AI into these workflows therefore raises issues that are distinct and often higher‑stakes. AI.RDN+ targets these specific fault lines.
Key distinctions that make doctoral contexts unique:
  • Data sensitivity: doctoral projects frequently process human subjects data or proprietary partner data where leaking prompts or content to public models could be legally and ethically damaging.
  • Reproducibility and provenance: AI‑assisted literature reviews, code generation, and drafting complicate the audit trail of research decisions unless prompts, model versions and post‑processing are logged.
  • Supervision and assessment: supervisors and examiners need clear, consistent policies on disclosure, attribution and acceptable use — areas where undergraduate policy work does not translate directly.

What AI.RDN+ will do (scope and outputs)​

AI.RDN+ is funded through the Research England Development Fund as a four‑year programme. The network has a practical, evidence‑driven remit and aims to combine large‑scale consultation with mixed‑methods research across the doctoral ecosystem. Planned deliverables include:
  • A living resource base that catalogues publicly available AI tools and recommends use cases and risk mitigations for doctoral workflows.
  • Large‑scale surveys and qualitative research with doctoral researchers, supervisors, examiners and research‑support staff to map uptake, attitudes and practices.
  • Co‑created training and professional development modules targeted to distinct roles (PhD researchers, supervisors, research‑IT and professional services).
  • An AI.RDN+ portal to publish guidance, case studies of practice, templates (procurement, DPIAs), and training assets.
These deliverables focus on practical institutional needs: procurement and contract language, Data Protection Impact Assessment (DPIA) templates, logging and provenance recommendations, and low‑barrier microtraining modules.

Leadership, partners and scale​

AI.RDN+ brings together regional research partnerships to achieve breadth and transferability in its findings. The leadership team and institutional network give the project two important advantages:
  • Scale: by coordinating the Midlands Innovation group and the Yorkshire Universities consortium, AI.RDN+ gains access to a broad base of doctoral researchers across disciplines and institution types — enabling more generalisable findings and multi‑site pilots.
  • Sector pathways: named sector partners — Jisc, Vitae, UK Council for Graduate Education, and the National Centre for Universities and Business — create direct routes for the project’s outputs to influence national researcher development infrastructure and practice.
Key named leads include Professor Phil Mizen (Aston), Dr Hosam Al‑Samarraie and Professor Arunangsu Chatterjee (Leeds), with additional academic collaborators listed from both universities and the regional networks. This mix provides methodological expertise across education, social science, information science and research‑support practice.

Strengths and opportunities: what AI.RDN+ can realistically deliver​

AI.RDN+ is well designed to address gaps in current university AI policy work. Its most notable strengths include:
  • Targeted focus on the doctoral lifecycle. Most institutional AI work to date has centred on undergraduate assessment integrity; AI.RDN+ deliberately focuses on supervision, thesis production, examiner practice and long‑term data stewardship. This is a critical and under‑served space.
  • Evidence‑based, co‑creative approach. The network plans to co‑create guidance with stakeholders rather than imposing top‑down rules. Practical case studies, pilots and modular resources increase the likelihood of adoption.
  • Practical, operational outputs. By committing to procurement templates, DPIA checklists, logging guidance and microtraining, AI.RDN+ can move from diagnosis to actionable institutional tools. These products are high‑value to IT, research‑IT and doctoral schools.
  • Sector integration and scalability. Links with national bodies offer the potential for AI.RDN+ outputs to be adopted widely beyond the initial consortium.

Risks, blind spots and limits to watch​

No research programme operates in a vacuum; AI.RDN+ faces several structural risks that could blunt impact unless mitigated.
  • Vendor assurances are contractual, not technical guarantees
    Vendors often provide contractual assurances (for example, that enterprise Copilot or ChatGPT Edu deployments will not feed prompts back into public model training). These assurances are only as secure as the contracts and enforcement mechanisms that back them; they are not technical or cryptographic guarantees. AI.RDN+ must emphasise procurement best practice and legal checks, not treat vendor claims as absolute.
  • Detection tools are unreliable and dangerous as the sole enforcement mechanism
    Automated AI‑detection tools produce false positives and negatives; punitive policies grounded in unreliable detectors risk unfair sanctions. Examiner and integrity guidance must instead focus on transparency, redesign of assessment, and reproducibility practices.
  • Rapid model evolution undermines static guidance
    Models, vendor terms and regulatory frameworks change quickly. A four‑year funded window is useful but not definitive; AI.RDN+ must commit to living resources and handover arrangements so guidance remains current after the grant ends.
  • Uneven capacity across institutions
    Smaller or resource‑constrained departments may struggle to implement recommendations that require procurement, monitoring or staffing. AI.RDN+ should prioritise low‑effort templates, microtraining and adaptable tools that lower adoption barriers.
  • Environmental and cost implications
    Widespread inference use has real cost and carbon consequences. Transparency from vendors on energy consumption and lifecycle impacts should be part of the guidance AI.RDN+ produces.

Recommendations for university IT, research‑IT and doctoral schools (actionable checklist)​

The WindowsForum audience needs concrete steps. The following checklist condenses practical priorities that AI.RDN+ is well placed to validate and amplify.
  • Governance and procurement
    • Require explicit contractual clauses preventing the use of institutional prompts/data for public model training unless authorised; include deletion, audit and retention clauses and breach notification provisions.
    • Conduct DPIAs for AI services that may process research participants’ data.
  • Controlled access and logging
    • Prefer enterprise or campus‑provisioned AI workspaces (licensed Copilot, ChatGPT Edu, or on‑prem/open‑source stacks) over unvetted consumer accounts.
    • Implement central logging and metadata capture for AI interactions used in research workflows to preserve provenance (prompt, model, version, timestamp, and any post‑processing).
  • Redaction and input hygiene
    • Produce standard redaction templates and short training to remove PII and avoid pasting raw sensitive or proprietary data into public chatbots (a minimal redaction sketch follows this list).
  • Supervision, reproducibility and examiner practice
    • Treat AI outputs as research artefacts: require disclosure of tool names, prompt approaches and human edits in methods sections or thesis acknowledgements where relevant.
    • Provide examiners with checklists for interpreting AI‑assisted materials without assuming misconduct.
  • Training and workforce development
    • Deliver role‑targeted, low‑burden modules: prompt literacy for PhD researchers; assessment and attribution guidance for supervisors; procurement and DPIA training for research‑enabling staff. Make modules modular and reusable.
  • Pilot, measure and iterate
    • Start with small pilots that measure time savings, error rates and researcher confidence (e.g., literature triage, code assist in computational labs) and scale only proven practices.
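
As a companion to the redaction item above, here is a minimal sketch of a pattern‑based redaction helper. The patterns and the redact function are illustrative assumptions rather than an AI.RDN+ deliverable; pattern matching catches common PII shapes but is a backstop for trained human judgement, not a substitute for it.

```python
import re

# Illustrative patterns only; a real redaction template would be developed with
# the institution's data-protection team and tested against discipline-specific data.
REDACTION_PATTERNS = {
    "EMAIL":    re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "UK_PHONE": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "NHS_NO":   re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # 10-digit NHS-number shape
    "POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def redact(text: str) -> str:
    """Replace common PII shapes with typed placeholders before a prompt leaves the institution."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Example usage: cleaning a draft prompt before pasting it into a public chatbot.
if __name__ == "__main__":
    draft = "Participant P07 (jane.doe@example.ac.uk, postcode LS2 9JT) reported side effects."
    print(redact(draft))
    # -> "Participant P07 ([EMAIL REDACTED], postcode [POSTCODE REDACTED]) reported side effects."
```

Note that free‑text identifiers (names, case descriptions, pseudonym keys) will slip past any regular expression, which is why redaction training for researchers matters as much as the tooling.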

Editorial analysis: what success looks like for AI.RDN+​

For AI.RDN+ to deliver durable value, the project must move beyond descriptive reporting to deliver tangible, transferable assets that institutions can adopt with low friction. Concrete markers of success should include:
  • Reproducible evidence of doctoral AI usage: published instruments, anonymised aggregate data and methodology so others can validate and build on the findings.
  • Practical procurement and DPIA templates: legally usable language and checklists that IT and legal teams can drop into vendor negotiations.
  • Role‑specific training modules and evaluation frameworks: microlearning assets with assessment frameworks showing whether training measurably improves appropriate practice and research quality.
  • An actively maintained public portal: a living repository that is updated as models, vendor terms and regulations change, ideally with clear handover to sector bodies to maintain currency post‑grant.
If AI.RDN+ achieves these outputs and partners with national sector bodies to maintain them, the network could become a durable national resource shaping responsible AI adoption in doctoral research.

Technical specifics to verify and cautionary flags​

Several specific technical or contractual claims commonly arise in campus conversations; AI.RDN+ should explicitly verify these claims and communicate them clearly.
  • “Enterprise Copilot / ChatGPT Edu ensures prompts are not used for model training.”
    Caution: vendor statements are contractual assurances; they should be confirmed in contract clauses with audit and deletion rights, not treated as technical proof. AI.RDN+ should prioritise exemplar contract wording and enforcement checks.
  • “AI detection tools reliably identify machine‑generated text.”
    Caution: current detectors produce false positives and false negatives. Guidance should avoid recommending punitive policies that rely solely on detectors; AI.RDN+ should recommend transparency, redesigned assessment and reproducibility practices instead.
  • “Logging every prompt is straightforward and low-cost.”
    Caution: systematic logging with metadata capture (model version, timestamp, prompt hash) has storage, privacy and governance implications. DPIAs and access‑control procedures are necessary to avoid creating new compliance risks. AI.RDN+ should provide pragmatic logging designs tailored to data sensitivity (a minimal hashed‑logging sketch appears at the end of this section).
Where claims cannot be independently established from public statements or where vendors’ contractual terms are confidential, the project should flag those as unverifiable without access to contracts or audits, and recommend institution‑level procurement and audit strategies.
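
To illustrate the logging caution above, the sketch below shows one privacy‑conscious design for sensitive workloads: the central log stores a keyed hash of each prompt rather than its content, so provenance can later be verified against the researcher's locally retained prompt without the log itself holding sensitive text. The salt handling, field names and hashed_log_record helper are assumptions for illustration, not a verified institutional pattern.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical institution-held secret; how it is stored and rotated (e.g. in a
# key vault) is itself a governance decision that belongs in the DPIA.
LOG_SALT = b"replace-with-secret-from-key-management"

def hashed_log_record(prompt: str, model: str, model_version: str) -> str:
    """Build a provenance record that stores only a keyed hash of the prompt.

    The researcher keeps the raw prompt locally; the central log can later
    confirm "this is the prompt that was used" without ever holding its content.
    """
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "model_version": model_version,
        "prompt_hmac_sha256": hmac.new(LOG_SALT, prompt.encode("utf-8"),
                                       hashlib.sha256).hexdigest(),
    }
    return json.dumps(record)

# Example: a sensitive prompt never leaves the researcher's machine in clear text.
if __name__ == "__main__":
    print(hashed_log_record(
        prompt="Summarise interview transcript P07 (contains identifiable detail).",
        model="example-assistant",
        model_version="2025-06",
    ))
```

Using a keyed hash (HMAC) rather than a plain hash prevents anyone with access to the log from confirming guesses about prompt content, at the cost of making the salt itself an asset that must be protected.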

How AI.RDN+ fits into broader sector activity​

AI.RDN+ is not an isolated initiative; it joins other Research England‑backed and university‑led efforts to shape AI practices in research. The sector trend is clear: centrally managed offerings (enterprise AI workspaces), strong procurement safeguards, and literacy modules are emerging as the pragmatic alternative to blanket bans. AI.RDN+ adds value by focusing on the doctoral lifecycle and by combining regional clusters to scale pilots and share best practice across institutions.
This networked, sector‑integrated approach increases the probability that well‑designed outputs will be adopted nationally — provided the project commits to living resources and measurable evaluations.

Practical next steps for IT teams (short list)​

  • Review current contracts for any campus AI service and flag language about data usage, model training, deletion and audit rights. If missing, engage procurement to demand these clauses.
  • Build a pilot controlled AI workspace for research teams handling sensitive data and implement logging that captures prompt metadata and model version. Test redaction workflows.
  • Draft a short examiner guidance note that focuses on transparency and reproducibility rather than automated detection. Share with doctoral schools for comment.
  • Prepare low‑effort microtraining modules (15–30 minutes) for supervisors and researchers that cover prompt hygiene, provenance capture and disclosure expectations.

Conclusion​

AI.RDN+ represents a timely, pragmatic and well‑resourced attempt to bring empirical evidence, sector coordination and practical tools to the pressing question of how publicly available generative AI tools should be used in doctoral research. The project’s strengths — regional scale, sector partnerships and a focus on operational outputs — make it a promising candidate to produce durable resources for universities. However, the initiative will only realise its potential if it treats vendor assurances critically, avoids over‑reliance on fragile detection tools, prioritises living resources and handover mechanisms, and produces low‑barrier templates that smaller institutions can adopt.
For technical teams and doctoral schools, the immediate priorities are clear: secure contractual protections, build controlled AI workspaces where necessary, implement pragmatic logging and provenance practices, design role‑targeted microtraining, and adopt transparency and assessment redesign principles that recognise AI as assistive rather than authoritative. If AI.RDN+ can deliver evidence, templates and maintainable resources on these fronts, it will have made a decisive contribution to the safe, ethical and practical adoption of AI in doctoral research.

Source: BusinessCloud Universities to lead £3.4m project on AI in doctoral research
 
