NIU AI Tools for Research: Ethics and PEACEful Use

The Founders Memorial Library’s recent “AI Tools for Research and Productivity” workshop delivered a clear, pragmatic message to NIU students: artificial intelligence can be a powerful research companion — but only when used ethically, transparently, and with human judgment firmly in the driver’s seat.

Background

For students and researchers wrestling with the volume of academic literature and the pressure to produce polished deliverables, generative AI and research-assistant tools promise speed and scale. Libraries and teaching centers across higher education have rushed to respond, offering training that moves beyond hype to teach how to use these tools responsibly. At Northern Illinois University, that response has taken shape as a series of practical workshops and institutional guidance that foreground academic integrity, data protection, and user agency as central concerns.
The NIU workshop, hosted virtually via Microsoft Teams by Student Success librarian Kimberly Shotick as part of the library’s “Mission in the Stacks” program, targeted both undergraduate and graduate audiences. It combined a live demonstration of current AI research tools with a concise ethical framework — the “PEACEful Use of AI” — that attendees can apply when making choices about when, how, and whether to integrate AI into their academic workflows.

Workshop overview

The session began with a philosophical but practical orientation: AI is best framed as a supplement to student work rather than a replacement. Shotick emphasized the risk of over-reliance, warning that students who lean too heavily on automated assistants may cede critical thinking and synthesis to algorithmic outputs. From there, the workshop shifted into hands-on territory: scenario-based questions, interactive Teams polling, and demonstrations of free research tools available to NIU students.
Attendees were shown how AI can help with early-stage problems — generating research questions, suggesting search strategies, organizing literature, and managing project timelines. The event also walked students through the limits of each tool and the verification steps required to avoid hallucinated claims or incorrect citations.
The session’s instructional core was twofold:
  • Teach students which tools fit which phases of research.
  • Provide a simple, memorable ethical checklist (the “PEACEful Use”) students can use before they submit any AI-assisted work.

The “PEACEful Use of AI” framework — a practical ethical checklist

Shotick’s framework is intentionally mnemonic and actionable. It asks users to pause and validate a short set of concerns before using an AI tool in academic work:
  • Policies: Does the tool’s use comply with institutional rules, course syllabi, and applicable laws? Is the tool approved for use with sensitive or university data?
  • Ethics: Does the use align with disciplinary ethical norms and broader values like honesty and fairness?
  • Agency: Do you retain control over your data and the outputs? Who owns what you upload, and who can access it?
  • Critical Thinking: Does the tool support your reasoning rather than replace your learning process?
  • Environment: Have you considered environmental costs and trade-offs — for example, the carbon footprint of large models?
This checklist deliberately mixes institutional compliance (Policies), personal responsibility (Agency, Critical Thinking), and broader considerations (Ethics, Environment), giving students a multi-dimensional way to evaluate decisions that go beyond “Is this tool allowed?” to include long-term, societal questions.
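To make the mnemonic easy to reuse, the checklist can even be captured as a small script. The sketch below is not an NIU or library artifact; it is a hypothetical Python helper whose questions paraphrase the five PEACEful dimensions:

```python
# Hypothetical helper that turns the PEACEful mnemonic into a pre-flight
# prompt. The questions paraphrase the workshop's checklist; the script
# itself is illustrative, not an official NIU or library tool.

PEACEFUL_CHECKS = {
    "Policies": "Does this use comply with course, institutional, and legal rules?",
    "Ethics": "Does it align with disciplinary norms, honesty, and fairness?",
    "Agency": "Do you retain control and ownership of your data and outputs?",
    "Critical Thinking": "Does the tool support your reasoning rather than replace it?",
    "Environment": "Have you weighed the environmental cost against the benefit?",
}

def run_peaceful_checklist() -> bool:
    """Ask each PEACEful question; return True only if every answer is yes."""
    for dimension, question in PEACEFUL_CHECKS.items():
        answer = input(f"[{dimension}] {question} (y/n) ").strip().lower()
        if answer != "y":
            print(f"Stop: resolve the '{dimension}' concern before using the tool.")
            return False
    print("All checks passed; proceed, and document your AI use.")
    return True

if __name__ == "__main__":
    run_peaceful_checklist()
```

Running a pass like this before each AI-assisted task enforces exactly the pause the framework asks for.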

Tools demonstrated and what they do

Three representative tools featured in the workshop showcase the range of functionality available to student researchers. Each tool has clear advantages — and important caveats.

Microsoft Copilot

  • What it does: Integrates with productivity apps to summarize documents, draft text, and help with ideation. Copilot can parse content in documents and suggest outlines, summaries, and next steps.
  • Why students like it: It speeds up mechanical tasks like summarization or formatting and can help synthesize notes across multiple files.
  • Caveats: Copilot’s outputs are only as reliable as the prompts and the source material. It can present plausible-sounding but incorrect information (hallucinations). Students should verify facts and preserve their own analytical voice.

Elicit

  • What it does: Designed specifically for research, Elicit helps find and extract information from peer-reviewed literature, supports systematic-review workflows, and can generate structured summaries and data extraction tables.
  • Why students like it: It reduces the tedium of initial discovery and can accelerate literature-mapping and evidence-synthesis tasks.
  • Caveats: Elicit can miss grey literature and non-indexed materials; its extraction is automated and must be checked for accuracy. It’s a tool for speeding up initial reviews, not for replacing thorough manual appraisal.

ResearchRabbit

  • What it does: Builds citation-based visual maps that show how papers connect through references, co-authorship, and topic clusters.
  • Why students like it: It helps researchers discover related literature they might otherwise miss and visualizes the intellectual lineage of a topic.
  • Caveats: Citation networks reflect dynamics of publication, not necessarily quality or relevance; popularity and citation practices can skew what looks prominent in a map.
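ResearchRabbit’s service and data pipeline are proprietary, but the underlying idea of a citation map is easy to illustrate. The minimal sketch below uses the networkx library and invented paper IDs; it is a generic illustration of citation-graph prominence, not ResearchRabbit’s actual method:

```python
# Illustrative only: a graph whose nodes are papers and whose edges are
# citations, sketched with networkx. Paper IDs and edges are invented.

import networkx as nx

# Directed edge (a, b) means paper `a` cites paper `b`.
citations = [
    ("smith2021", "lee2018"),
    ("smith2021", "garcia2019"),
    ("chen2022", "lee2018"),
    ("chen2022", "smith2021"),
]

graph = nx.DiGraph(citations)

# Papers cited by many others in this small corpus look "prominent",
# which, as the caveat above notes, reflects citation practice, not quality.
for paper, cited_by in sorted(graph.in_degree, key=lambda x: -x[1]):
    print(f"{paper}: cited by {cited_by} paper(s) in this set")
```

Note how in-degree alone drives “prominence” here; that is precisely why citation maps should guide discovery rather than substitute for judgments of quality or relevance.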

Why institutional guidance matters: NIU’s approach

NIU’s broader guidance, reflected in campus webpages and teaching-center materials, underscores several consistent priorities:
  • Transparency: Students are encouraged — and often required — to declare AI use in assignments where it contributed substantively.
  • Privacy and data protection: Only approved institutional tools should be used with private or regulated data to avoid accidental disclosures.
  • Faculty prerogative: Course-specific AI rules can vary; instructors retain the right to permit, limit, or prohibit AI tools in their classes.
  • Academic integrity: The use of AI to write whole assignments without proper attribution or instructor approval is discouraged and may constitute misconduct.
These principles aim to balance enabling productivity with preserving the learning outcomes that assignments are designed to measure.

Critical analysis: strengths, limitations, and blind spots

AI-driven research tools deliver clear, measurable benefits, but their adoption carries nontrivial risks. The analysis below examines both sides and offers practical guardrails.

Strengths

  • Faster literature discovery: Tools like Elicit and ResearchRabbit surface relevant sources and citation connections much faster than manual searching.
  • Reduced friction for early-stage work: Generative assistants help overcome writer’s block and structure research timelines, which can be especially valuable for students new to academic workflows.
  • Lower barrier to entry: Students without extensive library- or database-search skills can use AI to level the playing field and access foundational materials more quickly.
  • Scaffolding for research methods: Automated data extraction and mapping can teach novices how to structure systematic reviews and how pieces of the scholarly conversation fit together.

Limitations and risks

  • Hallucination and factual drift: Generative systems can invent citations, misreport study outcomes, and present inaccurate summaries. This is dangerous in academic contexts where accuracy matters.
  • Erosion of critical skills: If students outsource analysis and synthesis to AI, they risk losing the very skills assignments aim to develop — critical appraisal, argumentation, and original thinking.
  • Privacy and IP exposure: Uploading proprietary datasets, human-subjects data, or in-progress manuscripts to third-party AI services can violate data-use agreements, IRB rules, and copyright.
  • Uneven coverage and bias: Tools trained on particular corpora can underrepresent certain regions, languages, or disciplines. Citation maps reward the already well-cited, potentially reinforcing disciplinary blind spots.
  • Environmental costs: Large models require significant compute; routine blind use of costly models for marginal gains can be environmentally irresponsible.

Blind spots in common advice

  • Policy guidance often focuses on whether students should cite AI use, but says far less about how to contextualize AI contributions in methodology sections or acknowledgments. Likewise, few campus resources clearly instruct students on how to verify AI-generated claims or how to document the prompts and tool versions they used.

Practical, step-by-step guidance for students

To apply the workshop lessons practically, here is a reproducible workflow students can adopt when using AI tools in research.
  • Before you start, run the PEACEful checklist:
      ◦ Check your syllabus and course policy.
      ◦ Confirm the tool complies with NIU or institutional data rules.
      ◦ Decide who owns any material you upload.
  • Use AI to scaffold rather than to deliver:
      ◦ Ask AI for topic ideas or research questions, then refine those manually.
      ◦ Use extraction tools to pull candidate citations, then retrieve and read the original papers.
  • Verify every factual claim (a verification sketch follows this list):
      ◦ Cross-check AI summaries against the source article’s abstract, methods, and results.
      ◦ For statistics or numeric claims, open the original paper or dataset to confirm.
  • Document usage:
      ◦ Keep a short log with the tool name, date, prompt text (or a paraphrase), and how you used the output.
      ◦ When relevant, include an explicit statement in your methodology or acknowledgments to disclose AI assistance.
  • Preserve authorship and voice:
      ◦ Edit any AI-generated text thoroughly so the final product reflects your reasoning and writing.
      ◦ Use AI-generated outlines as a starting point, not as a finished draft.
  • Ask your instructor if unsure:
      ◦ When in doubt about a tool or a use case, check the syllabus or ask the professor for permission.
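To make the verification step concrete: when an AI-supplied citation carries a DOI, its existence can be checked programmatically. The sketch below queries the public Crossref API; the function and workflow are hypothetical, not NIU guidance, and a missing record is grounds for suspicion rather than absolute proof of hallucination:

```python
# A minimal verification sketch, assuming the claim to check carries a DOI.
# It asks the public Crossref API (api.crossref.org) whether a citation
# resolves to a real record; a 404 strongly suggests the reference was
# hallucinated. Illustrative only, not an NIU-provided script.

import requests

def verify_doi(doi: str) -> bool:
    """Return True if Crossref knows this DOI, printing the recorded title."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if response.status_code != 200:
        print(f"{doi}: not found in Crossref; treat the citation as suspect.")
        return False
    record = response.json()["message"]
    title = record.get("title", ["<no title>"])[0]
    print(f"{doi}: found, title on record: {title!r}")
    return True

verify_doi("10.1038/s41586-021-03819-2")  # replace with the DOI you need to check
```

Even when the DOI resolves, compare the recorded title and authors against what the AI claimed; fabricated citations often graft a real DOI onto an invented paper.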

A deeper look at verification and reproducibility

Academic rigor requires that claims be reproducible and traceable. When you use an AI to synthesize literature or to extract data:
  • Always preserve the original sources. If an AI suggests a study, go to the study itself and confirm methodology, sample size, and conclusions.
  • Avoid presenting model-derived syntheses as if they were independently peer-reviewed conclusions. Frame them as assistive summaries that require verification.
  • Keep provenance metadata for AI outputs where possible: which tool, which model or version, which prompt, and the date. This practice helps instructors and future reviewers understand how the work was produced.
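That provenance record can be as simple as an append-only log file. Below is a minimal sketch, assuming a JSON Lines file is acceptable documentation for a course; the field names are illustrative, not an institutional standard:

```python
# Minimal provenance log capturing the items listed above: tool,
# model/version, prompt, date, and how the output was used.

import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_provenance_log.jsonl")

def log_ai_use(tool: str, model: str, prompt: str, how_used: str) -> None:
    """Append one provenance record per AI interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "model_or_version": model,
        "prompt": prompt,
        "how_output_was_used": how_used,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use(
    tool="Microsoft Copilot",
    model="unknown (hosted)",
    prompt="Summarize my notes on citation bias",
    how_used="Outline only; all claims re-verified against sources",
)
```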
These practices align with NIU’s emphasis on transparency and with broader moves in scholarship to require explicit documentation of computational assistance.

Policy implications and what universities should do next

Workshops are a necessary frontline response, but they are not sufficient. Universities should consider layered policies and practical investments:
  • Clear course-level statements: Require every syllabus to state the instructor’s policy on AI use and to describe acceptable practices and expected attributions.
  • Institutional approvals for tools: Maintain a vetted catalog of approved AI tools for handling institutional data, and publish clear guidelines for what data can be uploaded to which services.
  • Training for faculty: Equip instructors to design assessments that test skills that cannot be outsourced to AI and to evaluate AI-assisted work fairly.
  • Audit and support structures: Offer students reproducibility checklists, prompt-logging templates, and library consultations specifically tailored to AI-assisted research.
  • Sustainability assessments: As campuses sign enterprise AI deals, include environmental impact assessments as part of procurement and adoption decisions.

Conclusion: Responsible augmentation, not automation

The NIU library’s workshop distilled a straightforward, defensible truth: AI can amplify research productivity, but that amplification must be governed by ethics, transparency, and human judgment. The PEACEful Use checklist gives students an accessible way to operationalize those norms before they rely on automated outputs.
Students should take away three concrete behaviors. First, use AI to get unstuck and to scale discovery — not to substitute for critical thinking. Second, verify relentlessly: every AI-supplied fact or citation deserves human confirmation. Third, document and disclose your use so your work remains reproducible, attributable, and defensible.
AI will change how research gets done. The responsible path is not to ban useful tools outright, nor to let them run unchecked; it is to embed them within ethical practices, institutional guardrails, and a culture of verification. When libraries, faculty, and students align around those practices, AI can become a legitimate research partner — one that increases productivity while preserving the intellectual development that higher education exists to produce.

Source: northernstar.info Workshop guides ethical use of AI in academic work