AI is Here, Now What? Safe Start for SMHS Faculty

From noon to 1 p.m. on Wednesday, Oct. 1, the University of North Dakota School of Medicine & Health Sciences (SMHS) Teaching, Learning, and Scholarship (TLAS) team will host a free virtual faculty development session titled “AI is here, now what? Getting started (safely),” presented by SMHS Associate Dean for Teaching and Learning Richard Van Eck, Ph.D. The session is a focused, practical primer designed to explain the SMHS AI policy, help faculty determine whether tools already on their machines (for example, Microsoft Copilot) are configured securely, and offer concrete starting steps for using AI as a teaching and productivity aid.

Background / Overview

AI use on campuses has moved rapidly from debate to managed adoption: institutions now pair centrally provisioned, enterprise-grade AI tools with faculty-led pedagogical decisions and principle-based governance. National higher-education guidance emphasizes governance, operations, and pedagogy as the three pillars of safe, equitable adoption, a model UND is explicitly following with campus-wide AI resources and school-level policies.
The Oct. 1 TLAS session arrives in that context: it is both a high-level policy briefing and a hands-on starting point for instructors who want to use AI without inviting avoidable risk. The meeting will be held over Zoom (enter the passcode UND if prompted), is free and open to all faculty, and is part of TLAS’s ongoing faculty development series and repository of recorded sessions.

Why SMHS is running this session now​

AI tools are ubiquitous on campus devices and in cloud services. That ubiquity creates immediate opportunities (efficiency, new pedagogical tools, personalized learning paths) and immediate liabilities (data privacy, research integrity, inaccurate outputs). UND has already published campus-level guidance and frequently asked questions about generative AI use, which stress that private or restricted data should not be submitted to consumer-grade LLMs and that purchases of generative AI tools must follow institutional procurement routes. The TLAS session aims to translate those institutional obligations into everyday choices faculty can act on.

What the TLAS session will cover (session highlights)​

  • Explanation of the UND SMHS AI Policy and where to find it for course- and research-specific questions.
  • How to tell whether a tool already on your computer — for instance, Microsoft Copilot or other built-in assistants — is running with enterprise protections (commercial/enterprise data protection) or as a consumer instance that may be subject to different data-use terms.
  • Practical checks to evaluate whether a tool is secure enough for the type of data you will use it with (classroom prompts vs. protected research data).
  • Basic steps for integrating AI into lesson design and productivity workflows while maintaining academic integrity and equity.

Understanding UND / SMHS AI policy — the essential points for faculty​

Where policy lives and why it matters​

UND makes clear that AI is not a free-for-all; the University’s AI FAQ and SMHS schoolwide policies outline expectations for data handling, procurement, transparency, and academic integrity. Faculty should treat those pages as operational guidance that informs day-to-day decisions: what can go into a generative AI prompt, whether an AI notetaker may be used in class, and how to declare AI use in course materials.

Key policy takeaways every instructor must know​

  • Don’t submit private, restricted, or sensitive information — including identifiable student data, proprietary research data, or protected health information — to consumer generative AI tools. UND explicitly classifies those data types as off-limits for public generative AI services.
  • Procurement rules apply — software purchases involving sensitive data should follow institutional procurement processes (UND uses Jaggaer for non-standard procurements), and IT should be notified if a unit has already adopted a third-party AI tool. This ensures contracts and technical configurations meet UND’s legal and security requirements.
  • Transparency and course expectations — faculty should state their course-level expectations about AI clearly in syllabi and assignment prompts; this reduces ambiguity about acceptable use and aligns with academic integrity standards. UND’s guidance recommends explicitly stating whether generative AI is permitted for credit-bearing work.

Microsoft Copilot and enterprise protections: what faculty need to know​

Microsoft’s Copilot family (Copilot for Microsoft 365, Copilot Chat, Copilot Studio, etc.) includes enterprise / commercial data protection options that change how data is handled compared with consumer instances. For organizational Entra ID accounts, Microsoft’s commercial data protection promises that prompts and responses are not used to train foundation models, that prompts are not retained for training, and that organizations will see a “Protected” badge when those protections are active. These protections are a central reason many universities adopt enterprise Copilot rather than relying on consumer-grade chatbots.
At the same time, Microsoft’s consumer-facing privacy documentation acknowledges that, outside the enterprise protections, some Copilot interactions can be used to improve models unless users opt out — and Microsoft offers opt-out controls for consumer conversations. This makes the authentication context (work/school Entra ID vs. personal Microsoft Account) and tenant settings crucial. Signed-in enterprise users are generally afforded stronger protections than signed-out consumer users.

Practical checks to determine if Copilot on your device is protected​

  • Look for the interface indicators: enterprise-protected Copilot experiences will display a Protected badge and explanatory text indicating that your prompts and responses are not used for model training.
  • Confirm you are signed into Copilot with an Entra ID (work or school) account; consumer Microsoft Accounts may route data differently (a small verification sketch follows this list).
  • Ask your IT/security team whether your organization has enabled Commercial Data Protection and whether tenant-level telemetry, logging, or opt-in flags have been configured. Don’t assume protection by default: verify tenant settings and contractual commitments.
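For faculty comfortable with a command line, the short sketch below shows one way to double-check which account and tenant you are actually authenticated to. It is an illustrative check only: it assumes the Azure CLI (`az`) and the Python `requests` package are installed and that you have already run `az login` with your UND credentials, and it confirms your account context, not whether Copilot's commercial data protection is enabled (only the Protected badge and IT confirmation can establish that).

```python
# Illustrative check of account/tenant context (not proof of Copilot protections).
# Assumes the Azure CLI is installed and `az login` was run with your work/school account.
import json
import subprocess

import requests

# Ask the Azure CLI for a Microsoft Graph access token for the current login.
token = json.loads(
    subprocess.check_output(
        ["az", "account", "get-access-token",
         "--resource", "https://graph.microsoft.com"]
    )
)["accessToken"]

headers = {"Authorization": f"Bearer {token}"}

# /me returns the signed-in user; /organization returns the tenant
# (it comes back empty or as an error for personal Microsoft accounts).
me = requests.get("https://graph.microsoft.com/v1.0/me", headers=headers).json()
org = requests.get("https://graph.microsoft.com/v1.0/organization", headers=headers).json()

print("Signed in as:", me.get("userPrincipalName"))
tenants = org.get("value", [])
if tenants:
    print("Organizational (Entra ID) tenant:", tenants[0].get("displayName"))
else:
    print("No organizational tenant found; this looks like a personal account.")
```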

A practical faculty checklist: how to evaluate any AI tool before using it in teaching or research​

Use this checklist before you enter course materials, student data, or research artifacts into any AI system:
Security & authentication
  • Is the tool offered through an institutional (enterprise) account or a consumer account? Enterprise accounts are typically safer for internal data.
  • Does the vendor provide documented commercial/enterprise data protection and contractual language that prohibits tenant data from being used for model training?
Data handling & retention
  • Are prompts and responses retained? If so, for how long and under what controls? Can the tenant require deletion?
  • Does the vendor store or transfer data across jurisdictions that might violate local privacy rules? Confirm data residency and movement policies.
Vendor commitments & contracts
  • Does the contract include non-use for training, audit rights, and data deletion guarantees? Can the institution negotiate stronger protections?
Accuracy & accountability
  • Does the tool include disclaimers about hallucinations and accuracy? Will the tool log or flag uncertain outputs? Educators must treat AI outputs as drafts requiring human verification.
Accessibility & equity
  • How will using the tool affect students with limited access to paid AI services? Can the instructor provide alternative pathways to complete assignments? This aligns with institutional equity responsibilities.
Institutional policy alignment
  • Does the proposed use follow UND/SMHS guidelines (procurement, data classification, and syllabus transparency)? If in doubt, contact TLAS or the institution's AI governance office.
(Institutions and IT groups frequently use internal procurement and security questionnaires; examine those responses before approving classroom use. A simple way to record your own answers appears in the sketch below.)
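One low-tech way to act on this checklist is to record the answers as a short, structured review before approving a tool. The following is a minimal sketch in Python; the field names and example values are illustrative assumptions, not an official UND or SMHS form, and a shared spreadsheet would capture the same information.

```python
# Illustrative only: a lightweight record of checklist answers for an AI tool
# under review. Field names are hypothetical, not an official UND/SMHS form.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class AIToolReview:
    tool_name: str
    enterprise_account: bool          # institutional (enterprise) vs. consumer account
    no_training_on_tenant_data: bool  # documented non-use of tenant data for training
    prompt_retention: str             # e.g., "none" or "30 days, deletable by tenant"
    data_residency_ok: bool           # residency/transfer meets local privacy rules
    contract_protections: str         # audit rights, deletion guarantees, etc.
    accuracy_caveats: str             # hallucination disclaimers, human verification plan
    equity_plan: str                  # alternatives for students without paid access
    policy_alignment: str             # procurement, data classification, syllabus transparency
    notes: list[str] = field(default_factory=list)


# Example: draft record for a hypothetical classroom pilot.
review = AIToolReview(
    tool_name="Copilot (enterprise tenant)",
    enterprise_account=True,
    no_training_on_tenant_data=True,
    prompt_retention="per tenant policy; confirm with IT",
    data_residency_ok=True,
    contract_protections="confirm non-use and deletion terms with procurement",
    accuracy_caveats="all outputs reviewed by instructor before release",
    equity_plan="institutionally licensed access for all enrolled students",
    policy_alignment="reviewed against UND AI FAQ and SMHS policy",
    notes=["Pending IT confirmation of tenant settings."],
)

print(json.dumps(asdict(review), indent=2))
```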

Instructional design with AI: concrete starting moves for faculty​

The TLAS session promises practical, immediate steps; below is a compressed how-to plan you can take away and apply the same day.
  • Audit the tools on your device. Identify whether Microsoft Copilot or other assistants are present, and note whether they are signed into personal or organizational accounts. If you’re unsure, log off and log back in using your UND credentials to test.
  • Review sensitive data rules. Class rosters, grades, protected health information, and unreleased research should never be pasted into consumer AI prompts. If a task requires such data, seek an enterprise tool vetted by IT.
  • Start small in the classroom. Use AI for low-stakes activities such as generating example exam questions, drafting discussion prompts, or creating study guides — but require students to annotate, critique, or verify AI-generated content. This reinforces learning, prevents overreliance, and teaches critical evaluation skills.
  • Add clear syllabus language. State whether students may use generative AI, how they must cite it, and the consequences for misuse. Use assignment-level clarification where necessary (for example, “AI allowed for brainstorming but not for final submissions” or “AI use must be disclosed in the submitted work”).
  • Build formative AI literacy. Before allowing AI in summative tasks, create a formative exercise that teaches students how to prompt, check facts, identify hallucinations, and cite AI outputs properly.
  • Use vendor-provided enterprise instances when available. If the institution offers enterprise Copilot or ChatGPT Edu with contractual protections, prefer those instances. Confirm protections and settings with IT.
  • Document and iterate. Keep a short log of how AI affected outcomes: Did it speed grading? Introduce errors? Improve student engagement? Use those notes to refine use cases and report back to TLAS or governance committees; a minimal log-template sketch follows this list.
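As referenced in the last item, here is a minimal sketch of such a log. The file name, columns, and example entry are illustrative assumptions rather than a prescribed SMHS format; a notebook or spreadsheet works equally well.

```python
# Hypothetical reflection log for the "document and iterate" step above.
# Appends one dated row per AI-assisted activity to a CSV file you can
# share with TLAS or a governance committee.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_teaching_log.csv")
COLUMNS = ["date", "activity", "tool", "time_saved", "errors_found", "student_impact"]


def log_ai_use(activity: str, tool: str, time_saved: str,
               errors_found: str, student_impact: str) -> None:
    """Append one reflection entry, writing the header row on first use."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(COLUMNS)
        writer.writerow([date.today().isoformat(), activity, tool,
                         time_saved, errors_found, student_impact])


# Example entry (illustrative values).
log_ai_use(
    activity="Drafted rubric-aligned feedback templates",
    tool="Copilot (enterprise)",
    time_saved="~45 minutes per assignment batch",
    errors_found="Two factual slips corrected before release",
    student_impact="Faster feedback turnaround reported by students",
)
```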

Pedagogical design: examples of safe AI-enabled activities​

  • AI-assisted formative feedback: Use Copilot to draft rubric-aligned feedback templates; a human instructor reviews and personalizes before sending to students. This scales feedback but preserves human judgment.
  • Data-free summarization practice: Ask students to summarize a short reading, then compare that human summary to an AI-generated one and critique differences — a metacognitive task that builds evaluation skills.
  • Prompt design clinics: Teach students how to craft prompts for clarity, and assess how prompt framing changes outputs — valuable for media-literacy and communication courses.

Risks, trade-offs, and mitigation strategies​

Hallucination and accuracy​

Generative models can produce convincing but incorrect outputs. Microsoft itself warns that Copilot should not be used for tasks requiring absolute accuracy or reproducibility without human verification. Always treat AI outputs as drafts to be verified, especially for clinical, legal, or scientific content.

Data privacy and research integrity​

Putting unpublished research data, protected health information, or identifiable student information into a consumer AI tool can create compliance risks. Use institutionally provisioned, contractually protected services for sensitive work, or use strictly local (on-premises) or properly scoped cloud services vetted by IT.

Equity and student access​

Students who do not have access to premium AI tools may be disadvantaged. Ensure equitable access by using institutionally licensed services when assignments rely on AI tools, or by designing work that assesses human learning outcomes rather than tool proficiency alone.

Overreliance and skill atrophy​

If students use AI to produce final drafts without reflective work, core skills such as critical thinking and writing may suffer. Mitigate by scaffolding assignments and requiring process evidence (drafts, annotated AI interactions, and reflective commentary).

Governance: what departments and leaders should do next​

  • Formalize vendor selection and procurement pathways so security and contractual protections scale across departments. UND’s procurement guidance and SMHS policies direct faculty to run purchases through institutional channels for large or sensitive deployments.
  • Create or empower an AI oversight body (AI ethics board, AI governance committee, or extend an existing committee) to review new AI vendors, assess equity and bias concerns, and advise on educational impacts. EDUCAUSE guidance proposes a governance model combining stakeholders from IT, faculty, instructional design, legal, and students.
  • Invest in faculty development. Sessions like the TLAS Oct. 1 event should be part of a year-round program that includes hands-on labs, risk assessment workshops, and vendor-contract literacy.

What to expect from the Oct. 1 TLAS session​

Faculty attending the session can expect a clear explanation of UND SMHS’s AI policy obligations, actionable guidance on identifying secure Copilot and other tools on campus devices, and practical design strategies to begin using AI in courses confidently and safely. TLAS also points to a repository of recorded sessions for asynchronous learning and follow-up consultations for faculty interested in crafting AI-enabled assignments or departmental workshops. Contact information for session leads (Adrienne Salentiny and Richard Van Eck) is provided in the event announcement for follow-up.

Quick-reference “If you only do three things” checklist​

  • Verify account context and protections: ensure you are signed into enterprise-protected services (Entra ID/organizational accounts) and check the “Protected” indicator where available.
  • Update your syllabus and assignment instructions to state AI expectations and documentation requirements clearly.
  • Never submit restricted or personally identifiable data to consumer AI tools; route sensitive work through institutionally approved services.

Final analysis: strengths, opportunities, and lingering risks​

The TLAS session is well timed and practical: it connects institutional policy to the everyday decisions faculty make, demystifies the technical distinctions that affect data protection (enterprise vs. consumer AI), and emphasizes pedagogy. UND’s institutional AI pages and SMHS policies already set a prudent baseline — they make it clear what not to do and outline procurement and transparency pathways that protect research and student data.
Strengths:
  • Clear institutional guidance reduces ambiguity for faculty and students.
  • Enterprise Copilot protections (when properly configured) materially lower the risk of tenant data being used for model training.
  • EDUCAUSE and other sector guidance provide tested governance frameworks that UND can adapt to SMHS needs.
Risks and caveats:
  • Protections depend on tenant configuration and contracts; faculty should not assume protection without IT confirmation. The mere presence of a Copilot app on a device does not guarantee enterprise-level protections.
  • AI hallucinations and accuracy problems remain a real risk in clinical and research contexts — outputs must be verified by domain experts.
  • Equity, access, and skill development must be actively managed; technology adoption without pedagogy risks widening gaps between students.
Where claims could be unverifiable:
  • Any claim that a vendor guarantees absolute non-use of data for any future training must be treated cautiously: protections are typically contractual and may evolve. Faculty and administrators should seek explicit contractual language and IT confirmation rather than rely solely on marketing claims. This session will explain how to escalate those contract questions to procurement and IT.

Conclusion​

The Oct. 1 session “AI is here, now what? Getting started (safely)” is not a theoretical conversation — it’s a pragmatic workshop for educators who need to make day-one decisions about tools already on their machines and classroom plans for the semester. UND’s SMHS and central university guidance provide concrete guardrails: protect sensitive data, follow procurement rules, and teach AI literacy alongside AI-enabled assignments. Faculty who attend will leave with a short checklist and a clear path to integrate AI tools in ways that enhance learning while managing privacy, accuracy, and fairness risks. Register via the TLAS announcement’s Zoom details (enter passcode UND if prompted), and reach out to Adrienne Salentiny or Richard Van Eck for follow-up consultation.

For broader context on sector trends and practical governance models, higher-education guidance and community discussion emphasize managed adoption, institutional AI hubs, and faculty-led pedagogical discretion — patterns reflected in both campus-level policy and the TLAS session design.

Source: University of North Dakota Instructional design: "AI is here, now what? Getting started (safely)" on Oct. 1 - For Your Health
 
