
Artificial intelligence (A.I.) is rapidly transforming fields across the spectrum of higher education and healthcare, with academic medical centers being among the earliest and most energetic adopters of emerging A.I. solutions. On July 24, at the University of North Dakota’s (UND) School of Medicine & Health Sciences (SMHS) Neurology Grand Rounds, faculty experts including Drs. Dinesh Bande, Sarah Sletten, Monica Norby, and Richard Van Eck are scheduled to present an up-to-the-minute review titled “A.I. update: Navigating A.I. policy at SMHS and Microsoft Copilot.” This session is expected to draw significant attention, as it tackles some of the most pressing questions surrounding the responsible and effective use of A.I. in health sciences education and clinical environments.
Redefining A.I. Policy in Healthcare and Academia
In anticipation of their Grand Rounds presentation, the SMHS faculty will break down the school’s current A.I. policy—a nuanced framework developed to address both present realities and the near future in the use of artificial intelligence. This framework is crucial as practitioners and educators grapple with a technology whose rapid evolution outpaces regulatory and ethical consensus.
Core Elements of the SMHS A.I. Policy
The SMHS A.I. policy rests on several foundational pillars:
- Definitions and Scope: Clear terminology is paramount. The policy distinguishes “public” and “private” A.I. tools—a critical separation as generative models grow in power and applicability. Public A.I. refers to broadly accessible, free solutions (like the ChatGPT free tier or open Copilot web interfaces), while private A.I. denotes institutionally controlled, subscription-based platforms protected by organizational security measures (e.g., Microsoft Copilot for Enterprise).
- Role-Based Permissions: Different categories of users (faculty, staff, students, clinicians, administrators) are granted specific permissions regarding A.I. tool usage. This role-based approach mitigates the risks associated with inappropriate data exposures by aligning access with professional responsibilities and the sensitivity of data handled.
- Protected and Restricted Information (PRI): Perhaps the most sensitive area addressed by the policy is the handling of Protected and Restricted Information—including HIPAA-covered data, student records (FERPA), and proprietary research materials. Strict prohibitions prevent such information from being entered into public or consumer-grade A.I. platforms, limiting its use to internally compliant, private A.I. environments with robust data safeguards (a sketch of such a role- and data-aware gate follows this list).
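To make the interplay of roles, tool tiers, and data classes concrete, the following is a minimal Python sketch of such a permission gate. The role names, tier labels, and role-to-tier mapping are illustrative assumptions for this article, not the policy’s actual schema.

```python
# Hypothetical sketch: how a role-based A.I. permission check might be
# modeled in code. Roles, tiers, and the mapping below are invented for
# illustration and are not the actual SMHS policy schema.
from enum import Enum

class ToolTier(Enum):
    PUBLIC = "public"      # e.g., free ChatGPT, open Copilot web tools
    PRIVATE = "private"    # e.g., institutionally licensed Copilot

class DataClass(Enum):
    GENERAL = "general"    # no sensitive content
    PRI = "pri"            # HIPAA-, FERPA-, or proprietary-covered material

# Invented role-to-tier mapping; a real policy would enumerate this centrally.
ROLE_PERMISSIONS = {
    "student":       {ToolTier.PUBLIC, ToolTier.PRIVATE},
    "faculty":       {ToolTier.PUBLIC, ToolTier.PRIVATE},
    "clinician":     {ToolTier.PRIVATE},
    "administrator": {ToolTier.PRIVATE},
}

def may_use_tool(role: str, tier: ToolTier, data: DataClass) -> bool:
    """PRI may never leave the private tier, regardless of role."""
    if data is DataClass.PRI and tier is not ToolTier.PRIVATE:
        return False
    return tier in ROLE_PERMISSIONS.get(role, set())

# A clinician pasting PHI into a public tool is always denied;
# faculty handling PRI inside the private tier is permitted.
assert not may_use_tool("clinician", ToolTier.PUBLIC, DataClass.PRI)
assert may_use_tool("faculty", ToolTier.PRIVATE, DataClass.PRI)
```

The key property of this arrangement is that the data-class check runs before any role lookup, so no role can override the PRI restriction.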
Policy Implications: Guardrails and Opportunities
By embedding these definitions and controls into day-to-day operations, SMHS is equipping its academic health science community with practical guidance: what’s safe, what’s not, and which A.I.-powered workflows are permitted.
- Faculty and Staff: Faculty designing course materials or conducting research must ascertain whether their activity involves PRI and, if so, ensure that any A.I. use remains strictly within the approved, private systems, such as the institution’s licensed Microsoft Copilot.
- Students: The policy sharply defines permissible use for students, urging transparency and mandating clear disclosure when A.I.-assisted work is submitted, especially for assignments or projects involving sensitive topics.
- Administrators and IT Staff: Responsible for enforcing boundaries, these teams must regularly audit A.I. integrations and update security measures as platforms evolve.
Public vs. Private A.I.: Choosing the Right Tools
As A.I. solutions diversify, so, too, does their risk profile. The Grand Rounds session will draw a sharp distinction between public and private A.I. tools, underscoring where and why private enterprise platforms—like the paid tiers of Microsoft Copilot—are essential in academic medicine.
Key Differences
- Public A.I. Tools: These include widely accessible, sometimes free-to-use platforms such as OpenAI’s ChatGPT (public/free version), Google Gemini, and public Copilot web tools. Such services do not guarantee organizational data privacy or compliance with protected health information rules.
- Private A.I. Tools: These are institutionally licensed, behind secure authentication, and meet the data privacy needs of healthcare and academic environments. Examples include Microsoft Copilot for Microsoft 365, which can be configured to stay within an institution’s security perimeter.
When to Use Private A.I.
- Protected Data: Any research, teaching, or administrative workflow involving personal health information (PHI), educational records, or internal strategic data requires a private A.I. solution (see the decision-helper sketch after this list).
- Institutional Integrity: Tasks involving branding, official communications, and curriculum development should favor private tools to avoid the unintentional release of proprietary materials.
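As a minimal illustration of this guidance, here is a hypothetical decision helper; the field names and the rule itself are assumptions distilled from the two bullets above, not official SMHS criteria.

```python
# Hypothetical decision helper reflecting the guidance above. The fields
# and the rule are illustrative assumptions, not official SMHS rules.
from dataclasses import dataclass

@dataclass
class Workflow:
    contains_phi: bool = False    # HIPAA-covered health information
    contains_ferpa: bool = False  # student educational records
    proprietary: bool = False     # unpublished research, strategy, branding

def required_tier(w: Workflow) -> str:
    """Return 'private' whenever any sensitive flag is set."""
    if w.contains_phi or w.contains_ferpa or w.proprietary:
        return "private"
    return "public-or-private"  # public tools merely permitted, not required

print(required_tier(Workflow(contains_phi=True)))  # -> private
print(required_tier(Workflow()))                   # -> public-or-private
```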
Practical Example
If a faculty member is drafting new neuroanatomy exam questions using Copilot, and those questions are to be based on recent, unpublished clinical research, use of the private, institutionally controlled Copilot environment is mandatory. Uploading or pasting such data into a public A.I. tool would violate SMHS policy and may breach legal obligations under FERPA or HIPAA.
Navigating the Complex Ethics of A.I. in Medical Education
With newfound power comes heightened responsibility. One of the most closely watched sections of the upcoming Grand Rounds is an exploration of best practices for ethical and professional use of A.I. in research and pedagogy.
Essential Ethical Principles
- Transparency: Academic and clinical professionals must clearly disclose when A.I. contributes to research, educational materials, or decision support, preventing misrepresentation or inadvertent plagiarism.
- Human Oversight: A.I. is not a substitute for critical judgment. The policy mandates explicit human review of any output that impacts teaching, testing, or patient care (a minimal review-gate sketch follows this list).
- Data Privacy and Security: Ongoing vigilance is demanded to ensure that sensitive information never crosses into unstable, unsecured, or public A.I. domains.
- Integrity: Upholding the authenticity and originality of work remains vital; A.I.-generated content should serve as an adjunct, not a replacement, for expert insight.
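As one way to picture the human-oversight principle, here is a hypothetical sketch of a review gate in which A.I.-generated drafts cannot be published without a named reviewer’s sign-off; the classes and workflow are invented for illustration.

```python
# Hypothetical "human in the loop" gate: A.I. output is held in a review
# queue, and nothing ships until a named reviewer signs off. The queue and
# reviewer model are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    ai_generated: bool
    approved_by: str | None = None

@dataclass
class ReviewQueue:
    pending: list[Draft] = field(default_factory=list)

    def submit(self, draft: Draft) -> None:
        self.pending.append(draft)

    def approve(self, draft: Draft, reviewer: str) -> Draft:
        draft.approved_by = reviewer  # explicit, attributable sign-off
        self.pending.remove(draft)
        return draft

def publish(draft: Draft) -> None:
    # The gate: A.I.-generated content without human approval never ships.
    if draft.ai_generated and draft.approved_by is None:
        raise PermissionError("A.I. output requires human review first")
    print(f"Published (approved by {draft.approved_by or 'author'})")

queue = ReviewQueue()
d = Draft("Copilot-drafted exam stem...", ai_generated=True)
queue.submit(d)
publish(queue.approve(d, reviewer="course.director"))  # OK after sign-off
```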
Implementation in Daily Practice
Faculty are urged to explicitly state the role of A.I. in course syllabi. For instance, an assignment may include a statement such as: “A.I. writing tools are permitted for brainstorming but not for final writing. All external A.I. contributions must be cited as such.” Researchers, similarly, are tasked with detailing how A.I. tools were utilized in grant submissions and publications, reflecting emerging journal and funding agency policies.
Real-World Applications and Boundaries: Microsoft Copilot in the Health Sciences
Central to the Grand Rounds update is a frank assessment of Microsoft Copilot—arguably the flagship A.I. productivity tool for institutions using Office 365. While Copilot promises to “supercharge” administrative efficiency, draft documents, summarize proceedings, and even generate code snippets, the session will urge caution and measured adoption when health science standards are at stake.
Workflows Benefiting from A.I. Integration
- Document Summarization: Copilot’s ability to rapidly distill meeting minutes, literature, or educational content helps faculty and administrators keep pace with information overload (a summarization sketch follows this list).
- Drafting and Editing: Syllabus creation and policy documents see marked acceleration—with the caveat that all outputs must undergo thorough human review for accuracy and tone.
- Research Support: Searching institutional repositories, refining literature reviews, and even extracting key points from complex datasets are all areas where Copilot shines, especially when paired with compliant configurations.
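Copilot itself is operated through Microsoft 365 applications rather than hand-written code, so purely as an illustration of the summarization pattern, here is a sketch that assumes a hypothetical institution-hosted, OpenAI-compatible chat endpoint inside the security perimeter; the URL, model name, and key handling are all invented.

```python
# Minimal summarization sketch against a hypothetical private,
# OpenAI-compatible endpoint. The base_url, model name, and key are
# placeholders, not real institutional infrastructure.
from openai import OpenAI

client = OpenAI(
    base_url="https://ai.example.edu/v1",  # hypothetical private endpoint
    api_key="INSTITUTIONAL_KEY",           # issued and rotated by campus IT
)

def summarize(document: str, max_words: int = 150) -> str:
    """Ask the private model for a short summary; output still needs human review."""
    response = client.chat.completions.create(
        model="institutional-copilot",     # placeholder model identifier
        messages=[
            {"role": "system",
             "content": f"Summarize the document in at most {max_words} words."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

# Usage: summaries of meeting minutes or literature, reviewed before circulation.
# print(summarize(open("minutes_2025-07-24.txt").read()))
```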
Limitations and Risks
Despite Copilot’s promise, the session is expected to stress its boundaries:
- Accuracy: Like all generative A.I., Copilot can hallucinate (fabricate) facts, misinterpret context, or fail to comprehend nuanced medical terminology. Human oversight is non-negotiable.
- Source Attribution: Not all Copilot outputs clearly reference source material, necessitating user diligence when incorporating suggested edits into official documents.
- Security Caveats: Configuration, not the mere presence of a license, determines whether Copilot is safe for use with sensitive data. Administrators must maintain rigorous security postures.
Addressing Bias and Inequity: Pursuing Fairness in A.I.-Augmented Education
The rise of A.I. in medical and health science education is not without controversy, particularly regarding equity and inclusivity. The SMHS Grand Rounds will engage faculty and students in discussions of A.I.-enabled bias—both in model outputs and in the broader impact of varying access to technology.
Recognizing and Counteracting Bias
- Model Limitations: Large language models are only as fair as the data they are trained on; inadvertent propagation of bias (along lines of race, gender, or diagnosis) is a pressing concern. The policy demands vigilant scrutiny of A.I.-produced materials for biased language or misrepresentation.
- Curricular Impact: Courses must not unwittingly privilege those with greater digital fluency or personal access to premium tools, lest the digital divide widen. Flexibility and digital equity must be prioritized.
Strategies for Equitable Implementation
Faculty are encouraged to:
- Offer alternatives for students unable to access institutional Copilot or related tools, ensuring learning objectives remain fair and balanced.
- Engage in ongoing dialogue—gathering student feedback, revising assignments, and adjusting policies as diversity and inclusion needs evolve.
Crafting Departmental and Course-Level Policies for A.I.
Recognizing that “one-size-fits-all” policies may fall short, the SMHS leadership has empowered individual departments and courses to develop tailored A.I. guidelines. The Grand Rounds will offer a framework for this next-level policy design:
- Assignment Design: Faculty should specify which components (research, drafting, problem-solving) may or may not involve A.I. assistance.
- Prompt Writing: Training both students and staff in formulating effective, bias-minimized prompts is now a curricular objective (a sample template follows this list).
- Digital Equity: As the boundary between “public” and “private” A.I. shifts, ongoing audits ensure all students and faculty can participate fully, regardless of personal resources.
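As a small illustration of what prompt-writing training might standardize, here is a hypothetical template; the wording and fields are suggestions, not SMHS curriculum.

```python
# Illustrative prompt template for the "prompt writing" objective above.
# The template wording and fields are suggestions, not SMHS curriculum.
PROMPT_TEMPLATE = """\
Role: You are assisting a {audience} in a health sciences course.
Task: {task}
Constraints:
- Use inclusive, bias-aware language; avoid assumptions about race,
  gender, or diagnosis unless clinically specified.
- Cite no sources you cannot verify; say "unknown" instead of guessing.
- Do not include any patient- or student-identifiable information.
Output format: {output_format}
"""

def build_prompt(audience: str, task: str, output_format: str) -> str:
    return PROMPT_TEMPLATE.format(
        audience=audience, task=task, output_format=output_format
    )

print(build_prompt(
    audience="second-year medical student",
    task="Draft three practice questions on cerebellar anatomy.",
    output_format="numbered list with answer keys",
))
```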
The Continuing Education Mandate
The Grand Rounds session is officially designated for continuing medical education (CME), offering up to 0.75 AMA PRA Category 1 Credit™—underscoring the seriousness with which SMHS treats the topic. Faculty, residents, and medical professionals attending (in person at the Brain & Spine Clinic, Fargo, or virtually via Webex) must report participation as directed to receive credit. Importantly, all planners and presenters have declared no relevant financial relationships, ensuring content integrity and unbiased instruction.
Inclusive Participation
Both Sanford Health employees and non-Sanford attendees can qualify for CME credit, provided they register and attend as specified. In a further nod to transparency, instructions for account setup via Sanford Success Center are made explicit, lowering barriers to participation.
Future Directions: Policy Review and Continuous Adaptation
Anticipated questions during and after the presentation center on how often the SMHS policy will be revised, and how input from students, faculty, and patients will be incorporated. While the process is still evolving, the school’s commitment to a revision cycle that keeps pace with advances in A.I. is clear. Users are invited to recommend updates, flag new risks, and suggest integrations of promising approaches seen at peer institutions.
Critical Analysis: Opportunities and Watchpoints
SMHS’s multi-layered approach to A.I. regulation and integration reflects best practices identified by prominent organizations including the Association of American Medical Colleges, EDUCAUSE, and the Healthcare Information and Management Systems Society (HIMSS). Several strengths are immediately apparent:
- Proactive segmentation of public vs. private A.I. mitigates key compliance risks.
- Role-based access and clear guidance assist users at all levels, reducing “shadow IT” headaches.
- Commitment to digital equity and anti-bias policies, though early, signals a desire to avoid replicating historical disparities in the digital classroom.
At the same time, several watchpoints warrant continued attention:
- Enforcement and Auditing: As more third-party plugins and SaaS A.I. tools become available, continuous monitoring will be needed to ensure policy compliance. Automated tracking systems and regular user education are likely to become necessities (a monitoring sketch follows this list).
- User Training Gaps: As with any digital transformation, the quality of output and adherence to policy hinges on user understanding. Without robust, iterative training, even well-crafted guidelines may be ignored or misunderstood.
- Evolving Legal Landscape: HIPAA, FERPA, and related laws are under continuous interpretation as courts, regulators, and legislators react to new A.I. use cases. Policies will require regular legal review and, when necessary, rapid amendment.
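As one possible shape for such automated tracking, here is a hypothetical monitoring sketch that scans network proxy logs for traffic to known public A.I. endpoints; the log format and domain list are invented, and a real deployment would source both from institutional systems.

```python
# Hypothetical compliance-monitoring sketch: scan proxy logs for traffic to
# known public A.I. endpoints. The CSV log format and the domain list are
# invented; a real deployment would pull both from institutional systems.
import csv

PUBLIC_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "copilot.microsoft.com"}

def flag_public_ai_use(log_path: str) -> list[dict]:
    """Return log rows whose destination matches a public A.I. domain."""
    flagged = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: user, domain, time
            if row["domain"] in PUBLIC_AI_DOMAINS:
                flagged.append(row)
    return flagged

# Flagged rows would feed user education rather than automatic sanction:
# for hit in flag_public_ai_use("proxy_log.csv"):
#     print(f"{hit['time']}: {hit['user']} reached {hit['domain']}")
```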
The Road Ahead
As July 24 approaches, the significance of this Grand Rounds presentation extends well beyond SMHS. Institutions across the healthcare and academic landscape are looking for blueprints that balance innovation with integrity—leveraging artificial intelligence to boost efficiency and insight, while jealously guarding patient privacy, educational standards, and equity.
The SMHS model, grounded in clear definitions, robust privacy guardrails, ongoing review, and community input, stands at the vanguard of this movement. Ultimately, success will hinge on the school’s ability to adapt—continuously learning, iterating, and refining policy as technology and human needs evolve in lockstep.
For health professionals, educators, and students alike, the challenge is clear: Harness A.I. to drive discovery and streamline education, but never at the expense of the trust, security, and ethical clarity that anchor the practice of medicine and science. As new tools and platforms emerge, the lessons from SMHS’s Grand Rounds will help ensure that artificial intelligence fulfills its promise—enriching the educational experience without widening divides or compromising what matters most.
Source: University of North Dakota, “Several SMHS faculty to give A.I. updates at Neurology Grand Rounds on July 24,” For Your Health.