WVU Workshops Teach Instructors to Coauthor Course Materials with Copilot Chat

West Virginia University's Teaching and Learning Center will begin hands‑on workshops the week of Oct. 16 that introduce instructors to Microsoft Copilot Chat as a tool for coauthoring course materials, with sessions designed to show how generative AI can speed the creation of grading rubrics, narrative case studies, and other time‑consuming teaching artifacts.

Background

Why this matters now​

Generative AI tools have moved quickly from novelty to practical classroom utility. Institutions are balancing the promise of efficiency and personalization against legitimate concerns about accuracy, student privacy, and academic integrity. West Virginia University's (WVU) decision to run instructor workshops centered on Copilot Chat reflects a broader trend: universities are shifting from forbidding or ignoring AI to teaching faculty how to use it responsibly in course design and assessment.

What WVU is offering​

The Teaching and Learning Center (TLC) at WVU will host multiple workshop dates that walk instructors through Copilot Chat with an emphasis on collaborative prompt workflows and practical outputs. Workshop content includes step‑by‑step practice generating multi‑level rubric language, constructing narrative case studies with characters and backgrounds, iterating on outputs with follow‑up prompts, and exporting final drafts for instructor refinement. WVU states that Copilot Chat is available as part of the WVU Office 365 suite and that work done while signed in with university credentials is subject to WVU’s Enterprise Data Protection agreement with Microsoft.

Overview of Microsoft Copilot Chat in higher education​

What Copilot Chat actually is​

Copilot Chat is the conversational assistant integrated into the Microsoft 365 ecosystem. It uses large language models to help users draft documents, summarize materials, generate lesson content, and more. For education customers, Microsoft has positioned Copilot Chat as a tool for lesson planning, rubric generation, personalized feedback, and accessibility improvements. Microsoft’s education guidance emphasizes that Copilot Chat can be included with Microsoft 365 accounts, and that additional Copilot licensing unlocks deeper integrations with the Microsoft Graph and Office apps.

Technical and policy guarantees Microsoft advertises​

Microsoft’s messaging to institutions stresses enterprise‑grade data protection: when Copilot Chat is deployed under a university tenant and governed by contractual protections, the vendor says tenant data is not used to train the underlying foundation models and administrators retain control over access. Microsoft also documents features useful to educators—file upload, Copilot Pages, multimodal inputs, and integrations with Word, PowerPoint and Teams—while advising administrators on steps to enable and manage student access. Institutions are instructed to evaluate licensing and age‑related access rules for students.

What the WVU workshop will teach (practical breakdown)​

Goals and expected outcomes​

The workshop is explicitly designed to reduce instructor workload on routine but time‑intensive tasks by demonstrating how to coauthor with AI rather than fully delegate creation to it. Participants will leave with:
  • At least one customized rubric in Word format covering multiple performance levels.
  • A narrative case study draft they can deploy in class discussion or assignment prompts.
  • A reproducible prompt recipe and editing workflow for ongoing use.
The TLC notes the workshop workflow includes interacting with Copilot, requesting multiple rounds of revision, and downloading the result as a Word file for final instructor editing.

Typical hands‑on sequence (how a session runs)​

  • Sign into Copilot Chat with institutional Office 365 credentials so the interaction is governed by WVU’s enterprise protections.
  • Provide the model an initial prompt that contains context: course level, learning outcomes, assignment type, and desired rubric criteria.
  • Review the generated rubric language and request specific refinements (e.g., improving language clarity, aligning scoring bands to 0–4 instead of 1–5).
  • Ask Copilot to create a narrative case study or variations tailored to different student populations.
  • Export or copy the final output into Microsoft Word and perform instructor edits to localize tone and assessment nuance.
This exact workflow mirrors guidance Microsoft has published for educators using Copilot Chat to draft rubrics and personalized feedback.
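For instructors who want a reproducible record of this coauthoring loop, the session sequence can be sketched as a simple conversation log. The following is a minimal, hypothetical Python sketch: it calls no Copilot API, and the placeholder "response" strings stand in for text an instructor would paste from the actual Copilot Chat session.

```python
# Minimal sketch of the coauthoring loop as a recorded conversation.
# No Copilot API is called; the "copilot" entries are placeholders for
# output pasted from a real Copilot Chat session.

session = []  # ordered list of (role, text) turns

def record(role: str, text: str) -> None:
    """Append one turn of the instructor/Copilot exchange."""
    session.append((role, text))

# 1. Initial context-rich prompt: course level, outcomes, assignment type.
record("instructor", "Create a 4-level rubric for a persuasive essay in HIST 101 ...")
record("copilot", "<pasted first draft of rubric>")

# 2. Targeted refinement via a follow-up prompt, not a restart.
record("instructor", "Align the scoring bands to 0-4 instead of 1-5.")
record("copilot", "<pasted revised rubric>")

# 3. The last Copilot turn is the draft exported to Word for final editing.
final_draft = [text for role, text in session if role == "copilot"][-1]
```

Keeping such a log alongside the exported Word file gives departments a transparent record of which prompts produced which drafts, which supports the versioning and transparency practices recommended later in this piece.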

Strengths: what instructors and institutions stand to gain​

Big wins for productivity​

  • Rapid drafting: Copilot Chat can generate multiple rubric levels and assignment text in minutes, replacing hours of manual drafting.
  • Iterative refinement: The chat model supports multi‑turn edits — instructors can tune tone, granularity, and alignment without starting over.
  • Consistency and accessibility: AI can help translate a single rubric into clearer, student‑facing language and adapt materials for accessibility needs.
  • File interoperability: Copilot can export or create content that easily moves into Word, PowerPoint, and LMS platforms for deployment.

Pedagogical benefits​

  • Scaffolded feedback: Instructors can generate tailored feedback templates for common student errors, saving time while preserving individualized responses.
  • Diverse representation: Narrative case studies can be quickly varied to include more diverse characters and contexts, supporting inclusive pedagogy.
  • Experimentation permission: Structured workshops like WVU’s give faculty permission to try and fail safely, which Microsoft and early‑adopter institutions report as crucial for productive integration.

Risks and limitations — what the workshops should not gloss over​

Accuracy and hallucination risk​

Generative models can invent facts or present incorrect procedural steps with high confidence. Outputs should be treated as drafts that require instructor verification, not authoritative final products. Even institutional tenants with enterprise protections are not immune to erroneous content generation. This is a core limitation of current large language models and a critical reason to keep instructors in the editorial loop.

Privacy and regulatory concerns​

While Microsoft advertises enterprise data protections and asserts that tenant data won’t be used to train foundation models, institutions must still evaluate compliance with local privacy and education laws, including FERPA in the U.S. Administrators should confirm how logs, chat history, and uploaded files are retained and who within the institution can access them. Workshop sign‑ins and institutional guidance are the right place to surface these policies.

Academic integrity and assessment design​

AI‑assisted creation of rubrics and prompts can be a double‑edged sword: easier content creation could inadvertently produce prompts that students can equally reproduce with AI. Instructors should design assessments and grading criteria with integrity controls in mind—specific local context, reflective prompts, or in‑class demonstration elements—to reduce risks of student misuse.

Equity and bias​

Models reflect patterns in their training data and can produce biased or culturally insensitive descriptions. When generating case studies or assessment descriptors, instructors must audit outputs for bias, stereotype reinforcement, or culturally exclusionary language. This is particularly important when using AI to craft scenarios intended to represent real‑world diversity.

Unverifiable vendor claims (flagged)​

Vendors sometimes make strong claims about data use and model behavior that can be complex to verify. While WVU’s internal pages state that Copilot interactions on university accounts are protected under enterprise agreements and not used to train foundation models, institutions should request and review contractual details and independent audits where possible to confirm vendor assertions. Treat any single vendor statement as one piece of evidence, not definitive proof.

Practical recommendations for instructors (best practices)​

Before using Copilot Chat​

  • Confirm you are signed in with your institutional Office 365 account so enterprise protections apply.
  • Understand the university’s policy on AI use in teaching and on handling student data.
  • Prepare a precise context statement for the model: course name, student level, learning outcomes, and intended use of the output.

Prompt recipes for common tasks​

  • Grading rubric (starter prompt): “Create a 4‑level rubric for a 1,000‑word persuasive essay in an undergraduate introductory history course. Include criteria for thesis clarity, use of evidence, organization, and writing mechanics. Provide short student‑facing descriptions for each level and a suggested score range for each criterion.”
  • Case study (starter prompt): “Draft a 600‑word narrative case study about a public policy decision affecting rural healthcare. Include two characters, a clear conflict, three learning questions, and a short list of suggested class activities.”
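A "reproducible prompt recipe" like the rubric starter above can also be parameterized so it is easy to reuse across courses. The sketch below is illustrative only: the `RubricSpec` structure and `rubric_prompt` helper are hypothetical names, not part of any Copilot tooling, and simply assemble the same prompt text from structured course context.

```python
from dataclasses import dataclass, field

@dataclass
class RubricSpec:
    """Course context an instructor supplies before prompting."""
    assignment: str
    course: str
    levels: int = 4
    criteria: list = field(default_factory=list)

def rubric_prompt(spec: RubricSpec) -> str:
    """Assemble a reusable rubric starter prompt from the course context."""
    criteria = ", ".join(spec.criteria)
    return (
        f"Create a {spec.levels}-level rubric for {spec.assignment} "
        f"in {spec.course}. Include criteria for {criteria}. "
        "Provide short student-facing descriptions for each level "
        "and a suggested score range for each criterion."
    )

# Reproduce the starter prompt from the recipe above.
spec = RubricSpec(
    assignment="a 1,000-word persuasive essay",
    course="an undergraduate introductory history course",
    criteria=["thesis clarity", "use of evidence",
              "organization", "writing mechanics"],
)
print(rubric_prompt(spec))
```

Swapping in a different `RubricSpec` (new course, new criteria, a 0–4 scale) regenerates a consistent prompt, which is the kind of repeatable workflow the workshop aims to leave participants with.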

Post‑generation checklist (always do these)​

  • Fact‑check dates, names, and domain‑specific claims.
  • Localize language to reflect institutional grading scales and course learning outcomes.
  • Run a bias and sensitivity read — check for stereotypes or exclusionary assumptions.
  • Add an authenticity layer (e.g., local data, campus‑specific references) to reduce ease of reproduction by students.
  • Save final versions in university‑managed storage and note versions and edits for transparency.

Classroom use case ideas​

  • Use AI‑generated rubrics as a starting point for department‑wide calibration exercises.
  • Assign AI‑varied case studies to small groups and ask students to compare scenarios and propose alternative interventions.
  • Teach students to critique AI outputs as part of digital literacy modules (evaluate reliability, bias, and evidence quality).

Operational checklist for IT and academic leaders​

  • Confirm licensing and age‑access settings for Copilot Chat across the tenant and document entitlement rules.
  • Review contractual language on data retention, telemetry, model training, and export controls; make this summary available to faculty.
  • Provide a FAQ and contained demo environment where instructors can safely experiment with non‑sensitive data.
  • Integrate Copilot guidance into campus training programs and new faculty orientation.
  • Coordinate with the registrar and academic integrity offices to update assessment policies and student guidance.
These operational steps align with Microsoft’s published guidance for onboarding Copilot Chat in educational settings and mirror best practices adopted at peer institutions.

Case studies and comparable institutional moves​

Several higher education institutions have already incorporated Copilot Chat into faculty training and student pilots. For example, some universities provide public pages describing how Copilot Chat is accessible to faculty through institutional Office 365 accounts and differentiate between Copilot Chat (tenant‑level chat) and licensed Copilot for Office apps for richer integration. These early adopters report time savings in content creation and emphasize the need to pair technical rollout with pedagogical training. WVU’s workshop model—hands‑on practice with explicit privacy and governance messaging—aligns with those early‑adopter playbooks.

Measuring success: what to track after a workshop​

  • Faculty adoption metrics: number of instructors using Copilot Chat in course preparation, and how frequently.
  • Time saved: self‑reported reduction in hours spent drafting rubrics, assignment prompts, and feedback.
  • Quality measures: instructor and student satisfaction with clarity and usefulness of rubrics and case studies.
  • Integrity incidents: any change in academic misconduct cases tied to assignment design or AI use, used to iterate assessment design.
  • Policy compliance: evidence that faculty are following data protection guidance and tenant best practices.
Collecting these measures helps determine whether the productivity gains translate into better student outcomes and institutional readiness.

Critical appraisal: realistic expectations and next steps​

WVU’s workshops are a pragmatic step toward normalizing responsible Copilot Chat use in teaching. They emphasize co‑authoring—keeping instructors involved in the creative and evaluative loop—which is the right posture for integrating generative AI into education. That said, the effectiveness of such programs depends heavily on follow‑through: clear institutional policies, transparent vendor contracts about data handling, and ongoing faculty development.
A key caveat: vendor claims about data not being used to train models and enterprise protections should be independently reviewed by university counsel and IT leadership. Institutions should solicit clarifying language in contracts and request technical documentation about telemetry, retention, and the scope of “no use for training” promises. Treat vendor assurances as starting points for governance, not endpoints.

Final assessment​

Workshops like WVU’s—practical, hands‑on, and governance‑aware—are the most responsible pathway for faculty to adopt generative AI in education. They teach instructors how to harness the efficiency of Microsoft Copilot Chat while preserving academic judgment, protecting student data, and redesigning assessments to reduce potential misuse. The benefits are tangible: time saved on repeated drafting, better accessibility and personalization, and increased capacity for faculty to focus on higher‑level instructional design.
At the same time, risks persist. Accuracy issues, bias, and privacy nuances require continuous vigilance. Institutional leaders must pair technical deployment with legal review, training, and assessment redesign. For instructors, the rule remains: use Copilot Chat to coauthor, not to abdicate authorship. WVU’s staged, practical workshop model provides a blueprint for that approach and a repeatable, classroom‑focused way to introduce an evolving technology into durable teaching practice.


Source: West Virginia University E-News | Workshop on coauthoring course materials with GenAI begins Oct. 16
 
