Southern Alberta Institute of Technology’s rollout of Microsoft Copilot Chat for SAIT accounts is a pragmatic, policy-aware approach that gives students, faculty, and staff a way to use generative AI with clear guardrails — but it also raises a set of operational and privacy questions that every campus IT office must address now. SAIT’s guidance states there are two Copilot Chat experiences: a secure version that requires signing in with SAIT credentials (identifiable by a green shield) and is approved for Unrestricted and Protected data under SAIT’s Data Governance Procedure (AD.3.3.1), and a public version that may only be used with Unrestricted (public) information. The school’s advice emphasizes ethical use, consent before entering personal data, respect for copyright, and a strict prohibition on feeding Confidential or Restricted material into any AI tool. This article synthesizes SAIT’s guidance, verifies the technical and policy claims against Microsoft documentation and independent reporting, assesses the strengths and risks, and offers a practical governance and operational playbook for campuses and organizations adopting Copilot Chat.
Background / Overview
Microsoft’s Copilot family now includes a multi‑tiered set of offerings: a broadly available, web‑grounded chat experience (often called Copilot Chat) and a paid, tenant‑aware product (Microsoft 365 Copilot) that can reason over an organization’s Microsoft Graph data and other tenant resources. The free Copilot Chat experience provides content‑aware assistance inside Office apps and the Edge browser, while the paid Microsoft 365 Copilot seat adds work grounding, priority model access, and enterprise controls — a distinction Microsoft clearly documents in its product and pricing pages.

SAIT’s brief mirrors these differences and layers them with institutional policy: use the secure, signed‑in Copilot Chat (green shield) for Unrestricted and Protected data, and keep Confidential and Restricted data out of any AI tool entirely. Those definitions map to SAIT’s AD.3.3.1 Data Governance procedure and the AD.2.15.3 Use of Artificial Intelligence Technologies policy, which together define data classifications and approved AI uses for students and staff. SAIT also calls out practical limits — for example, that the public Copilot cannot be safely used for Protected information and that local files are not shared automatically unless a user explicitly uploads them. This is broadly consistent with Microsoft’s design and vendor statements.
What SAIT is telling users — clear, actionable points
- Two Copilot experiences: a secure version for SAIT accounts (green shield) and a public web version for general use. The secure version requires login with SAIT credentials and is the only approved route for Protected data at the institute.
- Data classification rules: SAIT defines four levels — Unrestricted, Protected, Confidential, Restricted — and explicitly forbids entering Confidential or Restricted content into any AI tool.
- Ethics and consent: obtain consent before entering personal information; respect creators’ IP; don’t use AI for high‑stakes decisions affecting individuals; disclose AI usage in outputs.
- Practical cautions: other public AI tools should be assumed to be unsafe for Protected or higher classifications; local files are not uploaded automatically — files are only processed if explicitly shared or attached.
How the secure vs public distinction works (technical verification)
SAIT’s guidance hinges on the green shield indicator: when users sign into Copilot Chat in Microsoft Edge with their SAIT (work/school) account, a protected/shield icon appears and Microsoft treats the session as covered by enterprise data protections. Microsoft’s documentation confirms that a shield or “protected” badge in the Copilot UI signals enterprise data protection and that signing in with a work or school Entra account is the route to that protected experience. University IT pages and corporate guidance published by Microsoft also instruct users to verify the shield before entering anything beyond public content.

Microsoft’s product design separates web grounding from work grounding. Copilot Chat (free) is web‑grounded by default and can use web sources and general LLM reasoning for responses; Microsoft 365 Copilot (paid) can switch to work grounding, reasoning over mailbox, SharePoint/OneDrive, Teams, and other Graph data when permissions and licensing permit. Microsoft’s product pages and support documentation make this line explicit. The paid Microsoft 365 Copilot add‑on is publicly priced at approximately $30 per user per month (subject to eligibility and licensing prerequisites), a figure Microsoft lists in its commercial pricing materials.
Key technical confirmations:
- The green shield/protected badge is the indicator of a signed‑in, enterprise‑protected Copilot Chat session.
- Copilot Chat is web‑grounded by default; Microsoft 365 Copilot adds work/tenant grounding and additional enterprise features.
- Microsoft’s in‑app Copilot surfaces (Edge sidebar, Office app panes, Windows Copilot) are designed to keep the user’s active document in context and, if signed into an enterprise account, provide EDP (Enterprise Data Protection) and admin controls.
What Microsoft actually collects and how files are handled — verified guidance
Two technical claims SAIT makes deserve verification: (1) that local files are not automatically shared with Copilot, and (2) what data Microsoft collects from Copilot Chat sessions.
- Local files and on‑device content: Microsoft’s product communications and Windows Insider notes confirm that Copilot does not scan or upload a user’s entire system automatically. Copilot can surface recently accessed files or local items for convenience, but files are sent to Microsoft’s cloud only if the user explicitly attaches or uploads a file or gives permission for a specific action. In Windows Copilot and in the Office integrated Copilot workflows, attaching a file is a deliberate user action that grants Copilot permission to process that file.
- Data collection and retention: Microsoft’s privacy FAQ and file‑upload docs state that chat inputs/prompts, usage telemetry, and diagnostic data are collected to improve product performance. Files uploaded in a session are stored only for a limited period (Microsoft documents its storage windows and deletion policies), and Microsoft asserts that uploaded files are not used to train foundational generative models. Users retain controls to delete conversation histories and, in many enterprise configurations, to opt out of training and personalization features. That said, exact retention windows and telemetry details can vary by product line and tenant configuration; organizations should verify their tenant settings and contractual protections.
Strengths of SAIT’s approach
- Clarity and low friction: SAIT’s binary recommendation — use the green‑shielded Copilot for Protected/Unrestricted, avoid Confidential/Restricted — is easy to understand and implement across campus. That lowers the cognitive load for students and staff who might otherwise guess what is safe to paste into an AI. SAIT’s combination of UI guidance (green shield) and policy mapping (AD.3.3.1 and AD.2.15.3) is a practical model for other campuses.
- Aligned with Microsoft design: SAIT’s rules map cleanly to Microsoft’s product controls (work account sign‑in, enterprise data protection, in‑app Copilot panes), which simplifies enforcement and reduces surprise behaviours. Microsoft explicitly documents the shield/protected mode for enterprise sessions and provides admin controls for tenants.
- Ethics and consent baked in: Requiring consent before adding personal data, mandating respect for IP, and forbidding AI for high‑stakes automated decisions is good policy hygiene. Those explicit prohibitions reduce legal and reputational risk and place the burden of human judgement where it belongs.
- Practical user guidance: SAIT’s FAQ‑style answers (e.g., “local files are not shared automatically” and “do not input Confidential or Restricted data”) align with Microsoft’s own guidance and common university practice, which makes it easier for end users to comply.
Risks, gaps, and where SAIT should tighten policy
- Green shield is necessary but not sufficient. The presence of the shield indicates enterprise data protection, but it does not automatically make every use case safe. Reasoning across tenant data, agent actions, and metered connectors can still create exposure vectors. Administrators must combine the shield check with permissions, labeling, and workflow controls. Microsoft acknowledges this operational complexity in its Copilot Control System guidance.
- Agent and connector risk. Copilot Studio agents and metered connectors can execute actions or query external services. An agent that can send mail, call APIs, or query HR systems greatly increases risk unless agent creation and runtime behavior are strictly governed. SAIT’s policy should explicitly require administrative approval and runtime monitoring for any agents that read or act on Protected or higher data. Independent product analyses emphasize agent governance as an adoption prerequisite.
- Human error and accidental disclosure. User education must go beyond “don’t paste Confidential data.” Real incidents often arise from poorly understood workflows — for example, when a user uploads a spreadsheet containing hidden columns with PII, or when chat history is retained and later accessed inadvertently. Policies should require periodic review of training exercises and lab activities to ensure compliance. Microsoft provides admin controls and logging, but the tenant is responsible for enforcement.
- Model hallucination and overreliance. Copilot outputs can be wrong, incomplete, or biased. SAIT’s guidance to “carefully review generated content for accuracy” is necessary, but operational rules are also required: no mission‑critical decisions, human sign‑off for legal/assessment outputs, and specific review processes for student work that affects grading or outcomes. This ties back to academic integrity and the institution’s learning objectives.
- Unverifiable third‑party claims. Statements about exactly how Microsoft stores telemetry, or guarantees about model training, should be treated as conditional: contractually confirm the tenant’s protections with Microsoft or through your reseller. Any statement presented to end users about “we never use your data to train models” should be accompanied by documentation showing tenant controls and opt‑out settings.
Practical controls and governance checklist for SAIT and similar institutions
The following checklist converts policy into operational controls. Prioritize the first six categories for immediate action.
- Identity and Access
- Enforce Azure AD/Entra sign‑in for Copilot Chat access; require MFA for admin roles.
- Block or limit personal Microsoft account sign‑ins for staff and students when using institutional systems.
- UI and UX Controls
- Publish simple, visible instructions: “Verify the green shield before entering Protected data.”
- Configure browser and Edge deployments so the shield and sign‑in prompts are unambiguous.
- Agent Governance
- Create an approval workflow for any Copilot Studio agents that access institutional data or perform actions (email, LMS changes, record updates).
- Maintain an inventory of agents and connectors; log their actions and decisions.
- Data Handling and Labeling
- Map AD.3.3.1 classifications to everyday examples on a quick‑reference card for staff.
- Restrict or redact Protected fields in datasets before any upload or testing with Copilot (a minimal redaction sketch follows this checklist).
- Audit, Logging and Incident Response
- Enable Copilot analytics and audit logs; integrate them with the SIEM (see the audit-log triage sketch after this checklist).
- Add Copilot misuse and AI‑related incidents to the incident response playbook.
- Training and Awareness
- Run regular “AI hygiene” workshops demonstrating safe vs unsafe prompts.
- Use short micro‑learning modules for faculty, TAs, and student staff on the green shield, file upload mechanics, and consent.
- Procurement and Contractual Protections
- Ensure reseller agreements and Microsoft terms reflect retention and non‑training commitments as negotiated.
- Validate data residency and export controls for any Protected or regulated datasets.
- Academic Integrity Rules
- Update course AI policies to describe permitted, required, and prohibited uses of Copilot for assignments, with examples mapped to data classifications.
- Pilot and Measure
- Begin with 2–3 low‑risk pilot use cases (e.g., lecture note summarization, syllabus drafting, meeting minutes).
- Track KPIs: active usage rate, successful session rate, time saved, and incidents reported.
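Two of the checklist items above translate directly into small utility scripts. First, for the data handling and labeling step, the Python sketch below shows one way to strip columns flagged as Confidential or Restricted from a CSV export before anything is shared with Copilot Chat. It is a minimal illustration, not SAIT tooling: the column names, classification map, and file names are hypothetical placeholders that an institution would replace with its own AD.3.3.1 mappings.

```python
import csv

# Hypothetical mapping of column names to SAIT-style classifications.
# An institution would maintain this centrally, keyed to AD.3.3.1.
COLUMN_CLASSIFICATION = {
    "student_id": "Restricted",
    "legal_name": "Confidential",
    "program": "Unrestricted",
    "course_code": "Unrestricted",
    "instructor_notes": "Protected",
}

# Classifications that must never reach an AI tool, per SAIT's guidance.
BLOCKED = {"Confidential", "Restricted"}


def redact_for_copilot(in_path: str, out_path: str) -> list[str]:
    """Copy a CSV, dropping any column classified Confidential or Restricted.

    Returns the list of removed columns so the user can confirm the redaction
    before sharing the output file with Copilot Chat.
    """
    with open(in_path, newline="", encoding="utf-8") as src:
        reader = csv.DictReader(src)
        columns = reader.fieldnames or []
        # Unknown columns default to "Restricted" so the script fails closed.
        removed = [c for c in columns
                   if COLUMN_CLASSIFICATION.get(c, "Restricted") in BLOCKED]
        kept = [c for c in columns if c not in removed]

        with open(out_path, "w", newline="", encoding="utf-8") as dst:
            writer = csv.DictWriter(dst, fieldnames=kept)
            writer.writeheader()
            for row in reader:
                writer.writerow({c: row[c] for c in kept})

    return removed


if __name__ == "__main__":
    dropped = redact_for_copilot("schedule_export.csv", "schedule_for_copilot.csv")
    print(f"Removed {len(dropped)} column(s) before sharing: {dropped}")
```

The fail-closed default matters: a column that has not been classified is treated as Restricted rather than silently passed through.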
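Second, for the audit and logging step, a campus without a full SIEM integration can still triage exported audit records offline. The sketch below assumes a CSV export from a Microsoft Purview audit search and filters on an operation-name keyword; the column names ("CreationDate", "UserIds", "Operations", "AuditData") and the "Copilot" keyword are assumptions to verify against the tenant's actual export schema and Microsoft's audit documentation.

```python
import csv
from collections import Counter


def summarize_copilot_activity(export_path: str, keyword: str = "Copilot") -> Counter:
    """Count Copilot-related audit events per user from an audit-log export.

    Column names below are assumptions; confirm them against the real export.
    """
    per_user = Counter()
    with open(export_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            operation = row.get("Operations", "")
            if keyword.lower() not in operation.lower():
                continue
            # The AuditData column carries a JSON payload with richer detail
            # (app host, referenced resources) that a SIEM rule would inspect.
            per_user[row.get("UserIds", "unknown")] += 1
    return per_user


if __name__ == "__main__":
    counts = summarize_copilot_activity("audit_export.csv")
    for user, n in counts.most_common(10):
        print(f"{user}: {n} Copilot-related audit events")
```

A production deployment would stream these records into the SIEM rather than summarizing them ad hoc, but the same filtering logic applies.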
Recommended communications and student‑facing language
- “Do not paste Confidential or Restricted data into any AI tool. Use the green shielded Copilot only for Protected and Unrestricted data.”
- “If you are uploading data involving personal information, get written consent from the people involved before using Copilot.”
- “Always validate AI outputs; treat them as draft assistance rather than authoritative text.”
- “Report unexpected outputs or privacy concerns to the SAIT help desk immediately.”
Real‑world scenarios and policies mapped to SAIT’s classification
- Student transcript or grade spreadsheet (contains PII) — Classified: Restricted. Action: Never upload to Copilot Chat; use institutional tools with approved data flows and human review.
- Internal program schedule with staffing notes (sensitive outside SAIT) — Classified: Protected. Action: Use the green‑shielded Copilot only; ensure agent access is blocked; log activity.
- Public marketing copy or course brochure text — Classified: Unrestricted. Action: Either the public or the secure Copilot is acceptable; prefer the secure version to avoid ambiguity.
- Instructor exam items (sensitive within SAIT) — Classified: Confidential. Action: Do not input into any AI tool; handle inside approved LMS processes.
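The same mapping can be expressed as a small lookup table that a help desk script or intranet page could reuse. The Python sketch below encodes the classification-to-action rules described above; the labels follow SAIT's AD.3.3.1 terminology, but the helper itself is an illustrative assumption, not an official SAIT tool.

```python
from enum import Enum


class Classification(Enum):
    UNRESTRICTED = "Unrestricted"
    PROTECTED = "Protected"
    CONFIDENTIAL = "Confidential"
    RESTRICTED = "Restricted"


# Encodes the scenario mapping above: which Copilot surfaces, if any,
# are acceptable for each classification level.
ALLOWED_SURFACES = {
    Classification.UNRESTRICTED: {"public Copilot", "secure (green shield) Copilot"},
    Classification.PROTECTED: {"secure (green shield) Copilot"},
    Classification.CONFIDENTIAL: set(),  # no AI tool permitted
    Classification.RESTRICTED: set(),    # no AI tool permitted
}


def copilot_guidance(level: Classification) -> str:
    """Return a one-line instruction for the given data classification."""
    surfaces = ALLOWED_SURFACES[level]
    if not surfaces:
        return f"{level.value}: do not enter this data into any AI tool."
    return f"{level.value}: permitted in {', '.join(sorted(surfaces))} only."


if __name__ == "__main__":
    for level in Classification:
        print(copilot_guidance(level))
```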
Final assessment — does SAIT’s Copilot Chat guidance go far enough?
SAIT’s guidance is well‑scoped: it gives a clear, enforceable boundary (green shield for secure use, strict ban on Confidential/Restricted inputs) and pairs that with ethical rules and consent requirements. That’s a solid foundation and aligns with how Microsoft frames Copilot Chat and Microsoft 365 Copilot in its product materials.

However, efficacy depends on three operational realities:
- Configuration discipline: Admins must ensure tenant settings, agent permissions, and audit logging are correctly applied and maintained.
- Ongoing education: Users must repeatedly be trained on what “green shield” actually protects, and where the boundaries remain ambiguous.
- Runtime governance for agents: Without strict approval workflows, Copilot agents present the most significant escalation risk.
Conclusion
SAIT’s Copilot Chat policy is a model of practical campus AI governance: it acknowledges the benefits of on‑campus AI experimentation while setting hard boundaries that protect privacy, integrity, and institutional reputation. The green shield UI affordance is a useful, verifiable signal to users that they are in an enterprise‑protected session, and Microsoft’s product documentation confirms the technical underpinning of that signal. Yet the shield alone cannot shoulder the full burden — administrative controls, agent governance, contractual protections, and continuous user education are required to make the program safe at scale. Institutions that adopt these layered controls will gain the productivity benefits of Copilot while managing the privacy, compliance, and academic‑integrity risks that accompany generative AI.

Source: SAIT - Southern Alberta Institute of Technology Microsoft Copilot Chat