
The Philippine Amusement and Gaming Corporation’s recent, agency-wide orientation on Microsoft Copilot marked a clear and deliberate shift: embrace generative AI for day-to-day productivity while anchoring adoption to governance, data protection, and administrative controls. The online briefing, delivered as part of Development Policy Research Month under the theme “Reimagining Governance in the Age of AI,” brought 98 staff from PAGCOR’s corporate and branch offices together for a practical session on Copilot Chat and the governance responsibilities that come with introducing AI into a regulator that handles highly sensitive personal and financial records.
Background
PAGCOR’s orientation was positioned as a practical primer rather than a technical rollout: trainers explained how Copilot Chat can accelerate routine tasks—drafting correspondence, summarising reports, generating ideas and answering work-related queries—while stressing the need for strict data handling and oversight. The session was led by an AI Workforce Specialist and explicitly contrasted the publicly grounded Copilot Chat experience with the licensed, tenant-grounded Microsoft 365 (M365) Copilot that can access an organisation’s Microsoft Graph content when administrators enable it.

Microsoft’s own documentation makes that operational difference clear: Copilot Chat is web‑grounded by default and available with qualifying Microsoft 365 business subscriptions, while M365 Copilot requires an add‑on license and tenant configuration to provide answers grounded in internal work content such as emails, files, chats and calendars. Administrators can also configure controls that change how Copilot interacts with tenant data.
Why this briefing matters: context for a regulator
Regulators are custodians of extremely sensitive material—personally identifiable information (PII), KYC records, transaction logs and enforcement materials. Introducing AI assistants into that environment raises three hard constraints that PAGCOR’s session tried to surface:
- Confidentiality: Copilot interactions must not become a leakage path for player identities, payment instruments or investigation files.
- Auditability: Any AI-assisted communication or analysis used in policy, enforcement or public-facing work must be traceable and defensible.
- Vendor & contract risk: Technical assurances from vendors are not substitutes for contractual rights over telemetry, retention and non-use-for-training clauses.
What was covered in the session
The orientation combined practical task demonstrations with governance rules and high-level policy signposts:
- Overview of Copilot Chat capabilities: drafting, summarisation, brainstorming, spreadsheet assistance.
- The operational difference between web-grounded Copilot Chat and work-grounded M365 Copilot, and why it matters for PII and investigative files.
- Emphasis on data protection standards, organisational policy compliance and never pasting sensitive internal records into web-grounded chat.
- High-level mention of Copilot agents (Researcher, Analyst, Prompt Coach, Writing Coach) that can accelerate workflows for licensed users, and a reminder that some agent capabilities require add-on licensing or administrative enablement.
The technical reality: what IT admins must understand
Understanding how Copilot can access data is fundamental to safe deployment:
- Copilot Chat (web-grounded): By default, uses public web indexes and LLM outputs. It does not access tenant data unless users explicitly upload files as part of a prompt or operate Copilot inside an app context where open content is accessible. Copilot Chat is generally included with many Microsoft 365 business subscriptions.
- Microsoft 365 Copilot (work-grounded): Requires an add-on license and can be configured to use Microsoft Graph data—email, Teams chats, OneDrive/SharePoint files and calendar items—when tenant administrators enable that access. This mode supports richer, organisation-aware responses and integrations across Office apps.
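For administrators, that licensing boundary is verifiable in the tenant. As a minimal sketch, assuming an access token obtained elsewhere and a SKU name that must be confirmed against the tenant’s own subscribedSkus list, the Microsoft Graph licenseDetails endpoint can report which users actually hold the work-grounded add-on:

```python
# Minimal sketch: check whether a user holds the M365 Copilot add-on license
# via the Microsoft Graph licenseDetails endpoint. Token acquisition is out of
# scope here, and COPILOT_SKU is an assumption -- confirm the real
# skuPartNumber in your tenant (GET /subscribedSkus).
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
COPILOT_SKU = "Microsoft_365_Copilot"  # assumption: verify in your tenant

def has_copilot_license(user_id: str, token: str) -> bool:
    """Return True if the user's assigned SKUs include the Copilot add-on."""
    resp = requests.get(
        f"{GRAPH}/users/{user_id}/licenseDetails",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    skus = {d.get("skuPartNumber") for d in resp.json().get("value", [])}
    return COPILOT_SKU in skus
```

A license census built this way feeds directly into the licence-mapping control recommended later in this piece: seats that exist but are not tied to a justified role are the first candidates for revocation.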
Strengths in PAGCOR’s approach
PAGCOR’s orientation displays several clear strengths worth noting for any IT or compliance leader watching this rollout:
- Education-first posture: Bringing nearly 100 staff through a single orientation reduces the chance that employees will discover consumer AI independently and accidentally expose sensitive data. Awareness is the easiest first line of defence.
- Governance framing: The briefing did not treat Copilot as a productivity magic bullet; it explicitly tied adoption to accountability and data protection, signalling cross-functional interest in controls. That message helps align IT, legal and records teams from the outset.
- Pragmatic feature focus: Demonstrating immediate, low‑risk productivity wins—draft templates, meeting summarisation, spreadsheet assistance for de‑identified data—creates quick value while keeping sensitive tasks offline or in tightly controlled pilots.
Gaps and risks the orientation must now address
Awareness sessions are necessary but not sufficient. To move from briefing to safe, operational adoption, PAGCOR must close several critical gaps the orientation identified only at a high level.

1. Telemetry and vendor commitments
Who retains logs of prompts and responses, where are they stored, and how long are they retained? The orientation emphasised policy but did not provide the contractual and technical specifics that procurement must secure. Without explicit telemetry and retention clauses, a regulator cannot reliably reconstruct an AI-related incident or meet records‑management obligations.

2. Procurement and non‑training assurances
Product marketing claims about enterprise privacy are not contractual guarantees. Contracts must include explicit non‑training clauses (no use of tenant prompts to improve models), deletion rights, audit access and data residency assurances. These are non‑negotiable for organisations that process PII and financial records.

3. Human‑in‑the‑loop and audit trails
Generative models hallucinate. Any AI‑assisted output used for public statements, policy drafts, enforcement actions, or legal conclusions must pass named human review and be recorded in the formal records management system. The orientation needs to be followed by binding workflows that require sign-off and logging for decision‑critical outputs.

4. Overstated adoption metrics
The cited “86%” figure is a national AI usage metric among Filipino knowledge workers, not a direct measurement of M365 Copilot license penetration. Procurement and rollout decisions must be driven by seat counts, tenant telemetry and auditable enablement plans—not headline statistics.

Practical, prioritised next steps (a 90‑day plan)
To convert a one‑day orientation into enforceable practice, PAGCOR (and agencies in similar positions) should adopt a clear, time-bound workplan:
- Appoint an AI governance owner and establish a cross‑functional board (IT, Legal, Records, Security, HR).
- Run a 90‑day pilot limited to low‑risk functions (communications, HR templates, non‑sensitive drafting) with signed use agreements.
- Translate policy into enforceable tenant configurations:
  - Disable web grounding by default for users handling sensitive data.
  - Configure DLP and sensitivity labeling to block or quarantine prompts that include PII or financial identifiers (a toy pre-screen sketch follows this list).
- Negotiate procurement clauses that require:
  - Detailed telemetry access and retention windows.
  - Non‑use for model training or explicit model training opt‑out.
  - Data deletion and audit rights.
- Define human‑in‑the‑loop workflows for any AI-assisted official output: named reviewer, approval timestamp, and archival into records management.
- Instrument monitoring and KPIs:
  - Track productivity metrics (time saved), human‑review pass rate, DLP events, anomalous prompt detection and adoption depth.
- Run tabletop incident response exercises that include AI‑specific scenarios and forensic reconstruction of prompts/outputs.
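To make the DLP item above concrete, the following sketch shows the shape of a pre-screen gate that blocks obviously sensitive prompts before they reach a web-grounded assistant. The patterns and the screen_prompt helper are illustrative assumptions, not Microsoft Purview’s actual rule syntax; a production deployment would enforce this through Purview DLP policies rather than hand-rolled regexes.

```python
# Illustrative prompt pre-screen: refuse prompts that appear to contain PII or
# financial identifiers. Patterns are deliberately simple examples, not
# production DLP rules.
import re

BLOCK_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "ph_mobile": re.compile(r"(?:\+63|0)9\d{9}\b"),  # Philippine mobile format
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any patterns the prompt matches (empty = allow)."""
    return [name for name, rx in BLOCK_PATTERNS.items() if rx.search(prompt)]

hits = screen_prompt("Summarise the complaint filed by juan@example.com")
if hits:
    print(f"Blocked: prompt matched {hits}")  # quarantine for review, do not send
```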
Recommended technical controls and guardrails
Practical controls that security and IT teams should prioritise:
- Enforce sensitivity labels and automatic DLP policies that mark high‑risk files and prevent upload to web‑grounded prompts.
- Map licences and admin groups: only grant M365 Copilot (work-grounded) access to roles that demonstrably need internal document access.
- Central prompt logging: collect prompts, responses and metadata into a secured, tamper-evident store aligned with records retention rules (a hash-chain sketch follows this list).
- Role-based disablement of web grounding for investigative, enforcement and financial teams.
- Post-generation classifiers: run outputs through automated checks that flag potential PII leakage or regulated-material exposure before content is shared.
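On the central-logging control, one workable pattern is a hash-chained store in which every record commits to the hash of the previous one, so any retroactive edit or deletion breaks the chain on verification. The class below is a toy sketch of that idea only; a real store would add access controls, trusted time-stamping and retention handling aligned with records rules.

```python
# Toy tamper-evident prompt log: each record embeds the SHA-256 hash of the
# previous record, making retroactive edits detectable on verification.
import hashlib
import json
import time

GENESIS = "0" * 64

class PromptLog:
    def __init__(self) -> None:
        self.records: list[dict] = []
        self._last_hash = GENESIS

    def append(self, user: str, prompt: str, response: str) -> None:
        record = {
            "ts": time.time(),
            "user": user,
            "prompt": prompt,
            "response": response,
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = self._last_hash
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute the chain; False means a record was altered or removed."""
        prev = GENESIS
        for r in self.records:
            if r["prev_hash"] != prev:
                return False
            body = {k: v for k, v in r.items() if k != "hash"}
            prev = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if r["hash"] != prev:
                return False
        return True
```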
Use cases to prioritise — and those to avoid
High‑value, low‑risk candidates for early Copilot use:
- Drafting internal newsletters, non‑sensitive press release first drafts and standardised templates.
- Meeting summarisation with mandatory human validation before distribution.
- Spreadsheet assistance on de‑identified datasets and formula generation for internal analysts (a pseudonymisation sketch follows this list).
- Prompt and writing coaching to standardise first‑draft quality.
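For the de‑identified spreadsheet work above, a minimal pseudonymisation sketch (assuming pandas and a hypothetical player_id column) shows the idea: replace direct identifiers with salted hashes so analysts can still join and aggregate rows without exposing real identities.

```python
# Minimal pseudonymisation sketch: hash identifier columns with a salt before
# any Copilot-assisted analysis. Column names and the salt-handling approach
# are illustrative assumptions.
import hashlib
import pandas as pd

SALT = "store-and-rotate-this-secret-separately"  # assumption: managed secret

def pseudonymise(df: pd.DataFrame, columns: list[str]) -> pd.DataFrame:
    out = df.copy()
    for col in columns:
        out[col] = out[col].astype(str).map(
            lambda v: hashlib.sha256((SALT + v).encode()).hexdigest()[:16]
        )
    return out

raw = pd.DataFrame({"player_id": ["PH-001", "PH-002"], "wager": [1500, 320]})
safe = pseudonymise(raw, ["player_id"])  # share `safe`, never `raw`
```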
Use cases to avoid until controls and contracts mature:
- Uploading KYC files, payment logs, transaction histories or case files into web‑grounded chat tools.
- Allowing Copilot-generated outputs to serve as final legal, enforcement or adjudicative determinations without named reviewer sign-off.
- Broadly enabling M365 Copilot across investigative or enforcement teams until telemetry and auditability are validated.
Measuring success: a balanced scorecard
Move beyond vanity metrics—measure the mix of productivity, safety and adoption (a toy scorecard rollup follows this list):
- Time saved per task (drafting, summarisation).
- Human‑review pass rate (proportion of AI outputs approved without substantive edits).
- Security events: DLP triggers and anomalous prompt activity.
- Adoption depth: active users, agent usage frequency, departmental penetration.
- User satisfaction and confidence in outputs (periodic surveys).
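As a toy illustration of how these KPIs roll up, the sketch below computes the pass rate, DLP event count and average time saved from hypothetical review-log entries; in practice the inputs would come from tenant telemetry and the records system.

```python
# Toy scorecard rollup over hypothetical review-log entries.
reviews = [
    {"approved_unedited": True,  "dlp_triggered": False, "minutes_saved": 18},
    {"approved_unedited": False, "dlp_triggered": False, "minutes_saved": 9},
    {"approved_unedited": True,  "dlp_triggered": True,  "minutes_saved": 0},
]

n = len(reviews)
pass_rate = sum(r["approved_unedited"] for r in reviews) / n
dlp_events = sum(r["dlp_triggered"] for r in reviews)
avg_saved = sum(r["minutes_saved"] for r in reviews) / n

print(f"human-review pass rate: {pass_rate:.0%}")  # 67%
print(f"DLP events: {dlp_events}; avg minutes saved: {avg_saved:.1f}")
```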
Regional significance: why other regulators will watch
The Philippines is a focal point for rapid workplace AI adoption in Southeast Asia; decisions taken by national regulators can ripple across regional policy discussions. Effective, well-documented governance frameworks created now will become referenced practice for neighbouring jurisdictions that must reconcile consumer protection, enforcement and technological innovation. PAGCOR’s choices—particularly in procurement clauses, telemetry access and human‑in‑the‑loop rules—will matter beyond its internal efficiency gains.

Final assessment
PAGCOR’s orientation was the right first step: it raised staff awareness, clarified the difference between web and work grounding, and foregrounded governance. Those are essential first moves for any regulator introducing AI into workflows that touch PII and financial data. The real test, however, will be operational follow-through:
- Convert governance rhetoric into enforceable tenant configurations, contractual commitments and human-review workflows.
- Instrument telemetry and auditing so incidents are reconstructable and records obligations are met.
- Phase the rollout by risk, measure outcomes against a balanced scorecard, and iterate.
Practical takeaways (short checklist)
- Appoint an AI governance owner and cross‑functional board.
- Pilot Copilot in low‑risk teams only; require signed use agreements.
- Enforce DLP and sensitivity labeling; disable web grounding for sensitive roles.
- Insist on procurement clauses for telemetry, non‑training, deletion and audit rights.
- Require named human review for any AI‑assisted public or enforcement output.
- Track time saved, human‑review pass rates and security incidents as core KPIs.
Source: Gambling Insider, “PAGCOR staff oriented on responsible use of AI assistant”