PAGCOR’s recent, agency‑wide orientation on Microsoft Copilot signalled a deliberate move: the regulator is not only experimenting with generative AI for productivity gains but is explicitly trying to frame that experimentation inside governance, data protection and operational controls before broad deployment.
Background
The Philippine Amusement and Gaming Corporation (PAGCOR) oversees licensing, compliance and enforcement across a sector that processes large volumes of personally identifiable information (PII), financial transaction data and investigative records. Introducing AI assistants into that environment raises immediate questions about confidentiality, auditability and vendor risk—questions the orientation aimed to surface rather than sidestep.

Microsoft’s Copilot family now ships in two meaningful flavours: the free Copilot Chat experience that is web‑grounded by default and the licensed Microsoft 365 Copilot that can be work‑grounded—able to consult an organisation’s Microsoft Graph (email, files, calendar and Teams content) when administrators enable it. The distinction is operationally critical for any regulator considering AI tied to internal documents.
Regional context also matters: surveys and local Microsoft reporting show exceptionally high interest and use of AI among Filipino knowledge workers. That enthusiasm explains why public institutions are racing to educate staff and craft safe‑use policies rather than discover risky behaviours by accident. However, usage statistics are frequently misreported or conflated with broader AI adoption trends; careful parsing of those metrics is essential when procurement and deployment decisions follow.
What PAGCOR presented: a concise summary
The orientation, delivered as a Microsoft Copilot Chat masterclass during Development Policy Research Month, introduced staff to Copilot’s core productivity capabilities—drafting correspondence, summarising reports, generating ideas and answering work‑related queries—while foregrounding governance guardrails such as enterprise data protection and policy compliance. Trainers emphasised the practical difference between web‑grounded and work‑grounded sessions and recommended caution around posting sensitive information into free, web‑grounded chat instances.

Key features and agents showcased to participants included vendor‑native tools that accelerate common tasks:
- Drafting & editing (Writing Coach, Prompt Coach)
- Synthesis & research (Researcher agent)
- Data analysis (Analyst agent for spreadsheet and deeper reasoning tasks)
- Workflow agents for repeatable tasks and surveys
Technical reality check: grounding, admin controls and telemetry
Understanding how Copilot actually interacts with data is non‑negotiable for regulators. Microsoft’s own documentation lays out two grounding modes:
- Web‑grounded Copilot Chat — included with qualifying Microsoft 365 business subscriptions, draws on web‑indexed data and public models and does not use organisational Microsoft Graph content by default. This mode is useful for general research and idea generation but is unsafe for PII or case materials.
- Work‑grounded Microsoft 365 Copilot — requires an add‑on license and, when configured by admins, can combine web data with internal documents, email, calendars and Teams content via Microsoft Graph. This mode allows Copilot to produce contextually richer outputs that reference internal files—but only if tenant administrators enable those capabilities and set appropriate access controls.
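In practice, the web/work distinction has to be encoded somewhere as an access decision before seats are enabled. The minimal Python sketch below assumes a hypothetical internal policy table; the role names, RolePolicy structure and may_use helper are illustrative, not part of any Microsoft admin API. It simply expresses the rule that work grounding requires a licensed seat and explicit clearance, while excluded roles get nothing.

```python
from dataclasses import dataclass
from enum import Enum

class Grounding(Enum):
    WEB = "web"    # free Copilot Chat: web-indexed data, no Microsoft Graph content
    WORK = "work"  # licensed Microsoft 365 Copilot: Graph-grounded when admins enable it

@dataclass(frozen=True)
class RolePolicy:
    role: str
    allowed: set          # grounding modes this role is cleared to use
    licensed_seat: bool   # whether a paid, work-grounded seat is assigned

# Hypothetical policy table: sensitive roles get no Copilot access at all,
# and work grounding is only meaningful where a licensed seat exists.
POLICIES = {
    "communications": RolePolicy("communications", {Grounding.WEB}, False),
    "policy_analyst": RolePolicy("policy_analyst", {Grounding.WORK}, True),
    "investigations": RolePolicy("investigations", set(), False),
}

def may_use(role: str, mode: Grounding) -> bool:
    """Return True only if the role is cleared for the requested grounding mode."""
    policy = POLICIES.get(role)
    if policy is None:
        return False
    if mode is Grounding.WORK and not policy.licensed_seat:
        return False
    return mode in policy.allowed

print(may_use("communications", Grounding.WORK))  # False: no licensed seat
print(may_use("investigations", Grounding.WEB))   # False: role excluded entirely
```

The point of encoding the rule, even this crudely, is that enablement decisions become reviewable rather than living in individual administrators' heads.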
Telemetry and prompt logging deserve special attention. Generative AI systems often record prompts and responses for debugging, feature improvement or billing; regulators must know what telemetry is shared with vendors, how long it’s stored, whether the vendor uses it for model training, and what contractual deletion or non‑training clauses exist. The orientation raised governance expectations but, according to observers, did not fully detail telemetry retention and vendor commitments—an operational gap that must be closed before widening access.
Why governance matters for a gaming regulator
Gaming regulators handle data that is attractive to fraudsters and sensitive to privacy law and public trust. The risks are concrete:
- PII exposure: player identities, KYC materials and payment records.
- Financial leakage: transaction histories and reconciliation data that could be misused.
- Investigative integrity: enforcement files and evidence that require strict confidentiality.
Strengths of PAGCOR’s approach
PAGCOR’s orientation demonstrated several immediate strengths that other regulators should note:
- Education first, enforcement expectations second. Running an agency‑wide session reduces risky discovery‑learning behaviour—employees are less likely to paste confidential content into public chats if they understand the difference between web and work grounding.
- Governance‑first messaging. Positioning Copilot adoption inside a governance conversation aligns procurement, IT, legal and operations around controlled rollout instead of ad hoc BYOAI experiments.
- Practical, low‑risk use cases highlighted. Demonstrating value in non‑sensitive areas (HR templates, press drafting, meeting summarisation) offers immediate productivity lifts while limits on sensitive tasks keep risk low.
- Linking to broader anti‑illicit gaming efforts. The orientation complements PAGCOR’s existing education and enforcement drives, making AI training part of a larger public‑interest mission rather than a one‑off tech demo.
Gaps, risks and what the orientation under‑emphasised
The orientation was the correct opening chapter; the next chapters must be operational. Critical gaps that require immediate attention include:
- Telemetry, retention and vendor commitments. The orientation emphasised policy but reportedly lacked concrete technical answers about what telemetry Microsoft retains and shares, for how long, and whether vendor‑side data is used for model training. These are procurement‑level questions that must appear in contracts and SOWs.
- Procurement and third‑party risk. Using commercial Copilot brings supply‑chain risk. Contracts need explicit clauses on data residency, non‑training or non‑use for model improvement, deletion rights, audit access and indemnities. The orientation did not translate governance rhetoric into enforceable procurement language.
- Human‑in‑the‑loop and auditability. Generative assistants produce confidently phrased outputs that may be incorrect (hallucinations). The agency must define who must verify AI outputs, what constitutes a decision‑critical output, and how AI assistance is recorded in official archives and Freedom of Information (FOI) regimes. The orientation should be followed by binding review workflows.
- Overstated adoption metrics. The session referenced an “86%” figure tied to local Copilot adoption; that number conflates general AI use among Filipino knowledge workers with product‑specific Copilot licensing penetration. Local Work Trend reporting shows very high AI usage among Filipino workers, but licensed, tenant‑grounded Microsoft 365 Copilot seat penetration is a distinct, lower figure requiring procurement verification. Treat headline numbers with caution.
Practical, prioritized recommendations (what PAGCOR should do next)
These are actionable steps to translate orientation into accountable, auditable capability.
- Adopt a phased deployment strategy
- Pilot Copilot seats with low‑risk functions (communications, HR templates, non‑sensitive drafting).
- Expand to medium‑risk groups (policy analysts, licensing admins) only after DLP, labeling and logging are validated.
- Reserve decision‑critical functions (investigations, enforcement) until audit, provenance and human‑review workflows are formalised.
- Translate policy into enforceable technical controls
- Configure tenant‑level DLP and sensitivity labels to block or quarantine prompts containing PII, account identifiers, or investigative references (an illustrative prompt‑screening sketch follows this list).
- Disable web grounding for roles handling sensitive materials; require explicit admin enablement for any web access.
- Strengthen procurement language
- Insist on non‑training, deletion and audit rights in vendor agreements.
- Require data residency and clear telemetry disclosures, including retention windows and accessible logs for forensic review.
- Implement robust human‑in‑the‑loop processes
- For any AI‑assisted public communication or regulatory decision, require a logged named reviewer and an approval record.
- Ensure AI outputs that become official records are archived in the records management system and are auditable.
- Train to competence and certify pilot users
- Deliver role‑specific, scenario‑based training that includes prompt engineering, redaction practice and clear “Do / Don’t” cards for front‑line staff.
- Make training mandatory and track completion before enabling Copilot for a user.
- Monitor, measure and iterate
- Use Copilot analytics and tenant logs to track adoption, anomalous prompts, DLP triggers and error rates.
- Establish a balanced scorecard: productivity gains + safety metrics + adoption + user satisfaction (not just headline seat counts).
- Extend incident response to AI‑specific cases
- Add forensic steps to reconstruct prompts, output history and model versioning in case of a leak or suspicious outcome.
- Tie AI incidents to legal and records teams early to preserve evidentiary chains.
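To make the DLP recommendation concrete, the following is a minimal Python sketch of a prompt pre‑screening step of the kind a blocking or quarantine rule would perform. The pattern set, the Philippine mobile‑number format and the CASE‑reference format are assumptions for illustration only; in a real deployment this enforcement belongs in tenant‑level DLP policies and sensitivity labels, not in application code.

```python
import re

# Illustrative patterns only: real DLP rules are configured in the tenant,
# not hand-rolled in code. These regexes stand in for the kinds of
# identifiers a blocking or quarantine policy would target.
PII_PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone_ph":    re.compile(r"(?:\+63|\b0)9\d{9}\b"),     # approximate PH mobile format
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # approximate payment card format
    "case_ref":    re.compile(r"\bCASE-\d{4}-\d{3,6}\b"),   # hypothetical case-file format
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_rule_names) for a draft prompt."""
    hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]
    return (len(hits) == 0, hits)

ok, hits = screen_prompt("Summarise the complaint filed under CASE-2024-0187.")
print(ok, hits)  # False ['case_ref'] -> quarantine for review instead of sending
```

A quarantined prompt would be routed to a reviewer rather than silently rejected, so staff learn why a submission was blocked instead of working around the control.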
Use cases that make sense now — and those to avoid
High‑value, low‑risk immediate uses:
- Drafting non‑sensitive communications (internal newsletters, press release drafts).
- Meeting summarisation for internal coordination, with mandatory human validation.
- Excel assistance for de‑identified or templated datasets (formula help, cleaning scripts).
Uses to avoid until controls, contracts and review workflows are in place:
- Uploading case files, KYC documents, transaction logs or raw player‑identifying data into web‑grounded chat.
- Using AI to produce enforcement decisions, final legal conclusions or anything requiring legal defensibility without explicit human sign‑off.
Measuring success: KPIs that matter
Focus on a balanced set of metrics to determine whether Copilot delivers value without compromising safety (a small scorecard sketch follows the list):
- Time saved per activity (drafting, summarisation, spreadsheet prep).
- Human‑review pass rate: proportion of AI outputs approved without substantive edits.
- Security incidents: DLP triggers, prompt leak events, anomalous access flagged by monitoring.
- Adoption depth: active feature usage by designated power users and department penetration.
- Cost control: agent message volumes and metered usage to prevent surprise billing.
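A balanced scorecard only works if these measures are computed and read together rather than cherry‑picked. The small Python sketch below, with hypothetical field names and sample values, shows one way a pilot team might collapse its telemetry into that combined view:

```python
from dataclasses import dataclass

@dataclass
class PilotMetrics:
    minutes_saved_per_task: float  # sampled or self-reported time savings
    review_pass_rate: float        # share of AI outputs approved without substantive edits
    dlp_triggers: int              # prompts blocked or quarantined during the period
    active_users: int              # users with meaningful Copilot activity
    licensed_seats: int            # seats paid for in the pilot
    agent_messages: int            # metered agent usage, for cost tracking

def scorecard(m: PilotMetrics) -> dict[str, float]:
    """Collapse pilot telemetry into a single balanced view."""
    return {
        "productivity_minutes": m.minutes_saved_per_task,
        "quality_pass_rate": m.review_pass_rate,
        "safety_incidents": float(m.dlp_triggers),
        "adoption_depth": m.active_users / max(m.licensed_seats, 1),
        "metered_messages_per_user": m.agent_messages / max(m.active_users, 1),
    }

sample = PilotMetrics(12.5, 0.82, 3, 41, 50, 2600)
for name, value in scorecard(sample).items():
    print(f"{name}: {value:.2f}")
```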
The “86%” claim — parsing the data
The orientation cited an “86%” figure tied to Copilot adoption in the Philippines. Close examination shows the figure is better read as an indicator of very high AI use among Filipino knowledge workers rather than licensed M365 Copilot penetration across organisations. Microsoft’s regional reporting confirms that Filipino workers rank among the world’s most active AI users, but BYOAI and unlicensed tool use are significant contributors to those numbers. Conflating general AI use with licensed Copilot seat penetration risks over‑estimating organisational readiness and under‑scoping procurement and licensing needs. Procurement decisions should be based on licence counts, tenant enablement plans and audited telemetry—not headline percentages alone.

Balancing opportunity and risk: a realistic verdict
PAGCOR’s orientation was the right opening act: it prioritised staff awareness, emphasised governance and avoided a naïve “flip‑the‑switch” rollout. For a regulator that juggles privacy, financial integrity and public trust, that posture is essential. The orientation reduced the immediate behavioural risk of shadow AI use and clarified core technical distinctions that trip up many organisations.

However, awareness alone is not a governance programme. To convert a pilot into a safe, durable capability PAGCOR must:
- Convert presentation‑level guidance into enforceable technical controls and procurement clauses.
- Mandate human review for decision‑critical outputs and ensure AI‑assisted records enter formal archives (a sketch of what such an approval record might look like follows below).
- Require transparent vendor telemetry commitments and contractual rights to audit and deletion.
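In practice, mandating human review means producing a record that names the reviewer and survives in the archive. A minimal Python sketch of such an approval record, with hypothetical field names and validation rules, might look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ApprovalRecord:
    """Audit entry linking an AI-assisted output to a named human reviewer."""
    output_id: str           # identifier of the draft produced with AI assistance
    reviewer: str            # named, accountable reviewer (not a shared account)
    decision: str            # "approved", "approved_with_edits", or "rejected"
    decision_critical: bool  # flags outputs that feed regulatory decisions
    notes: str = ""
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def archive(record: ApprovalRecord, store: list) -> None:
    """Append-only archiving stub; a real system would write to the records
    management platform so the entry is discoverable under FOI rules."""
    if record.decision_critical and record.decision == "approved" and not record.notes:
        raise ValueError("decision-critical approvals require reviewer notes")
    store.append(record)

log: list = []
archive(ApprovalRecord("draft-0142", "j.delacruz", "approved_with_edits", False,
                       "tone adjusted; figures verified"), log)
print(len(log), log[0].reviewer)
```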
Quick governance checklist (for immediate action)
- Assign an AI governance owner and create a cross‑functional board (IT, Legal, Records, Security, HR).
- Publish a short, accessible AI usage policy: approved tools, prohibited data types, escalation rules.
- Configure tenant controls: disable web grounding for sensitive roles; enforce DLP and sensitivity labels.
- Require role‑specific training and a signed Copilot use agreement for pilot participants.
- Log prompts and outputs centrally with retention aligned to records management rules (see the logging sketch after this checklist).
- Negotiate procurement contracts with explicit non‑training, deletion and audit clauses.
- Run a 90‑day pilot, measure productivity and safety KPIs, then iterate.
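For the logging item above, the following is a minimal sketch of central prompt/output capture with a retention sweep. The local JSONL file, field names and five‑year window are assumptions for illustration; production logging would flow to the tenant’s audit and records‑management systems under the agency’s actual retention schedule.

```python
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Hypothetical retention window; the real figure must come from the agency's
# records management schedule, not from code.
RETENTION = timedelta(days=5 * 365)
LOG_PATH = Path("copilot_prompt_log.jsonl")

def log_interaction(user: str, role: str, prompt: str, response: str) -> None:
    """Append a prompt/response pair to a central, append-only JSONL log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "prompt": prompt,
        "response": response,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

def purge_expired() -> int:
    """Drop entries older than the retention window; return how many were kept."""
    if not LOG_PATH.exists():
        return 0
    now = datetime.now(timezone.utc)
    kept = []
    for line in LOG_PATH.read_text(encoding="utf-8").splitlines():
        entry = json.loads(line)
        if now - datetime.fromisoformat(entry["timestamp"]) <= RETENTION:
            kept.append(line)
    LOG_PATH.write_text("\n".join(kept) + ("\n" if kept else ""), encoding="utf-8")
    return len(kept)

log_interaction("j.delacruz", "communications", "Draft a newsletter intro", "draft text")
print(purge_expired())  # 1: nothing has aged past the retention window yet
```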
Conclusion
PAGCOR’s Microsoft Copilot orientation is an instructive case study for public‑sector AI adoption: start with education, foreground governance, and then enforce technical and contractual safeguards before enabling broad access. The regulator’s early decision to teach staff about grounding modes and limit adoption to governed pilots aligns with modern best practice. The next stage must translate high‑level principles into enforceable controls—DLP, sensitivity labeling, telemetry transparency, procurement clauses and human review workflows—so that productivity gains do not come at the cost of public trust or legal exposure. If PAGCOR follows that disciplined path, Copilot can be a practical ally in regulatory work; if it treats the orientation as an end rather than a beginning, the agency risks exposure that regulators are uniquely ill‑equipped to absorb.

Source: thechronicle.com.ph PAGCOR Enhances Governance with Microsoft Copilot AI Orientation
Source: Casino Guardian PAGCOR Prioritizes Responsible AI Integration with Staff Training on Microsoft Copilot