PAGCOR’s agency-wide orientation on Microsoft Copilot on 30 September represented a decisive, education-first pivot: teach staff how to use AI assistants for routine productivity while making governance, data protection and human oversight non‑negotiable features of any rollout. The online Masterclass, led by AI Workforce Specialist Christeen Padilla and attended by 98 employees across corporate and branch offices, introduced Microsoft Copilot Chat’s drafting, summarisation and ideation capabilities—but framed those features inside strict rules about what counts as safe use, when to request tenant-grounded (work) responses, and how to avoid exposing sensitive regulatory data. This was not a product demo alone; it was a governance briefing that signalled PAGCOR intends to pair innovation with operational safeguards.
Background
PAGCOR delivered the orientation as part of Development Policy Research Month under the theme “Reimagining Governance in the Age of AI.” The session focused on practical, low‑risk productivity gains—drafting correspondence, summarising reports, creating meeting notes and assisting with spreadsheets—while explicitly differentiating two operational modes of Microsoft’s Copilot offerings: the default, web‑grounded Copilot Chat and the licensed, work‑grounded Microsoft 365 (M365) Copilot that can access organisational content when administrators enable it. Trainers emphasised the simple but critical policy: do not paste personally identifiable information (PII), KYC documents, payment details or investigative case material into web‑grounded chat windows.
Why this matters: gaming regulators process large volumes of highly sensitive financial and identity data. Any productivity tool that touches documents or prompts carries the potential to create leakage paths, misstatements or auditability gaps. PAGCOR’s session treated the introduction of Copilot as an organisational change problem—one that requires licences, admin configuration, DLP, contractual controls and human workflows—rather than merely a user training event.
What PAGCOR covered (concise, verifiable summary)
- The orientation introduced Microsoft Copilot Chat and demonstrated everyday uses: drafting email templates, summarising long documents, generating ideas and producing first‑draft text for internal use.
- Trainers outlined two grounding modes: Copilot Chat’s web‑grounded responses versus M365 Copilot’s potential access to Microsoft Graph content (emails, files, chats, calendars) when a tenant license is assigned and administrators enable that capability. This distinction was a central theme.
- The session stressed governance safeguards: data protection standards, adherence to internal policies, prompt redaction best practices (a minimal redaction sketch follows this list) and the immediate prohibition on dropping sensitive documents into the free, web‑grounded chat.
- Participants were introduced to Copilot agents—specialised assistants such as Researcher and Analyst that can automate multi‑step tasks—while being warned these expand the governance surface and require administrative enablement and licence review.
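To make the redaction guidance above concrete, here is a minimal, illustrative sketch of a pre‑prompt redaction check. It is not PAGCOR’s tooling, and the patterns are deliberately rough placeholders; a production deployment would rely on a vetted PII/DLP library with patterns tuned to Philippine identifier formats.

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted PII/DLP
# library and patterns tuned to the identifiers the agency actually handles.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?63|0)\d{9,10}\b"),   # rough PH mobile-number shape
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # payment-card-like digit runs
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PII with placeholders and report what was found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    draft = "Summarise the complaint from juan.delacruz@example.com, mobile 09171234567."
    clean, found = redact(draft)
    print(clean)   # placeholders instead of the address and number
    print(found)   # ['email', 'phone']
```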
Microsoft Copilot: the technical reality (what administrators must know)
Understanding how Microsoft frames Copilot is essential to any safe rollout. Microsoft’s public documentation draws a clear operational line:
- Copilot Chat (web‑grounded) is included with many Microsoft 365 business subscriptions and produces answers grounded in web indexes and general LLM outputs. It does not access an organisation’s Microsoft Graph content by default; users can upload files to a chat prompt but, otherwise, the chat uses web sources.
- Microsoft 365 Copilot (work‑grounded) requires an add‑on Copilot license and, when administratively enabled, can access Microsoft Graph content (emails, files, Teams chats, calendar items) to produce organisation‑aware responses. Admins see a Work / Web toggle when users are licensed.
- Administrators must decide which grounding modes are enabled for which groups and implement DLP, sensitivity labels and logging accordingly (a policy‑gate sketch follows this list).
- Licensing and procurement matter: enabling work‑grounded Copilot is not merely a feature flip; it is an entitlement that must be provisioned by licence and configured for safe access.
- Telemetry, prompt logging and vendor‑side retention are contractual issues. Security teams must verify what telemetry Microsoft retains and negotiate deletion, audit and non‑training clauses where required. PAGCOR’s briefing flagged this as an open procurement concern that needs follow‑up.
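The grounding‑mode and DLP points above boil down to a policy decision an administrator encodes once and enforces everywhere. The sketch below is illustrative only: the label names, role names and the `may_use_in_prompt` helper are assumptions for this article, not Microsoft APIs; real enforcement would be configured through Microsoft Purview sensitivity labels, DLP policies and the tenant’s Copilot admin settings.

```python
from dataclasses import dataclass

# Hypothetical label and role names for illustration; a real tenant would read
# these from Purview sensitivity labels and directory group membership.
BLOCKED_IN_WEB_MODE = {"Confidential", "Highly Confidential", "Investigative"}
WEB_GROUNDING_DISABLED_ROLES = {"enforcement", "investigations", "finance"}

@dataclass
class Document:
    name: str
    sensitivity_label: str

def may_use_in_prompt(doc: Document, user_role: str, grounding: str) -> bool:
    """Model the admin policy: what may enter a web- vs work-grounded prompt."""
    if grounding == "web":
        if user_role in WEB_GROUNDING_DISABLED_ROLES:
            return False                       # web grounding switched off for these roles
        return doc.sensitivity_label not in BLOCKED_IN_WEB_MODE
    if grounding == "work":
        # Work-grounded Copilot still honours existing file permissions; here we
        # simply require that the document carries a sensitivity label at all.
        return bool(doc.sensitivity_label)
    return False

kyc_file = Document("player_kyc.pdf", "Highly Confidential")
print(may_use_in_prompt(kyc_file, "policy_analyst", "web"))   # False: label blocks web use
print(may_use_in_prompt(kyc_file, "policy_analyst", "work"))  # True: tenant controls apply
```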
The adoption baseline: parsing the “86%” figure
During the orientation, the briefing cited a headline statistic that has been widely reported: 86% of Filipino knowledge workers use AI at work. That figure originates from Microsoft’s 2024 Work Trend Index reporting for the Philippines and reflects broad AI use—not necessarily licensed M365 Copilot seat penetration. The same index notes that BYOAI (bring‑your‑own‑AI) behaviour is common, which explains the urgency behind organisational training: employees are already experimenting with tools, often outside enterprise control.
Two important clarifications:
- The 86% number indicates AI use among Filipino workers (including consumer tools, browser extensions and third‑party chat apps), not the proportion of organisations with tenant‑level Copilot deployments or Copilot add‑on licences. Treat it as evidence of demand and behaviour, not licence coverage.
- Procurement planning should be driven by seat counts and tenant enablement plans—verified by IT via license inventories and contract terms, as in the seat‑coverage sketch below—rather than headline percentages. The orientation correctly cautioned against conflating general AI enthusiasm with safe, enterprise‑ready deployments.
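As a hedge against planning from headline percentages, seat coverage can be computed from an exported licence report. The sketch below assumes a hypothetical CSV export with `Licenses` and `Department` columns; actual export formats vary and should be confirmed with IT before anyone budgets from the numbers.

```python
import csv
from collections import Counter

def copilot_seat_summary(path: str, copilot_sku_hint: str = "Copilot") -> dict:
    """Summarise how many exported user rows carry a Copilot add-on entitlement."""
    total, with_copilot = 0, 0
    by_department = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            if copilot_sku_hint.lower() in row.get("Licenses", "").lower():
                with_copilot += 1
                by_department[row.get("Department", "unknown")] += 1
    return {
        "total_users": total,
        "copilot_seats": with_copilot,
        "coverage_pct": round(100 * with_copilot / total, 1) if total else 0.0,
        "seats_by_department": dict(by_department),
    }

# Hypothetical export file name, shown only as a usage example:
# print(copilot_seat_summary("m365_license_export.csv"))
```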
Strengths in PAGCOR’s approach
PAGCOR’s orientation showed several responsible design choices that other regulators and public bodies can emulate:
- Education‑first posture. Launching an agency‑wide session reduces the risk of shadow AI behaviour and helps set a single baseline for safe practices. This was an effective use of a single, synchronous event to set expectations.
- Governance framing from day one. The organizers led with data protection and policy compliance, rather than product hype. That signals a cross‑functional intent—security, legal and records management need to be at the table as adoption progresses.
- Practical, low‑risk demonstrations. Showing how Copilot can improve everyday office tasks (templates, summaries, formula help) gives staff immediate, usable value without exposing high‑risk content. This creates early wins while keeping sensitive workflows offline.
- Explicit licensing nuance. Drawing the practical distinction between web‑grounded chat and tenant‑grounded M365 Copilot reduces confusion and helps map use cases to procurement needs. Microsoft’s documentation supports the same distinction.
Gaps, risks and urgent next steps
Awareness without operational follow‑through would be a partial victory. The briefing rightly surfaced issues that must be resolved quickly:
- Telemetry and vendor commitments. The orientation flagged vendor telemetry and retention as unresolved. Administrators must obtain contractual clarity on what Microsoft logs, how long logs are kept, whether prompts are eligible for removal and whether organisational prompts are used for model training. These are procurement and legal questions that require explicit contract language.
- Human‑in‑the‑loop for decision‑critical outputs. Generative AI can hallucinate or produce confidently phrased inaccuracies. Any output used in licensing decisions, enforcement communications or public statements must be subject to named human review, logged approvals and archival into the agency records management system (a sample approval record follows this list). The orientation recommended this and it must be formalised.
- Technical controls are not automatic. Enabling M365 Copilot without DLP policies, sensitivity labels and role‑based access invites exposure. Administrators should default to web‑grounding disabled for investigative, enforcement and financial teams and only enable work‑grounded access with validated controls.
- Overreliance on headline metrics. The “86%” headline tells a story of worker behaviour, not of organisational readiness. Procurement must be conservative—budget and license projections should be based on audited seat counts and pilot telemetry rather than regional adoption figures.
- Algorithmic fairness and explainability for enforcement use. If Copilot agents are later repurposed for fraud detection, player behaviour monitoring or e‑KYC, PAGCOR will need governance that addresses bias, explainability and evidentiary standards. These are heavier governance obligations than for routine internal drafts. Local reporting shows PAGCOR is already evaluating AI tools for monitoring illegal platforms and player behaviour—steps that require even stronger safeguards.
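One way to formalise the human‑in‑the‑loop requirement flagged above is to attach a small, structured approval record to every AI‑assisted public output. The sketch below is a minimal illustration; the field names and the `approval_record` helper are assumptions for this article, not part of any records system PAGCOR uses.

```python
import hashlib
import json
from datetime import datetime, timezone

def approval_record(draft_text: str, final_text: str, reviewer: str,
                    prompt: str, model_version: str) -> dict:
    """Build the metadata that should accompany an AI-assisted public output."""
    return {
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,                      # a named human, not a team alias
        "model_version": model_version,
        "prompt": prompt,
        "draft_sha256": hashlib.sha256(draft_text.encode()).hexdigest(),
        "final_sha256": hashlib.sha256(final_text.encode()).hexdigest(),
        "human_edited": draft_text != final_text,  # flags unreviewed pass-throughs
    }

record = approval_record(
    draft_text="AI first draft of the advisory...",
    final_text="Edited advisory approved for release...",
    reviewer="j.delacruz",
    prompt="Draft a public advisory on responsible play (no personal data).",
    model_version="copilot-chat-2025-09",
)
print(json.dumps(record, indent=2))
```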
Practical, prioritised roadmap (recommended next 90 days)
- Appoint an AI governance owner and create a cross‑functional steering group (IT, Legal, Records, Security, HR, Communications). Make this governance body the single place for licensing decisions and pilot approvals.
- Run a staged 90‑day pilot with measurable KPIs:
- Start with low‑risk teams (communications, HR, internal templates).
- Expand to medium‑risk teams (policy analysts) only after validating DLP, labels and logging.
- Reserve enforcement/investigations until telemetry and human‑review controls are operational.
- Translate policy into enforced tenant settings:
- Enforce sensitivity labels and automatic DLP that prevents labelled documents from being uploaded to web‑grounded chat.
- Disable web grounding by default for roles that handle PII or investigative materials.
- Centralise prompt logging and store it in a tamper‑evident archive aligned with records retention policies (a hash‑chained logging sketch follows this roadmap).
- Strengthen procurement and contracts:
- Require non‑training and non‑use clauses for organisational prompts unless explicit opt‑in is negotiated.
- Obtain clear retention windows for telemetry and deletion rights.
- Negotiate audit access and indemnities appropriate for a public regulator.
- Enforce human review workflows:
- For any AI‑assisted public output, require a named reviewer and a logged approval.
- Archive AI‑assisted drafts and final versions in the records system with metadata about prompts, reviewers and model version.
- Train and certify pilot users:
- Deliver role‑specific scenario training (redaction practice, prompt engineering, verification checklists).
- Issue “Do / Don’t” quick cards for front‑line staff and require signed use agreements for pilot participants.
- Monitor and measure:
- Track a balanced scorecard: time saved on routine tasks, human‑review pass rates, number of DLP incidents and anomalous prompt patterns. Don’t rely only on active‑user counts.
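The tamper‑evident prompt archive called for in the roadmap can be illustrated with a hash‑chained, append‑only log: each entry commits to the hash of the previous one, so any later edit or deletion is detectable on verification. This is a minimal sketch under that assumption; a production archive would add signing, write‑once storage and retention controls, and could anchor the latest hash in a separate system for stronger guarantees.

```python
import hashlib
import json
from datetime import datetime, timezone

class PromptLog:
    """Append-only prompt log where each entry chains the hash of the previous one."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64            # genesis value for the chain

    def append(self, user: str, prompt: str, grounding: str) -> dict:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "grounding": grounding,            # "web" or "work"
            "prompt": prompt,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != recomputed:
                return False
            prev = e["hash"]
        return True

log = PromptLog()
log.append("analyst01", "Summarise the attached de-identified report.", "work")
print(log.verify())   # True; becomes False if any stored entry is altered later
```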
Use cases recommended now — and those to avoid
High‑value, low‑risk candidates for early Copilot use:
- Drafting non‑sensitive press releases and internal newsletters (with human editing).
- Generating meeting summaries for internal coordination (with mandatory human validation).
- Excel help on de‑identified templates, formula advice and data‑cleaning scripts (see the de‑identification sketch below).
- Internal prompt and writing coaching to standardise first drafts.
Uses to avoid until controls are validated:
- Uploading KYC files, payment logs, transaction histories or raw player‑identifying data into web‑grounded chat windows.
- Using Copilot outputs as final legal, enforcement or adjudicative conclusions without a named reviewer.
- Broadly enabling work‑grounded Copilot across investigative teams until telemetry and auditability are validated.
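For the Excel use case above, a small de‑identification pass before anything is shared with a web‑grounded assistant keeps the spreadsheet’s structure without the identities. The sketch below is illustrative only; column names such as `player_name` and `passport_no` are hypothetical placeholders, not PAGCOR data fields.

```python
import pandas as pd

# Column names are hypothetical; the point is to strip identifying fields and
# keep only the structure an assistant needs to suggest formulas or cleaning steps.
def deidentify(df: pd.DataFrame, id_columns: list[str]) -> pd.DataFrame:
    out = df.drop(columns=[c for c in id_columns if c in df.columns])
    # Replace any remaining free-text name column with stable placeholders.
    if "player_name" in out.columns:
        out["player_name"] = [f"PLAYER_{i:04d}" for i in range(len(out))]
    return out

raw = pd.DataFrame({
    "player_name": ["Juan D.", "Maria S."],
    "passport_no": ["P1234567", "P7654321"],
    "wager_total": [15200.50, 980.00],
    "sessions":    [12, 3],
})
safe = deidentify(raw, id_columns=["passport_no"])
print(safe)   # structure preserved, identifiers removed or replaced
```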
Legal and records considerations particular to regulators
Regulatory agencies face specific legal constraints that make AI governance harder than in many private‑sector contexts:
- Records retention and FOI obligations may apply to AI‑assisted communications. Keep logs that satisfy transparency and auditability requirements.
- Third‑party contractual protections are essential; vendor marketing claims are not substitutes for explicit contract language. Insist on deletion and non‑use clauses.
- Algorithmic fairness and explainability become critical when AI influences enforcement or licensing. Document training datasets, decision logic and human oversight policies where relevant.
Final assessment — opportunity balanced by duty
PAGCOR’s orientation was the right first act: educate broadly, emphasise governance, and map feature differences that commonly confuse end users. The agency avoided the two common mistakes—either forbidding employee experimentation entirely or flipping a switch without controls. Instead, the session positioned Copilot as a managed capability that can deliver productivity gains if adopted thoughtfully.
The next and harder chapter is operational: convert guidance into enforceable tenant configurations, contractual protections and human‑in‑the‑loop workflows. That plumbing—DLP enforcement, sensitivity labelling, central prompt logging, procurement clauses and logged approvals—is where public trust and operational safety will be won or lost. If PAGCOR implements the practical roadmap above, the regulator can harness AI to improve drafting, analysis and monitoring without sacrificing confidentiality or auditability. If it treats this as a one‑off awareness event, the same tools that speed reporting will become new channels of data leakage and reputational risk.
Short checklist — immediate actions to implement this week
- Appoint an AI governance owner and form a cross‑functional oversight board.
- Issue an immediate “no PII in web chat” policy and circulate quick “Do / Don’t” guidance to all staff.
- Audit current Microsoft 365 licences to map who has Copilot add‑on entitlements.
- Configure tenant DLP and sensitivity labels to block labelled files from web prompts.
- Negotiate vendor commitments around telemetry, deletion rights and non‑training clauses before enabling broader use.
PAGCOR’s masterclass was an example of how a regulator can start a responsible AI journey: practical demonstrations, governance emphasis and a clear distinction between web‑grounded experiments and tenant‑grounded enterprise deployments. The agency’s public signalling—education first, accountability always—was the correct posture. The test of success will not be in a single training session but in the next six months: whether PAGCOR converts guidance into enforceable policy, technical controls and procurement safeguards that make AI‑enabled productivity auditable, defensible and safe for the public it serves.
Source: Gambling Insider, “PAGCOR staff oriented on responsible use of AI assistant”