Australia’s top national security official quietly used Microsoft’s Copilot to draft speeches and internal communications. The timing of that use, revealed through freedom of information (FOI) disclosures, lands squarely alongside the federal government’s push to roll generative AI into everyday public‑service work, exposing a fast‑moving mix of productivity promise, governance gaps and new security vectors.
Background / Overview
Short, verifiable facts up front: FOI material released after a public statement by Hamish Hansford — the Department of Home Affairs’ Head of National Security, Commonwealth Counter‑Terrorism Coordinator and National Counter‑Foreign Interference Coordinator — shows logs of prompt‑and‑response exchanges with a Microsoft 365 Copilot instance. According to reporting based on that FOI, the released set runs to about 50 documents spanning several months; they detail Hansford asking Copilot to draft analytical notes, speeches and internal messages that were later edited and used as delivered material.

Those revelations arrive at a moment when Finance Minister Katy Gallagher publicly launched a coordinated, whole‑of‑government AI plan that formalises training, governance structures and a centrally hosted GovAI platform, including a GovAI Chat assistant, and requires every agency to appoint a senior AI lead. The government is explicitly encouraging public servants to use generative AI as a productivity tool, while signalling a need for tighter procurement, training and risk controls.

This collision of senior security officials using commercial generative AI, FOI disclosure of their prompts, and a government drive to normalise AI in the Australian Public Service (APS) creates an urgent case study in how to capture AI’s benefits without amplifying new risks.
What the FOI documents show
From concept to speech: the anatomy of a Copilot draft
The FOI logs show Hansford prompting Copilot on the morning of August 26 to “Write an analysis of the critical infrastructure environment where Australia has conceptually come from since 1980 to today; outline the emerging threats and then discuss what critical infrastructure can do immediately and into the future.” The assistant returned a near‑900‑word analytical briefing that Hansford used as the structural basis for a speech to the Australian Institute of International Affairs later that night. Sections of the delivered remarks closely mirror Copilot’s original output, with Hansford adding anecdotes, analogies and a more conversational cadence when speaking.

Other exchanges show iterative prompt engineering: Hansford asked Copilot to include specific Australian examples (the 2016 South Australia blackout, the foiled 2017 Etihad hijacking plot, JBS Foods’ ransomware incident and the COVID‑19 pandemic), and subsequent Copilot drafts incorporated those cases. The logs also document requests to insert analogies and academic references that Hansford then personally adapted in the oral delivery. Taken together, these records make two concrete points:
- Senior officials are not merely experimenting; they are using generative AI to produce substantive draft material that feeds public addresses and internal communications.
- The final outputs are human‑edited hybrids: AI provides the structural draft; humans provide judgment, localisation and tone.
Redactions, withheld drafts and personnel notes
Not all documents were released. Twelve items were withheld under exemptions on the basis that they concerned “personal interactions” with staff and intergovernmental communications that could adversely affect departmental operations. The FOI cover letter for the release — signed by the same official whose prompts are at issue — notes that some materials were “first drafts” intended for external bodies and were therefore redacted. That withholding highlights a tension: FOI regimes can produce transparency about AI‑assisted drafting, but agencies are also invoking operational secrecy where they judge disclosure would harm public administration.
The government’s APS AI plan and the institutional backdrop
In parallel to the FOI disclosure, the Department of Finance published a formal APS AI Plan and Minister Katy Gallagher delivered an address rolling out the policy levers: mandatory training, an AI Delivery and Enablement team (AIDE), and the expansion of an in‑house GovAI platform and GovAI Chat. The plan expects agencies to appoint Chief AI Officers (CAIOs) at a senior executive level and to provide guidance on using public and private generative AI tools up to the OFFICIAL classification level.

The plan has pragmatic aims: scale productivity gains across routine drafting, summarisation and search; avoid fragmented, agency‑by‑agency pilots; and centralise procurement and risk management. But it also acknowledges the governance burden: training, data classification, procurement clauses, provenance, audit logging and an AI review committee are all part of the roadmap. Finance expects GovAI Chat trials to begin on a phased schedule and has published guidance on which classes of data and use cases are appropriate for public tools.

The optics matter. An APS plan that pushes adoption at scale while allowing senior figures — including those directly responsible for national security — to use vendor‑hosted copilots raises the stakes on procurement, contractual protections and technical isolation.
Vendor sovereignty and technical controls: Microsoft’s in‑country option
One of the key technical levers governments use to limit vendor risk is data residency and processing location. Microsoft has announced an option to process Microsoft 365 Copilot interactions in‑country for selected markets (including Australia) by the end of 2025, and the company has publicly positioned this as a way to provide customers with greater sovereignty and regulatory alignment. That capability is a meaningful upgrade for agencies seeking to keep sensitive interaction telemetry and prompt logs within jurisdictional boundaries.

Home Affairs told reporters that their Copilot use was part of an internal pilot, that staff received training covering data safeguarding and privacy, and that contractual arrangements and technical safeguards prevent Microsoft employees or external parties from accessing departmental Copilot logs. Those assurances are necessary, but they are not a panacea: contractual protections must be precise, auditable and enforceable, and the technical architecture must be configured so that model backends, telemetry and audit trails remain within defined legal boundaries.
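To make the "defined legal boundaries" point concrete, here is a minimal sketch of how an agency‑side gateway could refuse to forward prompts to any endpoint outside approved jurisdictions. The gateway itself, the region names and the request shape are assumptions for illustration; they are not Microsoft's actual configuration surface, which is managed through the vendor's own tenancy and admin controls.

```python
# Hypothetical policy gate for an internal AI gateway: reject any request whose
# model endpoint would process data outside an approved jurisdiction.
# Region names and the request shape are illustrative assumptions, not a real Microsoft API.
from dataclasses import dataclass

APPROVED_REGIONS = {"australiaeast", "australiasoutheast"}  # assumed allow-list


@dataclass
class AssistantRequest:
    user_id: str
    classification: str    # e.g. "OFFICIAL", "PROTECTED"
    endpoint_region: str   # where the vendor says the prompt will be processed


class ResidencyViolation(Exception):
    pass


def enforce_residency(req: AssistantRequest) -> None:
    """Block the call before any prompt text leaves the agency boundary."""
    if req.endpoint_region.lower() not in APPROVED_REGIONS:
        raise ResidencyViolation(
            f"Endpoint region '{req.endpoint_region}' is outside the approved jurisdictions"
        )


if __name__ == "__main__":
    enforce_residency(AssistantRequest("u123", "OFFICIAL", "australiaeast"))   # passes
    try:
        enforce_residency(AssistantRequest("u123", "OFFICIAL", "westeurope"))  # blocked
    except ResidencyViolation as err:
        print("Rejected:", err)
```

The design choice worth noting is that the check sits in front of the vendor call: residency is enforced before a prompt leaves the agency, rather than relied upon as a contractual promise after the fact.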
Security risks: new attack surfaces and the EchoLeak case
Generative AI assistants change the attack surface in tangible ways. A real‑world example: EchoLeak (CVE‑2025‑32711), a critical zero‑click prompt‑injection vulnerability disclosed in June 2025, demonstrated how an adversary could embed malicious instructions into ordinary documents or emails that a Copilot‑style assistant would ingest during normal retrieval operations and thereby exfiltrate internal content without user interaction. Microsoft issued mitigations and a patch, and there is no public evidence of wide‑scale exploitation; nonetheless the episode made clear that AI‑aware threat modelling and new runtime controls are essential.

This is not a purely technical worry. For government use, risks include:
- Silent exfiltration of sensitive content when assistants retrieve or incorporate contextual material (attachments, speaker notes, chat history).
- Prompt injection delivered through seemingly innocuous files shared via email, Teams or collaboration platforms.
- Scope creep where an assistant’s internal connectors surface data beyond the immediate user’s remit.
- Audit gaps: if prompts and model versions are not immutably logged, reconstructing how a decision was formed becomes difficult — and that undermines accountability in administrative law contexts.
The EchoLeak episode is a concrete demonstration of why agencies must pair any Copilot deployment with runtime guards: strict content gating, connector allow‑lists, prompt sanitisation, automatic blocking of hidden metadata and tamper‑evident audit trails.
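As an illustration of what such runtime guards might look like in practice, the sketch below gates retrieved content before it reaches an assistant: it enforces a connector allow‑list, strips hidden metadata, and quarantines documents containing crude injection signatures. The connector names, patterns and document structure are assumptions for the example, not the controls Microsoft or Home Affairs actually deploy.

```python
# Illustrative pre-ingestion guard for a Copilot-style assistant. Connector names,
# injection patterns and the document shape are assumptions for this sketch; a
# production control would sit in the vendor platform and a dedicated DLP layer.
import re
from dataclasses import dataclass, field

ALLOWED_CONNECTORS = {"corporate-sharepoint", "records-system"}   # assumed allow-list
INJECTION_PATTERNS = [                                            # crude, illustrative signatures
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"forward .* to .*@", re.I),
]


@dataclass
class RetrievedDocument:
    source_connector: str
    body: str
    hidden_metadata: dict = field(default_factory=dict)  # speaker notes, comments, tracked changes


def gate_document(doc: RetrievedDocument) -> str:
    """Return sanitised text the assistant may ingest, or raise if the document is out of scope."""
    if doc.source_connector not in ALLOWED_CONNECTORS:
        raise PermissionError(f"Connector '{doc.source_connector}' is not on the allow-list")
    # Drop hidden metadata outright rather than passing it into the model context.
    doc.hidden_metadata.clear()
    for pattern in INJECTION_PATTERNS:
        if pattern.search(doc.body):
            raise ValueError("Possible prompt-injection content detected; document quarantined")
    return doc.body
```

Signature lists like this are deliberately simplistic and easy to evade; the point of the sketch is the placement of the control (between retrieval and the model context), not the detection logic itself.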
Governance, accountability and the human‑in‑the‑loop imperative
Generative AI excels at drafting. It does not carry legal responsibility, exercise judgement or accept accountability under administrative law. That distinction matters for APS processes that culminate in Cabinet submissions, ministerial briefings and decisions with real consequences.
Key governance controls that should be non‑negotiable:
- Mandatory human attestation: any high‑stakes deliverable (Cabinet submissions, legal advice, national security assessments) that used AI in drafting must carry a clear named human author who attests to verification of facts and sources.
- Provenance logging: every prompt, model identifier and returned output used as a draft must be recorded, retained and auditable for FOI and oversight purposes (a minimal logging sketch follows this list).
- Sensitivity gating: a hard, yes‑or‑no rule that prevents classified or PROTECTED‑level material from entering public or inadequately isolated model instances.
- Procurement clauses: contractual rights to inspect processing, rescind vendor access, and guarantee no training on government inputs unless explicitly agreed and compensated.
- Role‑based training: mandatory, scenario‑based training tied to performance frameworks and demonstrated literacy in prompt hygiene and hallucination detection.
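The provenance‑logging control above can be made tamper‑evident with a simple hash chain, sketched below in Python: each record commits to the previous one, so a retrospective edit to any prompt or output invalidates every later hash. The field names and in‑memory storage are assumptions for illustration; a real deployment would persist records under the government's recordkeeping rules.

```python
# Minimal sketch of tamper-evident provenance logging: each record hashes the
# previous entry, so any retrospective edit breaks the chain. Field names and
# in-memory storage are assumptions; a real system would write to WORM storage.
import hashlib
import json
import time


class PromptLedger:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, user: str, model_id: str, prompt: str, output: str) -> dict:
        prev_hash = self._entries[-1]["entry_hash"] if self._entries else "genesis"
        entry = {
            "timestamp": time.time(),
            "user": user,
            "model_id": model_id,   # exact model/version used to generate the draft
            "prompt": prompt,
            "output": output,
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any altered entry invalidates every later hash."""
        prev = "genesis"
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev_hash"] != prev or recomputed != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True
```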
These are not novel prescriptions; they reflect principles used in other high‑risk domains. What is new is the velocity of adoption and the multi‑jurisdictional complexity introduced by vendor models and cloud backends.
Culture and signalling: why leadership use matters
When a high‑profile national security official publicly praises an AI tool and is later revealed to have used it substantively, that sends a powerful cultural signal across the APS. There are two related effects:
- Evangelism through example: senior leaders’ visible use accelerates uptake among staff — often before governance and technical controls are fully mature. That can produce shadow adoption and policy lag.
- Automation bias risk: staff are more likely to defer to AI outputs when those outputs are normatively endorsed at senior levels, which increases the probability that flawed or incomplete AI text will enter decision pipelines without adequate scrutiny.
The FOI record in this case therefore matters less as an isolated curiosity and more as an indicator of cultural momentum that the APS plan seeks to harness — and that also needs to be carefully governed.
Practical recommendations for the APS (a pragmatic checklist)
- Immediate technical fixes
- Block AI ingestion of hidden metadata, speaker notes and comments unless explicitly allowed.
- Enforce connector allow‑lists and disable any automatic inclusion of external document content in RAG (retrieval‑augmented generation) contexts.
- Mandatory governance controls
- Require human attestation for any AI‑assisted document entering formal decision pipelines (see the attestation sketch after this checklist).
- Log all prompts, model versions and outputs into tamper‑evident records retained under government recordkeeping rules.
- Procurement and legal safeguards
- Insist on contractual guarantees of model provenance, explicit data residency, non‑training clauses and the ability to audit vendor processing.
- Negotiate enforceable SLAs for incident response and breach notifications specific to AI telemetry.
- Workforce and cultural measures
- Expand role‑based training and establish a certification process for “AI‑assisted author” roles.
- Build an anonymous incident and near‑miss reporting channel focused on AI hallucinations, data leakage and prompt injection attempts.
- Independent validation
- Commission third‑party red‑teaming, red‑cell exercises and independent audits of GovAI instances before scaling beyond pilot cohorts.
Steps like these are not bureaucratic red tape; they are the operational controls that reduce the probability of high‑impact mistakes while allowing staff to harness genuine productivity gains.
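For the human‑attestation item, one lightweight way to make the requirement operational is to attach a structured sign‑off record to every AI‑assisted document entering a decision pipeline. The schema below is a hypothetical illustration, not a prescribed APS format; the essential point is a named officer attesting that facts and sources were verified.

```python
# Sketch of a human-attestation record that travels with an AI-assisted document
# into a formal decision pipeline. Field names are illustrative assumptions, not a
# prescribed APS schema; the point is a named officer accepting accountability.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass(frozen=True)
class AIAssistanceAttestation:
    document_id: str
    attesting_officer: str   # named human author who accepts accountability
    ai_tools_used: tuple     # e.g. ("Microsoft 365 Copilot",)
    facts_verified: bool     # officer confirms facts and sources were checked
    classification: str      # e.g. "OFFICIAL"
    attested_at: str


def attest(document_id: str, officer: str, tools: tuple, classification: str) -> str:
    record = AIAssistanceAttestation(
        document_id=document_id,
        attesting_officer=officer,
        ai_tools_used=tools,
        facts_verified=True,
        classification=classification,
        attested_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record), indent=2)


if __name__ == "__main__":
    # Purely illustrative identifiers, not real documents or officers.
    print(attest("CAB-2025-014", "Jane Citizen", ("Microsoft 365 Copilot",), "OFFICIAL"))
```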
Strengths in the approach — and why the plan can work
The APS AI Plan contains sound elements:
- Central coordination (AIDE) helps avoid fragmented procurement and inconsistent technical controls across agencies. That centralisation is essential to manage vendor risk and craft common SLAs.
- Mandatory training and CAIO roles create accountability lines and help ensure capability uplift is not left to ad hoc team experiments.
- GovAI and in‑country processing options provide a path to sovereign control and reduce legal exposure from cross‑border model routing.
When implemented faithfully, these components position the APS to capture real productivity benefits — faster drafting, improved accessibility (for example, automatic note generation for staff with accessibility needs) and time savings on routine tasks.
Remaining questions and cautionary flags
Several consequential questions remain publicly unresolved:
- How will prompt logs be treated under FOI and archival law? The FOI release described here demonstrates they are discoverable; agencies must clarify retention, classification and redaction policies for prompt records.
- What legal test will satisfy the requirement that a “human decision‑maker” exercised the requisite mental process when AI materially shaped a submission or recommendation?
- How will procurement guardrails resist vendor pressure for telemetry or cloud routing that undermines sovereignty promises?
- Will CAIOs have teeth — budget, audit powers and statutory support — or will they become symbolic evangelists without enforcement capability?
If these questions are not explicitly addressed, the APS risks scaling tools faster than it can credibly audit and defend them.
Conclusion
The FOI disclosure that a senior Home Affairs official used Microsoft Copilot to craft speeches and internal draft material is a consequential, real‑world example of the dilemmas facing modern public administrations: the immediate and visible productivity benefits of generative AI versus the latent governance, legal and security risks that accompany it.
Australia’s newly published APS AI Plan offers a structured route to scaling AI adoption across the public service, with training, a central GovAI platform and senior AI leads. Those are sound foundations. But the Copilot case shows that early adoption is already happening at senior levels, and that FOI processes and security realities can expose the consequences of that adoption to public scrutiny.
The necessary policy reflex is straightforward: accelerate governance at the same pace as deployment. That means binding procurement protections, airtight technical controls (including in‑country processing where appropriate), mandatory human attestations, immutable prompt logging and independent red‑team validation before broad rollout. Done well, generative AI can be a powerful assistant to public servants; done poorly, it risks errors that are not merely embarrassing but legally and operationally consequential.
The public interest is best served by a model that embraces AI’s productivity benefits while insisting that human judgment, accountability and transparency remain non‑delegable.
Source: Startup Daily
Australia's national security boss used Microsoft’s Copilot to write counter-terrorism speeches and government communications