Australia's AI Push: Copilot Drafts and the National GovAI Plan

Australia’s head of national security quietly used a generative AI chatbot to draft speeches and internal communications while the federal government simultaneously launched a sweeping “whole-of-government” AI plan that will push tools like Microsoft Copilot and a purpose-built GovAI assistant across the public service.

Background

The employee at the centre of the story is Hamish Hansford, who on 17 July 2025 began serving as the Department of Home Affairs’ Head of National Security and also holds the roles of Commonwealth Counter‑Terrorism Coordinator and National Counter‑Foreign Interference Coordinator. Internal records released under freedom of information (FOI) processes show exchanges between Hansford and a Microsoft Copilot instance that include prompts asking the tool to produce analysis of the critical infrastructure environment and drafts used to construct public remarks and personnel communications.
The disclosures arrived as Finance Minister Katy Gallagher unveiled a government-wide AI strategy intended to accelerate generative AI adoption across the Australian Public Service (APS). That plan commits to training staff, appointing Chief AI Officers in agencies, expanding an in‑house platform (branded in public reporting as “GovAI”) and rolling out a GovAI Chat assistant to give public servants a secure, APS‑fronted conversational AI on their desktops and laptops. Early trial data cited by government briefings and reporting indicate mixed results: many users reported faster drafting and improved output quality, while a large share of participants also said they had to make “moderate to significant” edits to AI outputs.
Taken together, the FOI material and the government’s launch of a rapid adoption program provide an urgent case study in the tension between speed and oversight when powerful text-generating tools enter the hands of senior officials who handle national security, regulatory and highly sensitive information.

What the FOI documents show​

  • The FOI records detail prompts and responses exchanged between a senior Home Affairs official and Microsoft Copilot. The prompts include instructions to write a historical analysis of critical infrastructure, outline emerging threats, and to draft communications to internal teams.
  • Material from those exchanges supplied the “bones” of at least one public speech and several internal messages, with portions used near‑verbatim in final texts.
  • Some documents were withheld from release by the authorised decision‑maker under exemptions citing internal deliberations or operational sensitivity; the withholding itself has raised questions about where internal AI use sits in existing disclosure and accountability frameworks.
These facts matter because they illustrate real-world behaviour: senior officials are not merely experimenting at the margins — they are deploying generative AI to produce substantive analytic content that feeds briefings, public addresses and potentially internal policy documents.

The new APS AI plan: what it proposes​

The government’s AI plan announces a coordinated push across the APS that includes:
  • Expanding access to generative AI tools for public servants and establishing guidance for using third‑party public platforms (ChatGPT, Gemini, Claude, Copilot).
  • Rolling out a GovAI platform and developing a tailored GovAI Chat assistant to operate in a “secure, in‑house” environment.
  • Requiring every agency to appoint a Chief AI Officer (CAIO) by a mandated date and creating a central team in the Department of Finance to coordinate adoption.
  • Setting training targets so that public servants receive skill uplift in using and governing AI systems.
Trial reporting accompanying the plan highlights productivity gains: a majority of users said the tools helped them work faster and improved their output quality, but the same reporting points to a substantial editing burden and accuracy problems that required human review.

Why this combination of facts is significant​

  • High sensitivity + new tools = risk magnification. National security analysis, counter‑terrorism coordination and foreign interference assessments are judgment‑intensive, contextually complex and highly consequential. Offloading a portion of the analytic labour — including framing, sourcing and narrative construction — to a black‑box model raises the stakes of errors, misattribution, omission and leakage.
  • Leadership example shapes culture. When senior leaders publicly praise a tool while privately relying on it to produce the core of their work, it signals to the rest of the APS that outsourcing cognitive labour to AI is acceptable practice. That can accelerate cultural adoption before governance, training and legal guardrails are mature.
  • Vendor dependency and data sovereignty concerns. The GovAI architecture described in public briefings rests on commercial cloud and vendor models. Any government ambition to have an “in‑house” assistant must confront the technical reality of foundation models, contractual arrangements, cross‑border access regimes and legal instruments that can compel disclosure of data.
  • Governance lag. The APS plan ties ambitious rollout timelines to a governance build‑out that, in some parts, comes later. When deployment and widespread use precede robust legislative, technical and auditing safeguards, the result is a governance gap.

Strengths in the government’s approach​

  • Pragmatic adoption model: The plan recognises productivity opportunities and seeks to give public servants tools that can reduce repetitive work and speed drafting of routine materials.
  • Central coordination: A central team and the CAIO role provide mechanisms for standard setting, shared procurement and consolidated risk management — in principle a move away from siloed agency pilots.
  • Training and guidance: Including a training emphasis and guidance on public platforms acknowledges that supply‑side controls alone won’t prevent misuse; people need capability to assess and edit AI outputs.
These points matter: rapid technological change in the public sector benefits from an ordered, centrally coordinated approach rather than a bewildering patchwork of agency-specific pilots.

Critical risks, gaps and unanswered questions​

1. The limits of “secure” and “in‑house”​

Labelling a tool “in‑house” is only as meaningful as the underlying architecture and contractual commitments. Foundation models and commercial toolchains often rely on multinational vendors and cloud services. Legal instruments and cross‑border data access regimes can extend third‑country authorities’ reach into supposedly local instances. The security boundary is therefore technical, contractual and legal — not merely a brand.
Practical consequence: Sensitive inputs — even if stored onshore — can be subject to access requests or have metadata and telemetry leave controlled environments unless specifically engineered and contracted against.

2. Automation bias and cognitive offloading​

Long‑standing research shows humans disproportionately accept or over‑trust algorithmic outputs when those outputs appear plausible. The FOI material suggests senior staff used AI to generate core analysis. Without disciplined verification practices and a culture that insistently demands proof, the APS risks a progressive atrophy of critical judgement for high‑stakes tasks.

3. Legal and accountability gaps​

Automated or semi‑automated drafting raises questions about who created the intellectual product and who is accountable for it. Administrative law principles require a human decision‑maker to exercise judgment; outsourcing the intellectual heavy lifting to models complicates proof that a decision rested on a human mental process rather than an opaque model output.

4. Transparency versus operational secrecy​

The FOI responses included withheld items. While legitimate national security considerations warrant discretion, blanket or expansive withholding of internal AI interactions will erode public confidence. Transparency obligations and the public’s right to understand how policy is formed clash with the bureaucratic impulse to treat AI use as operational detail.

5. Workforce and equity impacts​

The trials and the plan make productivity claims, but the same research and union feedback highlight workforce anxieties, particularly among administrative roles that have a higher share of female employees. Rapid diffusion without job redesign, retraining and negotiated industrial safeguards could produce abrupt labour dislocations in lower‑skilled public service workstreams.

How this could go wrong — three scenarios​

  • The “sound‑alike” speech: a senior official uses AI to draft a speech that repeats factual errors or includes misattributed history. The speech is published, picked up by media, and requires retraction — damaging credibility and public trust.
  • The “leaky memo”: an official pastes sensitive, not‑for‑disclosure content into a Copilot prompt on a corporate cloud instance and the model’s telemetry or cached content exposes confidential program details, producing a security breach.
  • The “systemic robodebt redux”: AI-augmented processes create an efficiency‑first workflow that makes automated determinations (or near-automated recommendations) impacting welfare, migration or law enforcement decisions without robust explainability, redress channels or legal validation — repeating the governance failures that produced past scandals.
Each scenario is plausible if governance, training and technical controls are not aligned with deployment velocity.

Practical checks and recommendations for immediate action​

  • Establish mandatory sensitivity gating for prompts: any interaction that includes classified, secret or potentially identifying data must be blocked by client‑side controls and routed only to certified, audited model instances with appropriately restricted data flows (a minimal sketch of such gating, paired with audit logging, follows this list).
  • Require prompt logging and audit trails: every prompt and model response used for government work must be logged, time‑stamped and retained in tamper‑evident records to support subsequent review and accountability.
  • Institute human‑in‑the‑loop certification: for high‑stake outputs (policy recommendations, legal advice, national security assessments) agencies must require a documented human verification step that affirms the reasoning, sources and limits of model‑derived content.
  • Publish agency AI transparency statements with technical detail: these should include risk classification for use cases, model provenance (onshore/offshore), data retention and the exact role generative AI played in deliverables.
  • Negotiate procurement and contracts that lock down on vendor access: procurement must include clauses that specify data residency, disallow training on government inputs, and limit telemetry disclosure. Seek legally enforceable commitments from vendors on data handling and access.
  • Run red team audits and independent review: third‑party audits should stress‑test GovAI instances for hallucinations, data bleed, adversarial exploitation and privacy leakage.
  • Protect and retrain the workforce: meaningful investment in reskilling, plus industrial engagement to manage role changes and job redesign, will reduce political and ethical backlash.
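The gating and logging recommendations above can be approximated with a small amount of client-side code. The sketch below is illustrative only and is not part of the GovAI or Copilot tooling described in this article: the pattern list, the nine-digit identifier check, and the gate_prompt, AuditLog and handle_request names are hypothetical stand-ins for an agency-approved classifier, data-loss-prevention controls and a proper records-management system.

```python
import hashlib
import json
import re
import time

# Hypothetical patterns for illustration only: real gating would rely on an
# approved classifier and DLP tooling, not a handful of regexes.
BLOCKED_PATTERNS = [
    re.compile(r"\b(TOP SECRET|SECRET|PROTECTED)\b", re.IGNORECASE),  # classification markings
    re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),                     # nine-digit, TFN-like identifiers
]


def gate_prompt(prompt: str) -> bool:
    """Return True only if the prompt may be sent to the certified model instance."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)


class AuditLog:
    """Append-only, hash-chained log so later tampering is detectable."""

    def __init__(self, path: str = "ai_audit.log"):
        self.path = path
        self.prev_hash = "0" * 64  # genesis value for the chain

    def record(self, user: str, prompt: str, response: str) -> None:
        entry = {
            "ts": time.time(),
            "user": user,
            "prompt": prompt,
            "response": response,
            "prev": self.prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(json.dumps({"entry": entry, "hash": entry_hash}) + "\n")
        self.prev_hash = entry_hash  # the next entry commits to this one


def handle_request(user: str, prompt: str, model_call, log: AuditLog) -> str:
    """Gate, route and log a single interaction with a certified model instance."""
    if not gate_prompt(prompt):
        raise ValueError("Prompt blocked: possible classified or identifying content")
    response = model_call(prompt)  # call only the certified, audited instance
    log.record(user, prompt, response)
    return response
```

The hash chain is the design point worth noting: because each entry commits to the hash of the previous one, an auditor can replay the file and detect any record that was altered or removed after the fact, which is what “tamper‑evident” means in practice.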

What the public and technologists should watch next​

  • How agencies define “sensitive” in practice, not just on paper. The difference between a high‑risk and a routine internal use case must be transparent and consistently applied.
  • Whether prompt logs and model interactions become part of recordkeeping regimes or are treated as ephemeral operational artefacts.
  • The legal clarifications that will define when a human author’s “mental process” is sufficient for administrative decisions that relied heavily on AI‑generated drafting.
  • The contractual terms the APS secures with major vendors and whether those terms can materially reduce foreign legal exposure to government data.
  • How quickly CAIOs are appointed and whether they are empowered with auditing resources and statutory teeth, or mainly tasked with evangelising AI adoption.

Final assessment​

Australia’s public service is at an inflection point. Senior officials are already using generative AI tools in operational work and the government has mandated scaling AI across the APS, so widespread adoption is inevitable and, if carefully managed, potentially transformative. The good news is that, in designing a whole‑of‑government approach and emphasising training and safeguards, the government recognises both the opportunity and the risk.
But the evidence so far shows a misalignment in sequencing: deployment and procurement are moving at pace while legal, audit and workforce protections remain under construction. That pattern amplifies the risk of reputational damage, systemic errors and governance failure. When the people responsible for protecting national security simultaneously champion and outsource the very analytical tasks that underpin national resilience, the APS must demand stronger, faster and more transparent guardrails.
Generative AI can be a powerful productivity tool for government — but only if its adoption is tethered to rigorous technical controls, binding procurement safeguards, independent oversight and a culture that privileges human judgement over convenience. Without those elements, the headlines will continue to be less about improved public service delivery and more about opaque systems making consequential decisions without a clear line of human accountability.

Action checklist for policymakers and IT leaders​

  • Mandate retention of all AI prompts and responses used for policy or operational outputs and secure them for audit.
  • Require technical blocking of inputs that include classified or personal identifiers to any model not certified for that classification.
  • Negotiate procurement clauses that prohibit vendors from using government data to train external models and that specify onshore processing and legal protections.
  • Institute independent reviews (red teams) before any high‑risk AI service reaches production.
  • Fund workforce reskilling programs and create negotiated transition pathways for administrative staff likely to face role changes.

Generative AI is already reshaping workflows inside government. The crucial test now is whether Australia’s public service can convert tactical convenience into strategic competence — delivering real, measurable improvements to public administration while preserving the legal, ethical and security standards that underpin democratic governance. The FOI disclosures are an early warning: adoption will happen, but how well it is managed will determine whether the result is enhanced public service or a costly, avoidable experiment in delegated judgement.

Source: The Mandarin, “How Australia’s national security chief used AI to write speeches, comms”
 
