A University of Sydney audit has found that Microsoft’s Copilot routinely sidelines Australian journalism in its AI‑generated news summaries, favouring US and European outlets, erasing bylines and flattening local context — a pattern that threatens referral traffic, newsroom revenue, and democratic information ecosystems.
Background
The research, led by Dr. Timothy Koskie of the University of Sydney’s Centre for AI, Trust and Governance, examined 434 Copilot‑produced news summaries generated over a 31‑day sampling window. The study did not audit factual accuracy; instead, it focused on provenance — which outlets, regions, and journalists were amplified or rendered invisible in the assistant’s outputs. The headline finding: only about one in five Copilot replies linked to Australian sources, while U.S. and European outlets dominated the remainder. Koskie’s work joins other recent audits documenting structural problems in retrieval‑plus‑generation pipelines: assistants frequently fail to preserve attribution, surface locally relevant reporting, or route readers back to the original journalism that underpins a summary. That failure is not merely an editorial choice — it is a redistribution of attention and revenue away from smaller, regional, and independent outlets toward large, globally visible publishers.
What the study measured and why it matters
Methodology in plain terms
- Sample: 434 news summaries collected over a 31‑day window, produced by Microsoft Copilot configured for an Australian user.
- Prompts: Seven news‑oriented prompts suggested by Copilot itself (examples included “what are the top global news stories today” and “what are the major health or medical news updates for this week”).
- Focus: Geographic provenance of linked sources, presence of bylines and local place names, and the composition of the “source set” that grounded each summary (a toy illustration of this kind of provenance count appears below).
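To make the provenance focus concrete, here is a minimal sketch of the kind of count the audit reports. It assumes each reply’s cited URLs are available in a simple structure; the sample data, field names and the “.au” domain heuristic are illustrative, not the study’s actual dataset or tooling.

```python
from urllib.parse import urlparse

# Illustrative only: a hypothetical sample of assistant replies, each with the
# URLs it cited. The study's real data and tooling are not reproduced here.
replies = [
    {"prompt": "top global news stories today",
     "links": ["https://www.nytimes.com/world", "https://www.bbc.com/news"]},
    {"prompt": "major health news this week",
     "links": ["https://www.abc.net.au/news/health", "https://www.reuters.com/health"]},
    {"prompt": "top global news stories today", "links": []},
]

def is_australian(url: str) -> bool:
    # Crude proxy for "Australian outlet": the hostname ends in .au.
    host = urlparse(url).hostname or ""
    return host.endswith(".au")

# Share of replies that link to at least one Australian source.
with_au = sum(1 for r in replies if any(is_australian(u) for u in r["links"]))
print(f"{with_au}/{len(replies)} replies cited an Australian outlet "
      f"({with_au / len(replies):.0%})")
```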
Headline findings
- Approximately 20% of Copilot’s replies included links to Australian outlets; over half of the most‑referenced sites were based in the United States (https://www.sydney.edu.au/news-opinion/news/2026/01/27/ai-sidelines-australian-journalism-new-study-finds.html).
- In three of seven studied prompt categories, no Australian sources appeared at all.
- Where Australian outlets were included, they were concentrated among a small group of national players (for example, the ABC and major commercial publishers); regional and specialist publications were almost entirely absent.
- Journalists’ names and local place details were frequently omitted; reporting was often labelled generically (e.g., “researchers” or “experts”), which erases the labour and local accountability of named reporters.
Why Copilot and similar assistants skew toward global outlets
The study documents several interacting technical and commercial mechanisms that combine to privilege large international publishers.
1) Training data and indexing footprints
Large U.S. and European outlets produce vast, well‑indexed archives. Retrieval systems bias toward domains with wide link authority, extensive archives, and strong SEO signals. When the candidate set is already dominated by global publishers, the model’s summary is naturally grounded in those sources.
2) Retrieval and ranking heuristics
Most assistant architectures use a hybrid pipeline: a retrieval layer surfaces candidate documents and a language model composes the summary. If the retrieval layer weights authority, freshness, and backlink profiles more heavily than geographical relevance, local sites with smaller technical footprints will be under‑represented. The absence of explicit geo‑weighting is a first‑order driver of the patterns Koskie observed.
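To illustrate the mechanism, the toy ranking function below shows how an authority‑heavy score buries smaller local outlets until a geographic‑relevance term is added. The weights, fields and scores are hypothetical and not drawn from Copilot’s actual pipeline; this is a sketch of the general heuristic described above.

```python
# Hypothetical ranking sketch: the signals and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    outlet: str
    country: str
    authority: float   # link-authority / SEO proxy, 0..1
    freshness: float   # recency proxy, 0..1

def score(c: Candidate, user_country: str, geo_weight: float = 0.0) -> float:
    # Authority-heavy ranking with an optional geographic-relevance term.
    geo_match = 1.0 if c.country == user_country else 0.0
    return 0.7 * c.authority + 0.3 * c.freshness + geo_weight * geo_match

candidates = [
    Candidate("Global wire service", "US", authority=0.95, freshness=0.8),
    Candidate("National broadcaster", "AU", authority=0.70, freshness=0.9),
    Candidate("Regional daily",      "AU", authority=0.35, freshness=0.9),
]

for w in (0.0, 0.4):  # without and with geo-weighting
    ranked = sorted(candidates, key=lambda c: score(c, "AU", w), reverse=True)
    print(f"geo_weight={w}:", [c.outlet for c in ranked])
```

With the geographic term switched off, the global wire service outranks both Australian outlets; switching it on promotes the national broadcaster and the regional daily, which is the kind of configurable ranking change the study’s recommendations point toward.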
3) Prompt design and UX nudges
Copilot’s own recommended prompts in the sampled sessions were *globally framed*. Defaults matter: many users accept suggested prompts, so platform defaults can scale a particular framing (global vs local) across millions of sessions. When prompts steer users to global briefs, the assistant’s outputs follow.
4) Commercial licensing and platform aggregation
Platform owners have existing licensing relationships and aggregator feeds that prioritize certain publishers. Microsoft’s news ecosystem — including MSN and other syndicated properties — can structurally favour the same set of partners that already enjoy global reach. Those commercial arrangements, when used to ground summaries, further amplify dominant outlets.
5) Presentation and provenance loss
Even when local reporting is used, the presentation layer often strips metadata: no bylines, no publication dates, and terse paraphrases that bury the link. That erasure of provenance reduces the incentive to click through and makes the human labour of reporting invisible.
The downstream harms: economics, trust and democracy
AI summaries that repackage reporting without routing readers to original articles create a cascade of harms for local journalism.
- Referral traffic and revenue loss. Publishers depend on referrals for ad revenue and subscription funnels. Summaries that provide the answer eliminate the click that funds the journalism. Industry research and recent surveys anticipate significant declines in search and referral traffic as answer engines proliferate — a structural squeeze publishers are already preparing for.
- Erosion of local accountability. Regional reporting uncovers municipal mismanagement, local planning issues, and public‑health advisories. When AI outputs flatten region‑specific detail into national headlines, communities lose oversight and citizens receive less relevant information.
- Invisible labour and weakened trust. By removing bylines and named journalists, assistants undermine professional recognition and make it harder for readers to judge source credibility. Trust in news is tightly bound to identifiable reporters and local institutions; anonymised summaries undercut that link.
- Acceleration of news deserts and consolidation. Reduced traffic and revenue hit smaller outlets hardest. Over time, closures and consolidation create or deepen news deserts, especially outside metropolitan centres. The University of Sydney frames local news as democratic infrastructure; its loss has civic consequences.
Policy context: the News Media Bargaining Code and the regulatory gap
Australia has been at the forefront of regulating platform‑publisher power. The 2021 News Media and Digital Platforms Mandatory Bargaining Code pushed major platforms into negotiated agreements with publishers, and the later News Media Bargaining Incentive was intended to encourage negotiations and careful treatment of journalism. But Koskie’s study exposes a policy gap: existing frameworks were built around links, snippets and distribution mechanics, not the emergent practice of generative AI producing answer‑first summaries that bypass referrals. Extending bargaining or incentive mechanisms to explicitly cover AI‑generated outputs raises hard definitional questions: what constitutes use of news (direct quotes? paraphrased summaries? model training data?), how to measure impact, and how to enforce provenance standards. Yet the principle is straightforward: if AI assistants function as gateways to news, they must be governed in ways that protect the financial and informational role of local journalism.
Practical fixes: product, policy and publisher responses
Koskie and other commentators propose a mix of product engineering, regulatory design, and operational changes for publishers. These are practical and implementable, though each carries trade‑offs.
Product design changes (what platforms can do)
- Embed geographical weighting into retrieval: apply location signals (user country, outlet country of origin, local tags) as a configurable ranking factor to ensure local outlets appear for relevant queries.
- Preserve provenance by default: show outlet name, byline, and publication date before or inside the summary, and make the primary link prominent (“link‑first” UX). This increases clickthroughs and makes source labour visible.
- Offer local‑first prompt defaults: surface “Top Australian news” or “Local updates for [region]” as explicit prompt options rather than defaulting to global starters.
- Source panels and transparency: when a summary draws on multiple stories, present a concise panel listing the contributing outlets and their classification (local, national, international). This supports accountability and helps readers seek full context; a minimal sketch of such a panel follows this list.
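The sketch below shows one way a “link‑first” summary payload and source panel could be represented. The field names, classification labels and example URLs are assumptions for illustration, not a platform specification.

```python
# Illustrative sketch of a "link-first" summary payload with a source panel.
# Field names and the classification scheme are assumptions, not a platform spec.
from dataclasses import dataclass, field

@dataclass
class SourceEntry:
    outlet: str
    byline: str
    published: str      # ISO date
    url: str
    scope: str          # "local" | "national" | "international"

@dataclass
class NewsSummary:
    text: str
    primary_link: str                    # shown before the summary ("link-first")
    sources: list[SourceEntry] = field(default_factory=list)

    def source_panel(self) -> str:
        # Compact attribution panel rendered alongside the summary text.
        return "\n".join(
            f"[{s.scope}] {s.outlet}, {s.byline}, {s.published}: {s.url}"
            for s in self.sources
        )

summary = NewsSummary(
    text="Brief of today's top stories...",
    primary_link="https://example-regional-daily.au/story",  # hypothetical URL
    sources=[
        SourceEntry("Example Regional Daily", "A. Reporter", "2026-01-27",
                    "https://example-regional-daily.au/story", "local"),
        SourceEntry("Example Wire Service", "B. Correspondent", "2026-01-27",
                    "https://example-wire.com/story", "international"),
    ],
)
print(summary.primary_link)
print(summary.source_panel())
```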
Policy levers (what governments and regulators can do)
- Expand bargaining remit to AI experiences: clarify that the scope of incentive mechanisms includes AI‑generated summaries where the output functionally replaces referral traffic or uses publisher content in a way measurable under bargaining regimes. This will require precise statutory definitions and implementation rules.
- Mandate minimum provenance standards: require that AI news experiences include attribution (outlet, author, date) and provide a direct route to the original article when summarising journalism.
- Require periodic independent audits: compel platforms to commission independent audits that measure geographic source diversity, referral impacts, and byline preservation — with results published in accessible summaries.
- Support local journalism directly: targeted subsidies, grants, or tax incentives can buy time for newsrooms to adapt to the discovery shift. Policy levers should be designed to avoid moral hazard while protecting critical reporting beats.
Publisher actions (what newsrooms can do now)
- Expose machine‑readable metadata: make bylines, region tags, and structured data (schema) consistently available so retrieval layers can more reliably surface local content (see the sketch after this list).
- Monitor analytics and referral flows: track sudden changes in search and direct referrals coincident with platform feature rollouts and use cohort analysis to quantify impact.
- Double down on unique value: invest in reporting that resists easy summarisation — local investigations, data journalism, and deeply contextual stories that reward direct engagement.
- Negotiate collectively: small publishers can gain leverage through sectoral bargaining or collective licensing approaches when engaging platforms or governments.
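As one concrete example of machine‑readable metadata, the sketch below emits a schema.org NewsArticle record as JSON-LD from Python. The property names follow schema.org, but the outlet, author, URLs and the use of contentLocation as a region tag are illustrative choices, not a prescribed standard.

```python
import json

# Minimal sketch of machine-readable article metadata as schema.org JSON-LD.
# The outlet, author and URLs are hypothetical; property names follow schema.org,
# and contentLocation is one way to expose a region tag to retrieval systems.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Council audit finds budget shortfall",
    "author": {"@type": "Person", "name": "A. Reporter"},
    "publisher": {"@type": "Organization", "name": "Example Regional Daily"},
    "datePublished": "2026-01-27",
    "contentLocation": {"@type": "Place", "name": "Dubbo, New South Wales"},
    "mainEntityOfPage": "https://example-regional-daily.au/council-audit",
}

# Embed the output inside a <script type="application/ld+json"> tag on the article page.
print(json.dumps(article_metadata, indent=2))
```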
Critical reading: strengths, limitations and open questions
No empirical work is beyond critique. Koskie’s study is a focused, methodical audit with useful diagnostic value; it surfaces distributional facts that otherwise would remain anecdotal. Strengths include a clearly defined sample (434 summaries) and a practical focus on provenance and geographic diversity rather than chasing every accuracy metric.
But there are important caveats:
- Snapshot in time. Assistant behaviour is dynamic. Index composition, licensing deals, retrieval changes and model updates can change outputs quickly. The sampled behaviour reflects the period analysed and may evolve. The study’s authors acknowledge this limitation.
- Prompt framing matters. Many of the prompts tested were global by design; different user queries — explicitly local queries, for example — may surface more domestic outlets. The UX defaults, however, shape mainstream behaviour and are therefore relevant to public impact.
- Opacity of backend pipelines. Critical variables (the ranking weights, licensing feeds or indices the assistant queried) are often proprietary and opaque. This makes precise causal attribution difficult without vendor cooperation. Where the paper speculates about licensing and platform feed effects, those claims are plausible and consistent with industry reporting but sometimes remain partially unverifiable against public records. In those instances, cautionary language is appropriate.
- Measurement of economic impact requires more data. Demonstrating causal revenue losses from AI summaries requires publisher analytics across time and careful counterfactuals. Koskie’s study establishes plausible mechanisms and patterns; quantifying the economic loss across the sector will need coordinated analytics work.
Bigger picture: answer engines, the death of the click, and what comes next
Koskie’s findings are a timely case study in a global trend: search and discovery are shifting from link lists to answer engines and agentic assistants. Industry reports and surveys warn that publishers expect significant declines in traditional search referrals as AI answers proliferate — a structural change that necessitates new distribution and monetisation models.
That transition can be framed two ways. On one hand, AI assistants deliver undeniable user benefits: speed, triage, and lowered friction for readers. On the other, without provenance, geographic sensitivity, and compensation frameworks, the convenience of an answer can hollow out the underlying journalism. The central public‑policy challenge is to preserve the advantages of AI while guaranteeing that the underlying news ecosystem remains pluralistic and financially sustainable.
Practical advice for Windows users, publishers and civic actors
- For everyday readers: treat AI summaries as starting points and click through to original reporting for context, bylines, and verification, especially for consequential or local stories.
- For publishers: audit referral analytics closely after major platform changes, publish clear machine‑readable metadata, and pursue licensing discussions with platforms or government incentive schemes.
- For regulators and policymakers: consider extending bargaining and transparency obligations to AI experiences; mandate provenance defaults; and require independent audits of assistant outputs with public reporting.
- For platform engineers: implement geo‑aware retrieval, link‑first UX patterns, and explicit provenance panels. Small product decisions — default prompts, attribution visibility, ranking knobs — materially change downstream civic outcomes.
Conclusion
The University of Sydney’s audit is a clear warning: generative AI news summaries, as currently configured in widely deployed assistants like Microsoft Copilot, are not neutral compressions of the day’s reporting. They inherit and intensify pre‑existing attention economies that privilege large, globally clickable publishers while marginalising regional, specialist and independent Australian outlets. Without deliberate product safeguards and policy interventions, those technical choices risk deepening news deserts, erasing journalist labour, and weakening local democratic oversight.
There is no single silver bullet. The remedies are hybrid: product engineers must bake provenance and geographic sensitivity into retrieval and presentation layers; regulators must adapt bargaining and transparency regimes to the realities of AI‑mediated discovery; and publishers must expose robust metadata and reorganise revenue strategies around direct relationships with readers. If stakeholders act quickly and collaboratively, it’s possible to preserve the benefits of AI assistance while protecting the pluralistic information ecosystems that underpin healthy democracies.
Source: Mi-3.com.au https://www.mi-3.com.au/28-01-2026/...er-australian-outlets-sydney-uni-study-finds/
