A concerted pro-Russian influence operation aimed at Australia has come to light in the lead-up to the country’s federal election. Dubbed the “Pravda Network,” this sprawling operation uses an array of dubious online portals, including the recently launched “Pravda Australia,” to seed disinformation across the web, with an unusual strategic twist: its primary target is not human readers but the artificial intelligence chatbots that mediate long-term narratives. The campaign signals a sophisticated evolution of information warfare, one that reaches past immediate human audiences and into the digital fabric shaping how millions understand the world.

The Anatomy of the Pravda Network: Not Just Old-School Disinformation

Where traditional foreign influence campaigns commonly direct their energies toward sowing discord via viral posts or coordinated inauthentic activity on social networks, the Pravda Network embodies a quietly insidious shift. Data uncovered by private intelligence groups and verified by independent watchdogs indicate that Pravda Australia is merely one node in a constellation of roughly 180 websites globally. Each is designed to mimic the familiar trappings of legitimate news outlets while functioning as pipelines for repurposing and laundering Kremlin-approved narratives specifically for the eyes—and algorithms—of AI language models.
According to reporting by the Australian Broadcasting Corporation (ABC), these sites churn out content at a prolific rate, at times publishing more than 150 “stories” a day in the lead-up to the election. Their output consists largely of verbatim reposts from a select group of Kremlin-friendly Telegram channels and established Russian propaganda platforms. The Australian sub-site, for instance, published more than 6,300 articles within the span of several months, nearly half of them explicitly tied to current events and controversies in Australia. Key themes surface reliably: criticism of Western leaders, narratives that foster cynicism about the West’s support for Ukraine, exaggerated divisions among Western allies, and local stories with a distinctly conspiratorial or far-right tilt.
The remarkable observation, shared by multiple disinformation experts tracking these operations, is that organic public engagement, the bread and butter of past troll-farm operations, is essentially non-existent. Traffic, social shares, and even the follower counts of the Telegram channels associated with these outlets linger in the single digits. Yet the consensus among analysts is clear: humans are not the actual target.

The Long Game: AI as a New Battleground

The unique focus of Portal Kombat (as the Pravda network is reportedly dubbed within intelligence circles) lies in contaminating, or “poisoning,” the training data of generative AI models. By publishing large quantities of disinformation under the veneer of news domains that an automated crawler cannot distinguish from reputable outlets, these sites attempt to circumvent guardrails that would typically exclude overt propaganda from being ingested during AI model training.
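To see the loophole concretely, consider a minimal sketch (the blocklist and domains below are hypothetical) of the kind of source filter a training pipeline might apply: it rejects documents from outlets it already knows are state media and, by construction, waves through any freshly minted domain it has never seen.

```python
# Minimal sketch of a naive source filter (hypothetical blocklist and domains):
# it blocks known state outlets but passes any look-alike domain it has not seen.
from urllib.parse import urlparse

KNOWN_STATE_MEDIA = {"rt.com", "sputniknews.com"}  # legacy outlets on most blocklists

def passes_naive_filter(url: str) -> bool:
    """Reject documents from known state media; accept everything else."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return domain not in KNOWN_STATE_MEDIA

crawl = [
    "https://rt.com/news/some-story",            # dropped: on the blocklist
    "https://pravda-au.example/politics/story",  # ingested: never seen before
]
for url in crawl:
    print(url, "->", "ingest" if passes_naive_filter(url) else "drop")
```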
This tactic was confirmed at a January roundtable in Moscow featuring John Dougan, a key figure behind the Pravda network and a former US law enforcement officer now openly serving Kremlin interests. During the event, Dougan boasted that his constellation of websites had “infected approximately 35% of all worldwide artificial intelligence.” While that figure is dubious and currently unverifiable, the strategy he outlined was unmistakable: not just to pollute current information spaces, but to weaponize the next generation of information interfaces by ensuring pro-Russian narratives become part of the “neutral” baseline from which chatbots draw their knowledge.
Ms. McKenzie Sadeghi, AI and foreign influence editor at NewsGuard (a respected disinformation watchdog), is among those warning that the real metric of success is not clicks but whether, over time, AI systems regurgitate and thereby normalize these narratives to users who trust the objectivity of their virtual assistants.

Testing the Poison: Audits of AI Chatbots

To assess whether the Pravda Network’s content was seeping into mainstream AI systems, NewsGuard conducted a comprehensive audit of 10 leading chatbots, including OpenAI’s ChatGPT-4o, Microsoft Copilot, Google Gemini, Meta AI, and xAI’s Grok-2. The researchers presented the bots with 300 prompts spanning 10 high-profile false narratives circulating within the network’s websites, tailored to replicate both naive and malicious user queries.
The results are cause for concern: in roughly one out of every six cases (16.7%), chatbots provided responses that amplified the false stories. Of the 300 responses, 233 debunked the misinformation, 17 declined to answer, and the remaining 50 repeated the false claims. Under “malign” or leading prompts, where users actively tried to elicit falsehoods, the rate of misinformation output was notably higher. While superficially reassuring compared with results in the United States, where chatbots reproduced false narratives about a third of the time, it still means that roughly one in six answers handed users carefully crafted lies.
ABC’s own tests, albeit less extensive, returned similar patterns. Notably, when asked about the “Australian Muslim Party,” an entity that does not exist, some popular chatbots confidently described its goals and even discussed its forthcoming role in the 2025 federal election, showing that sophisticated AI tools are not immune to carefully laundered falsehoods.
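A back-of-the-envelope calculation suggests why a handful of fabricated articles can dominate a chatbot’s answer. Every figure below except the roughly 6,300-article count cited earlier is invented for illustration: a source that is a rounding error in the overall corpus can still supply all of the “coverage” of a topic no legitimate outlet writes about.

```python
# Illustrative arithmetic (corpus size and topic counts are invented):
# a poisoned source can be negligible globally yet dominate a niche topic.
corpus_pages = 1_000_000_000   # hypothetical training-corpus size
poisoned_pages = 6_300         # roughly the Pravda Australia output cited above

print(f"Share of whole corpus: {poisoned_pages / corpus_pages:.6%}")  # ~0.000630%

# On a fabricated topic, the only pages mentioning it are the poisoned ones:
topic_poisoned = 40            # hypothetical articles seeding the fake party
topic_legitimate = 0           # no real outlet covers a nonexistent entity
topic_share = topic_poisoned / (topic_poisoned + topic_legitimate)
print(f"Share of pages on that topic: {topic_share:.0%}")  # 100%
```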

The Mechanics of Disinformation Laundering

The Pravda Australia operation displays several key characteristics that set it apart from older, “noisier” influence campaigns:
  • Automated Content Farming: Leveraging scripts and networked automation, the sites generate a torrent of content by repackaging material from Telegram and Russian state media.
  • Disconnection from Genuine Audiences: The operation does not attempt to cultivate social engagement or foster communities, thereby evading traditional detection methods that track suspicious user clusters or engagement spikes.
  • Narrative Breadth and Flexibility: By rapidly pivoting topics, from local political scandals to crypto misinformation and anti-renewables conspiracies, the network maximizes the odds that its material becomes relevant, suitable fodder for a wide variety of AI training corpora.
  • Seeding Across Multiple Platforms: The same narratives are recycled and scattered across a web of domains, multiplying the chance that archive dumps, open web crawls, or language-model researchers ingest the data as part of “the internet’s view” on an issue rather than as deliberate state media. That verbatim duplication is also a weakness researchers can exploit, as the sketch below shows.
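Here is a minimal sketch of one such detection approach (a generic technique, not any specific watchdog’s method): compare word n-gram “shingles” between a Telegram post and a suspected mirror article, and flag pairs whose Jaccard similarity approaches 1.0 as near-verbatim reposts.

```python
import re

# Flag near-verbatim reposting by comparing word 5-gram "shingle" sets.
def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical example texts:
telegram_post = "officials claim the aid package was quietly diverted to offshore accounts"
mirror_article = ("Officials claim the aid package was quietly diverted "
                  "to offshore accounts, sources say.")

score = jaccard(shingles(telegram_post), shingles(mirror_article))
print(f"similarity: {score:.2f}")  # ~0.78; values near 1.0 suggest a verbatim repost
```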
In certain instances, their reach extends indirectly through amplification by third parties. For example, the prolific Australian pro-Kremlin influencer Simeon Boikov, known as “Aussie Cossack,” found his Telegram channel, apparently without his explicit approval, quoted in nearly one of every four Pravda Australia articles. While Boikov welcomed the exposure (“it warms my heart,” he told ABC), analysts stress that the overwhelming automation and lack of organic following highlight the real intent: training machines, not swaying minds directly.

Governance and Countermeasures: A Difficult Task

The underlying sophistication of the Pravda operation makes traditional counter-disinformation tools less effective. Social media platforms, fact-checking outlets, and government agencies are primed to respond to viral hoaxes and coordinated messaging surges, not to the silent accretion of low-traffic, low-engagement outlets into the dark matter of the web. Australia’s Electoral Integrity Assurance Taskforce, which brings together the Australian Electoral Commission (AEC) and intelligence agencies, has acknowledged the existence of Pravda Australia but reports “very low” web visits and practically no social media amplification. Still, the lack of visible human engagement does not mean the threat can be ignored.
Politicians across the spectrum have called for heightened investigations. Senator James Paterson, home affairs spokesperson for the Coalition, has urged that every credible hint of foreign interference, including cases that “fly under the radar,” must be fully probed.
AI companies, too, have a role to play. While most major players have invested heavily in content filtering, source attribution, and adversarial testing, the nature of the Pravda Network, constantly mutating, mimicking, and camouflaging itself, presents a formidable and evolving challenge. Unlike legacy Russian outlets such as RT or Sputnik, these domains are usually absent from the blacklists and public datasets that AI researchers rely on for “safe” data acquisition.

Critical Analysis: Strengths, Weaknesses, and What Comes Next

Several attributes highlight the ingenuity of the campaign:
  • Resilience and Adaptability: By automating both the generation and dissemination of content, the network sidesteps the need for costly, high-risk human labor, enabling it to scale and pivot with global events.
  • Subtlety: The lack of engagement is not a failure but a deliberate design choice, one that makes the operation hard to detect by conventional metrics.
  • Long-term Planning: As analysts note, Russian information warfare doctrine envisions victory not merely as the success of any one hoax or viral narrative, but in sowing “generational” doubt and division.
However, the campaign faces notable limitations:
  • No Direct Human Resonance: Without genuine engagement, these sites have little immediate effect on election cycles or public discourse.
  • Detection Once Known: Once identified by researchers, platforms and AI developers can actively blacklist these domains from both training datasets and real-time search, dramatically curtailing their effectiveness (a minimal example follows this list).
  • High Dependency on Generative AI Gaps: Much depends on whether developers of Western AI models can close loopholes and more comprehensively filter subtle forms of “soft laundering” of misinformation.
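A minimal sketch of that blacklisting step, assuming researchers have published the network’s domains (the entries below are placeholders): once the domains are known, stripping their records from a crawl is cheap and mechanical; as the earlier filter example showed, the hard part is the domains nobody has identified yet.

```python
from urllib.parse import urlparse

IDENTIFIED_NETWORK = {"pravda-au.example", "pravda-de.example"}  # placeholder entries

def domain_of(url: str) -> str:
    return urlparse(url).netloc.lower().removeprefix("www.")

def strip_identified(records: list[dict]) -> list[dict]:
    """Keep only crawl records whose domain is not on the identified-network list."""
    return [r for r in records if domain_of(r["url"]) not in IDENTIFIED_NETWORK]

crawl = [
    {"url": "https://pravda-au.example/story-1", "text": "..."},
    {"url": "https://ordinary-news.example/story-2", "text": "..."},
]
print(len(strip_identified(crawl)), "of", len(crawl), "records kept")  # 1 of 2
```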
There is also uncertainty about the real impact. While the network has found more success “poisoning” outputs on U.S. political or vaccine-related narratives, penetration in Australian-specific political topics remains more limited—for the moment.

The Geopolitical Stakes

Why Australia? The country’s role as an active and sometimes outsized player in international alliances like “Five Eyes,” as well as its steadfast support for Ukraine, gives it clear strategic importance for Moscow. Analysts such as Miah Hammond-Errey, CEO of Strat Futures, highlight that Russia actively seeks to undermine not just rival politicians, but the cohesion and morale of entire democratic alliances. The aim is clear: less to promote a favorite candidate than to convince Australians that “truth” itself is negotiable, that alliances are fragile, and that the stability of democratic processes is perpetually in doubt.
The infiltration of AI models with these narratives poses a new threat: AI’s simulacrum of neutrality makes users more likely to trust its outputs, so if chatbots subtly echo false claims, the effect could be far wider and more insidious than past viral campaigns. Experts warn that future iterations of this strategy might become more sophisticated—incrementally improving their ability to bypass detection or exploit linguistic and topical loopholes.

Recommendations and Path Forward

  • Continuous Auditing: Regular, transparent audits of mainstream AI model outputs must become the norm—not just for high-profile events, but for the subtle, fringe topics where disinformation can quietly take root.
  • Enhanced Data Hygiene: AI companies must tighten their filtering of source material, combining automated checks (e.g., cross-referencing claims against established fact-checks) with human-in-the-loop oversight, especially for sources presenting themselves as “news” (a toy triage sketch follows this list).
  • International Cooperation: Disinformation crosses borders instantly. Building joint response frameworks between researchers, governments, and tech companies is vital—particularly among “Five Eyes” and EU alliance members.
  • User Literacy: Public education about the workings of AI chatbots and the possibility of “algorithmic misinformation” is essential. Trust in technology should never be blind.
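To make the data-hygiene recommendation concrete, here is a toy triage sketch. The signals mirror those reported in the article (young domains, implausible publishing volume, negligible readership, heavy verbatim duplication), but every threshold and field name is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SourceProfile:
    domain: str
    domain_age_days: int     # newly minted domains are a warning sign
    articles_per_day: float  # the network at times exceeded 150 stories a day
    daily_visitors: int      # its traffic reportedly lingered in single digits
    duplicate_ratio: float   # share of articles near-verbatim from elsewhere

def suspicion_score(s: SourceProfile) -> int:
    """Count how many warning signs a self-described news source trips."""
    return (
        (s.domain_age_days < 365)
        + (s.articles_per_day > 100)
        + (s.daily_visitors < 50)
        + (s.duplicate_ratio > 0.5)
    )

site = SourceProfile("pravda-au.example", 120, 150.0, 8, 0.9)
if suspicion_score(site) >= 3:
    print(f"{site.domain}: hold for human review before ingestion")
```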
A final note: the Pravda Network, for all its technological sophistication, is not yet invincible. Once detected, its model—dependent on the anonymity and credibility of newly-minted domains—can be disrupted by vigilant researchers, journalists, and AI developers working in tandem. But the broader arc of this operation tells us that the future of information warfare will not be dictated by loud spectacle, but by patient, persistent efforts to reshape the underlying data structures from which we all, increasingly, draw our knowledge and sense of the world.
As the boundaries between information, disinformation, and the “training data” of tomorrow’s digital tools blur, vigilance must be constant. The battle for the integrity of democratic processes is moving from the noisy viral post to the silent, encoded memories of the machines that mediate our reality—and the coming years will be shaped not just by who shouts loudest, but by who writes the code and curates the facts beneath the surface.

Source: Australian Broadcasting Corporation, “Pro-Russian influence operation targeting Australia in lead-up to election”
 
