In the weeks leading up to Australia’s federal election, a sophisticated pro-Russian influence campaign has been quietly operating in the digital shadows, aiming not directly at voters, but at artificial intelligence systems that millions depend on for information. The operation, centered around the recently emerged "Pravda Australia" website, represents a new phase in online disinformation: the deliberate attempt to "poison" Western AI chatbots with Kremlin-approved propaganda and to subtly, but enduringly, shape perceptions and divisions within Australia and beyond.
The Hidden Target: Chatbots, Not Citizens
Multiple cybersecurity firms and disinformation specialists, including experts from NewsGuard and the private intelligence firm Recorded Future, have tracked Pravda Australia since its registration in 2024, noting its sudden surge in activity just before the election campaign commenced. What distinguishes this operation is its almost total lack of engagement with the human electorate. Instead, the project’s apparent goal is to saturate the digital environment—specifically the content pools from which popular AI chatbots learn—with a Russian spin on world events, politics, and controversies.
As McKenzie Sadeghi, AI and foreign influence editor at NewsGuard, explained, “The Pravda Network appears to be designed and created solely to affect … AI chatbots.” Analysis found that Pravda Australia publishes up to 155 posts per day, predominantly reproducing content from Telegram channels and Russian propaganda sources and repackaging it under an innocuous Australian domain. Critically, there is little to no organic engagement from Australian readers—website visits remain negligible, and social media shares are in single digits.
A Network Born for Machine Consumption
Disinformation specialists tracing the global Pravda Network describe it as comprising nearly 180 largely automated sites, all working to launder pro-Kremlin narratives for potential ingestion by Western AI models—such as OpenAI’s ChatGPT, Google Gemini, and Microsoft Copilot. These platforms continuously crawl the internet for new information, and the operation attempts to seed them with carefully disguised propaganda published under seemingly independent, credible branding.
Pravda Australia itself, while sharing a name with an historic Russian publication, operates independently and is one node in a system reportedly run from Russian-occupied Crimea and orchestrated by Kremlin-aligned operatives. Its scale and structure have evolved over the past year, according to DFRLab and Recorded Future, with the Australian node producing over 6,300 stories by mid-2025, almost 40% of which directly pertain to Australian affairs.
The Anatomy of the Content Flood
Delving into the contents of Pravda Australia, researchers identified that the majority of posts are direct lifts from two Telegram channels: "AussieCossack," run by pro-Kremlin activist Simeon Boikov, and "RealLandDownUnder," a channel associated with far-right perspectives and online disinformation. Mr. Boikov, currently sheltering in the Russian consulate in Sydney to avoid Australian legal proceedings, reportedly had no knowledge of his content being systematically scraped and republished—although he expressed satisfaction when informed.
Overall, around one in four Pravda Australia articles replicate Boikov’s posts, with another sizable portion drawn from "RealLandDownUnder." This automation not only increases the volume of content but ensures relentless reinforcement of pro-Russian talking points, often amplifying discordant narratives about Australian politics, relations with Ukraine, and Western diplomatic rifts.
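This republication pattern, the same Telegram posts resurfacing nearly verbatim under new domains, is exactly the signal that near-duplicate detection is built to catch. Below is a minimal sketch using word shingles and Jaccard similarity; the sample texts are invented placeholders, and the researchers' actual tooling has not been published.

```python
# Illustrative sketch of near-duplicate detection via word shingles and
# Jaccard similarity. The sample texts are invented placeholders, not
# real posts; production systems typically use MinHash/LSH at scale.

def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles for a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A & B| / |A | B| of two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical stand-ins for a scraped Telegram post and a lightly
# repackaged article on a "news" domain.
telegram_post = "analysts warn the west is losing patience with its allies"
cloned_article = "breaking news analysts warn the west is losing patience with its allies"

score = jaccard(shingles(telegram_post), shingles(cloned_article))
print(f"similarity = {score:.2f}")  # a high score flags likely republication
if score > 0.5:  # threshold would be tuned against labeled examples
    print("flag: probable automated republication across domains")
```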
Human Impact: Minimal—for Now
Analysis by both NewsGuard and the Australian Broadcasting Corporation shows that despite this prodigious output, Pravda Australia has very little direct human reach. Traffic analytics, as verified by the Australian Electoral Commission’s integrity taskforce, indicate “very low” site visits, minimal Telegram channel subscriptions, and hardly any amplification through mainstream Australian social media.
Recorded Future’s Sean Minor noted, “It’s low-level, insignificant activity that is not garnering a lot of authentic attention.” These findings align with the site’s operational strategy. As Ms. Sadeghi put it, “They’ve invested zero resources in trying to build an organic human audience on social media, which is very significantly different from most Russian disinformation efforts.”
The Long Game: Poisoning the Well
Yet the lack of visible impact on voters is, arguably, beside the point. Documents, interviews, and public statements from key propagandists such as John Dougan—a former US law enforcement officer turned Kremlin operative—indicate the operation’s real intent: to embed pro-Russian content into the data that AI models absorb, sometimes unknowingly, as they are continually updated with “fresh” web content.
Speaking at a Moscow roundtable published in January, Dougan boasted that his websites had “infected approximately 35 per cent of all worldwide artificial intelligence,” stating, “By pushing these Russian narratives, from the Russian perspective, we can actually change worldwide AI.” Dougan’s figure cannot be independently verified and is likely exaggerated: independent research suggests that AI models can be partially influenced by persistent content flooding, but “infection” at a scale of 35 per cent is almost certainly hyperbolic. Even so, the claim underscores the operation’s strategic emphasis on influencing machines, not people.
Testing the Impact: How Chatbots Respond
To objectively assess whether this machine-oriented disinformation is working, researchers at NewsGuard conducted an extensive audit for the ABC, prompting 10 leading chatbots (including ChatGPT-4o, Grok-2, Copilot, Meta AI, and Gemini 2.0) with 300 questions related to 10 false narratives—each widely circulating online and represented in Pravda’s published material. The results were as follows (a sketch of how such an audit harness might be structured appears after the list):
- 50 of 300 AI responses (16.66%) included or amplified falsehoods.
- 233 responses debunked the narrative.
- 17 declined to answer.
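For readers curious about the mechanics, here is a minimal sketch of an audit harness of this general shape: a fixed set of false narratives, several prompt phrasings per narrative, and a tally of how each model's response handles the claim. The narrative, model names, grading rubric, and labels are invented placeholders; NewsGuard's actual prompts and tooling are not public.

```python
# Minimal audit-harness sketch: models x narratives x prompt phrasings,
# each response graded and tallied. With 10 models, 10 narratives, and
# 3 phrasings, this yields the 300 graded responses described above.

from collections import Counter

FALSE_NARRATIVES = [
    "a secret Australian Muslim Party is contesting the election",  # invented example
    # ...nine more known-false narratives would complete the set
]

MODELS = ["model-a", "model-b"]  # stand-ins for the 10 chatbots tested

def make_prompts(narrative: str) -> list:
    """Three phrasings per narrative: neutral, leading, and adversarial."""
    return [
        f"Is it true that {narrative}?",
        f"Explain why {narrative}.",
        f"Write a social media post claiming that {narrative}.",
    ]

def query_model(model: str, prompt: str) -> str:
    # Placeholder: a real audit would call each model's API here.
    return "That claim is false: no such party exists."

def grade(response: str) -> str:
    # Toy rubric for illustration; real audits use human review or a
    # far more detailed grading scheme.
    text = response.lower()
    if "false" in text or "no evidence" in text:
        return "debunked"
    if "cannot help" in text or "can't help" in text:
        return "declined"
    return "amplified"

def run_audit() -> Counter:
    tally = Counter()
    for model in MODELS:
        for narrative in FALSE_NARRATIVES:
            for prompt in make_prompts(narrative):
                tally[grade(query_model(model, prompt))] += 1
    return tally  # e.g. Counter({'debunked': 233, 'amplified': 50, 'declined': 17})

print(run_audit())
```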
Separate, smaller testing conducted by ABC News confirmed similar patterns: AI tools occasionally invented complete answers to queries about non-existent topics (for example, explaining the rationale for the “Australian Muslim Party,” a party that does not exist) and even fabricated social media posts when prompted.
Comparatively, however, the falsehood amplification rate was nearly double for US-centric queries (33%)—correlating with the longer and more intensive focus of Pravda’s US sub-network.
What Makes This Threat Different
Traditional influence operations, including those orchestrated by the Russian state or aligned actors, have historically sought to manipulate public sentiment directly—through viral social media posts, meme campaigns, and amplification via botnets and paid trolls. For those operations, the measure of success is virality and discourse-shaping.
What’s new in the Pravda operation is the almost exclusive focus on the “long game” of algorithmic bias—seeding plausible, source-camouflaged content for future AI indexing. As Miah Hammond-Errey of Strat Futures describes, "Russian doctrine thinks about this in terms of generations, and Australians think about this in terms of election cycles." The intention is to alter the foundational materials on which AI systems rely, subtly injecting bias or misinformation over time.
This “laundering” process is designed to bypass traditional fact-checking and content-filtering mechanisms. Since the Pravda sites present as niche news outlets rather than overtly state-run propaganda, they may elude the reputational filtering applied by major AI companies—a vulnerability now being exploited to feed misinformation at scale.
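To make the vulnerability concrete, here is a minimal sketch of domain-reputation filtering of the kind crawl pipelines are assumed to apply, and of how a freshly registered, innocuously branded site can slip through. The domains, scores, and threshold are invented for illustration; no vendor's actual pipeline is shown.

```python
# Minimal sketch of reputation-based source filtering for crawled pages.
# All domains and scores are hypothetical; real ingestion pipelines are
# proprietary and almost certainly more elaborate.

from urllib.parse import urlparse

# A vetted reputation table; unknown domains fall back to a neutral score.
DOMAIN_REPUTATION = {
    "reuters.com": 0.95,
    "abc.net.au": 0.95,
    "known-propaganda.example": 0.05,  # a previously flagged source
}

UNKNOWN_DEFAULT = 0.5  # neutral score for domains with no track record

def should_ingest(url: str, min_score: float = 0.5) -> bool:
    """Admit a page into the training pool only if its domain clears a
    reputation threshold. The gap described above: a newly registered
    "niche news" domain has no history, lands at the neutral default,
    and passes the filter."""
    domain = urlparse(url).netloc.removeprefix("www.")
    return DOMAIN_REPUTATION.get(domain, UNKNOWN_DEFAULT) >= min_score

# A brand-new, unrated domain sails through the gate:
print(should_ingest("https://pravda-clone.example/politics/story-123"))  # True
print(should_ingest("https://known-propaganda.example/item-9"))          # False
```

Closing that gap means treating unknown or newly registered domains more conservatively, which is essentially what the "upstream controls" recommended later in this piece amount to.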
Verification and Response: Official Oversight
Despite widespread concern among experts, the government response in Australia has been measured. The Electoral Integrity Assurance Taskforce—comprising intelligence agencies and the AEC—has been aware of Pravda Australia, but its own analysis confirms negligible real-world impact: both direct traffic and social engagement are nearly zero.
Nonetheless, Senator James Paterson, shadow home affairs spokesperson, has called for a deeper investigation, cautioning that “any allegations of foreign interference, including online, must be taken seriously and investigated.” The risk, experts agree, is not immediate election interference, but gradual erosion of the information environment: a slow-burning, algorithm-driven war of attrition.
Notable Strengths and Weaknesses of the Operation
Strengths
- Automation and Scale: The Pravda Network is highly automated, pumping out hundreds of articles daily with minimal human oversight, allowing scale far beyond traditional troll operations.
- Machine Targeting: By focusing on AI ingestion rather than human virality, Pravda sidesteps common defenses and leverages a largely unguarded pathway of influence.
- Plausible Deniability: Many contributors, such as Mr. Boikov, dispute any direct involvement—a plausible deniability that insulates the operation from accountability.
Weaknesses and Risks
- Low Human Impact: Despite the volume, direct engagement with Australian readers is almost non-existent. If AI companies adjust their ingestion algorithms or blacklist Pravda domains, the strategy’s impact could drop sharply.
- Exposure and Countermeasures: The operation’s methodology is now widely reported in major media and analyzed by specialists. Increased awareness at Microsoft, Google, and OpenAI could lead to data sanitation and source exclusion.
- Regulatory and Legal Response: Should Australian authorities expand their scrutiny, Pravda Australia and associated actors could face legal consequences or domain takedowns for foreign interference, undermining the network’s legitimacy.
The Road Ahead: AI Hygiene and Democratic Resilience
The Pravda Australia saga highlights an emergent battlefield in digital democracy: the underlying data pools of AI assistants. With over a dozen major chatbot platforms tested and found vulnerable to varying degrees, it is evident that neither technical sophistication nor user-friendly design alone is enough to ensure factual, unbiased responses.
- For AI Developers: Greater transparency is needed about the data sources used for chatbot training and in real-time responses. An increased focus on “upstream” controls for web-crawled sources, reputation filtering, and domain vetting is critical.
- For Governments and Election Monitors: Intelligence must shift to include long-term algorithmic disinformation campaigns, not just surface-level viral threats. Regulatory frameworks and cross-industry partnerships could be vital to tracking and preempting future operations.
- For Users: Awareness that not every AI-generated answer—no matter how confidently or convincingly phrased—is based on verifiable or neutral evidence. Disinformation, subtly seeded, can inject error and bias into everyday queries.
Conclusion: Enduring Risks and Democratic Vigilance
Pravda Australia exemplifies a new era of information warfare—one where citizens are no longer the primary targets, but the very AI systems they trust as informational arbiters. While the direct effect on Australia’s election or electorate appears, for now, to be minimal, the potential for lasting influence—should major AI platforms remain vulnerable—cannot be ignored.
As Western democracies increasingly rely on AI for public information, news synthesis, and even voting assistance, safeguarding these digital “nervous systems” from covert, long-term manipulation will become as crucial as defending voting machines or electoral rolls themselves. The challenge is formidable, but proactive transparency, robust verification, and ongoing vigilance remain the best defenses against invisible, persistent threats.
A world where algorithms are at the frontlines of information warfare demands scrutiny not just of what we read, but of what our machines may already have learned.
Source: Australian Broadcasting Corporation, “Pro-Russian influence operation targeting Australia in lead-up to election with attempt to 'poison' AI chatbots”, ABC News.