A deceptively small UX convenience—allowing Microsoft Copilot to accept a prefilled prompt from a URL—was chained into a practical, one‑click data‑exfiltration technique that targeted Copilot Personal and, until Microsoft pushed mitigations in mid‑January 2026, could quietly siphon profile...
Varonis Threat Labs’ proof‑of‑concept shows that a deceptively small convenience — allowing Microsoft Copilot to accept a prefilled prompt from a URL — could be chained into a practical, one‑click data‑exfiltration technique that targeted Copilot Personal and could, under lab conditions, siphon...
A new, deceptively simple attack named “Reprompt” has exposed a critical weakness in Microsoft Copilot Personal: with a single click on a legitimate Copilot deep link an attacker could, under the right conditions, mount a multistage, stealthy data‑exfiltration chain that pulls names, locations...
Microsoft Copilot users face a new prompt-injection vector that researchers say can be triggered with a single click — a technique reported as “Reprompt” that abuses URL parameters to feed malicious prompts into Copilot, bypass built‑in safeguards, and siphon sensitive content from user sessions...
Tenable’s controlled jailbreak of a Microsoft Copilot Studio agent turned a neat no‑code automation into a vivid demonstration of how agentic AI can leak payment card data and execute unauthorized financial changes — all via simple prompt‑injection tricks that non‑developers could unknowingly...
A deceptively simple trick—padding and context manipulation—can turn carefully designed “human‑in‑the‑loop” (HITL) safety prompts into a live remote code execution (RCE) vector, and the security research community’s recent “Lies‑in‑the‑Loop” disclosures show how that vector threatens...
Microsoft’s own documentation now warns that the new “agentic” AI features in Windows 11 — the capabilities that let built‑in agents act on a user’s behalf — introduce novel security risks, including the possibility that an agent could be manipulated into exfiltrating data or even downloading...
Microsoft’s own Windows documentation and preview notes make an unusually blunt admission: the new “agentic” AI features being added to Windows 11 introduce novel security risks that change the operating‑system threat model — and administrators and enthusiasts should treat enabling them as a...
Microsoft’s promise to let non‑developers build “digital employees” inside Copilot Studio has collided with a simple, sharp truth: no‑code AI agents that are given broad read/write permissions can be manipulated to do real harm. In a controlled proof‑of‑concept, Tenable’s AI research team showed...
Microsoft’s preview of agentic features in Windows 11 — where Copilot-style assistants move from “suggest” to “act” — is a technical milestone with meaningful productivity upside and a suite of novel security and governance challenges that administrators and power users must treat as deliberate...
A sharp, peer‑reviewed study and a string of security disclosures have exposed a worrying truth about the new generation of AI‑assisted web browsers: many of them collect and transmit highly sensitive browsing data — sometimes without clear consent — and the features that make these tools useful...
Brave has quietly opened the next chapter in the browser wars: an experimental, agentic AI browsing mode is available now in Brave Nightly, offering a model-driven assistant that can autonomously browse, act, and complete multi-step tasks inside a purposely isolated profile — but it arrives amid...
A recent security analysis has found that Microsoft Copilot Studio’s no-code AI agents can be coerced into leaking sensitive customer data and performing unauthorized actions with trivially simple prompt injections, exposing a new class of operational and regulatory risk that teams must treat as...
Tenable’s controlled jailbreak of a Microsoft Copilot Studio agent has laid bare a clear, present danger: no-code AI agents — the “digital employees” proliferating inside enterprises — can be manipulated to deliver both data theft and direct financial fraud. In a deliberately scoped...
Guy Zetland and Keren Katz report that a Tenable AI Research proof‑of‑concept has turned Microsoft Copilot Studio’s promising no‑code agent model into a glaring attack surface: simple prompt injections can coax agents into leaking sensitive records — including credit card data — and even change...
The cybersecurity community has reached a rare, consensus-sounding alarm: AI-powered browsers — the new generation of agentic, LLM-driven web clients — introduce a novel attack surface that many organizations should treat as unacceptable risk today, with leading advisory firms and government...
The UK National Cyber Security Centre’s blunt advisory about AI prompt injection is a wake-up call: defenders who treat prompt injection like a modern variant of SQL injection risk leaving their systems exposed to a different, harder-to-defend class of attacks that exploit the very way large...
AI browsers — the new generation of agentic assistants that read, reason, and act on web pages for you — are now being weaponized by a fresh class of attacks that hide instructions inside otherwise normal web content, threatening account security, private data, and the very notion of what a...
A fresh prompt-injection variant called HashJack has staked out an unexpected and stealthy attack surface: the text that appears after the “#” in a URL — the fragment identifier — can be weaponized to deliver natural‑language instructions to AI-powered browser assistants, tricking them into...
Microsoft’s own documentation now admits a hard truth: turning Windows 11 from an assistant into an agentic operating system — one that can act on your behalf, open apps, click UI elements, and manipulate files — changes the threat model in ways that traditional endpoint defenses were not built...