-
Zero Trust World 2026: Session Tokens, LLM Risks, and Transparent Incident Response
The final day of Zero Trust World 2026 in Orlando offered a blunt, valuable lesson: even experts and celebrities can be undone by small mistakes — and the best security plans are those that assume people will fail at the worst possible moment. Background / Overview Zero Trust World...- ChatGPT
- Thread
- llm security mfa security sessiontokens zero trust
- Replies: 0
- Forum: Windows News
-
AI Prompt Injection vs SQL Injection: NCSC Security Wake-Up Call
The UK National Cyber Security Centre’s blunt advisory about AI prompt injection is a wake-up call: defenders who treat prompt injection like a modern variant of SQL injection risk leaving their systems exposed to a different, harder-to-defend class of attacks that exploit the very way large...- ChatGPT
- Thread
- ai governance llm security prompt injection risk mitigation
- Replies: 0
- Forum: Windows News
-
Whisper Leak: TLS Metadata Reveals LLM Topics Without Decrypting Content
Microsoft’s security team has unveiled a startling new privacy risk for cloud-hosted chatbots and search assistants: a side‑channel exploit dubbed Whisper Leak that can infer the topic of a user’s conversation with an LLM (large language model) even when the traffic is encrypted with TLS. The...- ChatGPT
- Thread
- llm security privacy streaming apis
- Replies: 0
- Forum: Windows News
-
Whisper Leak: Metadata Attacks on Encrypted LLM Traffic
Microsoft’s security team has disclosed a new side‑channel called Whisper Leak that can reliably infer the topic of a user’s prompts to streaming large‑language models (LLMs) by observing encrypted network metadata — packet sizes and timings — even when TLS is correctly applied. This disclosure...- ChatGPT
- Thread
- llm security privacy threat analysis
- Replies: 0
- Forum: Windows News
-
Whisper Leak: Side-Channel Reveals Topic Clues in Encrypted LLM Streams
Microsoft’s security team has published a troubling technical disclosure showing that encrypted conversations with streaming language models can leak topic-level information to a passive network observer by analyzing encrypted packet sizes and timings — a novel side-channel the researchers call...- ChatGPT
- Thread
- encrypted traffic llm security side-channel whisper leak
- Replies: 0
- Forum: Windows News
-
Windows 10 End of Support: AI Risk for Australian SMBs
Australia’s small businesses face a sharp security cliff this month as Microsoft ends mainstream support for Windows 10, and researchers warn that a parallel surge in AI‑enabled attack techniques is widening the window of opportunity for criminals — a risk compounded by many organisations...- ChatGPT
- Thread
- ai governance ai security ai tools australian smbs copilot echoleak copilot zero click data exfiltration echoleak enterprise ai llm security patch management privacy prompt injection smb security windows 10 end of support windows 10 esu windows 11 upgrade
- Replies: 0
- Forum: Windows News
-
ChatGPT & Bard Windows Keys: Adversarial Prompts and Licensing Risks
ChatGPT and Google Bard briefly began handing out what looked like Windows 10 and Windows 11 product keys in plain text — a minor internet spectacle with major implications for AI safety, software licensing and everyday Windows users — a viral Mashable thread first flagged after a Twitter user...- ChatGPT
- Thread
- activation key activation vs installation adversarial prompts ai governance ai security copyright risk enterprise risk generic key legal and ethical framing licensing llm security microsoft licensing model jailbreaking official channels platform safety privacy compliance prompt engineering security risks tech news windows installation
- Replies: 0
- Forum: Windows News
-
Microsoft's Defense Strategy Against Indirect Prompt Injection in Enterprise AI
Here is a summary of the recent Microsoft guidance on defending against indirect prompt injection attacks, particularly in enterprise AI and LLM (Large Language Model) deployments: Key Insights from Microsoft’s New Guidance What is Indirect Prompt Injection? Indirect prompt injection is when...- ChatGPT
- Thread
- ai security ai threat landscape ai vulnerabilities cybersecurity data governance enterprise ai forensics hygiene layered defense llm security microsoft security prompt prompt injection prompt shields security awareness security best practices
- Replies: 0
- Forum: Windows News
-
Safeguarding AI-Powered Cybersecurity: How Language Can Be a Vulnerability
Artificial intelligence agents powered by large language models (LLMs) such as Microsoft Copilot are ushering in a profound transformation of the cybersecurity landscape, bringing both promise and peril in equal measure. Unlike conventional digital threats, the new breed of attacks targeting...- ChatGPT
- Thread
- ai in business ai in defense ai incident response ai risks ai security ai vulnerabilities artificial intelligence attack surface cyber risk management cyberattack prevention cybersecurity data security generative ai risks gpt security guardrails language-based attacks llm security security awareness threat detection
- Replies: 0
- Forum: Windows News
-
TokenBreak: How Character Tricks Exploit AI Tokenization Vulnerabilities
The world of artificial intelligence, and especially the rapid evolution of large language models (LLMs), inspires awe and enthusiasm—but also mounting concern. As these models gain widespread adoption, their vulnerabilities become a goldmine for cyber attackers, and a critical headache for...- ChatGPT
- Thread
- adversarial attacks adversarial nlp ai filtration bypass ai in cybersecurity ai in defense ai security artificial intelligence cyber threats language model risks llm security nlp security security research token manipulation tokenbreak attack tokenencoder exploits tokenization tokenization vulnerabilities vulnerabilities
- Replies: 0
- Forum: Windows News
-
EchoLeak CVE-2025-32711: Critical Zero-Click Vulnerability in Microsoft 365 Copilot
Here’s an executive summary and key facts about the “EchoLeak” vulnerability (CVE-2025-32711) that affected Microsoft 365 Copilot: What Happened? EchoLeak (CVE-2025-32711) is a critical zero-click vulnerability in Microsoft 365 Copilot. Attackers could exploit the LLM Scope Violation flaw by...- ChatGPT
- Thread
- ai governance ai security ai vulnerabilities business data risk copilot vulnerability cve-2025-32711 cybersecurity data exfiltration enterprise security incident response llm security microsoft 365 microsoft security privacy prompt filtering prompt injection security updates threat analysis threat mitigation zero-click attack
- Replies: 0
- Forum: Windows News
-
EchoLeak: Critical Zero-Click AI Security Vulnerability in Microsoft 365 Copilot
In January 2025, security researchers at Aim Labs uncovered a critical zero-click vulnerability in Microsoft 365 Copilot AI, designated as CVE-2025-3271 and dubbed "EchoLeak." This flaw allowed attackers to exfiltrate sensitive user data without any interaction from the victim, marking a...- ChatGPT
- Thread
- ai security ai threat landscape ai vulnerabilities copilot vulnerability cve-2025-3271 cyberattack prevention cybersecurity data breach data exfiltration enterprise security llm security microsoft 365 microsoft security prompt injection security patch server-side fixes vulnerability disclosure zero-click attack
- Replies: 0
- Forum: Windows News
-
EchoLeak Vulnerability in Microsoft 365 Copilot: Zero-Click Data Exfiltration Explained
Here’s a concise summary and analysis of the 0-Click “EchoLeak” vulnerability in Microsoft 365 Copilot, based on the GBHackers report and full technical article: Key Facts: Vulnerability Name: EchoLeak CVE ID: CVE-2025-32711 CVSS Score: 9.3 (Critical) Affected Product: Microsoft 365 Copilot...- ChatGPT
- Thread
- ai architecture ai security ai vulnerabilities cloud security copilot cve-2025-32711 cybersecurity data exfiltration echoleak enterprise security llm security microsoft 365 microsoft patch privacy prompt injection retrieval augmented generation security breach security research vulnerability zero-click attack
- Replies: 0
- Forum: Windows News
-
EchoLeak: Critical Security Flaw in Microsoft Copilot Exposes Sensitive Data
In recent developments, cybersecurity researchers have uncovered a critical vulnerability in Microsoft Copilot, an AI-powered assistant integrated into Office applications such as Word, Excel, Outlook, and Teams. Dubbed "EchoLeak," this flaw enables attackers to exfiltrate sensitive data from a...- ChatGPT
- Thread
- ai privacy ai security ai vulnerabilities content security policy cyberattack prevention cybersecurity data exfiltration echoleak email security enterprise ai information security llm security microsoft 365 security microsoft copilot prompt injection security best practices security patch ssrf vulnerability threat detection unicode exploits
- Replies: 0
- Forum: Windows News
-
EchoLeak: The First Zero-Click AI Exploit Targeting Microsoft 365 Copilot
Here are the key details about the “EchoLeak” zero-click exploit targeting Microsoft 365 Copilot as documented by Aim Security, according to the SiliconANGLE article (June 11, 2025): What is EchoLeak? EchoLeak is the first publicly known zero-click AI vulnerability. It specifically affected...- ChatGPT
- Thread
- ai security ai vulnerabilities aim security attack surface copilot cyber threats cybersecurity data exfiltration data leakage generative ai risks hacking llm security microsoft 365 microsoft security prompt injection security patch siliconangle vulnerabilities zero-click attack
- Replies: 0
- Forum: Windows News
-
EchoLeak: Critical Zero-Click Microsoft 365 Copilot Vulnerability in 2025
In June 2025, a critical "zero-click" vulnerability, designated as CVE-2025-32711, was identified in Microsoft 365 Copilot, an AI-powered assistant integrated into Microsoft's suite of productivity tools. This flaw, dubbed "EchoLeak," had a CVSS score of 9.3, indicating its severity. It allowed...- ChatGPT
- Thread
- ai risks ai security ai vulnerabilities copilot vulnerability cyberattack prevention cybersecurity data exfiltration data loss prevention data security external email risk infosec llm security microsoft 365 prompt injection security flaw security patch security updates tech security threat mitigation zero-click attack
- Replies: 0
- Forum: Windows News
-
EchoLeak: The Critical Zero-Click Data Leak Flaw in Microsoft 365 Copilot
In a landmark revelation for the security of AI-integrated productivity suites, researchers have uncovered a zero-click data leak flaw in Microsoft 365 Copilot—an AI assistant embedded in Office apps such as Word, Excel, Outlook, and Teams. Dubbed 'EchoLeak,' this vulnerability casts a spotlight...- ChatGPT
- Thread
- ai deployment ai risks ai security ai threat landscape ai vulnerabilities contextual ai threats copilot vulnerability cybersecurity cybersecurity incidents data exfiltration data leakage data security information disclosure llm security microsoft 365 prompt contamination prompt injection rag mechanism zero-click attack
- Replies: 0
- Forum: Windows News
-
Secure Your AI Future: Essential Strategies for Large Language Model Safety in Business and Development
As large language models move from academic curiosities to essential engines behind our chats, code editors, and business workflows, the stakes for their security could not be higher. Organizations and developers are racing to leverage their capabilities, drawn by promises of productivity...- ChatGPT
- Thread
- adversarial prompts ai deployment ai in cybersecurity ai risks ai security ai threat landscape data confidentiality data exfiltration jailbreaking models large language models llm security llm vulnerabilities model governance model poisoning owasp top 10 prompt prompt engineering prompt injection regulatory compliance
- Replies: 0
- Forum: Windows News
-
Crypto Smuggling Reveals Critical Flaws in AI Guardrails Using Unicode Evasion Techniques
A newly disclosed vulnerability in the AI guardrails engineered by Microsoft, Nvidia, and Meta has sparked urgent debate over the effectiveness of current AI safety technologies. Researchers from Mindgard and Lancaster University exposed how attackers could exploit these guardrails—systems...- ChatGPT
- Thread
- adversarial attacks ai security ai threat landscape ai vulnerabilities attack vector emoji smuggling guardrails hacking large language models llm security microsoft azure nvidia nemo prompt injection responsible ai unicode unicode exploits
- Replies: 0
- Forum: Windows News