IBM’s X‑Force now says infostealers exposed roughly 300,000 ChatGPT credentials last year — a number that changes how enterprises must think about identity, secrets, and the very idea of what constitutes a “sensitive” SaaS account.
Background
AI chatbots moved from novelty to daily work tool in a matter of months. Employees use them for drafting emails, summarizing competitive intelligence, building code snippets, and even for operational decision support. That convenience creates a new surface area: accounts that hold conversation logs, system prompts, uploaded files, and sometimes tokens or connectors into other cloud services. IBM’s 2026 X‑Force Threat Intelligence Index highlights this shift and reports that infostealer malware led to the exposure of more than 300,000 ChatGPT credential sets in 2025.
Security trade press and independent analysts immediately echoed the finding, noting that AI platforms now sit alongside email, cloud storage, and CRM as attractive, credential‑driven attack vectors.
Why chatbot credentials matter — beyond “just another login”
At face value, a leaked ChatGPT password grants access to a single account. In practice, the consequences are rarely that contained.
- Chat histories often contain raw IP: early drafts, product roadmaps, vendor negotiations, and research notes are frequently pasted into prompts or uploaded for model-assisted summarization. An adversary who reads those conversations can harvest ideas, blueprints, and strategic intent.
- Tokens and connectors matter: many AI platforms support plugins or integrations that can exchange tokens with other services (cloud storage, CRM, code repositories). A credential that appears to unlock only a chatbot session can also be a stepping stone into other systems through OAuth, delegated tokens, or saved API keys; IBM explicitly warns of an AI‑specific blast radius in which stolen chatbot credentials enable broader infiltration via token‑based access (see the sketch after this list).
- Manipulation and sabotage: a valid account permits an attacker to impersonate a user inside AI workflows — injecting malicious prompts, altering outputs, or modifying automation steps that teams rely on for drafting alerts, reports, or procedures. Security practitioners at several vendors flagged this as a realistic danger in the wake of IBM’s report.
These factors convert what looks like a low‑value consumer credential into an enterprise‑grade risk when reused, linked, or embedded into business processes.
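To make the token risk concrete, here is a minimal Python sketch of the standard OAuth 2.0 refresh‑token grant (RFC 6749, section 6) that many connector integrations rely on. The endpoint, client ID, and token values are hypothetical placeholders, not details from IBM’s report; the point is that anyone holding a valid refresh token can mint fresh access tokens until it is revoked.

```python
import requests

# Hypothetical placeholders -- not values from IBM's report.
TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"
CLIENT_ID = "connector-client-id"

def refresh_access_token(refresh_token: str) -> dict:
    """Standard OAuth 2.0 refresh-token grant (RFC 6749, section 6).

    Whoever holds a valid refresh token can mint new access tokens
    until it is revoked or expires -- the "blast radius" risk above.
    """
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": CLIENT_ID,
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Response carries access_token, expires_in, sometimes a new refresh_token.
    return resp.json()
```

This is why the incident guidance later in this piece leads with token revocation: rotating a password alone does nothing to a refresh token that is still valid.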
The scale and mechanics: how infostealers harvest AI creds
IBM’s X‑Force ties the surge in AI credential exposure to the continued growth of infostealer malware delivered at scale through phishing and deceptive downloads. The 2025 telemetry shows:
- A marked increase in phishing campaigns that deliver infostealer payloads rather than ransomware directly; criminals prefer rapid credential exfiltration.
- Infostealers are typically configured to harvest browser‑stored credentials, cookies, autofill data, saved passwords, and any local secrets that can be exfiltrated; many families target popular apps by default. IBM and other researchers name families such as AgentTesla, RedLine, and others as prominent in the ecosystem.
- Stolen credential sets are posted, listed, and monetized on dark‑web marketplaces and specialized forums — sometimes in bulk listings claiming millions of accounts. IBM and subsequent reporting note both mass postings and samples that appeared on forums in 2025.
In short: infostealers operate like a vacuum cleaner for local secrets; once on a device, they sweep up any stored tokens or logins they can find and ship those results to criminal buyers or automated feeds.
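To see why browsers are such a rich target, consider where Chromium‑based browsers keep saved logins on a Windows endpoint. The hedged Python sketch below inventories the default credential‑store files (these are the standard Chromium artifact locations; extra profiles such as "Profile 1" add more paths) so an administrator can audit which machines still persist passwords locally:

```python
import os
from pathlib import Path

# Default Chromium-style credential stores on Windows. These are the
# standard locations; additional profiles ("Profile 1", ...) may exist.
LOCALAPPDATA = Path(os.environ.get("LOCALAPPDATA", ""))
CREDENTIAL_STORES = {
    "Chrome saved logins": LOCALAPPDATA / "Google/Chrome/User Data/Default/Login Data",
    "Edge saved logins": LOCALAPPDATA / "Microsoft/Edge/User Data/Default/Login Data",
    "Chrome cookies": LOCALAPPDATA / "Google/Chrome/User Data/Default/Network/Cookies",
    "Edge cookies": LOCALAPPDATA / "Microsoft/Edge/User Data/Default/Network/Cookies",
}

def audit_credential_stores() -> None:
    """Report which local browser credential stores exist on this host.

    Infostealers sweep exactly these SQLite files; a sizable 'Login
    Data' file means the browser is still persisting passwords locally.
    """
    for label, path in CREDENTIAL_STORES.items():
        if path.is_file():
            print(f"[!] {label}: {path} ({path.stat().st_size // 1024} KiB)")
        else:
            print(f"[ok] {label}: not present")

if __name__ == "__main__":
    audit_credential_stores()
```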
IBM’s findings, summarized and checked
IBM’s 2026 X‑Force index frames three interlocking trends driving the rise of AI‑related credential theft:
- Attackers use AI to accelerate recon and vulnerability discovery, which increases exploit velocity.
- Infostealer distribution rose sharply via phishing, which increased the volume of credentials available for sale on the dark web. IBM reports a spike in infostealer delivery and a corresponding growth in exposed credentials.
- AI platforms — because they are ubiquitous and used for cross‑cutting tasks — are now a credential category that attackers harvest and trade in volume; IBM highlights over 300,000 ChatGPT credential sets advertised in 2025.
Independent reporting and analyst commentary published in the wake of IBM’s release corroborate the headline numbers and add context: multiple outlets and intelligence vendors described the phenomenon as a natural extension of existing credential markets and highlighted the potential for token‑based escalation.
Real‑world abuse scenarios — what attackers can do with a chatbot account
Understanding exploitation helps prioritize defenses. Here are concrete ways an attacker can abuse a stolen chatbot credential:
- Passive reconnaissance: quietly read years of prompts and responses to map IP, upcoming launches, and contract negotiations.
- Active manipulation: add or modify prompts in workflows used to generate corporate communications, SOC summaries, or automated incident responses — undermining decisions or causing operational confusion. IBM and other analysts warn about the ease of injecting malicious instructions once an attacker has account access.
- Lateral escalation via connectors: extract refresh tokens or use saved OAuth consent to pivot into cloud storage, mailbox exports, or repository access where connectors are configured. IBM explicitly flags token‑based infiltration as a critical risk.
- Social engineering augmentation: use the account to craft hyper‑personalized spear‑phish campaigns that appear to originate from the victim, leveraging the account’s content to boost credibility.
- Data exfiltration and sale: bulk harvesting of conversation logs and attached files, then resale to competitors or extortion marketplaces.
Many of these abuse modes are subtle: attackers can roam through AI records while leaving little immediate sign of anything beyond legitimate queries, which makes detection difficult without specific monitoring.
The dark‑web economy and attribution gaps
Dark‑web marketplaces aggregate stolen sets and offer them for sale either individually or in bulk. IBM’s researchers observed postings and advertisements containing ChatGPT credentials and other chatbot account sets, with at least one threat actor claiming to hold tens of millions of accounts in 2025, a claim IBM and reporters flagged for caution but did not dismiss outright.
Two structural realities complicate defenders’ response:
- Attribution and provenance are muddled. It’s rarely possible to know whether credentials were harvested via an infostealer, phished, scraped from a breach, or reused from a consumer leak. IBM’s index emphasizes this uncertainty and the difficulty of tracing initial compromise vectors.
- Third‑party and federated identity flows obfuscate victims. Many services use federated sign‑on or third‑party providers, which means some exposed “chatbot credentials” represent only a subset of the true authentication picture. IBM notes that not all platforms store platform‑specific credentials in a way that appears in credential dumps, which can undercount or mischaracterize exposure.
The market effect: as more credentials flow into high‑volume criminal marketplaces, opportunistic buyers can scan for password reuse and attempt escalation techniques en masse. That is the main reason defenders should treat a leak notification as an urgent event, not a low‑value data point.
Critical assessment: strengths of the finding and the risk model
What IBM’s report gets right
- It reframes AI platforms as assets that require the same identity controls as mailboxes and cloud drives. That’s a crucial conceptual shift; treating chatbots as ephemeral consumer tools invites exactly the kind of shadow use that increases risk.
- The report’s telemetry is broad and rooted in real incident response engagements and dark‑web observation, lending credibility to the 300k figure as a measured market phenomenon rather than speculation.
- IBM ties technical telemetry (infostealer distributions, token behavior) to practical recommendations — MFA, passkeys, controlled enterprise AI — which are immediate, actionable defenses.
Where the finding needs cautious interpretation
- Coverage bias and counting ambiguity: credential dumps and dark‑web listings are noisy. Listings may contain duplicates, automated scraped records, or low‑quality entries; headline counts do not always equal unique, high‑value accounts. IBM acknowledges some of these counting challenges and the difficulty of tracing exact origin.
- The risk profile varies wildly by organization: a single ChatGPT account for an employee who uses it only for casual personal prompts is far less consequential than an account tied to privileged integrations or a security team’s analysis workload. Risk assessments must be contextualized.
- Attack novelty is incremental, not revolutionary: infostealers and credential markets have existed for years. The core difference is that the asset set now includes chatbots and their associated connectors. That matters, but defenders should not treat this as an entirely new class of attacker capability — more a shift in target prioritization and consequence.
Practical, prioritized defenses for Windows‑centric environments
IBM’s core recommendations — examine AI policies, protect credentials, require MFA — are the starting point. Here’s a prioritized, hands‑on checklist tailored for Windows administrators and security teams responsible for enterprise endpoints and identities.
Identity and access controls (first line of defense)
- Enforce phishing‑resistant MFA for all corporate identities (FIDO2 security keys, Windows Hello for Business, device‑bound passkeys). Microsoft guidance recommends passwordless, phishing‑resistant methods as the baseline for high‑risk services and Copilot access.
- Centralize chatbot access behind enterprise identity providers (SSO) and block consumer login methods for corporate use. Treat Copilot/ChatGPT access like any other enterprise SaaS and apply the same Conditional Access policies; Microsoft recommends dedicating Conditional Access and device compliance controls to Copilot‑class tools (a policy‑creation sketch follows this list).
- Require device compliance for AI tool usage: only allow access from Intune‑managed, compliant endpoints and hardened privileged devices for sensitive user roles. Microsoft’s Zero Trust guidance for Copilot maps directly to this pattern.
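As an illustration of that SSO‑plus‑Conditional‑Access pattern, the hedged Python sketch below creates a report‑only Conditional Access policy through the Microsoft Graph API that requires MFA and a compliant device for a placeholder "AI tools" application. The group and application IDs are hypothetical, and a production rollout would use a proper app registration and a Graph SDK rather than a raw bearer token:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token with Policy.ReadWrite.ConditionalAccess>"  # placeholder

# Hypothetical IDs -- substitute your tenant's pilot group and the app
# registered for your sanctioned AI tool (e.g. an enterprise ChatGPT app).
AI_USERS_GROUP_ID = "00000000-0000-0000-0000-000000000001"
AI_APP_ID = "00000000-0000-0000-0000-000000000002"

policy = {
    "displayName": "Require MFA + compliant device for AI tools",
    # Start in report-only mode; flip to "enabled" after validation.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeGroups": [AI_USERS_GROUP_ID]},
        "applications": {"includeApplications": [AI_APP_ID]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {
        "operator": "AND",
        "builtInControls": ["mfa", "compliantDevice"],
    },
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```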
Endpoint and secrets hygiene
- Harden endpoints and reduce saved credentials: disable browser auto‑save of passwords for business accounts and push enterprise SSO and passkeys instead of local storage (see the registry sketch after this list).
- Deploy and tune EDR to detect infostealer behaviors (exfiltration patterns, suspicious shell activity, credential dumping). IBM’s trend data shows infostealers are increasingly delivered via phishing, so EDR detection of initial payloads is high‑value.
- Rotate tokens and audit connectors regularly: periodically revoke OAuth refresh tokens that link chatbots to other cloud services and require re‑consent through enterprise‑managed SSO flows.
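The browser auto‑save control in the first bullet is normally deployed as the documented PasswordManagerEnabled policy via Group Policy or Intune ADMX templates. For illustration, this hedged Python sketch performs the equivalent machine‑wide registry writes directly (lab use only; run elevated):

```python
import winreg

# Policy keys for Chromium-based browsers. "PasswordManagerEnabled" is
# the documented Chrome/Edge policy; 0 stops the browser offering to
# save passwords for new logins.
BROWSER_POLICY_KEYS = [
    r"SOFTWARE\Policies\Google\Chrome",
    r"SOFTWARE\Policies\Microsoft\Edge",
]

def disable_browser_password_saving() -> None:
    """Write the PasswordManagerEnabled=0 machine policy (run as admin).

    In production, deploy this via Group Policy or an Intune settings
    catalog profile; this direct registry write is only a lab-scale
    illustration of the same setting.
    """
    for subkey in BROWSER_POLICY_KEYS:
        with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            winreg.SetValueEx(key, "PasswordManagerEnabled", 0,
                              winreg.REG_DWORD, 0)
            print(f"Set PasswordManagerEnabled=0 under HKLM\\{subkey}")

if __name__ == "__main__":
    disable_browser_password_saving()
```

Note that the policy stops future saving; previously stored credentials may remain on disk, so pair it with a one‑time cleanup and a push to enterprise SSO.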
Governance, policy, and human controls
- Create an approved AI usage policy that defines:
  - Allowed tools and plugins
  - Data classes that cannot be uploaded (sensitive IP, PII, financial data)
  - Approved access mechanisms (SSO only, bridging via enterprise plugins)
- Operationalize an AI access request workflow for service integrations rather than allowing shadow plugins. IBM recommends controlled, internally‑sanctioned AI placed behind security controls like VPNs for sensitive workstreams.
- Train staff on the specific danger of pasting confidential content into chatbots and require use of corporate scrubbers or internal model endpoints for classified work.
Monitoring and response
- Monitor for anomalous chatbot logins and unusual session patterns (access from new geolocations, IPs, or devices), and set automated alerts that force re‑authentication and token revocation; a simple detection heuristic is sketched after this list.
- Integrate chatbot telemetry into SIEM/SOAR pipelines to correlate suspicious prompts with downstream actions (file downloads, repo access).
- Treat any confirmed exposure as a tier‑one incident: revoke tokens, reset SSO sessions, force passkey re‑registration for affected identities, and conduct a scope investigation for lateral activity. IBM’s guidance and vendor commentary stress quick token revocation and reconfiguration to limit blast radius.
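The geolocation and device heuristic in the first bullet above can be prototyped in a few lines. This hedged sketch assumes sign‑in events have already been exported from your identity provider as simple records with user, country, and device fields (a real pipeline would consume SIEM tables such as Entra ID sign‑in logs); it flags any sign‑in from a country/device pairing the user has never used before:

```python
from collections import defaultdict

# Assumed event shape -- adapt to your IdP's sign-in log export.
SIGNINS = [
    {"user": "alice", "country": "US", "device_id": "dev-1"},
    {"user": "alice", "country": "US", "device_id": "dev-1"},
    {"user": "alice", "country": "RO", "device_id": "dev-9"},  # anomalous
]

def flag_novel_signins(events):
    """Flag sign-ins from a (country, device) pair the user has never used.

    Deliberately simple baseline: production detections should add
    impossible-travel, time-of-day, and token-reuse signals.
    """
    seen = defaultdict(set)
    alerts = []
    for ev in events:
        key = (ev["country"], ev["device_id"])
        if seen[ev["user"]] and key not in seen[ev["user"]]:
            alerts.append(ev)  # candidate for forced re-auth + token revocation
        seen[ev["user"]].add(key)
    return alerts

for alert in flag_novel_signins(SIGNINS):
    print("Anomalous sign-in:", alert)
```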
Incident playbook — what to do if ChatGPT or other chatbot credentials are exposed
- Immediately revoke the compromised account’s tokens and sessions via the identity provider and the chatbot’s admin console (if available); a minimal revocation sketch follows this playbook.
- Force re‑authentication and require phishing‑resistant MFA or re‑provision passkeys for the user.
- Rotate any service account secrets or API tokens associated with the account’s connectors.
- Search logs for suspicious activity originating from the account during the exposure window (data access, file uploads/downloads, plugin usage).
- Scan endpoints used by the affected user for infostealer indicators; if present, isolate and perform forensic triage. IBM’s telemetry shows infostealers often precede credential dumps, so host compromise is likely.
- Notify legal and threat intelligence teams; monitor dark‑web marketplaces for the exposed credential sets and consider using a monitoring service to detect resale.
- Use the incident as a trigger to enforce stronger identity controls across similar user populations.
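For the first step, here is a hedged Python sketch of session and token revocation via Microsoft Graph. The revokeSignInSessions action invalidates the user’s refresh tokens and session cookies; the access token and user identifier are placeholders, and any chatbot‑side admin console revocation still has to be performed separately:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token authorized to revoke sessions>"  # placeholder
COMPROMISED_USER = "user@example.com"  # UPN or object ID

def revoke_all_sessions(user: str) -> None:
    """Invalidate the user's refresh tokens and session cookies.

    Every app must re-authenticate afterwards; pair this with a
    credential reset or passkey re-registration so the attacker
    cannot simply sign back in with the stolen password.
    """
    resp = requests.post(
        f"{GRAPH}/users/{user}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    print(f"Sessions revoked for {user}")

revoke_all_sessions(COMPROMISED_USER)
```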
Policy and governance: building an enterprise AI security program
Protecting AI use requires blending identity, endpoint, and data governance:
- Classify AI use cases by sensitivity and map them to allowed toolsets and protective controls.
- Create a plugin and connector approval board; any third‑party integration must pass security review and be provisioned through enterprise SSO.
- Bake data handling rules into onboarding and procurement; include contractual protections with AI vendors for data retention, export controls, and incident notification.
- Periodically test with red teams focused on prompt‑level reconnaissance and adversary simulation that includes credential theft scenarios.
IBM frames these as core elements of a secure AI posture: treat chatbots as enterprise apps and apply the same identity and access hygiene that protects email and storage.
The long view: what defenders must prepare for next
IBM’s index is not just a snapshot; it’s a forecast. As AI becomes more embedded, its ecosystem will attract specialized abuse techniques:
- Credential harvesting will remain a high‑volume, low‑complexity commodity for attackers. Expect more infostealer variants tuned to discover desktop AI apps, tokens, and local model caches.
- Prompt‑level manipulation and malicious grounding of AI outputs will create new integrity threats to operational decision‑making.
- Attackers will increasingly combine stolen account access with generative capabilities to automate and scale credential phishing campaigns — creating a feedback loop that drives higher compromise rates.
The right defensive posture is proactive: harden identities first, then lock down endpoints and governance. Tools like Conditional Access, passwordless passkeys, device compliance, and enterprise SSO drastically reduce the probability that a harvested ChatGPT password becomes an enterprise breach. Microsoft guidance for Copilot and security vendors’ playbooks map well to these priorities and should be actioned now by Windows administrators.
Conclusion
The 300,000‑account figure from IBM’s X‑Force is a blunt indicator: attackers already treat AI chat platforms as another credential market. The immediate takeaway for IT teams is straightforward but urgent — treat chatbots like enterprise apps. Apply phishing‑resistant MFA, centralize access behind identity providers, monitor for anomalous sessions, and bring AI usage under the same governance that protects email and cloud storage.
Absent those steps, organizations will keep discovering that what looked like a harmless conversational assistant has quietly become a surveillance window into their most valuable intellectual property. IBM’s findings are both a warning and a roadmap: fix the basics first, then build AI controls that match the new threat horizon.
Source: IT Brew
Infostealers nab 300,000 ChatGPT credentials: IBM