Enterprise IT is hurtling toward an inflection point where AI is no longer an optional productivity layer but a persistent, machine‑speed conduit for both business value and cyber risk—and the latest ThreatLabz analysis from Zscaler makes that danger unmistakably clear. Released January 27, 2026, the ThreatLabz 2026 AI Security Report catalogues nearly one trillion AI/ML transactions observed across the Zscaler Zero Trust Exchange during 2025 and paints a stark picture: AI adoption surged 91% year‑over‑year, more than 18,000 terabytes of corporate data flowed into third‑party AI services (a 93% increase), and enterprise security controls had to block roughly 39% of AI/ML traffic to stop sensitive data from leaking. Those headline numbers matter because they quantify a newly dominant attack surface—one that traditional security architectures were not designed to inspect, govern, or harden at machine speed.
In 2025 the enterprise AI landscape matured from pockets of pilots into continuous, cross‑functional automation. Teams across engineering, IT, marketing and operations moved from episodic experiments to daily reliance on generative AI, copilots, and embedded model features inside business apps. That rapid operationalization created two simultaneous effects: massively increased volume of AI traffic and a parallel proliferation of shadow and embedded AI instances that escape standard security filters.
Zscaler’s ThreatLabz telemetry captures this transition at scale: almost one trillion AI/ML transactions from roughly 9,000 organizations, covering more than 3,400 AI applications. The dataset shows both the depth of enterprise AI usage and the breadth of exposure points that adversaries can target—standalone public models, developer tools, content assistants like Grammarly, and embedded features inside mainstream SaaS platforms.
Security teams should prioritize controls that:
Security teams must respond with a structured program that blends continuous discovery, Zero Trust enforcement, AI‑aware DLP and inline inspection, supply‑chain validation, and regular adversarial testing. Failure to act will not leave organizations safer; it will simply delay the inevitable day when a high‑impact breach arrives at machine speed.
The good news is that many of the controls required are extensions of concepts security teams already know—least privilege, segmentation, inventory, and continuous testing—applied to a different asset class. The imperative now is to operationalize those controls, prioritize the highest‑risk AI flows, and treat AI governance as a board‑level responsibility. The window to act is narrow; the cost of inaction will be measured not in lost productivity but in breached systems, stolen IP, and regulatory fallout.
Source: Petri IT Knowledgebase AI Boom Expands Enterprise Attack Surfaces
Background
In 2025 the enterprise AI landscape matured from pockets of pilots into continuous, cross‑functional automation. Teams across engineering, IT, marketing and operations moved from episodic experiments to daily reliance on generative AI, copilots, and embedded model features inside business apps. That rapid operationalization created two simultaneous effects: massively increased volume of AI traffic and a parallel proliferation of shadow and embedded AI instances that escape standard security filters.Zscaler’s ThreatLabz telemetry captures this transition at scale: almost one trillion AI/ML transactions from roughly 9,000 organizations, covering more than 3,400 AI applications. The dataset shows both the depth of enterprise AI usage and the breadth of exposure points that adversaries can target—standalone public models, developer tools, content assistants like Grammarly, and embedded features inside mainstream SaaS platforms.
Why this matters now: AI as both tool and target
AI is unique as an enterprise dependency because it is simultaneously:- a productivity accelerator used across functional teams,
- a repository and processor of potentially sensitive data, and
- a programmatic surface that can be probed, manipulated, and weaponized at machine speed.
The new asset class: AI data reservoirs
Enterprises moved the equivalent of billions of photos—18,033 TB in 2025—into AI and ML services. Many of those transfers occurred via highly integrated tools like Grammarly, GitHub Copilot, and ChatGPT, turning widely adopted productivity tools into concentrated repositories of corporate intelligence. ChatGPT alone accounted for hundreds of millions of Data Loss Prevention (DLP) policy violations in 2025, underscoring how everyday workflows now create high‑value targets for espionage and exfiltration.Deep dive: key findings and their operational implications
1) Explosive transaction growth and the visibility gap
- AI/ML transactions grew 91% year‑over‑year and were observed across more than 3,400 distinct applications.
- Many enterprises lack a comprehensive inventory of where AI is running and what data the models can touch.
2) Data concentration and DLP scale
- Enterprises sent 18,033 TB of data to AI/ML applications in 2025 (a 93% increase).
- ChatGPT alone generated on the order of hundreds of millions of DLP violations tied to attempts to share PII, source code, and regulated data.
3) Embedded AI: the hidden bypass
- AI features built into commercial SaaS (for example, copilots or content assistants embedded in productivity suites) often inherit application permissions and can bypass legacy network and gateway controls.
- Embedded AI is frequently enabled by default and escapes traditional security tooling, creating a stealthy exfiltration path.
4) Red‑team results: machine‑speed fragility
- Red‑team testing revealed critical vulnerabilities across every AI system assessed, with common failure modes including data leakage, prompt manipulation, hallucinations, policy bypasses, and safety alignment failures.
- Simple adversarial prompts could rapidly induce high failure rates.
5) Supply‑chain exposure and agentic threats
- Attackers will increasingly target the AI supply chain—models, pre‑trained datasets, connectors, and orchestration components—as high‑leverage intrusion points.
- The report forecasts a rise in agentic or autonomous AI‑driven attacks that automate reconnaissance, social engineering, and lateral movement.
What this means for risk posture: strengths, gaps, and emergent threats
Strengths (what organizations are doing right)
- Many security teams have begun blocking high‑risk AI applications and reducing exposure through policy enforcement at the gateway level.
- Adoption of Zero Trust principles (least privilege, continuous verification) is gaining traction as a framework to secure AI interactions.
- Security vendors and cloud providers are rapidly introducing AI‑aware inspection, inline DLP, and telemetry to identify model interactions.
Persistent gaps and weaknesses
- Inventory blind spots: a surprising number of organizations still can’t answer “Which models and embedded AI features touch regulated data?”
- Default settings: vendors ship embedded AI features enabled by default, and users often grant permissive scopes that bypass least‑privilege expectations.
- Testing and validation: model testing focused on safety, alignment, and prompt‑injection robustness is still immature in many environments.
- Supply chain trust: model provenance, dataset integrity, and connector security are infrequently validated with the same rigor applied to software libraries.
Emergent attack types to watch
- Prompt injection and chained prompt attacks that manipulate model behavior to leak secrets or disobey policy guards.
- Data poisoning of shared datasets and third‑party model artifacts used by enterprises.
- Agentic attack frameworks that instantiate chained agents to perform automated reconnaissance, exploit discovery, and exfiltration.
- “Zero‑click” or “no‑user” exploit paths leveraging embedded copilots or API integrations inside widely used SaaS.
Practical, prioritized controls: a security‑first playbook for AI
The scale and speed of AI adoption demand a pragmatic, prioritized program that integrates governance, tooling, and cultural change. Below is a condensed, operational playbook security teams can implement now.1) Build and maintain an authoritative AI inventory
- Continuously discover all AI interactions: models, SaaS features with AI, connectors, browser extensions, and developer‑tooling integrations.
- Classify each entry by data sensitivity, legal/regulatory impact, vendor trust level, and update/patch cadence.
- Make this inventory accessible to compliance, legal, and procurement teams.
2) Apply Zero Trust to AI interactions
- Enforce the principle of least privilege for all model access and connectors.
- Require authenticated, audited, and scoped credentials for model calls; avoid embedding long‑lived secrets in client code or notebooks.
- Segment AI traffic and apply role‑based access controls so that only authorized personas can query sensitive models.
3) Deploy AI‑aware inline inspection and DLP
- Upgrade DLP to be AI‑aware: analyze prompts, file payloads, and contextual metadata; block or redact sensitive tokens before they leave the enterprise.
- Inspect encrypted traffic where legally and operationally possible to discover hidden model calls.
- Apply contextual risk scoring to minimize operational friction for low‑risk AI use while blocking high‑risk flows.
4) Harden embedded AI and SaaS integrations
- Disable risky default AI features in SaaS until they’ve been assessed and sanctioned.
- Validate and restrict inherited permission scopes for embedded copilots and in‑app AI assistants.
- Require vendor attestations and contractual protections for data handling and model usage.
5) Secure the AI supply chain
- Validate model provenance, versioning, and cryptographic integrity for any third‑party artifacts.
- Treat models and datasets like code libraries: require SBOM‑style disclosures, signed releases, and controlled update channels.
- Maintain an allowlist of trusted model vendors and require third‑party risk assessments for new model sources.
6) Continuous adversarial testing and monitoring
- Incorporate prompt‑injection, hallucination, and data‑leakage tests into pre‑production and CI/CD pipelines.
- Run periodic red‑team exercises that simulate agentic attacks targeting model connectors and embedded AI features.
- Monitor model outputs and production logs for anomalous queries, abnormal data exfiltration patterns, and suspicious prompt sequences.
7) Governance, policies, and user education
- Create clear policies that define permitted AI usage by role and data classification.
- Educate users on safe prompting, handling of sensitive data, and the risks of pasting proprietary material into public models.
- Integrate procurement with security and legal so vendor contracts include data residency, retention, and incident response commitments.
Implementation roadmap: step‑by‑step for CISOs
- Inventory sprint (0–30 days)
- Rapidly discover AI endpoints, browser plugins, and SaaS features; tag them by risk.
- Identify the top 10 apps and connectors that process sensitive data and prioritize controls.
- Containment and policy baseline (30–60 days)
- Block or limit high‑risk applications by policy.
- Enforce least privilege on model connectors and rotate any exposed secrets.
- Deploy AI‑aware DLP and inline inspection (60–120 days)
- Extend DLP rules to parse and redact model inputs; instrument inline inspection into AI traffic paths.
- Begin decrypting and analyzing relevant TLS streams where permitted.
- Adversarial testing and supply‑chain hardening (90–180 days)
- Add prompt‑injection and data‑leakage tests to development pipelines.
- Mandate signed model artifacts and supply‑chain attestations from vendors.
- Governance and culture (ongoing)
- Formalize AI governance: approvals, review boards, and an AI risk register.
- Train staff across IT, legal, and business units on AI risk and safe usage.
Assessing tradeoffs and operational costs
Securing AI systems is not free. Inline inspection, traffic decryption, and continuous adversarial testing require investment in tooling and personnel. Policies that restrict popular AI features will slow some developer productivity in the short term. However, the tradeoff is between measured operational friction and the systemic risk of uncontrolled data exfiltration, supply‑chain compromise, and ultra‑fast automated attacks.Security teams should prioritize controls that:
- offer the largest reduction in probability × impact (e.g., preventing PII from leaving the enterprise),
- are feasible to automate at scale (e.g., policy enforcement in the data plane), and
- produce measurable telemetry that ties policy actions to risk reduction.
What vendors and platforms must do
Software and AI vendors share responsibility. Practical steps vendors must take include:- shipping secure‑by‑default AI features with conservative permission scopes,
- providing model provenance and signed artifacts,
- supporting enterprise controls such as token scoping, telemetry hooks, and granular disablement of embedded capabilities, and
- offering enterprise‑grade contracts that address data handling, retention, and incident notification.
The human factor: education and policy aren’t optional
End users will continue to be the primary vector for accidental data exposure. Practical user controls should include:- mandatory training on safe AI usage and examples of risky prompts,
- enforced policies for handling regulated data (e.g., redact PHI/PII before using models),
- tactical UX changes like copy/paste warnings or explicit confirmation dialogs before sending sensitive content to an external model.
Conclusion: treat AI like infrastructure—not an experiment
The Zscaler ThreatLabz 2026 analysis makes one thing clear: generative AI has graduated to core enterprise infrastructure, and the security models that treated it as a novelty are obsolete. The sheer volume of transactions, the concentration of sensitive data in third‑party models, the fragility of model deployments under adversarial pressure, and the rising threat of agentic, supply‑chain attacks together create a new paradigm.Security teams must respond with a structured program that blends continuous discovery, Zero Trust enforcement, AI‑aware DLP and inline inspection, supply‑chain validation, and regular adversarial testing. Failure to act will not leave organizations safer; it will simply delay the inevitable day when a high‑impact breach arrives at machine speed.
The good news is that many of the controls required are extensions of concepts security teams already know—least privilege, segmentation, inventory, and continuous testing—applied to a different asset class. The imperative now is to operationalize those controls, prioritize the highest‑risk AI flows, and treat AI governance as a board‑level responsibility. The window to act is narrow; the cost of inaction will be measured not in lost productivity but in breached systems, stolen IP, and regulatory fallout.
Source: Petri IT Knowledgebase AI Boom Expands Enterprise Attack Surfaces