During RSAC 2026, the cybersecurity conversation turned decisively toward agentic AI, and the tone was less celebratory than cautionary. Security leaders spent the week in San Francisco warning that the next wave of risk may not come from a single model prompt or a clever phishing email, but from autonomous systems with access, memory, and the ability to act across enterprise workflows. The message was clear: the industry is moving from talking about AI as a feature to treating it as an operational security domain.
Overview
RSAC 2026 took place at the Moscone Center from March 23 to March 26, 2026, and the conference’s agenda reflected how quickly AI security has become central to mainstream cyber planning. Cisco’s RSAC conference page framed the event around “the agentic era,” while RSAC’s own agenda highlighted sessions focused on generative and agentic AI risks, governance, and incident response. That alone says something important: the discussion is no longer theoretical. It is now part of the standard enterprise security playbook.
The reason agentic security resonated so strongly is that it lands at the intersection of familiar fears and new architectural realities. Security teams already understand identity compromise, data leakage, and shadow IT. Agentic systems fold all three together and add automation, persistence, and machine-speed decision-making. In practice, that means the old distinction between “user action” and “system action” starts to blur in uncomfortable ways.
The IT Brew account of RSAC captured that mood with unusual clarity. Microsoft’s Sherrod DeGrippo warned of a future “unicorn threat actor,” described as an apex threat actor with extreme capability, automation, reach, and persistence. Check Point’s Dave Meister focused on the risk of compromised AI assistants and unvetted agents inside organizations, warning that the first breach of one of these systems could shake the industry. That combination of visionary and practical anxiety defined the week.
Just as notable was the emphasis on data. AI security discussions at RSAC repeatedly circled back to the same point: agents are only as safe as the data they can see, the privileges they inherit, and the controls surrounding them. That may sound obvious, but it is precisely why the topic is gaining traction now. The most serious AI risk is often not model hallucination; it is unauthorized access to sensitive workflows, brittle guardrails, and poorly governed information flows.
Background
To understand why RSAC 2026 became such a flashpoint for agentic AI concerns, it helps to trace the evolution of cybersecurity’s relationship with AI. For much of the last decade, security teams treated AI as an assisting layer: a tool for detection, triage, and automation. The rise of generative AI changed that dynamic by putting natural-language interfaces, autonomous workflows, and external tool use into the hands of both employees and attackers. The result is a broader attack surface that feels deceptively familiar, yet behaves differently in almost every meaningful way.
At RSAC and elsewhere, the industry has spent the past two years shifting from “Can we use AI safely?” to “How do we secure systems that are themselves using AI to take actions?” That transition matters. A chat interface is one thing; an agent with calendar access, email access, API keys, and delegated authority is another. Once a system can retrieve data, make decisions, and act, it ceases to be just a model and becomes part of the enterprise control plane.
The move from copilots to autonomous actors
The industry has largely normalized the idea of copilots. Employees use them to summarize documents, draft messages, and accelerate research. But agentic AI goes further by chaining steps together, calling tools, and executing tasks with limited human intervention. That capability is exactly what makes the technology valuable and exactly why security leaders are uneasy.
One reason this matters is that traditional guardrails were designed for a more deterministic environment. Jameeka Green Aaron’s warning that guardrails can fail because AI is not deterministic reflects a core problem for CISOs: systems that learn, adapt, and infer can also bypass controls in ways static rules were never built to handle. The more autonomy you grant, the less useful it is to assume policy enforcement will behave predictably in every context.
Why RSAC became the natural venue
RSAC has always been where the industry names the next threat before it becomes a board-level problem. In 2026, that meant agentic AI, deepfakes, shadow AI, and the governance gaps around enterprise data. It also meant vendors could position products not merely as AI-enhanced, but as AI-defensive, with Cisco, Splunk, Check Point, Microsoft, and others all using the conference to showcase their own agent-era security narratives.
The conference setting reinforced the idea that this is now an ecosystem-wide issue rather than a niche concern. RSAC’s agenda included sessions on autonomous AI risk, while vendor materials stressed zero trust, non-human identity, and agent visibility. Those are not marketing flourishes so much as an admission that enterprise security is being redefined around entities that are not people, but still behave like actors.
The Threat Landscape
The most striking theme from RSAC 2026 was how quickly the threat model around AI has expanded beyond classic misuse. Attendees heard about shadow AI, deepfake attacks, rogue agents, and the possibility of malicious actors creating unvetted agents inside organizations. The common thread is that attackers no longer need to win a single battle; they can exploit the whole lifecycle of AI adoption, from procurement and data ingestion to deployment and user trust.
That is why DeGrippo’s “unicorn threat actor” comment landed so strongly. It is an extreme description, but it captures a real possibility: an attacker who combines deep technical skill, social engineering, automation, and persistent pressure campaigns. Whether that actor arrives as an individual, a loosely coordinated collective, or a hacktivist-inspired network matters less than the capability profile. In cyber defense, capability beats category.
Rogue agents and data exfiltration
Meister’s warning about AI assistants such as ChatGPT and Microsoft Copilot points to a particularly uncomfortable risk: enterprises may be concentrating sensitive data in systems that were not originally conceived as hardened operational repositories. Once those platforms are connected to internal systems, documents, tickets, and workflows, they become high-value targets. If compromised, the damage could be faster and broader than a traditional account breach.
The danger is not only external takeover. A company can also create its own exposure by allowing employees or contractors to spin up unvetted agents that inherit permissions without strong oversight. That becomes a control problem, not merely a software problem. It creates conditions for accidental exfiltration, unauthorized actions, and task automation that quietly drifts beyond policy.
- Data sprawl makes agent risk harder to contain.
- Delegated access can magnify the impact of one compromise.
- Unvetted tools can enter the enterprise faster than security teams can review them.
- Persistent agents may create a longer dwell time than single-session prompts.
- Trust boundaries become harder to define when systems act on behalf of humans.
Deepfakes and social engineering at scale
Deepfake attacks are not new, but AI-driven realism and speed make them more operationally dangerous. Voice phishing, synthetic video, and persona spoofing can now support more convincing multi-channel fraud attempts. Vodafone has publicly warned that emerging technologies such as AI will exacerbate threats through voice phishing and deepfakes, which aligns with the broader sentiment at RSAC.
The deeper concern is that deepfakes attack trust itself. They do not need to perfectly imitate a target; they only need to create enough ambiguity that an employee hesitates, complies, or escalates incorrectly. In a world where agents can initiate actions and people already struggle to verify identity quickly, that is a dangerous combination.
Data as the New Security Battlefield
One of the most consistent lessons from RSAC 2026 was that data quality and data clarity may determine whether an AI deployment succeeds or becomes a liability. Meister emphasized that agents are only useful when they can access the right information and read it properly. That sounds straightforward, but it hides a nasty truth: badly structured data is not just an analytics problem; it is a security problem.
Security teams have traditionally worried about whether data is encrypted, backed up, or retained according to policy. In the agentic era, they also need to ask whether the data is semantically legible to machines, whether it is scoped correctly, and whether it sits behind the right approval gates. If an agent can interpret too much, too loosely, or too broadly, it may behave in ways the business never intended.
Why governance now includes machine readership
The old assumption was that if a file was stored somewhere secure, it was safe enough. That assumption breaks down when an agent can surface, summarize, and act on content across repositories that were never designed to be combined. The challenge is not simply access control; it is contextual access control, in which a model’s interpretation of a document can create exposure even if the raw file remains protected.
This is why organizations increasingly talk about zero trust for agents, not just for users. Cisco’s RSAC materials explicitly frame agent activity, access control, and real-time intervention as part of the new security model. The practical takeaway is that enterprises need policies that can distinguish between a human reading a record and an agent retrieving, transforming, or redistributing it at scale.
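To make that distinction concrete, here is a minimal sketch in Python of what such a policy gate might look like. Everything in it is an assumption for illustration: the AccessRequest fields, the classification labels, and the thresholds are hypothetical and do not describe any vendor’s product.

```python
from dataclasses import dataclass

# Hypothetical classification labels, from most to least sensitive.
SENSITIVITY = {"restricted": 3, "confidential": 2, "internal": 1, "public": 0}

@dataclass
class AccessRequest:
    principal_type: str   # "human" or "agent"
    purpose: str          # the declared task, e.g. "summarize_ticket"
    classification: str   # label on the requested record
    batch_size: int       # how many records the caller wants at once

def evaluate_request(req: AccessRequest, agent_scope: set[str],
                     agent_batch_limit: int = 25) -> str:
    """Return 'allow', 'deny', or 'review' for one retrieval request.

    A human reading a single record and an agent bulk-retrieving the same
    data are treated as different events, even when the underlying
    entitlement is nominally identical.
    """
    level = SENSITIVITY.get(req.classification, 3)  # unknown labels -> strictest

    if req.principal_type == "human":
        # Humans follow the existing entitlement model; only restricted
        # records trigger an extra approval step in this sketch.
        return "review" if level >= 3 else "allow"

    # Agent path: the declared purpose must be in the agent's approved scope.
    if req.purpose not in agent_scope:
        return "deny"
    # Bulk retrieval above a threshold is escalated regardless of scope.
    if req.batch_size > agent_batch_limit:
        return "review"
    # Agents never read restricted data automatically in this sketch.
    if level >= 3:
        return "deny"
    return "allow"

# Example: an agent scoped to ticket summarization asks for 200 confidential records.
print(evaluate_request(
    AccessRequest("agent", "summarize_ticket", "confidential", 200),
    agent_scope={"summarize_ticket"},
))  # -> "review"
```

The specific thresholds matter less than the shape of the check: principal type, declared purpose, data classification, and retrieval volume all feed the decision, which is what separates an agent-aware gate from a plain permission lookup.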
Enterprise memory versus enterprise exposure
There is a subtle but important distinction between useful memory and dangerous accumulation. AI assistants retain context to be more effective, yet that same context can become a repository of highly sensitive organizational knowledge. If those systems are not scoped carefully, they may turn into concentration points for privileged information, making them exceptionally valuable to attackers.
For enterprises, that creates a governance puzzle. They want the productivity benefits of agents without building a shadow archive of trade secrets, customer data, and security workflows inside a model ecosystem. The likely answer is not to avoid AI entirely, but to implement stricter lifecycle controls, clear data classification, and retrieval policies that limit what an agent can see and remember.
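One way to keep useful memory from sliding into dangerous accumulation is to make retention an explicit policy decision rather than a side effect of conversation. The sketch below is illustrative only, with hypothetical names: an agent memory store that refuses to persist anything outside an allowed classification set and expires what it does keep after a fixed window.

```python
import time
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    text: str
    classification: str   # e.g. "public", "internal", "confidential"
    stored_at: float

class ScopedAgentMemory:
    """Illustrative agent memory with classification and time-to-live limits."""

    def __init__(self, allowed: set[str], ttl_seconds: float):
        self.allowed = allowed
        self.ttl = ttl_seconds
        self._entries: list[MemoryEntry] = []

    def remember(self, text: str, classification: str) -> bool:
        # Anything outside the allowed classifications is never persisted.
        if classification not in self.allowed:
            return False
        self._entries.append(MemoryEntry(text, classification, time.time()))
        return True

    def recall(self) -> list[str]:
        # Expired entries are dropped on read, so the store cannot quietly
        # grow into a long-lived archive of sensitive context.
        now = time.time()
        self._entries = [e for e in self._entries if now - e.stored_at < self.ttl]
        return [e.text for e in self._entries]

# Example: internal notes are kept for a day, confidential material is refused.
memory = ScopedAgentMemory(allowed={"public", "internal"}, ttl_seconds=86_400)
print(memory.remember("Customer asked about invoice formats", "internal"))       # True
print(memory.remember("Payment card details from ticket #123", "confidential"))  # False
print(memory.recall())
```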
The Guardrails Problem
Jameeka Green Aaron’s observation that guardrails have failed in every AI implementation she has built was one of the week’s most memorable lines because it challenged a comforting assumption. Many security teams talk about guardrails as if they are a set-and-forget layer. In reality, they are part policy, part tooling, part operational discipline, and part constant tuning.
The deeper issue is that AI systems are probabilistic, while enterprise security often depends on predictable outcomes. When those two worlds collide, subtle inconsistencies emerge. A model may comply with a request in one context and refuse in another, or interpret the same instruction differently depending on prompt phrasing, retrieved context, or tool availability. That unpredictability is not a bug to be patched away; it is a property to be designed around.
Why static controls are not enough
Traditional security controls assume that the thing being controlled behaves consistently. That assumption weakens when a system can infer intent, select tools, and chain actions dynamically. As recent academic work on agentic AI security argues, many model-centric mitigations are brittle, while policy-first and architectural controls are more durable.
This matters because many organizations are still testing AI security through prompt filtering and content moderation alone. Those are necessary, but not sufficient. If an agent has the wrong permissions, the wrong memory scope, or the wrong action boundaries, it can still cause harm even if it never emits a clearly unsafe sentence.
The operational implication for CISOs
CISOs now have to think in terms of behavioral containment. That means logging not just user prompts, but tool calls, permission escalations, model outputs, and downstream side effects. It also means defining what “safe failure” looks like when an agent’s uncertainty should stop a workflow rather than continue it.
The practical result is more work for security teams, not less. But it also gives them a clearer mandate: if AI systems are going to act, then the enterprise needs visibility into those actions. That is the only way to preserve accountability in a system where the line between advice and execution keeps fading.
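As a rough illustration of what behavioral containment could look like in code, the sketch below wraps an agent’s tool calls so that every invocation is logged with the agent’s identity and arguments, and a low-confidence step halts the workflow instead of pressing on. The names and the confidence threshold are assumptions for the example; in practice these events would feed an existing SIEM or audit pipeline rather than the standard library logger.

```python
import json
import logging
import time
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

class UnsafeToProceed(Exception):
    """Raised when an agent step should stop the workflow (a 'safe failure')."""

def audited_tool_call(agent_id: str,
                      tool: Callable[..., Any],
                      confidence: float,
                      min_confidence: float = 0.8,
                      **kwargs: Any) -> Any:
    """Run one tool call on behalf of an agent, with logging and a stop rule."""
    event = {
        "ts": time.time(),
        "agent_id": agent_id,
        "tool": tool.__name__,
        "args": kwargs,
        "confidence": confidence,
    }
    # Every call is recorded before execution so downstream side effects
    # can be traced back to a specific agent decision.
    log.info("tool_call %s", json.dumps(event, default=str))

    if confidence < min_confidence:
        # Safe failure: stop the workflow instead of acting on a guess.
        log.warning("halting %s: confidence %.2f below %.2f",
                    agent_id, confidence, min_confidence)
        raise UnsafeToProceed(f"{tool.__name__} blocked for {agent_id}")

    result = tool(**kwargs)
    log.info("tool_result %s %s -> %r", agent_id, tool.__name__, result)
    return result

# Example with a stand-in tool: the call is logged, then blocked because the
# agent's reported confidence (0.55) is below the 0.8 threshold.
def send_invoice(customer_id: str, amount: float) -> str:
    return f"invoice sent to {customer_id} for {amount}"

try:
    audited_tool_call("agent-42", send_invoice, confidence=0.55,
                      customer_id="C-1001", amount=1250.0)
except UnsafeToProceed as exc:
    print("workflow stopped:", exc)
```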
Shadow AI and the Consumerization of Risk
Shadow AI has become the enterprise cousin of shadow IT, but with a more dangerous twist. Employees do not need to install a rogue server or create a hidden app to introduce risk; they can simply use an unsanctioned model, agent, or browser-based assistant with access to sensitive work. The barrier to entry is low, and the temptation to save time is high.
Vodafone Global CISO Emma Smith’s comments at RSAC are important because they reflect a defensive strategy that goes beyond blocking. She argued for making AI accessible to employees through approved tools so that security teams can keep pace and reduce the incentive to go around official channels. That is a sensible approach because shadow AI often thrives where sanctioned alternatives are too slow, too fragmented, or too hard to use.
Approval beats prohibition when users need speed
The most effective way to reduce shadow AI is not to ban it outright, but to provide trustworthy, usable alternatives. If the approved path is clumsy, users will route around it. If the approved path is good enough, they are more likely to stay inside the security perimeter.
That does not mean every approved tool is safe by default. It means the enterprise must create a narrow enough set of options that it can review, monitor, and govern them properly. A good AI strategy is therefore as much about curation as it is about innovation.
Consumer habits are shaping enterprise exposure
The consumerization of AI is accelerating enterprise exposure because workers are already accustomed to using public tools at home. They expect the same frictionless experience at work, even if the work involves regulated data, customer records, or proprietary workflows. That mismatch creates a security gap that is more cultural than technical.
Enterprises that ignore this are likely to see more unsanctioned adoption, not less. The better strategy is to align policy with behavior, then make the secure path easy to find and worth using. That may sound simple, but it is one of the hardest problems in modern security.
Vendor Strategy and Market Implications
RSAC 2026 also made clear that the cybersecurity market is reorganizing itself around the agentic era. Cisco and Splunk framed their joint presence around securing the “agentic enterprise,” while Check Point emphasized AI-powered enterprise security and agentic framework research. Microsoft positioned its RSAC participation around AI-first cyber capabilities and threat intelligence. These are not isolated product themes; they are signs of a broader market repositioning.
That repositioning matters because security vendors are no longer selling only prevention, detection, and response. They are now selling trust frameworks for AI systems, monitoring for agent activity, and protections for non-human identities. The market is moving from endpoint-centric thinking to action-centric thinking, where the unit of security is the workflow itself.
The rise of non-human identity
One of the most important architectural shifts in this space is the emergence of non-human identity as a security category. If agents can authenticate, request data, call APIs, and trigger actions, then they need identities, permissions, audit trails, and revocation logic just like humans do. The difference is that they may do all of that much faster and at much greater scale.
That creates a new market for control layers that can understand agent behavior without assuming personhood. It also creates pressure on identity vendors, SIEM platforms, and access management providers to adapt their models. A security stack built around human login events is not enough when the primary actor is an autonomous workflow.
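A minimal sketch of what a non-human identity record might carry, assuming a homegrown registry rather than any particular identity product, is shown below. The fields are illustrative; the point is that an agent credential should be scoped, time-bound, attributable to a human owner, and revocable on demand.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """Illustrative non-human identity: scoped, expiring, and revocable."""
    agent_id: str
    owner: str                 # the human accountable for this agent
    scopes: set[str]           # e.g. {"tickets:read", "calendar:write"}
    expires_at: datetime
    revoked: bool = False
    audit_trail: list[str] = field(default_factory=list)

    def can(self, scope: str) -> bool:
        # Every check is appended to the audit trail, whether or not it passes.
        now = datetime.now(timezone.utc)
        allowed = (not self.revoked) and now < self.expires_at and scope in self.scopes
        self.audit_trail.append(f"{now.isoformat()} check {scope} -> {allowed}")
        return allowed

    def revoke(self) -> None:
        self.revoked = True
        self.audit_trail.append(
            f"{datetime.now(timezone.utc).isoformat()} revoked (owner {self.owner})")

# Example: a short-lived agent identity owned by a named engineer.
ident = AgentIdentity(
    agent_id="svc-summarizer-01",
    owner="alice@example.com",
    scopes={"tickets:read"},
    expires_at=datetime.now(timezone.utc) + timedelta(hours=8),
)
print(ident.can("tickets:read"))    # True while unexpired and unrevoked
print(ident.can("tickets:write"))   # False: outside the granted scope
ident.revoke()
print(ident.can("tickets:read"))    # False after revocation
```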
Competitive pressure on incumbents and startups
For established vendors, the challenge is to integrate AI security into existing platforms without making them unwieldy. For startups, the challenge is to prove that specialized agent security is more than a niche category. Both sides face the same reality: buyers now want evidence that the product can monitor, constrain, and explain AI behavior, not just classify it.
This is good news for the market in one sense. It is creating demand for new control points, new telemetry, and new policy layers. But it also raises the stakes for vendors, because the first widely publicized breach of an AI agent could quickly become a category-defining event.
Strengths and Opportunities
The upside of the RSAC 2026 conversation is that it showed the industry is no longer pretending AI security is a future problem. Security teams are finally treating agentic systems as a real operational category, which should improve governance and accelerate better tooling. That creates a rare opportunity to bake security in before the architecture becomes too entrenched.
- Clearer risk framing is helping CISOs justify budgets and controls.
- Vendor competition is accelerating innovation in agent monitoring and governance.
- Zero trust for agents can improve control over non-human identities.
- Approved AI tools can reduce shadow AI and improve visibility.
- Data governance is getting renewed attention, which should benefit broader security hygiene.
- Incident readiness is improving as teams rehearse AI-specific breach scenarios.
- Market education is narrowing the gap between hype and deployment reality.
Risks and Concerns
The downside is that the industry may underestimate how quickly agentic AI can compound existing weaknesses. If enterprises move too fast, they could create highly privileged systems with limited oversight, weak logging, and inconsistent approval logic. In that environment, even a small compromise can scale into a broad operational incident.
- Over-trusting guardrails could create false confidence.
- Excessive agent permissions may turn minor failures into major breaches.
- Data sprawl can make exposure harder to detect and contain.
- Shadow AI adoption may outpace governance programs.
- Deepfakes and impersonation can erode human verification.
- First-breach shock could damage trust in AI initiatives across industries.
- Tool proliferation may fragment accountability across too many systems.
Looking Ahead
The next phase of the agentic security debate will likely move from warning to implementation. Security leaders are already talking about visibility, approval workflows, and identity controls; the next step is proving those ideas work under pressure. That means testing not just how models answer, but how agents behave when they can touch real systems.
It is also likely that the first major agent compromise, whenever it comes, will reshape the category in a way that a dozen white papers cannot. The event does not need to be catastrophic to be influential. It only needs to be vivid enough to convince boards that agentic AI belongs in the same risk tier as cloud identity, third-party access, and endpoint compromise.
- Expect more agent governance frameworks from major vendors.
- Expect broader use of non-human identity controls.
- Expect more discussion of machine-readable data policy.
- Expect security teams to demand tool-call logging and action tracing.
- Expect a stronger push for approved AI ecosystems over ad hoc adoption.
Source: IT Brew Agentic risks take center stage at RSAC