Bonfy ACS 2.0: Agent-First Data Security for Copilot and Shadow AI Risk

Bonfy’s launch of Adaptive Content Security 2.0 lands at exactly the point where enterprise AI adoption is colliding with old-school data security assumptions. The company is betting that the next major security problem is not just who has access to data, but what autonomous and semi-autonomous AI agents do with it once they get inside business systems. That framing matters because AI agents are no longer hypothetical; they are being embedded into productivity suites, collaboration platforms, CRM systems and cloud workflows at a pace that makes visibility, policy enforcement and auditability much harder to maintain. Bonfy is trying to turn that disruption into an opportunity.

(Figure: a blue “Policy Control Plane” connecting networked services such as Microsoft 365, Google Workspace, Slack, and AWS S3.)

Background

Enterprise data security has been shifting for years, but the rise of generative AI has accelerated the move from static controls to contextual ones. Traditional DLP and DSPM products were built around users, endpoints and repositories, which worked reasonably well when people themselves were the primary actors moving sensitive files around. Once AI assistants, copilots and workflow agents began reading, summarizing, rewriting and forwarding data across systems, those models became incomplete at best and blind at worst.
Bonfy’s ACS line has been building around that gap, positioning itself as a multi-channel content security layer rather than a point tool. The company’s public materials describe an AI-native platform that sees “what traditional DLP can’t,” with contextual intelligence, behavioral analytics and adaptive remediation as core themes. Its earlier Microsoft-focused messaging already pointed toward Copilot risk, upstream and downstream exposure, and more granular policy enforcement across Mail, SharePoint, Entra and Purview.
The company’s ACS 2.0 announcement extends that message into the agentic AI era. Instead of focusing only on files at rest or messages in transit, the platform is being pitched as a policy layer across email, SaaS tools, browsers, cloud storage and on-premises file repositories. That matters because agent workflows often stitch together multiple services in a single task, making it difficult for conventional security tools to reconstruct what happened end to end.
The broader market context is equally important. Vendors across the AI stack are shipping agent frameworks, tool connectors and workflow orchestration features, while security teams are still trying to understand how to govern them. In that environment, “shadow AI” has become more than a compliance buzzword; it is a practical description of unsanctioned tools, browser extensions and consumer-grade assistants that employees use to move work faster, often without realizing how much sensitive data they expose.

Why this moment is different

The old security model assumed a relatively stable sequence: a user opens a file, edits it, sends it, and policy is checked at one or two control points. AI agents break that sequence by acting as intermediaries that can read context from multiple systems, synthesize new outputs and then trigger further actions. That creates more opportunities for productivity, but also more opportunities for leakage, prompt injection and accidental oversharing.
For security vendors, the challenge is no longer just identifying a file or a message. It is understanding the relationship between an agent, the data it consumed, the tools it called and the output it generated. If a product cannot connect those dots, it may still provide alerts, but it will struggle to support meaningful governance.
Bonfy is clearly arguing that the market now needs a different control plane. The company’s language around content lifecycle, agent identity and browser visibility reflects that view. It is a strong bet that buyers will pay for a platform that can explain not just what data moved, but why the agent moved it and where it went next.

What ACS 2.0 is trying to solve​

ACS 2.0 is aimed at a concrete problem: AI agents do not behave like traditional users, and therefore they cannot be protected by user-centric controls alone. Bonfy says the platform treats agents as distinct entities, letting teams track which agent accessed data, how it processed that data and where outputs were sent. That is a subtle but important shift, because it moves governance from “who clicked” to “what system of action performed the work.”
The company’s pitch also acknowledges that agentic systems create multi-step risk. Data might be read from SharePoint, summarized by a copilot, stored temporarily in context memory, then forwarded into Slack or Salesforce. At each stage, there may be legitimate business intent, but the cumulative path can still violate policy or widen exposure.
Bonfy is positioning ACS 2.0 as a response to that chain of events. Rather than watching only for the final leak, it wants to monitor the path that leads to it. That is particularly relevant for regulated industries and for large organizations where unstructured content is spread across many systems and subject to different access rules.

The lifecycle problem​

One of the more persuasive parts of Bonfy’s argument is that AI risk is not a single event but a lifecycle. Agents ingest data, transform it, and emit new content; each stage can alter the risk profile. A prompt that seems harmless in isolation may expose sensitive data when combined with contextual memory or enterprise connectors.
That creates a serious governance gap for organizations still using endpoint-only or browser-only protection. If the agent runs in a cloud environment or inside a vendor-managed workflow, the endpoint may never see the relevant event. Security teams then lose the ability to reconstruct the full chain of custody.
Bonfy’s response is to attach control to the content itself, not merely the device. That makes the platform more aligned with how enterprise AI actually works, especially in environments where data is constantly crossing application boundaries.

Agent-focused controls​

The headline capability in ACS 2.0 is AI Agent Data Guardrails, which Bonfy says supports the Model Context Protocol and other agent-framework connections. That signals a move toward governing not only AI output, but also the content flowing into and out of the agent during execution. In practical terms, it is an attempt to observe the agent’s working memory rather than just its final answer.
This is a notable design choice because agent frameworks are becoming the plumbing for enterprise AI. If a platform can inspect content at the moment it is read, shared or transformed, then it has a chance to enforce policy before leakage occurs. If it only looks afterward, it is already too late for many use cases.
Bonfy’s emphasis on treating agents as first-class entities also reflects a change in threat modeling. A human user account can be governed through identity, training and MFA. An agent, by contrast, may operate continuously, call tools at machine speed and blend instructions from multiple sources in ways that are hard for humans to review manually.

Why MCP matters​

The Model Context Protocol is quickly becoming one of the important connective standards in the AI ecosystem, because it standardizes how assistants and agents access tools and data sources. That makes it attractive for vendors, but it also raises security questions about permissions, context leakage and tool abuse. Bonfy’s decision to highlight MCP compatibility suggests it wants ACS 2.0 to sit close to the agent runtime, where policy can be applied before data leaves a controlled boundary.
That is not a trivial implementation detail. A lot of security tooling still assumes data can be secured at the perimeter of an application or repository. MCP-style integrations imply a more dynamic model in which the security layer must understand the content, the context and the intent of each agent interaction.
Bonfy is therefore making a broader claim about the future of enterprise defense: if agents are proliferating across Microsoft, Google, Salesforce and the hyperscalers, then the policy engine has to follow the agent, not the app.
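To make the idea concrete, a policy layer sitting at the agent/tool boundary can refuse to forward content before it leaves a controlled boundary, rather than auditing the leak afterward. The sketch below is a simplified illustration with invented regex-style rules; real contextual engines are far richer, and nothing here reflects Bonfy's or MCP's actual APIs:

```python
import re
from typing import Callable

# Hypothetical patterns a policy engine might flag; production systems
# use contextual classifiers, not bare regexes like these.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def guarded_tool_call(tool: Callable[[str], str], payload: str) -> str:
    """Apply policy to content *before* it crosses the agent/tool
    boundary; if it matches a blocked pattern, the call never happens."""
    hits = [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(payload)]
    if hits:
        raise PermissionError(f"policy violation before tool call: {hits}")
    return tool(payload)
```

The design choice the sketch captures is timing: the check runs in the request path of the tool call, which is exactly where an MCP-adjacent control plane would have to sit to enforce policy rather than merely report on it.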

Shadow AI and browser visibility​

A second major piece of ACS 2.0 is a browser extension that Bonfy says can monitor web traffic tied to unsanctioned AI tools and browser-based assistants. That may sound modest compared with model-level guardrails, but it addresses one of the hardest realities in enterprise AI: employees do not always use approved workflows.
Shadow AI is attractive because it is frictionless. Workers paste content into public chatbots, use browser add-ons to summarize documents, or move data into personal AI tools because those systems feel faster than approved enterprise services. From a productivity standpoint, that behavior is understandable. From a security standpoint, it is a liability that often shows up only after sensitive content has already left the organization.
Bonfy’s browser approach is therefore a pragmatic move. It recognizes that not every AI interaction will pass through centrally managed APIs or enterprise copilot consoles. Some of the riskiest behavior happens at the edge, where browser automation and web apps meet human habit.

The limits of endpoint-only DLP​

Endpoint-focused controls have historically been effective against managed desktops and corporate laptops, but agentic workflows complicate that picture. If a system-level agent runs in cloud-hosted compute provided by Microsoft, Google, OpenAI, Anthropic or Salesforce, endpoint visibility may be irrelevant. Bonfy explicitly argues that browser-only or endpoint-only DLP cannot see that world, and its product design appears intended to close that gap.
That claim is directionally persuasive, especially for organizations that have already seen how quickly browser-native AI tools can spread. Still, a browser extension is only one layer of defense. It can help detect usage and flag risk, but it cannot solve every policy problem on its own.
The real question is whether customers can correlate browser events with downstream repository activity, identity context and agent telemetry. If they can, the control model becomes much stronger. If they cannot, the browser layer risks becoming just another alert source.
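One simple way to picture that correlation is to tie sightings of the same content together across channels, for example by content hash. This is an illustrative sketch only; real platforms use far richer matching than an exact hash:

```python
from collections import defaultdict
import hashlib

def fingerprint(content: str) -> str:
    """A content hash is one crude way to recognize the same document
    across channels; real tooling uses fuzzy and partial matching."""
    return hashlib.sha256(content.encode()).hexdigest()[:12]

def correlate(browser_events: list[dict], repo_events: list[dict]) -> dict:
    """Group browser-side and repository-side sightings of the same
    content so a single exposure path can be reconstructed."""
    paths = defaultdict(list)
    for channel, events in (("browser", browser_events), ("repo", repo_events)):
        for e in events:
            paths[e["content_hash"]].append((channel, e["detail"]))
    # only content seen in *both* channels tells an end-to-end story
    return {h: v for h, v in paths.items()
            if {c for c, _ in v} == {"browser", "repo"}}
```

If the browser layer can only emit events that nothing downstream joins on, it is "just another alert source"; the join key is what turns it into a control model.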

Coverage across Microsoft, Google and SaaS​

ACS 2.0 expands Bonfy’s reach across a familiar enterprise stack, including Microsoft 365, Google Workspace, Salesforce, HubSpot, Slack, AWS S3 and on-premises file stores. That breadth matters because most large organizations do not live in a single-vendor world anymore. They live in mixtures of Microsoft, Google and cloud-native collaboration tools, with email, chat and file sharing all carrying sensitive content.
Bonfy says the new Google Workspace coverage brings parity with its Microsoft integrations, which is important for mixed environments. In practice, that means a security team can apply more consistent handling to unstructured data, regardless of whether employees are working in Gmail and Drive or Exchange Online and SharePoint.
This is where the product starts to look less like an AI novelty and more like an enterprise governance platform. The more connectors it supports, the more useful it becomes as a policy layer. The downside is equally obvious: broad coverage increases integration complexity, and enterprises will expect the controls to behave consistently across all of those systems.

Microsoft and Google as control anchors​

Bonfy’s Microsoft stack support is especially strategic because Microsoft 365 Copilot has become one of the most visible examples of enterprise AI adoption. The company’s own materials emphasize Mail, SharePoint, Entra and Purview as part of the security story, showing that it wants to align with Microsoft-native workflows rather than compete against them head-on.
Google Workspace support is equally important for organizations that split work across ecosystems. Many firms use Microsoft for identity and email while relying on Google Drive or Gmail in pockets of the business. A platform that can’t follow content across those boundaries will inevitably miss risk.
The broader implication is that AI-era security is becoming cross-platform by necessity. If the data moves across multiple productivity stacks, then governance has to do the same.

Visibility of the data surface​

Another notable addition in ACS 2.0 is a data surface visibility view that maps where sensitive content resides across repositories such as SharePoint, Google Drive, AWS S3 and on-premises file stores. That sounds similar to classic discovery, but Bonfy is layering on interaction telemetry so teams can also see how employees and AI agents touch that content.
This combination is important because location alone no longer tells the full story. A sensitive file sitting in SharePoint may be low risk until a copilot indexes it, a browser assistant summarizes it or an agent passes it into a downstream workflow. In that sense, the “surface” is not just a storage inventory; it is a living map of how data is consumed.
Bonfy also says the platform covers unstructured data at rest, in motion and in use. That is a broad promise, but it aligns with how enterprise risk is converging across storage, messaging and browser activity. The more these layers are connected, the more the platform can present a coherent picture of what happened.
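As a toy illustration of why location plus interaction telemetry beats location alone, a score that weights sensitivity by how often humans and agents touch a file will surface a quiet confidential document over a busy public one. The weights and fields below are invented for the example, not Bonfy's model:

```python
def risk_score(entry: dict) -> int:
    """Toy risk model: sensitivity weight times interaction volume.
    All weights are illustrative assumptions."""
    base = {"confidential": 3, "internal": 2, "public": 1}[entry["sensitivity"]]
    # agents touch content at machine speed, so weight them higher
    touches = entry["human_reads"] + 3 * entry["agent_reads"]
    return base * touches

surface = [
    {"path": "sharepoint://hr/salaries.xlsx", "sensitivity": "confidential",
     "human_reads": 2, "agent_reads": 40},
    {"path": "s3://public-site/banner.png", "sensitivity": "public",
     "human_reads": 100, "agent_reads": 0},
]
hottest = max(surface, key=risk_score)  # the quiet file a copilot keeps indexing
```

A pure storage inventory would rank these two files by popularity; adding agent telemetry flips the ordering toward the file that an AI workflow is actually consuming.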

Why unstructured data is the real battleground​

Most sensitive enterprise data is not neatly organized in rows and columns. It lives in emails, PDFs, chats, presentations, attachments and documents with varying levels of context. That makes unstructured data harder to classify and easier to mishandle when AI systems begin using it for grounding, summarization or retrieval.
Bonfy’s value proposition is that contextual intelligence can reduce the noise while preserving policy precision. If that works in practice, it gives teams a way to focus on the highest-risk content instead of drowning in alerts. If it does not, the platform will face the same skepticism that has dogged many DLP products over the years.
The market is clearly moving toward richer context, but the standard for success is high. Security teams do not just want more visibility; they want actionable visibility.

Security controls and retention​

ACS 2.0 also introduces data minimization measures, encryption updates and configurable retention settings, according to Bonfy. Those capabilities may sound routine, but they are crucial for an AI security platform because the more content is inspected, the more important it becomes to limit how long that content is retained and where it is stored.
Bonfy says it has also completed SOC 2 Type 2 certification, which is a meaningful trust signal for buyers evaluating a security vendor that handles sensitive enterprise content. In practical terms, certification does not guarantee product efficacy, but it does show that the company has invested in the controls and audit discipline customers expect.
The retention and encryption angle matters because AI security products often sit in a sensitive position: they need visibility to enforce policy, but they also must avoid becoming data hoarding systems themselves. That tension is one of the defining challenges of the category.

Trust, compliance and buyer scrutiny​

Enterprise buyers will likely ask whether Bonfy’s controls reduce exposure without introducing new compliance burdens. That is especially true in industries where data residency, retention and evidence handling are tightly governed. A platform that can inspect content but not manage its own lifecycle responsibly will struggle to win long-term trust.
This is why the SOC 2 Type 2 announcement is more than a footnote. It signals maturity and may help Bonfy get through procurement, but it also raises expectations. Buyers will assume the product can withstand operational scrutiny.
If the platform can pair contextual protection with disciplined retention, it will have an advantage over tools that are strong on detection but weaker on governance. That combination is increasingly what enterprises want from modern data security.

Competitive positioning​

Bonfy’s move into AI agent security is competitive in two directions. On one side, it is competing with traditional DLP and DSPM vendors that have been trying to adapt to generative AI. On the other, it is competing with native controls from platform vendors like Microsoft, Google and the major AI providers themselves.
That is a difficult place to stand, but also a promising one. Native vendors have reach and integration advantages, yet their controls may be too tightly coupled to their own ecosystems. Bonfy is trying to occupy the middle ground: broad enough to cover mixed environments, but specific enough to understand enterprise content context.
The company’s claims about Microsoft 365, Google Workspace, Salesforce, Slack, AWS S3 and on-prem stores reinforce that strategy. It is not trying to replace the foundational platforms. It is trying to become the policy and visibility layer that sits above them.

The platform-vs-point-solution question​

Security buyers increasingly dislike point solutions that solve one slice of the problem. They want platforms that can correlate identities, content, workflows and risk signals. Bonfy appears to be leaning into that demand by offering APIs, an MCP server interface and connectors into SIEM and SOC workflows such as Splunk, Microsoft Sentinel and Rapid7.
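Operationalizing findings in a SIEM usually comes down to emitting events in a shape the downstream tool already ingests. The following is a generic illustration with invented field names, not Bonfy's or any SIEM's actual schema:

```python
import json
from datetime import datetime, timezone

def to_siem_event(detection: dict) -> str:
    """Flatten an internal detection into a JSON event for SIEM
    ingestion; the field names here are illustrative assumptions."""
    event = {
        "time": datetime.now(timezone.utc).isoformat(),
        "vendor": "bonfy-acs",  # assumed source tag, not an official schema
        "severity": detection.get("severity", "medium"),
        "agent_id": detection.get("agent_id"),
        "rule": detection.get("rule"),
        "resource": detection.get("resource"),
    }
    return json.dumps(event, sort_keys=True)
```

The practical test for any such connector is whether analysts can pivot from the SIEM alert back to the full agent path, which is where correlation, not formatting, does the real work.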
That said, platform claims are only as strong as the day-to-day experience of deployment and tuning. Enterprises will want to know how quickly the tool can be deployed, how much policy engineering it requires and whether it generates meaningful detections without overwhelming analysts.
In that sense, Bonfy’s strongest competitor may be inertia. If organizations can get “good enough” coverage from existing suites, they may delay adopting a new platform unless the agent problem becomes painful enough to demand it.

Enterprise vs consumer impact​

For enterprises, ACS 2.0 speaks directly to the pain of controlling sensitive data in a world where AI features are being added faster than governance policies can keep up. Large organizations will care about audit trails, identity integration, repository visibility and the ability to distinguish sanctioned AI use from shadow AI. They will also care about whether the platform can keep pace with the explosion of agent frameworks coming from vendors they already trust.
For consumers, the impact is more indirect but still important. As enterprises tighten their controls, employees may encounter more friction when they try to use public tools or transfer content between work and personal accounts. That can feel restrictive, but it is also a sign that organizations are beginning to treat AI usage as a governed business process rather than a novelty.
The consumer angle also highlights a deeper trend: as AI assistants become embedded in browsers, email and productivity apps, the line between personal convenience and corporate risk will continue to blur. Bonfy’s product is designed to force that line back into view.

Different priorities, same underlying risk​

Consumers often prioritize speed and ease of use. Enterprises prioritize containment, accountability and compliance. The same AI workflow can be valuable in one setting and unacceptable in another, depending on what data it touches and how it is stored.
That is why agent security tools will likely grow first in the enterprise market. The complexity of corporate data environments creates a clear need for control layers that consumer tools do not require. Over time, though, the design patterns established in the enterprise could influence how safer AI products are built for everyone else.
The long-term lesson is simple: if AI is going to participate in real work, it has to participate in real governance.

Strengths and Opportunities​

Bonfy’s ACS 2.0 launch has several clear strengths. It addresses a genuine market gap, aligns with current enterprise AI concerns and presents a coherent story around agent-aware data protection. If the product performs as described, it could help security teams move from reactive alerting to more proactive governance.
  • Agent-first framing gives the product relevance in a market shifting toward autonomous workflows.
  • Cross-platform coverage makes it useful in mixed Microsoft and Google environments.
  • MCP support positions it around emerging AI integration standards.
  • Browser monitoring addresses shadow AI where many leaks begin.
  • Data-surface visibility helps unify storage, messaging and AI telemetry.
  • SOC 2 Type 2 certification should improve buyer confidence and procurement readiness.
  • SIEM and SOC integrations make it easier to operationalize findings inside existing security workflows.

Where Bonfy can win​

The biggest opportunity is to become the “context layer” for AI-era content movement. If Bonfy can connect repository, identity and agent activity into one usable view, it may stand out from tools that only see fragments of the story.
Another opportunity is category timing. Many companies are only now discovering how quickly copilot and agent deployments can outpace policy. Vendors that solve the problem before the pain becomes widespread often earn more trust than those that arrive later with a more polished but less urgent message.

Risks and Concerns​

Bonfy’s thesis is compelling, but the category is still early and the execution burden is high. The company will need to prove that its visibility claims hold up across diverse workflows, that its policies remain manageable at scale and that its detections are accurate enough to avoid alert fatigue. In a market full of ambitious security language, precision will matter more than aspiration.
  • Integration complexity could slow deployment in large, heterogeneous environments.
  • False positives may undermine analyst trust if contextual detection is too noisy.
  • Cloud-hosted agents can still create visibility gaps if telemetry is incomplete.
  • Browser extensions may be bypassed or limited by user behavior and IT policy.
  • Native platform controls from Microsoft, Google and others may reduce demand for third-party tools.
  • Security sprawl could increase if Bonfy becomes another silo instead of a unifying layer.
  • Category immaturity means buyers may still be unsure what “agent security” should look like in practice.

The adoption hurdle​

The hardest problem may not be technology, but governance maturity. Many enterprises are still defining who owns AI policy, where exceptions are approved and how evidence is retained. A sophisticated tool cannot fix organizational ambiguity on its own.
Bonfy will therefore need to do more than sell features. It will need to help customers define operating models for AI oversight. That is a more demanding value proposition, but also a potentially more durable one.

Looking Ahead​

The most important thing to watch is whether ACS 2.0 becomes a credible control plane for agentic AI or remains a well-timed expansion of a familiar DLP story. The market will reward vendors that can translate AI risk into operational controls, not just detection language. Bonfy’s emphasis on agent identity, multi-channel visibility and contextual policy gives it a strong starting point.
The next phase will likely be shaped by real-world deployments, especially in Microsoft-heavy organizations and in mixed SaaS environments where shadow AI is already common. If customers can demonstrate lower leakage, better auditability and manageable policy overhead, Bonfy could gain meaningful traction. If not, the category may fragment into point features inside broader platforms.
What to watch next:
  • RSAC demonstrations and how clearly Bonfy explains agent-specific controls.
  • Customer case studies showing measurable reductions in AI-driven exposure.
  • Depth of MCP support and whether it extends across more frameworks.
  • Operational reporting for SOC and compliance teams.
  • Competitive responses from larger security and productivity vendors.
  • Expansion of browser and shadow AI visibility beyond basic monitoring.
  • Evidence of enterprise scale across Microsoft, Google and SaaS-heavy deployments.
Bonfy’s ACS 2.0 launch is more than a product update; it is a statement about where enterprise security is headed. If AI agents are going to become embedded in everyday work, then the security model has to become just as dynamic, contextual and distributed as the workflows it protects. The companies that understand that shift early will shape the next generation of data security, and Bonfy is clearly trying to be one of them.

Source: IT Brief UK https://itbrief.co.uk/story/bonfy-unveils-acs-2-0-to-secure-data-for-ai-agents/
 
