Bonfy ACS 2.0: Agentic AI Data Guardrails for Microsoft 365 and Google Workspace

Bonfy’s launch of Adaptive Content Security 2.0 lands squarely in the center of the enterprise AI security debate: how do you protect sensitive data when AI agents can read, write, and move information across email, collaboration suites, SaaS apps, browsers, and cloud storage without behaving like a conventional user? The company is pitching ACS 2.0 as a control plane for that problem, with coverage that spans Microsoft 365, Google Workspace, identity systems, security operations tools, and agent interfaces built around the Model Context Protocol (MCP) and other frameworks. In a market where many tools still think in terms of users and repositories, Bonfy is arguing that the new attack surface is the workflow itself. Its timing is deliberate, with a preview planned around RSAC 2026, which runs March 23–26 in San Francisco.

[Image: Futuristic AI workflow diagram with Microsoft 365, control plane, transformations, and audit trail safeguards.]

Background

The security industry has spent years building defenses around endpoints, identities, email, files, and cloud applications. That model made sense when people were the primary actors and the movement of data followed relatively predictable paths. Once copilots, agents, and generative AI systems started interacting directly with enterprise content, the old perimeter-style assumptions began to fray.
Bonfy’s move reflects a broader shift in the market toward agentic AI security. Vendors increasingly describe a need to inspect not just final outputs, but also prompts, intermediate steps, tool invocations, and cross-system actions. That is the key difference between protecting a document and protecting the chain of actions that can transform a document into a spreadsheet row, a support case, a chat message, or a customer-facing response.
The company is entering a crowded but fast-maturing segment. Recent launches from larger cybersecurity and networking firms have also focused on AI runtime protection, AI-aware SASE, and policy layers for model and agent interactions. Palo Alto Networks, Cisco, and Thales have all framed AI security as a lifecycle problem rather than a point product, suggesting the market is converging on the same diagnosis even if the solutions differ.

Why this matters now​

AI agents are not just faster chatbots. They can authenticate, query, synthesize, route, and trigger actions across multiple systems, which means they can also multiply mistakes at machine speed. That creates a new kind of exposure: not merely data leakage, but data leakage compounded by automation, persistence, and scale.
For security teams, the practical problem is visibility. If a user pastes content into a public tool, the event is obvious. If an enterprise agent pulls the same content into a workflow, splits it across calls, and produces a derivative artifact somewhere else, the trail can be far harder to reconstruct. Bonfy is betting that the companies that solve that visibility gap first will shape the next generation of data security controls.

The broader market context​

The vendor language is telling. Across the industry, companies now talk about runtime governance, shadow AI, tool-level controls, and policy enforcement across modalities. These are not buzzwords so much as admissions that classic data loss prevention (DLP) and data security posture management (DSPM) tooling is incomplete for AI-native workflows. The market appears to be moving from “detect sensitive content” to “understand the full content lifecycle.”
That shift matters for both buyers and competitors. Buyers want fewer false positives and stronger policy automation; competitors want to own the layer where data, identity, and AI meet. Bonfy’s ACS 2.0 is an attempt to claim that layer before it becomes table stakes.

What Bonfy Is Trying to Solve​

Bonfy’s core argument is simple: enterprise data security must follow the action, not just the asset. ACS 2.0 is designed to monitor how data is accessed, transformed, and redistributed by AI agents, copilots, and GenAI workflows across multiple platforms. That makes the product more ambitious than a typical repository scanner or email filter.
The company is framing agents as distinct entities rather than extensions of a human account. That distinction matters because an agent can be authorized to do things that a human user would never do manually, and it may do them in ways that are difficult to attribute after the fact. If the agent can read from one system and write into another, the security model has to understand the journey, not just the destination.
Bonfy also appears to be positioning ACS 2.0 as a policy layer rather than a single-product control. That is a subtle but important strategic choice. Policy layers are easier to integrate into enterprise architectures because they can sit above existing systems and coordinate behavior across them.

Agent identity vs user identity​

Traditional controls often assume a person is the actor. In the AI era, that assumption breaks down quickly. Bonfy’s approach suggests that each agent should have a recognizable security profile, access context, and behavior history.
That could help answer questions like: which agent touched the file, which tool it invoked, which application received the output, and whether the content was altered along the way. In a compliance investigation, those distinctions are no longer academic; they are the difference between an explainable event and an opaque one.
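To make those investigative questions concrete, the kind of per-agent action record such a profile implies might look like the following sketch. All field and function names here are illustrative assumptions for this article, not Bonfy's actual schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AgentAction:
    """One step in an agent's workflow: which tool it invoked and where data moved."""
    agent_id: str      # the agent, tracked separately from any human account
    tool: str          # e.g. "sharepoint.read", "slack.post" (hypothetical names)
    source: str        # system the content was read from
    destination: str   # system the output was written to
    transformed: bool  # whether the content was altered along the way

def actions_touching(trail: List[AgentAction], system: str) -> List[AgentAction]:
    """Answer the investigator's question: which steps touched this system?"""
    return [a for a in trail if system in (a.source, a.destination)]

trail = [
    AgentAction("agent-42", "sharepoint.read", "SharePoint", "agent-memory", False),
    AgentAction("agent-42", "llm.summarize", "agent-memory", "agent-memory", True),
    AgentAction("agent-42", "slack.post", "agent-memory", "Slack", False),
]
print(len(actions_touching(trail, "SharePoint")))  # 1
```

A trail like this is what turns an opaque incident into an explainable one: each hop records which agent acted, which tool it used, and whether the content was transformed in transit.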

Why conventional DLP falls short​

Classic DLP excels at spotting patterns in email, attachments, and endpoints, but it struggles when data is embedded in a sequence of AI actions. If the sensitive information is fragmented or transformed during processing, the final output may look harmless even though the process itself was risky.
Bonfy is arguing that browser-only or endpoint-only defenses cannot see the full picture because a great deal of AI activity is now happening in vendor-managed compute, not on corporate devices. That is an important point, and it aligns with the industry’s broader push toward policy enforcement in the cloud and at the application layer.
  • User-centric controls miss machine-to-machine workflows.
  • Static repository scanning misses runtime transformations.
  • Endpoint monitoring may never see cloud-hosted agent actions.
  • Final-output filtering misses intermediate leakage.
  • Cross-app context becomes essential for risk decisions.
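A toy illustration of the fragmentation problem above: a pattern-based scanner that catches a card-like number in a single message can miss it entirely once an agent splits the same content across two calls. The regex and the test data are invented for illustration, not taken from any DLP product:

```python
import re

CARD_PATTERN = re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{4}\b")

def classic_dlp_flags(text: str) -> bool:
    """Pattern match applied per message, as a classic DLP engine might."""
    return bool(CARD_PATTERN.search(text))

secret = "Customer card: 4111-1111-1111-1111"

# Inspected as one unit, the pattern fires.
assert classic_dlp_flags(secret) is True

# An agent that splits the content across two tool calls defeats the same check:
# neither fragment contains the full pattern.
fragment_a, fragment_b = secret[:25], secret[25:]
assert classic_dlp_flags(fragment_a) is False
assert classic_dlp_flags(fragment_b) is False
print("per-fragment scanning missed the card number")
```

This is why the final output "may look harmless even though the process itself was risky": the risk lives in the sequence of actions, not in any single payload.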

ACS 2.0 and the AI Agent Data Guardrails Layer​

One of the most consequential additions in ACS 2.0 is the AI Agent Data Guardrails capability. Bonfy says it supports MCP and other agent framework connections, which means the platform is trying to inspect what agents read, share, and generate while they are operating. That is a more granular promise than simply scanning the output after the fact.
This matters because MCP-style architectures are designed to connect hosts, clients, and tools in ways that are intentionally flexible. That flexibility is powerful, but it also expands the number of places where content can move. If Bonfy can observe those transitions, it may be able to create a more reliable control point for agent behavior.
The challenge, of course, is implementation. Real-time visibility into agentic workflows is technically difficult, especially when systems span vendors and hosting environments. The more environments a platform covers, the more it has to normalize protocols, metadata, and policy logic without becoming brittle.
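As a sketch of what a guardrail at the tool-invocation boundary could look like, the wrapper below intercepts a call before delegating to the real tool. This is a generic illustration of the pattern, not Bonfy's implementation or the MCP API; the policy tables and tool names are assumptions:

```python
from typing import Any, Callable, Dict

class PolicyViolation(Exception):
    """Raised when an agent's tool call breaks a data policy."""

# Hypothetical policy: which tools each agent may invoke, and which
# destinations may never receive content tagged as sensitive.
ALLOWED_TOOLS = {"agent-42": {"crm.read", "slack.post"}}
BLOCKED_SENSITIVE_DESTINATIONS = {"public-web"}

def guarded_invoke(agent_id: str, tool: str, args: Dict[str, Any],
                   impl: Callable[..., Any]) -> Any:
    """Intercept a tool call, enforce policy, then delegate to the real tool."""
    if tool not in ALLOWED_TOOLS.get(agent_id, set()):
        raise PolicyViolation(f"{agent_id} may not invoke {tool}")
    if args.get("sensitive") and args.get("destination") in BLOCKED_SENSITIVE_DESTINATIONS:
        raise PolicyViolation("sensitive content routed to a blocked destination")
    return impl(**args)

def slack_post(destination: str, text: str, sensitive: bool = False) -> str:
    """Stand-in for a real tool implementation."""
    return f"posted to {destination}"

print(guarded_invoke("agent-42", "slack.post",
                     {"destination": "team-channel", "text": "hi"}, slack_post))
```

The hard part is exactly what the paragraph above notes: doing this across vendors means normalizing many different tool protocols into one policy decision point.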

Inspecting the journey, not just the result​

The real value in agent guardrails is that they track how data was used, not just whether it exists. That gives security teams the ability to define policy around transformations, redactions, and permissible tool usage.
It also creates a stronger story for incident response. If a workflow is compromised, the organization can reconstruct which steps were executed, which data categories were involved, and where the output was routed. In practice, that audit trail could be one of the platform’s most valuable selling points.

The MCP angle​

MCP is increasingly important because it standardizes how models connect to external tools and data sources. Official documentation describes MCP as a client-host-server architecture that emphasizes context exchange and security boundaries, while also noting that tools are model-invoked and should be governed with a human in the loop where possible. That makes MCP both an opportunity and a risk surface for enterprise adopters.
Bonfy’s decision to support MCP is smart because it avoids locking the company into one vendor’s agent stack. It also helps the platform fit into the reality of the market, where enterprises are likely to mix and match Microsoft, Google, OpenAI, Anthropic, Salesforce, and internal frameworks. That heterogeneity is not a temporary condition; it is the operating model.

Shadow AI and Browser-Based Risk​

Bonfy’s browser extension is aimed at one of the most persistent enterprise problems of the AI era: shadow AI. Employees often experiment with unsanctioned tools, paste sensitive snippets into public interfaces, or use browser assistants that sit outside approved governance. Those behaviors may look harmless in isolation, but they can undermine data policy quickly.
The browser remains a critical control point because it is where users, SaaS apps, and public AI tools often intersect. A browser extension can observe web traffic tied to unsanctioned tools, which is useful because many AI interactions now happen through web interfaces rather than formal enterprise integrations. That makes the browser both a productivity hub and a leakage channel.
Bonfy’s framing suggests that organizations need visibility into which assistants are being used, what content is entering them, and whether the use is sanctioned. That is especially important in regulated industries, where accidental disclosure into the wrong service can become a legal and compliance issue almost immediately.
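The core decision a browser-layer control has to make can be sketched very simply: is the destination of an outbound AI interaction on the sanctioned inventory or not? The hostnames below are invented placeholders, and a real extension would also inspect content, not just destinations:

```python
from urllib.parse import urlparse

# Hypothetical inventory: assistants the organization has reviewed and approved.
SANCTIONED_AI_HOSTS = {"copilot.contoso.example", "approved-assistant.example"}

def classify_ai_destination(url: str) -> str:
    """Label an outbound AI-tool request as sanctioned or shadow AI."""
    host = urlparse(url).hostname or ""
    return "sanctioned" if host in SANCTIONED_AI_HOSTS else "shadow-ai"

print(classify_ai_destination("https://approved-assistant.example/chat"))  # sanctioned
print(classify_ai_destination("https://random-chatbot.example/prompt"))    # shadow-ai
```

Even this crude classification supports the governance goal described above: making unsanctioned use observable before deciding whether to block it.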

Why browser controls still matter​

A lot of vendor messaging implies the browser is old-fashioned security territory. In reality, it is one of the few places where organizations can still observe behavior that would otherwise vanish into the cloud. As AI assistants become embedded in webpages and SaaS products, the browser becomes a practical inspection point.
That said, browser extensions are not a complete answer. They can miss native desktop apps, API-driven flows, and system-level agent activity in vendor clouds. Bonfy appears aware of that limitation, which is why it is coupling the browser layer with broader integrations and API support.

Shadow AI as governance failure​

Shadow AI is not only a security problem; it is also a governance problem. If employees are using tools that IT and security teams cannot inventory, then policy enforcement becomes uneven and compliance reporting becomes incomplete.
Bonfy’s strategy suggests that security teams need to treat unsanctioned AI use like any other unapproved data pathway. The point is not to ban experimentation outright, but to make it observable and governable. That distinction will matter as organizations try to balance innovation with control.
  • Unsanctioned web tools can expose regulated data.
  • Browser assistants may bypass standard procurement and review.
  • Copy-paste workflows remain a major leakage path.
  • Invisible AI use complicates audit and compliance.
  • Mixed personal and corporate behavior creates policy ambiguity.

Coverage Across Microsoft, Google, and Core Business Apps​

Bonfy is emphasizing broad integration coverage, including Microsoft 365, Google Workspace, Salesforce, HubSpot, Slack, AWS S3, and on-premises file stores. That breadth is strategically important because most enterprises do not run a single-vendor environment. They live in a patchwork of collaboration, CRM, storage, and identity systems, and their AI workflows increasingly traverse all of them.
The Microsoft side of the story is especially notable. Bonfy says ACS 2.0 includes native coverage for Exchange Online, SharePoint, Entra, Copilot, and Purview. On the Google side, it now adds Gmail, Drive, and Directory coverage, which Bonfy says brings parity with its Microsoft capabilities. That parity matters to organizations running mixed environments and wanting one policy model for unstructured data.
The practical implication is that Bonfy wants to sit above the collaboration stack rather than compete with each individual app. If it can do that successfully, it becomes a cross-platform control point rather than just another point product in a crowded enterprise tooling ecosystem.

Mixed environments need consistent policy​

Many security tools are strong in one ecosystem and weaker in another. That creates asymmetry, which attackers and careless users can both exploit. A platform that can normalize policy across Microsoft and Google could solve a real operational headache for security teams.
This is not just a convenience feature. In large enterprises, separate policies for separate collaboration suites often become separate risk models, separate dashboards, and separate incidents. That fragmentation is expensive, and it makes AI governance harder than it needs to be.

Identity and workflow alignment​

Bonfy’s integrations with Microsoft Entra and Google Directory suggest that it wants data policy to align with identity context. That is critical because AI agent activity often inherits authority from a human account or service principal, but the operational behavior may diverge from what the account owner actually intended.
It also helps that Bonfy connects to Splunk, Microsoft Sentinel, and Rapid7. That means the platform is trying to make its findings visible to the broader security stack, not trap them inside a proprietary console.
  • Microsoft 365 coverage supports enterprise coexistence with Copilot and Purview.
  • Google Workspace coverage broadens the mixed-environment story.
  • CRM and chat integrations bring business context into the policy model.
  • Identity directory connections improve attribution and response.
  • SIEM integrations make the platform more operationally useful.
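Feeding findings to the broader security stack usually means flattening them into events a SIEM can index. The sketch below shows the general shape of such a record; the field names are illustrative assumptions, not Splunk's, Sentinel's, or Bonfy's actual schemas:

```python
import json
from datetime import datetime, timezone

def to_siem_event(agent_id: str, action: str, severity: str, detail: str) -> str:
    """Serialize a guardrail finding as a flat JSON event for SIEM ingestion."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "acs",          # hypothetical product identifier
        "agent_id": agent_id,     # attribution to the agent, not a human account
        "action": action,
        "severity": severity,
        "detail": detail,
    }
    return json.dumps(event, sort_keys=True)

evt = to_siem_event("agent-42", "blocked_tool_call", "high",
                    "sensitive content routed toward unsanctioned destination")
print(json.loads(evt)["severity"])  # high
```

The design point is the one made above: findings that land in Splunk, Sentinel, or Rapid7 alongside the rest of the organization's telemetry are operationally useful; findings trapped in a proprietary console are not.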

Data Surface Visibility and Unstructured Data​

Bonfy’s new data surface visibility view is another sign that the company is targeting the messy reality of enterprise unstructured data. Sensitive content does not live in one place anymore. It is scattered across file stores, collaboration tools, cloud buckets, inboxes, chat logs, and AI-generated derivatives.
The platform’s focus on SharePoint, Google Drive, AWS S3, and on-premises file stores is telling. These repositories are where much of an organization’s informal intellectual property, customer information, and operational knowledge reside. They are also where AI tools increasingly train, summarize, search, and synthesize content.
Bonfy says the platform covers unstructured data at rest, in motion, and in use. That phrasing matters because it signals a full-lifecycle approach. If the company can truly connect those states, it can offer a more coherent story than tools that only inspect storage or only watch traffic.

Why unstructured data is the real battleground​

Structured data gets most of the governance attention because it is easier to classify and query. Unstructured data, by contrast, is messy, redundant, and often over-shared. That makes it the natural fuel for AI systems and the hardest category to defend.
In AI workflows, unstructured content becomes even more sensitive because it can be summarized, rephrased, embedded, or recombined. A single file may feed multiple outputs, each of which could reveal more than the original author intended.

Visibility as a governance prerequisite​

If security teams cannot see where sensitive content resides, they cannot govern how AI systems touch it. That is why Bonfy is leaning so hard into visibility. The more complete the map of content exposure, the easier it becomes to enforce minimization, retention, and access policies.
This also explains why the company is emphasizing content flows rather than just content classification. A label is useful, but a label without context is incomplete. The movement of the content is the risk multiplier.
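One way to see why flow matters more than a standalone label: if derivatives inherit the sensitivity of their source through a lineage graph, a Slack message three hops away from a confidential contract is still confidential. The graph and propagation rule below are a minimal sketch of that idea, not any vendor's mechanism:

```python
from typing import Dict, Set

# parent -> derivatives: artifacts an AI workflow produced from a source
LINEAGE: Dict[str, Set[str]] = {
    "contract.docx": {"summary.md"},
    "summary.md": {"slack-msg-991"},
}
LABELS = {"contract.docx": "confidential"}  # only the root carries a label

def propagate_labels(lineage: Dict[str, Set[str]],
                     labels: Dict[str, str]) -> Dict[str, str]:
    """Derivatives inherit their source's label via a walk over the lineage graph."""
    result = dict(labels)
    frontier = list(labels)
    while frontier:
        parent = frontier.pop()
        for child in lineage.get(parent, set()):
            if child not in result:
                result[child] = result[parent]
                frontier.append(child)
    return result

print(propagate_labels(LINEAGE, LABELS)["slack-msg-991"])  # confidential
```

Without the lineage edges, the chat message would look unlabeled and harmless; the movement of the content is what carries the risk forward.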

Security Controls, Compliance, and the Enterprise Buying Case​

Bonfy says ACS 2.0 includes data minimization, encryption updates, and configurable retention settings. Those are classic enterprise control themes, but they matter more in the AI era because content can now be replicated and repurposed in ways that make over-retention more dangerous. The company also says it has completed SOC 2 Type 2 certification, which helps with enterprise procurement conversations.
From a buying perspective, this is where the product needs to prove that it is more than a concept. Security leaders will want to know whether ACS 2.0 reduces risk without creating too much friction for users and admins. The value proposition has to be policy precision, not just policy volume.
Bonfy’s messaging also suggests it understands the difference between consumer-style AI adoption and enterprise deployment. In consumer settings, users often tolerate opaque controls if the tool is useful. In enterprise settings, that tradeoff is much less acceptable. Security, auditability, retention, and identity all have to line up.
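Configurable retention, at its simplest, is a per-category window plus an evaluation against it. The sketch below shows that mechanic; the categories and window lengths are invented for illustration, not Bonfy's defaults:

```python
from datetime import date, timedelta

# Hypothetical per-category retention windows, in days.
RETENTION_DAYS = {"chat-derivative": 30, "customer-record": 365}

def past_retention(category: str, created: date, today: date) -> bool:
    """True if an artifact has outlived its configured retention window
    and is a candidate for minimization or deletion."""
    window = timedelta(days=RETENTION_DAYS[category])
    return today - created > window

# A 59-day-old AI-generated chat derivative exceeds its 30-day window.
print(past_retention("chat-derivative", date(2025, 1, 1), date(2025, 3, 1)))  # True
```

The AI-era twist is that derivatives multiply: each summary, embedding, or pasted excerpt is a new artifact that needs its own retention clock, which is exactly the sprawl the paragraph above describes.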

Compliance is becoming product design​

The best enterprise AI products increasingly treat compliance as a design constraint, not a bolt-on feature. That is especially true in regulated sectors, where retention rules and access controls can determine whether a tool is even deployable.
Bonfy’s messaging fits that direction by stressing control across productivity apps, cloud storage, and agent frameworks. If the platform can consistently enforce minimization and retention rules, it may reduce one of the biggest objections enterprises have to AI workflows: that they create data sprawl faster than governance can keep up.

Enterprise vs consumer impact​

For enterprises, the main benefit is centralized governance over AI-assisted data movement. For consumers, the story is simpler and less consequential: fewer accidental disclosures and more transparent assistant behavior. The gap between those two audiences is wide, and Bonfy is clearly aiming at the enterprise end of the market.
That matters because the enterprise buyer will ask harder questions about legal hold, audit evidence, tenant boundaries, identity federation, and exception handling. Consumer-grade AI safety features rarely survive those conversations.
  • SOC 2 Type 2 supports procurement credibility.
  • Retention controls address data lifecycle governance.
  • Minimization features align with privacy and compliance goals.
  • Encryption updates help reduce exposure during transit and storage.
  • Enterprise controls must be explainable to auditors.

Competitive Positioning and Market Implications​

Bonfy’s launch puts it into a competitive conversation with vendors that are all converging on similar phrases: AI security, runtime guardrails, agent governance, and data protection across workflows. That convergence is healthy for the market, but it also means differentiation will come down to depth of integration, quality of telemetry, and how well products handle multi-vendor reality.
The company is trying to carve a niche between DLP, DSPM, SASE, and AI security platforms. That is a difficult but potentially lucrative position. If Bonfy can own the layer where agent actions intersect with unstructured data, it may become a control point enterprises did not know they needed until AI adoption accelerated.
The biggest strategic question is whether Bonfy can keep its narrative sharp enough in a crowded field. Large vendors can bundle AI security into broader platforms, which may pressure smaller specialists to prove that their controls are more precise, more adaptive, or easier to deploy. In that sense, ACS 2.0 is both a product launch and a market test.

How rivals may respond​

Large security vendors are likely to keep extending their platforms upward and outward into AI governance. That could mean deeper integrations with agent frameworks, richer runtime controls, or more emphasis on content-aware inspection. Smaller vendors may focus on niche use cases, faster deployment, or sharper data control.
The competitive pressure will also push buyers to demand proof. Demonstrations, telemetry examples, and real incident scenarios will matter more than generic AI language. In the next phase of the market, the winners will be the vendors who can show not just that they see agent activity, but that they can act on it without slowing the business down.

What this says about the market​

The market is moving from “AI usage detection” to “AI workflow enforcement.” That is a meaningful evolution. It suggests enterprises no longer view AI security as a side project; they are starting to see it as part of core data governance.
If that trend continues, data security vendors will increasingly have to think like workflow orchestrators. The future may belong to platforms that can unite identity, content, policy, and runtime observation in one operational model.
  • Platform breadth will matter as much as point features.
  • Cross-vendor support will be a key differentiator.
  • Policy precision will beat noisy detection.
  • Runtime visibility will become a buying requirement.
  • AI governance will increasingly overlap with DLP and DSPM.

Strengths and Opportunities​

Bonfy’s ACS 2.0 arrives with several strengths that could resonate with enterprise security teams already struggling to contain AI-driven data movement. The company is not just adding a feature; it is trying to reframe the security problem around agent behavior, unstructured data, and workflow visibility. That is a credible response to a real market gap.
  • Agent-first design addresses a genuine blind spot in legacy controls.
  • Broad platform coverage across Microsoft, Google, cloud storage, and SaaS tools improves relevance.
  • MCP support gives the product a path into modern agent ecosystems.
  • Browser monitoring adds a practical shadow AI control point.
  • SIEM integrations help fit the product into existing SOC workflows.
  • Visibility across data states strengthens governance and investigation use cases.
  • SOC 2 Type 2 helps reduce procurement friction.

Risks and Concerns​

The biggest risk is overpromising on visibility in environments that are inherently hard to observe. If AI agents operate in vendor-managed clouds and across loosely governed toolchains, even a strong platform may struggle to deliver complete coverage. Buyers will want evidence, not just ambition.
  • Complex deployments could slow adoption in large enterprises.
  • False positives may create alert fatigue if policy tuning is weak.
  • Vendor sprawl could dilute the simplicity Bonfy is trying to sell.
  • Coverage gaps may remain in native desktop or API-only workflows.
  • Competitive bundling from large vendors could compress differentiation.
  • Change management may be harder than the technology itself.
  • Shadow AI may move faster than policy enforcement can adapt.

Looking Ahead​

The next few months will tell us whether ACS 2.0 is a meaningful security platform or simply an early marker of where the market is heading. Bonfy’s RSAC presence will be important because buyers will want to see how the platform behaves in real enterprise scenarios, especially around Microsoft 365, Google Workspace, and agent frameworks. If the company can demonstrate reliable policy enforcement without crushing usability, it may gain traction in a very hot category.
The broader industry trend is unlikely to reverse. Enterprises are moving deeper into AI-assisted and agentic workflows, and with that comes a need for stronger data governance. The companies that win this race will probably be the ones that make security feel invisible when things are normal and exhaustive when things go wrong.
  • Watch RSAC demos for real-world workflow examples.
  • Track support for more agent frameworks beyond the current integrations.
  • Monitor policy granularity around prompts, outputs, and intermediate steps.
  • Compare runtime visibility against bundled alternatives from larger vendors.
  • Evaluate deployment friction in mixed Microsoft/Google environments.
Bonfy’s ACS 2.0 is best understood as a bet on the future shape of enterprise security: not a world where AI simply consumes data, but one where AI continuously reshapes it. If that future arrives as quickly as vendors expect, the winners will be the platforms that can see the whole path from source content to generated action. Bonfy is trying to be one of them.

Source: SecurityBrief UK https://securitybrief.co.uk/story/bonfy-unveils-acs-2-0-to-secure-data-for-ai-agents/