Vectra AI’s warning about Azure logging is more than another vendor alert; it is a reminder that cloud visibility can change when the platform underneath it changes. The company says Microsoft’s migration away from legacy Azure Diagnostics extensions toward the Azure Monitor Agent and Data Collection Rules can shift logging control from VM-level signals to control-plane operations, creating potential blind spots for defenders who still watch the old indicators. Microsoft’s own Azure Monitor documentation confirms the broader transition away from older agents and toward DCR-driven monitoring, while Vectra’s testing claims a single API path may disrupt logging across multiple virtual machines while generating only limited telemetry. For WindowsForum readers, the lesson is direct: Azure security monitoring is not “set and forget,” and detection logic must evolve with the cloud architecture it depends on.
Background
Microsoft has spent years consolidating Azure monitoring around newer agent and policy models. The older Azure Diagnostics extension, commonly associated with WAD for Windows and LAD for Linux, collected guest operating system and workload telemetry from Azure compute resources and could send data to destinations such as Storage, Event Hubs, Azure Monitor Metrics, and related tooling. Microsoft’s documentation states that the Azure Diagnostics extension was deprecated on March 31, 2026, with customers directed toward newer alternatives including Azure Monitor Agent.
The Azure Monitor Agent, or AMA, is part of a larger shift toward centralized collection policy. Instead of treating monitoring primarily as an extension-specific configuration on each VM, AMA uses Data Collection Rules to define what gets collected, where it goes, and which machines receive those rules. Microsoft’s own guidance describes monitoring enablement as three linked steps: install the agent, create DCRs, and associate those DCRs with virtual machines or Arc-enabled servers.
That architectural change has operational benefits. DCRs make it easier to manage collection at scale, reduce duplicate configurations, and apply consistent settings across fleets. They also fit modern infrastructure-as-code practices better than hand-managed VM extension settings, especially in large enterprises with thousands of Windows Server, Linux, Azure Virtual Desktop, and hybrid Arc resources.
But every simplification creates a new assumption. If security detections still expect logging disruption to appear as VM extension activity, they may miss activity that now appears under Microsoft.Insights control-plane operations. Vectra AI’s analysis argues that this is precisely where some organizations may be exposed: not because Azure stopped logging, but because defenders may be looking in the wrong place.
Why Azure Monitoring Architecture Changed
From VM-local configuration to fleet policy
The move to AMA and DCRs reflects a wider trend in cloud operations: centralized policy beats per-machine configuration. In traditional server management, administrators installed an agent, configured it locally, and trusted the machine to report back. In cloud-native environments, that model becomes brittle because machines are frequently created, replaced, scaled, or attached through automation.
DCRs solve part of that problem by separating the agent from the collection policy. One agent can process multiple rules, and one rule can apply to multiple machines. That is efficient for operations teams, but it also means a single rule or association can have a wide operational blast radius if modified incorrectly or maliciously.
The upside is clear for platform teams:
- Less manual configuration drift across VM fleets
- Reusable collection definitions for performance counters, events, and logs
- Better infrastructure-as-code alignment for DevOps workflows
- More flexible routing to Log Analytics and other destinations
- Cleaner migration path away from older monitoring extensions
Why the retirement timeline matters
Microsoft’s retirement of older agents is not a theoretical future concern. The Log Analytics agent retired in 2024, with Microsoft warning that legacy ingestion services could be shut down after March 2026, while the Azure Diagnostics extension reached its own 2026 deprecation milestone. These overlapping transitions mean many organizations are still in mixed states, running old detections, new agents, and partially migrated alert logic.
That mixed state is where visibility gaps often appear. Security teams may believe monitoring is healthy because AMA is installed, while their SIEM rules still depend on older extension events. Conversely, operations teams may complete the Microsoft migration checklist without realizing that security engineering must update detection content separately.
The Control-Plane Blind Spot
What Vectra says changed
Vectra AI’s blog focuses on a specific behavioral shift: logging control for Azure VMs has moved from VM extension changes toward DCR and DCR association operations. The company says older detections often watched for extension writes, while AMA-era logging changes may instead appear as operations such as data collection rule writes, deletes, and association deletes.
The most concerning claim is Vectra’s testing result that an API call could delete a DCR and associated links using a parameter that removes associations, while the portal behaved more defensively by preventing deletion of a DCR with active associations. According to Vectra, logging stopped immediately across affected VMs, but only a single dataCollectionRules delete event was observed, with no individual association delete events.
That distinction matters because portal behavior is not always API behavior. Many defenders validate controls through the Azure portal because it is visible and convenient. Attackers and automated tools, however, often use APIs, command-line interfaces, SDKs, stolen service principals, or compromised automation identities.
Why “single action, many machines” is risky
Security teams often build detections around unit-level assumptions. If one VM’s logging extension changes, one alert should fire. If one machine stops sending logs, one heartbeat or ingestion alert should appear. DCR-based management changes that model because a shared rule can affect many machines.
This is the classic control-plane concentration problem. The cloud provider gives administrators a high-leverage abstraction, and that abstraction becomes valuable to adversaries. If an identity can tamper with collection rules or associations, the impact may extend beyond one server.
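One way to quantify that blast radius before an incident is to count how many resources each rule actually covers. The following is a hedged Azure Resource Graph sketch, not a definitive inventory method: it assumes DCR associations are exposed through the insightsresources table and that each association carries a dataCollectionRuleId property, both of which should be verified in your own tenant.

```kql
// Azure Resource Graph sketch (Resource Graph Explorer or the Azure CLI graph extension):
// count how many resources each data collection rule is associated with, to surface
// shared DCRs with a wide blast radius. Assumes associations appear in the
// insightsresources table; verify this in your tenant before relying on it.
insightsresources
| where type =~ "microsoft.insights/datacollectionruleassociations"
| extend dcrId = tolower(tostring(properties.dataCollectionRuleId))
| summarize AssociatedResources = count() by dcrId
| sort by AssociatedResources desc
```

Rules near the top of that list deserve the tightest RBAC and the most aggressive change alerting.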
Key risks include:
- Multi-VM telemetry disruption from a single compromised identity
- Reduced attribution if delayed signals point to managed identities rather than the actor
- SIEM blind spots when detections only watch VM extension events
- Delayed response if log loss is treated as an operations problem rather than defense evasion
- Compliance ambiguity when audit trails show fewer events than defenders expect
Detection Engineering Must Move With the Platform
Old signals are not enough
The central detection lesson is simple: the old signal may still exist, but it is no longer sufficient. Vectra says the VM extension signal it expected was delayed by roughly two to two-and-a-half hours and attributed to a Microsoft-managed identity rather than a directly actionable actor. If a SOC depends on that delayed signal, an attacker may gain valuable time.
For modern incident response, two hours is a long window. Credential theft, lateral movement, backup deletion, data staging, and exfiltration can all unfold inside that period. The more automated the adversary, the less useful delayed attribution becomes.
Detection engineering teams should treat AMA migration as a content migration, not just an infrastructure migration. That means reviewing every analytic rule, workbook, scheduled query, and SOAR playbook that assumes monitoring configuration changes are visible through VM extension telemetry.
What teams should monitor now
Vectra recommends expanded coverage for DCR deletion, DCR association deletion, and DCR write operations. Microsoft documentation also shows that removing a DCR association stops collection from that DCR, while removing the Azure Monitor Agent entirely disables monitoring of the client OS and workloads. Together, these facts suggest defenders need layered detection across rules, associations, agents, and ingestion health.
A practical coverage model should include the following, with a starter query sketch after the list:
- Microsoft.Insights/dataCollectionRules/delete
- Microsoft.Insights/dataCollectionRules/write
- Microsoft.Insights/dataCollectionRuleAssociations/delete
- Microsoft.Insights/dataCollectionRuleAssociations/write
- VM extension uninstall or modification events
- Unexpected drops in Log Analytics ingestion
- Azure Activity Log events from unusual identities or automation paths
- DCR changes outside maintenance windows
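As a starting point, the control-plane operations above can be watched directly in the Azure Activity Log. The following is a minimal Sentinel-style KQL sketch, assuming the AzureActivity table is connected to the workspace; operation-name casing and status values can vary across activity log schemas, so treat the filters as illustrative rather than definitive.

```kql
// Surface DCR and DCR-association writes and deletes from the Azure Activity Log.
// Substring matching is used because operation-name casing can vary; tune the
// lookback window and filters for your environment.
AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue contains "dataCollectionRule"          // rules and associations
| where OperationNameValue endswith "/delete" or OperationNameValue endswith "/write"
| where ActivityStatusValue in~ ("Success", "Succeeded")          // status naming differs by schema
| project TimeGenerated, OperationNameValue, Caller, CallerIpAddress,
          ResourceGroup, SubscriptionId, CorrelationId
| sort by TimeGenerated desc
```

On its own this only lists changes; the value comes from routing it into an analytic rule and pairing it with the ingestion and identity checks discussed later.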
Enterprise Impact: SOCs, Compliance, and Hybrid Estates
Why large Azure tenants are most exposed
Enterprises are most likely to feel this issue because they operate at the scale DCRs were designed to serve. A shared rule associated with dozens or hundreds of VMs is normal in a mature tenant. That is efficient for operations, but it increases the consequences of mistakes and malicious changes.
Large organizations also tend to have layered ownership. Cloud platform teams manage Azure policy, infrastructure teams manage VM fleets, SOC teams manage Sentinel or third-party SIEM content, and compliance teams rely on logs after the fact. When a monitoring architecture changes, the responsibility boundary can blur.
The issue is especially important for hybrid environments. Azure Arc brings non-Azure servers into Azure management, and AMA can apply to Arc-enabled servers as well. That gives defenders a unified path, but it also extends Azure control-plane risk into on-premises visibility.
Compliance pressure will intensify
For regulated sectors, logging is not merely a security feature. It is evidence. Financial services, healthcare, government, higher education, and critical infrastructure teams must prove that security events are captured, retained, and attributable.
If a logging disruption can occur with minimal telemetry, compliance teams may need to revisit control testing. They should verify that monitoring changes are logged in a way that supports audit requirements, not merely that collection appears enabled in a dashboard. The difference between configured and provably monitored is becoming more important.
Enterprises should consider a formal review process, with a validation query sketch after the list:
- Inventory all AMA-enabled VMs, VM scale sets, and Arc-enabled servers.
- Map each machine to its DCRs, associations, and Log Analytics destinations.
- Identify DCRs shared by high-value or regulated workloads.
- Test deletion, association removal, and write operations in a lab tenant.
- Confirm which events appear in Azure Activity Logs and SIEM pipelines.
- Update analytic rules and playbooks before applying changes in production.
- Run tabletop exercises where logging disruption is treated as active defense evasion.
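After the lab-tenant test described above, a simple query against the Activity Log can confirm exactly which control-plane events landed. A hedged sketch, assuming AzureActivity is flowing into the same Log Analytics workspace; anything you expected to see but do not find here is a candidate blind spot for your SIEM content.

```kql
// Confirm which Microsoft.Insights control-plane events actually appeared after a
// lab test of DCR deletion, association removal, and write operations.
AzureActivity
| where TimeGenerated > ago(4h)
| where ResourceProviderValue =~ "Microsoft.Insights"
| where OperationNameValue contains "dataCollectionRule"
| project TimeGenerated, OperationNameValue, ActivityStatusValue, Caller, CallerIpAddress, ResourceGroup
| sort by TimeGenerated desc
```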
Consumer and SMB Impact
Why home users are mostly indirect observers
Most Windows consumers will never touch Azure Monitor Agent or DCR associations. A home user running Windows 11, Microsoft Defender, and OneDrive is not directly affected by enterprise VM logging architecture. The issue lives in Azure infrastructure, not the local Windows telemetry settings familiar to consumer users.
Still, the story matters indirectly. Many consumer-facing services run on Azure or similar public cloud platforms. If enterprises suffer visibility gaps, the downstream effects can include slower incident response, longer outages, or delayed breach discovery.
Small businesses are in a more complicated position. Many SMBs now use Azure VMs, Microsoft Sentinel, managed service providers, and Microsoft 365 security tooling without maintaining a large internal security engineering team. They may assume the cloud provider and managed tools automatically cover architectural changes.
The managed service provider angle
For MSPs and MSSPs, this is a customer trust issue. If clients rely on managed monitoring, the provider must prove it has updated Azure detections for AMA-era control-plane operations. A service that only checks whether agents are installed may miss the more important question of whether the collection rules remain intact.
SMBs should ask providers direct questions:
- Do you monitor DCR and DCR association changes?
- Do you alert when Azure VM log ingestion suddenly drops?
- Do you distinguish portal actions from API actions?
- Do you track which identity changed logging configuration?
- Do you test monitoring tamper scenarios after Microsoft platform changes?
Competitive Implications for Cloud Security Vendors
Vectra’s positioning opportunity
Vectra AI is clearly using this issue to reinforce its broader message: visibility gaps are where modern attackers live. The company has emphasized network detection and response, cloud detection and response, identity threat detection, and signal clarity across hybrid environments. Its release notes also show ongoing Azure-focused detection work, including coverage for suspicious flow log deletion, DNS security policy changes, network security configuration changes, and risky deletions.
That matters commercially because cloud security buyers are overwhelmed by overlapping tools. Microsoft Sentinel, Defender for Cloud, CNAPP platforms, CDR vendors, NDR vendors, SIEM vendors, and observability platforms all claim to improve visibility. A concrete example of missed telemetry gives vendors a sharper story than generic “AI-powered security” language.
If Vectra can show that its detections caught or anticipated AMA-related blind spots, it gains a useful proof point. It can argue that cloud security requires continuous adaptation to provider architecture changes, not just static rule packs.
Pressure on Microsoft and rivals
Microsoft remains the platform owner, and platform owners face a different burden. Customers expect Azure to provide powerful APIs, scalable monitoring, and clear auditability. When portal and API behavior differ in ways that affect detection, Microsoft must communicate the difference clearly and improve telemetry where needed.
Vectra says it reported the behavior to Microsoft and that Microsoft acknowledged it, with additional VM-level logging for DCR association removal expected around April 21. In any live tenant, customers should still validate the behavior directly, because cloud rollouts can vary by region, subscription, feature state, and logging configuration.
Competitors will watch closely. CrowdStrike, Palo Alto Networks, Wiz, SentinelOne, Datadog, Splunk, Elastic, Rapid7, and Microsoft’s own security stack all compete around cloud detection, identity correlation, and control-plane visibility. The winners will be the vendors that can explain these architecture shifts in operational terms and ship reliable detection content quickly.
Technical Analysis: Why Attribution Gets Hard
Identity, automation, and delayed signals
Cloud attribution is difficult because many actions pass through layers of automation. A human may trigger a pipeline, a pipeline may use a service principal, a service principal may call an Azure API, and a downstream platform identity may perform related work. By the time a delayed signal appears, the visible actor may not be the same identity that initiated the risky change.
That matters in investigations. If an alert says a Microsoft-managed identity modified an extension hours after logs stopped, analysts still need to know whether a compromised administrator, service principal, CI/CD workflow, or automation account caused the original disruption. Without the initiating control-plane event, attribution becomes slower and weaker.
Defenders should enrich Azure Activity Log monitoring with identity and access context. A DCR delete event from a break-glass account during a documented maintenance window is different from the same event through an unfamiliar application ID at 2:00 a.m. from a new IP range.
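Some of that context can be approximated directly in KQL. The sketch below, a minimal example assuming only the AzureActivity table, flags DCR deletes from caller-and-IP pairs not seen in the prior month and adds the UTC hour for triage; a richer version would join Entra ID sign-in logs and privileged-role activation data where available.

```kql
// Flag DCR or association deletes where the caller IP has not been seen for that
// caller in the prior 30 days. A sketch only; enrich with sign-in, PIM, and
// maintenance-window data before alerting.
let knownPairs = AzureActivity
    | where TimeGenerated between (ago(31d) .. ago(1d))
    | where isnotempty(CallerIpAddress)
    | summarize by Caller, CallerIpAddress;
AzureActivity
| where TimeGenerated > ago(1d)
| where OperationNameValue contains "dataCollectionRule"
| where OperationNameValue endswith "/delete"
| join kind=leftanti knownPairs on Caller, CallerIpAddress
| extend HourOfDayUtc = hourofday(TimeGenerated)
| project TimeGenerated, HourOfDayUtc, Caller, CallerIpAddress, OperationNameValue, ResourceGroup
```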
Telemetry loss as an attack stage
Security teams should classify logging disruption as defense evasion, not just configuration drift. Attackers often disable logging, weaken retention, delete backups, modify network controls, or change diagnostic settings before moving into impact or exfiltration. The cloud makes those actions faster because APIs can change many resources at once.
A mature response playbook should treat sudden telemetry loss as suspicious until proven benign. That does not mean every DCR edit is malicious. It means the SOC should correlate DCR changes with identity anomalies, privileged role activation, new tokens, impossible travel, unusual CLI usage, and changes to backup or network resources.
Useful triage questions include the following, with a scoping query sketch after the list:
- Did the actor recently receive new privileges?
- Was the action performed through portal, CLI, SDK, or automation?
- How many resources lost collection after the change?
- Were high-value workloads affected first?
- Did backup, firewall, identity, or storage policies change nearby?
- Did the same actor perform reconnaissance before modifying logging?
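The question of how many resources lost collection can be scoped quickly once a suspect change time is known. A minimal sketch, assuming SecurityEvent is the workload table of interest and using a hypothetical placeholder timestamp that an analyst would replace with the actual DCR change time:

```kql
// Computers that reported SecurityEvent in the six hours before a suspect DCR
// change but not in the six hours after it. changeTime is a hypothetical
// placeholder; set it to the timestamp of the suspect control-plane event.
let changeTime = datetime(2026-04-01T02:00:00Z);
let before = SecurityEvent
    | where TimeGenerated between ((changeTime - 6h) .. changeTime)
    | distinct Computer;
let after = SecurityEvent
    | where TimeGenerated between (changeTime .. (changeTime + 6h))
    | distinct Computer;
before
| join kind=leftanti after on Computer
| sort by Computer asc
```

The same pattern can be repeated against Syslog or custom tables to see which workload groups were affected first.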
Practical Checklist for Azure Administrators
Immediate steps for Windows and Azure teams
Administrators should start by finding where their monitoring assumptions are stale. If an organization recently migrated from WAD, LAD, MMA, or OMS-era collection to AMA, it should not assume old detection content migrated automatically. The most urgent work is mapping dependencies.
Begin with the basics: enumerate DCRs, associations, VMs, VM scale sets, Arc-enabled servers, and destinations. Then compare that map to SIEM rules. If the SIEM only watches extension writes or agent uninstall events, it is incomplete for AMA-era monitoring. A Resource Graph sketch of the DCR inventory step follows below.
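The inventory step can be partially automated with Azure Resource Graph. The sketch below covers the DCR half of the map; associations can then be enumerated per rule through the portal, CLI, or the association query shown earlier. The layout of the destinations property can vary by DCR kind, so treat the projection as illustrative and verify it against your own rules.

```kql
// Azure Resource Graph sketch: list data collection rules with their resource
// group, location, kind, and configured destinations (for example Log Analytics
// workspaces). Verify the properties layout against your own DCRs.
resources
| where type =~ "microsoft.insights/datacollectionrules"
| project name, resourceGroup, location, kind,
          destinations = properties.destinations
| sort by name asc
```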
Recommended actions:
- Inventory all Azure Monitor Agent deployments
- List all DCRs and DCR associations
- Identify shared DCRs with broad VM coverage
- Review Azure RBAC permissions for DCR write and delete actions
- Alert on DCR deletion and association deletion
- Alert on sudden log ingestion drops by workload group (a baseline sketch follows this list)
- Validate portal and API behavior in a non-production tenant
- Document expected maintenance identities and automation accounts
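For the ingestion-drop action, a crude but useful pattern is to compare current hourly volume against a trailing baseline. A minimal sketch, assuming SecurityEvent as the watched table; the table, window, and threshold are assumptions to tune per environment.

```kql
// Flag hours in the last day where SecurityEvent volume fell below 30 percent of
// the average hourly volume from the prior week. Tune table, window, and threshold.
let baseline = toscalar(
    SecurityEvent
    | where TimeGenerated between (ago(8d) .. ago(1d))
    | summarize HourlyCount = count() by bin(TimeGenerated, 1h)
    | summarize avg(HourlyCount));
SecurityEvent
| where TimeGenerated > ago(1d)
| summarize HourlyCount = count() by Hour = bin(TimeGenerated, 1h)
| where HourlyCount < 0.3 * baseline
| sort by Hour asc
```

Low-volume hours flagged here should be reviewed alongside the DCR change events surfaced from AzureActivity.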
KQL and SIEM thinking, not just KQL snippets
This issue is bigger than a single KQL query. A useful analytic should combine Azure Activity events with resource inventory, identity data, time windows, and ingestion trends. Static matching on one operation name is better than nothing, but attackers often chain benign-looking operations.
Teams should design detections around behavior, with one example sketched after the list:
- Logging configuration changed outside an approved window
- A DCR with many associations was deleted or modified
- A rarely used identity changed monitoring policy
- A DCR change preceded a drop in SecurityEvent, Syslog, or custom log volume
- The actor also changed backup, network, or identity controls
- The event came through API or automation rather than the portal
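Several of those behaviors can be expressed directly against the Activity Log. The sketch below covers the rarely-used-identity case, again assuming only the AzureActivity table; correlating it with the ingestion baseline shown earlier and with backup or network changes would complete the behavioral picture.

```kql
// DCR or association changes made by identities that have not touched monitoring
// configuration in the prior 30 days. A sketch only; combine with maintenance-window
// and change-ticket context before turning it into an alert.
let historicalCallers = AzureActivity
    | where TimeGenerated between (ago(31d) .. ago(1d))
    | where OperationNameValue contains "dataCollectionRule"
    | distinct Caller;
AzureActivity
| where TimeGenerated > ago(1d)
| where OperationNameValue contains "dataCollectionRule"
| where OperationNameValue endswith "/delete" or OperationNameValue endswith "/write"
| where Caller !in (historicalCallers)
| project TimeGenerated, Caller, CallerIpAddress, OperationNameValue, ResourceGroup, SubscriptionId
```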
Strengths and Opportunities
The AMA and DCR model still offers meaningful operational advantages, and the right lesson is not to reject Microsoft’s newer monitoring architecture. The right lesson is to align security operations with it. Centralized collection policy can make Azure monitoring more consistent, more scalable, and more automation-friendly when paired with strong detection engineering and least-privilege governance.
- DCRs simplify fleet-wide monitoring by separating collection policy from agent installation.
- AMA supports modern Azure operations across VMs, scale sets, and Arc-enabled servers.
- Infrastructure-as-code workflows improve repeatability when monitoring configuration is versioned and reviewed.
- Security teams can build richer detections by watching control-plane events directly.
- Vendors can differentiate through fast content updates when platform telemetry changes.
- Enterprises can reduce drift by standardizing data collection rather than hand-configuring agents.
- Compliance teams gain a forcing function to test logging controls more rigorously.
Risks and Concerns
The risk is not that Azure Monitor Agent is inherently flawed. The risk is that many organizations may run new architecture with old assumptions, leaving defenders dependent on delayed, incomplete, or poorly attributed signals. In security, a visibility gap is often most dangerous when teams do not know it exists.
- Legacy detections may miss DCR-based logging disruption if they only watch VM extension activity.
- A shared DCR can increase blast radius when modified or deleted.
- API behavior may differ from portal behavior, complicating validation and training.
- Delayed extension-level signals can weaken response timelines during active intrusion.
- Attribution may suffer when downstream activity appears under platform-managed identities.
- Mixed migration states can confuse ownership between cloud, infrastructure, and SOC teams.
- Overbroad RBAC permissions can turn monitoring policy into an attacker target.
Looking Ahead
What to watch after the initial warning
The next phase will depend on how Microsoft improves visibility, how quickly customers update detections, and whether security vendors can demonstrate practical coverage rather than marketing claims. Vectra’s warning is valuable because it turns a platform migration into a concrete detection-engineering problem. But customers still need to verify the behavior in their own tenants, especially after Microsoft’s expected telemetry updates.
Azure administrators should expect more of these issues as cloud platforms abstract away local configuration. The more management shifts into control planes, the more security monitoring must follow. Windows Server veterans who once watched event logs and local agents now need to understand Azure Resource Manager operations, service principals, managed identities, and API-level behavior.
Watch these areas closely:
- Microsoft guidance updates for DCR deletion, association removal, and activity logging
- Sentinel and Defender content updates that account for AMA-era monitoring disruption
- Third-party CDR and NDR detections focused on Azure control-plane tampering
- Customer reports from large tenants validating whether telemetry improved after Microsoft changes
- Regulatory expectations around provable cloud logging continuity and attribution
For WindowsForum readers managing Azure estates, this is the moment to audit assumptions before an attacker audits them for you. Azure Monitor Agent and Data Collection Rules are now central security dependencies, not background plumbing. The organizations that adapt their detections, permissions, and playbooks fastest will gain the most from Microsoft’s modern monitoring model while avoiding the blind spots that platform transitions can create.
Source: TipRanks Vectra AI Flags Potential Security Gaps From Azure Logging Changes - TipRanks.com