OpenText Core Threat Detection and Response Expands with Microsoft Integrations on Azure

OpenText has expanded the availability of its AI-powered security offering, OpenText Core Threat Detection and Response, with deeper, native integrations into the Microsoft security stack — notably Microsoft Defender for Endpoint, Microsoft Entra ID, and Microsoft Security Copilot — and has made the solution more accessible to Microsoft-centric customers through Azure and the Azure Marketplace.

Background

OpenText first positioned Core Threat Detection and Response as the center of its next-generation OpenText Cybersecurity Cloud earlier in 2025, marketing the product as a cloud-first, composable open XDR platform that leverages unsupervised machine learning and behavioral analytics to detect insider threats, credential misuse, and advanced hands‑on‑keyboard attacks. The vendor emphasized Azure as the primary delivery plane and built pre‑configured integration kits to ingest telemetry from Defender for Endpoint and Entra ID while feeding contextual signals into Security Copilot workflows.
This October update formalizes broader availability and reiterates that the product will be available via the Azure Marketplace, reducing procurement friction for enterprises already standardized on Microsoft cloud tooling. OpenText positions the integration as identity‑centric XDR: fusing endpoint telemetry with identity signals to raise the signal‑to‑noise ratio for SOC teams.

What OpenText announced — the essentials

  • The Core message: OpenText Core Threat Detection and Response is now more widely available and deeply integrated with Microsoft Defender for Endpoint, Microsoft Entra ID (identity), and Microsoft Security Copilot (Copilot for Security).
  • Delivery model: the solution is cloud‑hosted on Microsoft Azure and offered through the Azure Marketplace, enabling faster deployment and simpler procurement for Azure customers.
  • Key capabilities emphasized by OpenText: behavior‑based indicators, identity context fusion, natural‑language summaries, guided investigations, and playbook‑driven automated response — all aimed at cutting alert noise and reducing mean time to respond (MTTR).
These are vendor announcements and product positioning intended for enterprise security teams that have already invested heavily in Microsoft security telemetry as their primary signal source.

Integration deep dive: Defender, Entra ID, and Security Copilot

Microsoft Defender for Endpoint: endpoint telemetry as the backbone

OpenText ingests Defender for Endpoint telemetry to capture process events, file behaviors, endpoint alerts, and other host‑level signals. By adding behavioral baselining and unsupervised anomaly detection on top of Defender data, OpenText aims to detect subtle, credential‑based or slow-moving attacks that signature- or IOC-based solutions can miss. This approach is common in XDR architectures and is explicitly called out by OpenText as a primary data source.
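OpenText does not publish the internals of its ingestion pipeline, but the pattern described here, pulling Defender for Endpoint telemetry through Microsoft’s native APIs and layering a behavioral baseline on top, can be illustrated with a minimal sketch. The MSAL token flow and the advanced‑hunting endpoint below are documented Microsoft APIs; the KQL query, the per‑device baseline, the z‑score threshold, and the placeholder credentials are assumptions for illustration, not OpenText’s implementation.

```python
"""Illustrative sketch only: pull Defender for Endpoint telemetry via the
advanced hunting API and flag devices whose process activity deviates from
their own history. Not OpenText's pipeline; query, baseline, and threshold
are simplifying assumptions."""
import statistics

import msal      # pip install msal
import requests  # pip install requests

TENANT_ID = "<tenant-id>"        # placeholder
CLIENT_ID = "<app-client-id>"    # placeholder
CLIENT_SECRET = "<app-secret>"   # placeholder


def defender_token() -> str:
    """Acquire an app-only token for the Defender for Endpoint API."""
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    result = app.acquire_token_for_client(
        scopes=["https://api.securitycenter.microsoft.com/.default"]
    )
    return result["access_token"]


def process_counts_last_day(token: str) -> dict[str, int]:
    """Advanced-hunting query: process launches per device over the last 24 hours."""
    query = (
        "DeviceProcessEvents"
        " | where Timestamp > ago(1d)"
        " | summarize ProcessCount = count() by DeviceName"
    )
    resp = requests.post(
        "https://api.securitycenter.microsoft.com/api/advancedqueries/run",
        headers={"Authorization": f"Bearer {token}"},
        json={"Query": query},
        timeout=60,
    )
    resp.raise_for_status()
    return {row["DeviceName"]: row["ProcessCount"] for row in resp.json()["Results"]}


def flag_outliers(today: dict[str, int], history: dict[str, list[int]], z: float = 3.0):
    """Flag devices whose daily count sits more than `z` standard deviations above their history."""
    flagged = []
    for device, count in today.items():
        past = history.get(device, [])
        if len(past) < 7:  # not enough history to form a baseline yet
            continue
        mean, stdev = statistics.mean(past), statistics.pstdev(past) or 1.0
        if (count - mean) / stdev > z:
            flagged.append((device, count, round(mean, 1)))
    return flagged
```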

Microsoft Entra ID: identity context for higher‑fidelity detections

The platform fuses Entra ID (formerly Azure AD) telemetry — sign‑in events, risky sign‑ins, conditional access signals, and device identity mappings — with endpoint behavior to build identity‑aware incidents. By correlating who did what (identity) with what happened on which machine (endpoint), the system prioritizes incidents that show both behavioral deviation and risky access patterns, which are typical precursors to account takeover and insider data misuse. OpenText explicitly markets the identity fusion as central to catching credential abuse and lateral movement.
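How OpenText joins these signals internally is not public, but the underlying idea, prioritizing accounts that show both identity risk and endpoint alerts, can be sketched against Microsoft’s documented REST endpoints. The Graph risk‑detection and Defender alert endpoints below exist; the tokens are assumed to be acquired as in the previous sketch, and the user‑name join and incident shape are simplifying assumptions rather than OpenText’s correlation logic.

```python
"""Illustrative identity-endpoint fusion: correlate Entra ID risk detections
(Microsoft Graph) with Defender for Endpoint alerts for the same account.
Join logic and scoring are assumptions, not OpenText's engine."""
import requests


def entra_risk_detections(graph_token: str) -> list[dict]:
    """Recent Entra ID Protection risk detections (risky sign-ins, leaked credentials, ...)."""
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/identityProtection/riskDetections",
        headers={"Authorization": f"Bearer {graph_token}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["value"]


def defender_alerts(mde_token: str) -> list[dict]:
    """Current Defender for Endpoint alerts."""
    resp = requests.get(
        "https://api.securitycenter.microsoft.com/api/alerts",
        headers={"Authorization": f"Bearer {mde_token}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["value"]


def fuse_by_user(risks: list[dict], alerts: list[dict]) -> list[dict]:
    """Pair accounts that show BOTH identity risk and endpoint alerts.
    Mapping a UPN to an account name by its local part is a simplification."""
    risky_users = {
        (r.get("userPrincipalName") or "").split("@")[0].lower(): r for r in risks
    }
    incidents = []
    for alert in alerts:
        user = ((alert.get("relatedUser") or {}).get("userName") or "").lower()
        if user and user in risky_users:
            incidents.append({
                "user": user,
                "identity_risk": risky_users[user].get("riskLevel"),
                "endpoint_alert": alert.get("title"),
                "device": alert.get("computerDnsName"),
                "severity": alert.get("severity"),
            })
    return incidents
```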

Microsoft Security Copilot: augmenting analyst workflows

OpenText states that it feeds behavior‑based indicators, identity correlation, and incident narratives into Microsoft Security Copilot, which is intended to provide analysts with concise summaries, suggested investigation steps, and recommended containment actions. Microsoft has been expanding Security Copilot’s agentic and partner‑built capabilities, and OpenText’s integration is designed to make Copilot outputs more context‑rich and actionable by pre‑packaging correlated evidence and playbooks for the assistant to consume.
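OpenText has not published the payload it hands to Security Copilot, so the following is a purely hypothetical illustration of what pre‑packaged, correlated evidence might look like as a structured incident narrative. Every field name is invented for this example; it does not reflect a documented Copilot plugin interface or OpenText schema.

```python
"""Hypothetical shape of a pre-packaged correlated incident for an assistant
to consume. All field names are invented for illustration."""
from dataclasses import asdict, dataclass, field
import json


@dataclass
class CorrelatedIncident:
    incident_id: str
    user: str
    device: str
    identity_signals: list[str] = field(default_factory=list)   # e.g. risky sign-in, impossible travel
    endpoint_signals: list[str] = field(default_factory=list)   # e.g. credential-access behaviour
    narrative: str = ""                                         # plain-language summary for the analyst
    suggested_playbooks: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


incident = CorrelatedIncident(
    incident_id="INC-0042",
    user="alice",
    device="FIN-LAPTOP-07",
    identity_signals=["high-risk sign-in from unfamiliar location"],
    endpoint_signals=["credential dumping behaviour flagged by Defender"],
    narrative="High-risk sign-in followed by credential-access activity on the same device.",
    suggested_playbooks=["isolate-device", "revoke-sessions"],
)
print(incident.to_json())
```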

The technology claims — what’s verifiable and what to treat cautiously

OpenText’s public materials make several technical claims that buyers will want to validate during evaluation:
  • Claim: the product uses hundreds of AI algorithms and unsupervised machine learning to continuously learn an organization’s “unique normal.” This phrasing appears repeatedly in vendor collateral and press materials. It describes a multi‑model detection stack (anomaly detection, entity scoring, correlation/story engines, and natural language explanation layers), but the exact model architectures, training data, and detection performance metrics are proprietary and not externally audited in the announcement. Treat the “hundreds of algorithms” claim as marketing shorthand for a complex model ensemble rather than a testable performance metric.
  • Claim: agentless, fast onboarding with a 30‑day backfill to produce actionable detections within hours. OpenText product pages and blog posts describe SaaS onboarding via native Microsoft APIs and a typical backfill workflow; however, real‑world time‑to‑value depends on tenant size, telemetry volume, API throttles, and customer configuration. The vendor’s stated deployment speed is plausible but should be validated in pilot deployments (a minimal backfill sketch appears below).
  • Claim: reduction of alert noise and higher‑confidence detections when fused with Entra signals. This is logically compelling (identity context often reduces false positives) and consistent with industry practice, but the magnitude of alert reduction will vary widely by customer and must be measured in controlled pilots. OpenText cites customer interest in insider threat and credential misuse cases; independent validation is required to quantify real gains.
In short: the high‑level technical claims are consistent with modern XDR patterns and are corroborated across OpenText announcements and product pages, but many operational and performance statements are vendor‑centric and should be piloted for verification.
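As one concrete illustration of the onboarding claim: an agentless, API‑driven backfill largely comes down to paging historical logs out of Microsoft’s native APIs. The sketch below pulls 30 days of Entra ID sign‑in events from Microsoft Graph; the endpoint, $filter syntax, and nextLink paging are documented Graph behavior, while the page size and lack of throttling or error handling are simplifications, and how much history is actually available depends on the tenant’s log retention and licensing.

```python
"""Illustrative sketch of an agentless 30-day backfill: page Entra ID sign-in
logs out of Microsoft Graph. Page size and error handling are simplified."""
from datetime import datetime, timedelta, timezone

import requests


def backfill_signins(graph_token: str, days: int = 30) -> list[dict]:
    """Page through auditLogs/signIns created within the last `days` days."""
    since = (datetime.now(timezone.utc) - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    headers = {"Authorization": f"Bearer {graph_token}"}
    url = "https://graph.microsoft.com/v1.0/auditLogs/signIns"
    params = {"$filter": f"createdDateTime ge {since}", "$top": 500}
    events: list[dict] = []
    while url:
        resp = requests.get(url, headers=headers, params=params, timeout=60)
        resp.raise_for_status()
        payload = resp.json()
        events.extend(payload.get("value", []))
        url = payload.get("@odata.nextLink")   # Graph returns this link until the result set is exhausted
        params = None                          # the nextLink already embeds the query
    return events
```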

Independent corroboration and marketplace availability

Multiple independent sources corroborate key elements of the announcement:
  • OpenText’s corporate press release and investor/PR channels describe the Microsoft integrations and product availability.
  • The OpenText product page and blog confirm the technical design (behavioral analytics, unsupervised ML, identity fusion) and the Azure delivery model.
  • Microsoft’s Security blog and the broader Security Copilot roadmap show that Microsoft has been enabling partner agents and integrations to enrich Copilot workflows — a context that makes OpenText’s integration both feasible and strategically aligned.
  • The Azure Marketplace and Microsoft community listings highlight the platform’s availability on the Marketplace, simplifying procurement and governance for enterprises on Azure.
Taken together, these sources support the headline claims: the offering exists, it is aimed at Microsoft‑centric enterprises, it is available via Azure channels, and it targets identity‑centric detection scenarios.

Why this matters for Microsoft‑centric enterprises

  • Identity‑centric detection addresses one of today’s highest‑value threat classes: credential compromise and insider risk. By correlating Entra sign‑ins and device identity with endpoint behaviors, analysts can triangulate higher‑confidence incidents that would otherwise be lost in alerts from multiple siloed tools.
  • Plug‑and‑play Azure delivery and Azure Marketplace listing reduce procurement cycles and align technical controls with existing cloud governance, data residency, and network architecture. For organizations that already centralize telemetry with Microsoft tools, an integrated XDR that speaks native APIs removes a major integration burden.
  • Feeding pre‑correlated evidence into Microsoft Security Copilot can shorten analysis lifecycles if Copilot’s recommendations are accurate and contextually complete. In well‑tuned deployments, Copilot augmentation can reduce manual evidence collection and accelerate containment actions. However, a Copilot‑driven workflow also raises new governance needs (see Risks section).

Risks, limitations, and operational cautions

No single vendor product eliminates the need for disciplined security governance. Consider these critical risk areas before broad adoption:
  • Model explainability and investigation integrity: AI‑driven detections depend on model inputs and configuration. SOC teams must have transparent incident narratives, clear evidence trails, and the ability to export raw telemetry for independent verification. Relying only on summarized reasoning (LLM outputs) without traceable artifacts risks missed context and auditability gaps.
  • False positives and false negatives: vendor claims of “reduced alert noise” are plausible but variable. Over‑trusting statistical prioritization without baseline validation can create dangerous blind spots. Pilot programs should measure true positive rates, false positive rates, and detection latency for representative threat scenarios.
  • Data residency, compliance, and telemetry routing: routing sensitive Entra or Defender telemetry into a third‑party cloud service introduces regulatory and compliance questions (GDPR, HIPAA, sectoral rules). Customers must confirm contractual data residency, encryption‑in‑transit and at‑rest, log retention policies, and Microsoft‑to‑vendor data flows before enabling ingestion.
  • Copilot risks: Security Copilot is powerful but can hallucinate or produce recommendations that require human oversight. When Copilot actions are tied to playbooks that can trigger automated responses, strict guardrails, approvals, and rollback procedures are essential to avoid unintended disruptions (a minimal guardrail sketch follows this list).
  • Vendor lock‑in and integration scope: OpenText’s Threat Integration Studio promotes multi‑vendor ingestion, but deep value is strongest when Defender and Entra are primary telemetry sources. Organizations should evaluate how the platform coexists with existing SIEM/SOAR investments and whether integration costs offset projected ROI.
  • Claims without independent benchmarks: statements like “hundreds of AI algorithms” and “actionable detections in hours” are marketing language unless validated by independent testing or customer case studies. Treat such claims as starting points for technical validation rather than guarantees.
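Neither OpenText nor Microsoft publishes a single standard approval mechanism for Copilot‑triggered playbooks, so the guardrail called for above is sketched generically below: recommendations stay advisory by default, and high‑impact actions require explicit human approval before any automated response runs. The action names, risk tiers, and approval flow are illustrative assumptions, not a vendor feature.

```python
"""Minimal guardrail sketch: AI-recommended actions are advisory by default,
and high-impact actions require human approval before a playbook runs."""
from dataclasses import dataclass
from typing import Callable

HIGH_IMPACT = {"isolate-device", "disable-account", "revoke-sessions"}  # illustrative tiers


@dataclass
class RecommendedAction:
    incident_id: str
    action: str
    rationale: str


def execute_with_guardrail(
    rec: RecommendedAction,
    run_playbook: Callable[[str, str], None],
    approver: Callable[[RecommendedAction], bool],
) -> str:
    """Run low-impact actions automatically; gate high-impact ones on approval."""
    if rec.action in HIGH_IMPACT:
        if not approver(rec):                  # human-in-the-loop decision
            return f"{rec.incident_id}: {rec.action} held for analyst review"
        run_playbook(rec.incident_id, rec.action)
        return f"{rec.incident_id}: {rec.action} executed after approval"
    run_playbook(rec.incident_id, rec.action)
    return f"{rec.incident_id}: {rec.action} executed (low impact)"
```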

Practical rollout recommendations — a pilot checklist

A disciplined pilot is the best way to validate OpenText Core Threat Detection and Response in your environment. Follow these sequential steps:
  • Scope and objectives: define clear pilot objectives (e.g., reduce false positives by X%, detect simulated credential abuse within Y minutes, integrate with an existing incident response playbook).
  • Data mapping: identify required Defender for Endpoint and Entra ID tenants, necessary API permissions, data residency constraints, and legal approvals for telemetry sharing.
  • Baseline measurement: capture current SOC metrics (alerts per day, mean time to investigate (MTTI), false positive rate) to compare pilot outcomes; a minimal metric sketch appears after this checklist.
  • Controlled ingest: enable ingestion for a limited set of tenants or organizational units and backfill the agreed historical window; collect model outputs and raw evidence.
  • Validate detections: run a red team or scripted scenarios (credential theft, lateral movement, exfil simulation) and measure detection time and quality.
  • Copilot evaluation: run analyst‑assisted playbooks in a review mode before allowing automated actions; evaluate recommended actions for accuracy and suitability.
  • Governance and rollback: confirm playbook approvals, operational SLAs, and a rollback plan for any automated responses.
  • Measure and iterate: compare pilot metrics against baseline, document gaps, and iterate on connector tuning, playbooks, and analyst workflows.
This pilot sequence protects operations while producing measurable evidence of value.
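For the baseline‑measurement step, the metrics themselves are simple to compute once alert data can be exported. The sketch below derives alerts per day, mean time to investigate, and false‑positive rate from a generic alert export; the field names (“created”, “triaged”, “verdict”) are assumptions about that export, not a Defender or OpenText schema.

```python
"""Minimal baseline-metrics sketch: alerts per day, MTTI, and false-positive
rate from an exported alert list. Field names are assumptions."""
from datetime import datetime
from statistics import mean


def baseline_metrics(alerts: list[dict]) -> dict:
    """Each alert is assumed to carry ISO timestamps 'created'/'triaged'
    and a 'verdict' of 'true_positive' or 'false_positive'."""
    if not alerts:
        return {}
    created = [datetime.fromisoformat(a["created"]) for a in alerts]
    span_days = max((max(created) - min(created)).days, 1)
    tti_minutes = [
        (datetime.fromisoformat(a["triaged"]) - datetime.fromisoformat(a["created"])).total_seconds() / 60
        for a in alerts
        if a.get("triaged")
    ]
    false_positives = sum(1 for a in alerts if a.get("verdict") == "false_positive")
    return {
        "alerts_per_day": round(len(alerts) / span_days, 1),
        "mtti_minutes": round(mean(tti_minutes), 1) if tti_minutes else None,
        "false_positive_rate": round(false_positives / len(alerts), 3),
    }
```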

Governance, model stewardship, and SOC readiness

To realize the most value, organizations should pair technical pilots with governance and operational controls:
  • Model stewardship: schedule regular model reviews, monitor concept drift, and maintain a test set of labeled incidents to detect degradation (a minimal check is sketched after this list).
  • Explainability: require the vendor to provide incident timelines, event IDs, and raw telemetry export for every prioritized detection.
  • Human‑in‑the‑loop controls: ensure Copilot recommendations are advisory by default and require human approval for high‑impact automated playbooks.
  • Logging and audit trails: preserve immutable records of model outputs, analyst decisions, and automated responses for incident reviews and compliance audits.
  • Cross‑tool choreography: map OpenText playbooks to existing SOAR runbooks and update service‑level agreements (SLAs) and escalation matrices accordingly.
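The model‑stewardship control above can be operationalized without access to the vendor’s models: periodically replay a labeled incident test set against current detections and alarm when precision or recall falls below an agreed floor. The sketch below assumes the SOC maintains the labels and exports the flagged incident IDs itself; the thresholds are illustrative.

```python
"""Minimal stewardship check: compare current detections against a labeled
incident test set and flag degradation. Thresholds are illustrative."""


def stewardship_check(labels: dict[str, bool], detected: set[str],
                      min_precision: float = 0.7, min_recall: float = 0.8) -> dict:
    """`labels` maps incident_id -> True if the incident is a confirmed threat;
    `detected` is the set of incident_ids the platform currently flags."""
    tp = sum(1 for i, is_threat in labels.items() if is_threat and i in detected)
    fp = sum(1 for i, is_threat in labels.items() if not is_threat and i in detected)
    fn = sum(1 for i, is_threat in labels.items() if is_threat and i not in detected)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {
        "precision": round(precision, 2),
        "recall": round(recall, 2),
        "degraded": precision < min_precision or recall < min_recall,  # triggers a model review
    }
```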

Pricing, availability, and procurement notes

OpenText states that Core Threat Detection and Response is available now as part of the OpenText Cybersecurity Cloud and has been listed on the Azure Marketplace to streamline procurement for Azure customers; earlier releases ran as limited or early access programs, and the company is now expanding availability. Pricing models for XDR services typically include subscription tiers, data ingestion volumes, and optional professional services for onboarding — all items that must be negotiated and validated during procurement. Expect enterprise licensing conversations to cover telemetry volume, retention, support SLAs, and professional services for pilot and deployment.

Bottom line — who should care, and why

For organizations that have standardized on Microsoft Defender for Endpoint and Microsoft Entra ID, OpenText Core Threat Detection and Response offers a pragmatic path to add behavioral analytics, identity‑aware correlation, and AI‑assisted analyst workflows without rebuilding telemetry pipelines. The native Azure delivery and Marketplace listing simplify procurement and governance for Microsoft‑centric enterprises, while prebuilt connector kits lower integration effort.
However, the value will be realized only through careful pilots, measurable validation of detection efficacy, and robust governance around AI‑augmented workflows. Vendor marketing claims around the number of algorithms and alert reduction require independent verification in production‑like conditions; security leaders should insist on measurable outcomes (true positive rate, false positive reduction, MTTI/MTTR improvements) before scaling.

Final recommendations for security leaders

  • Treat this as an opportunity to modernize identity‑aware detection, not as a plug‑and‑play cure for all SOC problems.
  • Run a scoped pilot with defined success metrics and real adversary emulation scenarios.
  • Insist on explainable detections and ready access to raw telemetry for audits and investigations.
  • Establish strict guardrails for Copilot‑driven automation; begin in advisory mode.
  • Negotiate contractual protections for data residency, access controls, and model governance.
OpenText’s announcement is a notable example of the market trend toward identity‑centric, AI‑assisted XDR on Azure. The technical direction — fusing Defender endpoint telemetry with Entra identity signals and augmenting analyst workflows via Security Copilot — is compelling for Microsoft‑centric environments, but the leap from product claims to SOC value will depend on rigorous validation, governance, and ongoing oversight.

Source: The Globe and Mail OpenText Expands Availability of Core Threat Detection and Response with Deep Microsoft Integrations