New Relic MCP Server Brings Agentic Observability to Azure SRE and Foundry

New Relic’s latest push to embed observability directly into Azure’s agentic surfaces marks a decisive step toward making AI-driven agents practical, auditable, and actionable inside developer and SRE workflows. It promises shorter mean time to resolution (MTTR) while raising new governance and cost questions for production deployments.

Blueprint-style diagram of an MCP Server connected to Azure and New Relic components, telemetry, audits, and automation flow.

Background​

The industry has shifted rapidly from passive monitoring to agentic observability — observability data that not only informs humans but is produced and consumed by AI agents that can recommend or (with controls) execute actions. New Relic’s announcement bundles two connected moves: the public preview of the New Relic AI Model Context Protocol (MCP) Server and a set of integrations that surface New Relic telemetry into Microsoft Azure’s agentic products, notably the Azure SRE Agent and Microsoft Foundry. These integrations aim to let agents and portal-native assistants query high-fidelity telemetry (metrics, traces, logs) and receive structured, time-bound diagnostic payloads for root-cause analysis and remediation suggestions.

Why this matters now: Gartner forecasts overall global AI spending will top $2.02 trillion in 2026, accelerating adoption of agentic and generative AI across enterprise stacks and increasing demand for integrated operational controls. Against that backdrop, cloud providers and observability vendors are racing to turn insight into verifiable action within cloud control planes.

What New Relic shipped (the essentials)​

  • MCP Server (Public Preview) — A protocol bridge that exposes New Relic observability as an MCP-compatible toolset so LLM-based agents and orchestration frameworks can:
      • Convert plain-language requests into NRQL queries and return structured diagnostic payloads.
      • Retrieve causal chains, dependency maps, and traces to speed RCA.
      • Package remediation hints and runbook steps as auditable outputs.
    New Relic published setup and onboarding docs for the public preview (early November 2025).
  • Azure SRE Agent integration — When alerts fire or deployments occur in New Relic, Azure’s SRE Agent can call the New Relic MCP Server to fetch evidence, ranked probable causes, and runbook suggestions so portal-native recommendations appear where SREs already operate. This is positioned to reduce context switching and accelerate MTTR.
  • Microsoft Foundry (Foundry) support — Foundry-hosted agents, Visual Studio/Copilot surfaces, and GitHub-aligned workflows can surface New Relic telemetry for developers building and managing AI apps and agents. New Relic Monitoring for Microsoft Foundry ingests Azure logs and metrics into New Relic, delivering a nuanced view of agent or app performance.
  • Azure Autodiscovery / Dependency mapping — A tailored discovery solution for Azure that overlays configuration changes onto performance graphs and dependency maps to help platform engineers correlate infra changes with telemetry and find blind spots faster.
  • New Relic Monitoring for SAP on Microsoft Marketplace — An agentless connector for SAP and non‑SAP systems that aims to minimize business-process interruptions and is being made available through Microsoft Marketplace channels for Azure customers.
The user-facing promise: fewer console hops, more context inside the tools teams already use, and staged automation (recommend → gate/approve → act) with audit trails and identity-bound approvals.

How the integration works (technical overview)​

MCP as the interoperability layer​

The MCP Server presents New Relic as a structured tool manifest for MCP-capable agents. Agents make MCP calls to request context (for example: “Why did checkout latency spike after the last deployment?”). The MCP Server translates the request into NRQL or other observability queries, returns a time‑bounded diagnostic bundle (traces, implicated services, confidence-scored causes), and packages recommended runbook steps for human review or gated execution. This reduces brittle point-to-point integrations between each agent and each monitoring system.
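In code terms, that translation step might look like the following sketch. The function and field names (`handle_mcp_request`, `DiagnosticBundle`) are illustrative assumptions rather than New Relic’s published MCP schema; only the NRQL syntax inside the string is real:

```python
from dataclasses import dataclass, field

@dataclass
class DiagnosticBundle:
    nrql: str                       # the query run on the caller's behalf
    window: str                     # time bound applied to all evidence
    implicated_services: list[str] = field(default_factory=list)

def handle_mcp_request(question: str, since: str = "30 minutes ago") -> DiagnosticBundle:
    """Translate a plain-language question into a time-bounded NRQL query.

    A real server would use an LLM or intent parser; this stub keys on a
    service name in the question just to show the shape of the translation.
    """
    service = "checkout" if "checkout" in question.lower() else "unknown"
    nrql = (
        f"SELECT percentile(duration, 95) FROM Transaction "
        f"WHERE appName = '{service}' SINCE {since} TIMESERIES"
    )
    return DiagnosticBundle(nrql=nrql, window=since, implicated_services=[service])

bundle = handle_mcp_request("Why did checkout latency spike after the last deployment?")
print(bundle.nrql)
```

The point of the pattern is that the agent never issues raw queries itself; it receives a bounded, auditable bundle it can cite in its recommendation.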

Portal and IDE surfaces​

  • Azure SRE Agent acts as the portal-native reliability assistant. It can call New Relic’s MCP Server when incidents appear, display evidence and remediation hints, and — depending on tenant governance — execute low-risk actions under approval gates. Microsoft documents the SRE Agent pricing model in Azure Agent Units (AAUs): a baseline always-on component and a usage-based component for active tasks. The technical flow explicitly ties actions to Azure identity (Entra) to maintain auditability.
  • Microsoft Foundry and Copilot Studio provide developer-facing surfaces where New Relic telemetry (via MCP) can be surfaced inline in IDEs or multi-agent orchestration environments. That lets devs query production performance with natural language while coding and reviewing.

Telemetry correlation and discovery​

New Relic continues to ingest metrics, traces, and logs (APM, OpenTelemetry pipelines). Azure Autodiscovery maps services, overlays configuration changes onto performance graphs, and feeds these causal relationships into the MCP payloads so agents and SRE surfaces receive prioritized, high-confidence signals rather than raw alerts. This is the foundation for converting an alert into an actionable remediation suggestion.
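As a toy illustration of that correlation, the sketch below overlays a deployment timestamp on invented per-service latency samples and flags services whose p95 latency worsened afterward. The data and the 1.5x threshold are assumptions for demonstration only:

```python
from statistics import quantiles

def p95(samples):
    return quantiles(samples, n=20)[-1]  # 95th percentile cut point

def flag_regressions(latency, deploy_time, threshold=1.5):
    """latency: service -> list of (timestamp, ms) samples. Flag services
    whose post-deploy p95 exceeds the pre-deploy p95 by `threshold`x."""
    flagged = []
    for service, points in latency.items():
        before = [ms for t, ms in points if t < deploy_time]
        after = [ms for t, ms in points if t >= deploy_time]
        if before and after and p95(after) > threshold * p95(before):
            flagged.append(service)
    return flagged

latency = {
    "checkout": [(t, 100) for t in range(0, 50)] + [(t, 400) for t in range(50, 100)],
    "catalog":  [(t, 80) for t in range(0, 100)],
}
print(flag_regressions(latency, deploy_time=50))  # prints ['checkout']
```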

Verified claims and what still needs validation​

What is verified:
  • New Relic publicly launched an MCP Server in public preview (early November 2025) and documented onboarding steps.
  • New Relic published press materials announcing agentic integrations with Microsoft Azure that reference the Azure SRE Agent, Microsoft Foundry, and Azure Monitor.
  • Microsoft’s public Azure SRE Agent product page documents the AAU billing model (4 AAU/hour baseline; 0.25 AAU/second for active tasks) in its preview pricing materials. These figures should be used to model expected monthly costs.
  • Market context (Gartner AI spending forecast) supports the macroeconomic premise that enterprise AI investment is accelerating and creates pressure to integrate observability into agentic workflows.
Claims requiring cautious reading:
  • A formal New Relic–Microsoft co-authored technical integration guide detailing how Azure SRE Agent calls New Relic’s MCP Server in a production tenant (data flows, egress, encryption, supported payloads, and latency SLAs) has not yet been published. Vendor materials and demos show the workflow is feasible, but procurement and platform teams should validate the precise integration method and compliance envelope during pilots.
  • Availability constraints such as preview region restrictions, compliance constraints for FedRAMP/HIPAA, and marketplace availability may vary by tenant and region; these should be validated against both New Relic account settings and Azure subscription region controls before production rollout.

The operational case: benefits for engineering and SRE teams​

When architected carefully, the New Relic + Azure agentic integration can deliver concrete operational value:
  • Faster MTTR — Agents that fetch causal traces and dependency overlays can surface the likely root cause within minutes instead of hours, because they eliminate manual context assembly across consoles.
  • Reduced context-switching — Engineers see telemetry evidence inside the Azure portal, IDE, or Copilot Studio rather than toggling between monitoring dashboards and ticketing systems. That’s a measurable productivity gain in real incident playbooks.
  • Auditable automation — Packaging remediation hints as runbook templates and gating execution via identity-bound approvals (Azure Entra, RBAC) preserves human-in-the-loop safety while enabling low-risk automation to be executed reliably.
  • Developer feedback loop — Surfacing production telemetry in developer surfaces shortens the feedback loop between code changes and production impact, enabling faster debugging of regressions tied to specific commits or deployments.
  • Agentic observability for AI workloads — As organizations run agentic apps and models, the ability to correlate model calls, token usage, latency and downstream service impacts becomes vital; New Relic’s MCP-backed payloads aim to make that correlation first-class.

The risks and trade-offs — what engineering leaders must evaluate​

  • Cost and consumption modeling — Azure billing for agentic work uses AAUs. A baseline cost (4 AAU/hour per agent) plus active task costs (0.25 AAU/second per task) can add up quickly under frequent automation or parallel incident activity. Use the published AAU examples to model monthly burn for expected agent count and incident frequency.
  • Telemetry volume and observability cost — High-cardinality telemetry (AKS clusters, machine-learning training workloads, token-level model telemetry) raises ingestion, storage, and query costs in New Relic. Teams must design sampling, retention, and aggregation policies to balance fidelity with predictable cost.
  • Operational safety and runaway automation — Agentic remediation introduces the possibility of cascading actions if runbooks are insufficiently validated. Staging automation as “recommend → gate/approve → act,” with strict approvals and canary rules, is the prudent path. Relying solely on agent confidence scores without human checks risks unintended configuration changes.
  • Governance, compliance, and data residency — Preview availability and compliance exceptions (e.g., FedRAMP, HIPAA) may limit MCP usage. Enterprises with regulatory constraints must demand red-team testing and explicit data-flow diagrams showing what telemetry leaves the tenant, what is stored, and how long contextual snapshots are kept.
  • Model and agent governance — Agents driven by LLMs can produce plausible but incorrect remediation suggestions. Teams must version prompts, track model endpoints, and capture rationale for each suggestion (provenance) to enable audits and post-incident reviews. This is particularly important when agents touch stateful systems or billing-sensitive resources.
  • Vendor lock and multi-cloud considerations — Tight integration where Azure portal surfaces depend on New Relic telemetry (and New Relic depends on Azure Monitor exports) can bake in vendor contracts that complicate multi-cloud portability. Platform teams should codify runbooks as code and ensure observable artifacts can be rehydrated in alternate providers if needed.
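The staged “recommend → gate/approve → act” pattern discussed above can be sketched as a small policy table. The action names, severity scale, and thresholds here are illustrative assumptions, not a New Relic or Microsoft API:

```python
from enum import Enum

class Gate(Enum):
    AUTO = "safe to execute automatically"
    APPROVE = "requires human approval"
    RECOMMEND = "recommendation only"

# Policy table: which remediation categories may run unattended.
SAFE_ACTIONS = {"restart_pod", "clear_cache"}
APPROVAL_ACTIONS = {"scale_out", "rollback_deployment"}

def gate(action: str, severity: int) -> Gate:
    """Classify a suggested remediation. High-severity incidents always
    escalate to a human regardless of the action's category."""
    if severity >= 3:
        return Gate.RECOMMEND
    if action in SAFE_ACTIONS:
        return Gate.AUTO
    if action in APPROVAL_ACTIONS:
        return Gate.APPROVE
    return Gate.RECOMMEND   # default-deny for unknown actions

print(gate("clear_cache", severity=1))   # low-risk action, low severity
print(gate("scale_out", severity=2))     # gated behind approval
print(gate("clear_cache", severity=3))   # high severity: recommend only
```

The default-deny branch is the important design choice: an action category the policy has never seen should never execute, only surface as a recommendation.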

Practical rollout checklist (recommended steps)​

  • Run a scoped pilot: start with one team, a narrow set of services (e.g., a single microservice or job queue), and a limited set of agent actions (read-only diagnostics; no automated writes).
  • Model AAU and telemetry costs: use Azure’s AAU pricing examples and New Relic ingestion estimates to build a monthly cost view for expected agent count and incident frequency.
  • Codify runbooks as code: convert recommended remediation steps into versioned runbooks maintained in CI/CD with automated tests.
  • Enforce identity and RBAC: map agent privileges to least-privilege Azure Entra identities and include agents in periodic access reviews.
  • Define human-in-the-loop policies: set severity thresholds and action categories that require human approval versus those allowed as safe, automated fixes.
  • Validate data residency and compliance: obtain a clear data-flow diagram from vendors showing telemetry egress, retention, and compliance controls for your tenant.
  • Measure outcomes: track MTTR, the number of human interventions prevented, false-positive automated actions, and realized cost savings from rightsizing recommendations.
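Measuring outcomes can start very simply. The sketch below computes MTTR before and after a pilot from invented incident timestamps:

```python
def mttr_minutes(incidents):
    """incidents: list of (detected_at, resolved_at) pairs in epoch minutes."""
    durations = [resolved - detected for detected, resolved in incidents]
    return sum(durations) / len(durations)

baseline = [(0, 90), (100, 160), (300, 420)]    # pre-pilot incidents
with_agent = [(0, 30), (100, 140), (300, 335)]  # pilot incidents

before, after = mttr_minutes(baseline), mttr_minutes(with_agent)
print(f"MTTR {before:.0f} min -> {after:.0f} min "
      f"({100 * (1 - after / before):.0f}% faster)")
# prints: MTTR 90 min -> 35 min (61% faster)
```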

Competitive context and market implications​

New Relic’s move is part of a broader trend: observability vendors (including Dynatrace, Elastic and others) and cloud providers are racing to populate agentic control planes with high-confidence diagnostics and auditable runbooks so that SRE workloads become more automated and less error-prone. Customers can expect vendor-specific differences in causal modeling approaches (statistical vs. causal-AI), storage models (lakehouses vs. indexed telemetry), and integration depth (export-only vs. bi-directional context exchange). Vendors that provide the most accurate, explainable diagnostics and tightest governance controls will be more successful in enterprise rollouts.

Critical analysis — strengths, weaknesses, and what to watch​

Strengths​

  • Practical productivity gains: Eliminating console hopping and surfacing telemetry where engineers work yields immediate, measurable time savings in incident response.
  • Standards-forward approach: Embracing MCP lowers integration friction across agent runtimes and reduces custom connector sprawl.
  • Auditability baked in: The emphasis on RBAC, Entra identity, and staged approvals aligns with enterprise governance expectations.

Weaknesses / risks​

  • Cost unpredictability: AAUs plus telemetry ingestion can create unexpected monthly bills without careful modeling. The preview AAU mechanics should be used to create conservative budgets.
  • Operational drift and safety: LLM-driven remediation hints can be incorrect; without rigorous runbook testing and approvals, you risk automated misconfiguration at scale.
  • Incomplete public integration docs: While demos and press releases show the workflow, a joint, detailed New Relic–Microsoft integration spec (data contracts, SLAs, compliance artifacts) is not yet published; this is a gap teams should validate during pilots.

What to watch next​

  • Publication of a joint technical integration guide from Microsoft and New Relic that details payload schemas, authentication, telemetry flow controls, and recommended guardrails.
  • Case studies showing real MTTR reductions and measurable cost impacts from customers who ran pilots in production.
  • Expansion of MCP support across additional observability vendors and agent runtimes, which would broaden interoperability and reduce single-vendor risks.

Verdict for platform engineering and SRE leads​

New Relic’s MCP Server plus the Azure integrations deliver a credible evolution in observability: from passive dashboards to agentic, contextualized operations that can speed incident response and reduce toil. The technical building blocks — MCP, Azure SRE Agent, Entra identity, and New Relic telemetry — line up into a sensible operational pattern: high-fidelity telemetry → structured context → auditable actions.
That said, success depends less on vendor promises and more on disciplined rollouts: conservative pilots, explicit cost modeling (AAUs + telemetry), strong runbook-as-code practices, and human-in-the-loop controls. The technology is ready for production experiments, but it is not a “set-and-forget” replacement for SRE judgment.

Final takeaways​

  • New Relic’s MCP Server public preview and Azure integrations materially lower the friction for agentic observability on Microsoft Azure, bringing contextual diagnostics into SRE and developer workflows.
  • Microsoft’s Azure SRE Agent pricing and AAU model should be modeled carefully during pilots; baseline and active costs are both significant levers.
  • The macro case for agentic observability is strong — Gartner-backed AI spending projections underline enterprise urgency — but operational success requires robust governance, cost controls, and validated runbooks.
  • Before wide rollout, engineering teams should validate the precise integration architecture, compliance footprint, and failure modes in a controlled pilot and insist on documented, auditable workflows.
New Relic’s announcement repositions observability as a first-class contributor to agentic automation in Azure; the immediate prize is lower MTTR and better developer feedback loops, while the longer-term prize will be trustworthy, auditable automation that scales without sacrificing safety or governance.
Source: HPCwire New Relic Introduces Agentic AI Integrations with Microsoft Azure - BigDATAwire
 

Neon blue blueprint of a cloud telemetry stack with New Relic, MCP Server, Azure, and governance modules.
New Relic’s latest push to put observability directly into the flow of cloud work—through a public-preview AI Model Context Protocol (MCP) Server and new integrations with Microsoft Azure—marks a pivotal evolution in how SREs, DevOps and developers will diagnose and act on incidents in the era of agentic AI. The company is shipping a protocol bridge that lets Azure’s emerging control-plane agents (notably the Azure SRE Agent and Microsoft Foundry surfaces) call New Relic for time‑bound telemetry, causal clues and packaged remediation hints so teams can reduce mean time to resolution (MTTR) and avoid costly context switching. This change is consequential: New Relic made the MCP Server publicly available in early November 2025 and announced broader Azure-focused integrations at Microsoft Ignite 2025, positioning its platform as the observability backbone for agentic operations across Azure.

Background​

Why agentic observability matters now​

Agentic AI—multi-step agents that orchestrate tools and services to accomplish tasks—has moved quickly from research to production-grade tooling across hyperscalers and developer platforms. The Model Context Protocol (MCP) has emerged as a de facto interoperability layer so agents can call tools deterministically and in standardized ways. Microsoft’s Foundry, Copilot Studio and the Azure control plane explicitly lean on MCP and related agent frameworks to give agents access to tools, telemetry and governance services; New Relic’s MCP Server plugs observability data into that same fabric so agents can act with situational awareness. The macroeconomic tailwind is unambiguous: Gartner forecasts global AI spending to top approximately $2.02 trillion in 2026, creating intense pressure on engineering and IT teams to scale AI-enabled workflows while preserving reliability and governance. Vendors are responding by wiring monitoring, causal insights and runbook automation directly into the places engineers already work.

What New Relic announced (the essentials)​

MCP Server in public preview​

New Relic’s AI Model Context Protocol (MCP) Server is a protocol bridge that exposes New Relic telemetry, NRQL query capabilities and packaged diagnostic payloads to MCP-capable agents. In practice, the MCP Server converts plain‑language or MCP-standardized tool requests into structured observability queries and returns time‑bound evidence: traces, dependency graphs, probable causes and recommended remediation steps that can be surfaced inside agent UIs or portal surfaces. The MCP Server entered public preview in early November 2025.

Native integrations with Microsoft Azure​

At Microsoft Ignite 2025 New Relic announced integrations that deliver its observability insights directly to Azure’s agentic surfaces:
  • Azure SRE Agent now calls New Relic’s MCP Server when alerts fire or deployments occur, enabling portal-native recommendations and (subject to tenant governance) gated remediation actions.
  • Microsoft Foundry and Copilot Studio can surface New Relic telemetry inside developer workflows, so agents and developers see production impact and diagnostic evidence inline.
  • New Relic Azure Autodiscovery gives Azure-tailored dependency maps and overlays configuration changes on performance graphs to help find root causes faster.
  • New Relic Monitoring for SAP Solutions is now available in the Microsoft Marketplace, offering agentless SAP observability for Azure customers.
Taken together, these features aim to convert raw telemetry into agent-ready context and auditable outputs so that agentic automation becomes both useful and governable inside Azure.

How it works: a technical walk-through​

The MCP Server as the interoperability layer​

At its core the MCP Server publishes a tool manifest and endpoints that MCP-capable agents can call. Agents request context (for example: “Why did checkout latency spike after last deployment?”), and the MCP Server generates NRQL queries or equivalent retrieval calls and returns structured payloads with:
  • Time-bounded traces and spans
  • Dependency and topology overlays
  • Ranked probable causes with confidence signals
  • Suggested remediation steps packaged as runbook templates for gating or execution
This architectural pattern reduces brittle point‑to‑point integrations (agent → each observability product) and instead centralizes observability access through a standardized protocol. New Relic documented setup and preview constraints with onboarding guides in November 2025.
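The payload items listed above could be modeled roughly as follows; the field names are assumptions for illustration, not New Relic’s published MCP payload schema:

```python
from dataclasses import dataclass

@dataclass
class RankedCause:
    description: str
    confidence: float           # a 0..1 signal, not a guarantee

@dataclass
class DiagnosticPayload:
    window: tuple[str, str]                   # time bound on all evidence
    trace_ids: list[str]                      # time-bounded traces and spans
    dependency_edges: list[tuple[str, str]]   # topology overlay
    causes: list[RankedCause]                 # ranked, confidence-scored
    runbook_steps: list[str]                  # packaged for gating or execution

payload = DiagnosticPayload(
    window=("2025-11-18T10:00Z", "2025-11-18T10:30Z"),
    trace_ids=["trace-a1"],
    dependency_edges=[("checkout", "payments-db")],
    causes=[RankedCause("connection pool exhaustion in payments-db", 0.82)],
    runbook_steps=["Scale payments-db connection pool", "Restart checkout pods"],
)
top = max(payload.causes, key=lambda c: c.confidence)
print(top.description)
```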

Azure SRE Agent — recommend → gate → act​

Azure’s SRE Agent is a portal-native reliability assistant and execution surface that uses a consumption unit called Azure Agent Units (AAUs) for billing. The typical flow that New Relic and Microsoft describe is:
  1. Detect — New Relic or Azure Monitor raises an alert.
  2. Enrich — Azure SRE Agent calls New Relic MCP Server to fetch causal evidence and remediation hints.
  3. Recommend — The SRE Agent displays ranked causes and suggested runbook steps in the Azure portal.
  4. Gate — Actions are gated by tenant governance (Azure Entra identity, RBAC) and human approvals where required.
  5. Act — For low-risk, idempotent steps (scale pods, clear cache, restart service), the SRE Agent can execute runbooks, recording audit trails and approvals.
Microsoft publishes the AAU billing model publicly: each agent incurs a baseline of 4 AAU per hour plus 0.25 AAU per second for active tasks. Teams must model these units into monthly FinOps scenarios to avoid unexpected bill shock.
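A back-of-the-envelope model using those published AAU mechanics might look like this; the incident profile in the example is a placeholder to replace with your tenant’s own numbers:

```python
def monthly_aau(agents: int, incidents_per_month: int,
                active_seconds_per_incident: int,
                hours_in_month: int = 730) -> float:
    """Estimate monthly AAU consumption from the published mechanics:
    4 AAU/hour always-on baseline plus 0.25 AAU/second of active work."""
    baseline = agents * 4 * hours_in_month
    active = incidents_per_month * active_seconds_per_incident * 0.25
    return baseline + active

# Example: 3 agents, 40 incidents/month, ~10 minutes of active agent work each.
aau = monthly_aau(agents=3, incidents_per_month=40, active_seconds_per_incident=600)
print(f"{aau:,.0f} AAU/month")
```

Note how the always-on baseline dominates at low incident volumes; the active component only takes over under heavy automation, which is exactly the burst scenario worth stress-testing in the model.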

Foundry and developer workflows​

Foundry is Microsoft’s “AI app and agent factory”: a hub for building, testing and governing agents with integrations into GitHub, Visual Studio and Copilot Studio. By feeding New Relic telemetry into Foundry, developers can query production performance while coding, see agent‑to‑agent activity for multi‑agent workflows, and validate that changes produce expected production outcomes. This tightens the feedback loop between code, deployment and production diagnostics.

What this actually delivers to teams​

  • Faster MTTR: Agents that can fetch causal traces and dependency overlays can surface likely root causes rapidly, eliminating the manual assembly of context across consoles. Early pilots reported measurable MTTR improvements in targeted scenarios, but outcomes are environment-specific and should be validated with POCs.
  • Reduced context switching: Engineers see telemetry and remediation suggestions inside the Azure portal, IDEs, or Copilot Studio instead of juggling multiple dashboards. This is a tangible productivity win for Azure‑first teams.
  • Auditable automation: Packaging remediation hints as runbook templates and gating execution through Azure Entra/RBAC preserves human-in-the-loop controls while enabling safe automation.
  • Observability for AI workloads: New Relic’s MCP payloads can include model-call metadata (token usage, latency), making it possible to correlate model behavior with downstream service impacts. This becomes important as more agentic applications reach production.
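As a toy version of that correlation, the sketch below joins invented per-request model-call latency with end-to-end request latency to flag requests dominated by the model call; all names and data are assumptions, not New Relic event schemas:

```python
model_calls = {  # request_id -> (tokens, model_latency_ms)
    "r1": (1200, 1800),
    "r2": (300, 200),
}
request_latency = {"r1": 2100, "r2": 900}  # end-to-end ms per request

def model_dominated(threshold: float = 0.5):
    """Requests where the model call accounts for more than `threshold`
    of total end-to-end latency."""
    return [
        rid for rid, (_, model_ms) in model_calls.items()
        if model_ms / request_latency[rid] > threshold
    ]

print(model_dominated())  # prints ['r1']
```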

Critical analysis: strengths, limits and operational trade-offs​

Strengths — where this makes sense​

  • Meets teams where they work. Consolidating telemetry into Azure SRE Agent and Foundry surfaces materially reduces friction for Azure-native organizations and simplifies governance. Microsoft’s product surfaces are broadly adopted by enterprises, making this a pragmatic integration play.
  • Standardized agent-tooling. Adopting MCP as the interoperability standard avoids brittle custom integrations and makes it easier to onboard new agents or evolve agent capabilities without reworking the telemetry plumbing. This is important as agent frameworks proliferate.
  • Causality-first approach. New Relic’s packaging of ranked probable causes with contextual evidence reduces “noise” and gives SREs a more actionable starting point than raw alerts, improving signal-to-noise in high-cardinality environments.

Risks and practical constraints​

  • Cost and consumption modeling. Agentic automation introduces a new billing axis: AAUs. Baseline scanning plus active task execution can add up—particularly for many agents, frequent incidents, or chatty multi-agent workflows. Organizations should model AAU burn rates and pair them with New Relic ingestion costs to forecast total run rates. Microsoft’s AAU mechanics are explicit (4 AAU/hour baseline; 0.25 AAU/second active tasks), and New Relic customers must fold those figures into FinOps planning.
  • Telemetry volume and observability spend. High-cardinality telemetry from AI workloads (per‑token model traces, per-request traces across hundreds of microservices) increases ingestion and storage costs. Teams must design sampling and retention strategies that preserve actionable signal while controlling spend.
  • Operational safety and runaway automation. Agents can cascade actions if runbooks are insufficiently validated. The prudent pattern is recommend → gate/approve → act with strict approvals, canary rules and kill switches. Relying solely on model confidence scores without approvals risks unintended configuration changes and business impact.
  • Governance, compliance and data residency. Enterprises with strict compliance requirements (FedRAMP, HIPAA, regional data residency rules) must map what telemetry leaves the tenant and how contextual snapshots are retained. Preview-stage integrations may have availability and compliance constraints that vary by tenant and region.
  • Model hallucinations and provenance. LLM‑driven agents can propose plausible but incorrect fixes. Organizations must version prompts, track which model endpoints agents used, and capture provenance for each suggested remediation to enable audits and post‑incident reviews.
  • Vendor lock and multi‑cloud portability. Deeply embedding New Relic telemetry into Azure native surfaces simplifies operations on Azure but may create migration friction for multi‑cloud strategies. Platform teams should codify runbooks as code and ensure observable artifacts can be rehydrated on other providers if needed.

Practical rollout: recommended steps for platform and SRE teams​

  1. Start with a scoped pilot: select a single team and a narrow set of services (for example, one microservice or a single AKS cluster). Limit the agent’s actions to read‑only diagnostics and human‑in‑the‑loop recommendations at first.
  2. Model AAU and telemetry costs: use Microsoft’s AAU pricing examples to estimate monthly consumption and combine that with New Relic ingestion & retention cost projections. Factor in worst‑case bursts and parallel incident scenarios.
  3. Codify runbooks as code: turn remediation hints into versioned, tested automation playbooks in CI/CD and require automated tests before enabling execution in production.
  4. Gate and approve: default to recommendation-only for anything with material availability or billing impact; require multi-party approvals and enforce RBAC policies.
  5. Instrument observability governance: track which telemetry fields leave the tenant, define retention windows, and apply redaction or tokenization where necessary for sensitive data.
  6. Measure KPIs: define MTTR baseline, quantify time reclaimed per incident, and track AAU cost per incident to evaluate ROI before widening the rollout.

Where claims are verified — and where to be cautious​

  • Verified, independently: New Relic published the MCP Server public preview and the November Azure integrations in official documentation and press releases dated November 4 and November 18, 2025; Microsoft publishes Foundry and SRE Agent documentation, including the AAU pricing model and developer guidance. These vendor artifacts confirm the core technical claims and the availability of previews.
  • Cross-checked with independent industry reporting: Analyst coverage and trade press echo the core narrative (agentic control-plane surfaces plus observability bridging), and Gartner’s spending forecast supports the market rationale. Use those independent signals for context but treat vendor statements about firsts or exclusivity as marketing until validated in customer POCs.
  • Claims requiring empirical validation: specific MTTR reductions, percent productivity uplifts, or cost‑savings will vary by workload, telemetry fidelity and runbook quality. Vendor demos and press materials highlight benefits, but real-world impact must be measured in situ. Any single numeric uplift cited in marketing should be validated with controlled pilots.

Business and market implications​

  • For Azure-first organizations, the tight coupling of New Relic telemetry into Azure’s agentic surfaces reduces friction and shortens feedback loops between code and production. That can accelerate time to market for AI‑enabled features and reduce incident toil when implemented conservatively.
  • FinOps will change: AAUs add a new consumption axis and, combined with rising telemetry ingestion from AI workloads, will make cost management a core part of SRE strategy. Expect platform engineering and FinOps to collaborate more closely to set AAU budgets, enforce sampling and retention policies, and define stimulus/response thresholds for automation.
  • Competitive dynamics: Observability platforms that adopt MCP or equivalent agent protocols can position themselves as the “data plane” for agentic workflows across providers and IDEs. For vendors, deep integration with hyperscaler control planes is a strategic way to remain central to enterprise operational stacks; procurement teams should evaluate portability and exit options before committing.

Final verdict: pragmatic optimism with disciplined governance​

New Relic’s MCP Server and Azure integrations are a necessary, pragmatic step toward making agentic AI operationally useful and auditable. For Azure-first enterprises, surfacing New Relic’s causal telemetry and runbook templates inside the Azure SRE Agent and Microsoft Foundry reduces manual context assembly and advances a safer route to partial automation. Public documentation and press releases confirm the technical approach and preview availability, and Microsoft’s AAU pricing is explicit—giving teams the data needed for FinOps planning. That said, the payoff depends on disciplined implementation: conservative pilots, runbooks as code, rigorous identity- and policy-based gating, telemetry cost controls, and careful validation of agent suggestions. Without those guardrails, organizations risk runaway costs, unsafe automation, and compliance gaps. The right path is iterative—prove value in a narrow domain, instrument outcomes, and expand only when confidence and ROI are demonstrable.

Quick reference: verified facts and dates​

  • New Relic AI MCP Server entered public preview on November 4–5, 2025.
  • New Relic announced agentic integrations with Microsoft Azure at Microsoft Ignite on November 18, 2025.
  • Microsoft Foundry and Azure SRE Agent are publicly documented Azure services; Foundry integrates with IDEs and Copilot Studio and supports MCP.
  • Microsoft’s Azure SRE Agent billing is based on Azure Agent Units (AAUs): 4 AAU/hour baseline and 0.25 AAU/second for active tasks (preview pricing model).
  • Gartner forecasts global AI spending to exceed $2.02 trillion in 2026.
New Relic’s move is an important step toward making agentic AI operationally useful at scale, but success will come down to conservative pilots, measurable KPIs and diligent governance—not marketing promises.

Source: Express Computer New Relic introduces agentic AI integrations with Microsoft Azure to reduce MTTR and boost developer productivity - Express Computer
 
