Microsoft's announcement that Azure will protect data not only at rest and in transit but
while it’s being processed marks a significant shift in cloud security: Azure Confidential Compute places sensitive code and data inside Trusted Execution Environments (TEEs), so that plaintext values in memory are hidden from the host OS, the hypervisor, and, depending on the trust model, even Microsoft itself.
Background
When most organizations moved to the cloud they relied on two standard encryption pillars:
data at rest (disk encryption) and
data in transit (TLS). What remained exposed, however, was
data in use—data that must be decrypted to be processed by applications or machine learning models. Azure Confidential Compute fills that gap by enabling applications to place the most sensitive code and data into
enclaves or TEEs, which cryptographically protect memory and enforce that only authorized code can access secrets.
Microsoft first introduced the Confidential Compute concept for Azure in public previews and early access programs, positioning it as a platform capability that supports multiple TEE implementations so developers can choose a trust model appropriate for their workload. The two initial TEEs Microsoft promoted were a software-based Trusted Execution Environment implemented via Hyper-V’s Virtual Secure Mode (VSM) and a hardware-based enclave using Intel Software Guard Extensions (SGX).
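As a deliberately simplified illustration of the gap, the Python sketch below (using the cryptography package’s Fernet primitive) shows a record that is protected at rest, and could be protected in transit by TLS, yet must be decrypted into ordinary process memory the moment an application needs to compute on it; the field name and values are invented for the example.

```python
# Illustration of the "data in use" gap, using the 'cryptography' package (Fernet).
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

stored = f.encrypt(b"salary=182000")        # data at rest: only ciphertext touches the disk
# ...TLS would protect the same bytes in transit between services...

record = f.decrypt(stored)                   # data in use: plaintext now sits in process
annual = int(record.split(b"=")[1]) * 12     # memory, readable by sufficiently privileged
print(annual)                                # host software unless a TEE protects it
```

A TEE moves exactly this last step, the decrypt-and-compute portion, inside a protected boundary.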
What Microsoft announced and why it matters
The core promise
- Encryption in use: Confidential Compute ensures sensitive data remains encrypted while being processed in memory. Data is decrypted only inside the protected TEE, where cryptographic and policy protections prevent inspection or tampering from outside actors.
- Multiple trust models: Customers can choose SGX enclaves when they want to exclude the cloud operator from the trust boundary, or VSM-based TEEs when they prefer a Hyper-V-based software isolation model.
- Platform tooling and compatibility: Microsoft committed to SDKs, attestation tooling, and integration with Azure services so that confidential workloads can be developed on Windows and Linux and deployed on Azure VMs and container platforms.
Why this is meaningful: many regulated industries—finance, healthcare, government—need technical guarantees that plaintext secrets won’t be exposed to operators, compromised admin credentials, or the host OS. Confidential Compute adds hardware-rooted cryptographic controls and attestation chains that materially reduce these risks and unlock use cases such as confidential multi-party analytics, protected model inferencing, and encryption-aware database processing.
Technical overview: VSM vs SGX and how TEEs work
Virtual Secure Mode (VSM)
VSM is a Hyper-V feature that creates a minimal, isolated virtualized environment inside which sensitive components can run. That environment is isolated from the host OS and other VMs by hypervisor-enforced boundaries. In practical terms, VSM allows applications to split into two parts: the general-purpose logic that runs in the regular VM, and the sensitive logic that runs inside a small, protected VSM enclave. The hypervisor prevents administrators and host processes from reading or tampering with the enclave memory.
Benefits of VSM:
- Works with the Hyper-V ecosystem and requires fewer hardware prerequisites than SGX.
- Easier to adapt existing services that can be refactored into protected modules.
- Offers an operational model where the cloud operator still controls the platform but cryptographic protections limit what administrators can extract.
Limitations:
- VSM is software-based isolation that ultimately depends on the hypervisor and platform firmware; its threat model is different from processor-protected TEEs.
Intel SGX (Software Guard Extensions)
SGX creates an enclave within a process that the CPU hardware itself enforces. Memory pages assigned to an enclave are encrypted whenever they leave the CPU package and are decrypted only inside the CPU’s protected enclave region. Remote attestation allows the enclave to cryptographically prove to a remote party (see the sketch after this list) that:
- It runs on genuine Intel hardware,
- It is running a specific, measured code image, and
- Platform microcode and security patches meet a required baseline (when the attestation service is configured to check them).
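To make that relying-party check concrete, here is a minimal verification sketch in Python. It assumes the PyJWT package; the attestation provider URL, the JWKS path, the claim name, and the expected measurement are illustrative assumptions rather than a documented Azure contract, so substitute whatever your attestation service actually issues.

```python
# Minimal relying-party sketch: validate an attestation token before trusting an enclave.
# Assumes PyJWT (pip install pyjwt[crypto]); the JWKS URL, claim name, and expected
# measurement are illustrative placeholders, not a documented Azure contract.
import jwt

ATTESTATION_JWKS_URL = "https://myprovider.attest.azure.net/certs"  # assumed endpoint
EXPECTED_MRENCLAVE = "a1b2c3..."  # measurement of the enclave binary you built and trust

def verify_attestation_token(token: str) -> dict:
    """Check that the token was signed by the attestation service, then check that the
    enclave it describes is running the code image we expect."""
    signing_key = jwt.PyJWKClient(ATTESTATION_JWKS_URL).get_signing_key_from_jwt(token)
    claims = jwt.decode(token, signing_key.key, algorithms=["RS256"])
    if claims.get("x-ms-sgx-mrenclave") != EXPECTED_MRENCLAVE:  # claim name assumed
        raise ValueError("enclave measurement does not match the expected code image")
    return claims
```

Only after a check like this succeeds should a relying party wrap keys or secrets for the enclave and send them across.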
Benefits of SGX:
- Strong hardware-enforced isolation: even the host OS and cloud operator cannot read enclave memory in principle.
- Enables use cases where customers want to exclude the cloud provider from the trust boundary.
Limitations and practical caveats:
- SGX’s security is not absolute in practice; research has demonstrated microarchitectural side-channel attacks (for example SGAxe, CacheOut, CrossTalk and others) that can extract enclave secrets if the attacker has sufficient capabilities and the platform has unpatched vulnerabilities. These risk vectors force continuous mitigation via microcode, firmware updates, and attestation policies.
How Azure implements Confidential Compute today (platform evolution)
Azure implemented Confidential Compute through a progression of VM families and attestation services that give customers options for hardware-backed and VM-level confidentiality:
- DC-series VMs (SGX-enabled): The DCsv2 and DCsv3 families offered SGX-enabled VMs for running enclave workloads in Azure public regions, enabling developers to use SGX enclaves without owning SGX-capable servers.
- Confidential VMs with AMD SEV-SNP and Intel TDX: Azure later expanded to whole-VM confidentiality using AMD SEV-SNP and Intel TDX (Trust Domain Extensions) for customers that prefer VM-level memory encryption over enclave-based models. These options target "lift-and-shift" workloads where minimal code changes are desired.
- Platform attestation and Microsoft Azure Attestation (MAA): Azure provides attestation services so customers can verify the platform state and the integrity of TEEs before provisioning secrets into an enclave. Attestation is a key part of the security story because it establishes the measurable chain of trust needed to safely hand secrets to a remote enclave.
These platform choices reflect a pragmatic approach: support hardware enclaves for the strongest isolation while also offering confidential VM models that balance ease of migration and performance.
Real-world use cases and early adoption
Azure and early partners highlighted several compelling use cases that confidential computing unlocks:
- Confidential multi-party computation: Multiple organizations can contribute datasets to a joint computation without disclosing their raw data to each other—useful in finance (fraud detection), healthcare (collaborative genomics), and telecom analytics.
- Protected machine learning inference: Models that work on sensitive inputs (medical images, personally identifiable information) can run inference inside enclaves so the raw inputs never leave the protected boundary.
- Encryption-aware database processing: Extensions like enclave-enabled Always Encrypted in Azure SQL and SQL Server let the database engine execute queries over encrypted columns inside an enclave, enabling richer functionality (such as range comparisons and pattern matching) on sensitive fields; a client-side connection sketch appears at the end of this section.
A range of early-adopter stories and marketplace partners sprang up around these scenarios as Azure made SGX and VSM accessible through preview and generally available VM families.
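As a client-side sketch of the database scenario above: with pyodbc and a recent Microsoft ODBC Driver for SQL Server, adding ColumnEncryption=Enabled to the connection string asks the driver to transparently encrypt parameters and decrypt results for Always Encrypted columns. The server, database, credentials, and table below are placeholders, and enclave attestation settings (not shown) vary by driver version and attestation service, so treat this as a sketch rather than a recipe.

```python
# Sketch: querying an Always Encrypted column from Python via pyodbc and the Microsoft
# ODBC Driver for SQL Server. Server, database, credentials, and schema are placeholders;
# enclave attestation options are omitted because they depend on driver version.
import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=Clinic;Uid=app_user;Pwd=<placeholder>;Encrypt=yes;"
    "ColumnEncryption=Enabled;"  # let the driver handle encrypted columns transparently
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # The predicate targets an encrypted column. Without an enclave, only equality against
    # deterministically encrypted values is possible; an enclave-enabled server can also
    # evaluate richer operations (ranges, LIKE) on the protected data inside the enclave.
    cursor.execute("SELECT PatientId FROM dbo.Patients WHERE SSN = ?", ("795-73-9838",))
    for row in cursor.fetchall():
        print(row.PatientId)
```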
Strengths: what Confidential Compute delivers well
- Tangible reduction of privileged‑insider risk: By design, TEEs constrain what cloud operators or compromised local admins can access—this matters for organizations that must demonstrate technical controls against privileged access.
- Flexible trust models: The ability to choose between enclave-based SGX, VM-level SEV/TDX, or software-enforced VSM lets customers pick the right balance between portability, performance, and isolation.
- Ecosystem integration: Microsoft invested in SDKs, attestation flows, Azure Key Vault and Managed HSM integrations so confidential workloads can incorporate robust key management and auditability.
- New business scenarios: Confidential Compute lowers the legal and technical barriers for multi-tenant or cross-organizational analytics where sharing raw plaintext was previously prohibitive.
Additionally, platform upgrades—such as adding AMD SEV-SNP and Intel TDX options—reduce vendor lock-in and provide migration paths for legacy applications that can’t or shouldn’t be rewritten as enclave-dependent software.
Risks, caveats, and the attack surface that remains
Confidential Compute is a powerful control, but it is not a silver bullet. Key risks to understand:
1. Hardware and microarchitectural vulnerabilities
TEEs rely on CPU microarchitecture. Over the years, researchers have demonstrated side-channel attacks that can recover secrets from SGX enclaves under certain conditions (SGAxe, CacheOut, CrossTalk, Foreshadow/L1TF and related transient execution attacks). These discoveries force regular microcode and platform updates; they also mean the
threat model must be realistic: TEEs raise the bar but do not make systems impenetrable to nation-state-level or advanced persistent attackers with host-level footholds. In particular, SGX-based claims that "not even Microsoft can see the data" must be qualified: SGX prevents ordinary administrative access, but the practical security guarantees depend on patch posture, attestation practices, and the absence of unmitigated side channels.
2. Key management is the operational fulcrum
Protecting keys—how they are generated, where they are provisioned, and who controls them—remains the core trust decision. Confidential Compute reduces exposure of decrypted values
in memory, but if attestation or key-certification processes are misconfigured, or keys are provisioned by a third party, the confidentiality guarantees can be undermined. Azure’s Managed HSM, Key Vault, and external key custody options are critical components of a correctly implemented solution; a minimal attestation-gated provisioning sketch follows this list.
3. Supply-chain, firmware, and lifecycle risks
TEEs are only as secure as the firmware, microcode, and platform provisioning pipeline. Compromised firmware, unsigned updates, or weak supply-chain controls can corrupt the root of trust. Customers should demand attestation evidence and independent audits for high‑assurance deployments.
4. Developer complexity and observability
Enclave programming, attestation integration, and secure secret provisioning introduce operational and development complexity. Debugging inside enclaves is intentionally constrained; organizations must adapt CI/CD, monitoring, and incident response processes to account for limited observability within TEEs.
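Returning to the key-management point above, here is a minimal sketch of attestation-gated provisioning in Python. It assumes the azure-identity and azure-keyvault-secrets packages; the vault URL, secret name, and the caller-supplied verifier are placeholders, and a production design would wrap the secret to a key held only inside the enclave (or use Key Vault’s secure key release) rather than returning it in the clear.

```python
# Sketch: release a secret to a workload only after its attestation evidence is verified.
# Assumes azure-identity and azure-keyvault-secrets; vault URL and verifier are placeholders.
from typing import Callable

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://contoso-confidential.vault.azure.net"  # placeholder vault

def provision_secret(attestation_token: str, secret_name: str,
                     verify: Callable[[str], dict]) -> str:
    """The caller supplies a verifier (for example, the earlier attestation sketch) that
    raises if the evidence is not acceptable; only then do we touch the vault."""
    verify(attestation_token)
    client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
    return client.get_secret(secret_name).value
```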
Practical guidance for architects and security teams
- Define threat models clearly. Decide whether you need to exclude the cloud operator from the trust boundary, or whether software-based isolation with strong key control is sufficient.
- Use attestation before provisioning secrets. Integrate Azure Attestation or vendor attestation services to validate platform integrity and microcode levels before keys are released to enclaves.
- Pair TEEs with strong key custody. Use Managed HSM, Dedicated HSM, or Bring‑Your‑Own‑Key models where compliance requires cryptographic separation.
- Plan for patching and mitigation. Establish processes for rapid microcode and firmware updates; treat attestation policies as living documents tied to patch levels (see the policy-check sketch after this list).
- Test performance and behavior. TEEs can introduce resource constraints (for example, SGX enclave memory limits in earlier generations) and tail-latency differences; benchmark realistic workloads.
- Audit and engage vendors. Request attestation and compliance documentation from cloud providers and hardware vendors; for sovereign or regulated deployments consider localized governance (partner-operated sovereign clouds) that combine confidential compute with regional HSM custody models.
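One way to act on the "living documents" advice above is to keep attestation requirements in a small, version-controlled policy file and evaluate attestation claims against it at deployment time. The claim names and policy fields below are illustrative assumptions, not a fixed schema; use whatever your attestation service and governance process actually define.

```python
# Sketch: attestation requirements as versioned policy data rather than hard-coded constants.
# Claim names ("x-ms-sgx-mrsigner", "x-ms-sgx-svn") and policy fields are illustrative.
import json

def load_policy(path: str) -> dict:
    """The policy file is reviewed and bumped alongside microcode/firmware rollouts,
    e.g. {"expected_signer": "c0ffee...", "minimum_svn": 7}."""
    with open(path) as f:
        return json.load(f)

def claims_satisfy_policy(claims: dict, policy: dict) -> bool:
    """Accept a workload only if the enclave signer matches and its security version
    number (SVN) meets the minimum set by the current patch baseline."""
    return (
        claims.get("x-ms-sgx-mrsigner") == policy["expected_signer"]
        and int(claims.get("x-ms-sgx-svn", -1)) >= int(policy["minimum_svn"])
    )
```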
Where the technology is going: from SGX to TDX/SEV and beyond
Since the initial Confidential Compute announcements, the industry has evolved toward broader, whole-VM memory encryption and trust-domain models that simplify migration and reduce the need for application rewrites.
- AMD SEV-SNP provides VM-level memory encryption and stronger integrity checks that protect a whole VM from the hypervisor. This enables simpler "lift-and-shift" migrations into confidential VMs without enclave rewrites.
- Intel TDX (Trust Domain Extensions) aims to provide a similar whole-VM confidential model in which the CPU’s trust domain protects VM memory from the host. These architectures reduce reliance on enclave programming while maintaining hardware-enforced isolation guarantees.
- Platform integration and attestation improvements (Microsoft Azure Attestation and Integrated HSM approaches) are designed to make it easier and faster to verify platform integrity and provision keys securely for confidential workloads.
This trend reflects a maturing confidential computing ecosystem: more choices for customers, broader hardware support, and easier migration pathways—at the cost of a more nuanced decision set for architects.
Critical analysis: strengths, business impact, and remaining unknowns
Microsoft’s push for Confidential Compute addresses a long‑standing gap in cloud security and provides the strongest practical technical control many customers have had for on‑cloud secret processing. The combination of enclave models (SGX), hypervisor-enforced TEEs (VSM), and whole-VM memory encryption (SEV‑SNP/TDX) creates a spectrum of options that fit different organizational priorities: maximum isolation, minimal application change, or operational familiarity. Notable strengths:
- Capability breadth: Azure’s portfolio now spans SGX enclaves, confidential VMs, confidential containers, and attestation services—giving customers alternatives rather than a one-size-fits-all lock-in.
- Ecosystem investment: Microsoft’s work on SDKs, SQL enclaves, Key Vault integrations, and marketplace partners simplifies adoption for enterprise customers.
Potential risks and open questions:
- Hardware vulnerability lifecycle: As academic research has repeatedly shown, microarchitectural attacks evolve. Customers must assume that TEEs need ongoing monitoring, patching, and contingency plans.
- Operational trust vs contractual protections: Technical controls reduce certain vectors but don’t fully replace legal, contractual, and procedural safeguards. Customers in strictly regulated environments should combine technology with contractual audits, independent attestation, and key custody arrangements.
- Performance and economics at scale: Enclave and confidential VM performance characteristics vary by SKU and workload. The cost tradeoff versus standard VMs should be evaluated with pilot workloads and end‑to‑end metrics.
Where claims are hard to fully verify: statements such as "no one, not even Microsoft, can access enclave data" are directionally correct for properly configured hardware-rooted enclaves, but they depend on the absence of unmitigated hardware bugs, correct attestation and key management, and a secure firmware lifecycle. For high assurance, customers should treat them as conditional guarantees and require documented attestation evidence and key custody proofs.
Conclusion
Azure Confidential Compute is a strategic and technically significant expansion of cloud security controls. By encrypting data while it’s processed, Microsoft and its silicon partners have closed a gap that long held back cloud adoption for the most sensitive workloads. The platform’s multi‑vendor approach—enclaves, VSM, SEV‑SNP, and TDX—gives architects a flexible palette of trust models and migration options. At the same time, confidential computing should not be treated as a panacea. Strong key management, attestation discipline, firmware and microcode hygiene, and realistic threat modeling are essential to realize its promise. Organizations that pair Confidential Compute with robust operational controls, independent attestations, and pilot-based performance validation will be best positioned to move regulated and IP‑sensitive workloads to Azure with confidence.
Source: BetaNews
Microsoft adds Confidential Compute to Azure cloud platform