Charles Hoskinson’s defense of relying on hyperscalers for Cardano’s upcoming Midnight mainnet — and his claim that cryptography, multi‑party computation (MPC) and confidential computing neutralize the centralization risk — is a legitimate technical position, but it is not the only one, and it understates several structural and operational failure modes that remain real even when state‑of‑the‑art cryptography is deployed. The debate that played out at Consensus in Hong Kong in February 2026 crystallizes a practical question every blockchain architect must answer today: can cryptography and enclave technology alone make dependence on a small set of global cloud providers compatible with the long‑term goals of decentralization? The short answer: yes, to an extent — but no, not unconditionally or without careful, protocol‑level mitigations.

At Consensus Hong Kong in February 2026, Charles Hoskinson defended Cardano’s plan for Midnight — a privacy‑focused smart contract chain — by arguing that modern cryptographic primitives and hardware protections remove the single‑point‑of‑failure argument against using hyperscalers such as Google Cloud and Microsoft Azure as early infrastructure partners. The thrust of his argument was threefold: (1) advanced cryptography prevents cloud operators from reading or altering secrets, (2) MPC distributes key material so no single host can reconstruct secrets, and (3) confidential computing (TEEs, confidential VMs) encrypts data while in use to prevent provider access. That defense is plausible, technically sophisticated and important; but it is incomplete when evaluated against operational realities and the evolving threat surface around hardware and infrastructure concentration.
Why the hyperscalers are compelling
Hyperscalers are not villains by default. There are concrete reasons prominent projects lean on them.
- They provide elastic, globally distributed capacity that few specialized networks can match overnight. Hyperscalers operate massive datacenters and a global networking backbone that makes them an obvious choice for bursty or latency‑sensitive workloads.
- They have invested in confidential computing — services that aim to encrypt data in use via hardware TEEs such as Intel TDX, AMD SEV‑SNP and vendor‑managed attestation systems — and they publish tooling and managed offerings that shrink time‑to‑market for privacy‑first applications. Google Cloud, Microsoft Azure and AWS each talk openly about confidential VMs, attestation and isolation features tailored for sensitive workloads.
- MPC and threshold cryptography are mature enough to power many custody, signing, and coordination use cases today. Major fintech and custody platforms already use MPC in production to avoid single‑device key custody and to create robust signing workflows.
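The split‑then‑reconstruct property behind MPC custody can be illustrated with plain Shamir secret sharing. This is a toy sketch over a small prime field, not a production MPC protocol — real threshold signing never reconstructs the key in one place — but it shows why no single host (or single cloud tenant) holding one share learns anything:

```python
# Minimal t-of-n Shamir secret sharing over a prime field.
# Illustrative only: production MPC signing computes with shares
# without ever reassembling the secret on one machine.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = split(secret=123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[2:5]) == 123456789  # a different 3 also work
```

The operational caveat in the bullet above still applies: the math guarantees nothing about *where* those five shares physically live.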
Why cryptography and TEEs don’t eliminate infrastructure concentration risk
Cryptography and TEEs change the nature of the risk; they do not eliminate the existence of an operational leverage point.
TEEs are stronger, but not foolproof
Trusted Execution Environments narrow the attack surface by encrypting memory and providing attestation, but they are still hardware and microarchitecture that can and have been attacked. Academic surveys and field research have repeatedly demonstrated side‑channel and microarchitectural attacks against popular enclave technologies (Intel SGX, AMD SEV families, ARM TrustZone), and new attack classes continue to appear. For example, recent systematizations of TEE vulnerabilities show persistent classes of side‑channel and power‑management attacks; and applied research in 2025 found novel DDR5/attestation‑targeting side channels capable of extracting enclave secrets from commodity components. Those vulnerabilities demonstrate the difference between “hard to read” and “impossible to read.” TEEs raise the bar, but they do not make confidentiality guarantees absolute.

More concretely, TEEs depend on a chain of hardware, firmware and vendor‑maintained microcode. If any link in that chain is compromised — whether by supply‑chain manipulation, buggy microcode, or a future speculative‑execution or memory bus side channel — the confidentiality guarantees weaken. Attestation mechanisms help detect some classes of tampering, but attestation itself hinges on vendor‑controlled roots of trust and may be brittle when firmware updates or attestation policies change.
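The vendor‑root dependency is easy to see in what an attestation check actually does. The sketch below is entirely hypothetical — field names and values are illustrative stand‑ins, not any vendor’s real attestation API — but it shows that every acceptance decision ultimately chains back to a vendor‑controlled root and a mutable allowlist:

```python
# Hypothetical attestation check. All names (TRUSTED_ROOTS,
# ALLOWED_MEASUREMENTS, firmware floor) are illustrative: the point is
# that every input here is controlled or updated by the hardware vendor.
from dataclasses import dataclass

@dataclass
class AttestationReport:
    measurement: str       # hash of the enclave's code/data at launch
    firmware_version: int  # microcode/firmware security version number
    root_fingerprint: str  # fingerprint of the vendor key that signed it

TRUSTED_ROOTS = {"vendor-root-v2"}         # vendor-controlled root of trust
ALLOWED_MEASUREMENTS = {"sha256:abc123"}   # expected enclave builds
MIN_FIRMWARE = 7                           # revocation floor for known-bad firmware

def verify(report: AttestationReport) -> bool:
    """Accept only reports chained to a trusted root, with an allowed
    measurement and firmware at or above the revocation floor."""
    return (report.root_fingerprint in TRUSTED_ROOTS
            and report.measurement in ALLOWED_MEASUREMENTS
            and report.firmware_version >= MIN_FIRMWARE)

ok = AttestationReport("sha256:abc123", 8, "vendor-root-v2")
stale = AttestationReport("sha256:abc123", 5, "vendor-root-v2")
assert verify(ok)
assert not verify(stale)  # firmware below the revocation floor is rejected
```

When the vendor rotates roots or changes attestation policy, every one of those constants can silently change underneath the relying party — which is exactly the brittleness described above.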
MPC distributes keys but expands the trust surface
MPC avoids single custodian risk by splitting cryptographic responsibilities across many parties, but that distribution comes with coordination, communication and governance complexity. Reliable MPC at scale raises operational questions:
- Who runs the MPC participants? If a small set of cloud providers host a majority of MPC shards, the distribution is only theoretical.
- Network latency and bandwidth matter for MPC performance; increasing the number of parties increases coordination overhead, sometimes quadratically.
- Governance and key‑rotation policies require robust on‑chain or off‑chain coordination; those meta‑systems are frequent weak points.
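The communication‑overhead point is simple arithmetic. In a fully connected protocol round, every party messages every other party, so traffic grows quadratically with the party count (a back‑of‑envelope illustration, not a model of any specific MPC protocol):

```python
# Per-round message count for a fully connected MPC round:
# each of n parties sends to each of the other n - 1.
def messages_per_round(n_parties: int) -> int:
    return n_parties * (n_parties - 1)

for n in (3, 10, 50):
    print(n, "parties:", messages_per_round(n), "messages/round")
# 3 parties: 6, 10 parties: 90, 50 parties: 2450 -- quadratic growth
```

Multiply by round count and per‑message latency across regions and the cost of “just add more parties” becomes concrete quickly.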
The cloud can still control the pipes, not the math
Even if cloud providers cannot directly decrypt enclave memory or reconstruct MPC secrets, they control operational levers:
- Bandwidth and cross‑region networking
- Capacity allocation and throttling during periods of scarcity
- Physical machine access and removal of ROI‑negative customers
- Compliance actions and region restrictions driven by governmental orders
Hyperscalers can still exert influence via non‑inspection controls: rate‑limits, network throttling, or selective region availability. History shows cloud providers suffer outages and, at times, enforce content and service restrictions for legal or compliance reasons; an architecture that anchors critical proof or availability functions to a single vendor thus inherits those systemic risks. Recent hyperscaler outages across providers — and the industry conversation around multi‑cloud resilience — make the operational fragility of concentrated cloud dependency an empirical reality, not a hypothetical.
The Layer‑1 compute fallacy: capacity vs. control
Hoskinson’s point that no single Layer‑1 (L1) can run global compute workloads is correct in context: L1 blockchains were never designed to host the world’s compute‑intensive workloads (AI training, analytics pipelines, high‑frequency systems). L1s are consensus engines: they verify state transitions, enforce rules, and provide durable settlement. What modern scaling designs do is move heavy lifting off‑chain while making results easily verifiable on‑chain.

That model — execution off‑chain, verification on‑chain — underpins rollups, zero‑knowledge proofs, and verifiable compute networks. The crucial architectural question is not whether L1 can do the compute but who controls the execution fabric that produces the verifiable outputs. If provers, sequencers, or DA (data‑availability) providers are concentrated inside hyperscaler networks, the dispute becomes one of infrastructure dependency rather than cryptographic correctness. Independent analyses of rollup and prover ecosystem roadmaps emphasize the need to decentralize provers and DA layers precisely because off‑chain compute centralization can reintroduce single‑point failure modes.
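The execute‑off‑chain/verify‑on‑chain asymmetry can be sketched with a deliberately crude stand‑in: a brute‑force search that is expensive to run but takes one hash to check. Real systems use ZK proofs rather than hash puzzles, but the architectural point — anyone, anywhere, can verify the output without trusting the machine that produced it — is the same:

```python
# Toy "execute off-chain, verify on-chain" asymmetry: finding the
# answer is expensive; checking it is a single hash. Not a real proof
# system -- just an illustration of cheap, host-independent verification.
import hashlib

def prove(data: bytes, difficulty: int = 2) -> int:
    """Off-chain worker: brute-force a nonce (the expensive step)."""
    nonce = 0
    while not hashlib.sha256(
            data + nonce.to_bytes(8, "big")).hexdigest().startswith("0" * difficulty):
        nonce += 1
    return nonce

def verify(data: bytes, nonce: int, difficulty: int = 2) -> bool:
    """On-chain verifier: one hash, runnable by anyone on any hardware."""
    return hashlib.sha256(
        data + nonce.to_bytes(8, "big")).hexdigest().startswith("0" * difficulty)

nonce = prove(b"batch-42")          # heavy work, wherever it ran
assert verify(b"batch-42", nonce)   # cheap check, vendor-agnostic
```

Notice what the sketch does *not* tell you: who ran `prove`, on whose hardware, and whether that fleet is concentrated in one provider — which is precisely the question the paragraph above raises.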
Specialization wins for sustained, narrow workloads
A common framing pits hyperscalers’ scale against specialized networks’ efficiency. But specialization matters.

Hyperscalers optimize for generality: they serve thousands of heterogeneous workloads and provide elasticity and tooling that favor optionality. That generality adds layers — virtualization, multi‑tenant orchestration and compliance scaffolding — that increase overhead for narrow, persistent workloads like ZK proof generation.
By contrast, a vertically integrated proving network can trade generality for sustained efficiency:
- Customized ASICs or tuned GPU clusters deliver better proof‑per‑dollar and proof‑per‑watt for fixed circuits.
- Co‑design of prover software, circuit scheduling and aggregation logic reduces pipeline waste.
- Specialized operators amortize capital over predictable, 24/7 workloads rather than short‑term spot rentals.
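The amortization argument reduces to one ratio. The numbers below are hypothetical placeholders, not measured benchmarks; the point is only that effective cost per proof is hourly cost divided by realized throughput, so tuned, highly utilized hardware can undercut a more expensive general‑purpose rental:

```python
# Illustrative-only arithmetic; all figures are made-up placeholders.
def cost_per_proof(hourly_cost: float, proofs_per_hour: float,
                   utilization: float) -> float:
    """Effective $/proof when hardware is busy only part of the time."""
    return hourly_cost / (proofs_per_hour * utilization)

# Hypothetical general-purpose cloud rental vs. tuned owned hardware.
cloud = cost_per_proof(hourly_cost=4.00, proofs_per_hour=100, utilization=1.0)
owned = cost_per_proof(hourly_cost=1.50, proofs_per_hour=120, utilization=0.9)
assert owned < cloud  # amortized specialized capacity wins per proof
```

The same formula also shows the flip side: at low utilization (bursty demand), owned hardware’s advantage evaporates — which is why the hybrid posture below reserves hyperscalers for bursts.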
Practical architecture: use hyperscalers, but don’t depend on them
Hyperscalers can and should be part of a resilient stack — but as accelerants, not as the core that, when removed, collapses the system.

Key design principles for fortifying decentralization while leveraging hyperscaler capabilities:
- Separate settlement from execution and availability.
  - Keep final settlement and dispute resolution on an L1 that is vendor‑agnostic.
  - Publish succinct proofs on‑chain so verification remains possible independent of any hosting provider.
- Diversify the prover and storage ecosystem.
  - Incentivize a geographically and economically diverse set of provers via token economics, staking and slashing, and lightweight onboarding. Marketplaces for proving jobs reduce single‑operator leverage.
  - Store proof artifacts, critical metadata and historical records on decentralized storage or DA layers (Filecoin, Arweave, Celestia/EigenDA) to prevent a single cloud provider from withdrawing crucial evidence. Decentralized DA/storage reduces single‑vendor “withdrawal” risk.
- Use hyperscalers for bursts and edge distribution, not as the persistent proving plane.
  - Treat cloud VMs as elastic accelerators for non‑critical or temporary tasks.
  - Anchor the canonical proving and storage mechanisms to economically aligned, protocol‑native operators.
- Design for graceful degradation.
  - If a major host disappears, the network should slow down and reassign jobs rather than halt finality or permit censorship of proof submission.
  - Create fallback submission paths (e.g., light‑client bridging, multi‑relay proof submission) so proofs can be published by alternative channels in emergencies.
- Harden attestation and auditability.
  - Use multi‑vendor attestation strategies (chains of attestation from different hardware providers) and publicly auditable measurement logs so independent verifiers can corroborate attestation claims.
  - Mandate reproducible verifier code and on‑chain proofs of prover software versions to reduce opaque “black box” trust.
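The graceful‑degradation principle above can be sketched as a scheduler that simply re‑plans over whichever operators are still alive. This is a hypothetical toy (round‑robin over named stand‑in operators, no real network), but it captures the target behavior: losing a provider slows the network down rather than halting it:

```python
# Hypothetical job scheduler: operator names are illustrative stand-ins.
def assign(jobs: list[str], operators: list[str]) -> dict[str, list[str]]:
    """Round-robin proving jobs across the currently live operators."""
    plan: dict[str, list[str]] = {op: [] for op in operators}
    for i, job in enumerate(jobs):
        plan[operators[i % len(operators)]].append(job)
    return plan

jobs = [f"proof-{i}" for i in range(6)]

# Normal operation: three operators share the load.
plan = assign(jobs, ["hyperscaler-a", "indie-1", "indie-2"])
assert all(len(v) == 2 for v in plan.values())

# A major host disappears: re-plan over survivors; nothing is dropped.
degraded = assign(jobs, ["indie-1", "indie-2"])
assert sum(len(v) for v in degraded.values()) == 6  # slower, not halted
```

The design choice worth noting: degradation is a property of the *assignment function*, not of any operator — so it works regardless of which provider fails.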
What to watch — risks, trade‑offs and policy friction
- Supply‑chain and hardware attacks remain an open threat. Researchers continue to find novel microarchitectural and memory‑bus attacks against TEEs; defendable architectures must assume that some classes of hardware bugs will surface over time and plan for key rotation, revocation and attestation evolution.
- MPC hides secrets but raises coordination costs. Projects that lean heavily on MPC must validate the economic and latency profile of their chosen topology at production scale, and they must design governance for shard recovery and reseeding. Academic and industry studies report continued tension between MPC’s privacy guarantees and the practical realities of communication overhead and robustness.
- Data availability remains the Achilles’ heel of off‑chain compute. Without durable, decentralized DA/storage, proofs and artifacts may be gated by providers who control the storage or egress. Projects are increasingly aligning with DA networks (Celestia, EigenDA, Avail) and permanent storage (Filecoin, Arweave) to prevent vendor‑driven withdrawal.
- Outages and policy compliance are real levers. Hyperscaler outages in 2024–2025 showed how quickly a mistake in a globally distributed control plane can cascade. Providers also operate under jurisdictional compliance pressures that might force access limits or service suspensions in specific regions — an operational reality for any network that ties its proving plane to those providers.
Conclusion: a pragmatic, layered approach to decentralization
The debate at Consensus Hong Kong — and the CoinDesk commentary that followed — highlights a necessary tension in crypto infrastructure: the technical elegance of cryptographic protections versus the messy reality of infrastructure economics and control. Hoskinson was right to emphasize that MPC, attestation and confidential computing are powerful tools that materially reduce certain classes of risk. But the counterargument is equally important: those tools change the shape of trust; they do not absolve architects from planning for concentrated infrastructure failures, policy interventions, or emergent hardware vulnerabilities.

For teams building privacy‑first, verifiable compute systems today, the practical path is hybrid and defensive: use hyperscalers for reach, acceleration and early reliability; build vertically‑specialized proving capacity where efficiency matters; use decentralized DA and storage to anchor durability; and adopt governance that enshrines rapid recovery and distributed participation. Decentralization is not a binary checkbox that cryptography ticks for you — it is an engineering posture baked into choices about who runs hardware, who stores artifacts, and how systems respond when the unexpected happens.
Put plainly: don’t cede the critical proving and availability plane to brand‑name cloud vendors and then hope cryptography will save you if the infrastructure itself becomes the chokepoint. Design so the network keeps working — perhaps slower, perhaps differently — even when a major provider turns away. That combination of cryptographic rigor and infrastructural diversity is what will actually preserve decentralization in practice, not just in theory.
Source: CoinDesk Hoskinson might be wrong about the future of decentralized compute