Undersea fiber isn’t a niche concern for network engineers anymore — it’s the physical spine of the global cloud economy, and this week’s reveal of Amazon Web Services’ new transatlantic cable, Fastnet, underscored how Big Tech is quietly building an ocean-spanning private network to control latency, capacity and resilience for cloud and AI services. Amazon says Fastnet will deliver more than 320 terabits per second between Maryland and County Cork, Ireland — a capacity the company frames with memorable analogies (the “entire digitized Library of Congress three times per second” and “12.5 million HD films simultaneously”), and it joins a crowded and strategic field of submarine projects driven by Google, Meta and Microsoft.
Background / Overview
Undersea fiber-optic cables carry the vast majority of international data traffic today. Industry estimates commonly place that figure well above 95% of intercontinental traffic, and major network operators and cloud providers treat submarine systems as critical national and commercial infrastructure. The pattern over the last decade has been decisive: cloud platforms that once bought capacity on third‑party systems now build or co-invest in cables to secure predictable bandwidth and route diversity for latency‑sensitive services. The new Amazon Fastnet announcement is a useful hinge point to examine the scale, tactics and risks involved when hyperscalers take ownership of the ocean floor’s data arteries. Sherwood News’ breakdown of Big Tech’s undersea footprint provides a readable snapshot of who has what, and why it matters — the piece maps Amazon, Microsoft, Meta and Google’s projects and shows just how closely cloud architecture and submarine topology are tied.
Fastnet in context: what Amazon announced and what it means
The Fastnet announcement — technical highlights
Amazon’s Fastnet is described as a dedicated transatlantic subsea cable that will land in Ocean City, Maryland, and County Cork, Ireland, with a planned in‑service date around 2028. AWS states a design capacity exceeding 320 Tbps, with armored nearshore protection, deep burial where appropriate, and optical switching/branching to allow future scalability and route flexibility, alongside local community funds tied to the landing locations. The company positions Fastnet as a resilience and routing‑diversity investment for AWS services and AI networking. Amazon’s own blog post and other coverage confirm this is AWS’s first wholly owned solo subsea project; previously, AWS primarily purchased capacity in consortia. That shift — from capacity buyer to sole owner — is significant because it gives AWS direct control over pathing, maintenance decisions and the ability to prioritize traffic inside its own global fabric.
Interpreting capacity claims
Capacity numbers on subsea systems are a mix of physical fiber-pair design and projected optical modulation advances. Statements like “320 Tbps” reflect the system’s design bandwidth assuming advanced coherent optics and future upgrades; they are not immutable throughput guarantees to a single customer. Still, the order‑of‑magnitude jump matters: a privately controlled 320+ Tbps trunk materially raises a provider’s ability to move AI training datasets, multi‑region replication and CDN traffic with lower inter‑continental contention. Amazon’s marketing analogies (Library of Congress / millions of HD films) are vivid but intended for scale; the engineering takeaway is that the system will be a high‑capacity artery into Europe for AWS.
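As a sanity check on those analogies, a quick back‑of‑envelope calculation shows how the headline numbers relate. The 320 Tbps figure is AWS’s stated design capacity; the ~25 Mbps per HD stream and the 16 fiber‑pair split below are illustrative assumptions, not published Fastnet specifications.

```python
# Back-of-envelope check on Fastnet's headline numbers.
# 320 Tbps is AWS's stated design capacity; the per-stream bitrate and
# fiber-pair count below are illustrative assumptions, not AWS specs.

DESIGN_CAPACITY_BPS = 320e12     # 320 Tbps design capacity (AWS figure)
HD_STREAM_BPS = 25e6             # assume ~25 Mbps per HD video stream
ASSUMED_FIBER_PAIRS = 16         # hypothetical fiber-pair count

concurrent_hd_streams = DESIGN_CAPACITY_BPS / HD_STREAM_BPS
per_pair_tbps = DESIGN_CAPACITY_BPS / ASSUMED_FIBER_PAIRS / 1e12

print(f"Concurrent HD streams: {concurrent_hd_streams / 1e6:.1f} million")       # ~12.8 million
print(f"Capacity per fiber pair: {per_pair_tbps:.0f} Tbps (if 16 pairs)")        # ~20 Tbps
```

Run as written, this yields roughly 12.8 million concurrent streams (in line with Amazon’s “12.5 million HD films” framing) and about 20 Tbps per assumed fiber pair, which is why capacity growth on these systems comes from better coherent optics rather than new cable.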
Who else is building, and how the market is divided
Google: the most aggressive subsea investor
Google has been building private cables since the 2010s and lists many named projects — Dunant (US–France), Curie (US–Chile), Grace Hopper (US–UK/Spain), Equiano (Portugal–South Africa), Firmina and others. Public Google posts note that private subsea cable investment lets Google plan capacity for long‑term cloud, search and services demands; reporting aggregates suggest Google’s total access to subsea route kilometers is far larger than any single peer. Google’s corporate blog has repeatedly stated that “98% of international internet traffic” is carried on subsea cables and highlights the strategic role private cables play in its service delivery.
Meta: scale, a ‘longest cable’ project and 2Africa
Meta has shifted from consortium participation to an ambition for a fully owned, global system. In early 2025 Meta confirmed Project Waterworth, a proposed network of roughly 50,000 kilometers intended to span five continents and open three new oceanic corridors, with deep‑water routing and enhanced burial techniques to mitigate damage risks. Meta is also a major partner in the 2Africa project, a roughly 45,000‑kilometer system intended to circumnavigate Africa and link Europe, Asia and Africa — one of the largest regional builds in recent years. Tech and trade coverage places Meta among the largest subsea investors, and these projects show how platform operators are seeking both scale and route diversity.
Microsoft: consortiums, open designs and operational exposure
Microsoft has been a significant subsea investor and a founding partner in open transatlantic projects such as MAREA (≈6,600 km), which Microsoft developed with Meta (formerly Facebook) and Telxius. MAREA was described at launch as an “Open Cable System,” designed to let landing stations interoperate with multiple vendors and upgrade capacity without physically re‑laying cable. Microsoft’s own infrastructure is not immune to subsea risk: in September 2025 Azure experienced measurable latency increases after multiple undersea cables were cut in the Red Sea corridor, an incident that showcased how physical cuts translate into cloud performance degradations.
Amazon: now building solo, but smaller subsea footprint so far
Before Fastnet, AWS participated in consortiums and capacity purchases but owned fewer direct subsea miles than competitors, according to industry tracking. Sherwood News and industry sources show AWS’s investments are meaningful but have until now trailed Google and Meta in direct subsea route-kilometers; Fastnet closes some of that gap for the North Atlantic corridor specifically. Amazon also reports massive totals when terrestrial fiber is included, but ownership of subsea route miles is the strategic lever here.
Why Big Tech is building cables: technical drivers and business logic
- Bandwidth demands from AI and cloud services: Training datasets, cross‑region model replication and distributed inference consume massive intercontinental capacity. Owning fiber reduces exposure to market‑rate capacity scarcity and improves planning predictability.
- Route diversity and resilience: Private cables allow hyperscalers to choose landing sites and routes that avoid congested or geopolitically sensitive chokepoints. Fastnet’s Maryland–Cork path, for example, deliberately adds an alternative East‑Coast landing and avoids traditional pathways.
- Operational control for traffic engineering: When a provider controls both endpoints and the middle-mile, it gains superior observability and can implement traffic‑engineering, protection switching and capacity upgrades that map to service SLAs.
- Commercial and geopolitical strategy: Owning or co‑owning cables can be cheaper over the long term than buying lit waves and gives leverage in peering and commercial negotiations; it also creates bargaining chips in national infrastructure discussions.
Notable strengths of this strategy
Predictability and scale
Private subsea builds let cloud platforms guarantee capacity for their most demanding workloads, reducing long‑term procurement uncertainty and enabling predictable cost curves for inter‑regional data flows.
Resilience when combined with routing diversity
Multiple private trunks with dissimilar physical routes materially reduce the risk that a single corridor failure will throttle a provider’s entire transcontinental pipeline. Amazon’s Fastnet — by landing in Maryland rather than the more common Boston/New York corridor — adds that desired diversity.
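One way to see why dissimilar routes matter: treat each trunk as being unavailable for some fraction of the year (faults plus repair windows). The figures in this sketch are illustrative assumptions, and real routes are rarely fully independent, but the multiplication effect is the point.

```python
# Illustrative only: the unavailability figures are assumptions, not measurements.
route_unavailability = 0.01   # assume each trunk is down ~1% of the time (faults + repairs)

# Single corridor: the service is down whenever that corridor is down.
single_route_downtime = route_unavailability

# Two trunks on genuinely dissimilar physical paths, failing independently:
# the service is only down when both are down at once.
dual_route_downtime = route_unavailability ** 2

print(f"One corridor:  {single_route_downtime:.2%} of the year unreachable")  # 1.00%
print(f"Two corridors: {dual_route_downtime:.4%}")                            # 0.0100%
```

Shared landing stations or converging corridors break that independence assumption, which is exactly why Fastnet’s Maryland landing is framed as route diversity rather than just extra capacity.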
Technical modernization
New systems are designed with open architectures, optical switching and the expectation of incremental capacity upgrades through optics, not new cable — a design philosophy that extends useful life and lowers long‑term capital intensity. Project examples (MAREA, Grace Hopper, Dunant) show how optical upgrades increase usable capacity long after initial laydown.
Real and growing risks
1) Concentrated chokepoints and security exposure
The oceans contain a handful of chokepoints (Red Sea/Bab el‑Mandeb, Suez approaches, Singapore Strait, English Channel approaches) where dozens of cables converge. Multiple faults or a coordinated attack in those corridors can force traffic onto much longer detours. The September Red Sea cuts that affected Azure are a real‑world demonstration: rapid rerouting kept reachability but increased latency across broad geographies.
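The latency cost of a detour is simple propagation physics: light in fiber travels at roughly 200,000 km/s (about two‑thirds of c), or ~5 ms of one‑way delay per 1,000 km. The detour distances below are illustrative assumptions, not measured Red Sea reroutes.

```python
# Rough propagation-delay math for a forced detour.
# ~200,000 km/s is the approximate speed of light in optical fiber (~2/3 c);
# the detour lengths used below are illustrative, not measured reroutes.

FIBER_KM_PER_SEC = 200_000

def added_rtt_ms(extra_path_km: float) -> float:
    """Extra round-trip time, in milliseconds, from a longer physical path."""
    one_way_seconds = extra_path_km / FIBER_KM_PER_SEC
    return 2 * one_way_seconds * 1000

print(f"{added_rtt_ms(5_000):.0f} ms extra RTT for a 5,000 km detour")    # ~50 ms
print(f"{added_rtt_ms(10_000):.0f} ms extra RTT for a 10,000 km detour")  # ~100 ms
```

That added round‑trip time hits chatty protocols and synchronous replication hardest, which is why rerouting preserves reachability while still degrading application performance.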
2) Repair logistics are slow, complex and geopolitically entangled
Subsea repair is a maritime operation: survey vessels, permissioning, insurance, ship availability and on‑site safety all govern timelines. In contested waters, repair ships may be delayed for diplomatic reasons, stretching what would otherwise be a few‑day fix into weeks. That reality means operational mitigations (rerouting, caching, edge inference) are necessary but imperfect.
3) Economic concentration and strategic lock‑in
Hyperscaler control over major intercontinental nodes raises competition and policy questions. When a handful of firms control critical corridors, smaller carriers and national providers may face reduced bargaining power and potential single‑party dependencies for mission‑critical bandwidth. This structural concentration is drawing attention from regulators and governments.
4) Attribution and escalation risks
When cable breaks appear in conflict zones, preliminary public attribution can be premature and politically hazardous. Forensic confirmation requires operator diagnostics and sometimes physical inspection, so early claims of sabotage should be treated as provisional. The industry has repeatedly warned against quick attributions without technical evidence.
What the Red Sea incident taught cloud customers — a practical checklist
The September 6, 2025 Red Sea cuts that produced Azure latency spikes provide a practical template for sysadmins and architects. Key immediate and medium‑term actions:
- Map exposure: inventory which apps and backups cross specific intercontinental corridors; ask cloud operators for route geometry when available.
- Harden timeouts and retries: avoid bursting behavior that amplifies congestion under reroute conditions (a minimal backoff sketch follows this list).
- Use edge caching / CDN and regional inference: reduce cross‑ocean round trips for latency‑sensitive workloads.
- Design asynchronous replication for cross‑continent copies where possible.
- Negotiate network transparency and contingency clauses with cloud/transit vendors: require post‑incident RCA and clear escalation channels.
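On the timeout‑and‑retry point, the pattern to aim for is a bounded per‑request timeout with capped, jittered exponential backoff, so clients back off rather than pile onto an already congested, rerouted path. This is a minimal sketch using only Python’s standard library; the URL, attempt counts and delays are placeholders to tune for your own clients, not recommended values.

```python
# Minimal sketch of capped exponential backoff with jitter, so that retries
# back off rather than hammering a congested, rerouted path.
import random
import time
import urllib.error
import urllib.request

def fetch_with_backoff(url: str, attempts: int = 5, base_delay: float = 0.5,
                       max_delay: float = 30.0, timeout: float = 10.0) -> bytes:
    """Fetch a URL with a bounded timeout and jittered exponential backoff."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            # Full jitter: sleep a random amount up to the capped exponential delay.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
    raise RuntimeError("unreachable")

# Example usage (placeholder URL):
# payload = fetch_with_backoff("https://example.com/healthz")
```

The same shape applies whether the caller is an HTTP client, a queue consumer or a replication job; the essential properties are the hard timeout, the cap, the jitter and a firm limit on attempts.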
Policy and industry implications
- Protecting subsea assets as critical infrastructure: Several national governments and international organizations are discussing frameworks to protect cable corridors, restrict anchoring in key lanes and accelerate repair permits. That thinking is accelerating after incidents where repairs were delayed by access issues.
- Repair fleet capacity and coordinated staging: The global fleet of cable ships is finite and expensive; some proposals call for public–private partnerships to stage repair assets in strategic regions to shorten response windows.
- Transparency standards for consortiums: Calls for clearer, faster operator bulletins and standardized forensic reporting aim to reduce uncertainty after incidents and to prevent premature or inaccurate public attributions.
- Incentives for route diversity: Governments can incentivize routes that avoid chokepoints (overland corridors, southern routes) through permitting, landing‑site facilitation and subsidies for strategic builds. Meta’s Waterworth and Google’s new routes show operators are already seeking alternative corridors; policy can accelerate that trend.
Cross‑checking key claims: verification and caution
- Claim: “Submarine cables carry 99% (or >95%) of international network traffic.” Industry reporting and academic reviews consistently place submarine fiber as the primary medium for bulk intercontinental traffic; estimates vary (95–99%), depending on measurement definitions, but the claim that the vast majority of international traffic uses subsea cables is supported by multiple industry and research sources. Treat exact percentages as estimates rather than immutable constants.
- Claim: “Fastnet will deliver 320+ Tbps and land in Maryland and County Cork, operational by 2028.” AWS’s own announcement and multiple news outlets corroborate the route, capacity and timeline; the 320 Tbps figure reflects design capacity and a high‑level engineering target, but practical per‑flow throughput will depend on optics and provisioning.
- Claim: “Google has access to ~267,000 kilometers through 33 projects.” The Sherwood News summary attributes these totals to TeleGeography’s tracking. Independent industry trackers and multiple news outlets report that Google leads Big Tech in route‑kilometers and project count, though exact totals can differ between public summaries and TeleGeography’s behind‑paywall database. Where precise kilometer totals are material, treat TeleGeography as the authoritative industry tracker, noting that its figures are compiled and periodically updated; check the latest database for the current count.
- Claim: “Meta’s Project Waterworth is 50,000 km.” TechCrunch and Wired independently covered Meta’s Project Waterworth disclosures in 2025, both citing Meta’s project brief. That figure is a corporate announcement and represents planned route length rather than immediately deployed cable. As with any multi‑year submarine project, routing is subject to change during engineering, permitting and construction.
What network and Windows admins should do next — practical, prioritized actions
- Immediate (days): Monitor provider status pages (Azure Service Health, AWS network advisories), enable subscription‑scoped alerts, and temporarily throttle/shift non‑urgent cross‑continent transfers.
- Short term (weeks to months): Add route diversity checks to procurement, validate multi‑region failovers that avoid shared chokepoints (a minimal latency‑probe sketch follows this list), and increase CDN/edge cache usage for static assets.
- Medium term (quarters): Negotiate contingency transit capacity with carriers, include physical‑path transparency clauses and emergency transit pricing in SLAs, and test disaster playbooks that simulate long repair windows.
- Strategic (year+): Re‑architect latency‑sensitive services to be regional-first, adopt asynchronous multi‑region data strategies for bulk replication, and participate in industry initiatives to improve cable protection and repair capacity.
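To make the failover drills measurable, a lightweight probe can log TCP connect latency to each regional endpoint over time, so corridor‑level degradation shows up in your own telemetry rather than in user complaints. This is a minimal sketch; the hostnames and the alert threshold are placeholders, not real endpoints or recommended values.

```python
# Minimal cross-region TCP connect-latency probe. Hostnames and thresholds
# are placeholders; point these at your own regional endpoints.
import socket
import time

ENDPOINTS = {
    "eu-primary": ("eu.example.internal", 443),
    "us-secondary": ("us.example.internal", 443),
}
ALERT_THRESHOLD_MS = 150.0   # placeholder: tune to your measured baseline

def connect_latency_ms(host: str, port: int, timeout: float = 5.0) -> float:
    """Return the TCP connect time in milliseconds (raises OSError on failure)."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

if __name__ == "__main__":
    for name, (host, port) in ENDPOINTS.items():
        try:
            latency = connect_latency_ms(host, port)
            flag = "ALERT" if latency > ALERT_THRESHOLD_MS else "ok"
            print(f"{name}: {latency:.1f} ms [{flag}]")
        except OSError as exc:
            print(f"{name}: unreachable ({exc})")
```

Feeding these samples into an existing monitoring stack gives you a baseline to compare against during an incident like the September Red Sea cuts, and a concrete trigger for rehearsed failover playbooks.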
Conclusion
Fastnet’s announcement is more than a press release about a new cable; it’s confirmation that hyperscalers see subsea infrastructure as a strategic layer they must design, own and operate to meet the demands of cloud‑scale AI and global services. That trend buys capacity and resilience for major providers but also concentrates control over the ocean’s data arteries, amplifying geopolitical and systemic risk if chokepoints are not addressed collectively.
The technical answer is clear: more diverse, modern routes plus improved repair capacity and standardized transparency are necessary to reduce single‑corridor fragility. For Windows admins and IT architects, the operational answer is equally clear: plan for physical‑path risk, adopt edge‑first patterns, demand path transparency in vendor contracts, and practice failover for the real world — because no amount of cloud abstraction will splice a broken fiber on the seafloor for you.
(Industry reporting, company releases and operational advisories used to prepare this feature include AWS’s Fastnet announcement and coverage of Big Tech subsea projects, along with contemporaneous reporting about subsea faults and provider mitigations. Readers should treat precise route‑kilometer totals and percentage estimates as best‑available figures that are periodically updated by industry trackers.)
Source: Sherwood News, “Big Tech’s most important infrastructure is at the bottom of the sea”