Oracle’s new E6 compute shape — the first broadly available OCI instance class powered by 5th‑Gen AMD EPYC “Turin” silicon — delivers a clear step up in raw CPU throughput and performance-per-dollar for many general‑purpose and CPU‑bound cloud workloads, according to a fresh round of public‑cloud benchmarks and underlying vendor specifications. The E6 shape uses a custom AMD EPYC 9J45 SKU and, in Phoronix’s comparative testing of 16‑vCPU configurations across Oracle, AWS, Azure and Google Cloud, the E6‑based instances regularly outperformed the prior E5 generation and stood well against comparable instances from other hyperscalers — with price-performance wins in a number of real‑world workloads.
Background / Overview
Oracle’s E6 shape is the company’s SP5‑socket offering built around a custom 5th‑Gen AMD EPYC part (listed as EPYC 9J45 in Oracle’s compute‑shape documentation). Oracle’s public compute‑shape pages list the E6 family as using the EPYC 9J45 with a base frequency of 2.7 GHz and a max boost of 4.1 GHz, and the vendor presents E6 as the successor to the E5 shapes already in OCI’s lineup. AMD’s launch material for the 5th‑Gen EPYC (codename Turin, EPYC 9005 series) positions the family as a major generational step for cloud, enterprise and AI infrastructure — improved IPC, new core microarchitecture benefits from Zen 5, and higher single‑thread boost characteristics that matter for mixed workloads. AMD’s public statements highlight sizable gen‑to‑gen gains and emphasize silicon‑ and platform‑level advantages, including more memory channels and expanded I/O. Phoronix executed an extensive cross‑cloud test matrix that compared Oracle E6 (and the prior E5), Azure D_v6 instances (both x86 and Arm variants), Google Cloud c4 and c4a flavors (x86 and Axion/Arm), and AWS m8 families (Graviton4 and Intel‑based m8i). The comparison targeted matched 16‑vCPU configurations (for OCI the tests used 8 OCPU and 16 OCPU variants where relevant), ran more than 100 tests per instance, and reported both absolute performance and performance‑per‑dollar using on‑demand hourly pricing. The raw Phoronix/OpenBenchmarking test logs for the E6 runs are publicly available as Phoronix test-suite results.
What the hardware actually is: EPYC Turin and Oracle’s 9J45
EPYC 9005 “Turin” — architecture and positioning
- Zen 5 microarchitecture: 5th‑Gen EPYC (“Turin”) is built on the Zen 5 family and represents AMD’s latest server core design, bringing IPC improvements and expanded vector capabilities compared with prior generations. AMD positions Turin as optimized for cloud and AI host duties — high core counts, higher boost clocks on selected SKUs, and large memory bandwidth for multi‑threaded server workloads.
- Product breadth: The 9005 series ranges from low‑core to very high‑core parts (up to 192 cores in AMD‑listed SKUs), letting cloud providers pick SKUs for density, power, or single‑thread performance as needed. AMD’s public launch materials emphasize strong per‑watt and per‑socket throughput improvements, metrics that matter for cloud TCO.
Oracle’s EPYC 9J45 — the E6 platform silicon
- Custom cloud SKU: Oracle refers to the E6 shape’s processor as AMD EPYC 9J45. Oracle’s compute‑shape documentation lists the E6 processor line with a base frequency of 2.7 GHz and a max boost of 4.1 GHz, matching a high‑frequency, cloud‑oriented Turin SKU. Oracle’s documentation also covers larger BM (bare‑metal) E6 shapes up to E6.256 and VM shapes with flexible OCPU configurations.
- OCPU semantics: Oracle bills CPU capacity in OCPUs; on x86 (Intel/AMD) an OCPU corresponds to one physical core with simultaneous multithreading enabled (effectively two vCPUs). That means the Oracle testing that used 8 OCPU or 16 OCPU maps to the provider’s physical/virtual CPU accounting model — a crucial detail when matching 16‑vCPU instances across clouds. Oracle’s own docs explain the OCPU ⇄ vCPU mapping and the billing implications.
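The OCPU ⇄ vCPU mapping described above can be expressed as a small helper. This is an illustrative sketch (one x86 OCPU equals one SMT‑enabled physical core, i.e. two vCPUs); the function names are hypothetical and not part of any Oracle SDK or API.

```python
def ocpus_to_vcpus(ocpus: int, smt_threads_per_core: int = 2) -> int:
    """Convert billed OCPUs to the vCPU count seen by the guest OS."""
    return ocpus * smt_threads_per_core


def ocpus_for_vcpu_target(vcpus: int, smt_threads_per_core: int = 2) -> int:
    """How many OCPUs to provision to match a given cross-cloud vCPU size."""
    if vcpus % smt_threads_per_core:
        raise ValueError("vCPU target must be a multiple of threads per core")
    return vcpus // smt_threads_per_core


# Matching the 16-vCPU instances on other clouds requires 8 OCPUs on OCI x86 shapes.
print(ocpus_to_vcpus(8))          # -> 16
print(ocpus_for_vcpu_target(16))  # -> 8
```

This accounting detail is exactly why the Phoronix runs used the 8 OCPU shape when matching 16‑vCPU instances elsewhere.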
The benchmarking matrix — what was tested and how
Test targets and methodology (high level)
- Test set: Phoronix compared instances from Oracle (E6 and E5), Microsoft Azure (Intel Xeon x86 and Arm‑based Cobalt 100 variants), Google Cloud (x86 and Axion, Google’s custom Arm silicon), and AWS (Graviton4 and Intel Granite Rapids). Each instance was provisioned to yield 16 vCPUs where possible, and Oracle’s runs used both the 8 OCPU and 16 OCPU shapes to map to the same vCPU count in their billing model.
- OS and software: The public summary for the cloud round states tests were run on Ubuntu 24.04 LTS with default system packages; the published Phoronix/OpenBenchmarking logs include the test‑suite traces needed to reproduce runs. Note that some OpenBenchmarking run files show Oracle Linux in certain KVM traces, so while the article reported Ubuntu 24.04 for the cross‑cloud comparison, individual logs reflect the variety of environments used for platform‑specific runs. Treat the stated OS as the intended cross‑cloud baseline for comparability, and check specific run logs for per‑run operating‑system details when reproducing tests. This is an important caveat when interpreting small percentage differences.
- Workloads: The suite included more than 100 benchmarks across real‑world and synthetic workloads: compilers, multi‑threaded renderers and encoders, scientific kernels, database microbenchmarks, and developer toolchain tasks. Geometric means and per‑test charts were used to summarize aggregate behavior and show variance across workload types.
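The aggregation method above (geometric means over per‑test scores, then division by the on‑demand hourly rate for the value metric) can be sketched in a few lines. All numbers and function names below are illustrative assumptions, not Phoronix’s actual tooling or measured results.

```python
from math import prod


def geomean(scores):
    """Geometric mean of per-test scores (higher = better).

    Preferred over the arithmetic mean for benchmark suites because it
    prevents one outlier test from dominating the aggregate.
    """
    scores = list(scores)
    return prod(scores) ** (1.0 / len(scores))


def perf_per_dollar(scores, hourly_rate_usd):
    """Aggregate score divided by the on-demand hourly price."""
    return geomean(scores) / hourly_rate_usd


# Purely illustrative scores and prices, not measured results.
instance_a = perf_per_dollar([120.0, 95.0, 210.0], hourly_rate_usd=0.80)
instance_b = perf_per_dollar([100.0, 90.0, 180.0], hourly_rate_usd=0.60)
better = "A" if instance_a > instance_b else "B"
print(f"Better value on these toy numbers: instance {better}")
```

Re‑running this arithmetic with your own region’s pricing is exactly the replication step recommended later in this piece.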
Pricing and value measurement
Phoronix computed performance‑per‑dollar using on‑demand hourly rates for each instance type — the most conservative and broadly comparable cost metric for cloud users who don’t or can’t commit to reserved or spot pricing. This is a practical choice for a vendor‑neutral apples‑to‑apples look, but real procurement decisions should replicate the analysis using the region, commitment discounts, and sustained‑use pricing relevant to the buyer.
Key findings — raw performance and price‑performance
Oracle E6 vs E5 (same price tier)
- On selected compute‑heavy workloads, Oracle E6 shapes delivered up to ~2× the performance of E5 shapes at the same price point, consistent with AMD and Oracle marketing that highlights a generational uplift from Turin‑based SKUs over the prior E5 parts. The geomean uplift varies by workload class, but the E6 family was generally faster across CPU‑bound workloads. AMD’s own positioning for 5th‑Gen EPYC forecasts major generational gains that align with these results.
- The wins were most pronounced on throughput‑oriented, highly parallel workloads that benefit from higher clock headroom and Zen‑5 IPC gains. Cache‑sensitive or single‑thread‑dominated tasks showed smaller, workload‑specific deltas.
Oracle E6 vs other cloud providers (16‑vCPU comparisons)
- Against AWS Graviton4 (m8g.4xlarge): The Arm‑based Graviton4 instances remain highly competitive on price‑performance for many integer and throughput workloads, and in some benchmarks Graviton4 either matched or slightly exceeded the E6 in performance‑per‑dollar, especially where Arm‑optimized toolchains and code generation yield benefits. For workloads where x86 vectorization or specific instruction set features (AVX‑512 family variants) provide a real advantage, the E6 often took the lead.
- Against Azure D16ds_v6 / D16pds_v6: Azure’s D_v6 family spans both x86 Emerald Rapids and Microsoft/Arm variants (D16pds_v6 with Arm cores). Performance varied per workload — Microsoft’s x86 Emerald Rapids offerings retain strong single‑thread and predictable latency behavior; the D16pds_v6 Arm variants can be highly efficient for scale‑out server workloads. Pricing and availability differences by region matter greatly for dollar comparisons.
- Against Google c4 / c4a (Axion/Arm): Google’s Axion (C4A) Arm custom cores are explicitly positioned for strong price‑performance and energy efficiency, and in several Google‑published examples Axion‑based instances show substantial cost advantages on typical cloud services. Phoronix’s tests show Axion/c4a instances often offer compelling performance for scaled workloads, narrowing (or in some cases beating) the Oracle E6 on price‑sensitive throughput tasks.
Price‑performance nuance
- The headline is not universally “E6 is best.” Instead, the tests show a clear pattern: E6 is an excellent, cost‑efficient x86 choice for mixed latency‑sensitive and throughput workloads that benefit from Zen‑5 IPC and higher turbo clocks, while modern Arm platforms (Graviton4, Axion) deliver compelling alternatives where Arm‑native workloads or optimized toolchains are available.
- For many users, the right choice will be workload‑dependent:
- Long‑running, highly parallel compute (render farms, batch encodes) → E6 or Arm high‑throughput shapes depending on toolchain tuning.
- Latency‑sensitive workloads with single‑thread bursts, or workloads that rely on x86‑only binaries/optimizations → E6 or Azure x86 shapes.
- Native Arm ecosystems, or price‑conscious scale‑out services → Graviton4 or Axion variants can be superior in price‑performance.
Technical analysis — strengths, where the E6 shines
- Higher sustained and turbo clocks: The EPYC 9J45 SKU’s higher boost behavior (documented by Oracle as up to 4.1 GHz) helps with mixed single‑thread bursts inside parallel workloads and with tasks that depend on short‑term turbo headroom. That directly translates into better wall‑clock times in many developer toolchain tasks and some render kernels.
- Zen 5 IPC gains and platform I/O: Zen 5 design improvements yield measurable IPC advantages versus older Zen families. Combined with EPYC platform memory bandwidth and PCIe Gen5 lanes, those improvements keep CPUs fed with data at scale — particularly important in CPU‑heavy cloud workloads. AMD’s public specifications and comparative claims support the general magnitude of gains reported in independent tests.
- Vendor co‑engineering and cloud SKU tailoring: Oracle’s choice of a custom EPYC SKU (9J45) lets it tune the platform for cloud density and multi‑tenant stability, which matters for predictable performance in shared or noisy environments. That co‑design is visible in the higher BM (bare‑metal) networking and local NVMe options for large E6 BM shapes.
Risks, caveats, and what the tests don’t prove
- Workload specificity: Benchmarks are workload dependent. The Phoronix suite focused on CPU‑bound, multi‑threaded tasks. Results will differ for storage‑heavy, latency‑sensitive databases, GPU‑accelerated ML, or network‑bound services. Procurement decisions should always be validated with representative, in‑house workloads.
- OS/toolchain sensitivity: Minor differences in kernel, compiler, libc, or runtime versions can swing percentiles. Phoronix’s methodology intentionally used default software stacks; tuned images or vendor‑optimized drivers can change outcomes. In short: the numbers are directional, not absolute.
- Region and pricing variability: Price‑per‑hour varies by cloud region and over time. Phoronix used on‑demand pricing to standardize comparisons; committed/spot/CUD pricing will change the value calculations materially for production deployments. Always re‑run the performance‑per‑dollar math for your target region and discount commitments.
- Reproducibility and telemetry: For critical procurement choices, run repeated, reproducible tests capturing kernel, BIOS/microcode, and driver versions. Small firmware or hypervisor changes can flip a result in tight comparisons. Phoronix’s public test logs (OpenBenchmarking) provide a starting point to reproduce and extend tests.
- Unverifiable or transient claims: Vendor marketing materials sometimes express “up to” improvements that depend on precise configurations; where a vendor quote isn’t backed by independent third‑party measurements for a workload that matches your environment, treat the claim cautiously. Where the public runs differ from the headline article text (for example, variations in the OS listed in run logs), that discrepancy should be called out and investigated before making firm architectural choices.
Practical guidance for cloud buyers and engineers
- Identify representative workloads (exact binaries, data sets, and runtime flags).
- Reproduce the Phoronix‑style cross‑cloud run in your target region(s) using the exact images and instance sizes you plan to deploy.
- Include cost scenarios: on‑demand, reserved/committed, and spot/interruptible pricing to reflect real purchasing models.
- Capture observability: kernel versions, microcode, firmware, and hypervisor details for each run to explain variance.
- For production fleets, prefer a pilot fleet and measure TCO over a realistic multi‑month window (including egress, storage, and networking costs).
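The observability step in the checklist above can be sketched as a small capture script. This is a minimal example under stated assumptions: the helper names are hypothetical, and the /proc/cpuinfo parsing assumes a Linux guest.

```python
import json
import platform
import subprocess


def run(cmd):
    """Best-effort command capture; returns None if the tool is unavailable."""
    try:
        return subprocess.check_output(cmd, text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return None


def capture_environment():
    """Record the host details needed to explain run-to-run variance."""
    return {
        "kernel": platform.release(),
        "machine": platform.machine(),
        "os": platform.platform(),
        # Linux-specific: CPU model and microcode revision from /proc/cpuinfo.
        "cpu_model": run(["sh", "-c", "grep -m1 'model name' /proc/cpuinfo"]),
        "microcode": run(["sh", "-c", "grep -m1 microcode /proc/cpuinfo"]),
    }


# Store this JSON alongside each benchmark run so results remain auditable.
print(json.dumps(capture_environment(), indent=2))
```

Saving this record next to each result file makes it possible to explain later why two seemingly identical runs diverged (e.g., a microcode or kernel update between them).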
Bottom line — how to read these results
Oracle’s E6 shapes, leveraging the custom EPYC 9J45 Turin SKU, are a meaningful upgrade over the prior E5 generation in many CPU‑bound cloud workloads. For x86‑centric applications that require predictable single‑thread bursts plus high throughput across many cores, E6 is an outstanding option. However, modern Arm‑based cloud silicon (AWS Graviton4, Google Axion) remains a strong, often lower‑cost alternative for many scale‑out workloads — and the right choice is workload and price model dependent. Phoronix’s broad test sweep and the underlying OpenBenchmarking logs provide useful, reproducible starting points for anyone planning an OCI vs. AWS/Azure/GCP evaluation, but every organization should validate with its own representative tests and pricing scenarios.
Appendix — verification notes and source checklist
- Oracle compute‑shape docs list VM.Standard.E6 as using AMD EPYC 9J45, base 2.7 GHz, boost to 4.1 GHz; Oracle’s docs also explain OCPU semantics used in the tests.
- AMD’s public press materials for 5th‑Gen EPYC (Turin / EPYC 9005) describe Zen 5 IPC gains, product stack breadth, and targeted performance claims that support the generational uplift seen in third‑party tests.
- Phoronix/OpenBenchmarking test‑suite result bundles for E6/E5 comparisons and cross‑cloud runs are published and contain per‑run logs, configuration, and output — use these to reproduce or audit the recorded runs. Note that some per‑run logs show Oracle Linux variants depending on the specific test harness used; treat the high‑level article’s OS baseline as the intended cross‑cloud comparison and check individual run files for exact OS images.
- Google Axion (C4A) public materials and Google Cloud blog posts document Axion’s Arm‑centric positioning and price‑performance claims that are corroborated by third‑party user reports and vendor case studies.
- AWS and Azure instance family specs and public cloud instance trackers provide the practical instance attributes (vCPU counts, memory, and publicized base frequencies) required to map the 16‑vCPU comparison matrix. These community and vendor pages are useful for verifying instance family equivalence when matching vCPU counts and expected platform behavior.
Oracle’s E6 shapes are a pragmatic, well‑engineered x86 offering for users who need a high‑performance, general‑purpose compute option in OCI; they should be treated as a top contender in any cloud evaluation alongside the increasingly capable Arm alternatives. The best practice remains: pick representative workloads, reproduce the tests in your target regions, and evaluate cost and performance across the contract types you actually plan to use.
Source: Phoronix Oracle OCI Compute E6 Benchmarks For Leading AMD EPYC Turin Performance In The Cloud Review - Phoronix