Trust store shift: Certifi drops e‑Tugra roots amid CVE‑2023‑37920

Certifi’s decision to remove e‑Tugra root certificates—tracked as CVE‑2023‑37920—was a corrective security action that rippled across software ecosystems and vendor supply chains, but it also exposed a practical tension: removing a distrusted root protects integrity while simultaneously risking availability for services that still depend on that root. In short, the “fix” created real-world outages for some users and systems, and the episode is a valuable case study in how trust‑store management, package updates, and downstream dependency chains interact in modern infrastructure.

Background / Overview

Certifi is a widely used Python package that provides a curated bundle of trusted root certificates for TLS validation in Python applications. In July 2023 Certifi released version 2023.07.22 to remove root certificates associated with the Turkish CA known as e‑Tugra following an investigation and reporting of security issues in that CA’s systems. The removal was documented as CVE‑2023‑37920 and recorded by multiple vulnerability databases and vendor advisories.
Why this matters: applications that rely on Certifi (or on downstream OS/distribution ca‑certificates packages that incorporated the change) suddenly stopped trusting any TLS leaf certificates chained to the removed e‑Tugra roots. For services still presenting certificates issued under those roots, clients using the updated trust stores would fail TLS verification, causing connection failures and, in affected deployments, degraded or fully unavailable services. Multiple vendors (Linux distributions, cloud images, and enterprise products) tracked the issue and pushed updates or advisories.
This article explains what happened, why it produced availability outcomes that in some cases match the denial‑of‑service definitions used by major security authorities, how organizations should evaluate the risk to their environments, and practical remediation and mitigation steps for administrators and developers.

What CVE‑2023‑37920 actually is

The technical core

  • Certifi is a curated root store used by many Python applications to validate TLS server certificates.
  • Certifi versions prior to 2023.07.22 included the e‑Tugra root certificates; Certifi 2023.07.22 removed those roots. The change was made after investigation into security issues reported in e‑Tugra’s systems.
  • The vulnerability entry (CVE‑2023‑37920) is not a code‑execution bug in Certifi; it is an operational/trust issue: either the presence of a root that may be compromised poses integrity risk, or the removal of the root causes validation failures for legitimate services still using that CA. The metadata assigns this to weaknesses in verification/trust posture (CWE‑345 in some trackers).
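To make the dependency concrete, here is a minimal sketch (standard library plus the certifi package) showing how a Python client ends up trusting exactly the roots Certifi ships — the bundle is just a PEM file on disk that client libraries hand to OpenSSL:

```python
import ssl

import certifi

# certifi.where() returns the filesystem path of the bundled cacert.pem.
# HTTP libraries such as requests pass this path to OpenSSL, so the set of
# trusted roots is whatever the installed certifi release contains.
print("bundle path:", certifi.where())
print("certifi release:", certifi.__version__)

# A verifying TLS context built from that bundle: after 2023.07.22 it no
# longer contains the e-Tugra trust anchors.
ctx = ssl.create_default_context(cafile=certifi.where())
print("trust anchors loaded:", len(ctx.get_ca_certs()))
```

Swapping the certifi release therefore silently changes which servers every such client will talk to — which is exactly the mechanism behind this incident.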

Severity, scoring, and downstream classification

Different security trackers assigned distinct severity scores depending on their assessment model:
  • Several distribution and cloud advisories rated the issue as Important/High, with CVSS v3 scores commonly cited around 7.4–7.5 (reflecting high impact to integrity but not always confidentiality).
  • Other aggregations (pulling broader impact contexts) reported a 9.8 figure; variations reflect different assumptions about scope, exploitability, and the number of affected downstream packages and systems. Practitioners should therefore check vendor‑specific advisories for final prioritization.
Put plainly: the action was deliberate and security‑focused, but the downstream operational impact was non‑trivial and required coordination.

Timeline and vendor responses

Chronology (concise)

  • Reports and investigation into e‑Tugra’s systems triggered concern at major root‑store maintainers.
  • Certifi released 2023.07.22 removing e‑Tugra roots from its bundle.
  • Linux distributors, cloud vendors, and enterprise vendors tracked the change and published advisories or updated their ca‑certificates packages to reflect the removal. Examples include Amazon Linux, Debian, Ubuntu, SUSE and various enterprise products that bundle Certifi or rely on distribution certificates.
  • Some downstream products and appliances required vendor patches or configuration changes to restore connectivity to servers that still used e‑Tugra‑issued certificates. IBM, NetApp, Oracle and others logged advisories describing impact and required fixes.

Notable vendor actions

  • Distributions: Many Linux distributions synchronized their ca‑certificates packages with the upstream decisions; some patched quickly and backported fixes. Debian documented how their python‑certifi package was adapted to reference Debian‑provided CA bundles where appropriate.
  • Cloud images: Amazon Linux published ALAS advisories and updated its ca‑certificates to match.
  • Enterprise products: Vendors whose appliances or SaaS offerings embed Certifi published product‑specific advisories and released updated packages or configuration guidance (IBM, Oracle, NetApp examples).

Availability impact — why a root removal can cause outages

At first glance, removing a potentially compromised root looks purely safety‑first. But availability consequences are immediate and practical:
  • TLS validation is binary in most client stacks: if the validated chain does not end in a trusted root, the handshake fails. That prevents new connections to affected hosts. If your application depends on TLS endpoints (APIs, package indexes, update servers, monitoring endpoints) that still present certificates chaining to e‑Tugra, clients using updated trust stores will fail to establish TLS sessions. This can manifest as:
      • New TLS sessions failing (clients cannot download data or send telemetry).
      • Service‑to‑service calls failing, causing cascading failovers or resource exhaustion.
      • Management tooling (configuration agents, patching clients, container images) failing to update or bootstrap.
  • For systems that heavily rely on such connectivity (call‑home, license servers, update feeds), the removal can produce a sustained loss of availability for as long as the client uses the updated trust store, or a persistent outage if the service operator never replaces or reissues certificates under a trusted CA. These outcomes match the availability‑loss descriptions used by major vulnerability frameworks: Microsoft's characterization of "total loss of availability" is precisely the operational reality when TLS checks are enforced and clients refuse to talk to the distrusted CA.
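A quick way to observe this failure mode from the client side is a small probe that attempts a verified handshake and reports the verification error. A standard-library sketch (hostnames are whatever you need to test):

```python
import socket
import ssl


def tls_probe(host: str, port: int = 443) -> str:
    """Attempt a verified TLS handshake and describe the outcome."""
    ctx = ssl.create_default_context()  # uses the default trust store
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return "ok"
    except ssl.SSLCertVerificationError as exc:
        # This is what clients saw against e-Tugra-chained servers after the
        # trust-store update, e.g. "unable to get local issuer certificate".
        return f"verify failed: {exc.verify_message}"
    except OSError as exc:
        return f"unreachable: {exc}"
```

Note the ordering of the except clauses: ssl.SSLCertVerificationError is a subclass of OSError, so the specific verification failure must be caught first to distinguish trust problems from plain connectivity problems.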
Historical precedent reinforces this: removing a root is a blunt instrument that can immediately revoke trust for legitimate services, as seen in prior incidents where vendors removed suspect roots (for example, vendor actions around DigiNotar and other CA removals). Those removals protected integrity but also required remediation for legitimate services.

Real‑world consequences (examples and patterns)

  • Enterprise storage and monitoring products that embed Python stacks and Certifi saw immediate fallout: agents failed to connect to backend servers or licensing/telemetry endpoints until vendors released updated packages or customers reconfigured trust paths. IBM and similar suppliers documented affected product versions and advised updates.
  • Distributions and packaged images that propagated the updated certificate bundles caused containerized applications to experience connectivity failures when those applications relied on embedded roots or specific CA chains. Administrators who pinned old images or did not rebuild them found applications unable to reach essential endpoints.
  • The event highlighted supply chain brittleness: a change to a small, widely reused package (Certifi) cascaded into heterogeneous vendor ecosystems. Several vendors issued guidance to update their products’ Certifi or ca‑certificates packages.

Strengths of the Certifi decision (security-first rationale)

  • Integrity protection: Removing a root that may have been subject to compromise is the correct action to prevent silent man‑in‑the‑middle attacks by adversaries who could issue certificates under that root. Numerous parties (Mozilla, other root maintainers) coordinated similar removals for e‑Tugra roots. The action prevents trust in certificates that cannot be reliably vouched for.
  • Precedent and transparency: The change was tracked publicly (advisories and the GitHub Advisory Database), enabling incident‑response teams to locate the cause and apply fixes. The Certifi project’s change was visible and therefore easy to trace during vendor triage.
  • Upstream‑driven mitigation: By eliminating a risky trust anchor early, package consumers benefit from a conservative, centralized decision instead of relying on each downstream implementer to detect CA compromise.

Risks and operational downsides

  • Availability loss for legacy or misconfigured services: Services still using e‑Tugra‑issued certificates simply stop being trusted by updated clients. That may produce outages until the server operator reissues certificates under a trusted CA or customers implement local exceptions.
  • Patch and coordination burden: Organizations running mixed or unmanaged fleets must identify which systems use the Certifi bundle (or a packaged ca‑certificates bundle tracing the change) and orchestrate updates, which can be time‑consuming and error‑prone.
  • Inconsistent scoring and prioritization: As we saw with differing CVSS values across trackers and vendors, a single vulnerability ID covering an operational change is open to varied interpretations. That inconsistency can complicate automated remediation policies that act based on severity thresholds.
  • Potential for unsafe workarounds: The most tempting immediate fix—rolling back to an older Certifi that still trusts e‑Tugra—reintroduces the exact integrity risk that motivated the removal. Organizations that adopt such workarounds without compensating controls expose themselves to real MITM danger.

Detection and triage: how to find if you’re affected

Short checklist for defenders and admins:
  • Inventory Python applications and containers that use requests, urllib3, pip, or other tooling that may import certifi. Not all Python TLS validation uses certifi; some rely on the OS certificate store (OpenSSL/ca‑certificates).
  • Check versions: certifi releases from 2015.04.28 up to (but not including) 2023.07.22 are in scope for the vulnerability record (i.e., they still trusted the e‑Tugra roots). The patched release is 2023.07.22.
  • Monitor TLS failures in logs: look for TLS validation errors referencing untrusted issuer/root or "certificate verify failed" errors in Python client logs. These are typical signs that the client no longer trusts the server chain.
  • Network sampling: record TLS handshakes (or review proxy/NGINX/TLS‑termination logs) to find servers presenting an issuer chain that terminates in the e‑Tugra roots; those are the servers needing reissuance. Tools that extract the full certificate chain (for example, openssl s_client -showcerts) are useful in bulk scans.
  • Vendor advisories: track vendor CVEs and product advisories for items that bundle Certifi or for packaged ca‑certificates that mirror Certifi changes (distributions and appliances often publish exact package names and versions).
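The version check in the list above is easy to automate across a fleet. A sketch, where the threshold follows the advisory's affected range (Certifi uses calendar versions, so a simple tuple comparison suffices):

```python
from importlib import metadata

# First release without the e-Tugra roots, per the advisory.
PATCHED = (2023, 7, 22)


def certifi_is_vulnerable(version: str) -> bool:
    """True if this certifi release still ships the e-Tugra roots."""
    # Certifi uses calendar versions like "2023.5.7" or "2023.07.22";
    # integer comparison handles both plain and zero-padded forms.
    return tuple(int(part) for part in version.split(".")) < PATCHED


def installed_certifi_is_vulnerable() -> bool:
    """Check the certifi release installed in the current environment."""
    return certifi_is_vulnerable(metadata.version("certifi"))


print(certifi_is_vulnerable("2023.5.7"))    # → True (affected release)
print(certifi_is_vulnerable("2023.07.22"))  # → False (patched release)
```

Running installed_certifi_is_vulnerable() inside each virtual environment or container image gives a quick inventory signal without network access.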

Remediation and mitigation: recommended steps

  • Inventory and prioritize (Immediate)
      • Identify Python apps, containers, and appliances that use Certifi or a packaged ca‑certificates manifest that includes Certifi’s changes.
      • Prioritize internet‑facing services, update clients (agents and backup systems), and security‑critical telemetry pipelines.
  • Patch (Primary fix)
      • Update Certifi to version 2023.07.22 or later, which explicitly removes the e‑Tugra certificates from the bundle and is the official remediation. For systems where Certifi is supplied by OS packages, install the vendor’s ca‑certificates patch that aligns with that change.
  • Reissue server certificates (if you operate affected servers)
      • If servers still present e‑Tugra‑issued leaf certificates, obtain new certificates from a trusted CA and deploy them. This permanently resolves client failures without undermining the security posture.
  • Favor secure exceptions only when unavoidable (temporary, controlled)
      • If you must restore availability immediately and cannot patch or reissue certificates quickly, use tightly scoped, temporary exceptions:
          • Add specific server certificates to a local trusted store with strict expiry and logging.
          • Use short‑lived exception windows and require follow‑up reissuance.
          • Do not globally downgrade the Certifi bundle or permanently trust the removed root.
  • Improve monitoring and change control
      • Monitor TLS error rates after patches.
      • Add certificate‑chain checks to CI/CD and configuration management (reject builds whose trust stores include disallowed roots).
      • Use Certificate Transparency logs and automated scanning to detect server chains that depend on deprecated roots.
  • Long‑term resilience
      • Where possible, shift critical services to automated certificate issuance (ACME/Let’s Encrypt or enterprise PKI with automated renewal) to avoid prolonged dependence on a single CA.
      • Implement defensive pinning or additional verification steps for extremely high‑value connections, but do so cautiously (pinning brings its own operational risks).
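One way to implement the "tightly scoped exception" described above is to load the certifi bundle plus a single extra certificate into a per-client SSLContext, leaving the global bundle untouched. A sketch; the extra PEM path is hypothetical and must be obtained and verified out of band:

```python
import ssl
from typing import Optional

import certifi


def client_context(extra_ca_pem: Optional[str] = None) -> ssl.SSLContext:
    """Verifying TLS context: certifi bundle plus an optional scoped exception.

    extra_ca_pem points at a PEM file holding the one certificate you
    temporarily need to trust (hypothetical path, verified out of band).
    The exception lives only in contexts built here; the global bundle
    and every other client on the host are unaffected.
    """
    ctx = ssl.create_default_context(cafile=certifi.where())
    if extra_ca_pem is not None:
        # Adds to, rather than replaces, the already-loaded trust anchors.
        ctx.load_verify_locations(cafile=extra_ca_pem)
    return ctx
```

Because hostname checking and CERT_REQUIRED stay enabled, this is far safer than the tempting alternatives (verify=False or pinning an old certifi release), and the exception is trivially removable once the server reissues its certificate.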

Practical playbook for common roles

For platform engineers and SREs

  • Run a global scan of endpoints used by the fleet to identify any servers still presenting e‑Tugra chains.
  • Stagger updates to Certifi and dependent packages to avoid simultaneous mass failures, and test updates in canary groups first.
  • If your observability stack uses Certifi (many Python monitoring agents do), update them before or in tandem with application stacks.
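The global fleet scan in the first bullet can be sketched as a thread pool over a simple verified-handshake check (the host list is illustrative; feed in your real endpoint inventory):

```python
import socket
import ssl
from concurrent.futures import ThreadPoolExecutor


def verifies(host: str, port: int = 443) -> bool:
    """True if a verified TLS handshake to host succeeds."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except OSError:  # covers ssl.SSLError, timeouts, and DNS failures
        return False


# Hypothetical fleet endpoints for illustration only.
hosts = ["api.example.internal", "telemetry.example.internal"]
with ThreadPoolExecutor(max_workers=16) as pool:
    failing = [h for h, ok in zip(hosts, pool.map(verifies, hosts)) if not ok]
print("endpoints failing verification:", failing)
```

A failure here does not distinguish an e‑Tugra chain from ordinary outages, so follow up on hits with a chain dump (openssl s_client -showcerts) before concluding the trust-store change is the cause.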

For security teams

  • Treat this event as a supply‑chain/infrastructure risk: update asset inventories to include CA dependencies and trust‑store ownership.
  • Add CA removals and trust‑anchor changes to your incident playbooks; exercise these playbooks in tabletop runs.

For product vendors and OEMs

  • If your appliance embeds Python and Certifi, release coordinated, easy‑to‑apply updates and clear rollback guidance that does not encourage reintroducing the compromised root as a permanent fix.
  • Communicate clearly to customers: what versions are affected, whether reissue is required on the server side, and whether short‑term exceptions are supported.

Why this episode matters for the wider security community

  • It highlights how trust anchors are critical parts of infrastructure. Decisions about trust stores are not purely academic — they cause immediate operational effects across a broad supply chain.
  • It exposes a recurring trade‑off: integrity vs availability. Security responders must balance the need to block a suspect root against the operational damage of blocking it immediately.
  • It reinforces the need for robust dependency hygiene. Small, ubiquitous libraries like Certifi can create systemic exposure when they change. The industry needs better automation and observability around trust‑store changes.
  • It shows the value of vendor coordination. Where root removals are required, synchronized communication among root maintainers, distributions, and major vendors reduces downstream surprise.

Caveats, unanswered questions, and things to watch

  • Scoring inconsistency: you will see different CVSS numbers across trackers and vendors; this is expected because scoring depends on attack assumptions and distribution footprints. Always consult vendor advisories for prioritization.
  • Evidence of exploitation: as of the reporting milestone for this advisory there were no broadly publicized exploits leveraging the presence of e‑Tugra roots to perform large‑scale MITM attacks, but the potential risk from a compromised root is sufficiently severe to justify removal. That said, the downstream availability problems created very tangible business impact.
  • Unverifiable or vendor‑specific claims: certain site‑level writeups (community blogs or aggregator posts) sometimes overstate the availability impact or conflate the issue with unrelated TLS bugs. When in doubt, rely on primary vendor advisories (the Certifi project advisory and distribution security notices) for precise remediation steps.

Closing analysis — lessons and recommendations

The Certifi / e‑Tugra episode (CVE‑2023‑37920) is an instructive case study in modern vulnerability management, blending cryptographic trust decisions with systems reliability. It demonstrates that:
  • A well‑intentioned security action at the root level can create immediate availability impacts for downstream users who must respond operationally.
  • Organizations should treat trust stores and certificate‑authority dependencies as first‑class inventory items, not as invisible infrastructure.
  • The safest long‑term posture is to avoid single points of fragility: automate certificate lifecycle management, adopt robust monitoring for TLS failures, and embed CA‑change detection into release verification.
Actionable short list (prioritized)
  • Inventory all systems that use Certifi or ship independent certificate bundles.
  • Update Certifi (or vendor ca‑certificates) to the patched versions; if running distribution images, apply the vendor ca‑certificates update.
  • Reissue any server certificates chained to e‑Tugra with a trusted CA immediately.
  • Use only controlled, temporary exceptions if you must restore availability; document and remediate those exceptions on an accelerated timeline.
  • Build trust‑anchor change detection and certificate‑chain scanning into your CI/CD and fleet management tools.
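Trust‑anchor change detection can start very simply: certifi's cacert.pem carries human‑readable issuer labels alongside the PEM blocks, so a CI step can scan the bundle for roots your policy disallows. A sketch (the "E-Tugra" label string is an assumption about how the root was named in older bundles; substitute the labels from your own blocklist):

```python
import certifi


def bundle_contains(label: str) -> bool:
    """Scan the installed certifi bundle for a human-readable issuer label."""
    with open(certifi.where(), encoding="utf-8") as fh:
        return label in fh.read()


# With certifi 2023.07.22 or later this should report False.
print("e-Tugra roots present:", bundle_contains("E-Tugra"))
```

Wiring an assertion like this into image builds turns a silent trust-store change into an explicit, reviewable build failure.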
Removing a compromised root is the morally correct and technically necessary choice to protect the integrity of TLS at scale. But the operational cost of doing so—manifested here as broken connections and partial or total availability loss for affected systems—must be anticipated and managed. Treating trust‑store changes as inevitable maintenance events, not emergency patches, and investing in automation around certificate management are the clearest ways to reduce the collateral damage the next trust‑anchor rotation will cause.


Source: MSRC Security Update Guide - Microsoft Security Response Center
 
