Most Actively Exploited Flaws Remain Unpatched for Months in Large Organisations

Nearly nine out of ten large organisations exposed to vulnerabilities that are already being exploited in the wild leave those critical weaknesses unpatched for six months or longer, a new analysis of more than 2,000 firms indicates — a finding that sharpens focus on a long‑running problem in corporate cybersecurity and raises urgent questions for IT operations, risk teams, and insurers alike.
Background
KYND, a London‑based cyber risk analytics firm, analysed exposure data across more than 2,000 organisations — including firms drawn from major indices — and reported that roughly 11% of those organisations were exposed to vulnerabilities that threat actors have already weaponised in real‑world attacks. Of that exposed group, KYND found that 88% remained vulnerable for six months or more despite fixes being available. The study also identified the most common classes of exposed vulnerabilities and named remote code execution (RCE) as the leading issue, accounting for 31% of the top vulnerability types observed.
These headline numbers are consistent with contemporaneous industry reporting and public vulnerability advisories, and they align with a wave of high‑profile incidents in late 2025 that underscored how quickly attackers can turn public disclosures into active campaigns. One such incident — a critical deserialization flaw in Windows Server Update Services (WSUS) tracked as CVE‑2025‑59287 — prompted emergency updates and government action after exploitation was observed in the wild, illustrating how a single unpatched, highly privileged service can multiply systemic risk.
That combination of empirical analysis and concrete incidents creates an unambiguous narrative: patch availability does not equal patch adoption, and delayed remediation of actively‑exploited flaws is a chronic, enterprise‑scale vulnerability.

Why the KYND findings matter​

  • Active exploitation raises urgency. When a vulnerability sits in the ‘actively exploited’ bucket, attackers have working techniques, tooling, or proof‑of‑concept code to weaponise it. That makes any delay in fixing materially more dangerous than a theoretical or low‑confidence discovery.
  • Scale matters. The study looked across thousands of firms and found persistent behaviour patterns; the risk is not isolated to a few laggards but appears as a portfolio‑level problem.
  • Operational discipline is a signal. KYND’s analysis (and the industry response to it) frames remediation speed as a proxy for organisational maturity: slow patching often indicates deeper issues with asset inventory, change processes, or resource constraints.
  • Insurance and underwriting are changing. Insurers increasingly treat remediation timelines and patching practices as underwriting inputs rather than simply counting vulnerabilities, shifting how cyber risk is priced and managed.

The WSUS case study: how a single flaw amplified systemic risk​

What happened​

In October 2025, a critical remote code execution vulnerability affecting Windows Server Update Services (WSUS) — identified as CVE‑2025‑59287 — was publicly disclosed and then observed in active exploitation. Initial vendor updates proved incomplete; Microsoft issued emergency out‑of‑band patches after researchers and incident responders documented exploitation attempts. A government cyber agency subsequently placed the CVE on its list of known exploited vulnerabilities and issued patching directives requiring rapid remediation for federal systems.

Why WSUS mattered​

WSUS is often deployed as a trusted internal patch distribution service. A fully exploitable RCE in WSUS allows an unauthenticated attacker to run arbitrary code with SYSTEM privileges on a high‑privilege host. From that foothold, an attacker can potentially:
  • Tamper with update distribution to propagate malicious payloads
  • Move laterally across the enterprise network
  • Exfiltrate data or deploy post‑compromise toolkits
  • Disrupt patching processes, turning the supply chain of software updates into an attack vector
This is the very scenario KYND warns about: preventable vulnerabilities whose continued exposure can turn into catastrophic incidents.

Where organisations trip up: five persistent operational failures​

The slow remediation KYND documents is rarely the result of a single cause. Instead, it emerges from a combination of technical, organisational, and economic frictions:
  • Poor or incomplete asset inventory. If organisations can’t reliably answer “what do we have?”, they can’t reliably answer “what needs patching?” Unknown instances, shadow deployments, and unmanaged endpoints silently erode patch coverage (a minimal detection check is sketched after this list).
  • Change management friction. Rigid change windows, lengthy QA cycles, and dependence on vendor‑specific testing can push critical patches out by weeks or months.
  • Legacy and bespoke systems. Older platforms, custom integrations, and unsupported versions create dependency chains where patching one component threatens compatibility or availability elsewhere.
  • Risk tolerance and testing gaps. Teams sometimes delay critical updates because of fear of breaking production flows; without robust rollback and staging strategies, the perceived risk of patching can outweigh the real risk of staying exposed.
  • Resource constraints and prioritisation. Security teams overwhelmed by a high volume of alerts and vulnerabilities — many of them low priority — struggle to triage and escalate the few that truly matter.
Each of these failures is fixable, but they require different investments: better asset discovery, automated testing, stronger change orchestration, and governance that aligns business risk with security operations.
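The inventory failure in particular lends itself to a simple mechanical check. The sketch below is illustrative only, assuming a network discovery scan and a CMDB export are each available as sets of host identifiers (both data sources and values are hypothetical): any host visible on the network but missing from the managed inventory is, by definition, outside patch coverage.

```python
# Minimal sketch: flag hosts seen by network discovery but absent from the
# managed inventory (CMDB). Data sources and values are hypothetical.

def find_unmanaged(discovered: set[str], managed: set[str]) -> set[str]:
    """Hosts visible on the network but not under patch management."""
    return discovered - managed

# Illustrative stand-ins for a discovery scan result and a CMDB export.
discovered_hosts = {"10.0.1.5", "10.0.1.9", "10.0.2.17"}
managed_hosts = {"10.0.1.5", "10.0.2.17"}

for host in sorted(find_unmanaged(discovered_hosts, managed_hosts)):
    print(f"Unmanaged endpoint outside patch coverage: {host}")
```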

Technical reality: why RCEs and certain platforms dominate exposure lists​

KYND’s analysis found remote code execution to be the most pervasive class of actively‑exploited weakness. There’s a technical logic to that finding.
  • RCEs are high leverage. They let an attacker execute arbitrary commands without credentials and frequently yield privilege escalation and persistence pathways.
  • Widely‑deployed platforms multiply exposure. Issues in commonly used stacks — web servers, content platforms, and database middleware — broaden the attack surface dramatically because the same bug recurs across many organisations at once.
  • Networked infrastructure increases blast radius. Vulnerabilities in routing, switching, secure communications, and update distribution services (like WSUS) can be particularly damaging since they provide mechanisms for mass compromise or lateral movement.
Platforms frequently mentioned in exposure reporting include enterprise databases, web content management systems, open‑source web servers, and networking gear. The practical takeaway for defenders is clear: focus remediation on high‑privilege services, internet‑exposed management interfaces, and components capable of remote, unauthenticated code execution.
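That prioritisation rule can be encoded as a crude score. The sketch below is a minimal illustration, not KYND’s methodology: the asset fields and weights are assumptions chosen to reflect the priorities named above, with unauthenticated RCE weighted heaviest, followed by internet exposure and privilege level.

```python
# Minimal sketch of the prioritisation rule above. Fields and weights are
# illustrative assumptions, not a standard scoring scheme.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    internet_exposed: bool
    high_privilege: bool  # e.g. runs as SYSTEM, or distributes updates
    unauth_rce_cves: list[str] = field(default_factory=list)

def remediation_priority(asset: Asset) -> int:
    """Higher score = patch sooner. Weights are illustrative."""
    score = 0
    if asset.unauth_rce_cves:
        score += 3   # remote, unauthenticated code execution dominates
    if asset.internet_exposed:
        score += 2   # reachable attack surface
    if asset.high_privilege:
        score += 2   # large blast radius from a single foothold
    return score

assets = [
    Asset("wsus01", internet_exposed=False, high_privilege=True,
          unauth_rce_cves=["CVE-2025-59287"]),
    Asset("public-cms", internet_exposed=True, high_privilege=False),
]
for a in sorted(assets, key=remediation_priority, reverse=True):
    print(f"{a.name}: priority {remediation_priority(a)}")
```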

Verification and limits: what the numbers mean — and what they don’t​

The headline figures KYND reported — sample size, exposure percentages, and vulnerability class shares — were widely covered by industry outlets and referenced in KYND communications. These independent confirmations make the overall pattern credible. However, two important caveats should temper interpretation:
  • Methodology transparency. Public reports summarise findings but do not publish a raw dataset or a full methodology breakdown suitable for independent re‑analysis. That limits external verification of sample selection and exact measurement criteria.
  • Outside‑in vs inside‑out detection. Tools that measure exposure from the outside (internet/endpoint telemetry, open‑source scans) can over‑ or under‑count certain classes of vulnerability compared with internal asset scans and patch‑management inventories. Both perspectives are valuable but measure slightly different questions.
Given those limits, the prudent reading is this: the magnitude and direction of KYND’s findings are supported by multiple independent observations and public advisories — the existence of persistent exposure to actively‑exploited vulnerabilities is well validated — but precise percentages and distributions should be treated as strong signals, not immutable facts.

Insurance and underwriting: remediation speed as a risk metric​

One of the more consequential shifts signalled by KYND’s work is the way cyber insurers are moving beyond simple vulnerability counts.
  • From counts to cadence. Underwriters increasingly care about how quickly a firm remediates critical issues, not just how many issues it has at a point in time. Persistent delays are interpreted as systemic operational weakness.
  • Portfolio effects. Insurers must evaluate aggregated exposure across many insureds. If the same set of firms habitually lags on patching, the insurer’s tail risk increases even if each firm’s point‑in‑time vulnerability count looks manageable.
  • Incentives and contractual change. Policies may evolve to include remediation SLAs, faster incident reporting requirements, or premium adjustments tied to demonstrable patch cadence.
For risk managers and brokers, this means simply showing a low vulnerability count may not be sufficient; demonstrating fast, repeatable remediation workflows and measurable reduction in exposure over time will become underwriting currency.
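A remediation‑cadence metric of the kind underwriters now ask about can be computed from nothing more than detection and fix dates. The sketch below is a minimal illustration under assumed record shapes, not any insurer’s actual formula.

```python
# Minimal sketch: median and 90th-percentile days from detection to fix
# for actively exploited findings. Records and dates are hypothetical.
from datetime import date
from statistics import median, quantiles

# (detected, remediated) date pairs for known-exploited findings.
records = [
    (date(2025, 10, 24), date(2025, 10, 27)),
    (date(2025, 6, 2), date(2025, 11, 30)),
    (date(2025, 3, 15), date(2025, 10, 1)),
]

days_to_fix = [(fixed - found).days for found, fixed in records]
p90 = quantiles(days_to_fix, n=10, method="inclusive")[-1]
print(f"median: {median(days_to_fix)} days, p90: {p90:.0f} days")
```

The tail is the point: a healthy median can coexist with a long tail of six‑month‑old exposures, which is exactly the pattern KYND’s 88% figure describes.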

Practical remediation roadmap: what organisations should do now​

The challenge is operational as much as technical. The following practical steps are proven, action‑oriented measures to reduce exposure windows for actively‑exploited vulnerabilities:
  • Inventory and prioritise.
      • Create a near‑real‑time asset inventory that tracks software versions, roles, and exposure (internet vs internal).
      • Map assets to business risk (crown jewels) and to known‑exploited vulnerability lists.
  • Triage using exploitability and impact.
      • Prioritise fixes for vulnerabilities that are actively exploited, enable RCE, or affect high‑privilege services.
      • Combine CVSS scores, exploit availability, and business context when ranking work (a minimal triage sketch follows this list).
  • Automate patch deployment where possible.
      • Automate patch application, testing, and rollback for common platforms.
      • Implement canary deployments and staged rollouts to reduce the perceived risk of breaking production.
  • Apply compensating controls to hard‑to‑patch systems.
      • Isolate legacy systems behind access controls, micro‑segmentation, or application proxies.
      • Restrict management interfaces to private networks and reduce exposure of default management ports.
  • Strengthen change orchestration and testing.
      • Develop fast‑path change processes for critical security updates, with pre‑approved rollback playbooks.
      • Maintain reliable test environments that mirror production closely enough to validate patches quickly.
  • Instrument and monitor patch effectiveness.
      • Verify patch installation and expected behavioural changes (e.g., service restarts, changed binaries).
      • Continuously monitor for anomalous network traffic, exploitation indicators, and post‑patch regressions.
These steps are straightforward in concept but require investment: automation, cross‑team coordination, and a shift from quarterly batch patching to continuous patching for critical issues.
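To make the triage step concrete, the sketch below checks an internal scanner export against CISA’s public Known Exploited Vulnerabilities (KEV) catalog and escalates any match. The feed URL and JSON field names reflect CISA’s published catalog but should be verified before use, and the scanner export format is a hypothetical stand‑in.

```python
# Minimal triage sketch: escalate open findings whose CVE appears on CISA's
# Known Exploited Vulnerabilities (KEV) catalog. Verify the feed URL and
# JSON shape against current CISA documentation before relying on this.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def load_kev_cves() -> set[str]:
    """Return the set of CVE IDs currently listed in the KEV catalog."""
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        catalog = json.load(resp)
    return {item["cveID"] for item in catalog.get("vulnerabilities", [])}

# Hypothetical scanner export: open CVEs mapped to affected hosts.
open_findings = {
    "CVE-2025-59287": ["wsus01"],
    "CVE-2024-99999": ["legacy-app02"],  # illustrative placeholder, not a real CVE
}

kev_cves = load_kev_cves()
for cve, hosts in open_findings.items():
    if cve in kev_cves:
        print(f"ESCALATE {cve}: actively exploited in the wild, hosts={hosts}")
```

In practice the escalation would feed a ticketing or orchestration system with an SLA clock attached, rather than print to a console.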

For insurers and brokers: practical checks to operationalise remediation speed​

Insurers seeking to operationalise remediation speed as an underwriting metric should consider:
  • Asking for documented remediation SLAs and historical timeliness statistics during underwriting.
  • Requesting evidence of automatic patching capabilities for critical services and demonstration of production‑rollback processes.
  • Running continuous outside‑in monitoring on portfolios to identify persistent non‑remediators early (illustrated in the sketch below).
  • Structuring premium incentives or remediation credits tied to demonstrable reduction in exposure time for known‑exploited‑vulnerability (KEV) items.
This more dynamic approach better aligns the economics of insurance with operational security posture, but it also requires more technical due diligence and ongoing portfolio telemetry.
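To make the outside‑in monitoring check concrete: given dated snapshots of each insured firm’s still‑open known‑exploited exposures (the data shape, firm names, and CVE IDs below are hypothetical), flagging persistent non‑remediators is a one‑pass filter. The 180‑day threshold echoes the six‑month figure in KYND’s findings.

```python
# Minimal sketch: flag portfolio firms with actively exploited exposures
# open beyond a threshold. Firm names, CVE IDs, and dates are hypothetical
# stand-ins for outside-in scan snapshots.
from datetime import date

# firm -> {cve: date first observed} for still-open known-exploited exposures.
portfolio = {
    "acme-corp": {"CVE-2025-12345": date(2025, 4, 1)},  # illustrative CVE ID
    "globex": {},
}

def persistent_non_remediators(as_of: date, threshold_days: int = 180):
    """Yield (firm, stale CVEs) where exposure age meets the threshold."""
    for firm, exposures in portfolio.items():
        stale = [cve for cve, first_seen in exposures.items()
                 if (as_of - first_seen).days >= threshold_days]
        if stale:
            yield firm, stale

for firm, cves in persistent_non_remediators(date(2025, 11, 1)):
    print(f"{firm}: known-exploited exposure open 180+ days: {cves}")
```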

Larger risks and future threats​

The persistent slow remediation problem amplifies several systemic risks:
  • Supply‑chain attack vectors. Compromised update servers or orchestration services can convert a local vulnerability into a supply‑chain catastrophe.
  • Wormability. High‑impact unauthenticated RCEs in widely used services can enable worm‑like propagation — making patch speed a public‑safety problem, not just an IT issue.
  • Regulatory pressure. Governments and sector regulators are increasingly mandating timelines for remediation of known exploited vulnerabilities; non‑compliance can carry operational and legal consequences.
  • AI‑accelerated exploitation. As offensive tooling becomes augmented with automation and AI, attackers can more quickly transform disclosures and PoCs into effective exploit chains.
Taken together, the risks create strong incentives for organisations to move from occasional patch projects to continuous remediation practices overseen at board level.

Strengths and weaknesses of the KYND analysis​

Strengths​

  • Portfolio‑level visibility. Analysing thousands of firms reveals systemic patterns that single‑company case studies miss.
  • Focus on actively‑exploited flaws. Prioritising vulnerabilities that are already being weaponised elevates practical risk over theoretical risk.
  • Actionable framing for insurers. The emphasis on remediation cadence provides a clear lever for underwriters to assess operational discipline.

Risks and limitations​

  • Limited public methodological detail. Without a fully public dataset and granular methodology, precise replication is constrained; the study’s directional conclusions remain robust, but exact percentages should be treated as estimates.
  • Potential measurement biases. External scanning approaches can miss internal exposures or misclassify patched systems depending on configuration and visibility.
  • Causation vs correlation. The observation that slow remediation correlates with higher incident rates is compelling, but organisations differ widely; remediation speed is a key signal but not the only one that matters.
These caveats do not negate the study’s core alarm: organisations continue to run with avoidable, highly dangerous exposures for far too long.

Conclusion​

The KYND analysis and recent incidents paint a simple, urgent picture: the cybersecurity problem for many large organisations is no longer simply discovering vulnerabilities — it is closing the loop between detection and remediation fast enough to stay ahead of active attackers. Remote code execution bugs in high‑privilege, widely‑deployed services are especially dangerous because they accelerate lateral movement and potential supply‑chain compromise.
Operational solutions already exist: robust asset inventory, automation, fast‑path change controls, compensating isolation controls, and measurable SLAs for remediation. What’s new is the commercial and regulatory context: insurers are moving to treat remediation cadence as an underwriting input, governments are mandating faster responses for known exploited vulnerabilities, and attackers are increasingly automated and opportunistic.
Addressing this problem is a practical, measurable task. The strategic choice for boards, CISOs, and underwriters is whether to treat remediation as an operational hygiene item — executed fast and continuously — or as an intermittent project left to chance. The evidence makes clear which choice reduces real business risk.

Source: SecurityBrief Asia https://securitybrief.asia/story/kynd-big-firms-leave-critical-cyber-flaws-unpatched/