AVEVA’s PI Data Archive has been the subject of coordinated security advisories after the vendor and U.S. authorities confirmed multiple denial-of-service class vulnerabilities that can be triggered by malformed or unexpected input and — in at least one case — by an uncaught exception that can shut down critical PI subsystems. The headline risk is straightforward: an attacker with network access and low privileges could remotely crash PI services, potentially causing data loss from write caches or snapshots and disrupting downstream process visibility. This article unpacks what is known, verifies the technical details against authoritative advisories, explains practical mitigations and detection steps for Windows-based engineering teams, and offers an operational playbook to reduce exposure while you patch.
Background / Overview
AVEVA’s PI Data Archive (the historian component inside PI Server) is widely used in industrial operations to collect, store, and serve time-series process data. In June 2025, CISA published an advisory that summarizes vendor-coordinated disclosures for multiple PI product issues, including two high‑severity denial‑of‑service problems assigned CVE identifiers. The central failure mode for the most severe entries is an uncaught exception (CWE‑248) and related input‑handling faults that can cause subsystems to terminate unexpectedly, leading to service outages and possible data loss depending on timing.
Independent CVE/NVD trackers list at least two closely related IDs tied to those advisories (CVE‑2025‑44019 and CVE‑2025‑36539). Both are assessed with high availability impact and network‑accessible attack vectors in the public coordination materials. AVEVA has published security bulletins that map affected versions and provide upgrade guidance; national CERTs and multiple third‑party vulnerability databases have reproduced the vendor/CISA findings.
What the vulnerabilities are — technical summary
Uncaught exceptions and heap/stack faults: how they affect PI Data Archive
- The primary class of the high-severity issues is an uncaught exception in PI Data Archive subsystems; when triggered, these exceptions can forcibly terminate processes that provide data ingestion or retrieval services. Because PI Data Archive holds incoming telemetry in in‑memory buffers and write caches, abrupt termination threatens both availability and the persistence of recently written data.
- Complementing the uncaught-exception findings are input‑parsing weaknesses and memory-handling bugs reported across PI Server and related components. Some advisories identify heap‑based buffer faults or other memory-corruption categories that increase the potential for crashes; while most public descriptions focus on DoS, memory faults can in other contexts be weaponized for code‑execution—raising the need for rapid remediation and careful monitoring.
Attack model and prerequisites
- Attack vector: Network — several advisories describe the attack surface as reachable over the network and tied to the PI Message Subsystem and other PI services. That makes any PI Server reachable from untrusted segments an immediate risk.
- Privileges: Low (authenticated user or network access in some advisory variants). Some entries note limited privilege requirements; others indicate unauthenticated access to specific PI message endpoints may be sufficient in older related advisories. Treat the exact privilege requirement as version-dependent.
- Impact: High availability impact (DoS), with possible limited integrity effects (if transient corruption or partial writes occur) and potential for data loss from caches/snapshots.
Affected versions and patch guidance (what to install)
AVEVA’s coordinated notes and multiple national advisories agree on a simple remediation principle: install the vendor security updates for the PI Server family or upgrade to fixed builds. The publicly referenced remediation guidance is:
- Upgrade to PI Server 2024 (or later) for affected PI Data Archive and PI Server builds as the comprehensive remediation path for the documented CVEs. This upgrade line includes the Data Archive fixes for the uncaught-exception and related issues.
- For organizations that cannot immediately move to PI Server 2024, the vendor recommended intermediate fixes in the 2018 SP3 maintenance branch (for example, 2018 SP3 Patch 7 or later, with some advisories referencing Patch 8 in subsequent communications). If you manage a 2018 SP3 deployment, apply the first available non‑vulnerable SP3 patch your support contract permits and follow vendor notes about compatibility and migration.
Immediate mitigations you can apply now (0–72 hours)
If you cannot patch immediately because of operational constraints, take the following prioritized steps to reduce near‑term risk. These are pragmatic compensating controls used by Windows and OT teams while planning a safe maintenance window; a scripted sketch of the first two controls follows this list.
- Monitor and harden the PI message and archive processes:
- Monitor liveness for the services launched by your PI Server installation (for example, the services invoked by your local "\PI\adm\pisrvstart.bat"). Implement service‑level monitors that generate alerts on stop/fail events.
- Set critical PI services to automatic restart. Configure the PI Message Subsystem, PI Archive Subsystem, and any related services to auto‑restart on failure to reduce manual recovery time. This doesn’t remove the root risk but limits downtime.
- Restrict network access to PI ports:
- Limit inbound access to the PI Message Subsystem port (commonly port 5450) to a small set of trusted hosts (engineering workstations, jump boxes, or middleware servers) using firewalls and host‑based access controls. This is repeatedly recommended in national advisories as a primary compensating control.
- Isolate and protect backups and archives:
- Treat offline snapshots and archived project files as potential sources of system-state information; ensure backups are encrypted and stored with strict ACLs. If you keep rollback artifacts, restrict access and log all reads.
- Operational hardening for engineering hosts:
- Move engineering/editing tasks to hardened jump hosts with restricted file share access, require MFA for remote admin access, and enforce endpoint protection/EDR on all hosts that can reach PI services. This reduces the risk of an attacker gaining an initial foothold on a machine that can reach the PI Message Subsystem.
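To make the first two controls concrete, here is a minimal Python sketch (run elevated on the PI Server host) that wraps the built‑in sc.exe and netsh tools to configure crash recovery, check service liveness, and scope inbound port 5450 to trusted hosts. The service short names and the trusted‑host addresses are assumptions, not vendor‑confirmed values; verify both against your own installation first.

```python
"""Sketch: compensating controls while a patch window is planned.

Assumptions to verify before use:
  - Service short names (pimsgss, piarchss, pisnapss) are common for
    PI Data Archive installs but vary; confirm with `sc query` first.
  - TRUSTED_HOSTS and PI_PORT are placeholders for your environment.
  - Run elevated: sc.exe and netsh require administrator rights.
"""
import subprocess

PI_SERVICES = ["pimsgss", "piarchss", "pisnapss"]  # confirm on your host
TRUSTED_HOSTS = ["10.10.1.20", "10.10.1.21"]       # placeholder jump hosts
PI_PORT = "5450"                                   # PI Message Subsystem port


def run(cmd):
    """Echo and run a command, failing loudly so misconfigs are visible."""
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)


def configure_service_recovery():
    """Set each PI service to restart automatically after a crash."""
    for svc in PI_SERVICES:
        # Retry three times at 60 s intervals; reset the failure count daily.
        run(["sc", "failure", svc, "reset=", "86400",
             "actions=", "restart/60000/restart/60000/restart/60000"])


def check_liveness():
    """Print each PI service's state -- feed this into your monitoring."""
    for svc in PI_SERVICES:
        out = subprocess.run(["sc", "query", svc],
                             capture_output=True, text=True)
        print(svc, "RUNNING" if "RUNNING" in out.stdout else "NOT RUNNING")


def scope_pi_port_to_trusted_hosts():
    """Allow inbound PI traffic only from trusted hosts.

    With Windows Firewall's default inbound policy (block unsolicited
    traffic), one scoped allow rule suffices. Remove any pre-existing
    broad allow rule for this port instead of adding a block rule --
    block rules override allow rules and would cut off trusted hosts too.
    """
    run(["netsh", "advfirewall", "firewall", "add", "rule",
         "name=PI-5450-allow-trusted", "dir=in", "action=allow",
         "protocol=TCP", f"localport={PI_PORT}",
         f"remoteip={','.join(TRUSTED_HOSTS)}"])


if __name__ == "__main__":
    configure_service_recovery()
    scope_pi_port_to_trusted_hosts()
    check_liveness()
```

Note the firewall design choice: because block rules always override allow rules in Windows Firewall, pairing a scoped allow rule with a blanket block rule would also cut off the trusted hosts; with the default inbound policy, the single scoped allow rule is the safer construction.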
Detection, hunting, and incident response
What to log and watch for
- Service crashes and restarts: set SIEM alerts for PI-related service stops, crashes, and automatic restarts. Correlate these with network traffic anomalies and host events.
- Unusual network traffic to port 5450: log flows and raise alerts for inbound connections to PI ports from unexpected subnets, especially from user or contractor networks. Implement network flow baselines so deviation can be flagged quickly.
- High-frequency failed requests: repeated malformed requests, malformed session attempts, or repeated parsing errors in PI logs can indicate a brute‑force probe or attempted fuzzing activity. Increase logging verbosity on test systems to refine detection rules.
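As a starting point for the first two signals above, the sketch below polls Service Control Manager crash events (IDs 7031 and 7034, "service terminated unexpectedly") via the built‑in wevtutil, and flags live connections to port 5450 from outside an expected subnet using the third‑party psutil package. The port and trusted subnet are placeholders; in production you would forward these signals to your SIEM rather than print them.

```python
"""Sketch: ad-hoc detection of PI service crashes and unexpected 5450 peers.

Assumptions: psutil is installed (pip install psutil); TRUSTED_NET is a
placeholder; run elevated on the PI Server host so all connections are
visible to net_connections().
"""
import ipaddress
import subprocess

import psutil

PI_PORT = 5450                                      # PI Message Subsystem port
TRUSTED_NET = ipaddress.ip_network("10.10.1.0/24")  # placeholder subnet

# SCM event IDs 7031/7034 both mean "service terminated unexpectedly".
CRASH_QUERY = ("*[System[Provider[@Name='Service Control Manager'] "
               "and (EventID=7031 or EventID=7034)]]")


def recent_service_crashes(max_events=20):
    """Return the newest SCM crash events as text (newest first)."""
    out = subprocess.run(
        ["wevtutil", "qe", "System", f"/q:{CRASH_QUERY}",
         f"/c:{max_events}", "/rd:true", "/f:text"],
        capture_output=True, text=True, check=True)
    return out.stdout


def unexpected_pi_peers():
    """Flag TCP connections to the PI port from outside the trusted subnet."""
    alerts = []
    for conn in psutil.net_connections(kind="tcp"):
        if conn.laddr and conn.laddr.port == PI_PORT and conn.raddr:
            peer = ipaddress.ip_address(conn.raddr.ip)
            if peer not in TRUSTED_NET:
                alerts.append(f"{conn.raddr.ip}:{conn.raddr.port} -> :{PI_PORT}")
    return alerts


if __name__ == "__main__":
    print(recent_service_crashes())
    for alert in unexpected_pi_peers():
        print("UNEXPECTED PI PEER:", alert)
```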
Containment and recovery checklist (if you suspect abuse)
- Preserve evidence: capture memory snapshots and disk images of affected hosts, and archive PI logs with integrity controls (a hashing sketch follows this checklist).
- Isolate the host: remove the affected PI Server from the network if you suspect a crash is due to malicious input rather than benign misconfiguration.
- Rotate credentials: after validating system integrity, rotate any credentials potentially stored or cached on engineering hosts.
- Patch and validate: apply the vendor patch in test, validate service behavior, then schedule production rollout with fallback plans.
- Post‑incident review: map the attack path and harden any jump hosts, file shares, or contractor workflows that allowed introduction of malicious inputs.
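To support the evidence‑preservation step, here is a small standard‑library sketch that hashes every file in a collected log directory and writes a SHA‑256 manifest to store alongside the archive. The evidence path is a placeholder; export the PI message logs with your usual tooling first, then run this over the copies.

```python
"""Sketch: write a SHA-256 manifest for collected PI log files.

LOG_DIR is a placeholder -- point it at the folder where you copied or
exported the PI logs during evidence collection. Store the manifest with
the archive so later reads can be verified against it.
"""
import hashlib
from pathlib import Path

LOG_DIR = Path(r"C:\IR\case-001\pi-logs")   # placeholder evidence folder
MANIFEST = LOG_DIR / "sha256-manifest.txt"


def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so large logs don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest():
    lines = []
    for path in sorted(LOG_DIR.rglob("*")):
        if path.is_file() and path != MANIFEST:
            lines.append(f"{sha256_of(path)}  {path.relative_to(LOG_DIR)}")
    MANIFEST.write_text("\n".join(lines), encoding="utf-8")
    print(f"Hashed {len(lines)} files into {MANIFEST}")


if __name__ == "__main__":
    write_manifest()
```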
Why the operational risk is higher in industrial contexts
PI System installations often straddle IT and OT, run on long‑lived images, and interact with contractors, integrators, and OEMs. That increases exposure in three ways:
- Engineering workstations regularly share project bundles, backups, and configuration artifacts — any of those files can serve as an attack vector when moved into runtime contexts.
- OT environments sometimes permit wider inbound connectivity for legacy reasons (management access, remote support), increasing the chance that an attacker can reach PI ports without traversing hardened jump hosts.
- Rolling out patches in production OT can be operationally disruptive; the need to preserve continuous process history means teams delay upgrades and, in doing so, remain vulnerable.
Practical patching strategy for Windows/PI administrators
- Inventory and map: identify every PI Server, PI Data Archive instance, PI Web API endpoint, and connectors (for example, CygNet connectors) in your environment. Note version and service pack for each host (a version‑check sketch follows this list).
- Prioritize critical paths: prioritize production historians that feed control‑room displays, alarm pipelines, or safety‑relevant dashboards. These must be on the earliest upgrade schedule.
- Test the upgrade in a clone environment: validate the PI Server 2024 upgrade (or the designated fixed 2018 SP3 patch) on a representative clone that mirrors storage sizes, archive retention, and connected consumers. Validate snapshot and write‑cache behavior under load.
- Apply patches during maintenance windows with rollback plans: keep a tested rollback snapshot and ensure your data‑ingest buffers are drained or otherwise managed to minimize data loss risk.
- After patching: verify service liveness, test downstream readers, and relax the temporary network restrictions only after monitoring shows no anomalous behavior.
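As a starting point for the inventory step above, the sketch below compares exported version strings against minimum fixed builds. The CSV layout and the threshold builds are illustrative assumptions; take the authoritative fixed‑build numbers from the AVEVA bulletin that applies to your edition.

```python
"""Sketch: flag PI hosts below an assumed minimum fixed build.

Assumes an inventory export `pi_inventory.csv` with header host,product,version
(for example, from your CMDB or a manual survey). The FIXED_MIN thresholds are
illustrative placeholders -- substitute the exact builds from AVEVA's bulletin.
"""
import csv

# Placeholder thresholds keyed by product line; verify against the bulletin.
FIXED_MIN = {
    "PI Data Archive 2018 SP3": (3, 4, 445),  # illustrative patch-level build
    "PI Server 2024": (3, 5, 0),              # illustrative 2024 baseline
}


def parse_version(text):
    """Turn a dotted version string like '3.4.440.477' into a sortable tuple."""
    return tuple(int(part) for part in text.strip().split("."))


def audit(path="pi_inventory.csv"):
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            minimum = FIXED_MIN.get(row["product"])
            if minimum is None:
                print(f"UNKNOWN PRODUCT LINE: {row['host']} ({row['product']})")
            elif parse_version(row["version"]) < minimum:
                print(f"NEEDS PATCH: {row['host']} {row['product']} {row['version']}")


if __name__ == "__main__":
    audit()
```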
Critical analysis — strengths, gaps, and residual risk
Strengths in the coordinated response
- Vendor transparency and coordinated disclosure: AVEVA reported the issues and coordinated with CISA and multiple national CERTs, which enabled consolidated advisories and a clear upgrade path. That process reduces patch‑management ambiguity for operators.
- Reasonable compensating controls: network segmentation, limiting port 5450, and service auto‑restart are practical, achievable mitigations for many organizations and reduce the immediate blast radius. These are recommended across advisories as sensible operational measures.
Residual risks and open questions
- Rollback and archival pain: some vendor-supplied migrations or patches may be one‑way or change on‑disk formats. Organizations that rely on archived snapshots or need binary‑level rollback should plan carefully before an in-place upgrade. Third‑party summaries and CERT notices have repeatedly warned that migrations can be irreversible — plan accordingly.
- Patch-window friction in OT: even with a clear upgrade path to PI Server 2024, many process plants cannot quickly take historians offline. This operational friction extends the window during which systems remain exploitable despite available fixes, so teams must explicitly weigh continued exposure against the cost of a maintenance window.
- Information consistency across bulletins: third‑party trackers and summaries sometimes show slightly different patch numbers (for example, Patch 7 vs. Patch 8 in 2018 SP3 branches) or advisory identifiers. If you’ve been handed an advisory number (for example, an AVEVA‑XXXX or a CISA ICSA number referencing 2026), verify the package name and checksum on the vendor portal or via your AVEVA support contact before applying a patch. In our review we validated the core technical claims against CISA and multiple CVE listings, but specific patch‑naming variants in secondary summaries should be cross‑checked with AVEVA.
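Because secondary summaries disagree on patch names, one cheap safeguard is to verify the downloaded package against the hash published on the vendor portal before staging it. A minimal sketch, assuming AVEVA publishes a SHA‑256 value for the package (the file name and expected hash below are placeholders):

```python
"""Sketch: verify a staged patch package against a vendor-published hash.

PACKAGE and EXPECTED_SHA256 are placeholders; copy the real values from
the AVEVA support portal entry for your edition before trusting the file.
"""
import hashlib
import sys
from pathlib import Path

PACKAGE = Path(r"C:\staging\PI-Server-2024-update.exe")  # placeholder name
EXPECTED_SHA256 = "replace-with-the-vendor-published-value"


def sha256_of(path):
    """Hash in 1 MiB chunks so multi-GB installers don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        while chunk := fh.read(1 << 20):
            digest.update(chunk)
    return digest.hexdigest()


def main():
    actual = sha256_of(PACKAGE)
    if actual.lower() == EXPECTED_SHA256.lower():
        print("OK: package hash matches the vendor-published value")
    else:
        print(f"MISMATCH: got {actual}; do not deploy")
        sys.exit(1)


if __name__ == "__main__":
    main()
```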
Verification note: advisory identifiers and unverified bulletin IDs
Some organizations have reported advisory or bulletin identifiers that reference later calendar years (for example, AVEVA‑2026‑002 or ICSA‑26‑041‑03). During verification for this article, authoritative public feeds (CISA, NVD/CVE, national CERTs) show the core PI Data Archive advisories and CVE mappings published in mid‑2025 (CISA ICSA‑25‑162‑07 and the associated CVEs noted above). We were unable to locate a public CISA advisory with the code ICSA‑26‑041‑03 or an AVEVA‑2026‑002 bulletin in the official feeds accessible at the time of writing. If your communications reference those IDs, please validate them against the AVEVA support portal or your vendor support case, because patch numbers and advisory labels can be regionally issued or added after follow‑up bulletins. If AVEVA provided a 2026 follow‑up bulletin with updated patch targets (for example, PI Server 2024 R2 and 2018 SP3 Patch 8), treat that as authoritative if it comes from your vendor account — but confirm the package names and checksums before deployment.
Checklist: immediate actions for Windows/OT teams
- Inventory PI components and record exact version/service pack numbers.
- Block inbound access to PI ports (especially port 5450) from untrusted networks and restrict to jump hosts.
- Configure automatic restart for PI Message and PI Archive services and implement liveness monitoring for those processes.
- Prepare a test clone and validate the vendor-recommended patch (PI Server 2024 or the fixed 2018 SP3 patch) before production rollout.
- Harden engineering workstations: move editing and project handling behind jump hosts, enforce MFA, and restrict file shares.
- Log and alert for anomalous connections to PI ports and for unexpected service crashes; preserve logs for forensic review if needed.
Final assessment and recommended next steps
The PI Data Archive vulnerabilities are a high‑impact, network‑relevant class of failures that demand action: the combination of a plausible network attack vector, low attack complexity, and the operational criticality of historians makes prompt remediation important for any organization that depends on PI for process insight.
Recommended immediate priorities for operators:
- Validate your installed versions and cross‑check the vendor bulletin for the exact fixed package that applies to your edition.
- If you cannot patch within days, enforce the network restrictions and service‑monitoring steps listed above to reduce exposure.
- Schedule a controlled upgrade to the vendor‑recommended fixed release in a test‑first fashion and plan a staged production rollout with rollback plans and data retention verification.
Conclusion: AVEVA has published fixes and national authorities have issued clear mitigations — act promptly, validate vendor patch identifiers against the official support portal, and prioritize engineers’ and SOC teams’ short‑term mitigations to avoid an unnecessary outage.
Source: CISA AVEVA PI Data Archive | CISA