CISA OT Zero Trust Guidance: Never Assume the Network Is Safe

CISA and partner agencies have released new joint guidance urging owners and operators of operational technology (OT) systems to adapt zero trust principles to industrial environments, where connected sensors, remote access, legacy controllers, and safety-critical processes have made old perimeter assumptions increasingly brittle. The document’s importance is not that Washington has discovered zero trust. It is that the government is now saying, plainly, that zero trust must survive contact with pumps, substations, production lines, treatment plants, building systems, and weapons-support infrastructure. In OT, the slogan “never trust, always verify” has to be translated into something less elegant but more useful: never assume the network is safe, never break the process, and never let visibility lag behind connectivity.

Zero Trust Finally Meets the Machines That Cannot Reboot

The awkward truth about zero trust is that it was born in a world that looks nothing like a plant floor. Its natural habitat is the enterprise: laptops, cloud apps, identity providers, mobile devices, software-defined networks, and users who can be challenged, blocked, reauthenticated, or forced through a new policy flow. OT lives by a different clock. A programmable logic controller does not care that the corporate architecture team has a new reference model, and a safety system is not improved by adding friction at the wrong moment.
That is why CISA’s new OT-focused guidance matters. It treats zero trust not as a product category but as a discipline that must be bent around operational reality. Industrial environments have always had trust relationships, but many of them were implicit, undocumented, and inherited from eras when isolation was the main defense. If the engineering workstation could reach the controller, if the vendor VPN was on the approved concentrator, if the HMI lived on the right subnet, the system often assumed legitimacy.
That model is collapsing under the weight of convergence. Remote monitoring, predictive maintenance, cloud analytics, outsourced support, digital twins, and centralized operations centers have all delivered business value by connecting systems that were once physically or logically removed from enterprise networks. The result is not simply “more attack surface,” the phrase security people reach for when they want to sound precise. It is a structural change in how industrial authority is granted.
Zero trust in OT is therefore not about importing every IT control into a control network. It is about forcing every connection, command, device, identity, and maintenance path to justify itself in a setting where the cost of getting security wrong may be measured in downtime, damaged equipment, unsafe conditions, or public-service disruption.

The Perimeter Was Useful, Then It Became a Myth

For decades, OT security leaned on separation. The plant network was not the office network. The control system was not the internet. The engineer with the laptop was not the same problem as the employee opening email attachments. That separation was never perfect, but it was meaningful enough to shape culture, budgets, and architecture.
The modern industrial environment has shredded that neat separation without fully replacing it. A utility may still have segmented networks and carefully managed control zones, but it may also have vendor-maintained remote access, historians feeding business intelligence platforms, cloud dashboards used by executives, cellular-connected field equipment, and Windows systems sitting in places where patch windows are rare and downtime is expensive. The perimeter has not vanished; it has multiplied into a set of brittle edges that no single firewall rulebase can explain.
Zero trust is attractive because it attacks the assumption that location equals legitimacy. In the old model, being “inside” mattered too much. In the new model, an internal system can be malicious, compromised, misconfigured, obsolete, or simply being used by the wrong person at the wrong time. For Windows admins, this is familiar territory: domain membership, local admin sprawl, cached credentials, and flat networks have taught the same lesson in enterprise IT for years.
But OT adds a more difficult constraint. The fix cannot be a reflexive lockdown. Industrial systems often use protocols and equipment that were designed for availability and determinism rather than hostile networks. Many devices cannot run agents. Some cannot support modern cryptography. Some cannot be patched without recertification, downtime, or vendor involvement. Some will continue running long after the operating systems around them have aged out of mainstream support.
CISA’s guidance lands in this gap between the ideal and the installed base. It effectively tells organizations to stop treating OT exceptionalism as an excuse for implicit trust, while also acknowledging that OT cannot be secured by pretending it is just another branch office.

Asset Visibility Is the First Real Zero Trust Control

The most important sentence in any OT zero trust strategy is not “enforce least privilege.” It is “know what exists.” Without asset visibility, every other control is theater. You cannot continuously validate identities, devices, traffic, and risk if your inventory is a spreadsheet last reconciled during an audit.
OT asset visibility is harder than IT asset visibility because discovery itself can be risky. Aggressive scanning that is routine in an enterprise subnet can destabilize fragile industrial equipment. Passive discovery, network taps, span ports, engineering change records, controller project files, maintenance logs, and procurement data all become part of the truth-gathering exercise. The work is tedious, and that is precisely why it matters.
A credible OT inventory needs to describe more than hostnames and IP addresses. It should capture firmware, function, physical process role, network path, vendor dependency, maintenance owner, remote access requirement, safety impact, and the consequence of failure. A Windows workstation in an engineering bay is not just a workstation. It may be the bridge between Active Directory, a vendor toolchain, and a controller that governs a physical process.
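As a concrete sketch, an inventory record like the one described above can be captured as a small data structure. The field names, the `ENG-WS-04` example, and the safety-impact scale here are illustrative assumptions, not prescriptions from the guidance:

```python
from dataclasses import dataclass

# Minimal OT asset record -- field names and values are illustrative assumptions.
@dataclass
class OTAsset:
    name: str                 # e.g. "ENG-WS-04"
    function: str             # operational role, e.g. "engineering workstation"
    process_role: str         # the physical process it can influence
    firmware_or_os: str       # software/firmware state
    network_path: str         # zone the asset lives in
    vendor: str               # vendor dependency
    owner: str                # maintenance owner
    remote_access: bool       # does it require a remote access path?
    safety_impact: str        # "none" | "degraded" | "hazardous" (illustrative scale)
    failure_consequence: str  # consequence of failure, in plain language

ws = OTAsset(
    name="ENG-WS-04",
    function="engineering workstation",
    process_role="programs PLCs on line 2",
    firmware_or_os="Windows 10 LTSC 21H2",
    network_path="Level 3 engineering zone",
    vendor="(integrator)",
    owner="controls engineering",
    remote_access=True,
    safety_impact="hazardous",
    failure_consequence="loss of ability to modify or recover line 2 control logic",
)

# High-consequence assets: those that can influence the process and are reachable remotely.
def high_consequence(a: OTAsset) -> bool:
    return a.remote_access and a.safety_impact != "none"

print(high_consequence(ws))  # True
```

The point of the structure is the last function: once the record exists, "should this workstation be treated as high-consequence?" becomes a query instead of a debate.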
This is where zero trust becomes less glamorous and more powerful. Once an organization can see its OT environment clearly, policy stops being aspirational. It becomes possible to ask whether a specific workstation should talk to a specific PLC, whether a vendor account should exist outside a maintenance window, whether a legacy device should be isolated behind a broker, and whether a protocol is carrying expected commands or something far stranger.
Asset visibility also changes incident response. In enterprise IT, the question after compromise is often “what data did they access?” In OT, the question becomes “what could they have influenced?” That is a different kind of blast-radius analysis, and zero trust without that operational map is blind.

Identity Is No Longer Just a Human Problem

Zero trust discussions often begin with human identity: multifactor authentication, single sign-on, role-based access, conditional access, privileged access management. Those controls matter in OT, especially because engineering workstations, jump servers, VPNs, shared accounts, and vendor credentials are recurring weak points. But OT forces a broader definition of identity.
Machines have identities. Applications have identities. Services have identities. Controllers, sensors, gateways, historians, remote terminal units, and maintenance tools all participate in trust decisions, even when nobody has called those relationships “identity.” A device that can issue commands to a controller is exercising authority. A historian that can read from production networks and write toward enterprise systems is crossing a boundary. A vendor laptop that arrives during an outage can become the most privileged machine in the facility.
The hard part is that many OT assets were never built for strong, modern identity. They may rely on shared passwords, static IP allowlists, weak local accounts, or protocol-level assumptions that make cryptographic authentication difficult. Some environments still depend on local administrator credentials that have survived generations of staff turnover because changing them risks breaking tooling nobody fully understands.
CISA’s framing pushes organizations toward robust identity and access management, but the realistic path is layered. Human access should be tied to named users, strong authentication, time-bound privileges, and explicit approval workflows. Vendor access should be brokered, monitored, and revoked when not needed. Machine-to-machine communication should be constrained to known paths and expected functions, even when the device itself cannot participate in a modern identity fabric.
This is where Windows infrastructure becomes both a risk and an opportunity. Active Directory, Entra ID integrations, certificate services, remote desktop gateways, privileged access workstations, and endpoint detection controls often sit adjacent to OT even when the process equipment is non-Windows. If those identity planes are messy, OT inherits the mess. If they are hardened, monitored, and segmented, OT gains a foundation that many legacy devices cannot provide on their own.

Remote Access Is the Stress Test for Every Policy

Remote access is where OT zero trust either becomes real or collapses into paperwork. Industrial organizations need remote access because vendors support specialized systems, engineers cover multiple sites, and centralized operations are now part of normal business. Pretending otherwise merely drives access into informal workarounds.
The zero trust move is not to ban remote access. It is to make remote access explicit, narrow, observable, and temporary. The difference between a standing VPN account and a time-bound session through a monitored access broker is not semantic. One assumes that a credential and network placement are enough; the other treats access as a revocable event that must be justified.
This matters because many OT incidents begin in places that look mundane. A reused credential. A forgotten vendor account. A VPN appliance behind on patches. A remote desktop service exposed too broadly. A contractor machine that connects from an unmanaged environment. Once inside, the attacker does not need to “hack the plant” in a cinematic sense if the architecture already grants broad lateral movement.
Zero trust reframes the session. Who is connecting? From what device? To what asset? For what purpose? During what window? With what commands visible? Under what approval? With what recording or logging? These are not abstract governance questions. They determine whether an organization can distinguish planned maintenance from intrusion while the system is still running.
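The questions above can be turned into an evaluable policy. This sketch treats a session as a time-bound, approved event rather than a standing entitlement; all names, fields, and the example work-order reference are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical remote-access request mirroring the questions in the text:
# who, from what device, to what asset, for what purpose, under what approval,
# during what window. Field names are illustrative assumptions.
@dataclass
class AccessRequest:
    user: str
    device: str          # managed device identifier
    target: str          # a specific OT asset, not a whole network
    purpose: str         # ticket / work-order reference
    approved_by: str
    window_start: datetime
    window_end: datetime

def session_allowed(req: AccessRequest, managed_devices: set[str], now: datetime) -> bool:
    """A session is a revocable event that must be justified, not a credential check."""
    return (
        req.device in managed_devices                  # only managed endpoints
        and bool(req.approved_by)                      # explicit approval recorded
        and req.window_start <= now < req.window_end   # time-bound, expires on its own
    )

now = datetime(2025, 6, 1, 14, 0, tzinfo=timezone.utc)
req = AccessRequest(
    user="vendor.tech@example.com",
    device="VENDOR-LT-17",
    target="PLC-LINE2-01",
    purpose="WO-4821 firmware check",
    approved_by="ops.supervisor",
    window_start=now - timedelta(hours=1),
    window_end=now + timedelta(hours=3),
)

print(session_allowed(req, {"VENDOR-LT-17"}, now))                       # True
print(session_allowed(req, {"VENDOR-LT-17"}, now + timedelta(hours=4)))  # False: window expired
```

Note what is absent: there is no "inside the network" condition. Network placement never appears as a reason to allow the session.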
For IT pros, this is the place to resist easy vendor answers. A shiny remote access portal does not create zero trust if it lands users into a flat engineering network. MFA does not solve the problem if the authenticated user can pivot everywhere. Session recording is useful, but it is not a substitute for least privilege. The test is whether compromise of one credential, one laptop, or one vendor path still leaves meaningful barriers between the attacker and the physical process.

Segmentation Remains the Workhorse, but It Needs a Promotion

Zero trust is sometimes marketed as the successor to network segmentation. In OT, that is nonsense. Segmentation remains one of the most practical and necessary controls available, but it must evolve from static network plumbing into policy enforcement aligned with operational roles.
The Purdue model still has value because it gives organizations a language for separating enterprise systems, supervisory systems, control systems, and field devices. But a diagram is not a control. The actual question is whether traffic between levels is limited, monitored, and justified. If a business analytics server can initiate unexpected connections into the control zone, the drawing on the wall is decorative.
Microsegmentation in OT is harder than in a cloud-native application stack. Industrial protocols may be chatty, poorly documented, or sensitive to latency. Some equipment may require vendor-specific flows that are discovered only after a maintenance window breaks. That does not make segmentation optional. It means segmentation must be preceded by careful baselining and implemented with respect for process risk.
The better mental model is not “block everything and see who screams.” It is “observe, understand, constrain, and then continuously verify.” Passive monitoring can reveal normal communication patterns. Firewall and data diode strategies can enforce directional flows. Jump hosts can centralize administrative paths. Protocol-aware gateways can mediate commands. Zones and conduits can turn sprawling trust into bounded trust.
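The "zones and conduits" idea above reduces to a default-deny, directional allowlist. A minimal sketch, assuming illustrative zone names, protocol labels, and flows (a real baseline would come from passive monitoring, not from a hand-written table):

```python
# Sketch of a zone-and-conduit policy: traffic is denied unless an explicit,
# directional conduit exists. Zone and protocol names are illustrative assumptions.
CONDUITS = {
    # (source zone, destination zone): allowed protocols over that conduit
    ("level3_engineering", "level2_control"): {"eng-protocol"},
    ("level2_control", "level3_historian"): {"historian-read"},  # process data flows out
    ("it_enterprise", "level3_historian"): {"historian-read"},   # BI reads the historian
}

def flow_allowed(src_zone: str, dst_zone: str, protocol: str) -> bool:
    # Default-deny: only baselined, documented, directional flows pass.
    return protocol in CONDUITS.get((src_zone, dst_zone), set())

print(flow_allowed("level3_engineering", "level2_control", "eng-protocol"))  # True
print(flow_allowed("it_enterprise", "level2_control", "eng-protocol"))       # False: no conduit from IT into control
print(flow_allowed("level3_historian", "level2_control", "historian-read"))  # False: direction matters
```

The third check is the one static firewall rules often miss: the historian may read from the control zone, but nothing about that relationship should let traffic initiate in the other direction.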
This is also where vulnerability management becomes more honest. In enterprise IT, the answer to a critical flaw is often patch quickly. In OT, patching may require testing, outage planning, vendor certification, rollback procedures, and safety review. Segmentation and access control are what keep known weaknesses from becoming immediate process risk while the patch reality catches up.

Supply Chain Risk Has Moved Inside the Control Loop

CISA’s guidance emphasizes supply chain risk because OT environments are deeply dependent on vendors, integrators, firmware, proprietary software, and long-lived support relationships. That dependency is not a side issue. It is part of the operational architecture.
A plant may trust a vendor because the vendor installed the system, maintains the software, and understands the equipment better than internal staff. That trust may be commercially necessary, but it should not be technically unlimited. Zero trust asks whether the vendor’s access is scoped, whether its tools are validated, whether updates are authenticated, whether support sessions are monitored, and whether the organization can detect when legitimate channels are abused.
The supply chain problem is especially severe in OT because replacement cycles are slow. A control system may run for decades, with components added and integrated over time. Documentation may lag reality. The vendor that built one part of the system may have been acquired. The integrator may have moved on. The software may depend on a Windows version everyone wishes were gone but nobody can retire this quarter.
This is why procurement is now a cybersecurity control. New OT purchases should be evaluated for identity support, logging, secure update mechanisms, vulnerability disclosure practices, protocol security, support lifecycle, and integration with monitoring tools. Buying another opaque black box and promising to secure it later is how yesterday’s exception becomes tomorrow’s permanent risk.
The deeper point is cultural. OT organizations have historically optimized procurement around reliability, compatibility, and operational need. Those remain non-negotiable. But security properties are now part of reliability, because a device that cannot be authenticated, monitored, patched, or isolated may become the reason the process fails.

Safety Changes the Meaning of Least Privilege

Least privilege sounds simple until the privileged action is needed to keep a process safe. In IT, denying access may annoy a user or delay a workflow. In OT, denying the wrong command at the wrong time can have physical consequences. That difference does not invalidate least privilege; it raises the standard for designing it.
The OT version of zero trust has to account for safety, availability, determinism, and human response under pressure. Access policies must distinguish between normal operations, maintenance, emergency procedures, and break-glass scenarios. A rigid access model that works during office hours may fail during a storm, outage, production incident, or safety event.
Break-glass access is often treated as a loophole, but it should be treated as a designed control. The account or pathway should be limited, monitored, tested, and reviewed. It should exist because emergencies are real, not because the organization lacked the discipline to implement granular access. The difference is whether exceptional access leaves an audit trail and expires when the exception ends.
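Treating break-glass as a designed control means the grant itself enforces the properties above: it is scoped to an asset, it records a reason, it leaves an audit trail, and it expires on its own. A sketch under those assumptions, with illustrative names throughout:

```python
import time
import uuid

# Break-glass as a designed control, not a loophole: every use is scoped,
# reasoned, logged, and time-limited. All names are illustrative assumptions.
AUDIT_LOG: list[dict] = []

def open_break_glass(user: str, asset: str, reason: str, ttl_seconds: int = 3600) -> dict:
    grant = {
        "id": str(uuid.uuid4()),
        "user": user,
        "asset": asset,                       # scoped to one asset, not the plant
        "reason": reason,                     # emergencies are real; the reason is recorded
        "opened_at": time.time(),
        "expires_at": time.time() + ttl_seconds,
    }
    AUDIT_LOG.append({"event": "break_glass_opened", **grant})  # trail survives the emergency
    return grant

def grant_valid(grant: dict, now: float) -> bool:
    # Exceptional access expires when the exception ends; nobody has to remember to revoke it.
    return now < grant["expires_at"]

g = open_break_glass("oncall.engineer", "PLC-LINE2-01", "storm outage recovery", ttl_seconds=1800)
print(grant_valid(g, time.time()))         # True: usable during the emergency
print(grant_valid(g, time.time() + 3600))  # False: expired without manual cleanup
```

The audit record is the point: after the storm, someone reviews who opened the path, to what, and why, which is exactly the review step the undocumented escape hatch never gets.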
This is also why training matters. Zero trust changes workflows, and changed workflows can create operational risk if the people using them are not prepared. Engineers need to know how to request access, how to respond to denied sessions, how to use privileged workstations, how to spot suspicious behavior, and how to escalate when security controls conflict with process needs.
The worst zero trust program in OT is one designed entirely by security staff and then inflicted on operators. The best one is co-owned by engineering, operations, safety, IT, security, procurement, and executive leadership. That may sound bureaucratic, but in industrial environments the social architecture is often as important as the network architecture.

Windows Is Still in the Middle of the OT Security Story

For a Windows-focused audience, the OT zero trust conversation should feel uncomfortably familiar. Many industrial environments contain Windows servers, domain controllers, engineering workstations, HMIs, historians, file shares, remote desktop hosts, patch management systems, and backup platforms. Even when the field devices are specialized, the administrative and supervisory layers often ride on Microsoft infrastructure.
That makes Windows hygiene a direct OT security issue. Credential theft from an engineering workstation can become process access. A misconfigured domain trust can become lateral movement. An unpatched remote access server can become the bridge into the control network. A weak backup strategy can turn ransomware from an IT incident into an operational shutdown.
The answer is not to drag every OT asset into the corporate domain or force uniform endpoint controls where they do not belong. The answer is to treat Windows systems that touch OT as high-consequence assets. They deserve hardened baselines, application control where feasible, restricted administrative rights, careful patch rings, offline recovery planning, monitored remote access, and separation from ordinary enterprise browsing and email risk.
Privileged access workstations are particularly relevant. If an engineer uses the same laptop for email, web browsing, vendor downloads, and controller programming, the organization has built a trust bridge from the internet to the process. A zero trust design narrows that bridge. It separates everyday productivity from operational authority and makes the high-risk path more controlled.
This is not glamorous work, but it is where many organizations can make immediate progress. Before buying a new industrial security platform, they can reduce standing privileges, remove stale accounts, harden RDP, review VPN access, isolate engineering tools, inventory local administrators, enforce MFA for remote access, and verify that backups can restore the systems that actually matter.

The Guidance Is Really a Governance Challenge

The federal guidance will be read by some as a technical roadmap, and it is partly that. But its larger message is governance. OT zero trust requires organizations to decide who has authority over industrial risk in a converged environment.
That question is often unresolved. IT owns the network but not the process. Engineering owns the process but not the identity provider. Security owns the monitoring platform but not the maintenance contract. Procurement buys the system but does not operate it. Vendors support the equipment but do not carry the organization’s public accountability when something goes wrong.
Zero trust exposes these seams because it demands policy decisions that cross them. Who approves vendor access to a control zone? Who decides whether a vulnerable device can remain in production? Who owns the inventory? Who signs off on a segmentation change? Who can invoke break-glass access? Who reviews logs after a maintenance session? Who has the authority to stop an insecure connection that operations says it needs?
These are not implementation details. They are the difference between a zero trust program and a slide deck. A mature organization can answer them before an incident. An immature one discovers them during the incident, when every decision is more expensive.
The governance challenge also extends to metrics. Counting deployed tools is not enough. Better measures include reduction in standing privileged access, percentage of OT assets with verified ownership, number of remote access paths consolidated, coverage of passive monitoring, segmentation policy exceptions reviewed, vendor accounts time-bound, and recovery procedures tested. In OT, progress is not merely more control; it is more controlled operation.
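Measures like these fall out of records the organization already keeps, rather than tool counts. A toy illustration, with assumed data shapes and names:

```python
# Computing two of the suggested measures from inventory-style records.
# The record shapes and example data are illustrative assumptions.
assets = [
    {"name": "HIST-01", "owner": "ops"},
    {"name": "ENG-WS-04", "owner": "controls engineering"},
    {"name": "RTU-17", "owner": None},          # unknown owner -> unverified
]
vendor_accounts = [
    {"name": "vendorA", "time_bound": True},
    {"name": "vendorB", "time_bound": False},   # standing access -> work remaining
]

pct_owned = 100 * sum(1 for a in assets if a["owner"]) / len(assets)
pct_time_bound = 100 * sum(1 for v in vendor_accounts if v["time_bound"]) / len(vendor_accounts)

print(f"OT assets with verified ownership: {pct_owned:.0f}%")  # 67%
print(f"Vendor accounts time-bound: {pct_time_bound:.0f}%")    # 50%
```

Tracked quarter over quarter, numbers like these show whether trust is actually being engineered down, which a count of deployed tools never does.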

The Zero Trust Vendor Pitch Needs Industrial Skepticism

The security industry will inevitably turn OT zero trust into a buying cycle. Some of that will be useful. Industrial asset discovery, secure remote access, identity governance, privileged access management, anomaly detection, protocol-aware firewalls, and network segmentation tools can all help. The danger is the familiar one: a principle becomes a SKU, and the hard organizational work gets deferred.
OT leaders should be skeptical of any product that claims to “deliver zero trust” without first asking about process criticality, safety constraints, maintenance windows, protocol behavior, vendor dependencies, and legacy equipment. They should be equally skeptical of architectures that assume agents everywhere, cloud connectivity everywhere, or real-time enforcement everywhere. Industrial environments punish assumptions.
The better vendor conversation starts with use cases. Protect remote vendor maintenance. Limit engineering workstation access to defined assets. Monitor controller programming changes. Separate historian flows from command pathways. Reduce shared accounts. Detect unusual protocol behavior. Enforce time-bound privileged sessions. Improve recovery after ransomware. These are concrete outcomes, not branding exercises.
This is where CISA’s guidance has practical value even for organizations that never cite it in a board deck. It gives security teams language to push back against both extremes: the fatalist claim that OT is too old for zero trust, and the salesman’s claim that zero trust can be installed like a gateway appliance. The truth is harder and more promising. OT zero trust is a long migration from inherited trust to engineered trust.

Where Administrators Should Push First

The practical reading of the guidance is not that every plant, utility, manufacturer, hospital, airport, and water system should attempt a grand redesign. The better move is to identify the trust assumptions that create the most operational risk and start reducing them in ways that do not destabilize the process.
  • Organizations should build or refresh an OT asset inventory that captures operational function, owner, connectivity, software or firmware state, vendor dependency, and consequence of failure.
  • Remote access should be consolidated into monitored, time-bound, identity-aware pathways instead of persistent VPN access or informal remote desktop exceptions.
  • Engineering workstations and Windows systems that touch OT should be treated as high-consequence assets with hardened configurations, restricted privileges, and carefully managed administrative access.
  • Network segmentation should be validated against real traffic flows so that zones and conduits reflect how the process actually operates, not how the diagram says it operates.
  • Vendor and supply chain access should be governed through explicit approvals, scoped privileges, secure update expectations, and logging that survives beyond the maintenance window.
  • Break-glass access should be designed, tested, monitored, and reviewed rather than left as an undocumented escape hatch.
The most important shift is psychological. OT teams do not need to accept every zero trust fashion imported from enterprise IT, but they can no longer defend implicit trust as an operational necessity. The future of industrial security will belong to organizations that can prove not only that their systems are available, but that every path into those systems is known, justified, constrained, and resilient when the surrounding network is already assumed to be hostile.

Source: CISA, Adapting Zero Trust Principles to Operational Technology