Agent Mode AI in Remote Access: Secure MCP with Devolutions RDM

IT administrators are increasingly pairing so‑called Agent Mode AI with remote access tools to turn large language models (LLMs) from passive advisers into hands‑on operators — and Devolutions’ recent integration of the Model Context Protocol (MCP) into Remote Desktop Manager (RDM) illustrates a practical, security‑focused path forward. The change is subtle on the surface — the UI stays familiar — but it reorders the operational model: instead of copy/paste relay races between remote consoles and chat windows, operators can let an assistant see context, propose steps, and with human approval, execute those steps inside the target session while keeping secrets out of the model. This article unpacks how MCP makes that possible, why Devolutions’ named‑pipe approach matters for governance, what risks remain, and practical steps IT teams should take to pilot agentic workflows without widening their attack surface. (petri.com)

Background: from advice to action — Agent Mode and MCP

Agent Mode is the shift in AI from single‑turn suggestions toward multi‑step automation: an assistant plans, executes, validates, and iterates inside an application or workflow. That change has already appeared in productivity suites and developer tools, and is now migrating into the systems administrators use every day. Agent Mode isn’t a magic UI; it’s a new pattern where the model is permitted to act, subject to controls and visibility.
The Model Context Protocol (MCP) is the plumbing that makes practical, interoperable agentic actions possible. Think of MCP as an application‑layer protocol — analogous to a “USB‑C for AI” — that standardizes how tools expose capabilities, files, and actions to LLMs over a simple JSON‑RPC‑based message layer. MCP implementations can expose operations (for example, “run this command on this session and return structured output”), describe available capabilities, and stream outputs back to the model or client. Anthropic’s MCP specification and reference SDKs have driven rapid adoption across multiple vendors and clients, and MCP is now positioned as a neutral connector in the agentic ecosystem.
Why this matters for IT: instead of repeatedly copying logs, command output, and context into a chat interface, an MCP client (like a VS Code sidebar running GitHub Copilot or a Claude integration) can connect directly to an MCP server embedded in a management tool and use that tool’s capabilities on behalf of the user. Devolutions’ RDM implements an MCP server so an LLM can work with entries, open sessions, and even execute commands through Devolutions’ existing connection stack — but, crucially, without the model ever receiving plaintext credentials. (petri.com)
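Concretely, MCP messages ride on JSON‑RPC. A minimal sketch of the kind of request an MCP client might send (the method and field names here are illustrative, not the actual RDM MCP schema):

```python
import json

def make_mcp_request(request_id, method, params):
    """Build a JSON-RPC 2.0 request of the shape MCP uses on the wire."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# Hypothetical call: ask the tool's MCP server to run a command inside an
# existing session. "session/runCommand" is an illustrative method name,
# not part of any real RDM schema.
request = make_mcp_request(1, "session/runCommand",
                           {"sessionId": "srv-042", "command": "Get-Service spooler"})

wire = json.dumps(request)            # what actually crosses the transport
echoed = json.loads(wire)             # what the server-side handler decodes
assert echoed["method"] == "session/runCommand"
print(echoed["params"]["sessionId"])  # prints srv-042
```

The important property is that the payload names an operation and a session, never a credential; the server decides how (and whether) to carry the operation out.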

The operational pain: why copy/paste workflows don’t scale

Anyone who’s debugged across SSH, RDP, PowerShell, and multiple portals knows the pattern by heart: gather context, paste into a chat or note, ask the model, translate suggested commands back to the remote console, capture output, and repeat. Each iteration costs minutes and creates risky data movement — sometimes secrets or internal IPs end up in chat logs. For scale operations or high‑velocity incident response, that “relay race” becomes the bottleneck. Marc‑André Moreau, Devolutions’ CTO, told Petri that this friction is precisely where productivity is lost and where MCP can intervene. (petri.com)
Two operational consequences are worth highlighting:
  • Time lost to context switching and manual copying scales with task complexity.
  • Risk of inadvertent data exposure grows with every copy/paste, especially when models or clients retain prompts or are networked to third‑party services.
Agentic integrations aim to keep the model "close" to the context, reduce manual copying, and automate low‑risk actions under human supervision — but only if they preserve session isolation and credential secrecy.

How MCP changes the workflow (and what stays the same)

MCP’s power comes from two practical capabilities:
  • It exposes structured, auditable operations (not raw secrets) to a model or MCP client.
  • It supports transports that let clients and servers interoperate locally (stdio, HTTP, local proxies) or remotely, depending on policy.
When RDM exposes an MCP server, the user keeps the familiar RDM UI. The LLM runs in a separate client (for example, a Copilot or Claude side panel). When you ask the assistant to diagnose an issue, the MCP client requests operations from the RDM MCP server: fetch this session’s output, run this command in this session, or enumerate matching entries. The server performs the action, returns structured results, and the assistant reasons with that output. If the assistant proposes further commands, the operator can review and approve them before RDM executes them. The UX remains human‑centric; automation removes the busywork while a human maker/checker stays in control. (petri.com)
Key design goals here are clear:
  • Maintain the existing administrative mental model and tooling.
  • Let AI act where it reduces friction, but only with explicit human consent for execution.
  • Keep credentials and secrets out of the LLM’s exposed context wherever possible.
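The consent gate in the second goal can be reduced to a small maker/checker function; the approval callback and executor below are stand‑ins for RDM's real confirmation prompt and connection stack, not its API:

```python
# Minimal sketch of a maker/checker gate: the assistant proposes a command,
# a human approves or rejects it before anything executes.
def execute_with_approval(proposed_command, approve, executor):
    if not approve(proposed_command):
        return {"status": "rejected", "command": proposed_command}
    return {"status": "executed", "output": executor(proposed_command)}

# Demo with an auto-approving callback and a fake executor.
result = execute_with_approval(
    "Restart-Service spooler",
    approve=lambda cmd: True,            # in practice: prompt the operator
    executor=lambda cmd: f"ran: {cmd}",  # in practice: the tool runs it in-session
)
assert result["status"] == "executed"
```

The useful design property is that execution is unreachable except through the approval path, so "the model acted on its own" is structurally impossible.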

Devolutions’ design choices: named pipes, subprocess transport, and credential injection

Devolutions’ RDM MCP server makes three noteworthy architecture choices that speak directly to governance and isolation.
  • Local subprocess transport and local proxy. Rather than exposing an MCP server as a network service (which raises immediate questions about network segmentation, cross‑tenant discovery, and service hardening), RDM runs the MCP server in a subprocess and exposes a local proxy. That keeps the attack surface constrained to the local machine and avoids accidental network exposure. (petri.com)
  • Named‑pipe communication scoped to the user session. The proxy talks to RDM via a named pipe. Named pipes on Windows can be created with security descriptors that restrict access to the creating logon session; using the logon SID in the pipe’s DACL prevents other sessions from connecting. Devolutions says the RDM named pipe is restricted to the active user session so MCP clients can’t leak across sessions. That architectural choice preserves per‑session isolation without requiring network‑level segmentation. Microsoft documentation for named‑pipe security and the use of a logon SID in the DACL supports this approach as a valid technique for session scoping. (petri.com)
  • Credential injection: the server uses connections, the model never sees secrets. Instead of giving an LLM a password or secret, RDM keeps credentials in its vault and injects them into outbound connections when the assistant asks to act on a connection. The assistant can request that an action run “against connection X,” and RDM performs the authentication internally — the model never sees the password. Devolutions also disables credential‑returning capabilities by default so an MCP client cannot ask the RDM server to hand back secrets unless explicitly enabled by an administrator. That lowers the risk of credentials being captured in model training pipelines or chat logs. (petri.com)
Together, these choices reduce the principal surface where an LLM must be trusted: it needs to understand available operations and outputs, not passwords.
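The credential‑injection pattern can be sketched as follows. `Vault` and `mcp_run` are hypothetical names, not Devolutions APIs, but the invariant they illustrate is the one described above: the MCP layer handles only an opaque connection ID, while authentication happens inside the tool.

```python
# Sketch of credential injection: the MCP layer only ever sees an opaque
# connection ID; the vault lookup and authentication stay inside the tool.
class Vault:
    def __init__(self, secrets):
        self._secrets = secrets  # never exposed to the MCP layer

    def open_connection(self, connection_id):
        secret = self._secrets[connection_id]     # used here, returned nowhere
        return f"session authenticated for {connection_id}"

def mcp_run(vault, connection_id, command):
    session = vault.open_connection(connection_id)  # auth happens inside the vault
    # The model-visible result carries output and the opaque ID, never the secret.
    return {"connection": connection_id, "session": session,
            "result": f"{command} ok"}

vault = Vault({"conn-42": "s3cr3t"})
reply = mcp_run(vault, "conn-42", "uptime")
assert "s3cr3t" not in str(reply)  # nothing secret in the model-visible reply
```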

Security and governance analysis: strengths and remaining risks

Devolutions’ approach addresses several high‑value concerns, but it is not risk‑free. Below is a frank assessment.
Strengths
  • Session isolation by design. When named pipes are properly created with logon SID–based DACLs, other sessions on the same host cannot connect to the MCP endpoint, preserving per‑user isolation on shared systems. This is an established Windows IPC pattern.
  • Credentials never surface to the model. Injecting credentials at the connection layer avoids the worst practice of giving an assistant plaintext secrets. Defaulting credential‑return APIs to “off” is a practical safety default. (petri.com)
  • Human‑in‑the‑loop execution. Requiring operator confirmation for generated commands keeps a human validator in place during rollout, limiting autonomous destructive actions. (petri.com)
  • Audit surface remains centralized. Actions performed through RDM should still flow into the same logging and Devolutions Server/Hub audit trails that organizations already rely on, avoiding an opaque side channel. (petri.com)
Residual and emergent risks
  • Privilege escalation via MCP clients. If an MCP client (or its host) is compromised, the local proxy could be misused to drive actions in RDM. The named pipe DACL reduces the attack surface, but operators must also harden local endpoints and limit which MCP clients are permitted.
  • Malicious or buggy code generation. LLMs can generate commands that appear syntactically correct but are semantically dangerous. RDM’s confirm‑before‑execute model helps, but organizations should initially require higher scrutiny and possibly a staged approval flow for sensitive systems. (petri.com)
  • Supply‑chain and vendor risk for LLM providers. Even if credentials are not given to the model, prompts and contextual outputs may cross provider boundaries (for example, when using cloud models). Treating LLM vendors as third‑party providers and validating training‑data/retention policies remains essential. (petri.com)
  • Misconfiguration of IPC security. Creating named pipes with overly permissive security descriptors or failing to bind them to the logon SID undermines session isolation. Windows supports session‑scoped DACLs, but engineers must implement them correctly. Microsoft’s documentation explains the controls and the use of the logon SID.
In short: Devolutions’ architecture reduces a class of data‑exfiltration risks by design, but it does not eliminate the need for endpoint hardening, vendor risk management, and careful rollout controls.

MCP transport choices: why named pipe vs. network services matters

Not all MCP servers are created equal. Two common classes of transport are:
  • Network‑accessible MCP servers (HTTP‑based transports) that listen on a local or remote endpoint.
  • Local, OS‑native IPC transports (stdio, local proxies, named pipes) that avoid network exposure.
Network‑accessible MCP servers can be simpler for distributed deployments and cloud connectors, but they raise the bar for segmentation, TLS, certificate management, and discovery controls. Conversely, local IPC transports confine the attack surface to the endpoint and are easier to govern with OS access controls, but they require careful per‑endpoint policy and make centralized orchestration more complex.
Devolutions chose a local subprocess + named‑pipe path to emphasize session isolation and to keep credential handling strictly inside RDM. That’s a tradeoff many security teams will prefer for high‑risk admin tooling, but it does require updating endpoint hardening standards to include MCP‑client whitelisting, pipe ACL validation, and process integrity checks. (petri.com)

Practical adoption path: how to pilot safely

Devolutions and practitioners recommend a cautious, staged approach that preserves governance while delivering productivity gains. A five‑step pilot plan follows the Petri / Devolutions guidance, with a few extra controls IT teams should add:
  • Start with a known, vetted MCP client. Use a well‑understood client (examples reported by vendors include VS Code with GitHub Copilot) and configure it in a dedicated test environment. Limit client installation to a small group of pilot users. (petri.com)
  • Treat LLM providers like third‑party vendors. Conduct a vendor security review: where are models hosted, what are the data retention and training clauses, can you opt out of model training, and what SLAs and incident response commitments exist? Don’t assume “local” equals private. (petri.com)
  • Keep credentials non‑negotiable and enforce vault‑only access. Adopt the rule that MCP clients must refer to connections (e.g., “use connection ID 42”) rather than retrieve secrets. Disable credential‑return APIs by default and only enable them under explicit, auditable conditions. (petri.com)
  • Require human approval for all execution at first. Start with a maker/checker model: the assistant proposes code or commands; an operator reviews and clicks confirm. After a controlled run period and rigorous audit review, consider low‑risk safe paths for limited automation. (petri.com)
  • Pilot on high‑friction, low‑blast‑radius tasks. Look for repetitive, time‑consuming work that is currently impractical to automate manually: bulk cleanup of entries, standard configuration‑drift repairs, or mass renaming/reclassification tasks in vaults. These deliver quick ROI and limit exposure while you refine controls. (petri.com)
Extra controls IT teams should add
  • Endpoint allowlisting for MCP clients and signing checks for MCP client binaries.
  • Strict local audit logging of MCP proxy connections and user confirmation events.
  • An automated rule to validate any proposed command against an allowlist / regex policy for destructive patterns.
  • Periodic red‑team exercises that simulate a compromised MCP client to validate isolation controls.
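The third extra control, validating proposed commands against a policy for destructive patterns, might look like this minimal sketch (the patterns are illustrative examples, not an exhaustive or production‑ready policy):

```python
import re

# Block anything matching known-destructive patterns before it even reaches
# the operator's approval step. Real policies would be longer and per-platform.
DESTRUCTIVE = [
    re.compile(r"\brm\s+-rf\b"),
    re.compile(r"\bRemove-Item\b.*-Recurse", re.IGNORECASE),
    re.compile(r"\bformat\b", re.IGNORECASE),
    re.compile(r"\bdrop\s+(table|database)\b", re.IGNORECASE),
]

def command_is_safe(cmd: str) -> bool:
    """Return False if the proposed command matches any destructive pattern."""
    return not any(p.search(cmd) for p in DESTRUCTIVE)

assert command_is_safe("Get-Service spooler")
assert not command_is_safe("rm -rf /var/log")
assert not command_is_safe("DROP TABLE sessions")
```

A denylist like this is a backstop, not a substitute for human review: it catches careless generations cheaply, while the maker/checker step catches what regexes cannot.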

Technical deep dive: how Windows named pipe scoping actually works

Windows allows you to control access to named pipes using security descriptors attached when creating the pipe. A robust pattern to scope a pipe to a single interactive user session is to place the creating session’s logon SID in the pipe’s DACL. That way, attempts to open the pipe from a different user or session will fail the access check. Microsoft’s documentation explains how security descriptors control named‑pipe access and explicitly recommends using the logon SID to prevent remote or cross‑session connections when session isolation is required. This is the precise mechanism Devolutions says it uses to keep its MCP pipe limited to the active RDM user session.
Implementers should also be aware that privileged processes can create global namespace objects; therefore, to maintain isolation the creating process should avoid elevated privileges that grant SeCreateGlobalPrivilege, or explicitly set a restrictive DACL that denies access to other principals. Missteps here can negate any benefit from the named‑pipe approach. Community and Microsoft guidance both stress correct DACL construction and explicit logon‑SID usage.
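As a conceptual model only (this is not Win32 code), the access decision reduces to comparing the connecting client's logon SID against the entry the pipe's creator placed in the DACL; the SID values below are hypothetical:

```python
# Toy model of logon-SID pipe scoping. In real Windows code this check is
# performed by the kernel's access check against the pipe's security
# descriptor; here we mirror only the decision it makes.
def can_connect(pipe_dacl_logon_sid, client_logon_sid):
    """A client opens the pipe only if its logon SID matches the DACL entry."""
    return client_logon_sid == pipe_dacl_logon_sid

PIPE_OWNER_SID = "S-1-5-5-0-1234567"  # hypothetical logon SID of the creating session

assert can_connect(PIPE_OWNER_SID, "S-1-5-5-0-1234567")  # same logon session: allowed
assert not can_connect(PIPE_OWNER_SID, "S-1-5-5-0-999")  # other session: denied
```

The model also makes the failure mode obvious: if the DACL grants a broad principal (say, Everyone) instead of the logon SID, every session passes the check and the isolation guarantee evaporates.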

Comparative view: how other vendors and MCP implementations differ

MCP is becoming a de‑facto open standard with multiple transport and governance patterns. Some MCP reference implementations favor network services and centralized connectors that make enterprise observability and scaling easier, but that increases requirements for certificate management, network segmentation, and tenant isolation controls. Others — including some desktop or local assistant architectures — emphasize local proxies, stdio transports, or in‑process connectors that avoid broad network exposure.
Devolutions’ RDM takes the local, session‑scoped route for admin tooling, which aligns well with the principle of least privilege for high‑impact endpoints. The broader ecosystem trend, however, is toward a hybrid world: MCP as a neutral protocol, with each organization selecting the transport and governance model appropriate to its risk profile. News coverage and vendor blogs reflect that convergence and the tradeoffs between openness and compartmentalization.

Governance checklist for security teams

Before approving an MCP pilot in your environment, verify the following items and document them in a one‑page risk assessment:
  • Which MCP clients are allowed and where are they installed? (Allowlist + signing requirements)
  • Are MCP servers exposed over the network or only via local IPC? (Transport topology)
  • Are named pipes (or other IPC) created with a logon‑SID DACL where session isolation is required? (DACL verification)
  • Are credential‑return/inventory APIs disabled by default? (API surface check)
  • Is human approval required for all initial command execution? (Workflow control)
  • Are all MCP actions logged to your existing audit/SIEM pipeline? (Observability)
  • Has your LLM vendor been reviewed for data handling, training clauses, and opt‑out mechanisms? (Vendor governance)
If any item is “no” or “unknown,” delay production rollout until remediation is complete. (petri.com)
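That go/no‑go rule is simple enough to encode directly; the keys below abbreviate the checklist items and are, of course, illustrative:

```python
# Sketch of the "any 'no' or 'unknown' blocks rollout" rule applied to the
# checklist above.
def rollout_decision(checklist: dict) -> str:
    blockers = [item for item, answer in checklist.items()
                if answer in ("no", "unknown")]
    return "remediate: " + ", ".join(blockers) if blockers else "proceed"

assessment = {
    "client_allowlist": "yes",
    "local_ipc_only": "yes",
    "logon_sid_dacl": "unknown",   # not yet verified
    "cred_return_disabled": "yes",
    "human_approval": "yes",
    "siem_logging": "yes",
    "vendor_review": "no",
}
decision = rollout_decision(assessment)
assert decision.startswith("remediate")
print(decision)  # names logon_sid_dacl and vendor_review as blockers
```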

Why IT pros should learn to be “power users” of agentic tools

Agentic AI will change how operational teams work, but it won’t replace technical judgment. The immediate winners are teams that combine domain knowledge with AI fluency: they train prompts, tune workflows, and govern agent behavior to scale expertise rather than relinquish control. Marc‑André Moreau’s message is practical: adopt well‑scoped agentic tools where they reduce friction — and insist the platform preserves isolation, auditability, and human oversight. For admins, that means becoming experimenters and governors: run careful pilots, feed the lessons back into policy, and champion the productivity gains inside your org. (petri.com)

Conclusion: practical optimism, not blind trust

Pairing Agent Mode AI with remote access tools is an obvious next step: it removes low‑value busywork, lets skilled operators focus on judgment, and can dramatically accelerate repetitive administrative tasks. Devolutions’ RDM MCP server shows a pragmatic architecture that keeps credentials out of the model, scopes interactions to the active session via named pipes, and preserves human review and audit trails. Those are sound security priorities.
That said, these systems only reduce — not eliminate — risk. Endpoint hardening, stringent DACLs for IPC objects, vendor security reviews, and conservative rollout plans with human approvals are essential. Organizations that adopt MCP thoughtfully, focusing first on high‑friction, low‑blast‑radius tasks and proving safe controls, will gain the productivity upside while keeping governance intact. For IT leaders, the question is no longer if agentic AI will touch remote access; it’s how you’ll make that contact safe, observable, and manageable. (petri.com)

Source: Petri IT Knowledgebase, “Why IT Pros Are Pairing ‘Agent Mode’ AI With Remote Access”