Microsoft’s recent guidance on Copilot Studio agent security is both a wake-up call and a practical roadmap: as organizations race to embed AI agents into workflows, a predictable set of misconfigurations—broad sharing, weak or maker-owned authentication, HTTP request misuse, dormant artifacts, and hard‑coded secrets—is creating high‑value attack paths that traditional controls and perimeter defenses typically miss. Copilot Studio agents are no longer curiosities; they are operational tools that access data, call APIs, and perform actions across enterprise systems. That capability is precisely what makes them valuable—and precisely what creates new, composable attack surfaces when governance and identity are not treated as first‑class concerns. Microsoft’s defender-led analysis catalogues the top 10 misconfigurations they repeatedly observe in customer tenants and pairs each with ready‑to‑run detections in Microsoft Defender Advanced Hunting.
Source: Microsoft Copilot Studio agent security: Top 10 risks you can detect and prevent | Microsoft Security Blog
The attacks we’re seeing fall into two broad patterns:
- Social‑engineering and consent abuse (for example, agents that present login/consent flows to capture OAuth grants).
- Automation‑led exfiltration and privilege misuse (for example, agents issuing HTTP requests or sending email that leak sensitive data or retrieve tokens from cloud metadata endpoints).
Why this matters now: concrete, independently documented incidents show how these configuration mistakes can become exploitation vectors.
- Tenable’s SSRF research demonstrated that Copilot Studio’s HttpRequestAction primitives could be abused to bypass SSRF protections and retrieve cloud instance metadata (IMDS), and in one case obtain tokens that led to read/write access to internal Cosmos DB resources. That report (and Microsoft’s subsequent mitigation) proves the class of risk in HTTP request actions is realistic and high impact.
- Researchers (publicized as the “CoPhish” technique) illustrated how agents hosting login/consent UI inside legitimate Microsoft domains can be used to harvest OAuth grants and tokens via social engineering—an attack that does not require software flaws in the agent platform itself, only sufficient trust in the hosted UI. Microsoft confirmed the technique and emphasized product updates and tenancy controls to mitigate it.
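The SSRF class Tenable described hinges on agents being steered toward the cloud metadata endpoint. A minimal egress guard illustrates the control that closes this class of request; the function name and policy are illustrative, not part of Copilot Studio or Microsoft's tooling, and a production version would also need DNS-rebinding checks.

```python
import ipaddress
from urllib.parse import urlparse

def is_allowed_target(url: str) -> bool:
    """Hypothetical egress policy: require HTTPS and reject literal IPs in
    link-local (which includes the IMDS address 169.254.169.254), private,
    or loopback ranges."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = parsed.hostname or ""
    try:
        addr = ipaddress.ip_address(host)
        if addr.is_link_local or addr.is_private or addr.is_loopback:
            return False
    except ValueError:
        pass  # Hostname, not a literal IP; DNS-rebinding checks omitted here.
    return True
```

Note that the IMDS address falls in the 169.254.0.0/16 link-local range, so a single range check covers both the metadata endpoint and similar internal targets.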
Overview of the Top 10 risks (what they are and why they’re dangerous)
Below is a concise, operational summary of each risk Microsoft enumerates, with why it matters to defenders and how you can detect it quickly.

1. Agent shared with entire organization or broad groups
- What: Agents that are published to Org‑wide audiences or overly broad security groups.
- Why dangerous: Expands visibility and potential abuse—any user (or attacker with a compromised account) can trigger actions or access knowledge bases they shouldn't.
- Detect: Microsoft provides an Advanced Hunting query to find agents shared broadly; start there to inventory and re‑scope sharing.
2. Agents that do not require authentication
- What: Agents published with no authentication or that allow unauthenticated access.
- Why dangerous: Public or unauthenticated agents are effectively internet‑facing entry points; anyone with the URL can exercise actions or exfiltrate data.
- Detect: Use the “AI Agents – No Authentication Required” detection and immediately review any findings.
3. Agents with HTTP Request actions using risky configurations
- What: Agents that make raw HTTP requests (HttpRequestAction) to non‑HTTPS endpoints, non‑standard ports, or internal services.
- Why dangerous: Raw requests can bypass connector governance and, as Tenable showed, can be chained into SSRF to read metadata or issue token‑bearing calls.
- Fix: Prefer built‑in connectors, which bring identity, validation, and throttling.
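A triage pass over exported HttpRequestAction URLs can be sketched as follows. The function name and risk labels are illustrative, not part of any Microsoft detection; the point is to flag the three configurations called out above (non‑HTTPS, non‑standard ports, internal hosts) for human review.

```python
from urllib.parse import urlparse

def flag_http_action(url):
    """Return a list of risk flags for a configured HttpRequestAction URL.
    A triage-helper sketch; tune the host heuristics to your environment."""
    parsed = urlparse(url)
    flags = []
    if parsed.scheme != "https":
        flags.append("non-https")
    if parsed.port not in (None, 80, 443):
        flags.append("non-standard-port")
    host = parsed.hostname or ""
    if host == "localhost" or host.endswith(".internal"):
        flags.append("internal-host")
    return flags
```

Run it over every URL in your agent inventory and route any non-empty result into the manual-review queue.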
4. Agents capable of email‑based data exfiltration
- What: Agents that send email where recipient or content is dynamically generated by the model.
- Why dangerous: With generative orchestration, an attacker can steer content or recipients at runtime to exfiltrate internal data to external addresses.
- Detect: Inventory agents that use email sending actions and check whether recipient fields are freeform or constrained. Microsoft’s detection queries call this out explicitly.
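Constraining model-generated recipients to an approved domain list is the simplest guard against this exfiltration path. A minimal sketch, assuming a hypothetical allowlist of approved domains checked before any send action runs:

```python
ALLOWED_DOMAINS = {"contoso.com"}  # hypothetical approved-domain list

def recipients_allowed(recipients):
    """Reject the send if any recipient falls outside approved domains.
    Illustrative pre-send check, not a Copilot Studio feature."""
    for addr in recipients:
        domain = addr.rsplit("@", 1)[-1].lower()
        if domain not in ALLOWED_DOMAINS:
            return False
    return True
```

The design choice matters: fail closed on the whole message rather than silently dropping the offending recipient, so a blocked send surfaces as an auditable event.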
5. Dormant connections, actions, or agents
- What: Published or unpublished agents, connectors, or actions that have not been used or reviewed for extended periods.
- Why dangerous: Orphaned artifacts often bypass active governance and become easy targets for attackers.
- Detect: Use dormancy detections (e.g., Published Dormant 30d) to surface and either reassign ownership or decommission.
6. Agents using author (maker) authentication
- What: Agents configured to use the creator’s personal credentials at runtime rather than the invoking user’s identity.
- Why dangerous: This creates privilege escalation—every user invoking the agent runs with the maker’s permissions, violating least privilege and separation of duties.
- Detect & Fix: Use admin controls in Power Platform to block maker‑provided credentials and enforce end‑user authentication; Microsoft documents these controls and provides admin flows to turn them on.
7. Agents containing hard‑coded credentials
- What: API keys, tokens, or connection strings embedded directly into topics or actions.
- Why dangerous: Easy to read and outside secret‑management controls—these are immediate credential‑leak risks.
- Detect & Fix: Hunt for literal secrets in agent definitions and move all secrets to Azure Key Vault referenced by environment variables.
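Hunting for literal secrets in agent definitions can start with a few regexes over the exported text. These patterns are rough illustrations of common secret shapes, not an official detection set; expect to tune them and to triage false positives.

```python
import re

# Rough patterns for common secret shapes; tune for your estate.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|client[_-]?secret|password)\s*[:=]\s*['\"]?[A-Za-z0-9/+_-]{12,}"),
    re.compile(r"AccountKey=[A-Za-z0-9+/=]{20,}"),          # Azure storage connection strings
    re.compile(r"eyJ[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{10,}"),  # JWT-shaped tokens
]

def find_secrets(definition_text):
    """Return all substrings of an agent definition that look like
    embedded credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(definition_text))
    return hits
```

Anything this surfaces should be rotated, not just moved: assume an embedded secret has already been read.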
8. Agents with Model Context Protocol (MCP) tools configured
- What: MCP tools provide custom, potentially undocumented access paths between the model context and external systems.
- Why dangerous: If MCP tools are unreviewed, they can create hidden actions that behave outside expected governance controls.
- Detect: Use the “MCP Tool Configured” query and enforce lifecycle reviews for every MCP tool configuration.
9. Agents with generative orchestration lacking instructions
- What: Orchestrations that give the LLM free rein without strong, explicit instructions or constraints.
- Why dangerous: Models can “drift” or be manipulated by prompt injections; without tight constraints, agents may take unapproved actions.
- Detect & Harden: Require clear instruction sets in agent orchestration; hunt for published orchestrations that have no instructions.
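Hunting for published orchestrations with no instructions can be automated over an agent inventory export. The field names below are illustrative, not a Microsoft schema; adapt them to whatever export format your inventory actually produces.

```python
def missing_instructions(agent):
    """Flag an agent whose generative orchestration has no meaningful
    instruction text. Field names are hypothetical; map to your export."""
    if not agent.get("generativeOrchestration", False):
        return False
    instructions = (agent.get("instructions") or "").strip()
    return len(instructions) < 20  # heuristic: too short to constrain anything
```

The length threshold is a crude heuristic; a better version would also check that instructions actually scope tools and recipients, not just exist.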
10. Orphaned agents (no active owner)
- What: Agents whose owners have left the organization or whose accounts are disabled.
- Why dangerous: No one is accountable for reviews, decommissioning, or permission changes; orphaned agents are a classic shadow‑IT risk.
- Detect & Remediate: Use the “Orphaned Agents with Disabled Owners” detection and assign a new owner or retire the agent.
Practical detection and response (how to use the Microsoft guidance right away)
Microsoft’s post pairs each risk with a named Advanced Hunting Community Query you can run in the Defender portal (Security portal > Advanced hunting > Community Queries > AI Agent folder). If you’ve only got 30 minutes to act, run these prioritized steps:

- Run discovery queries to inventory agents shared org‑wide, unauthenticated agents, author‑authenticated agents, and agents that include HttpRequestAction. These queries produce the immediate, high‑priority hits.
- Quarantine any unauthenticated or publicly exposed agents until an owner validates intent. Public exposure and unauthenticated agents are the fastest path to external abuse.
- Hunt for HttpRequestAction usage and correlate with tenant telemetry for IMDS reads, unexpected managed identity token issuance, or unusual Cosmos DB access patterns (historically associated with SSRF chains). Tenable’s SSRF research shows these specific telemetry patterns are realistic signals of abuse.
- Review actions configured to send email with dynamic recipient inputs; temporarily constrain email senders/recipients to an approved list until you can enforce domain allowlists.
- Enable or enforce the Power Platform admin control to prevent maker‑provided credentials in sensitive environments; this change forces end‑user authentication for actions that access sensitive services. Microsoft documents the exact admin workflow.
- Move secrets from agent logic to Azure Key Vault and use environment‑referenced secrets instead of hard‑coding tokens. Implement rotation and auditing for any managed identity or federated credential used by agents.
A concise mitigation playbook (operational checklist security teams can adopt today)
The mitigations are straightforward in concept; the challenge is operationalizing them at scale. Below is a prioritized and repeatable checklist.
- Inventory and classify
- Run the Defender Advanced Hunting community queries for the Top‑10 list.
- Create a canonical registry (owner, purpose, data sensitivity, connectors, last used date).
- Lock down identity flows
- Enforce end‑user authentication in production environments (block maker‑provided credentials via Power Platform admin settings).
- If an agent must run autonomously, require a dedicated service identity with least privilege and short‑lived tokens.
- Harden orchestration
- Require explicit orchestration instructions and disallow model‑decided recipients or endpoints for any outbound actions (email, HTTP).
- Ban non‑HTTPS and non‑standard port calls unless reviewed and approved.
- Protect secrets and rotate artifacts
- Migrate all embedded secrets to Azure Key Vault and use environment references.
- Rotate credentials after any suspicious discovery; revoke tokens tied to abandoned or maker‑authenticated actions.
- Reduce exposure
- Scope agent sharing to role‑based groups, not org‑wide audiences.
- Disable public/demo publishing for agents that touch sensitive production data.
- Operationalize governance
- Require named owners and periodic certification (e.g., quarterly review).
- Gate production publication by security review and require code/manifest review for agents using network primitives (HttpRequestAction, MCP tools).
- Monitor and respond
- Add SIEM rules to watch for IMDS/metadata reads, unusual managed identity token grants, and correlated Cosmos DB access aligned to agent calls. Tenable’s PoC showed these are high‑fidelity signals to prioritize.
- Integrate agent detections with SOAR playbooks to quarantine risky agents automatically.
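The inventory, dormancy, and ownership checks in the checklist above can be sketched as a minimal registry record. The field names and 30‑day threshold are illustrative (matching the Published Dormant 30d detection mentioned earlier), not a Microsoft schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AgentRecord:
    """Hypothetical registry entry: owner, owner account status, last use."""
    name: str
    owner: str
    owner_active: bool
    last_used: datetime

def review_flags(record, now, dormancy_days=30):
    """Flag agents that need governance attention: disabled owners
    (orphaned) and long-unused artifacts (dormant)."""
    flags = []
    if not record.owner_active:
        flags.append("orphaned")
    if now - record.last_used > timedelta(days=dormancy_days):
        flags.append("dormant")
    return flags
```

Feeding each flag into a ticketing workflow (reassign owner, recertify, or decommission) is what turns the one-off hunt into the repeatable review the playbook calls for.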
Critical analysis: strengths of Microsoft’s approach and remaining blind spots
Microsoft’s top‑10 guidance and the Defender detection set are strong on visibility and pragmatic remediation. They excel in three areas:

- Practicality: Each risk is paired with an Advanced Hunting query—this means defenders can find evidence rapidly rather than guessing what to look for.
- Policy levers: Microsoft provides concrete admin controls (for maker credentials and user authentication) so defenders can change enforcement per environment without waiting for code changes.
- Alignment with prior incidents: The guidance specifically addresses exploitation patterns demonstrated by independent research (SSRF, token harvesting), showing the vendor’s recommendations are threat‑informed.
Yet important blind spots remain:

- Hosted processing invisibility. Some exfiltration or token captures happen in vendor‑hosted model inference pipelines and may not generate traditional endpoint or network signals. Defenders therefore remain dependent on vendor telemetry and post‑event forensic support. Tenable’s SSRF work highlights how server‑side behaviors can bypass local network monitoring.
- Social‑engineering risk. CoPhish‑style consent harvesting leverages UI trust; technical controls can help (admin consent, conditional access), but user training and consent governance are still essential and imperfect.
- Scale of governance. Organizations that allow developers and business users to create agents at will will struggle to keep pace with reviews and approvals. This is a people + process problem, not solely technical.
- Third‑party & supply‑chain agents. Agents built by vendors or partners introduce a supply‑chain risk; vetting and runtime constraints for third‑party agents are expensive and currently immature across many toolchains.
Operational case study: the simple Help‑Desk agent and the cascade of misconfigurations
Microsoft’s example of a Help‑Desk agent neatly illustrates how multiple low‑risk decisions accumulate into a high‑impact vulnerability. An agent created to fetch customer records using an MCP tool, shared broadly inside the org, and published without authentication will likely check off several risk boxes: broad sharing, maker or no authentication, MCP tool exposure, and potential for data exfiltration via email or HTTP requests—exactly the pattern the Top‑10 guidance warns about. Run the Advanced Hunting queries against that tenant and you will likely find multiple flags in a single scan.

From a remediation perspective, the fix is straightforward: restrict sharing to a support group, require end‑user authentication or a dedicated service identity for scheduled runs, remove any hard‑coded secrets, and require an owner to certify the agent’s scope and connector permissions. The hard part is institutionalizing that review at scale.
Long‑term posture: treat agents as identities and production services
The most important conceptual shift is to stop thinking of agents as lightweight scripts or demos and to treat them as non‑human identities with full lifecycle controls. This includes:

- Agent identity primitives (Entra Agent IDs or similar service identities).
- Lifecycle governance (registry, expiry, owner, automated decommission).
- Runtime controls (DLP, content safety, and policy‑based runtime allow/block decisions).
- CI/CD and PR‑style approvals for agent manifest changes.
Final recommendations — what security teams should do this week
- Immediately run the Microsoft Defender Advanced Hunting community queries in the AI Agent folder and prioritize:
- Unauthenticated agents
- Org‑shared agents
- HttpRequestAction usage
- Maker‑authenticated agents
- Agents with email send actions.
- Enforce the Power Platform admin control to prevent maker‑provided authentication in production; for necessary autonomous agents, require dedicated managed identities with least privilege and short‑lived tokens.
- Block public/demo publishing for any agent connected to production data, constrain email recipients, and require explicit instruction sets for generative orchestration.
- Hunt and monitor for high‑fidelity telemetry signals highlighted by research:
- IMDS/metadata reads and unexpected managed identity token issuance (SSRF indicators).
- Sudden Cosmos DB access patterns aligned with agent activity.
- Treat discovered agents as production assets: assign owners, enforce periodic reviews, and migrate any embedded secrets to Azure Key Vault with rotation policies.
Conclusion
Copilot Studio agents unlock real productivity gains, but the same primitives that make them powerful—dynamic orchestration, connector integration, and lightweight publishing—also enable high‑impact abuse when left unmanaged. Microsoft’s Top‑10 risks and the Defender Advanced Hunting queries give security teams a practical starting point: discover fast, quarantine risky artifacts, enforce identity controls, and harden orchestration and secret handling. The best defense is not a single control but a discipline: treat agents as identities, enforce least privilege, and bake governance into the development lifecycle. The research‑backed cases (SSRF token theft and consent‑based token harvesting) show the threat is real; the mitigations are known and actionable—now the operational work remains.