Edge Copilot Auto-Open from Outlook Links: Privacy, UX, and Admin Risks

Microsoft’s most recent Edge experiment — automatically opening the Copilot side pane when you click links from Outlook — is a small UI change with outsized implications for privacy, user control, and how Microsoft positions AI inside everyday workflows. The feature is being tested on the Edge side and listed on the Microsoft 365 roadmap as a way to surface contextual summaries and “next-action” suggestion chips when users follow links from Outlook, but the reaction from many users has been immediate and negative: what looks like a productivity shortcut to Microsoft reads like an aggressive nudge to many people who already distrust Copilot’s ubiquity.

Background​

Microsoft has been steadily folding Copilot across its products: from the Copilot Chat sidebar in Word, Excel and Outlook to new agentic features inside Edge that treat the browser as an “AI-first” surface rather than a neutral window to the web. That strategy explains why Edge now contains multiple Copilot entry points — and why the browser team is experimenting with ways to make Copilot the default contextual assistant while users browse. This shift is part of a larger, company-wide push to make generative AI a built-in productivity layer rather than an optional add-on.
The specific change under test is straightforward in concept: when a user opens a link that originated in Outlook, Edge would automatically open the Copilot side pane and surface contextual insights derived from the originating email and the destination webpage — highlighting key points and offering recommended actions without forcing the user to interrupt their browsing to summon Copilot. Microsoft’s roadmap entry frames this as a time-saver that “helps users quickly understand content, take action with fewer steps, and get more value from Copilot while extending productive browsing time in Edge.”
But a roadmap blurb and a real-world rollout are different animals. Microsoft’s experiments with Copilot integration have included many UI nudges and automatic behaviors — from Copilot prompts on the New Tab page to hints in the address bar — and user response has repeatedly skewed towards resistance when those nudges feel defaulted or unavoidable.

What Microsoft says the feature will do​

  • When a user opens links from Outlook, Edge will optionally open the Copilot side pane automatically.
  • Copilot will analyze the email context and the destination page to produce short summaries, highlight important points, and suggest "next actions" via suggestion chips.
  • Microsoft pitches this as non-disruptive: the feature is meant to “provide contextual insights … without disrupting the browsing flow.”
These bullet points come from the Microsoft 365 roadmap description of the test. The roadmap entry does not — at least in its public summary — make clear whether the behavior will be on by default, or how exactly the browser will behave for signed-in versus non-signed-in users. That omission matters: defaults shape behavior, and defaults that favor Microsoft’s assistant over user preference are what trigger the strongest backlash.

Why this matters: UX, defaults, and the slippery slope​

  • Defaults shape product adoption. Repeatedly, Microsoft has shown that when AI functionality is enabled by default, it becomes the path of least resistance for users — even for those who prefer other tools or want no AI assistance at all. The Copilot push in Edge is part of a broad strategy to make the assistant visible and useful across many surfaces; the auto-open-on-Outlook-link test is another step in that direction.
  • Attention and workflow disruption. Even if Copilot opens in a side pane, the visual and attention costs are non-trivial: users expect a link click to change the central content area, not to also trigger an assistant UI. Some users will find the pane helpful; many will find it distracting — particularly when links are opened rapidly during triage of email or when a user’s mental model is “click link, read page.” The feature’s benefit hinges on a subtle balance: enough contextual value to justify the window dressing, but not so intrusive that it becomes a performance or attention burden.
  • The path from suggestions to action. Suggestion chips — quick actions that prompt further Copilot behavior — can reduce friction for users who want the assistant’s help. But they also prime users to accept Copilot-curated actions rather than exercise independent judgment. That trade-off has real consequences in workplace contexts where actions may involve sharing, filing, or acting on sensitive content.

Privacy and security concerns​

The auto-open feature raises a set of privacy and governance questions that deserve explicit scrutiny.
  • Data flow and inference: To give useful insights, Copilot needs context from both the email (subject, body, sender, potentially attachments) and the destination page. That means metadata and possibly content are being correlated across apps. For privacy-conscious users and enterprises, that cross-surface inference is non-trivial. Microsoft’s Copilot updates have already expanded the assistant’s ability to read inbox content when permitted, which heightens sensitivity around any automatic cross-app behavior.
  • Proven bugs and lapses. The recent history of Copilot includes at least one serious operational bug in which the assistant processed emails labeled as “Confidential” or otherwise governed by sensitivity labels, bypassing Data Loss Prevention protections — a behavior Microsoft attributed to a server-side logic error and which it patched after discovery. That incident underscores the stakes of automating assistants that ingest email content to provide summaries. Any new auto-open behavior will inevitably raise questions: what safeguards are in place to prevent Copilot from acting on, caching, or summarizing content that should remain protected?
  • User control and the “zombie toggle” perception. Community reports and discussions have surfaced cases in which users perceived Copilot as re-enabling or appearing even after they tried to disable it, generating a “zombie” feeling around the assistant. Whether those reports represent bugs, confusing settings, or simply miscommunication, they have eroded trust — and they make any new automatic behavior feel riskier to the broader user base.
Taken together, these points mean that even a well-intentioned UX feature can create significant privacy and governance tension, particularly for enterprise administrators who must enforce compliance and for individuals who rely on predictable, auditable application behavior.

Enterprise governance and admin controls​

Enterprises will want clarity and controls before accepting a behavior that automatically surfaces AI summaries in a browser context. The big questions for IT organizations include:
  • Will the feature be controllable via Group Policy, MDM, or an Edge administrative template?
  • Will behavior vary depending on whether the user is signed in to a corporate Azure AD account?
  • Are there tenant-level settings in Microsoft 365 that affect whether Copilot in Edge can access message context or cross-application metadata?
Microsoft’s broader Copilot rollouts have indicated that admin controls and staged opt-ins are part of the enterprise story — but implementation details matter. Administrators should expect to see policies and controls tied to Copilot features across Windows, Edge, and Microsoft 365; but they should also prepare for a period of rapid change in which policies are added, modified, or clarified as Microsoft collects feedback and telemetry.
Practical advice for IT teams (high level):
  • Inventory Copilot surfaces in your environment — Windows Copilot, Edge Copilot, and in-app Copilot sidebars for Office apps.
  • Validate which Copilot features are enabled by default for your tenant and whether any staged rollouts are scheduled.
  • Test the Outlook→Edge link workflow in a controlled environment to understand what metadata and content are sent to Copilot.
  • Push policies that disable the feature by default in production if you have concerns, using MDM or Group Policy until the behavior and controls are fully documented by Microsoft.
Because Microsoft’s rollout plans and administrative controls have changed before, admins should treat roadmap dates and initial defaults as tentative until Microsoft publishes formal documentation and policy settings.
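For teams that want a stopgap now: Microsoft has not yet published a policy specific to this auto-open behavior, but Edge’s documented HubsSidebarEnabled administrative policy disables the browser sidebar (which hosts the Copilot pane) entirely. As a blunt interim control it can be enforced via Group Policy or the equivalent registry value. A sketch, assuming the standard Edge policy registry location; note this disables the whole sidebar, not just the Outlook-link behavior:

```
Windows Registry Editor Version 5.00

; Force-disable the Edge sidebar (which hosts the Copilot pane)
; until a Copilot-specific auto-open policy is documented by Microsoft.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"HubsSidebarEnabled"=dword:00000000
```

Once Microsoft documents a narrower policy for the Outlook-link behavior, that should replace this sidebar-wide switch.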

Product analysis: why Microsoft is doing this, and whether it makes sense​

From Microsoft’s product strategy perspective, the feature is logical.
  • Microsoft wants Copilot to add value where users already work. Turning a link click — the most common form of context transfer in email workflows — into an opportunity for helpful summarization seems like a straightforward place to start. If Copilot can quickly tell you why a linked article matters or which parts of an email are critical, users might save time and avoid unnecessary steps.
  • The company is also competing in an AI arms race where visibility matters. Edge and Copilot together form a platform play: making the assistant visible in more places builds habitual usage and funnels more interactions into Microsoft’s models and services. Other products are pursuing similar “AI-first” strategies, and Microsoft appears determined not to cede default-in-browser attention to competitors.
But the feature has notable UX and trust risks.
  • The product assumes users want unsolicited assistance. That assumption is fragile. Many users prefer predictable, minimal UIs and will reject features that interrupt established mental models — especially if toggles are buried or defaults are aggressive.
  • It centralizes AI decisions without clear provenance. When Copilot highlights “key points” or recommends actions, users need to know where those suggestions came from, whether they are reliable, and whether the assistant accessed private material to produce them. Without clear provenance and easy ways to audit or opt out, the suggestions risk being treated as authoritative even when they’re not.
  • It increases the surface area for compliance failures. As the DLP bug showed, even well-tested systems can have logic errors that bypass protections. Automatically invoking Copilot on content that originated in email broadens the attack and failure surface, which matters for legal, regulated, and security-conscious enterprises.

How this could be implemented responsibly​

Microsoft could mitigate many concerns by designing the feature with three non-negotiable guardrails:
  • Opt-in rather than on by default, for both consumers and enterprises, with a clearly visible and persistent toggle. If the goal is to increase helpfulness, Microsoft should first win consent rather than rely on inertia.
  • Transparent data handling. Any time Copilot uses email context, the UI should explicitly show what content was read, what was used to form the summary, and whether sensitive items were excluded. A compact provenance header (e.g., “Summary generated from: message subject, first paragraph, linked page”) would reduce confusion.
  • Enterprise-level policy and telemetry controls. IT admins need the ability to disable or tune the feature centrally and to control whether Copilot may access message bodies or attachments for automatic summaries. Logs and audit trails should show when the assistant read inbox content and what actions were suggested or taken.
If Microsoft commits to those design principles and ships the necessary admin tooling, the feature could be a helpful time-saver. Without them, it looks like another example of aggressive default behavior that invites pushback. Several recent Copilot moves — from in-app chat sidebars to auto-install plans — show Microsoft pursuing ubiquity, but ubiquity without control breeds resentment.

What users should do now​

Below are practical, conservative steps readers can take to prepare for or mitigate the impact of an auto-open Copilot pane in Edge:
  • Audit your Edge and Outlook settings. Look for Copilot-related toggles in Edge’s settings and in Outlook, and note whether Copilot features are enabled by default for your account or tenant. If documentation is sparse, test behavior with a non-critical email account in a controlled environment.
  • Review organization policies. If you manage IT for an organization, check MDM and Group Policy templates for new Copilot-related settings and consider staging the rollout until policies are verified.
  • Prepare to opt out. If you prefer not to use Copilot, identify the quickest way to disable it in your Edge profile (settings, privacy, or Copilot sections) and teach your team how to do the same. Expect these paths to change as Microsoft refines the feature, so keep a short internal how-to up to date.
  • Filter sensitive content. For enterprises, ensure that sensitivity labels and DLP policies are properly configured and validated against Copilot behaviors in test tenants. The previous incident in which Copilot processed sensitivity-labeled emails shows why this step is essential.
  • Monitor the roadmap and changelogs. Microsoft has a history of updating roadmap entries and rolling features in stages; stay alert to official changelogs and admin center notices that announce policy and behavior changes. Treat roadmap timelines as tentative until confirmed by product documentation.

Strengths of the feature (why some will like it)​

  • Efficiency: For users who trust Copilot and want faster orientation when clicking links from mail, the auto-open pane can offer immediate summaries and action suggestions without extra clicks. That reduces context switching and could speed decision-making in many workflows.
  • Context-aware assistance: If implemented with careful privacy boundaries, Copilot can provide context that static searches can’t, such as correlating the sender’s note with the linked article to prioritize what to read.
  • Accessibility: A consistent Copilot pane could help users with cognitive load issues by distilling long articles or extracting action items from messages and web pages.

Risks and the potential downside​

  • Perceived coercion: Repeated default nudges across products create the sense that Microsoft is steering users toward its assistant whether they want it or not, eroding trust.
  • Privacy and compliance hazards: Automatic cross-app analysis of email and web content increases the chance of sensitive information being processed inadvertently, as company incidents have shown.
  • UI clutter and cognitive load: Even a side pane is still interface real estate; users who open dozens of links a day could find a persistent Copilot pane a significant distraction.
  • Vendor entrenchment: The more Microsoft makes Copilot the path of least resistance, the greater the lock-in effect for users and organizations that might prefer alternatives.

Conclusion​

The Edge auto-open Copilot feature is emblematic of Microsoft’s broader product thesis: integrate AI everywhere users work, then optimize for convenience and habit. That thesis makes sense as a growth strategy — Copilot is only useful if users encounter and rely on it — but it rests on a fragile social contract of consent, transparency, and control.
If Microsoft wants this feature to be seen as helpful rather than paternalistic, it needs to err on the side of explicit consent, clear provenance for AI outputs, and robust admin controls. The technical promise is real: contextual summaries and action chips could save time. The political and privacy risk is also real: automatic cross-app assistance raises legitimate questions for individuals and enterprises alike, particularly after documented incidents where Copilot processed content it should not have.
For now, users and IT teams should expect experimentation and change. Treat roadmap timelines and feature summaries as indicators rather than guarantees, validate behavior in controlled environments, and demand clear settings and policies before accepting automatic behaviors that reach across email and browsing. Microsoft still has time to choose restraint — making this feature opt-in, transparent, and manageable would turn an intrusive test into a genuinely useful productivity capability. Until then, skepticism from privacy-minded users and administrators is not only understandable, it’s prudent.

Source: Windows Central Microsoft can’t help itself — Edge may soon auto-open Copilot
 
Microsoft’s plan to have Microsoft Edge automatically open the Copilot side pane when a user clicks links from Outlook is small in code and big in consequence — it’s a feature that promises faster context and suggested actions, but it also raises clear questions about defaults, privacy, administrative control, and user trust that Microsoft will need to solve before this ships broadly.

Background​

Microsoft has been steadily folding Copilot — its generative AI assistant — into browsers, Office apps, and Windows itself as a core productivity layer rather than an optional add‑on. The specific roadmap entry shows Edge can “automatically open the Copilot side pane” when a link is launched from Outlook, so Copilot can surface “contextual insights and actionable suggestion chips” drawn from the email and the destination page. That roadmap entry lists the feature as in development with a rollout timeline starting in May 2026.
This is not a theoretical conversation: Microsoft has been experimenting with tighter handoffs between Outlook/Teams and Edge for some time — shared links views, profile-aware external link handling, and sidebar context have appeared in Microsoft’s documentation and experimental channels. Those earlier integrations explain the technical affordances that make this Copilot auto‑open possible.

What Microsoft says this will do​

Microsoft’s public description frames the feature as a productivity shortcut: when a user follows a link from an Outlook message, Edge will open the target page and the Copilot side pane will optionally appear on the right. From that workspace Copilot can:
  • Highlight key points on the page and in the originating email.
  • Produce a compact summary of the destination content.
  • Surface “suggestion chips” — short, actionable prompts such as drafting a reply, extracting action items, or calling out important dates or contacts.
The vendor pitches this as “helping users quickly understand content, take action with fewer steps, and get more value from Copilot while extending productive browsing time in Edge.” That framing appears verbatim in the roadmap summary.
Important verification notes:
  • The roadmap text uses permissive language — “can” and “can provide” — which suggests the capability is being introduced but does not make explicit whether it will be enabled by default for all users. This is a critical distinction for privacy and UX.
  • Microsoft’s public blog and roadmap entries confirm the feature is in development and tied to the broader agenda of Edge + Microsoft 365 integration; they do not include full rollout details such as default state, tenant policy controls, or telemetry behaviours. Those specifics remain to be published.

Why users are pushing back​

The reaction across informed communities has been swift and skeptical. The reason is simple: a small UI behavior change becomes contentious when it is perceived as an automatic, persistent nudge to an always‑on assistant. There are three overlapping user concerns:
  • Defaults matter. When convenience features are enabled by default they become de facto experience choices that are hard to opt out of in practice. Many users and admins view default enabling of AI features as a form of behavioral nudging rather than neutral software design.
  • Privacy and data flow. Opening Copilot automatically implies the assistant will immediately access both the email context and the destination page to generate suggestions. Where that processing happens (locally vs. cloud), what metadata is sent, and how long it’s stored are top‑line questions for privacy‑sensitive users and enterprises.
  • UX interruptions and accessibility. Even when helpful, uninvited side‑panes are disruptive. Users who juggle multiple accounts, who rely on minimal screen real estate, or who use assistive technologies can find sudden pane openings disorienting or outright blocking.
The forum and community reaction — from power users to enterprise admins — has echoed these concerns and called for clarity around opt‑out controls, policy enforcement, and telemetry transparency.

Technical and governance implications​

Data flow and processing model​

From a technical governance standpoint, two core questions must be answered before admins and privacy teams will accept this feature:
  • What content is accessed? Does Copilot ingest the full email body, attachments, or only a short snippet and the link metadata when forming suggestions? The roadmap language suggests both “email and destination content” are used, but it is unclear whether attachments or sensitivity‑labelled content are excluded by default.
  • Where is processing performed? Is the contextual analysis done purely on‑device, or is data transmitted to Microsoft’s cloud services (and possibly routed through model providers) to generate suggestions? The enterprise risk profile differs dramatically between local inference and cloud‑based model calls.
Both of these vectors are central for compliance and DLP (Data Loss Prevention) controls. Microsoft has already had to adapt its Purview DLP and Copilot processing policies to handle sensitive content, which indicates that the company understands the risks but has not yet fully mitigated them for every integration. Enterprise admins will expect clear policy knobs exposing whether Copilot may process emails or pages that carry sensitivity labels.

Tenant and admin controls​

Effective enterprise rollout will require:
  • Group Policy / Intune configuration that can disable the Copilot auto‑open behavior for managed devices.
  • Purview DLP rules that prevent Copilot from processing content labelled as sensitive or restricted.
  • Audit trails showing when Copilot processed email or page content and whether any external model endpoints were invoked.
Without those capabilities, large organizations will treat the feature as an unacceptable risk. Early signals from Microsoft’s enterprise messaging on previous Edge/Outlook cross‑integration suggest admins can manage some link‑handoff behaviors, but explicit Copilot controls must be documented and enforced.

Privacy, compliance, and DLP: what to watch for​

There are five specific privacy items every IT and security team should demand from Microsoft before approving a broad rollout:
  • Explicit opt‑out at device and tenant level. Users and admins must be able to ensure the Copilot pane does not auto‑open under any policy that blocks it.
  • Sensitivity‑aware processing. Copilot should automatically respect sensitivity labels and encryption states, and should be prevented from processing or summarizing content that classification marks as “confidential,” “restricted,” or similar.
  • Transparent telemetry. Records of what content Copilot used and which model endpoints were invoked must be logged in a tamper‑resistant way for compliance checks.
  • On‑device option or guaranteed minimization. Where feasible, allow customers to perform analysis locally or reduce metadata shared with cloud models to the minimum required for the suggestion.
  • Retention and deletion guarantees. Microsoft must publish how long contextual snippets and suggestions are stored and provide mechanisms to delete those traces on demand.
Microsoft has previously shipped DLP guardrails for Copilot in reaction to real product issues, which shows the company is capable of addressing such requirements — but it also demonstrates this work often happens after feedback and incidents, not before. Enterprises should insist on pre‑release governance documentation.

User experience and accessibility​

From a human‑centered design perspective, Copilot auto‑open is a classic trade‑off between convenience and control.
  • Benefits for targeted workflows:
      • Rapid summarization of links from newsletters, tickets, or support emails.
      • One‑click generation of reply drafts or follow‑up tasks, reducing friction in triage workflows.
      • Quick extraction of dates, amounts, or action items from landing pages without switching context.
  • Downsides for everyday browsing:
      • Unexpected UI changes (a side pane opening without warning) break visual flow and keyboard navigation.
      • Multi‑account users may see incorrect content if Edge opens the wrong profile and Copilot pulls data from a different account context.
      • Users with reduced screen real estate (small laptops, tablets) lose critical page space.
Design mitigations Microsoft should consider:
  • Non‑intrusive hinting: show a subtle banner or small icon offering to open Copilot, rather than opening it immediately.
  • “Don’t ask again” and persistent opt‑outs: both per‑device and per‑tenant.
  • Accessible transitions: ensure keyboard focus, screen reader announcements, and predictable pane opening behaviors.
Communities already report confusion and annoyance from earlier sidebar behaviors, where users struggled to find persistent toggles to disable similar features. Those past UX missteps increase skepticism about any new auto‑open behaviour.

Enterprise scenarios: acceptance vs. rejection​

Enterprises will split into three practical groups when this feature arrives:
  • Group A — Early adopters: Teams that prioritize productivity and have robust DLP and compliance tooling will test the feature in controlled pilots. They will accept the feature if Microsoft supplies granular admin controls and strict sensitivity handling.
  • Group B — Conditional adopters: Organizations that require privacy assurances will only enable the feature if Microsoft demonstrates on‑prem or EU‑regional processing options, or provides an on‑device mode.
  • Group C — Blockers: Regulated industries (healthcare, financial, government) are likely to block Copilot auto‑open by policy until they can audit and control all data flows and retention behavior.
IT administrators should be prepared for three immediate actions once this rolls into preview or production:
  • Audit the default state of the feature on corporate images.
  • Validate policy controls in Intune / Group Policy and test Purview interactions with sensitivity labels.
  • Communicate to end users what Copilot will and will not do, and provide a straightforward method to opt out.

Practical recommendations for users and admins today​

Whether you are a power user worried about pop‑ups, a privacy officer, or an administrator planning deployment, here are pragmatic steps to prepare:
  • For individual users:
      • Look for the Edge setting that controls Outlook/Teams context in the side pane (Edge Settings → Sidebar → Outlook/Teams) and set it to off if you prefer manual control. If you are seeing unexpected behavior, review the Outlook option for opening links in a specific browser or profile.
      • If you use multiple Microsoft accounts, ensure the profile mapping settings for external links are correct; mismatches can cause Copilot to appear with the wrong context.
  • For IT admins:
      • Inventory which business groups will benefit from Copilot‑powered link triage and which will not.
      • Prepare an Intune/Group Policy test plan that explicitly toggles the Copilot auto‑open behavior.
      • Update Purview DLP policies to explicitly deny Copilot processing of labelled content until Microsoft documents safe defaults.
      • Author internal guidance and a visible “how we use Copilot” FAQ for employees to reduce surprise and mistrust.
      • Monitor Microsoft 365 Message Center and the 365 roadmap updates for any changes to default behavior or new admin policies.
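As part of that test plan, admins can verify which Edge policies are actually in force on a managed device. A minimal check, assuming the standard Edge policy registry path; the Copilot-specific policy name, once Microsoft publishes it, would be verified the same way:

```
# From an elevated PowerShell prompt on a managed Windows device:
# list all Edge policies currently enforced via the registry.
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Edge' |
    Format-List

# Edge also reports its effective policies (including Intune-delivered
# ones) on the edge://policy page inside the browser itself.
```

Comparing the registry view against edge://policy is a quick way to confirm that an Intune or Group Policy change has actually propagated before relying on it in production.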

Risks Microsoft must mitigate before broad rollout​

If Microsoft hopes to avoid the backlash predicted by commentators and community members, it must address at least the following five risks before enabling the feature widely:
  • Default activation risk: Ship opt‑in or disabled‑by‑default experiences for consumer and managed enterprise channels to avoid surprise.
  • Cross‑profile leakage: Guarantee the correct work profile is used when opening links to prevent mixing personal and corporate contexts.
  • DLP circumvention: Ensure Copilot honors sensitivity labels automatically and refuses to process encrypted or restricted content.
  • Telemetry opacity: Provide admins with logs and the ability to audit Copilot’s content processing actions.
  • Accessibility disruption: Offer smooth, predictable pane transitions and keyboard/screen reader compatibility.
Failure to mitigate these could not only annoy users but create regulatory and legal headaches for Microsoft and its customers. Past incidents where Copilot processed sensitive corporate content led to rapid changes in Microsoft’s DLP posture — a pattern that underscores the stakes.

How Microsoft could ship a responsible rollout​

A responsible, phased approach would look like this:
  • Preview release limited to Edge Insider or a controlled tenant program, with explicit documentation of what is processed and where processing occurs.
  • Admin policy controls published and enforced via Intune and Group Policy before consumer rollout.
  • Default state: disabled for unmanaged consumer devices and disabled for managed enterprise images until policy is configured.
  • Privacy‑preserving options such as on‑device summarization or a low‑bandwidth metadata mode.
  • Clear UI affordances for users: small non‑modal prompts offering the Copilot pane, with a “Don’t ask again” option and a visible always‑on/off toggle.
Such an approach reduces risk and builds trust; it trades short‑term adoption numbers for long‑term credibility. Community feedback will be far more positive if Microsoft demonstrates restraint and gives people real control over the assistant’s presence.

Wider industry context: why this matters beyond Microsoft​

The Copilot auto‑open debate is emblematic of a broader crossroads in HCI and platform governance: whether AI assistants become ambient assistants that interject proactively, or remain explicit, user‑driven tools. Both models have valid use cases — but they imply different norms for consent, transparency, and control.
  • Ambient assistants can increase productivity for coordinated workflows, but they also centralize power and data in platform vendors.
  • Explicit assistants require an intentional user action, which preserves agency at the cost of friction.
Regulators and enterprise customers are watching this tension closely. The EU AI Act and other emerging frameworks emphasize transparency, human oversight, and risk‑based controls for high‑impact AI systems. That regulatory backdrop will shape how vendors like Microsoft deliver features such as automatic Copilot panes.

Conclusion​

Microsoft’s roadmap entry for Edge to auto‑open Copilot from Outlook links is a logical extension of the company’s strategy to weave AI across apps — and it will be useful in many scenarios. But usefulness alone does not neutralize legitimate concerns about defaults, data handling, and control. To ship responsibly, Microsoft must be explicit about what data Copilot will see and where it will be processed, provide strong tenant and device controls, and default to less intrusive behavior for the majority of users.
If Microsoft gets those controls and communications right — previewed to insiders, managed by admins, and opt‑in for most users — the feature can be a genuine productivity win. If Microsoft treats it as a silent default nudge without the required governance, it risks repeating old mistakes and deepening user distrust in AI features across Windows and Microsoft 365. The difference between a welcome assistant and an intrusive billboard is not how clever the AI is; it’s how well users and admins control when and how it appears.

Source: TechRadar https://www.techradar.com/computing...-email-links-and-i-can-feel-the-hate-already/