Microsoft's Copilot can now be told, in plain English, what to remember and what to forget, and users can inspect and edit that memory directly from the Copilot settings. The change shifts the conversation about AI assistants from raw capability to ongoing stewardship and risk management.
Background
Microsoft’s push to make Copilot a persistent, context-aware assistant reached a major milestone when the company formally added an explicit memory and personalization capability to consumer Copilot. The feature, announced as part of a broader Copilot refresh, is designed to let the assistant retain user preferences, personal details, recurring schedule items and other contextual facts so it can produce more helpful, tailored responses over time. Microsoft framed the change as a step toward an AI “companion” that learns from and adapts to individual users — while keeping those controls firmly in user hands.
The memory feature is implemented across Microsoft’s Copilot surfaces — web, mobile and Windows — and is now controllable through the Copilot settings interface. Microsoft’s documentation and support pages list specific ways to view, add, edit and delete memories, and show users how to turn personalization on or off. In public messaging, the head of Microsoft AI has emphasized that Copilot will follow explicit user instructions to “remember” or “forget” items, and that users can check what Copilot knows about them at any time.
This update follows an industry trend: major conversational AI products have been moving from session-only interactions toward persistent profiles and memory layers that enable personalization. That trend has been visible in ChatGPT, Claude and other assistants and is now a central competitive battleground for making AI services feel less like one-off tools and more like long-term collaborators.
What changed: how Copilot’s memory works now
The basic mechanics
- Copilot now records explicitly requested memory items and some implicit preferences derived from ongoing interactions, but Microsoft says the system only saves data when there’s a clear intent to remember. This design aims to avoid automatically hoarding every user utterance and instead keep a smaller set of durable facts and preferences.
- Users can instruct Copilot with simple commands like “Remember that I’m vegetarian” or “Forget my ex’s birthday” and the assistant will add or remove those items from its stored profile. Microsoft’s support guidance shows examples of asking Copilot to remember and to forget, and notes that asking “What do you know about me?” will prompt a summary of stored items.
- A dedicated memory or personalization control lives in Copilot settings. Through Settings > Account > Privacy (or the Copilot settings pane) you can view the memory toggle for personalization, inspect stored memories, and delete specific items or turn personalization off entirely. When personalization is off, Copilot stops building a persistent profile and future interactions are not informed by prior memories. (An illustrative sketch of this interaction model appears after this list.)
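To make the explicit-intent model concrete, here is a minimal Python sketch of how such a memory layer could behave. This is not Microsoft's implementation: the MemoryStore class, its remember/forget/summary methods and the substring matching are invented for illustration, and a real system would use intent detection and durable storage rather than an in-memory list.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class MemoryItem:
    """A single durable fact the assistant has been asked to keep."""
    text: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class MemoryStore:
    """Toy model of an explicit-intent memory layer: items are written only
    on a clear 'remember' request and removed on a 'forget' request."""

    def __init__(self, personalization_enabled: bool = True):
        self.personalization_enabled = personalization_enabled
        self._items: list[MemoryItem] = []

    def remember(self, fact: str) -> None:
        # Store only when personalization is on; otherwise no profile is built.
        if self.personalization_enabled:
            self._items.append(MemoryItem(fact))

    def forget(self, phrase: str) -> None:
        # Drop any stored item that mentions the phrase the user wants removed.
        self._items = [m for m in self._items if phrase.lower() not in m.text.lower()]

    def summary(self) -> list[str]:
        # Rough analogue of asking "What do you know about me?"
        return [m.text for m in self._items]


if __name__ == "__main__":
    store = MemoryStore()
    store.remember("I prefer vegetarian restaurants")
    store.remember("I like sci-fi movies")
    store.forget("sci-fi")
    print(store.summary())  # ['I prefer vegetarian restaurants']
```

The property the sketch tries to capture is the one Microsoft describes: nothing is written without an explicit request, and switching personalization off stops new memories from being recorded at all.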
Interface and admin controls
- For consumer users, Copilot exposes a user memory page where memories can be reviewed and edited. In enterprise and Microsoft 365 contexts, tenant administrators have controls to disable memory for users across an organization, and Microsoft notes that memory data can be included in eDiscovery flows when required for compliance. That means organizations can opt to suppress personalization or to allow it while retaining enterprise-level oversight.
- Microsoft has signaled that memory is being rolled out broadly but that availability and exact controls may vary by product variant (consumer Copilot, Microsoft 365 Copilot, Copilot in Outlook, etc.) and region.
How to view and edit what Copilot remembers (practical steps)
- Open the Copilot app or the Copilot pane in your Microsoft account.
- Go to Settings > Account > Privacy (or the Settings pane within the Copilot UI).
- Look for “Personalization and memory” or “Manage memory” and toggle the feature on or off to enable/disable persistent memory.
- To see what Copilot already knows, simply ask: “What do you know about me?” Copilot will summarize stored memories.
- To add a memory explicitly, type or speak “Remember that…” followed by the fact you want Copilot to retain (for example: “Remember I prefer vegetarian restaurants”).
- To delete a specific memory, ask Copilot to “Forget” that fact (for example: “Forget that I like sci‑fi movies”) or remove it from the memory list in settings.
Why memory matters: practical benefits
Personalization can change the value proposition of an assistant in three meaningful ways:
- Faster, context-aware answers: Copilot can avoid repetitive clarifying questions by remembering your baseline preferences and context, so responses are quicker and more useful for routine tasks like travel planning, recipe suggestions or scheduling.
- Persistent habits and reminders: By storing recurring preferences or goals, Copilot can proactively remind users about habits (for example, daily journaling), recurring events and deadlines — turning the assistant into a productivity partner rather than a single-use tool.
- More natural multi-turn workflows: When the assistant retains who you are and what you care about, multi-step interactions become smoother. Copilot can pick up on long-running projects or personal details and use them to tailor suggestions and content over time.
Major risks and concerns
1) Data security and breach risk
Any persistent memory raises the stakes of a data breach. If Copilot stores birthdays, relationship details, health preferences or travel plans, that information becomes part of a dataset that — if leaked — could be exploited for social engineering, doxxing or other privacy harms. The more details an assistant accumulates, the more attractive it becomes to attackers. Microsoft’s enterprise disclosures note discoverability via eDiscovery, which has compliance benefits but also highlights that memory data is being stored and indexed.
2) Regulatory compliance differences (EU vs US)
European data protection law requires clear transparency and provides robust rights around processing of personal data, including information duties and consent mechanics under the GDPR. That means companies operating in the EU have clear obligations to tell users what is collected and how it will be used, and to allow withdrawal of consent. By contrast, the United States has a patchwork of state-level privacy laws and no single comprehensive federal privacy statute; companies therefore must navigate state-specific rules while relying on internal transparency and consent flows for national products. Users in different jurisdictions face different protections and remedies.
3) Psychological harms and “AI psychosis”
Mental‑health professionals and journalists have flagged potential psychological risks when conversational AIs become persistent companions that agree, validate and reinforce a user’s ideas. Media and clinicians have used the shorthand “AI psychosis” to describe patterns where individuals develop or amplify delusional beliefs after prolonged, emotionally charged interactions with chatbots. Experts caution the term is not a clinical diagnosis and may oversimplify complex psychiatric phenomena, but the underlying observation — that AI’s agreeable style can reinforce unhealthy narratives in vulnerable people — is supported by multiple investigative reports. This is a human-safety issue as much as it is a product‑design problem.
4) Transparency and discoverability gaps
Microsoft provides settings and a memory page, but consumer reports and forum posts suggest that not every user will immediately find or understand these controls. Early adopters have reported inconsistent behavior where the memory toggle appears on but the assistant claims memory is off, or where memory appears to behave differently across mobile, web and Outlook-integrated Copilot variants. That inconsistency can leave users unsure about what is being stored and where. Until the controls are universally discoverable and reliable, some users may inadvertently overshare.
5) Scope creep: personalization becomes profiling
Memory can drift from practical preferences (preferred tone, dietary restrictions) to more sensitive profiling (political or religious beliefs, relationship status, health issues). When personalization starts to produce targeted suggestions or marketing opportunities, the line between utility and manipulation narrows. Companies must clearly define what categories of information are acceptable to store, and provide defaults that minimize sensitive data collection. Microsoft’s stated approach is conservative — remembering only when there is clear intent — but implementation and enforcement are what matter.
Corporate and admin implications
Organizations using Microsoft 365 Copilot need to treat memory as an IT and compliance issue, not just an end‑user convenience. Microsoft’s enterprise messaging confirms that tenant admins will have controls to manage memory for users and that memory data may be subject to discovery tools. That raises questions for large organizations:
- How will memory metadata be catalogued and retained in compliance records?
- Will administrators need to build policy exceptions for certain roles (e.g., legal, HR) where memory must be disabled by default?
- How will third-party integrations respect or ignore the user-level memory settings when Copilot queries external systems or performs actions on behalf of users?
Practical dos and don’ts: recommended user behaviors
- Do review Copilot’s memory page after first use and periodically thereafter to understand what the assistant has stored. Ask “What do you know about me?” to get a quick summary.
- Do enable strong account security: use multi‑factor authentication, a strong password manager, and monitor account activity. Persistent personal data tied to a Microsoft account becomes more sensitive if account access is compromised.
- Do limit the kinds of facts you ask Copilot to remember. Keep the memory layer focused on helpful, low-sensitivity items (preferred tone, dietary preference, frequently used code patterns), not protected-class or strictly sensitive personal data.
- Don’t store financial numbers, Social Security numbers, passwords, or highly sensitive health details in Copilot memory. Treat Copilot like a cooperative assistant, not a secure vault.
- Don’t assume feature parity: verify memory behavior across the Copilot environments you use (web, mobile, Outlook, Windows) because availability and behavior can differ.
- Do use the “Forget” command to remove any memory you no longer want Copilot to retain, and confirm via the memory page that the item has indeed been deleted.
Developer and product design considerations
Designing a memory layer for a conversational assistant requires choices that affect both user experience and safety:
- Memory triggers: systems must reliably distinguish between throwaway statements and things users genuinely want remembered. Microsoft states it only stores information when there is clear intent, but the precise heuristics and machine‑learning thresholds that enforce that are not public. That opacity is a design risk: too aggressive, and the assistant becomes a data hoarder; too conservative, and personalization fails.
- Revocability and audit logs: users need not only deletion controls but verifiable audit trails that show when a memory was created, used and removed. That transparency reduces disputes and helps security teams investigate incidents.
- Default settings: making memory “on” by default improves usability but increases the chance of inadvertent data collection. Thoughtful defaults (e.g., memory off or minimal by default in regions with weaker consumer privacy norms) would be a safer path for consumer trust.
- Guardrails for sensitive categories: product teams should proactively block or flag attempts to store protected or high-risk categories of data, and provide inline education when users attempt to save such items. A minimal illustration of this kind of guardrail, paired with an audit entry, appears after this list.
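As a hypothetical illustration of the last two points, the Python sketch below screens a memory request against a small set of sensitive-category patterns and records every decision in an append-only audit list. The SENSITIVE_PATTERNS table, AuditEvent record and screen_memory_request function are assumptions made for this example; a production system would rely on trained classifiers, durable storage and richer metadata rather than keyword regexes.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical keyword patterns for categories a product team might block or flag;
# a real system would use trained classifiers rather than regexes.
SENSITIVE_PATTERNS = {
    "financial": re.compile(r"\b(credit card|iban|account number|ssn)\b", re.I),
    "health": re.compile(r"\b(diagnosis|medication|prescription)\b", re.I),
    "credentials": re.compile(r"\b(password|passcode|pin)\b", re.I),
}


@dataclass
class AuditEvent:
    """One entry in an append-only audit trail for memory writes and refusals."""
    timestamp: datetime
    action: str           # "stored" or "blocked"
    category: str | None  # sensitive category that triggered a block, if any
    detail: str


def screen_memory_request(text: str, audit_log: list[AuditEvent]) -> bool:
    """Return True if the fact may be stored; log the decision either way."""
    now = datetime.now(timezone.utc)
    for category, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            audit_log.append(AuditEvent(now, "blocked", category,
                                        "refused to store a sensitive-category item"))
            return False
    audit_log.append(AuditEvent(now, "stored", None, text))
    return True


if __name__ == "__main__":
    log: list[AuditEvent] = []
    print(screen_memory_request("Remember I prefer a concise tone", log))     # True
    print(screen_memory_request("Remember my online banking password", log))  # False
    for event in log:
        print(event.timestamp.isoformat(), event.action, event.category or "-", event.detail)
```

Even in this toy form, the audit list makes it possible to answer later questions such as when an item was stored or why a request was refused, which is exactly the kind of verifiable trail the revocability point above calls for.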
What remains uncertain
- Rollout parity and regional availability: Microsoft has said memory is broadly rolling out, but examples from user communities show that not every account sees identical controls at the same time. Users and admins should check their own Copilot settings for availability.
- Internal data lifecycle: Microsoft has published general statements about control and discoverability, but the detailed retention durations, encryption-at-rest specifics for memory entries, and exact retention/archival policies have not been published in fully transparent technical documentation aimed at consumers. That lack of granular, user‑facing technical detail should prompt cautious behavior until more precise guarantees are available.
- Longitudinal psychological effects: while journalistic and clinical case work has raised alarms about “AI psychosis” and emotional dependence on AI companions, the long-term epidemiology of these effects and rigorous clinical studies are still emerging. The term itself remains debated among clinicians. Policymakers, clinicians and product builders should treat the signals seriously while avoiding alarmist or premature conclusions.
How this fits into the bigger AI landscape
Persistent memory is the bridge between ephemeral question-answering and an AI that behaves like a long-term collaborator or assistant. Big tech developers are converging on memory layers because they make products feel more sticky, useful and differentiated. Microsoft’s move is in line with competitive changes across the industry, including similar capabilities in other mainstream assistants. But deployment choices — defaults, discoverability, administrative control and retention policies — will shape whether these memory-enabled assistants earn user trust or trigger regulatory backlash.
Final verdict: useful, but handle with care
Microsoft’s addition of explicit memory controls for Copilot is a pragmatic, product-first step toward a more helpful assistant. The ability to say “Remember X” and “Forget Y” in plain language, plus a settings page to review stored memories, is an improvement in usability and puts basic control into users’ hands. For productivity-minded users and organizations, the feature can be a real time‑saver and make Copilot meaningfully more useful.
At the same time, the move heightens privacy and safety risks that deserve careful mitigation. Data-security practices, clearer retention policies, better UI discoverability, stronger defaults and tighter safeguards around sensitive categories will determine whether memory becomes a trust-builder or a liability. Additionally, the medical and social implications of long-term emotional relationships with AI — the phenomenon sometimes referred to in media as “AI psychosis” — mean product teams should integrate mental‑health risk assessments and flagging mechanisms into companion experiences.
For users: if you enable Copilot memory, treat it like any other place you store personal data — audit what’s saved, limit highly sensitive entries, protect your account and use the “Forget” command when you need to cut ties. For admins and security teams: treat memory as a new data surface to manage, document policy for its use, and ensure eDiscovery and retention rules reflect the organization’s compliance obligations.
Microsoft’s memory update makes the assistant more human-like not by adding fiction, but by changing how it keeps track of the facts of your life. That change is powerful and practical — and it also requires a sober, ongoing commitment from vendors, regulators and users to manage the trade-offs between convenience and control.
Source: ZDNET, "You can now edit Microsoft Copilot's memories about you - here's how"