Microsoft 365 Copilot Adds Mode Switcher, Pages, Researcher, Notebooks & Prompt Gallery

Microsoft 365 Copilot now includes lesser-used tools such as the Mode Switcher, Copilot Pages, Researcher, Copilot Notebooks, and the Prompt Gallery, giving licensed users more control over how AI answers, collaborates, researches, stores context, and reuses task-specific prompts across Microsoft 365. The important story is not that Microsoft has added yet another handful of buttons to an already crowded productivity suite. It is that Copilot is slowly moving from chat window to work system. If users and administrators still treat it as a smarter search box, they will miss both the productivity upside and the governance headaches.

Microsoft’s Copilot Pitch Has Moved Past the Blank Chat Box

The first wave of Microsoft 365 Copilot was easy to understand and hard to justify. It promised summaries, drafts, meeting recaps, email help, and a general-purpose assistant sitting beside Word, Excel, PowerPoint, Outlook, Teams, and the Microsoft 365 app. That made for clean demos, but in real offices it ran into a more stubborn problem: most people do not know what to ask an AI assistant once the novelty of “summarize this meeting” wears off.
The five features highlighted in Mike Tholfsen’s guide and summarized by Geeky Gadgets land squarely in that gap. Mode switching, Pages, Researcher, Notebooks, and Prompt Gallery are not glamorous in the way a new model benchmark is glamorous. They are workflow scaffolding. They try to turn Copilot from a conversational gamble into something closer to a repeatable office instrument.
That matters because the Copilot value proposition has always depended on context. Microsoft’s advantage is not that it alone has access to large language models; it is that it sits on the documents, chats, calendars, email threads, SharePoint libraries, Teams meetings, and permissions structure of the modern Microsoft workplace. These features are Microsoft’s attempt to make that context usable without forcing every employee to become a prompt engineer.
There is a catch. The more Copilot becomes embedded in the ordinary machinery of work, the less it behaves like an optional side tool. It starts touching records, meeting artifacts, reports, internal files, and collaborative planning spaces. For WindowsForum readers, especially administrators and security-minded power users, the question is no longer whether Copilot can write a better paragraph. It is whether Microsoft can make AI-assisted work predictable enough to trust.

The Mode Switcher Is Really a User Interface for Uncertainty

The Mode Switcher sounds like a small convenience: choose a quick response when you need brevity, or ask Copilot to think more deeply when the task deserves a longer, more analytical answer. In practice, it is Microsoft acknowledging a basic truth about generative AI: the same prompt can be too shallow for one job and too verbose for another. Users need a way to declare intent before the model starts producing text.
That is more significant than it looks. The classic Copilot interaction asks users to phrase a request and hope the model infers the correct level of depth, tone, evidence, and caution. A mode control moves some of that intent into the product interface. It tells the assistant whether it should behave like a fast autocomplete engine, a drafting partner, or a slower analyst.
The reported ability to choose between different underlying model styles sharpens the point. A user may want a model optimized for writing, another for reasoning, and another for structured analysis. Whether Microsoft exposes those choices broadly, hides them behind automatic routing, or changes their names over time, the direction is clear: Copilot is becoming a model broker rather than a single assistant.
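To make the "model broker" idea concrete, here is a minimal sketch of what a mode control amounts to under the hood: a routing layer that resolves a declared intent into a model choice and generation settings before any text is produced. Everything here is an illustrative assumption; the mode names, model names, and parameters are invented for this sketch, not Microsoft's actual routing.

```python
from dataclasses import dataclass

# Hypothetical illustration only: Copilot's real routing is internal and
# undocumented. This sketches the general pattern of mapping a declared
# mode to a model choice and generation settings before generation starts.

@dataclass
class GenerationPlan:
    model: str          # which underlying model family to invoke (assumed names)
    max_tokens: int     # how long the answer is allowed to be
    instructions: str   # system-level guidance shaping depth and tone

MODES = {
    "quick": GenerationPlan(
        model="fast-general",          # assumed name, for illustration
        max_tokens=300,
        instructions="Answer briefly. Prefer a direct summary over analysis.",
    ),
    "think-deeper": GenerationPlan(
        model="reasoning-optimized",   # assumed name, for illustration
        max_tokens=4000,
        instructions="Reason step by step, weigh alternatives, cite sources.",
    ),
}

def plan_request(mode: str, prompt: str) -> GenerationPlan:
    """Resolve the user's declared intent into a concrete generation plan."""
    plan = MODES.get(mode, MODES["quick"])  # fall back to the fast path
    print(f"Routing {prompt!r} to {plan.model} (limit {plan.max_tokens} tokens)")
    return plan

plan_request("think-deeper", "Compare our Q3 retention options")
```

The point of the sketch is that the lever moves intent out of prose and into structure: the user picks a mode once instead of rephrasing the prompt until the depth feels right.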
For users, this can reduce one of the most common Copilot frustrations: output that is technically responsive but wrong for the moment. A senior manager asking for a two-sentence decision summary does not want a research memo. A compliance analyst reviewing policy language does not want breezy simplification. Mode switching gives the user a visible lever for that distinction.
For IT departments, it also introduces a new support challenge. When employees can choose response styles or model families, training materials need to explain when those choices matter and when they do not. Otherwise, mode selection becomes another ritual button users click without understanding, like changing printer drivers in the hope that the jammed tray will magically clear.

Copilot Pages Turns AI Output Into Shared Work Product

Copilot Pages may be the most important feature in the set because it attacks a very specific failure mode of workplace AI. Chat output is ephemeral. It appears, looks impressive, gets copied into a document or pasted into Teams, and then immediately starts drifting away from its source and context.
Pages changes the shape of that interaction. Instead of treating Copilot’s answer as a disposable chat result, it turns the answer into a collaborative artifact that can be edited, shared, reused, and refined. Microsoft has tied this closely to Loop-style collaborative components, which makes sense: Loop was already Microsoft’s answer to the problem of lightweight, live, shared work surfaces inside Microsoft 365.
That puts Copilot Pages somewhere between a document, a whiteboard, a meeting note, and a project scratchpad. A team can use it to assemble an agenda, draft a plan, summarize decisions, or build a project outline without immediately committing to the heavier formality of a Word document or a Planner board. The AI helps generate the starting structure; the team then does the human work of editing, arguing, assigning, and deciding.
The enterprise significance is that Pages lives closer to Microsoft 365’s permission and storage model than a random AI chat transcript does. That does not make governance automatic, but it does make governance possible. If Copilot output becomes a page stored in the Microsoft 365 ecosystem, administrators at least have a fighting chance of understanding where it lives, who owns it, and how it is shared.
This is also where the Windows productivity story becomes less about Windows itself and more about Microsoft’s cloud operating environment. The PC remains the workstation, but the durable layer of work is increasingly SharePoint, OneDrive, Teams, Loop, and the Microsoft 365 app. Copilot Pages is another sign that Microsoft’s center of gravity is the graph of work, not the file sitting on a local desktop.

Researcher Is Microsoft’s Bet That Reports Are Workflows, Not Documents

The Researcher agent is pitched as a way to conduct more complex research, produce reports, compare information, add citations, and generate summaries or visual elements. That description may sound like a dressed-up web search assistant, but the ambition is larger. Microsoft is trying to automate the intermediate work between “I need to know something” and “here is a report my colleagues can use.”
That intermediate work is where many knowledge workers spend much of their time. They gather internal documents, scan market material, compare products, summarize customer feedback, extract themes from meeting notes, and turn loose material into something that resembles an argument. Researcher is meant to handle parts of that process, especially when the task spans multiple steps and sources.
The obvious use cases are analyst-heavy roles: marketing teams comparing competitors, consultants drafting client briefings, finance staff preparing internal summaries, product teams reviewing customer signals, and sustainability or compliance teams producing periodic reports. In these cases, Copilot’s value depends on whether it can keep track of source material, distinguish internal context from public information, and expose enough of its reasoning for a human reviewer to catch mistakes.
The caution is equally obvious. A generated report with citations and visuals can look more authoritative than it deserves. The more polished the output, the easier it is for a tired reader to assume the underlying work was rigorous. This is where features like critique or review modes are not decorative extras; they are survival gear.
Researcher should be treated as an accelerator, not an oracle. It can reduce the first-draft burden and surface patterns faster than manual scanning, but the professional responsibility remains with the user. In regulated, legal, financial, medical, government, or security-sensitive work, a Copilot-generated report needs the same scrutiny as work produced by a junior analyst who is fast, confident, and occasionally wrong.

Notebooks Gives Copilot the One Thing Chat Lacks: A Stable Memory Box

Copilot Notebooks is Microsoft’s answer to another common AI problem: context sprawl. Users often need Copilot to consider a specific bundle of material — files, notes, meeting records, reference documents, requirements, or project background — rather than the vague totality of everything they might have access to. A notebook gives that context a container.
That container matters because ordinary chat is too transient for many professional tasks. A product manager might have one set of materials for a launch plan, another for customer interviews, another for executive reporting, and another for competitive research. If all of that context is mixed into ad hoc prompts, the user spends half the time reminding the assistant what matters.
A notebook lets the user curate the working set. That is a subtle but important inversion of the usual Copilot promise. Instead of saying “use everything you know about my Microsoft 365 world,” the user can say “use this collection of materials for this job.” In enterprise AI, narrower context is often better context.
The payoff is not just better summaries. It is repeatability. If a team keeps the right documents and notes in a notebook, Copilot can generate updates, briefs, study guides, project summaries, or draft materials against a relatively stable source base. That turns the notebook into a living project memory rather than a one-time prompt accessory.
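The underlying pattern is simple enough to sketch: ground the assistant in a curated working set instead of everything the user can reach. The class and method names below are illustrative assumptions for this article, not a Microsoft API; they just show the "use this collection for this job" inversion in code form.

```python
# Hypothetical sketch of the pattern Notebooks embodies: restrict the
# model to a curated set of sources rather than the user's whole tenant.

class Notebook:
    def __init__(self, name: str):
        self.name = name
        self.sources: dict[str, str] = {}  # title -> extracted text

    def add_source(self, title: str, text: str) -> None:
        self.sources[title] = text

    def grounded_prompt(self, question: str) -> str:
        """Build a prompt that limits the model to the curated sources."""
        context = "\n\n".join(
            f"### {title}\n{text}" for title, text in self.sources.items()
        )
        return (
            "Answer using ONLY the sources below. If they do not contain "
            "the answer, say so.\n\n"
            f"{context}\n\nQuestion: {question}"
        )

launch = Notebook("Q4 launch plan")
launch.add_source("Requirements v2", "Ship EU region first; pricing freeze until Nov.")
launch.add_source("Exec notes, Oct 3", "Board wants a two-page risk summary.")
print(launch.grounded_prompt("What did the board ask for?"))
```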
Administrators should pay attention to where these notebooks are stored, how they inherit permissions, how retention applies, and whether users understand that adding sensitive reference material expands what Copilot may use in that workspace. The feature is useful precisely because it concentrates context. Concentrated context is also concentrated risk.

The Prompt Gallery Is Training Wheels Microsoft Cannot Afford to Remove

The Prompt Gallery may sound like the least sophisticated feature on the list, but it may be the one that drives the most everyday adoption. Prompting remains a usability tax. People who are excellent at their jobs are not automatically excellent at describing their jobs to an AI system.
A curated gallery of prompts solves a practical problem: it gives users a starting point. Instead of staring at a blank Copilot box, an employee can pick a prompt designed for a role, industry, or task and adapt it. That can improve output quality simply by reminding users to specify audience, format, source material, tone, constraints, and desired result.
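A good template is essentially a form that refuses to submit with blank fields. Here is a minimal sketch of that idea, using the fields named above; the field names and structure are assumptions for illustration, not the format Microsoft's gallery actually uses.

```python
# Illustrative only: a minimal template pattern of the kind a prompt
# gallery encodes. Field names are assumptions, not Microsoft's schema.

TEMPLATE = (
    "You are helping with: {task}\n"
    "Audience: {audience}\n"
    "Format: {fmt}\n"
    "Tone: {tone}\n"
    "Use only this source material: {sources}\n"
    "Constraints: {constraints}\n"
    "Desired result: {result}"
)

def fill_prompt(**fields: str) -> str:
    """Fail loudly if a field is missing, instead of letting the model
    guess -- which is exactly the ambiguity a gallery tries to remove."""
    return TEMPLATE.format(**fields)  # raises KeyError on any omitted field

print(fill_prompt(
    task="Summarize the weekly status meeting",
    audience="Department heads who skipped the meeting",
    fmt="Five bullet points plus one risks line",
    tone="Neutral, no marketing language",
    sources="The attached transcript only",
    constraints="Do not name individual attendees",
    result="A summary I can paste into the team channel",
))
```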
The gallery also reflects an uncomfortable reality for Microsoft. Copilot is expensive, both in license cost and organizational change effort, and enterprises will not tolerate a tool that only power users can exploit. Microsoft needs average employees to find repeatable value quickly. Prebuilt prompts are not a luxury in that model; they are onboarding infrastructure.
There is also a governance angle. If organizations can steer users toward approved prompt patterns, they can encourage safer and more consistent behavior. A finance department may prefer prompts that remind users to verify figures. A legal team may want language that avoids treating generated text as final advice. A security team may want prompts that discourage unnecessary inclusion of sensitive material.
But a gallery can only go so far. Prompt templates are still templates, and the output will vary based on permissions, data quality, tenant configuration, and the user’s own files. That means the same prompt may produce dramatically different answers for two people in the same company. Training must make that variability explicit, or users will mistake a prompt gallery for a vending machine.

The Real Feature Is Context Management

Taken together, these five tools point to a bigger design pattern. Mode Switcher manages response depth. Pages manages collaboration and persistence. Researcher manages multi-step inquiry. Notebooks manages curated context. Prompt Gallery manages user intent. Microsoft is building controls around the places where generative AI tends to fail.
That is the right problem to solve. The raw model is only one part of the productivity equation. In a workplace, the harder challenges are knowing what the assistant saw, why it answered the way it did, whether the answer reflects current or outdated material, whether it used the right files, and whether the output can be turned into something the organization can actually use.
This is where Copilot differs from consumer chatbots. A consumer chatbot can be impressive in isolation. Microsoft 365 Copilot has to survive contact with calendar politics, stale SharePoint folders, mislabeled documents, confidential HR files, retention policies, legal holds, executive impatience, and the eternal human habit of putting “final_final_v3” in a filename.
The features in Tholfsen’s guide are not magic fixes for those problems. They are interface-level attempts to make them manageable. The best version of Copilot is not the assistant that always produces the longest answer. It is the assistant that knows when to be brief, when to use a bounded set of materials, when to create a shared artifact, and when to help a user start from a tested pattern.
That is why these lesser-used capabilities deserve more attention than another generic “write me an email” demo. They show Microsoft grappling with the boring parts of AI adoption. Boring, in enterprise software, is often where the money is.

The Admin Burden Moves From Deployment to Behavior

For many IT departments, the first Copilot challenge was licensing and rollout. Who gets it? What does it cost? Which apps light up? What data can it access? Those questions still matter, but they are no longer sufficient.
Once users begin working with Pages, Notebooks, Researcher, and shared prompts, the administrative burden shifts toward behavior. IT has to understand not only whether Copilot is enabled, but how employees are using it to create, store, and circulate work. That is harder to document and harder to govern than a simple application deployment.
The most important control remains Microsoft 365 permissions hygiene. Copilot generally reflects what users are allowed to access, which means bad permissions become more visible and more consequential. If a user can reach an overexposed SharePoint library, Copilot can summarize or surface material that user would never have found manually, turning quiet overexposure into active leakage.
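Administrators do not have to wait for Copilot to find these problems for them. Here is a minimal audit sketch against real Microsoft Graph endpoints (drives/{id}/root/children and drives/{id}/items/{id}/permissions) that flags items shared through anonymous or whole-organization links. The token and drive ID are placeholders, and a production audit would need to page through results and cover every site, which this sketch deliberately omits.

```python
import requests

# Minimal oversharing check against Microsoft Graph. TOKEN and DRIVE_ID
# are placeholders; this covers only the root folder of one library.

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<app-or-delegated-access-token>"   # placeholder
DRIVE_ID = "<document-library-drive-id>"    # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def broad_links(drive_id: str):
    """Yield items shared via anonymous or organization-wide links."""
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS
    ).json().get("value", [])
    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=HEADERS,
        ).json().get("value", [])
        for perm in perms:
            scope = perm.get("link", {}).get("scope")
            if scope in ("anonymous", "organization"):
                yield item["name"], scope

for name, scope in broad_links(DRIVE_ID):
    print(f"{name}: shared via {scope} link")
```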
This is not a reason to avoid Copilot. It is a reason to stop postponing information architecture work. SharePoint sprawl, orphaned Teams, broken sensitivity labels, inconsistent retention policies, and vague ownership models were already problems. Copilot makes them operationally urgent.
Training should also move beyond prompt cleverness. Users need to know when to use a notebook instead of a normal chat, when to turn a response into a page, when Researcher’s report needs human verification, and when a quick answer is appropriate. Copilot adoption fails when employees are told only that AI will save time. It succeeds when they are taught which tool fits which kind of work.

Microsoft’s Naming Problem Is Still a Productivity Problem

Microsoft has a remarkable talent for making useful features sound like they were named by different committees on different continents. Copilot Pages, Loop Pages, Copilot Notebooks, Researcher, Agent Mode, Prompt Gallery, Microsoft 365 Copilot Chat, and Copilot agents all orbit the same broad idea: AI-assisted work inside Microsoft 365. To users, the distinctions can be blurry.
That naming issue is not merely cosmetic. If people cannot tell whether a Page is a Loop component, whether a Notebook is a OneNote notebook, whether Researcher is an agent, or whether a prompt belongs to them or their organization, adoption suffers. Confusion creates support tickets, and support tickets create skepticism.
Microsoft has improved the product experience since Copilot’s earliest enterprise rollout, but the brand architecture remains dense. The company is simultaneously selling Copilot as a chat assistant, an app feature, an agent platform, a workflow layer, and a premium productivity license. That may be strategically coherent inside Redmond. It is less coherent to a department head trying to train 200 employees.
The practical advice is to ignore the branding at first and teach the jobs. Use chat for quick interaction. Use Pages when the result needs to become shared work. Use Notebooks when the assistant needs a curated project memory. Use Researcher when the task requires multi-step synthesis. Use Prompt Gallery when users need a reliable starting pattern.
If Microsoft wants Copilot to become normal office behavior, it will need to keep sanding down these edges. The future of productivity AI may be agentic and model-routed, but the daily user experience still has to answer a basic question: which button do I click to get my work done?

Windows Users Will Feel This Through Microsoft 365, Not the Start Menu

For Windows enthusiasts, it is tempting to judge Copilot by the icon on the taskbar or the quality of the consumer assistant built into Windows. That is the wrong lens for this particular story. The more consequential Copilot is the one embedded in Microsoft 365, because that is where business context, permissions, collaboration, and paid adoption live.
The PC still matters enormously. It is where Outlook, Teams, Word, Excel, browsers, remote desktops, line-of-business apps, and admin consoles converge. But the Copilot features that change work habits are increasingly cloud-first and Microsoft 365-first. A Windows machine becomes the access point for a set of AI-mediated services rather than the exclusive home of the intelligence itself.
That shift has consequences for traditional Windows administration. Endpoint management remains critical, but identity, data governance, browser controls, conditional access, DLP, sensitivity labeling, audit logs, and SharePoint administration become even more central to the user experience. Copilot does not reduce the need for IT discipline; it rewards organizations that already have it.
It also changes what “power user” means. The old power user knew keyboard shortcuts, registry tweaks, Office macros, and where Windows hid the useful settings. The new power user knows how to curate context, structure prompts, verify AI output, manage shared artifacts, and choose the right Copilot surface for the task. That is a real skill shift.
The danger is a two-tier workplace. Employees who learn these patterns will quietly become much faster at producing first drafts, plans, summaries, and research packets. Employees who keep using Copilot as a novelty chatbot will wonder why the tool never justifies its license.

The Five Features Reveal Microsoft’s Real Copilot Roadmap

The concrete lesson from this batch of features is that Microsoft is not trying to win the AI productivity war by making chat slightly more charming. It is trying to place AI at the handoff points of office work: from idea to plan, from meeting to artifact, from file pile to summary, from research question to report, and from blank prompt to repeatable workflow.
That strategy is sensible because most office work is not one task. It is a chain of small transformations. Notes become summaries. Summaries become plans. Plans become documents. Documents become presentations. Presentations become meetings. Meetings become more notes. Copilot’s opportunity is to reduce friction between those states.
Pages and Notebooks are especially important because they give AI output somewhere to live. Researcher matters because it turns information gathering into a managed process. Mode Switcher matters because it gives users control over the shape of the answer. Prompt Gallery matters because it lowers the activation energy for people who do not speak fluent AI.
The weakness is that every one of these features depends on user trust. If Copilot invents details, mishandles context, produces uneven output, or hides too much of its reasoning, users will retreat to familiar tools. If the features are inconsistently available across tenants, licenses, regions, or app surfaces, administrators will struggle to explain why a demo does not match reality.
This is the recurring tension in Microsoft’s AI rollout. The company is moving quickly because the market rewards speed and because competitors are not waiting. Enterprises move slowly because trust, compliance, and training are not features that can be shipped in a monthly update. Copilot lives between those clocks.

The Copilot Tricks Worth Turning Into Habits

The useful way to read this feature set is not as a checklist of hidden tricks, but as a map of where Microsoft thinks work is going. The organizations that benefit most will be the ones that turn these tools into habits instead of treating them as demo curiosities.
  • Use the Mode Switcher when the cost of a wrong level of detail is high, such as executive summaries, policy analysis, project planning, or technical explanation.
  • Use Copilot Pages when an AI response needs to become a shared artifact that colleagues can edit, refine, and carry into Teams, Word, PowerPoint, or related Microsoft 365 workflows.
  • Use Researcher for multi-step synthesis, but treat its reports as drafts that require human review, especially when decisions, figures, citations, or compliance claims are involved.
  • Use Copilot Notebooks when a project has a defined body of reference material and you want Copilot grounded in that curated context rather than a loose conversational history.
  • Use the Prompt Gallery to standardize good starting points across teams, but remind users that the same prompt can produce different results depending on permissions and available data.
  • Audit Microsoft 365 permissions, sharing practices, and retention policies before Copilot turns old information hygiene problems into faster-moving ones.
The most interesting Microsoft 365 Copilot features are not the ones that make the flashiest keynote demos; they are the ones that make AI less like a parlor trick and more like office plumbing. Mode Switcher, Pages, Researcher, Notebooks, and Prompt Gallery all push in that direction, giving users more structure while giving administrators more to govern. The next phase of Copilot adoption will not be decided by whether AI can draft a paragraph, but by whether Microsoft and its customers can make AI-assisted work repeatable, reviewable, and boring enough to trust.

Source: Geeky Gadgets, "5 Microsoft Copilot Features You Probably Aren't Using Yet"
 
