OpenAI’s newest pilot brings ChatGPT into the same room as groups of people, testing threaded, multi-user conversations that pair up to 20 participants with the assistant in shared spaces designed for planning, decision‑making, and collaborative creation.
Overview
OpenAI has begun piloting group chats in ChatGPT across web, iOS, and Android in four regions: Japan, New Zealand, South Korea, and Taiwan. The experiment is available to logged‑in users across Free, Go, Plus, and Pro plans and lets a single group contain up to 20 people plus ChatGPT. Conversations remain separate from private one‑to‑one chats: adding people to an existing conversation creates a copied group thread while leaving the original private conversation intact. Responses inside groups are served by GPT-5.1 Auto, with search, file and image upload, image generation, and dictation enabled. ChatGPT behaves as a conversational participant that decides when to speak, supports an optional mention-only mode (responds only when called with @ChatGPT), allows emoji reactions, and can reference participant profile photos to generate personalized images. On the privacy side, personal memory is disabled once more than one human is present, group threads are handled separately from private conversations, and additional controls such as invite link resets, message deletion for everyone, and parental/under‑18 safeguards are built into the experience.
Background: why group chats, and why now
ChatGPT has evolved from a single‑user assistant into a multi‑modal platform that increasingly blurs the line between productivity app and social space. The move to group chats is a logical step in that trajectory: it brings ChatGPT into contexts where multiple people need shared situational awareness, collaborative summarization, or neutral facilitation.
Several product trends make this move timely:
- AI assistants are increasingly used as collaborative aids—summarizing meeting notes, drafting shared documents, and resolving disputes.
- Messaging platforms are central to group coordination; adding a capable assistant directly into those threads reduces friction compared with switching between apps.
- Advances in model behavior (contextual turn-taking, moderation, personalization) allow the assistant to behave like a genuine participant instead of replying to every utterance.
- The incremental pilot approach—starting in a limited set of countries and plans—lets OpenAI gather usage data and feedback before a global rollout.
How group chats work: mechanics and user experience
Starting and joining group chats
- Create a group by tapping the people icon inside any chat. Adding participants to an existing chat copies it into a group thread to preserve the original private conversation.
- Invite others via a shareable link; any participant can re‑share that link. Groups can include 1–20 people (a toy data‑model sketch of these joining rules follows this list).
- On first joining a group, participants are prompted to set a short profile (name, username, photo) so members know who’s contributing.
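The documented joining rules are simple enough to restate as a toy data model. The Python sketch below is purely illustrative, with invented names; it is not OpenAI's API, only a hedged restatement of the behavior described above: a chat is copied when a group is created, invite links are shareable tokens, and groups cap out at 20 people.

```python
# Illustrative only: invented names restating the pilot's documented rules,
# not OpenAI's implementation.
from dataclasses import dataclass, field
from typing import List
import secrets

MAX_HUMANS = 20  # pilot cap: up to 20 people, plus ChatGPT


@dataclass
class Thread:
    messages: List[str] = field(default_factory=list)
    participants: List[str] = field(default_factory=list)
    invite_token: str = ""


def create_group_from(private_chat: Thread, creator: str) -> Thread:
    """Adding people to an existing chat copies it into a new group thread;
    the original private conversation is left intact."""
    return Thread(
        messages=list(private_chat.messages),    # copied, not moved
        participants=[creator],
        invite_token=secrets.token_urlsafe(16),  # shareable, re-shareable link
    )


def join(group: Thread, user: str, token: str) -> bool:
    """Join via invite link, subject to the 20-person cap. On first join the
    real product also prompts for a short profile (name, username, photo)."""
    if token != group.invite_token or len(group.participants) >= MAX_HUMANS:
        return False
    group.participants.append(user)
    return True
```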
ChatGPT’s role and behavior
- Responses are powered by GPT-5.1 Auto, which dynamically selects a model variant based on the prompt and the plan of the person ChatGPT is replying to.
- ChatGPT acts like a participant: following the conversation flow, picking up on cues for when silence is appropriate, and reacting with emoji when helpful.
- There’s an optional mention-only mode: ChatGPT will respond only when explicitly tagged with @ChatGPT.
- When ChatGPT responds, available capabilities include search, image generation, file uploads, and dictation. Only ChatGPT's responses count toward usage/rate limits, not messages exchanged between human participants, and each response counts against the limit of the person ChatGPT is replying to (a short sketch of these rules follows this list).
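Two of those rules, mention gating and quota attribution, can be made concrete in a few lines. This is a hedged sketch of the stated behavior, not OpenAI's code: the function and variable names are invented, and the real turn-taking model is far more involved than a boolean flag.

```python
# Hypothetical sketch of the pilot's stated reply and quota rules;
# names are invented and the real decision logic is undisclosed.
from collections import defaultdict

responses_used = defaultdict(int)  # ChatGPT responses charged per user


def should_reply(message: str, mention_only: bool,
                 model_wants_to_speak: bool) -> bool:
    """In mention-only mode, reply only when tagged with @ChatGPT;
    otherwise the model judges from context whether to join in."""
    if mention_only:
        return "@ChatGPT" in message
    return model_wants_to_speak


def record_response(replied_to_user: str) -> None:
    """Only ChatGPT's responses count toward usage limits, and each one
    counts against the plan of the person ChatGPT is replying to."""
    responses_used[replied_to_user] += 1


if should_reply("@ChatGPT split the bill for us", mention_only=True,
                model_wants_to_speak=False):
    record_response("alice")  # alice's quota is debited, nobody else's
```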
Group-specific settings
- Each group can define its own custom instructions for tone, goals, and response preferences, separate from the account‑level custom instructions (a configuration sketch follows this list).
- Group settings allow renaming, adding/removing participants, muting notifications, and resetting or deleting invite links.
- Participants can remove others from a group; the group creator cannot be removed by others and leaves only by choosing to exit the group themselves.
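A plausible shape for those group-level settings is sketched below. The field names are invented for illustration; the point is simply that instructions live on the group, not on any individual account.

```python
# Invented field names; illustrates group-scoped settings only.
group_settings = {
    "name": "Kyoto trip, March",
    "custom_instructions": {
        # Applies to this group alone; account-level custom
        # instructions and personal memory do not carry over.
        "tone": "concise and practical",
        "goals": "compare itineraries and track bookings",
    },
    "respond_only_when_mentioned": True,  # optional @ChatGPT-only mode
    "notifications_muted": False,
}
```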
Privacy, safety, and control features
OpenAI has baked several safeguards into the pilot. These are central to user trust and to limiting compliance and regulatory exposure as the feature rolls out.
Key protections include (a simplified gating sketch follows the list):
- Separate handling of group threads: Group conversations are distinct—personal account memories and private custom instructions are not accessible in groups.
- Automatic memory disablement: Once more than one human participant is present in a thread, personal memory is disabled by default for that thread to avoid cross‑contamination of private data.
- Granular control for membership and content: Invite links can be reset or deleted, participants can be removed, and users can delete their own messages for everyone.
- Teen and minor safeguards: Teen accounts remain subject to parental controls, and when a minor (under 18) is present the group as a whole is placed into an under‑18 mode that reduces exposure to sensitive outputs for all participants.
- Opt‑in model behavior: ChatGPT’s presence can be controlled by mentions; it doesn’t interrupt or over‑participate by default.
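Expressed as gating logic, these safeguards compose into a small policy function. The sketch below is an assumption-laden restatement of the rules described above, not OpenAI's code; in particular, the age field is a stand-in for whatever account signals the product actually uses.

```python
# Simplified policy gating matching the described safeguards;
# all names are invented and 'age' stands in for real account signals.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Member:
    name: str
    age: int


def effective_policies(members: List[Member]) -> Dict[str, bool]:
    return {
        # personal memory is disabled once more than one human is present
        "personal_memory": len(members) <= 1,
        # a single under-18 participant puts the whole group into
        # reduced-sensitivity mode for everyone
        "under_18_mode": any(m.age < 18 for m in members),
        # account-level custom instructions never apply inside groups
        "account_custom_instructions": False,
    }


print(effective_policies([Member("Aiko", 34), Member("Ren", 16)]))
# {'personal_memory': False, 'under_18_mode': True,
#  'account_custom_instructions': False}
```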
Where this fits in the messaging and collaboration landscape
The addition of native group chats places ChatGPT in direct competition with a range of messaging and collaboration tools. The strategic play is not strictly “replace WhatsApp” or “replace Slack” but rather to insert a powerful AI collaborator into the flows people already use for coordination.
Comparative positioning:
- Messaging apps (WhatsApp, Telegram, LINE, KakaoTalk): ChatGPT group chats could appeal to users already coordinating events and personal groups, especially where AI facilitation (itinerary generation, polls, summarization) has immediate value.
- Collaboration suites (Slack, Microsoft Teams, Google Chat): Teams and businesses already use dedicated tools with admin controls and compliance. ChatGPT group chats could serve ad‑hoc workgroups or hybrid social‑work flows, but enterprise adoption will hinge on admin, compliance, and data residency features.
- Specialist study and project tools: For study groups, design teams, or hobbyist communities, an assistant that can summarize shared files, generate images from profile cues, and keep common context could be highly useful.
Strengths: what this pilot gets right
- Contextual collaboration: Bringing search, file summarization, and image generation into a shared thread reduces app switching and centralizes context. This is a practical win for trip planning, study groups, and design sessions.
- Turn‑taking intelligence: Teaching the assistant to decide when to speak avoids the “overly eager bot” problem and makes the AI feel more conversational and less disruptive.
- Granular per‑group behavior: Group‑level custom instructions let each shared space have a different tone or goal—useful for separating professional and social norms.
- Fine‑grained participant controls: Invite link reset, message deletion, and participant removal provide immediate operational controls for groups that don’t want to be permanently discoverable.
- Minor safety mode: The under‑18 mode that reduces sensitive outputs for an entire group when a minor is present addresses an early and obvious safety requirement for cross‑age groups.
- Incremental rollout strategy: Piloting in four regions and across Free/paid plans allows OpenAI to collect varied feedback and tune the model behavior before global expansion.
Risks and open questions
Despite the positive design choices, several risk vectors and open questions remain—some technical, some regulatory, and some social.
Privacy and data governance
- Memory and cross‑context leakage: While personal memory is disabled in group threads by default, the long‑term handling of group content (logging, retention, use for model training) is not fully specified in the pilot messaging. Users and organizations will demand clarity about how group messages are stored, for how long, and whether they may be used to improve models.
- Account linking and profile photos: The ability to reference profile photos for personalized image generation raises risks around consent and misuse (e.g., deepfake creation or image manipulation). Clear boundaries and opt‑outs will be necessary.
- Cross‑border data flows: Piloting in specific countries invites scrutiny regarding compliance with local data protection laws, especially in jurisdictions with stringent data residency or transfer requirements.
Safety and moderation
- Group dynamics and harassment: Group chats can amplify abuse dynamics (pile‑ons, coordinated harassment). While users can remove participants, more robust moderation tools and safety signals may be required to detect and mitigate bad behavior.
- Misinformation and authoritative outputs: In group settings, ChatGPT’s answers may be taken as neutral arbitration. Without explicit provenance or confidence signals, groups could over‑rely on the assistant for factual disputes.
- Rate limits and abuse: The rule that only ChatGPT responses count toward usage limits (and that they consume the quota of the person ChatGPT is replying to) creates potential for uneven resource use, confusion about billing, and possible exploitation (e.g., a participant intentionally invoking ChatGPT responses to exhaust another user’s quota).
Legal and regulatory exposure
- Minor protections and parental consent: While under‑18 mode reduces access to sensitive outputs, the interaction of parental controls across jurisdictions with different definitions of minor and consent law will need careful policy work.
- Liability for group decisions: When groups use ChatGPT to make financial, legal, or health decisions, questions arise about liability for defective or misleading AI outputs created or amplified in a group setting.
- Political coordination and content moderation: The platform will need policies to handle groups coordinating political activity or mass persuasion, especially during sensitive election windows or in countries with strict speech laws.
UX and adoption hurdles
- Onboarding friction: Requesting profile photos, usernames, and short bios when joining may deter casual participants or complicate fast onboarding for ephemeral groups.
- Discovery vs privacy tradeoff: Shareable invite links are convenient but can be easily leaked—managing link security and discoverability will be critical for private groups.
- Behavioral expectations: Users will test where ChatGPT should or should not participate. Fine‑tuning the assistant’s social behavior to satisfy diverse norms (workplace seriousness vs casual banter) will take iteration.
What to watch next: metrics and signals that will matter
OpenAI’s stated intention is to iterate based on feedback. Key metrics and signals to track during the pilot include:
- Engagement patterns: frequency of ChatGPT participation vs mention‑driven replies; percent of groups that use ChatGPT actively.
- Retention: are users returning to group threads for repeated collaboration, or are groups ephemeral?
- Moderation events: volume and types of removals, message deletions, and safety escalations.
- Incidents involving minors: counts and outcomes where under‑18 mode was triggered and whether parental controls were sufficient.
- Billing and quota confusion: reports of users surprised by usage counts or differences in rate limits when ChatGPT replies across members with different plans.
- Privacy complaints: requests to delete group data, concerns about image generation using profile photos, and data access inquiries.
Practical use cases: early adopters and scenarios
Trip and event planning
Groups can jointly share links, suggestions, and images while ChatGPT compares options, builds itineraries, and produces packing lists. The assistant’s ability to summarize shared files and web search results makes it a practical event planner.
Study and research groups
Students can drop articles and notes into a shared thread; ChatGPT can summarize, create study questions, and synthesize conversations into outlines.
Home and design collaboration
Roommates or partners can post images, inspiration boards, and floorplans; ChatGPT can generate mood boards, shopping lists, and mockups referencing profile preferences.
Ad hoc work collaboration
Small teams or contractors can use group threads for brainstorming or drafting content, with ChatGPT summarizing decisions and producing action items.
Social coordination
Friend groups coordinating meetups, game nights, or creative projects can use the assistant as an impartial facilitator or creative prompt engine.
Recommendations for users and administrators
- Use group‑level custom instructions to set expectations: define whether ChatGPT should act as a neutral moderator, creative collaborator, or strict summarizer.
- Limit invite links to trusted participants and periodically reset links for sensitive groups.
- For groups involving minors, enable parental controls and prefer smaller, well‑moderated rooms.
- Keep sensitive data out of group threads until data retention and usage practices are fully understood.
- Monitor usage and billing closely if multiple participants have differing plans; clarify which responses count against whose quota.
- For businesses, treat group chats as unsuitable for regulated or high‑compliance use until enterprise controls (admin dashboards, DPA, data residency assurances) are explicitly available.
Broader implications: AI as a social layer
This pilot is emblematic of a larger shift: AI moving from single‑user assistant to social participant. That shift raises philosophical and practical questions:
- How should an AI behave in a room full of people with competing goals and social norms?
- What trust signals should an AI present when it acts as arbiter or summary source in a group?
- How will norms evolve around using AI as a “tie‑breaker” or decision engine in social and professional contexts?
Limitations and unverifiable points
Several operational details remain unspecified or are subject to change as the pilot progresses. These include the granular retention policy for group messages, whether group threads will ever feed into model training by default, exact rate limit mechanics across mixed‑plan groups, and the long‑term availability of profile‑driven image personalization. The pilot framing suggests these areas will be iterated on, but until OpenAI announces explicit policies or enterprise features, those items should be treated as open.
Conclusion
OpenAI’s group chat pilot transforms ChatGPT from a solo assistant into a potential social collaborator, bringing shared context, search, summarization, and generative tools into group workflows. The feature is a smart product move: it addresses obvious user pain points around collaboration and contextual continuity. The pilot’s strengths—turn‑taking behavior, per‑group custom instructions, and built‑in safeguards for minors—show realistic, cautious design.
Yet the project also surfaces meaningful risks. Privacy, moderation, billing clarity, and the handling of profile images for generation are all areas that need robust policy and product attention. The pilot’s limited geographic rollout and iterative approach are sensible: they give OpenAI room to refine core behaviors and controls before broader exposure.
For enthusiasts, teams, and families in the test regions, the new feature will likely feel immediately useful. For enterprises and regulators, the real work begins now: ensuring that shared AI experiences are safe, auditable, and respectful of personal and legal boundaries. How OpenAI responds to those operational and policy challenges will determine whether group chats become a mainstream collaboration paradigm or a useful but contained experiment.
OpenAI’s pilot is an unmistakable signal: the next phase of AI will be social and shared. The question for users and organizations is how to adopt that capability responsibly—balancing convenience and creativity with safety, privacy, and control.
Source: TestingCatalog, “OpenAI tests ChatGPT group chats in Japan, Korea, Taiwan, NZ”