Google’s Gemini is quietly testing a way to pull another assistant’s entire conversation history into its own workspace — a small UI change with potentially huge implications for portability, privacy, and how people choose (or switch) AI chatbots.
Background / Overview
Google has been spotted testing an “Import AI chats” option inside Gemini’s attachment menu that promises to let users upload exported conversation archives from competing assistants such as ChatGPT, Claude, and Copilot so those threads can be continued within Gemini. Early screenshots and hands‑on discoveries indicate the control is currently marked as a beta feature and sits under the plus/attachment menu in Gemini’s web client. The feature appears to prompt users to download an export from the original service and then upload that archive into Gemini so past threads — prompts, responses, and attached media — can be reconstructed.

This move directly addresses the growing frustration around ecosystem lock‑in in consumer and prosumer AI: people accumulate months of prompts, informal customization built up through repeated patterns, and complex project histories in a single assistant, and switching platforms means starting that memory from scratch. Gemini’s importer, if it ships as described, aims to remove that friction and let users bring that context with them.

Early reporting also shows Google testing related features — a “Likeness” (video verification/provenance) control and higher‑resolution image download options labelled for print (2K and 4K).
Why portability matters: the problem Gemini is trying to solve
The cost of conversational inertia
Modern chat assistants are not just Q&A boxes. Over time they accumulate context: recurring instructions, saved drafts, multi‑step research threads, debugging histories, and even embedded code or images. For many users the assistant becomes a de facto project repository. Losing that accumulated context is costly — it means hours of re‑teaching, broken continuity, and reduced productivity.

The result is a classic lock‑in effect: users stay with the first assistant that “knows” them because the switching cost is high. A clean import flow directly targets that pain point and lowers the barrier for users to try or migrate to Gemini. Multiple outlets reporting on the leak frame the feature as a strategic adoption lever for Google.
Practical use cases
- Migrating months of research notes (e.g., literature review threads) without re‑entering prompts and clarifications.
- Consolidating multiple assistants into one workspace for a single project.
- Preserving developer debugging sessions, code revisions, and instructions in a new assistant.
- Carrying over content created in another model’s conversation (summaries, plans, drafts) for further iteration.
What the leaked flow looks like (and what’s missing)
The observed UI and flow
- The control surfaced in screenshots inside Gemini’s “Attach / +” menu as “Import AI chats,” marked with a small beta badge and a chat‑bubble icon; it reportedly appears beneath NotebookLM access in the same list.
- Activating it opens a popup that instructs the user to download their chat history from the source assistant and then upload the file to Gemini for ingestion.
- The popup text also warns that the uploaded content will be stored in the user’s Gemini activity and used to improve Google’s models.
Gaps and unanswered questions
- Supported source platforms: the screenshots do not publish an exhaustive list. Reporters speculate about ChatGPT, Claude, and Microsoft Copilot based on the leaked UI, but Google has not confirmed supported origins.
- Accepted file formats: the popup does not clearly state which archive types Gemini will accept (ChatGPT exports are ZIPs containing JSON/HTML; Claude uses a ZIP/.dms flow). The importer’s schema compatibility is unannounced.
- Fidelity guarantees: it’s unclear how attachments, media, thread forks, and branching conversation structures will be reconstructed inside Gemini.
- Training/usage policy details: the popup language suggests the imported data will appear in Activity and may be used for model improvement — but the exact opt‑in/opt‑out mechanics and whether enterprise data will be excluded are unspecified.
Verification: can you already export chats from rival services?
Contrary to some early interpretations of the leak, major assistants already provide export mechanisms that make a migration flow technically possible today:
- OpenAI / ChatGPT: ChatGPT includes an account “Export Data” flow in Settings → Data Controls that produces a ZIP containing your chat history (typically a conversations.json and chat.html). OpenAI’s help documentation details the process and the ZIP delivery via email.
- Anthropic / Claude: Claude exposes a data export in Settings → Privacy (server‑side processing and email delivery), producing a downloadable archive that community tools can convert into readable JSON or Markdown. Multiple third‑party projects and guides document Claude export steps.
- Other tools and third‑party utilities: Several community and commercial tools already parse these exports and reformat them into searchable or portable formats, indicating that cross‑assistant migration is feasible from a technical standpoint.
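Because every vendor’s archive looks different, the migration tools described above typically start by normalizing conversations into a common shape and rendering them in a portable format. A minimal Python sketch, assuming a simplified flattened schema (a list of role/content turns; real exports such as ChatGPT’s nest messages inside a node graph, so this is an illustration, not a parser for any specific vendor):

```python
def turns_to_markdown(title, turns):
    """Render a flattened conversation (a list of {'role', 'content'}
    dicts) as a Markdown transcript. The schema here is a simplifying
    assumption, not the exact layout of any vendor's export."""
    lines = [f"# {title}", ""]
    for turn in turns:
        speaker = turn.get("role", "unknown").capitalize()
        lines.append(f"**{speaker}:** {turn.get('content', '')}")
        lines.append("")
    return "\n".join(lines)

# Hypothetical sample turns for demonstration.
demo = [
    {"role": "user", "content": "Summarize our Q3 findings."},
    {"role": "assistant", "content": "Revenue grew 12%; churn fell."},
]
print(turns_to_markdown("Q3 review", demo))
```

A real converter would first flatten the vendor’s native structure into this turn list, then emit Markdown, JSON, or whatever the destination accepts.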
The benefits: productivity, choice, and competition
Immediate user benefits
- Seamless continuity: users can resume projects, experiments, and long‑running threads without rebuilding context from zero.
- Freedom to test: lowering the cost of switching encourages experimentation — users can test Gemini while preserving their existing histories.
- Centralized workflows: creatives, researchers, and devs who use multiple assistants can consolidate artifacts into one workspace for easier collaboration and handoffs.
Market impact
- Erodes lock‑in: easier migration pressures incumbents to improve interoperability or competitive features.
- Accelerates feature parity: when content portability is feasible, assistants must differentiate on accuracy, integrations, privacy guarantees, and cost — not just who holds the user’s memory.
- Third‑party opportunity: a standardized portable archive format (or de facto conventions) will create a market for migration tools, search/analytics layers, and enterprise connectors.
The risks: privacy, training use, fidelity, and corporate controls
1. Privacy and model training
Initial reports indicate imported conversations and subsequent interactions are stored in Gemini Activity and may be used to train Google’s models. That wording — even appearing in a beta prompt — immediately raises flags for anyone importing sensitive content (personal data, client information, credentials, PII). If imported data is used for model training without strong de‑identification, organizations could inadvertently seed training datasets with confidential information.

Why that matters:
- Regulatory exposure (data residency, GDPR data processing obligations) could be triggered if personal data moves between processors.
- Enterprise contracts or confidentiality agreements may forbid reprocessing of client data by third parties.
- Users may mistakenly upload data containing personal identifiers, credentials, or IP without realizing it will be used to improve models.
2. Fidelity and mis‑reconstruction
Export formats vary: JSON schemas, HTML conversations, CSVs, attachments stored separately or embedded. Importing is not just a file copy — it requires parsing roles (system/user/assistant), timestamps, branching threads, attachments, and metadata (titles, tags). Poor reconstruction could produce misleading continuity: responses stitched into the wrong thread, missing attachments, or shuffled chronology. That creates false context, which is worse than none because it breeds misplaced trust. Community tools already document edge cases and export quirks, underscoring the engineering challenge.

3. Enterprise data loss or leakage
Organizations will want:
- Audit trails for what was imported and when.
- Administrative controls to block or approve exports/imports.
- Non‑training pledges or contractual protections for data used within corporate accounts.
4. Provenance and authenticity
As assistants increasingly produce images, videos, and synthetic media, who authored a piece of content and where it originated matters. Google’s adjacent “Likeness” and “Video Verification” controls suggest the company is aware of synthetic media provenance and identity risks, but details are thin. Importing histories from other systems could complicate provenance if metadata is stripped or transformed during export/import.

Technical and operational challenges for Google
- Schema compatibility: supporting ChatGPT’s ZIPs, Claude’s exports, and any other format means building robust parsers and normalization pipelines.
- Media handling: moving or rehosting images, PDFs, and attachments — and preserving access controls — is non‑trivial.
- Thread branching: Chat histories often branch or fork. Preserving correct structure and role labels is essential for utility.
- Opt‑out mechanics: providing simple, reliable ways for users and admins to prevent imported data from being used for model training.
- Enterprise controls & compliance: consent controls, audit logs, and contractual assurances for non‑training must be available for business users.
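The thread‑branching challenge is concrete in ChatGPT‑style exports, which community write‑ups describe as a node graph (a “mapping” of node IDs with parent and child pointers) rather than a flat list: regenerating an answer forks the thread. A hedged sketch of recovering the active branch from such a graph (the field names follow those write‑ups and should be treated as assumptions):

```python
def active_path(mapping, leaf_id):
    """Walk parent pointers from the chosen leaf back to the root, then
    reverse, yielding the messages on the active branch in order.
    Node layout ({'parent', 'children', 'message'}) is an assumption
    modeled on community descriptions of ChatGPT's conversations.json."""
    path = []
    node_id = leaf_id
    while node_id is not None:
        node = mapping[node_id]
        if node.get("message") is not None:
            path.append(node["message"])
        node_id = node.get("parent")
    path.reverse()
    return path

# A fork: the user regenerated the answer, creating siblings b1 and b2.
mapping = {
    "root": {"parent": None, "children": ["a"], "message": None},
    "a":  {"parent": "root", "children": ["b1", "b2"], "message": "Q: ..."},
    "b1": {"parent": "a", "children": [], "message": "A: first draft"},
    "b2": {"parent": "a", "children": [], "message": "A: regenerated"},
}
print(active_path(mapping, "b2"))  # only the chosen branch survives
```

An importer that flattens naively (ignoring the graph) would interleave both drafts into one thread, which is exactly the mis‑reconstruction risk described above.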
How an ideal migration flow should work (technical and policy checklist)
Below is a recommended blueprint — a practical checklist of features that will maximize value while minimizing risk.

- Multiple source formats supported (ChatGPT, Claude, Copilot, JSON/HTML, ZIP).
- Pre‑import preview and scrub tool:
  - Show a complete inventory of what will be imported (conversation titles, counts, attachments).
  - Provide a “redact sensitive info” helper that flags likely PII (emails, SSNs, API keys) and lets users remove content before upload.
- Training opt‑out toggles:
  - Per‑import and per‑account controls to exclude imported content from model training.
  - Clear language in the UI about how imported content will be stored and used.
- Enterprise policy controls:
  - Admin‑level whitelist/blacklist for imports.
  - Audit logs of imports, downloads, and redactions.
- Fidelity reporting:
  - After import, show a “reconstruction report” listing any items that could not be reconstructed (missing attachments, branch mismatches).
- Media handling options:
  - Allow rehosting to Google storage with selectable access controls, or option to keep media off hosted servers (local only).
- Provenance metadata:
  - Preserve original timestamps, assistant identifiers, and source labels in metadata to maintain provenance and auditability.
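The “redact sensitive info” helper in the checklist above could start as simple pattern matching. A minimal, illustrative sketch in Python (the patterns are rough heuristics for demonstration, not production‑grade PII detection, and will produce both false positives and misses):

```python
import re

# Heuristic patterns only; a real scrub tool needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def flag_pii(text):
    """Return {label: [matches]} for substrings that look like PII,
    so a user can review and redact before uploading an export."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

print(flag_pii("Contact jane@example.com, key sk-abcdef1234567890XY"))
```

Flagging is only half the job; a usable tool would also offer one‑click replacement of each hit with a placeholder before the archive ever leaves the machine.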
Practical steps for users who want to migrate today
If you’re considering switching assistants and want to preserve your history, here’s a practical, cautious approach:

- Request an export from your source assistant.
  - ChatGPT / OpenAI: Settings → Data Controls → Export Data. Wait for the email and download the ZIP.
  - Claude / Anthropic: Settings → Privacy → Export data. Wait for the email and download the ZIP.
- Inspect the archive locally. Open the conversations.json or chat.html to identify any sensitive entries or credentials that should be removed.
- Run a manual scrub:
  - Redact personal names, email addresses, API keys, and client identifiers.
  - Remove entire conversations that include proprietary or regulated content.
- If using a third‑party migration tool, verify its data handling policy — prefer tools that process files locally in the browser (no upload) or that explicitly do not use your data to train models.
- Only upload to a new assistant after you’ve sanitized the export and confirmed the destination’s training/usage policy for imported data.
- For enterprise users: consult legal/compliance teams and insist on administrative controls before allowing imports on corporate accounts.
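The “inspect the archive locally” step can be partly automated. A sketch that lists conversation titles and message counts from an export ZIP, assuming a simplified conversations.json layout (real exports use richer schemas, so treat the field names as stand‑ins); it builds a tiny in‑memory archive so the example runs end to end:

```python
import io
import json
import zipfile

def inventory(zip_bytes):
    """Return (title, message_count) pairs from a conversations.json
    inside an export ZIP. The file name and {'title', 'messages'}
    layout are simplifying assumptions for illustration."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        convos = json.loads(zf.read("conversations.json"))
    return [(c.get("title", "untitled"), len(c.get("messages", [])))
            for c in convos]

# Build a tiny stand-in archive so the sketch is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("conversations.json", json.dumps([
        {"title": "Trip planning",
         "messages": [{"role": "user", "content": "hi"}]},
    ]))
print(inventory(buf.getvalue()))  # [('Trip planning', 1)]
```

Running something like this against a real export tells you at a glance which threads exist and which deserve a closer look before anything is uploaded.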
Competitive and regulatory implications
- Consumer expectations will shift: once one major assistant offers reliable imports, users will expect portability as a standard feature.
- Vendors will face pressure to implement export APIs and consistent, documented schemas to enable true portability.
- Regulators may pay attention: cross‑platform data movement raises privacy and contractual concerns that could invite regulatory scrutiny in heavily regulated verticals.
- Enterprises will demand contractual non‑training guarantees and DLP integration before approving imports for corporate accounts.
What we verified and what remains unconfirmed
Verified:
- TestingCatalog and multiple major outlets have reported screenshots and hands‑on observations showing an “Import AI chats” control inside Gemini’s attachment menu, described as a beta feature.
- OpenAI (ChatGPT) and Anthropic (Claude) offer data export tools today that produce downloadable archives suitable for migration workflows; community projects and guides document how to use those exports.
- Reported adjacent features — higher‑resolution image download options (2K/4K) and a “Likeness” video verification UI — have been observed in testing builds by multiple reporters.
Unconfirmed:
- Which exact file formats and platform exports Gemini will accept in its importer.
- Whether Google will offer a per‑import opt‑out for training or a global account setting that prevents imported content from contributing to model updates.
- How attachments and branching conversation structures will be handled during reconstruction.
- Final enterprise controls, audit features, or contractual non‑training options.
Bottom line: big promise, real obligations
Google’s importer test is strategically smart: it targets the single biggest non‑technical reason users stick with an assistant — accumulated context. Portability could usher in a more competitive landscape and give users real power to choose the assistant that best fits their workflows.

But the feature’s value depends on execution. To be genuinely useful and responsible, Google needs to:
- support existing export schemas (so users aren’t forced into manual conversions),
- provide robust pre‑import scrubbing and training opt‑outs,
- offer enterprise governance and audit controls, and
- preserve provenance and fidelity during thread reconstruction.
If Google ships a transparent, governed importer, it will have done something few platform owners have: reduce vendor lock‑in without eroding user privacy — an outcome that benefits both users and the wider AI ecosystem. If it fails to address training uses, provenance, and enterprise controls upfront, the importer will deliver convenience at an unacceptable cost.
The leak is a reminder that portability is now within reach technically; the remaining work is policy and UI design. That’s where the real product leadership — and the real risks — will be decided.
Source: Technobezz Google Gemini tests a feature to import chat history from ChatGPT and Claude
