Gemini 3 Pro Brings Catalan into Official AI Support and Language Equity

Google’s latest Gemini 3 Pro release finally gives Catalan an official seat at the AI table — and that small line in a long product bulletin is a much bigger story about language equity, platform geopolitics, and how modern AI is moving from experimental novelty into everyday infrastructure.

Background / Overview

The short version: Google’s Gemini 3 Pro launch brought a raft of technical advances — a new Deep Thinking reasoning mode, expanded multimodal abilities, and tighter mobile integrations — and, perhaps most consequential for Catalan speakers, official support for Catalan among a broad list of newly supported languages. That designation removes a long-standing practical ambiguity: systems trained on web-scale corpora could often understand Catalan, but they frequently lacked formal support or consistent behavior in real-world products. The new classification changes that.

Equally important, the Gemini rollout roughly coincides with growing cross‑platform deals: Samsung is integrating Google’s image model (nicknamed “Nano Banana”) into Galaxy AI features, and major reporting indicates Apple is negotiating to run a custom Gemini instance inside Apple’s Private Cloud Compute to power the next generation of Siri — a deal widely reported as costing in the ballpark of $1 billion per year. These partnerships mean Gemini’s language coverage and capabilities could propagate fast — across Android devices, Samsung skins, and even into iPhone users’ Siri experiences.

What exactly changed for Catalan — and why that matters​

From tolerated corpus to supported language​

For years Catalan content has been part of the web datasets that large models are trained on, which meant models could often reply intelligibly when prompted in Catalan. But there was a difference between incidentally understanding a language and officially supporting it in product settings — the latter involves UI labels, text-to-speech/dictation support, conversational tuning, safety checks for local idioms, and product rollouts that respect locale-specific formatting. Google’s recent language matrix for Gemini explicitly lists Catalan among the model’s supported languages, placing it on par with other officially supported European languages. This is the change of status that affects millions of everyday interactions.

Real-world effects for users​

  • Device assistants and in-app chat UIs will more reliably accept Catalan voice and text input and return responses in Catalan.
  • On-device and cloud-powered features (e.g., summarization, translation, image captioning) can present properly localized user-facing strings, suggestions, and error messages.
  • App integrations that depend on vendor-supported language metadata (voice variants, TTS, locale-aware spell-checking) can now enable Catalan without bespoke engineering.
That shift matters beyond mere convenience: language coverage is a vector of digital inclusion. When a major model provider recognizes a language, downstream products and partners are more likely to enable that language as a first-class citizen.
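A minimal sketch of what that status change means for app code: once a vendor lists Catalan among its supported languages, a locale check can route "ca" requests directly instead of silently falling back to English. The SUPPORTED set below is illustrative, not Google's authoritative list.

```python
# Sketch: routing a user's locale once the vendor lists it as supported.
# SUPPORTED is an illustrative subset, not Google's official language list.
SUPPORTED = {"ca", "es", "en", "eu", "gl", "uk"}

def pick_language(requested: str, fallback: str = "en") -> str:
    """Return the BCP-47 primary subtag if supported, else the fallback.

    Before official support, apps had to hard-code the fallback branch
    for Catalan; with "ca" on the list, the first branch is taken.
    """
    primary = requested.lower().split("-")[0]  # "ca-ES" -> "ca"
    return primary if primary in SUPPORTED else fallback

print(pick_language("ca-ES"))  # -> "ca": Catalan is a first-class option
print(pick_language("oc"))     # -> "en": an unsupported locale still falls back
```

In practice the supported-language list would be read from the vendor's model metadata rather than hard-coded, so new languages propagate without an app update.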

The rollout: features, partners, and the language list​

Gemini 3 Pro’s headline capabilities​

Gemini 3 Pro is being billed as a multimodal, long-context model with enhanced reasoning abilities. Key product-level changes announced by Google and reported across specialist outlets include:
  • Deep Thinking / Deep Think: a reasoning mode that explores multiple solution paths before producing an answer, designed to reduce simple mistakes on complex math and logic tasks. This is being positioned as a qualitative leap for tasks that need stepwise reasoning.
  • Massive context windows: support for very long documents and multi‑document workflows, enabling analysis of manuscripts, long codebases, or entire books in a single context.
  • Generative interfaces: responses that are not just text but interactive modules — mini-maps, itineraries, and short-lived micro-apps produced on the fly.
  • Native multimodality: integration of text, image, audio, and video understanding within a single model (not stitched-on vision modules).
  • Agentic tooling and Antigravity IDE: model-driven workflows that can orchestrate multi-step tasks and even generate and deploy code.
Those capabilities are the same features that make language support meaningful: a multimodal assistant that understands Catalan can produce localized interactive outputs (maps, menus, UI) rather than failing silently or reverting to English.

The thirty-language expansion and geopolitics​

Google’s published language list for Gemini includes Catalan alongside many other additions — a clear prioritization of high-growth regions such as South Asia and Southeast Asia, but also mid-sized European and African languages. The list spans demographic heavyweights (Bengali, Urdu, Indonesian, Vietnamese, Telugu, Tamil, Marathi) and regional languages with strong digital publishing ecosystems (Catalan, Basque, Galician, Ukrainian). The selection signals where companies anticipate their next billion users will come from and how dataset quality and local digital activism can influence platform decisions.

Mobile and OEM integrations: how Catalan reaches devices​

Samsung, Nano Banana and Galaxy AI​

Samsung has been one of the fastest partners to adopt and surface Google’s image models in its Galaxy AI experience. The image-editing model known colloquially as Nano Banana — formally part of Google’s Gemini family of image models — has been embedded via “Now Brief” and gallery integrations so users can quickly apply generative edits or stylized transformations without leaving their default gallery app. That in-phone distribution accelerates adoption: when manufacturers tune these features into the camera and gallery experience, local-language support bundled with the model propagates to millions of devices.

Xiaomi, Oppo and the Android OEM landscape​

Beyond Samsung, multiple Android OEMs are integrating Gemini-powered features under white-label names: Xiaomi’s HyperAI, Oppo’s Mind Space, and other manufacturer hubs are using Google’s inference backend or tuned variants to add reminders, summarization, and creative features. As those OEMs roll Gemini-based features to their local markets, official language support in the underlying model simplifies regional rollouts — including Catalan in markets where it’s relevant or as part of European language packs.

The Apple wrinkle: Siri and Private Cloud Compute​

Possibly the biggest cross-ecosystem story is Apple’s reported plan to license a tailored Gemini instance for Apple Intelligence / Siri and host it inside Apple’s Private Cloud Compute environment. Multiple outlets — including Reuters and The Verge, echoing Bloomberg reporting — say Apple is close to a commercial arrangement to use a custom Gemini model inside Apple’s privacy-first hosting to upgrade Siri’s reasoning and multimodal capabilities. The implications are straightforward: Siri could immediately gain the languages and capabilities Google has certified in Gemini, including Catalan, even if Apple historically avoided adding Catalan natively. Note: the Apple–Google deal remains reporting-based and not an Apple press release; timeline and commercial detail are widely discussed but proprietary contract details are not public. Independent summaries corroborate that talks happened and that Apple evaluated multiple vendors, but readers should treat the exact price tag and scope as reported figures rather than contractually confirmed facts.

Economics: pricing, tiers, and the developer equation​

Token pricing and value framing​

Gemini 3 Pro’s API and developer pricing use token-based billing with context-dependent tiers. Reported preview pricing (consistent across multiple industry write-ups) is approximately:
  • Standard context (≤200k tokens): Input ≈ $2 per 1M tokens, Output ≈ $12 per 1M tokens.
  • Long context (>200k tokens): Input ≈ $4 per 1M tokens, Output ≈ $18 per 1M tokens.
That positions Gemini 3 Pro as more expensive than some low-cost offerings but competitive for high-context tasks where longer windows or Deep Think tokens (internal reasoning) justify the premium. Pricing is an active battleground; Google’s structure emphasizes the ability to process entire books or large codebases in one pass, which can be cost‑effective for complex enterprise workflows despite higher per‑token rates.
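The tiers above translate directly into a per-request cost estimate. The rates below mirror the reported preview figures from the bullets, which are industry-reported rather than contractual, so treat them as placeholders.

```python
# Cost sketch using the reported preview rates (USD per 1M tokens).
# These figures come from press summaries, not a signed rate card.
RATES = {
    "standard": {"input": 2.00, "output": 12.00},  # <= 200k input tokens
    "long":     {"input": 4.00, "output": 18.00},  # > 200k input tokens
}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate one request's cost; the long-context tier carries a premium."""
    tier = "long" if input_tokens > 200_000 else "standard"
    r = RATES[tier]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# A whole-book analysis: 300k tokens in, 5k tokens out, lands in the long tier.
print(f"${estimate_cost(300_000, 5_000):.2f}")  # -> $1.29
```

The jump between tiers is why teams weigh one long-context pass against a chunked pipeline: the premium rate can still beat the overhead of many smaller calls.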

Consumer vs. enterprise economics​

  • Consumers: mobile app access and limited desktop tiers keep basic experimentation free, with subscriptions for advanced features (Google has traditionally offered limited free access plus paid tiers).
  • Developers & enterprises: tokenized API pricing, enterprise Vertex AI integrations, and contextual window tiers determine landed costs for production applications.
  • Carrier & OEM bundles: in some markets (notably India), carriers or OEMs have already bundled high-tier access into customer plans — a distribution play that values user engagement and data over per-user revenue.

Technical verification and what we could and could not confirm​

It’s essential to separate what companies officially document from press claims and benchmarks that circulate after model launches.
  • Confirmed: Google’s Gemini language support list shows Catalan among supported languages; Google Cloud documentation publicly lists the languages that Gemini models “can understand and respond in.” That is the authoritative product-level confirmation that matters for local-language support.
  • Confirmed: Samsung’s integration of Google’s image models under Galaxy AI (Now Brief) and the nickname “Nano Banana” for Gemini image variants have been reported by device-hardware and tech press, showing a working OEM pathway for in-phone generative features.
  • Confirmed (reporting consensus): Apple has been in advanced talks to license a Gemini variant to power Siri’s next generation, and outlets cite a figure near $1 billion annually — but the details are sourced to investigative reporting and unnamed sources. Reuters and The Verge provide consistent reporting about these negotiations. Treat the number as a widely reported estimate rather than an audited contract.
  • Unverified / cautionary: Specific benchmark claims that sometimes appear in coverage — for example an exact MathArena or MathArena Apex percentage comparing Gemini 3 Pro and GPT-5.1.3 with very precise numbers — could not be traced to a primary benchmark page or peer-reviewed release. Many benchmark numbers are reported by outlets summarizing vendor claims; independent reproduction is required before treating such exact figures as settled facts. In short: Deep Think is documented as a better reasoning mode, but any single-percentage claim from secondary coverage should be treated cautiously until directly validated with the benchmark maintainer or a transparent methodology.

Strengths and the upside for Catalan users and small-language communities​

  • Visibility and parity: being included in the official language list means Catalan will now appear in product UIs, voice assistant settings, and developer language metadata — practical parity that drives real usage.
  • Faster ecosystem adoption: manufacturer and carrier integrations (Samsung, Xiaomi, carriers like Jio) turn a model-level change into device-level reality quickly, expanding reach beyond power users to mainstream mobile customers.
  • Better localized experiences: when models are tuned for a language, downstream artifacts such as prompt templates, safety filters, and text-to-speech voice models can be localized, improving fluency, idiom handling, and cultural sensitivity.
  • Platform leverage: language recognition at a major vendor reduces entry barriers for local startups and public institutions to ship Catalan-first services (chatbots for government services, localized education tools, community translation platforms).

Risks, caveats and governance concerns​

1) Vendor lock-in and opacity​

Large language providers are platforms: language support comes packaged with cloud tooling, APIs, and commercial terms that can lock public services and businesses into specific vendors. That increases systemic risk if pricing or access terms change. The economic importance of carriers and OEM deals also means language coverage may depend more on commercial priorities than on cultural or civic value.

2) Data, privacy and on‑device promises​

The Apple–Google reports show one mitigation pathway — running third‑party models inside Apple’s Private Cloud Compute — but also demonstrate a growing complexity: third-party models hosted inside vendor-controlled enclaves rely on contractual and engineering safeguards to prevent telemetry leaks. The privacy guarantees are architectural but require hardened contractual terms, audits, and verifiable attestations. These are not trivial to audit externally.

3) Sustainability and the investment cycle​

Sundar Pichai himself warned publicly that the AI sector shows elements of “irrationality” and that “no company is going to be immune” if an AI bubble bursts. That reality check should temper celebratory narratives: many startups are surviving on investor expectations of endless demand; a market correction would affect feature availability, free tiers, and R&D that underpins language support. For minority languages, this matters because smaller-market initiatives are often fragile and depend on large-platform subsidies during the growth phase.

4) Safety, hallucination and cultural fit​

Official language support does not automatically solve model hallucinations, bias, or harmful stereotypes — especially with local idioms, historical sensitivities, or political nuance. Local safety teams, curated corpora, and post-deployment monitoring remain essential to prevent culturally inappropriate outputs.

Practical advice for developers, local institutions, and power users​

  • Evaluate the Gemini APIs and platform offerings for feature parity with your use case (TTS, dictation, long-context analysis). Prototype quickly on the free tier and measure behavior on Catalan queries.
  • If you are a public institution or NGO, insist on contractual transparency for data handling, logging, and the right to audit when you adopt third‑party LLM services for citizen services.
  • For mobile app teams, prioritize hybrid architectures that keep sensitive data local and send only anonymized, minimal context to cloud models when necessary.
  • Plan for contingency: vendor-agnostic designs (modular LLM layers, interface abstraction) let you switch backends if pricing or policy changes occur.
  • Advocate for independent benchmarking and community-driven test suites that measure performance for Catalan across factuality, fluency, safety, and cultural competence.
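The vendor-agnostic design suggested above can be sketched as a thin protocol that application code targets, with interchangeable backends behind it. The backend classes here are stubs standing in for real API clients, not actual SDK code.

```python
# Sketch of a vendor-agnostic LLM layer: the app codes against one protocol,
# and backends (hypothetical stubs below) are swappable if pricing or policy
# terms change later.
from typing import Protocol

class ChatBackend(Protocol):
    def complete(self, prompt: str, language: str) -> str: ...

class GeminiBackend:
    """Stub standing in for a real Gemini API client."""
    def complete(self, prompt: str, language: str) -> str:
        return f"[gemini:{language}] stubbed reply"

class LocalBackend:
    """Stub for a self-hosted fallback model."""
    def complete(self, prompt: str, language: str) -> str:
        return f"[local:{language}] stubbed reply"

def answer(backend: ChatBackend, prompt: str, language: str = "ca") -> str:
    # All application code calls this one seam; swapping vendors touches
    # only the backend class, never the call sites.
    return backend.complete(prompt, language)

print(answer(GeminiBackend(), "Bon dia"))
```

The same seam is where contingency logic lives: if a vendor's terms change, the `LocalBackend` (or any other implementation) drops in without rewriting call sites.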

Conclusion​

The practical consequence of Google’s Gemini 3 Pro including Catalan on an official language list is deceptively simple: it signals the end of a liminal phase in which Catalan existed in models but lacked production-level guarantees. That small administrative change opens real doors for localized assistants, integrated on‑device experiences, and a faster path for Catalan-language apps.
At the same time, the launch sits inside a larger, uncertain industry moment. Deep technical advances and widespread OEM adoption are real and valuable — but so are the commercial pressures, energy demands, and investment cycles that Sundar Pichai warned could lead to broader market stress. For Catalan speakers and language advocates, the right response is pragmatic: seize the immediate benefits of first-class support, push for governance and transparency around deployments, and make localized datasets and safety work sustainable so that language parity survives whatever market cycles come next.
Source: Diari ARA, “Google’s AI already works in Catalan (and Siri will eventually do so)”
 

Google’s latest Gemini 3 Pro release did something small on paper but huge in practice: it officially lists Catalan among the model’s supported languages, a change that removes years of product-level ambiguity and immediately affects how assistants, apps, and phones will treat Catalan-speaking users.

Background

For years, large language models have been trained on web-scale corpora that incidentally included Catalan content. That produced a paradox: models could often understand Catalan in ad hoc prompts, yet major products did not officially support the language, leading to inconsistent behavior and user frustration. Google’s Gemini 3 Pro release bulletin — and the supporting documentation on Vertex AI — now places Catalan in the official language matrix for Gemini models, signaling a shift from tolerance to product parity. This piece unpacks what that change means for Catalan users, developers, device makers, and the broader AI ecosystem. It verifies the main technical claims where possible, cross-checks the major commercial reports, and flags the elements that remain uncertain or unverified.

What changed (overview)​

  • Official language support for Catalan: Google Cloud’s Gemini language list explicitly includes Catalan among the languages Gemini models “can understand and respond in,” moving Catalan from incidental understanding into formal product support.
  • Gemini 3 Pro’s new capabilities: Google has advertised major product-level improvements — a reasoning/“Deep Thinking” mode, vastly expanded context windows for long documents, and richer multimodality that can combine text, images, audio, and video. These are described in Google’s product literature and widely reported in coverage of the Gemini 3 release.
  • OEM and platform propagation: Samsung (via Galaxy AI / Now Brief) and other Android OEMs are integrating Gemini-derived image and assistant features — notably the image-editing model nicknamed “Nano Banana” — which accelerates distribution of language-enabled features to phones.
  • Cross‑platform implications: Reporting indicates Apple is negotiating to host a custom Gemini variant inside Apple’s Private Cloud Compute to supercharge Siri; multiple outlets estimate the deal in the ballpark of $1 billion per year (reporting still underlines that details are unconfirmed by the companies).
These changes are not purely technical: they are commercial levers. A language added at the model level can propagate to UIs, voice variants, text-to-speech, developer SDKs, and device integrations — effects that compound quickly when OEMs bake the tech into phones.

Why Catalan’s inclusion matters​

From incidental competence to product parity​

Large models often learn multiple languages simply by ingesting multilingual web data. But formal product support is a different commitment: it means interface localization, validated safety filters, voice/dictation support, and downstream integration work that makes the language available in production features and device UIs. Google’s documentation now lists Catalan alongside other supported languages, which is the concrete step app teams and OEMs need to ship Catalan-enabled features.

Practical effects for users​

  • Device assistants, chat UIs, and in-app AI features will now have a formal path to Catalan text and voice input handling.
  • Localized TTS, grammar tuning, and locale-aware behavior will become easier for third-party developers to enable without bespoke engineering.
  • Public services, education apps, and local startups can build products that expect consistent performance rather than ad hoc responses.
These are tangible improvements: Catalan will now appear as a selectable language in the model metadata and API layers that apps and device makers use to configure services.

Technical verification: what is confirmed and what remains provisional​

Confirmed by product documentation and vendor pages​

  • Language list: Google Cloud’s Vertex AI documentation lists Catalan as a supported language for Gemini models, an authoritative source for model-level language support.
  • Pricing and context windows: Google’s Vertex AI pricing page details Gemini 3 Pro token pricing tiers and the premium for very long contexts, confirming the basic economics public reporting mentioned. The published token-based prices (input and output tiers) and the long-context premium are available on Google Cloud’s pricing pages.
  • OEM integrations: Multiple independent device- and mobile-focused outlets report that Samsung has added Nano Banana (Gemini image editing) into Galaxy AI’s Now Brief feature, demonstrating an OEM pathway for in-phone generative features. These reports are consistent across SamMobile, Digital Trends and other outlets.

Claims that require caution or remain unverified​

  • Exact benchmark figures (MathArena / MathArena Apex comparisons): Some news summaries quote precise percentages comparing Gemini 3 Pro to rival models on specific benchmarks. Those precise single-number claims could not be traced to a primary benchmark publication in the public domain and should be treated as vendor- or press‑reported summaries rather than independently validated results. Independent reproduction is required.
  • Contract value and final Apple–Google terms: Major reporting (Bloomberg, Reuters, The Verge) consistently indicates Apple is negotiating for a Gemini-backed Siri and that the number widely reported is about $1 billion per year. These are strong investigative reports but not contract filings; the exact scope of the deal, technical SLAs, and legal commitments are not public. Treat the figure as a credible industry estimate, not a public accounting entry.
  • Distribution timing and per‑device localization: While OEMs are already shipping Gemini-derived features, the precise global rollout cadence — particularly for less-common regional locales — can vary by carrier, firmware channel, and country. Expect fragmented availability during the initial months.

The big technical pieces explained​

Deep Thinking (reasoning mode)​

Gemini 3 Pro introduces a mode described as a reflection or multi-path reasoning phase that aims to reduce superficial mistakes on complex math and logic tasks. In product terms, this is a model configuration that trades latency for higher-confidence outputs and better stepwise reasoning.
  • Benefit: Improved multi-step reasoning and fewer “fast but wrong” answers.
  • Trade-off: Increased compute and latency; potential cost implications for long or repeated reasoning sessions.
Reporters have noted benchmark gains in reasoning-focused tests, but the most precise numbers quoted in secondary coverage require independent reproduction before they can be accepted as definitive.
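Deep Think's internals are not public, but the latency-for-accuracy trade-off can be illustrated with self-consistency, a published prompting technique in which several reasoning paths are sampled and the majority answer wins: more paths cost more compute, but an occasional slip gets outvoted. The reasoning "paths" below are deterministic stubs, not model calls.

```python
# Illustrative only: Deep Think's mechanism is not public. Self-consistency
# (run several reasoning paths, keep the majority answer) shows the same
# compute-for-accuracy trade-off the article describes.
from collections import Counter
from typing import Callable

def majority_answer(question: str, paths: list[Callable[[str], int]]) -> int:
    """Run every reasoning path on the question, return the most common answer."""
    answers = [path(question) for path in paths]
    return Counter(answers).most_common(1)[0][0]

# Stub paths: two derive 42 correctly, one slips to 41 and is outvoted.
paths = [lambda q: 42, lambda q: 41, lambda q: 42]

print(majority_answer("What is 2 * 21?", paths))  # -> 42
```

Running one path is cheap but inherits any single-path mistake; running many multiplies token cost roughly linearly, which is why such modes are opt-in rather than the default.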

Massive context windows and long-document workflows​

Gemini’s long-context support — billed by Google as suitable for entire books, long codebases, or multi-document analyses — changes how teams architect workflows. Instead of chunking documents into many prompts, customers can send far larger inputs in a single inference call, which can simplify pipelines and improve coherence.
  • The Vertex AI pricing page explicitly documents a token‑tiered pricing structure and a premium for >200k token inputs, reflecting that longer context is more expensive and is priced accordingly.
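The pipeline arithmetic behind that claim can be sketched directly: chunking a long document into overlapping windows re-sends the overlap tokens on every call, while a long-context model ingests each token once. The window and overlap sizes below are illustrative, not vendor limits.

```python
# Sketch: chunked pipeline vs. one long-context pass for a 600k-token book.
# Window and overlap sizes are illustrative, not actual model limits.
def chunked_calls(total: int, window: int, overlap: int) -> tuple[int, int]:
    """Return (number of calls, total input tokens sent) for a chunked run."""
    calls, sent, start = 0, 0, 0
    while start < total:
        end = min(start + window, total)
        calls += 1
        sent += end - start
        if end == total:
            break
        start = end - overlap  # overlap is re-sent to preserve continuity
    return calls, sent

calls, sent = chunked_calls(600_000, 150_000, 10_000)
single_pass = 600_000  # a long-context call sends each token exactly once
print(calls, sent, single_pass)  # 5 calls, 640,000 tokens sent vs 600,000 once
```

Beyond the extra 40k billed tokens, the chunked run also needs glue logic to stitch five partial answers back together, which is the coherence cost the long-context pitch targets.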

Generative interfaces and Vibe Coding​

Gemini can now return richer, structured outputs that go beyond plain text: interactive maps, short-lived micro‑apps, or auto‑generated UI elements. Vibe Coding is a higher-level interface in which a user describes the feeling they want for an app and the AI generates code, design, and deployment scaffolding.
  • Impact: Faster prototyping and a lower barrier for non-programmers to generate functional artifacts.
  • Risk: Generated UIs or code must be audited for security, licensing, and maintainability; generated code is not a substitute for professional engineering validation.

Device distribution: how Catalan reaches phones​

Samsung and Nano Banana​

Samsung has embedded Google’s image-editing model, popularly called Nano Banana, into Galaxy AI’s Now Brief feature, allowing users to transform selfies or photos directly from the gallery without opening separate apps. SamMobile, Digital Trends and other outlets covered the feature rollout, and Samsung’s own communications highlight Gemini-derived capabilities in Galaxy devices. This is the clearest example of an OEM choosing to tune Google’s models rather than building from scratch.

Other OEMs and white-label integrations​

Xiaomi (HyperOS / HyperAI), Oppo (Mind Space) and other Chinese OEMs have been reported to integrate Gemini-derived features or to use Google’s inference backend under OEM branding. These relationships speed language-enabled feature rollouts across diverse markets. However, hardware-specific constraints and local regulations (especially in China) will affect the scope and timing of those features.

iPhone and Siri: the Apple wrinkle​

If Apple finalizes a deal to host a custom Gemini model inside its Private Cloud Compute environment, Siri could inherit the languages and capabilities Google certifies for Gemini — potentially including Catalan — while still maintaining Apple’s on‑device privacy posture. Multiple reputable outlets report Apple is negotiating a multi‑hundred‑million to roughly $1B‑per‑year arrangement to license a tailored Gemini variant, but the precise agreement is not public. Even if the headline number is accurate, Apple will still control the Siri UI and privacy flow; the model would operate behind Apple’s attested, sealed PCC environment if the company follows the widely described architecture.

Economics: who pays, and how much​

Google’s Vertex AI pricing page documents token-based pricing for Gemini 3 Pro with distinct tiers for standard and long contexts. The model is priced to reflect both input and output tokens and places a premium on very long-context operations. These published numbers align with the pricing summarized in industry coverage and highlight why enterprise customers choose different tiers or make distribution deals with carriers and OEMs in markets like India.
  • Consumer access: basic mobile app use is typically free, with subscriptions unlocking advanced features (Google’s app tiers vary by market).
  • Developer & enterprise: pay-as-you-go token pricing, with higher per‑token costs for very long contexts.
  • Carrier/OEM bundling: carriers (e.g., Jio in India) have already bundled high-tier access free to customers to lock in engagement, a strategy that values user data and market reach over per-user revenue.

Strengths and the upside​

  • Language equity at scale: Listing Catalan formally makes it easier for public services, educational platforms, and local businesses to build AI features that consistently work in Catalan rather than relying on ad hoc behavior.
  • Faster device integration: OEMs embedding Gemini capabilities (Samsung, Xiaomi, Oppo) turn a model‑level change into immediate end‑user features.
  • Improved reasoning and tooling: Deep Thinking, long contexts, and agentic workflows can materially improve productivity tasks like research, summarization, and code generation.
  • Commercial reach: When major platform owners adopt a model family, the network effect of language support can rapidly accelerate ecosystem tools and third‑party apps.

Risks and governance concerns​

  • Vendor lock-in: Language support tied to a specific vendor’s stack increases systemic dependency — public services and private companies may find it costly to switch providers later, and that creates geopolitical and competitive risk.
  • Privacy and telemetry complexity: Even if models run inside privacy layers (Apple’s PCC, or on-device fallbacks), contractual terms and technical attestations are essential to avoid data leakage. Independent audits and contractual guarantees are required for public sector use.
  • Sustainability and investment cycles: Google’s own CEO warned of an “AI bubble” risk; this sector’s subsidies—free tiers, carrier bundling, or expensive model licensing—may not be sustainable across every market participant. A market correction could reduce the non-commercial support many small-language projects rely upon.
  • Hallucination and cultural safety: Official language support does not eliminate hallucination, bias, or culturally inappropriate outputs. Local safety teams, curated corpora, and monitoring frameworks still matter.

Practical recommendations​

For local institutions, developers, and power users who care about Catalan parity:
  • Prototype now on available free and trial tiers to evaluate fluency, localization, and safety for Catalan content.
  • Insist on contractual clarity around data usage: require non‑training clauses, logging controls, and the right to audit if you integrate a vendor model into public or sensitive workflows.
  • Prefer hybrid architectures: keep sensitive data local when possible and send minimal, anonymized context to cloud models. This reduces exposure if commercial terms change.
  • Contribute to or create independent Catalan benchmark suites that measure fluency, factuality, and safety so that community-driven auditing can supplement vendor claims.
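A community benchmark suite like the one advocated above can start very small: prompts paired with automated checks, runnable against any backend. The stub model and string checks below are placeholders; a real suite would score fluency and factuality with human raters or trained judge models, not substring matches.

```python
# Minimal sketch of a community test harness for Catalan quality.
# stub_model and the string checks are placeholders for real model calls
# and proper human/judge-based scoring.
from typing import Callable

def stub_model(prompt: str) -> str:
    """Stand-in for a real API client; swap in an actual backend call."""
    canned = {
        "Com es diu 'hello' en català?": "Es diu «hola».",
        "Tradueix 'good morning' al català.": "Bon dia.",
    }
    return canned.get(prompt, "")

CASES: list[tuple[str, Callable[[str], bool]]] = [
    ("Com es diu 'hello' en català?", lambda out: "hola" in out.lower()),
    ("Tradueix 'good morning' al català.", lambda out: "bon dia" in out.lower()),
]

def run_suite(model: Callable[[str], str]) -> float:
    """Fraction of cases whose check passes against the given model."""
    passed = sum(check(model(prompt)) for prompt, check in CASES)
    return passed / len(CASES)

print(f"pass rate: {run_suite(stub_model):.0%}")  # -> 100% on the stub
```

Because the harness only depends on a `model(prompt) -> str` callable, the same cases can audit Gemini, a Siri-backed deployment, or a local model, which is exactly what community-driven comparison needs.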

What to watch next​

  • Availability of Catalan-enabled features in mainstream apps and manufacturers’ updates (e.g., Samsung’s Galaxy AI Now Brief showing localized UX).
  • Any Apple–Google contractual disclosures or regulatory filings that clarify the scope, pricing, and privacy guardrails for a Gemini-backed Siri. Current reporting is strong but not a public contract.
  • Independent benchmark releases that validate reasoning claims such as the “Deep Thinking” improvements and the exact performance gains reported in press summaries. Treat single-number claims as provisional until independently reproduced.

Conclusion​

The administrative change of listing Catalan as a supported language in Gemini 3 Pro materials is deceptively consequential. It converts a long-standing, fragile de facto capability into a first-class product promise — UI strings, voice variants, and device integrations are now practical possibilities rather than wishful thinking. That matters to millions of speakers and to the local digital ecosystem.
At the same time, this progress sits inside wider strategic and economic bets: OEM partnerships accelerate distribution; tokenized pricing makes long-context tasks feasible but costly; and reported vendor agreements — like Apple’s likely use of a custom Gemini for Siri — could propagate features across ecosystems. All of this is beneficial in the short term, but the durability of these gains depends on contractual transparency, sustainable economics, and robust governance to address privacy, safety, and vendor-dependency risks. For Catalan speakers, the prudent stance is both celebratory and cautious: embrace the immediate parity that Gemini 3 Pro affords, but press for the audits, contracts and community benchmarks that will make that parity stable and trustworthy over time.
Source: Diari ARA, “Google’s AI already works in Catalan (and Siri will eventually do so)”
 
