Microsoft’s AI unit has shipped two first‑party foundation models — MAI‑Voice‑1 and MAI‑1‑preview — marking a clear acceleration of in‑house model development even as the company continues to integrate and promote OpenAI’s frontier models such as GPT‑5 across its product stack. The launches are deliberate: one model targets expressive, high‑throughput speech generation while the other is a consumer‑focused instruction‑following language model intended to anchor select Copilot experiences and be iterated on through public testing. (theverge.com)

Background​

Microsoft’s Copilot and Azure ecosystems have long depended on a blend of proprietary research, partner models, and open‑source systems to deliver generative features. The new MAI family signals a shift toward an orchestration-first strategy: route workloads dynamically between OpenAI models, in‑house MAI models, and third‑party/open‑weight models depending on latency, cost, safety, and product fit. That message is explicit in Microsoft’s public framing of MAI as a platform of specialized models for different user intents. (windowscentral.com)
This move occurs amid intense talent hiring, multi‑cloud shifts across the industry, and new training and inference infrastructure investments — factors that together make it both feasible and strategically sensible for a hyperscaler to field its own family of foundation models. Microsoft’s MAI release should be read in the context of product control, cost optimization, and hedging against single‑vendor exposure. (outlookbusiness.com)

What Microsoft announced: the essentials​

  • MAI‑Voice‑1 — a natural speech generation model enabling expressive single‑ and multi‑speaker audio, surfaced in Copilot Daily, Copilot Podcasts, and Copilot Labs’ Audio Expressions. Microsoft claims the model can synthesize one minute of audio in under one second on a single GPU. (theverge.com) (verdict.co.uk)
  • MAI‑1‑preview — a mixture‑of‑experts (MoE) text foundation model described as Microsoft’s first end‑to‑end trained in‑house foundation model. Microsoft reports this model was pre‑trained and post‑trained on approximately 15,000 NVIDIA H100 GPUs and has been opened to public evaluation on LMArena and to trusted API testers. (folio3.ai) (investing.com)
Microsoft positions both models as elements of a broader portfolio strategy — not immediate one‑to‑one replacements for OpenAI models in enterprise scenarios. The company describes MAI releases as specialized building blocks that will be orchestrated alongside partner and open models to optimize user experiences. (windowscentral.com)

MAI‑Voice‑1: a close look at the speech model​

Capabilities and product placement​

MAI‑Voice‑1 is surfaced in production‑facing consumer features today — notably Copilot Daily (narrated briefings) and Copilot Podcasts — plus an experimental sandbox in Copilot Labs that exposes voice styles, emotional modes, and storytelling demos. The emphasis is on expressiveness, multi‑speaker capability, and natural delivery suited to daily companion‑style experiences. (english.mathrubhumi.com)

Performance claim and engineering implications​

Microsoft claims MAI‑Voice‑1 can synthesize a 60‑second audio clip in under one second of wall‑clock time on a single GPU. If reproducible at scale, that throughput materially lowers the marginal cost of spoken content and makes on‑demand audio companions economically viable for high‑volume consumer surfaces. Multiple outlets quote this performance figure and Microsoft itself has emphasized low latency and high throughput as key product goals. (theverge.com) (verdict.co.uk)
Important caveats:
  • The one‑second claim is a vendor performance metric; Microsoft has not released a full engineering whitepaper with reproducible benchmarks that specify GPU model, batching, precision (FP16, BF16, quantized), memory usage, IO and CPU overhead, or the test configuration used. Treat the number as a company claim pending independent verification. (windowsforum.com)
  • Real‑world throughput will vary with voice complexity, multi‑speaker mixing, safety filters, and live‑stream constraints; production deployments often insert pragmatic latency/quality tradeoffs that aren’t captured by raw single‑GPU claims.
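Any independent replication of the one‑second claim would need to fix the test rig and report a real‑time factor (RTF): seconds of compute per second of generated audio. A minimal sketch of that measurement follows; since no public MAI‑Voice‑1 API exists, the synthesizer here is a stand‑in stub, and the harness ignores batching, warm‑up, and IO effects that a vendor benchmark might include.

```python
import time

def real_time_factor(synthesize, text, sample_rate=24_000):
    """Wall-clock seconds of compute per second of generated audio.

    `synthesize` is a placeholder for any TTS call returning raw samples.
    An RTF below 1.0 means faster-than-real-time generation; the claimed
    60 s of audio in under 1 s corresponds to an RTF of roughly 0.017.
    """
    start = time.perf_counter()
    samples = synthesize(text)  # vendor benchmarks may batch or warm-start here
    elapsed = time.perf_counter() - start
    audio_seconds = len(samples) / sample_rate
    return elapsed / audio_seconds

# Stand-in synthesizer: returns 2 s of silence so the harness is runnable.
def fake_tts(text):
    return [0.0] * 48_000

rtf = real_time_factor(fake_tts, "Hello, world.")
print(f"RTF = {rtf:.4f}")  # below 1.0 means faster than real time
```

A credible published benchmark would pin down exactly the variables this sketch leaves open: GPU model, precision mode, batch size, and whether the clock includes model loading and safety filtering.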

Benefits for users and product teams​

  • Faster generation reduces per‑call compute cost, enabling more immersive and longer spoken experiences.
  • Tighter integration with Windows, Edge, and 365 telemetry can yield voice behavior tuned for Microsoft product flows.
  • Copilot Labs previews make it possible to explore expressive modes before a wider rollout, accelerating iteration from real user feedback. (folio3.ai)

Risks and governance concerns​

  • Impersonation and misuse: High‑fidelity voice synthesis increases impersonation risk; guardrails, watermarks, and robust consent flows are essential.
  • Privacy: How voice prompts and generated audio are logged, retained, and used for model improvement must be transparent for enterprise and consumer trust.
  • Safety testing: Expressive audio can transmit misinformation or harmful content in ways that text cannot; red‑teaming for audio‑specific attack vectors is required.

MAI‑1‑preview: what the language model brings and where it fits​

Architecture and training footprint​

MAI‑1‑preview is described as a mixture‑of‑experts (MoE) foundation model — an architecture choice that enables high capacity with sparse activation, improving parameter efficiency for many workloads. Microsoft reports a large training campaign that used roughly 15,000 NVIDIA H100 GPUs for pre‑training and post‑training phases. That scale is consistent with a serious engineering investment and is being positioned as the company’s first end‑to‑end trained foundation offering. (folio3.ai) (theverge.com)
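The parameter‑efficiency argument for MoE is that a gate scores every expert but only a few actually run per token, so compute scales with the number of active experts rather than total capacity. MAI‑1‑preview’s internals are unpublished, so the following is only an illustrative toy of top‑k sparse routing, not Microsoft’s architecture.

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(x, experts, gate_weights, top_k=2):
    """Sparse mixture-of-experts: score all experts, run only the top-k,
    and combine their outputs weighted by renormalized gate probabilities."""
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in gate_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)
    out = [0.0] * len(x)
    for i in top:
        y = experts[i](x)  # only selected experts are evaluated
        out = [o + (probs[i] / norm) * yi for o, yi in zip(out, y)]
    return out, top

# Four toy "experts": each just scales the input differently.
experts = [lambda v, s=s: [s * vi for vi in v] for s in (0.5, 1.0, 1.5, 2.0)]
gate = [[random.gauss(0, 1) for _ in range(3)] for _ in experts]
out, active = moe_layer([0.1, -0.2, 0.3], experts, gate, top_k=2)
print(active)  # only 2 of the 4 experts ran for this input
```

In a production MoE the experts are large feed‑forward blocks and routing happens per token per layer, but the cost asymmetry is the same: capacity grows with the expert count while per‑token compute grows only with k.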

Public testing and early benchmarks​

Microsoft has opened MAI‑1‑preview to LMArena, a crowd‑sourced human‑preference benchmarking platform, and has made the model available to trusted testers who can apply for API access. Early LMArena snapshots placed MAI‑1‑preview in the middle of the leaderboard, a useful signal for perceived helpfulness and style but not a definitive measure of factuality, safety, or enterprise readiness. (investing.com) (livemint.com)

Intended use cases and rollout plan​

The stated plan is conservative: roll MAI‑1‑preview into select text‑based Copilot use cases in the coming weeks, gather millions of interactions to tune behavior, then expand where the model proves reliable. That measured approach aims to balance rapid iteration with controlled exposure. (investing.com)

Strengths and shortfalls​

  • Strengths:
      • Product fit for high‑volume, low‑latency consumer tasks, where cost and integration matter more than frontier reasoning.
      • MoE design can target cost/performance sweet spots for instruction‑following workloads.
  • Shortfalls and unknowns:
      • Early leaderboard ranks and public commentary indicate MAI‑1‑preview is not yet a frontier replacement for enterprise‑grade high‑reasoning flows.
      • Safety alignment, hallucination rates, and robustness on adversarial inputs remain to be demonstrated under enterprise benchmarks.

Verifying claims: what’s confirmed and what remains vendor‑asserted​

Microsoft’s headlines include measurable technical claims that matter to customers and operators. Several reputable outlets and Microsoft’s own messaging corroborate the broad strokes — MAI‑Voice‑1 exists and is in product, MAI‑1‑preview was trained at large scale and is on LMArena, and Microsoft is operating GB200 (Blackwell) hardware as part of its compute roadmap. But specific numbers and performance details require scrutiny. (theverge.com) (investing.com)
Key verification points:
  • The one‑second per‑minute audio claim: widely quoted but lacks a published methodology; therefore treat it as an engineering claim that needs independent benchmarking. (windowsforum.com)
  • The 15,000 H100 GPU training figure: reported across multiple outlets quoting Microsoft; external independent audit of GPU counts is typically infeasible, so it stands as Microsoft’s published figure until independently confirmed. (folio3.ai)
  • LMArena ranking snapshots: public and community‑driven; useful as perceptual gauges but limited by changing ballots, possible tuning, and human preference bias. Use them as early signals rather than procurement‑grade evidence. (livemint.com)
Where claims are unverifiable today, readers and procurement teams should require reproducible benchmarks and vendor documentation (test rigs, batch sizes, precision modes, safety test results) before making high‑risk integration decisions.

Strategic analysis: why Microsoft is building MAI​

Product, cost and sovereignty​

Microsoft’s rationale blends three pragmatic drivers:
  • Product fit: consumer Copilot experiences benefit from low latency, predictable cost, and tighter OS/app integration.
  • Cost control: routing high‑volume consumer requests to in‑house models can reduce recurring API outgo to third parties.
  • Sovereignty and bargaining power: owning a credible in‑house stack reduces strategic dependence on any single partner and strengthens Microsoft’s negotiating position with OpenAI and others.

Talent and time compression​

Microsoft’s hiring of senior AI leaders and strategic team acquisitions have shortened the timeline to credible in‑house models. Acqui‑hire patterns and experienced leadership provide the institutional muscle to perform large training runs and productize models quickly. That human capital is a differentiator, but it also creates integration and retention risks.

Compute roadmap: H100 to GB200 and beyond​

Microsoft called out operational GB200 (Blackwell) clusters as part of its compute roadmap while noting prior MAI training used H100 fleets. The GB200 series is the natural next step for larger, memory‑heavy models; Microsoft publicly states it has GB200 capacity ready as it iterates on future MAI variants. That level of on‑premise compute is costly but strategically important for rapid iteration cycles. (investing.com)

Enterprise implications and governance recommendations​

Enterprises evaluating MAI models for production should weigh the following considerations:
  • Data routing and telemetry: clarify what user prompts, document contents, and telemetry are used for training or logging, and whether opt‑out or enterprise‑only modes exist.
  • Compliance and provenance: request model lineage documentation, data provenance declarations, and legal guarantees around IP usage in the training corpora.
  • A/B testing and fallback routing: require the ability to route specific workloads to OpenAI, Anthropic, or third‑party models while MAI models are validated.
  • Safety and red‑teaming reports: demand red‑team artifacts, hallucination statistics, and mitigation strategies before enabling MAI models for regulated workloads.
  • Start with low‑risk, high‑value surfaces (consumer‑facing Copilot features, internal test sandboxes).
  • Run controlled blind evaluations that measure hallucination rates, factuality, latency, and cost per call.
  • Insist on contractual SLAs for data processing and model update cadences.
These steps help mitigate deployment risk and avoid early lock‑in on models that are still maturing.
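The fallback‑routing requirement can be made concrete with a toy dispatcher: route each workload to the first provider that is both capable of the task and validated for it. The provider names and capability sets below are invented for illustration; they are not real API identifiers.

```python
def route(task, providers, validated):
    """Return the first provider that can handle `task` and has passed
    the organization's validation gate; otherwise raise."""
    for name, capabilities in providers:
        if task in capabilities and validated.get(name, False):
            return name
    raise LookupError(f"no validated provider for {task!r}")

# Hypothetical capability table: the in-house model covers lightweight
# text tasks, the partner frontier model also covers high-reasoning flows.
providers = [
    ("mai-1-preview", {"chat", "summarize"}),
    ("frontier-partner", {"chat", "summarize", "reasoning"}),
]
validated = {"mai-1-preview": False, "frontier-partner": True}

print(route("summarize", providers, validated))  # falls back while MAI is unvalidated
```

The point of the ordering is that once an in‑house model clears validation for a task, flipping its flag redirects traffic without touching application code, which is exactly the controlled‑exposure pattern the checklist above asks vendors to support.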

Competitive landscape: where MAI sits​

Microsoft’s MAI rollout changes the marketplace dynamic but does not single‑handedly displace other players. The industry now expects:
  • Multi‑model orchestration: customers will select models by task — expressive voice, lightweight instruction following, or frontier reasoning — each supplied by different vendors or in‑house stacks.
  • Cloud and hardware competition: providers will continue investing in GPU fleets, GB200 Blackwell systems, and specialized inference hardware to lower latency and cost.
  • Open‑weight proliferation: with open‑weight releases from other labs and wider multi‑cloud distribution, the market will host many capable models optimized for different tasks.
Microsoft’s advantage is product integration and reach across Windows, Office, and Azure, but rivals such as Google Cloud, AWS, Anthropic and open‑source ecosystems will press their own advantages in model design and distribution. Expect intensified price and feature competition.

Talent, hiring and market signals​

Microsoft’s public call for developers and its hiring emphasis — particularly in software engineering roles — show the company is staffing for rapid productization as much as research. The strategy of fast hires, acqui‑hires, and leadership transitions speeds capability build but raises cultural integration and retention challenges that the company must manage to sustain momentum.
From a labor market standpoint, this sustained hiring push underscores that AI is still a labor‑intensive domain: building, curating, validating and operating models at scale requires dozens of engineers, safety specialists, and product managers, not just raw compute.

Benchmarks and the limits of community testing​

Platforms like LMArena provide rapid, human‑preference based snapshots of model behavior and are useful early signals. They measure subjective helpfulness across pairwise comparisons, but they:
  • Are sensitive to voting populations and prompt suites.
  • Favor fluency and style over factual accuracy and safety.
  • Can be gamed by tuned variants or selective prompt submissions.
For procurement and compliance decisions, enterprises should prioritize controlled, metric‑driven evaluations covering factuality, hallucination rate, latency, throughput, and cost per 1,000 tokens rather than relying solely on LMArena ranks.
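Leaderboards of this kind typically aggregate pairwise votes into an Elo‑style rating. A single‑update sketch shows why the resulting rank reflects crowd preference and nothing else: each vote nudges ratings toward whichever answer people liked, with no term for factuality or safety. (LMArena’s exact aggregation may differ; this is the generic Elo form.)

```python
def elo_update(ra, rb, a_wins, k=32):
    """One Elo update from a single pairwise preference vote: ratings
    move toward the observed outcome by k times the surprise."""
    expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))
    score_a = 1.0 if a_wins else 0.0
    ra_new = ra + k * (score_a - expected_a)
    rb_new = rb + k * ((1 - score_a) - (1 - expected_a))
    return ra_new, rb_new

ra, rb = 1000.0, 1000.0
for a_wins in [True, True, False, True]:  # 3 of 4 voters prefer model A
    ra, rb = elo_update(ra, rb, a_wins)
print(round(ra), round(rb))
```

Because the update is driven purely by which answer a voter preferred, a fluent but wrong response and a terse but correct one can move ratings in the same direction — which is why the text above recommends metric‑driven evaluations alongside arena ranks.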

What to watch next​

  • Independent benchmarks that reproduce or challenge the one‑second per minute MAI‑Voice‑1 claim.
  • Third‑party audits or Microsoft disclosures showing detailed safety, alignment and hallucination metrics for MAI‑1‑preview.
  • The pace of MAI rollouts inside Copilot: which specific features migrate to MAI and which remain routed to OpenAI.
  • Regulatory scrutiny over preferential platform placement, data governance, and model provenance as Microsoft blends in‑house models with partner integrations.

Conclusion​

Microsoft’s MAI‑Voice‑1 and MAI‑1‑preview launches are more than product announcements; they are a strategic statement. The company is building an orchestration layer that mixes in‑house specialization with best‑in‑class partner models to meet different user intents, manage cost, and improve product integration across Windows and Copilot. The technical claims are bold — notably MAI‑Voice‑1’s throughput and MAI‑1’s large H100 training fleet — and they are corroborated by multiple outlets quoting Microsoft. Yet several of the most consequential numbers remain vendor‑asserted and will require independent verification and transparent benchmark methodologies before enterprises can treat MAI models as drop‑in replacements for mature third‑party models.
For administrators, developers, and technology buyers, the sensible path is cautious experimentation: evaluate MAI models on narrow, well‑instrumented tests; insist on contractual clarity about data and training provenance; and maintain multi‑model routing options while MAI matures. Microsoft’s investment in MAI materially shifts the competitive map, but the era ahead will be one of orchestration, measurement, and careful governance rather than a single‑model winner‑take‑all outcome. (theverge.com)

Source: Cloud Wars 2 Models Developed Internally at Microsoft Underscore Aggressive AI Ramp-Up, Hiring
 

Microsoft’s Copilot has become the fastest-growing AI chatbot in recent months, while OpenAI’s ChatGPT still holds the largest audience — a market dynamic that reflects distribution strategy, platform bundling, and the different roles these assistants play in everyday work and mobile use. (comscore.com) (adweek.com)

Background / Overview​

The summer’s most-discussed AI usage numbers center on a short window of consumer behavior tracking: Comscore’s March–June snapshot of 117 AI tools across desktop and mobile. That dataset shows dramatic percentage growth for some assistants and sustained dominance for others. Comscore reports that, from March to June, Microsoft Copilot grew by 175% on mobile, Google Gemini by 68%, and OpenAI’s ChatGPT by about 17.9%, with mobile reach edging up while desktop reach slipped. (comscore.com)
Those percentage moves were first amplified in tech coverage by outlets with access to Comscore figures, which translated the percentages into absolute user estimates (for example, Copilot’s mobile base rising to roughly 8.8 million and ChatGPT’s mobile audience being reported around 25.4 million in that same window). Where Comscore gave growth rates and share context, media partners provided the numeric breakdowns that made headlines. (adweek.com)
At the same time, independent traffic trackers like StatCounter — which measure referrals and session share differently from Comscore’s panel methodology — show that ChatGPT still commands roughly four-fifths of chatbot referral traffic in mid‑2025, underscoring a scale-versus-velocity tension: ChatGPT’s raw audience remains far larger even as Copilot’s growth rate outpaces it. (gs.statcounter.com)

Data and methodology: what the numbers actually measure​

Comscore: panel-based, multi-tool visibility​

Comscore’s dataset is a panel-driven measurement of visits and engagement across a defined set of AI tools and apps, aggregated for desktop and mobile. The company’s press release describes tracking of 117 AI tools and reports deduplicated audience reach by platform over the March–June window. Comscore’s model emphasizes cross-device audience measurement and the behavior of a consumer panel that represents engagement patterns rather than raw API query volume. (comscore.com)

StatCounter: referral and session share​

StatCounter’s AI chatbot tracking focuses on referrals and session share — essentially the proportion of chatbot-driven traffic and website referrals that originate from each assistant. This is a different slice of usage and helps explain why StatCounter shows ChatGPT with roughly 80–83% of referral share while Comscore can show far higher percentage growth for other assistants on mobile. These two measurement approaches are complementary but not identical; they answer different questions about how people use chatbots. (gs.statcounter.com)

Why methodology matters​

  • Panel metrics (Comscore) better capture app and in‑app usage across devices and are useful for deduplicated audience reach.
  • Referral/session metrics (StatCounter) show which assistants are sending traffic back to websites, a proxy for how often answers produce outbound links or referrals.
  • Absolute growth percentages can be misleading without the base numbers: a 175% increase from a small base can still be less in absolute users than a 17.9% increase on a giant base. Media reporting often mixes both without always making that distinction explicit. (adweek.com, gs.statcounter.com)
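Back‑solving the implied bases from the reported endpoint figures (roughly 8.8M Copilot mobile users at +175%, roughly 25.4M ChatGPT mobile users at +17.9%) shows why both the percentage and the absolute columns are needed before drawing conclusions. These bases are arithmetic inferences from the published numbers, not Comscore‑published figures.

```python
def growth_breakdown(end_users_m, growth_pct):
    """Recover the implied starting base and absolute gain from a
    reported endpoint and growth rate: end = base * (1 + growth)."""
    base = end_users_m / (1 + growth_pct / 100)
    return base, end_users_m - base

# Figures as reported for the March-June mobile window (millions of users).
copilot_base, copilot_gain = growth_breakdown(8.8, 175)
chatgpt_base, chatgpt_gain = growth_breakdown(25.4, 17.9)
print(f"Copilot: {copilot_base:.1f}M base -> +{copilot_gain:.1f}M")
print(f"ChatGPT: {chatgpt_base:.1f}M base -> +{chatgpt_gain:.1f}M")
```

For this particular window the back‑solved absolute gains happen to be comparable in magnitude despite the tenfold difference in growth rate, which is precisely why headlines built on either column alone can mislead.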

Platform-by-platform breakdown​

Microsoft Copilot — the enterprise wedge turned mobile growth engine​

Copilot’s headline 175% mobile growth reflects distribution and integration. Microsoft has embedded Copilot into Windows, Microsoft 365 apps, Edge, and its enterprise management tooling, creating many preexisting touchpoints where Copilot can be surfaced to users with low friction. That bundling and ease of rollout explains much of its rapid adoption in a short window. Comscore and industry coverage point out that Copilot’s increase is concentrated in mobile adoption and productivity-oriented use cases. (comscore.com, adweek.com)
Key points:
  • Copilot benefits from deep enterprise distribution and admin controls, which reduce rollout friction for IT teams.
  • Mobile growth is aided by integration into apps users already open for work: email, documents, and Teams.
  • The 175% figure is a growth rate that starts from a smaller base than ChatGPT’s large audience, so absolute user counts remain lower. (comscore.com)

ChatGPT — the scale incumbent and retention leader​

ChatGPT remains the dominant destination for conversational AI by several measures. StatCounter session/referral data consistently places ChatGPT at roughly 80–83% of tracked chatbot traffic — a commanding lead. Comscore’s panel data shows steadier, lower percentage growth for ChatGPT because the product already has a much larger installed user base, which naturally flattens percentage changes. Moreover, Comscore’s cross-visitation analysis indicates ChatGPT users show the highest platform loyalty, with fewer people hopping between assistants. (gs.statcounter.com, comscore.com)
Key points:
  • ChatGPT’s scale creates powerful network effects, an extensive ecosystem, and high retention.
  • Smaller percentage growth is expected when growth is measured against a huge base.

Google Gemini — device distribution and multi-product strategy​

Gemini’s growth (reported at 68% in the March–June window by Comscore) is closely tied to device-level distribution (Pixel preloads and Android integration) and Google’s strategy of unbundling AI capabilities across multiple consumer endpoints. Gemini’s mobile performance is boosted by Android preinstallation and Google’s app ecosystem, but overall market share in referral/session terms remains low compared with ChatGPT. (comscore.com)

Other players (Perplexity, Anthropic Claude, Canva, Grammarly, Octane AI, Voicemod)​

Comscore’s tracking highlights that numerous specialized tools — design assistants like Canva, writing helpers like Grammarly, marketing AIs and voice changers — also operate at scale on mobile, and that different verticals will fragment usage away from single-chatbot narratives. These players occupy niches where discrete feature sets, rather than general conversation, drive habitual use. (comscore.com)

Why Copilot surged: three structural explanations​

  • Ecosystem embedding and enterprise rollout
      • Copilot’s presence inside Microsoft 365 and Windows means adoption can be driven by IT policies and licensing, not just consumer discovery.
      • Admin tools and single sign-on make it straightforward for businesses to distribute Copilot at scale. (adweek.com)
  • Mobile-first lightweight productivity use cases
      • Comscore’s analysts point to Copilot’s appeal for quick productivity tasks on mobile: email drafts, meeting summaries, and lightweight document edits. That aligns with broader consumer shifts to mobile for many day-to-day tasks. (comscore.com)
  • Channel and bundling economics
      • Device partnerships and preloads (e.g., Pixel for Gemini, OEM agreements for Copilot integrations) are powerful user-acquisition levers that can produce rapid percentage growth when activated. (comscore.com)

Mobile vs. desktop: the tectonic shift​

Comscore’s headline finding — mobile reach rising by 5.3% while desktop fell by 11.1% over the March–June window — signals a behavioral shift: people are increasingly turning to assistants on phones for quick tasks and ephemeral queries. This has implications for product design, model latency, and privacy controls because mobile sessions are often shorter, more context-sensitive, and tied to device permissions (microphone, photos, location). (comscore.com)
Practical implications:
  • Mobile-first features, offline/low-bandwidth resilience, and tight app UX will favor assistants optimized for brief task flows.
  • Desktop will remain critical for long-context workflows (coding, research, design), but its relative share may continue declining as mobile capabilities improve.

Commercial implications: revenue, cloud demand, and strategic incentives​

The commercialization of AI is turning usage into meaningful revenue streams. Independent reporting indicates OpenAI reached roughly $10 billion in annual recurring revenue from subscriptions and the API by June, while Microsoft reported Azure revenue exceeding $75 billion, citing AI workloads as a substantial growth driver. These financial realities create incentives for platform owners to push assistants into enterprise and consumer products aggressively. (cnbc.com, apnews.com, wsj.com)
Why it matters:
  • Vendors have economic incentives to prioritize integrations that convert usage into subscription or cloud consumption.
  • These incentives can accelerate feature rollouts — but they also shape where and how assistants appear to users, often privileging ecosystem-embedded experiences.

Risks, measurement caveats, and governance concerns​

1. Growth rates vs absolute scale​

Percent growth figures are powerful headlines but can mislead without base context. Copilot’s 175% mobile growth is significant, but it rose from a smaller mobile base than ChatGPT’s tens of millions of users. Always ask for both percentage and absolute figures to properly weigh competitive dynamics. (adweek.com, comscore.com)

2. Measurement inconsistency across trackers​

Comscore panel data and StatCounter referral metrics tell different but complementary stories. Journalists and decision-makers should treat both sources as informative and use them to triangulate rather than to declare a single definitive ranking. (comscore.com, gs.statcounter.com)

3. Vendor and media claims that defy independent verification​

Some published rankings and viral lists report multi‑billion user counts for various assistants. Independent verification often does not support those absolute claims; such numbers can conflate visits, API calls, embedded referrals, and cumulative or global estimates without clarifying methodology. These discrepancies should be flagged as potentially unverifiable until vendors or trackers publish exact measurement definitions.

4. Privacy, data handling, and governance​

Wider deployment of assistants into enterprise and mobile contexts raises concerns:
  • Data residency and logging practices for corporate communications.
  • Model hallucinations or incorrect outputs becoming amplified in operational workflows.
  • Vendor lock-in via ecosystem-dependent features, admin tooling, and licensing.
IT teams must map these risks and require contractual and technical controls where assistants are adopted.

5. Market concentration and its second-order effects​

ChatGPT’s continued dominance by referral share poses potential long-term risks of concentration: fewer independent models in the mainstream could reduce diversity of viewpoints and raise systemic dependency. Policymakers and technologists should monitor for platform power effects and consider interoperability, auditing, and data portability. (gs.statcounter.com)

What this means for Windows users, IT leaders, and power users​

For Windows consumers​

  • Expect Copilot features to appear more frequently in Windows UIs and Microsoft apps. Use cases like email drafting, file summarization, and meeting preparation will be the most immediate productivity wins.
  • If privacy or choice matters, review app permissions and the specific Copilot plan that aligns with your data-handling preferences.

For IT and procurement leaders​

  • Treat Copilot as an extension of Microsoft 365 governance: SSO, audit logs, and admin controls should be evaluated before broad deployments.
  • Run pilot deployments to measure real-world gains in productivity rather than adopting based solely on percentage growth headlines.
  • Maintain multi-vendor testing for critical workflows to reduce single‑vendor dependency risk.

For power users and technologists​

  • Use multiple assistants for different tasks: ChatGPT for deep research and plugins; Copilot for tight document and email integration; Gemini for Android‑centric workflows.
  • Validate critical outputs with human review and source triangulation, particularly for business decisions and published content.

Short-run outlook and likely scenarios​

  • Continued bifurcation: ChatGPT retains mass-market leadership; Copilot accelerates within Microsoft’s productivity ecosystem; Gemini grows where Google can control device distribution.
  • Monetization and bundling intensify: Platform owners will further tie assistant capabilities to cloud consumption, enterprise licensing, and device partnerships.
  • Measurement debates continue: Independent trackers and vendor disclosures will keep clashing on methodology, pushing analysts to triangulate across datasets rather than accept single-source claims. (comscore.com, gs.statcounter.com)

Practical checklist for evaluating AI assistant adoption (for IT teams)​

  • Define the use cases you want to accelerate with assistants.
  • Map data flows and classify sensitive information that must not be sent to public models.
  • Pilot with a limited user group; measure time saved and quality of outputs.
  • Validate vendor governance: audit logs, DDoS resilience, data retention and deletion policies.
  • Build fallback plans: ensure business continuity if a single assistant becomes unavailable.

Conclusion​

The recent Comscore window and independent telemetry sketch a market in which velocity and scale tell two different stories: Copilot’s rapid mobile growth demonstrates the power of ecosystem integration and enterprise distribution, while ChatGPT’s commanding share in referral and session metrics underscores the durable value of being the default public conversational destination. StatCounter’s referral snapshots and Comscore’s panel data together give a fuller picture: percentage acceleration matters, but so does base size, retention, and measurement method. (comscore.com, gs.statcounter.com)
For Windows users and enterprise buyers, the sensible posture is deliberate adoption: leverage Copilot where Microsoft integration and governance reduce friction, rely on ChatGPT where broad model capabilities and plugins benefit workflows, and treat Google Gemini as a growing mobile research assistant where Android-first distribution helps. Maintain skepticism about headline growth claims that lack methodological transparency, and insist on contractual and technical safeguards before embedding assistants into sensitive workflows. Evidence-based, measured adoption — not chasing the latest growth headline — will deliver the most reliable productivity gains. (adweek.com)

Source: ZDNET The fastest growing AI chatbot lately? It's not ChatGPT or Gemini
 
