Ads in Chat: Monetization and Trust in Conversational AI

The arrival of clearly labeled, visually separated advertisements inside conversational AI marks the end of an era where chatbots were purely informational tools — and the start of a new, risk‑heavy commercial ecosystem for users, brands, and publishers alike.

Background​

Chat interfaces transformed search and discovery by capturing richer intent than a single search query. What a user types into a chat — constraints, preferences, budgets, timelines — is precise context that marketers crave. That context is why a wave of companies began experimenting with ad formats tailored to conversation: the potential for high‑intent, moment‑of‑decision placements is enormous, but so are the risks to trust and privacy.
The industry turning point came in early 2026 when multiple firms publicly disclosed pilots or tests that put advertising into chat surfaces. OpenAI published a formal test plan to show ads to logged‑in adult users on free and lower‑cost “Go” tiers while keeping higher paid tiers ad‑free; the company emphasized principles such as answer independence, conversation privacy, and user controls. The announcement crystallized a cross‑industry pivot: monetization pressure at scale is forcing companies to adapt ad stacks to the unique constraints of conversational UX.
That pivot has already sparked a public commercial and moral fight. Anthropic ran high‑profile Super Bowl creative that criticized in‑conversation ads and positioned its Claude assistant as ad‑free, triggering a public sparring match with OpenAI executives and widespread media coverage. The ads, deliberately jarring dramatizations of a chatbot mid‑conversation pitching a product, underscore how sensitive users are to ad intrusions in a conversational context.

How ads are being implemented today​

Ad formats in chat are still experimental, but several patterns have emerged. Platforms have largely converged on features meant to preserve the integrity of model outputs while still surfacing commercial opportunities.
  • Labeled cards or banners beneath answers. Ads are visually separated from the assistant’s generated text and labeled as “Sponsored” or similar to preserve answer independence. This is the most common early pattern.
  • Sponsored follow‑up prompts. Services like Perplexity introduced sponsored follow‑ups — suggested next questions or prompts marked as sponsored — which invite users to explore a brand or product. These can appear after a synthesized answer.
  • Shoppable cards and in‑chat commerce. Some prototypes show product tiles with price and a buy/learn CTA that keep users inside the chat flow rather than sending them out to a merchant. Platforms pitch this as friction reduction for purchase decisions.
  • Contextual, limited personalization. Early pilots emphasize contextual targeting — matching ads to the immediate conversation topic — while promising to avoid selling raw chat transcripts. Platforms say they will exclude ads on sensitive topics like health or politics and not show ads to minors.
These safeguards are design choices meant to thread the needle between monetization and trust. In practice, implementation details — defaults, labeling prominence, and whether personalization is opt‑in or opt‑out — will determine whether these measures are effective or merely declarative.

Who’s doing what: platform moves and positioning​

OpenAI: test, label, promise​

OpenAI’s public playbook is explicit: start small, test in the U.S. on Free and Go tiers, separate ads visually, and promise that answers remain independent from ads. The company also offers user controls to dismiss ads, delete ad data, and disable personalization. Those commitments are detailed in OpenAI’s testing announcement and policy notes.

Anthropic: ad‑free as a brand differentiator​

Anthropic’s Super Bowl creative pivoted to a positioning play: call out the perceived harms of in‑conversation advertising and claim Claude will remain ad‑free. The campaign achieved two goals — generating press coverage and forcing a public debate about what responsible monetization should look like. But it also raised questions about whether ad‑free positioning is a durable competitive advantage for firms that must scale expensive infrastructure.

Microsoft: Copilot’s “ad voice” and ecosystem integration​

Microsoft has been integrating sponsored content into Copilot‑style experiences since at least 2023 and has been explicit about an “ad voice” that explains why an ad is shown and how it connects to the user’s conversation. Microsoft frames ads as part of a seamless, contextual experience across Bing, Edge, and Copilot, emphasizing relevance and explanation.

Perplexity and publishers: revenue‑share experiments​

Perplexity moved quickly to offer revenue sharing with publishers whose content is cited by answers. It added sponsored follow‑ups and a publishers’ program designed to give outlets a cut of ad revenue when their reporting underpins an AI answer. Perplexity’s approach is an early attempt to address the “zero‑click” problem AI answers pose for journalism revenue.

Why this matters — the upside​

Ads in chat can be monetarily potent and functionally useful in ways legacy ad surfaces cannot.
  • High‑intent targeting. Chat captures explicit, sequential intent. A user who asks for “best blenders under $150 for smoothies” is a clearer purchase prospect than many keyword searches; ads served in that moment can convert at higher rates.
  • Conversational commerce. When discovery, comparison, and checkout can happen inside the assistant, conversion funnels compress and brands can capture a larger share of value without reliance on intermediaries.
  • Sustainable free access. For platforms, ad revenue can underwrite wider availability of powerful assistants without forcing all users onto paywalls. OpenAI frames its tests as a way to keep advanced capabilities accessible while charging for premium ad‑free tiers.
  • New publisher revenue paths. Revenue‑share models like Perplexity’s create an alternative to referral traffic, directing ad dollars to content creators whose reporting informs AI answers. This could be one path to sustain independent journalism in an AI‑first world.
If executed thoughtfully — transparent labeling, strict exclusions for sensitive contexts, auditable measurement — in‑chat ads can add genuine user utility and broaden funding for open access to AI.

Major risks and why the backlash is credible​

The flipside is stark: ads inside a medium people treat as private and helpful can erode the single most valuable asset for assistants — trust.
  • Trust erosion and migration. Users are sensitive about the intimacy of chat; intrusive or poorly labeled ads can make assistants feel like another display network and push users toward paid, ad‑free alternatives or competitor products. Anthropic’s Super Bowl ads exploited this fear and turned it into a marketing message.
  • Opaque targeting vectors. Platforms promise not to sell raw chat transcripts, but derived signals, aggregated models, or ephemeral features can still be used to target ads. That technical nuance matters to privacy regulators and to privacy‑conscious users, and promises alone will not suffice without independent audits.
  • Publisher disintermediation. If assistants provide end‑to‑end answers and commerce without sending traffic to source sites, publishers lose the referral economics that fund reporting. Revenue‑share pilots help, but they may not scale widely enough or protect niche publishing models.
  • Measurement and fraud gaps. Ad tech built for web pages doesn’t map neatly to conversational surfaces. Without new verification standards, advertisers may pay for low‑value impressions or metrics that are easy to game.
  • Regulatory exposure. Consumer protection, advertising transparency, profiling rules, and data protection laws could all be triggered by in‑chat advertising. Regulators will scrutinize how personal data and derived features are used in ad selection and whether users receive actionable consent.
These risks are not theoretical; the public debate and legal actions already underway suggest that reputational and regulatory costs may be swift and severe.

Measurement, verification, and new technical primitives​

Conversational advertising requires different measurement and verification tools than display or search.
  • Session‑level attribution. Attribution should respect session continuity rather than pageviews. Platforms must develop auditable session IDs and privacy‑preserving signals that link ad exposure to downstream conversion without revealing chat content.
  • Impression validation. What counts as a valid impression in chat? Is a “sponsored follow‑up” that never gets tapped equivalent to a banner view? The industry needs standard definitions and third‑party validators.
  • Anti‑fraud and bot detection. Conversational surfaces are particularly susceptible to automated or scripted query patterns that can distort measurement; new anti‑fraud tooling is required.
  • Privacy‑first targeting. Techniques such as on‑device scoring, differential privacy, or aggregated cohort signals can limit exposure of raw conversations while enabling some personalization. Platforms should publish technical attestations of the methods they use and allow audits.
Without these primitives, advertisers and regulators will lack confidence; with them, conversational ads may develop credible governance.
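As a minimal illustration of the privacy‑first targeting idea above, the sketch below combines coarse cohort bucketing with Laplace noise (a basic differential‑privacy mechanism). The function names are hypothetical, not any platform’s actual stack; a production system would need formal privacy budgets and audited implementations.

```python
import hashlib
import math
import random

# Hypothetical sketch: map conversation topics to a coarse cohort bucket, and
# report ad-relevant counts with Laplace noise so raw chat content never
# leaves the session. Illustrative only, not a production mechanism.

def cohort_id(topics: list[str], num_cohorts: int = 1024) -> int:
    """Map a set of conversation topics to one of a few coarse cohort buckets."""
    digest = hashlib.sha256("|".join(sorted(topics)).encode()).hexdigest()
    return int(digest, 16) % num_cohorts

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Add Laplace noise calibrated to sensitivity 1 (one user per count)."""
    u = random.uniform(-0.499999, 0.499999)  # avoid log(0) at the boundary
    scale = 1.0 / epsilon
    return true_count - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
```

The point of the sketch is structural: the advertiser sees only a bucket ID and a noised aggregate, never the conversation itself.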

Practical guidance for stakeholders​

For users​

  • Check and exercise controls. Turn off ad personalization, prefer contextual‑only ads, and use ad data deletion options where available. Platforms say these controls exist — verify them in product settings.
  • Prefer paid tiers when privacy matters. If you need ad‑free, uncompromised assistance for sensitive tasks, paid tiers remain the most reliable guarantee in the short term.
  • Be skeptical of sponsored suggestions. Always verify claims the assistant makes, especially for health, legal, or financial decisions; sponsored prompts should be treated like any other ad.

For brands and marketers​

  • Design conversational‑first creative. Utility‑driven sponsored prompts or clear product cards work better than repurposed display creatives.
  • Invest in GEO (Generative Engine Optimization). Structure content with FAQs, schema, and canonical answers so AI assistants can cite and attribute your content organically.
  • Demand transparent measurement. Insist on auditable measurement, anti‑fraud safeguards, and clear definitions of what constitutes an impression or conversion in chat.
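The GEO advice above can be made concrete. The snippet below emits schema.org FAQPage JSON‑LD, one common way to give assistants canonical question‑and‑answer pairs they can cite; the helper name and placeholder copy are illustrative, not a prescribed format.

```python
import json

# Hypothetical sketch: render (question, answer) pairs as a schema.org
# FAQPage JSON-LD block that crawlers and AI assistants can parse.

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Render (question, answer) pairs as a schema.org FAQPage document."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(doc, indent=2)

print(faq_jsonld([("What is GEO?", "Structuring content so AI engines can cite it.")]))
```

The emitted block would typically be embedded in a page’s `<script type="application/ld+json">` tag alongside the visible FAQ content.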

For publishers​

  • Negotiate clear revenue‑share terms. If platforms surface your work and then monetize it, secure transparent, auditable compensation — Perplexity’s publishers’ program is an early example but not the only model.
  • Harden canonical content. Make it easy for assistants to identify and attribute your reporting: clear Q&As, structured metadata, and robust paywall/redirection strategies can help preserve attribution and referral value.

For platforms and product teams​

  • Ship labeling and controls at launch. Clear visual distinction and accessible opt‑outs must be baked into the UX, not tacked on later.
  • Publish independent audits. Third‑party attestations of privacy promises and “answer independence” are critical to win trust.
  • Exclude sensitive contexts programmatically. Technical enforcement must ensure ads do not appear in health, mental health, political, or other sensitive conversations.
  • Create publisher partnerships. Revenue sharing, APIs, and referral mechanisms can soften the impact on journalism and secure content supply.
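Programmatic exclusion can be sketched simply. The gate below blocks ad serving when a conversation matches a sensitive‑topic taxonomy; real platforms would use trained classifiers rather than keyword lists, and the taxonomy here is purely illustrative.

```python
# Hypothetical sketch: a last-line gate that suppresses ad serving when the
# conversation touches a sensitive-topic taxonomy. The term lists are
# illustrative placeholders, not a real policy taxonomy.

SENSITIVE_TAXONOMY = {
    "health": {"diagnosis", "symptom", "medication", "cancer"},
    "mental_health": {"depression", "anxiety", "therapy", "self-harm"},
    "politics": {"election", "candidate", "ballot"},
}

def ads_allowed(conversation_text: str) -> bool:
    """Return False if any sensitive-taxonomy term appears in the conversation."""
    words = set(conversation_text.lower().split())
    return not any(words & terms for terms in SENSITIVE_TAXONOMY.values())
```

The design point is fail‑closed enforcement: the ad pipeline should be unable to serve an impression unless this check (or its classifier‑based equivalent) has passed.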

Ethical and regulatory guardrails to press for now​

Policymakers and civil society should press platforms for a baseline of protections:
  • Mandatory transparency reports. Platforms should publish ad volumes, categories excluded, revenue‑share arrangements, and auditing summaries on a regular cadence.
  • Independent audits of targeting logic. External verification that advertisers cannot access raw chats and that derived signals are bounded by declared policy must be required.
  • Age and sensitivity protections. Exclusions for minors and for queries touching medical, mental health, or political advice should be enforceable and verifiable.
  • Ad disclosure standards. Uniform labeling and prominence requirements to ensure users can always distinguish ads from assistant outputs.
Regulation should not preclude innovation, but it must prevent predatory monetization that exploits vulnerable moments of human need.

A tactical checklist (for product managers and policy teams)​

  • Require a single, auditable ad labeling component across conversational surfaces.
  • Default ad personalization to off for new users; make opt‑in explicit.
  • Implement cryptographic session tokens for privacy‑preserving attribution.
  • Publish a third‑party audit within six months of any ad launch.
  • Build a publisher compensation pipeline and public ledger of payments.
  • Enforce programmatic exclusions for sensitive‑topic taxonomies.
  • Share anonymized, aggregated ad performance data with an independent watchdog.
These steps are operationally specific and, if adopted broadly, would materially reduce many of the harms critics rightly fear.
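The checklist’s cryptographic session tokens can be illustrated with a short sketch: an HMAC over a random session nonce lets a platform link an ad impression to a later conversion without exposing user identity or chat content. The names and key handling below are simplified and hypothetical; real deployments need key rotation and separate trust domains.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: derive a per-session attribution token via HMAC so ad
# exposure and conversion events can be joined server-side without revealing
# the user's identity or any conversation content. Key management is assumed
# to happen elsewhere.

SERVER_KEY = secrets.token_bytes(32)  # held by the platform, never shared

def session_token(session_nonce: bytes) -> str:
    """Derive a stable token for one chat session from its random nonce."""
    return hmac.new(SERVER_KEY, session_nonce, hashlib.sha256).hexdigest()

def same_session(session_nonce: bytes, reported_token: str) -> bool:
    """Verify at conversion time that a reported token matches the session."""
    return hmac.compare_digest(session_token(session_nonce), reported_token)
```

Because the token is keyed, advertisers who receive it cannot reverse it into a nonce or correlate it across platforms, which is the property auditors would need to attest.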

Conclusion​

The integration of advertising into AI chat is not an inevitability the public must meekly accept — it is a design and policy choice open to governance, scrutiny, and technical innovation. Done thoughtfully, in‑chat ads can fund broad access, create new revenue streams for publishers, and surface genuinely useful offers at the moment of intent. Done badly, they will corrode trust, hollow out referral economics, and invite swift regulatory action — consequences that will damage users, brands, and platforms alike. The coming months will determine whether this new ad frontier becomes a sustainable funding model for inclusive AI or a short‑term revenue play that fractures the trust on which conversational assistants depend. Platforms, advertisers, publishers, and regulators each have clear tasks: prioritize transparency, insist on independent verification, and treat conversational advertising as a governance challenge as much as a product opportunity.

Source: Iosco County News Herald New world for users and brands as ads hit AI chatbots
 

Perplexity’s decision to forgo in‑chat advertising and double down on subscriptions and enterprise sales has turned a long‑running debate about how to monetize AI assistants into an explicit industry fork — one path led by OpenAI toward ad‑supported scale, another led by Anthropic and now Perplexity toward a premium, ad‑free experience built on paid customers and enterprise contracts.

Overview​
OpenAI announced in January 2026 that it would begin testing clearly labeled ads inside ChatGPT for logged‑in adult users of the Free tier and its low‑cost ChatGPT Go plan; the company framed the move as a way to subsidize broader access while keeping paid tiers ad‑free. That public push prompted rivals and the market to reassess monetization strategies for conversational AI: should assistants become ad surfaces at the moment of decision, or must they remain neutral tools paid for by users and enterprises?
Anthropic expressly chose the latter in a high‑visibility marketing move: a Super Bowl advertising campaign made fun of the idea of ads interrupting sensitive or personal conversations, promoting Claude as a no‑ads alternative and drawing sharp reactions from OpenAI leadership. The campaign crystallized a values argument — advertising inside conversational threads could erode trust — and translated that argument into tangible user growth for Anthropic in the immediate aftermath.
Into that widening split stepped Perplexity. At a recent media roundtable the company’s executives confirmed what multiple outlets had begun reporting: Perplexity will not be introducing ads into the chatbot portion of its product and will instead rely on subscriptions and enterprise revenue. The startup is expanding its enterprise sales effort, prioritizing revenue retention over raw engagement metrics, and keeping a limited free tier with usage caps.

Why this matters: the economics and the trust trade‑off​

Running real‑time, multimodal conversational AI at scale is expensive. Large‑scale models, memory systems, real‑time search, and image/voice inputs all demand significant compute, storage, and engineering investment. OpenAI’s rationale for testing ads is explicitly economic: ads can help support the infrastructure and keep low‑cost access available to larger numbers of users.
But conversational AI differs from traditional web pages or social feeds in one critical way: the user expects a direct, concise answer, often when making a decision. That context raises a set of unique risks for ad monetization:
  • Perceived bias: even clearly labeled sponsored placements risk being perceived as influencing or “steering” answers.
  • Contextual sensitivity: conversations about health, finance, legal matters, or family issues are not typical ad inventory; any ad appearing near such content can cause outsized reputational damage.
  • Data and privacy concerns: ad targeting requires signals. Even if companies claim not to share conversations with advertisers, matching ads to conversation context raises questions about what data is used and how it is stored or processed. OpenAI states advertisers won’t get chat data and that ads will be restricted near sensitive topics, but the technical and governance details matter enormously.
Perplexity’s core product promise — an “answer engine” that produces concise, sourced responses — hinges on user trust and perceived impartiality. For that reason, abandoning in‑chat ads is not merely ideological; it’s a strategic defense of product integrity and the company’s position in markets that value credibility, including finance, healthcare, and legal services.

What Perplexity said (and what it didn’t)​

Perplexity executives told reporters they will not introduce ads into chatbot responses and will instead focus on subscriptions and enterprise sales. The company is reportedly hiring to expand a small enterprise sales team (reported as five people currently) and is actively targeting professionals — finance, C‑suite, medical — with dedicated features and an enterprise product that blends internal and external data sources for research.
Notably, the company’s recent trajectory has been used to justify the decision: multiple reports cited rapid revenue growth and a reported annual recurring revenue (ARR) figure approaching $200 million by October 2025, with claims of roughly 4.7x year‑over‑year growth. Those numbers are being widely repeated in trade coverage as evidence that a subscription‑first model can scale for a search‑centric AI business. However, the exact figures and the timing of the milestones vary across reports and have not been uniformly confirmed in a single, company‑filed disclosure. That gap matters for investors and rivals trying to read Perplexity’s playbook.

Verifying the claims: what the public record supports​

  • OpenAI is testing ads in ChatGPT Free and Go tiers for U.S. adult users. This is OpenAI’s stated policy and has been publicly posted by the company. Verified.
  • Anthropic ran Super Bowl spots contrasting Claude’s ad‑free promise with ChatGPT’s ad tests, resulting in measurable traffic and app ranking bumps. Verified by multiple independent outlets.
  • Perplexity’s public statements (roundtable remarks) emphasize a shift to subscriptions and enterprise revenue and an aversion to in‑chat ads. Reported by reputable outlets and repeated across the trade press, but company financials remain largely private and some revenue figures are reported, not audited.
  • ARR and growth multiples cited in press pieces (e.g., “$200M ARR by October 2025” and “4.7x growth”) appear in multiple stories but are not accompanied by SEC‑style filings; they should be treated as reported metrics pending official confirmation. There is conflicting historical reporting on Perplexity ARR figures, which emphasizes the need for caution when using a single number as definitive. Unverified / reported.
Where possible, this article cross‑references Perplexity claims with at least two independent trade outlets and with OpenAI’s own public policy on ads to ensure an accurate portrayal of the landscape. When numbers are inconsistent across reports, they are described as reported and flagged accordingly.

The competitive landscape: three monetization camps​

The current market shows an emergent three‑way split in monetization strategies among major AI players:
  • Ad‑enabled scale (OpenAI): OpenAI’s pilot places clearly labeled ads beneath answers for Free and Go users, asserting that ads will be separated from the organic answer and not influence model outputs. The company argues ads will fund broader access and continued product investment.
  • Ad‑free, subscription/enterprise (Perplexity, Anthropic): Perplexity and Anthropic emphasize trust and neutrality, positioning ad‑free access as a premium attribute worth paying for or a necessity for enterprise adoption. Anthropic’s big‑budget Super Bowl creatives underscored that stance and translated into short‑term user gains.
  • Platform‑level choices (Google/DeepMind): Large incumbents that already monetize heavily through ads (Google) have broader strategic options and can subsidize ad‑free experiences in new products for now; other players (DeepMind/Gemini) have publicly said they have no immediate plans to introduce in‑assistant ads. That stance may be tactical, reflecting larger corporate economics rather than a permanent philosophical commitment.
This segmentation redefines competitive differentiation. Where product capability once dominated buying decisions, procurement and user choice will increasingly weigh a vendor’s business model: who pays the bills, and whose incentives might influence recommendations?

Product design and the ad experience: promises vs. pitfalls​

OpenAI promises that ads will be visually separated, labeled, and that ads won’t change what the assistant recommends. That design aims to reduce the apparent conflict of interest. But the subtlety of design lies in implementation and perception.
Key technical and UX considerations:
  • Placement and labeling: Placing ads below answers reduces interruption risk, but the timing of ad presentation (immediately after a definitive answer) can still create an associative effect: the user may remember the sponsored option more readily.
  • Relevance matching: OpenAI indicates ads will be matched to conversation topic and past interactions. That requires signal processing and modeling choices that have to balance usefulness and privacy.
  • Guardrails around sensitive content: OpenAI has pledged not to show ads around health, mental health, or political content and will avoid showing ads to accounts they predict belong to under‑18 users. These policies matter but require rigorous enforcement and edge‑case handling.
  • Transparency tooling: Ad dismissal, “why this ad,” and ad‑data deletion controls are helpful, but enterprise buyers will demand contractual and technical assurances about data flows and auditability.
  • Measurement and economics: Early reporting suggests ad minimums and premium CPMs for in‑chat placements. The economics — whether 1) advertisers pay enough to offset infrastructure costs without compromising UX, and 2) the platform can scale ad revenue without excessive targeting — will determine whether ads are sustainable for consumer tiers.

Enterprise strategy and the sales challenge​

Perplexity is shifting headlong toward enterprise sales and high‑value professional users. That strategy makes sense for several reasons:
  • Higher willingness to pay: Finance teams, legal, healthcare, and consultants will pay for verifiable, citable, and auditable information without the noise of ads.
  • Less sensitivity to scale: Enterprise contracts, unlike consumer ads, can produce large revenue per customer without requiring mass consumer adoption.
  • Trust as a product moat: For regulated industries, vendor business model and data handling practices are core decision criteria.
But executing a B2B strategy is operationally and culturally different from scaling consumer virality. Reported constraints and risks:
  • Perplexity reportedly had a small enterprise sales team (five people) at the time of the announcement; scaling to an enterprise‑grade go‑to‑market requires hiring, legal, SOC/ISO compliance work, and long sales cycles. Rapid ARR growth alone does not automatically translate to durable enterprise contracts.
  • Enterprise customers will demand strong SLAs, data residency, governance, and on‑prem or private cloud options — all expensive to implement and operate.
  • Competition is stiff: incumbents like Microsoft/Google and specialist vendors are racing to lock in enterprise footprints with integrated suites (Workspace, 365, Teams + Copilot). Perplexity’s strength is product focus and neutrality, but it must also show depth and integration capabilities that enterprises expect.

Regulatory and ethical implications​

The advertising decision touches more than UX and revenue. It raises regulatory questions that vary by jurisdiction:
  • Children and COPPA‑style rules: OpenAI’s stated policy of not showing ads to under‑18s is a pragmatic step, but age prediction algorithms are imperfect and regulators will scrutinize liability.
  • Consumer protection and deceptive practices: If ads are embedded in responses or given undue prominence near answers, consumer protection authorities could challenge whether users are being unfairly influenced.
  • Data protection regimes (GDPR, CCPA): The legal status of conversational logs and the use of derived signals for ad targeting must be contractually and technically clarified.
  • Industry‑specific regulation: Healthcare, finance, and legal domains have sectoral rules that may preclude particular types of monetization or require demonstrable data segregation.
All companies in this space will need not only legal compliance but also strong privacy engineering and transparent governance to withstand regulatory and public scrutiny.

Strategic strengths and risks — a candid appraisal​

Perplexity’s stance offers clear advantages:
  • Strength: Trust and product differentiation. By refusing in‑chat ads, Perplexity differentiates on credibility — a critical asset for enterprise buyers and professionals who cannot tolerate opaque incentives.
  • Strength: Higher ARPU potential. Charging enterprises and professionals typically yields higher revenue per user than an ad model that needs massive scale.
  • Strength: Brand clarity. Saying “no ads in answers” is a simple, defensible message that resonates in privacy‑sensitive markets and in the wake of Anthropic’s high‑profile campaign.
But the move is not without material risks:
  • Risk: Scale economics. Advertising can monetize the long tail of users cheaply; a subscription/enterprise model requires either high unit economics (large enterprise deals) or very successful consumer conversion funnels.
  • Risk: Sales execution. Moving from a product company to a repeatable enterprise sales organization is operationally difficult and time‑consuming.
  • Risk: Competitive pressure and integration. Giants with broader product suites can bundle AI into platforms that enterprises already use, undercutting the value of a standalone answer engine.
  • Risk: Investor expectations. If reported ARR numbers were a driver of investor confidence, any slowdown or discrepancy in the public accounting of revenue could create fundraising or valuation pressures. Reports of rapid ARR growth contend with earlier numbers that vary widely — an ambiguity investors will probe.

What advertisers and publishers should watch​

For advertisers, conversational AI presents a novel ad surface with potentially higher intent signals than display ads. But the format is new, and the following are immediate considerations:
  • Measurement and transparency: Advertisers will demand clear measurement and verification of ad performance inside chats.
  • Brand safety: Will the platform guarantee that ads won’t appear near risky content? OpenAI says it will block ads near certain sensitive topics; enforcement will be a test.
  • Cost and yield: Early reports suggest high minimums and premium pricing; small and medium advertisers may be priced out initially.
For publishers, the development is existential. If conversational AIs become the primary interface for information discovery and answers incorporate sponsored placements, publishers may see their referral traffic and ad revenue disrupted — but they may also obtain new monetization channels via sponsored answers or partnerships. The choices these companies make now will shape the open web’s economics for years.

What users and enterprises should do now​

  • Individuals who prioritize no ads in answers can evaluate Perplexity and Anthropic as alternatives, or pay for ad‑free tiers where available.
  • Enterprises should update procurement checklists to include vendor monetization model as a primary risk factor. Ask vendors:
  • Do ads appear inside answer surfaces?
  • How are ads selected and what data signals are used?
  • What contractual protections exist for sensitive queries and user data?
  • Privacy and compliance teams need to assess how conversational data is processed for ad relevance, retention, and deletion controls.

The near‑term outlook: what to watch next​

  • OpenAI’s rollout performance: Will ad tests measurably fund infrastructure and free access while preserving trust? Watch adoption metrics, user feedback flows, and any public reversals or policy tinkering.
  • Perplexity’s enterprise traction: Will the reported ARR growth convert into a broad, diversified enterprise book? The pace of sales hires and large contract disclosures will be telling.
  • Regulatory moves: Expect consumer protection and privacy regulators to scrutinize ad targeting in conversational contexts; public enforcement or guidance could reshape permissible designs.
  • Competitive marketing and positioning: Anthropic’s brand play showed that a values‑based appeal can yield short‑term user lift. Watch whether competitors adopt similar trust‑first messaging or pivot their product posture.

Conclusion​

Perplexity’s public refusal to insert ads inside chatbot answers crystallizes a pivotal commercial argument for the AI era: is user trust — and the premium it commands — a defensible, scalable business model, or is mass ad monetization inevitable to underwrite ubiquitous, low‑cost AI access? The answer will not be binary. We’re entering a period of market differentiation where business model, not just model accuracy, will determine who wins certain customer segments.
For users, advertisers, and enterprises, the immediate lesson is practical: ask vendors how they make money. Business model alignment now matters as much as capability when the product is an assistant that can influence real‑world choices. The next twelve months will show whether Perplexity’s subscription‑and‑enterprise bet can scale to meet the financial demands of running an answer engine — or whether ad‑funded assistants like ChatGPT will prove the faster path to ubiquity and incumbent entrenchment.

Source: The News International AI ad wars begin as Perplexity snubs ChatGPT advertising
 
