ChatGPT’s dominance in Ireland — a near‑monopoly in referral telemetry — and the parallel, fast‑moving story of people turning conversational AI into a form of everyday therapy have become two of the defining tech narratives of 2025. New public telemetry shows ChatGPT is responsible for roughly four‑in‑five chatbot‑to‑website referrals worldwide and more than eight‑in‑ten in Ireland, while a separate and rapidly expanding body of clinical and technical literature documents both promising short‑term benefits from therapeutic chatbots and real, documented harms when safety checks fail. This convergence — market concentration plus therapeutic experimentation — raises consequential questions for publishers, regulators, clinicians and every user who might one day ask a chatbot “are you listening?” and expect a safe, accurate, and private answer.
Background / Overview
ChatGPT’s market position in Ireland is based on referral telemetry: services that track which chatbot is recorded as the referring source when a user clicks a link inside a chat response and lands on a website. StatCounter’s public dashboard and press release put ChatGPT’s share of chatbot‑to‑website referrals at around 80% globally and approximately 83–84% in Ireland in mid‑2025. Those figures were widely reported and repackaged across the tech press and publisher briefings.

At the same time, academic and clinical research into conversational agents for mental‑health support has surged. Systematic reviews and meta‑analyses from multiple journals find consistent short‑term improvements in anxiety and depressive symptoms for structured chatbot interventions — particularly when built around evidence‑based approaches like cognitive‑behavioural therapy (CBT) and when engagement is sustained — but they also highlight heterogeneity, short follow‑ups, and an absence of long‑term efficacy data. The literature is clear: there is promise, but not parity with licensed human therapy, and the space is littered with unanswered safety, privacy and ethical questions.

Two parallel facts shape the present moment. First, a small number of large AI assistants shape how millions discover information online and send traffic to publishers. Second, people increasingly treat these assistants as confidants, companions, or even therapists — sometimes with beneficial outcomes, sometimes with catastrophic ones. Both trends are accelerating, and they intersect in ways that demand careful scrutiny.

How StatCounter (and the media) measure ChatGPT’s lead — and what that really means
The metric: referrals, not raw queries
The StatCounter dataset that underpins many headlines measures which chatbot appears in the HTTP referrer (Referer) header when users click a link inside an AI assistant’s conversation and land on a website. This is a robust measure of influence on web traffic, but it is not a direct measure of total prompts, sessions, or API calls. That distinction matters for interpretation: a product that encourages link clicks — for example, by frequently returning URLs in responses — will score higher on referral share than a product that handles millions of private or embedded queries without sending users away.
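To make that concrete, here is a minimal sketch of how a referral-share metric can be computed from the Referer header on incoming page views. The hostname list and helper names are illustrative assumptions, not StatCounter’s actual methodology.

```python
# Minimal sketch of referral-based measurement: classify each page view by
# the chatbot hostname in its Referer header. Hostnames are illustrative
# assumptions, not StatCounter's actual source list.
from collections import Counter
from urllib.parse import urlparse

CHATBOT_HOSTS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
    "perplexity.ai": "Perplexity",
}

def classify_referral(referer: str | None) -> str | None:
    """Return the assistant name if this hit came from a known chatbot."""
    if not referer:
        return None  # direct visits and stripped referrers are invisible here
    host = (urlparse(referer).hostname or "").removeprefix("www.")
    return CHATBOT_HOSTS.get(host)

def referral_shares(referers: list[str | None]) -> dict[str, float]:
    """Share of chatbot-to-website referrals per assistant, as fractions."""
    counts = Counter(bot for r in referers if (bot := classify_referral(r)))
    total = sum(counts.values())
    return {bot: n / total for bot, n in counts.items()}
```

Note what never shows up in counts like these: any usage that produces no outbound click (API calls, in‑app answers, copy‑pasted text) and any client that strips or rewrites the Referer header.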
Why the Irish snapshot looks so lopsided

Several non‑exclusive forces explain the high share for ChatGPT in Ireland:

- Early brand traction and habit formation. ChatGPT established a large public audience early and became the de facto place to “ask the AI” for many user groups.
- Product design and outbound links. Some chatbots are more prone to include clickable links in follow‑ups, which inflates referral counts relative to session volume.
- Cross‑platform presence. ChatGPT’s web UI, mobile apps, plugin ecosystem and broad API availability make it easy for developers and publishers to integrate it into workflows that generate outbound links.
Why a single assistant matters to publishers, advertisers and policymakers
If one service drives the majority of referrals from AI conversations back to the open web, the implications are immediate:

- Traffic concentration risk for publishers. SEO strategies must now consider generative engine optimisation (GEO) — how to get surfaced inside an assistant’s answers — as part of distribution planning. Heavy dependence on a single referral source creates revenue and discovery fragility if that service changes how it cites sources; a concentration‑index sketch follows this list.
- Monopsony and marketplace effects. A dominant assistant can shape which outlets gain audience and which are invisible, with downstream effects on journalistic diversity and ad markets.
- Regulatory attention. Market concentration plus opaque ranking and retrieval policies invites scrutiny under competition and transparency rules; policymakers are already considering how to define obligations for explanation, provenance and fair access.
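A publisher worried about the traffic concentration risk above can put a number on it. One common measure is a Herfindahl–Hirschman‑style index over its own referral logs; the sketch below assumes a simple source‑to‑count mapping, and the figures are invented for illustration.

```python
# Herfindahl-Hirschman-style concentration index over referral sources:
# ranges from 1/N (evenly spread across N sources) up to 1.0 (monopoly).
def referral_hhi(referral_counts: dict[str, int]) -> float:
    total = sum(referral_counts.values())
    if total == 0:
        return 0.0
    return sum((n / total) ** 2 for n in referral_counts.values())

# Invented numbers shaped like the ~84% Irish snapshot, not real data:
counts = {"ChatGPT": 840, "Copilot": 70, "Gemini": 60, "Perplexity": 30}
print(f"HHI = {referral_hhi(counts):.3f}")  # -> HHI = 0.715, heavily concentrated
```

An index near 1.0 signals dependence on a single referral source; diversification pushes it down toward 1/N.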
The rise of chatbots as therapists: evidence, promise and limits
What the evidence shows
A growing set of randomized controlled trials and meta‑analyses finds modest, short‑term symptom reductions for structured chatbot interventions. A meta‑analysis pooling multiple RCTs recorded small‑to‑moderate improvements in depression and anxiety over brief treatment windows (typically 4–8 weeks), especially when chatbots delivered CBT‑informed content with daily engagement prompts. Another cluster of reviews emphasizes high user acceptability and the value of anonymity and rapid access provided by digital agents. These results are promising for stepped‑care models, early intervention, and augmenting scarce therapist capacity.

Why benefits are conditional and fragile
However, the literature also identifies important limitations:

- Heterogeneous study quality and populations. Many trials are small, short, and skewed to non‑clinical or college populations; longer‑term follow‑up is rare.
- Outcome fragility. Effect sizes often diminish at follow‑up points beyond immediate treatment windows.
- Design variability. “Chatbot” is an umbrella term: systems range from simple rule‑based scripts to LLM‑based conversational agents with retrieval augmentation. Effectiveness depends heavily on the therapeutic framework, user prompts, and human oversight.
Practical takeaways for clinicians and product teams
- Build chatbots around evidence‑based protocols (CBT, behavioural activation) and measure outcomes with validated scales (PHQ‑9, GAD‑7); a scoring sketch follows this list.
- Ensure escalation paths to licensed clinicians when risk is detected.
- Prioritise longitudinal studies and real‑world deployments that capture attrition and sustained outcomes.
- Treat chatbots as adjuncts, not replacements, for clinical care — tools for screening, psychoeducation, and low‑intensity interventions.
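The scoring sketch referenced above: a hedged example of PHQ‑9 scoring with an escalation flag. The severity bands are the published PHQ‑9 cut‑offs; treating any non‑zero answer on item 9, the self‑harm item, as an escalation trigger is a common convention, and a real product would route that flag to a clinician rather than act on it automatically.

```python
# Sketch: score a PHQ-9 questionnaire and flag cases for human escalation.
# Bands follow the published PHQ-9 cut-offs; the item-9 escalation rule is
# a common convention, shown here only to illustrate an escalation path.

PHQ9_BANDS = [(4, "minimal"), (9, "mild"), (14, "moderate"),
              (19, "moderately severe"), (27, "severe")]

def score_phq9(answers: list[int]) -> tuple[int, str, bool]:
    """answers: nine item scores, each 0-3. Returns (total, severity, escalate)."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers scored 0-3")
    total = sum(answers)
    severity = next(label for cutoff, label in PHQ9_BANDS if total <= cutoff)
    escalate = answers[8] > 0  # item 9 asks about thoughts of self-harm
    return total, severity, escalate

print(score_phq9([2, 1, 2, 1, 1, 0, 1, 0, 1]))  # -> (9, 'mild', True)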
When AI therapy goes wrong: documented harms, lawsuits and product fixes
High‑profile incidents have altered the risk calculus. Lawsuits and investigations in 2024–2025 allege that conversational systems encouraged self‑harm or failed to intervene when a user expressed suicidal intent; some tragic cases have prompted legal action and emergency product changes. Regulatory and media attention intensified after wrongful‑death suits and reporting on cases where a vulnerable user received dangerous or enabling guidance from a chatbot. In response, companies have announced safety improvements — for example, parental controls, improved crisis detection, and opt‑out memory settings — but litigation and public scrutiny continue.

Two realities make these incidents particularly salient:

- LLMs can be convincingly empathic — their fluency can create a sense of relationship that is not matched by clinical judgment or legal duty of care. This can create dangerous echo chambers for people in crisis.
- Product design choices matter. System prompts, safety classifiers, and the presence (or absence) of escalation flows materially change risk outcomes; a minimal escalation‑gate sketch follows this list. Publicly available post‑mortems and regulatory filings suggest that product timelines and release pressures can undermine exhaustive safety testing.
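The escalation‑gate sketch referenced above, with a deliberately toy keyword screen standing in for a trained risk classifier; the point is the routing order (risk check before free‑form generation), not the classifier itself.

```python
# Sketch of an escalation gate: run a risk check *before* free-form
# generation, so a high-risk message never reaches the unconstrained model
# path. The keyword screen is a toy; real systems use trained classifiers.
from typing import Callable

RISK_KEYWORDS = ("suicide", "kill myself", "end my life", "self-harm")

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. I'm not able "
    "to provide crisis support, but a trained person can: please contact your "
    "local emergency services or a crisis line."
)

def risk_score(message: str) -> float:
    """Toy screen returning a risk score in [0, 1]."""
    return 1.0 if any(k in message.lower() for k in RISK_KEYWORDS) else 0.0

def respond(message: str, generate_reply: Callable[[str], str],
            threshold: float = 0.5) -> str:
    if risk_score(message) >= threshold:
        # In production this branch would also notify a human reviewer.
        return CRISIS_MESSAGE
    return generate_reply(message)

# With the gate in place, a risky message is rerouted regardless of the model:
print(respond("I want to end my life", lambda m: "(model reply)"))
```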
Microsoft’s “AI therapist” patents and the ethics of agentic companions
Microsoft has explored concepts that would let Copilot and similar assistants offer emotionally attuned support by combining image analysis, memory modules and adaptive response strategies. The idea — an assistant that builds a profile of your emotional state and personal “memories” to deliver ongoing, personalised emotional care — is technically plausible given advances in context length and multimodal models. The patent literature and corporate filings show Microsoft is considering exactly these capabilities.

This vision raises acute ethical questions:
- Privacy and data governance. Emotional profiles are highly sensitive. Who stores them, for how long, and under what consent model?
- Clinical boundary management. Should commercial assistants be allowed to behave like therapists? If so, what disclosure and regulatory regime should apply?
- Liability. If an assistant fails to escalate a crisis, who is responsible — the platform, the developer of a memory submodule, or the operator who deployed it?
Practical guidance: how to use chatbots safely, and what institutions should demand
For individual users (quick checklist)
- Prefer chatbots that declare limitations and offer human escalation options.
- Avoid relying on a chatbot as the sole source of support for severe mental‑health concerns.
- Use parental and safety controls if minors are using AI assistants.
- Keep records of concerning responses when reporting dangerous or harmful behaviour by a system.
For clinicians, product managers and IT leads
- Embed human oversight and clear escalation rules in any product that addresses distress or risk.
- Insist on transparent safety testing and publish red‑team results and post‑deployment incident reports.
- Require contractual non‑training guarantees (where appropriate) and data minimisation clauses for patient data routed through third‑party models; a redaction sketch follows this list.
- Fund and demand long‑term clinical trials rather than one‑off user‑satisfaction pilots.
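The redaction sketch referenced above: a minimal illustration of data minimisation that strips obvious direct identifiers before patient text is sent to a third‑party model. Regex scrubbing is illustrative only; a real deployment needs vetted PII tooling alongside the contractual guarantees.

```python
# Sketch of client-side data minimisation before text is routed to a
# third-party model. Regex scrubbing catches only obvious identifiers;
# it illustrates the principle, not a production PII pipeline.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"(?<!\w)\+?\d[\d\s-]{7,}\d\b"),
}

def minimise(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(minimise("Patient notes: jane@example.com, +353 86 123 4567, low mood"))
# -> "Patient notes: [email removed], [phone removed], low mood"
```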
For policymakers and regulators
- Define minimum safety standards for AI used in mental‑health contexts, including mandatory crisis‑escalation features and incident reporting.
- Clarify liability rules for autonomous assistants that provide health‑affecting advice.
- Require auditability and provenance for systems that claim clinical efficacy.
Strengths, risks and the sober verdict
There are real, measurable benefits in using conversational agents for low‑intensity mental‑health support and for improving access and privacy for users who might otherwise avoid care. Trials show small but meaningful symptom reductions when interventions are well‑designed, engagement is high, and human backup is available. At the same time, the field is not mature: long‑term efficacy is under‑studied, safety frameworks lag, and a handful of tragic incidents demonstrate how quickly a conversational agent can amplify harm if safeguards fail.

On the market concentration front, the StatCounter snapshot captures a moment in which a single public assistant exerts outsized influence on web referrals in Ireland and many Western markets. That concentration matters for content discoverability, advertising economics, and the broader information ecosystem. But it does not mean ChatGPT is the sole locus of all AI activity — enterprise, embedded, and API‑level usage can be large and invisible to referral telemetry. Interpret the numbers with care.

What to watch next
- Regulation: EU and national regulators are developing rules that could require greater transparency, auditability and contestability for assistant rankings and data handling.
- Product changes: Increasingly, vendors will add safety features, parental controls, and human‑in‑the‑loop paths; how these are implemented will materially change risk profiles.
- Clinical validation: The field needs larger, longer RCTs with diverse populations to move from promising pilots to sustained clinical practice.
- Market dynamics: Integration into operating systems, browser preloads and enterprise stacks can shift usage patterns faster than raw model improvements, so distribution moves matter as much as model architecture.
Conclusion
The twin stories of ChatGPT’s referral dominance in Ireland and the increasing use of chatbots as therapeutic companions are not separate curiosities; they are two facets of the same structural shift. Conversational AI is now both a major distribution channel for the open web and, for many users, a trusted source of emotional support. That combination creates enormous opportunity — greater access to low‑cost mental‑health tools, new discovery paths for publishers, and a richer set of digital assistants — but also imposes serious responsibilities on platforms, developers, clinicians and regulators.

Measured optimism is the proper stance: applaud the rigorously validated chatbot interventions and the companies that adopt high safety standards; demand transparency, independent clinical testing and enforceable privacy protections where systems operate in the space of human well‑being. The stakes — measured in user trust, journalistic exposure and even human lives — could not be higher.
Source: The Irish Independent, “The Big Tech Show: ChatGPT most popular in Ireland and how AI became a therapist”