Use of AI tools is moving from novelty to habit in the UK, and Ofcom’s latest research suggests the shift is now deep enough to matter for regulators, platform designers, and the public alike. The media watchdog says some adults are already treating chatbots as sources of advice and conversation, not just search shortcuts or productivity aids. That matters because once a tool starts mediating personal decisions, questions about accuracy, safety, disclosure, and emotional dependence become much harder to dismiss.
Background
Ofcom’s new findings land at a moment when the UK’s broader relationship with AI has changed quickly and visibly. In the same set of research, the regulator says adult internet users are spending more time online, seeing AI-driven summaries in search more often, and interacting with generative tools in ways that would have sounded speculative only a year or two ago. The regulator’s own summary points to a digital environment where AI is not a side feature anymore, but part of the default online experience.

That context matters because UK policymakers have been moving toward a more structured response to AI chatbots. Ofcom has already warned online service providers that the Online Safety Act can apply to generative AI and chatbots in specific circumstances, especially where safety, age assurance, or harmful content are involved. It has also begun enforcement activity, including an investigation into an AI companion chatbot service over age-check requirements, which shows the regulator is no longer treating this as a purely theoretical policy debate.
The new adult-use research is also part of a broader pattern seen in other UK surveys. Ipsos reported in 2025 that a meaningful share of Britons were already turning to AI for personal advice, while charities and youth groups have raised alarms about emotional reliance among younger users. Taken together, these findings point to a market where conversation is becoming as important as search, and where the line between utility and dependency is increasingly blurred.
That shift has implications well beyond consumer behavior. When adults ask chatbots for relationship advice, health guidance, or simple companionship, the product category starts overlapping with counselling, customer support, education, and even social infrastructure. That is precisely why regulators care: the same interface that improves convenience can also create new pathways for misinformation, manipulation, and harm.
What Ofcom Is Actually Showing
Ofcom’s report is important not because it proves a single dramatic statistic, but because it places AI use inside the everyday habits of UK adults. The regulator says people are increasingly embedding AI tools into their digital routines, and that some are explicitly using them for advice and conversation. That is a subtle but important distinction: it suggests users are moving beyond asking chatbots to draft emails or summarize text, and toward treating them as interactive advisers.

The regulator’s wording also reinforces how normalized this behavior is becoming. Its summary notes that some participants used AI to seek relationship breakup advice or companionship while working from home. That suggests a broad range of use cases, from practical decision support to emotional substitution, and it helps explain why Ofcom is paying attention now rather than later.
Why this matters for policy
The policy issue is not whether adults may choose to use AI. It is whether the systems they use are being designed and governed with enough care for the kind of use they are now seeing. If a chatbot is being asked for advice, then accuracy, uncertainty, and safe fallback behavior stop being nice-to-haves and become core product requirements.

A second issue is scale. Ofcom’s broader research indicates AI has become visible enough in online search and daily routines that the regulator can no longer treat chatbot harm as a niche edge case. When use becomes widespread, small error rates can affect large numbers of people.
Key takeaways from the Ofcom picture include:
- AI is becoming part of routine adult online behavior.
- Some adults are using chatbots for advice, not just information retrieval.
- Emotional and conversational use cases are already emerging.
- Regulator attention is shifting from concept to enforcement.
- The boundary between search tool and advisor is getting thinner.
The Consumer Shift Toward Conversational AI
One of the most striking features of the current AI wave is how quickly users have accepted chatbots as conversational partners. Ofcom’s findings align with other UK research indicating that adults are increasingly comfortable asking AI for personal guidance, with some people using it for topics that were once reserved for friends, family, or professionals. The pattern is not limited to entertainment or novelty; it is becoming habit-forming.

That shift helps explain why AI companies are racing to make their systems feel more natural. The more human the interaction, the easier it is for users to return, reuse, and trust the product. But that same friendliness can also obscure the fact that the system is not a person, does not understand context like a human would, and can still produce confident but wrong advice.
Advice, companionship, and convenience
The evidence from recent UK studies suggests that users value chatbots because they are immediate, available, and nonjudgmental. That makes them especially attractive for sensitive topics, where users may hesitate to speak to another human. In practice, this can lower the barrier to asking for help, but it can also lower the barrier to receiving bad advice.

There is also a convenience trap. A chatbot can respond faster than a doctor, adviser, therapist, or line manager, but speed is not the same as judgment. The more people rely on AI to fill that gap, the more pressure there is on the tools to behave safely even when users present them with emotionally loaded or complex questions.
The trust problem
Trust is the central issue beneath the surface. When users treat a chatbot as a confidant, they may disclose more, challenge less, and accept suggestions they would otherwise question. That creates a product dynamic very different from standard search, because the system is not merely retrieving information; it is participating in decision-making.

Important implications include:
- Users may overestimate chatbot reliability.
- Emotional tone can be mistaken for expertise.
- Advice-seeking behavior increases exposure to hallucinations.
- Privacy expectations may not match product reality.
- The risk of dependency grows as interaction becomes more personal.
Ofcom, the Online Safety Act, and the New Enforcement Mood
Ofcom’s role here goes well beyond that of a research publisher. The regulator is already setting expectations for how the UK’s Online Safety Act applies to generative AI and chatbots, and it has been explicit that providers need to consider whether their services fall within the regime. That is a meaningful change for the industry, because compliance now has to be designed in rather than bolted on later.

The recent investigation into an AI companion chatbot service underscores that the regulator is willing to test those expectations in the real world. Even if a service markets itself as playful or experimental, Ofcom is signaling that age-check duties, moderation practices, and risk controls may still apply. That is a major warning shot for companies that have relied on the ambiguity of “just a chatbot.”
What regulators are trying to prevent
Regulators are not trying to stop adults from using AI chatbots. Instead, they are trying to ensure that services are not exposing users — especially children and vulnerable adults — to foreseeable harm. That includes unsafe content, hidden manipulative features, and age-inappropriate experiences.

The UK’s chatbot debate has therefore evolved into a more practical question: what duties attach to which kind of service, and at what point? The answer will likely depend on product design, content risks, and the service’s operational reach.
Why enforcement matters now
Enforcement changes behavior far more effectively than guidance alone. Once a regulator opens investigations and publicizes them, companies begin revisiting product labels, access controls, moderation pipelines, and safety notices. That is particularly important in a fast-moving market where product teams often ship features faster than governance teams can review them.

The current UK regulatory posture suggests several likely priorities:
- Age verification and age assurance.
- Transparency around chatbot limits.
- Content moderation and reporting pathways.
- Clearer user disclosures for AI-generated interactions.
- Risk assessment for emotionally sensitive use cases.
Where AI Advice Becomes a Safety Issue
The jump from “helpful assistant” to “advice source” is where the risk landscape changes. A chatbot used to rewrite a paragraph is one thing. A chatbot used to guide a breakup, interpret symptoms, or calm someone in distress is another. Ofcom’s findings do not say those uses are universal, but they do show that they are real enough to influence regulatory thinking.

This is where the industry’s own design choices matter most. When systems are optimized to be engaging, affirming, and high-retention, they can inadvertently encourage users to continue conversations that should instead be redirected to human support. That is why emotional UX is now a safety issue, not just a product feature.
The mental health overlap
The strongest concern is around mental health and crisis situations. Recent reporting and charity research suggest that some people are already using AI for emotional support, and that younger users in particular may be drawn to chatbots as quasi-companions. That raises obvious concerns about dependency, boundary confusion, and the possibility of harmful reinforcement.

At the same time, it is easy to overstate the case. Not every user seeking advice is in distress, and not every chatbot interaction is harmful. The challenge is designing systems that can distinguish between casual curiosity and a high-risk interaction, then respond appropriately.
The advice liability question
There is also a legal and commercial question lurking behind the policy debate. If a chatbot gives misleading advice that a user relies on, where does responsibility lie? Is it with the model provider, the app maker, the platform, or the user who chose to ask a machine for help?

That question is especially tricky because AI systems do not fit neatly into traditional product categories. They are not just static software, but not quite human advisers either. The result is a liability gap that regulators are only beginning to map.
Practical risk areas include:
- Self-harm and mental health discussions.
- Relationship and breakup advice.
- Financial or career guidance.
- Health-related questions.
- Overly persuasive companionship features.
Enterprise Implications: Governance, Procurement, and Reputation
For businesses, the Ofcom findings are not simply a consumer trend story. They point to a shifting compliance environment in which employee-facing AI tools, customer chatbots, and public-facing assistants will all need more disciplined governance. If staff are already using chatbots informally for advice, then enterprises have a shadow AI problem even before they deploy official systems.

That creates pressure on procurement teams to ask harder questions. What data does the chatbot retain? What safety filters exist? Can the system identify risky conversations and escalate them? Those questions are becoming standard due diligence, not optional extras.
Enterprise use versus consumer use
Consumer AI is driven by convenience and intimacy. Enterprise AI is driven by productivity and scale. But the regulatory risks overlap, because the same underlying technology can be used in both environments and can fail in similar ways.

For employers, the biggest issue is often unsanctioned use. Employees may paste sensitive data into public chatbots, ask for legal or HR guidance, or rely on AI for judgments outside their competence. That can create compliance exposure long before a company has formally deployed its own assistant.
Reputation and customer trust
Public trust is another major factor. If a company offers AI-powered customer service and it makes a harmful recommendation, the brand damage can be immediate. Customers rarely distinguish between the underlying model vendor and the company that presented the chatbot as a trusted front door.

That means organizations should be thinking in terms of operational trust, not just software performance. The most resilient companies will be the ones that can explain, document, and audit their AI systems rather than merely showcase them.
Important enterprise actions include:
- Establishing acceptable-use policies for staff.
- Vetting chatbot vendors for safety and logging controls.
- Separating low-risk automation from high-risk advice.
- Training teams on prompt hygiene and data handling.
- Creating escalation routes for harmful outputs.
The Market Response: Competition, Product Design, and Differentiation
The competitive implications are significant because AI chatbots are becoming a mainstream interface layer, not just a feature. That means product differentiation is no longer only about model size or benchmark scores. It is about safety, reliability, transparency, and how confidently a company can market its chatbot without triggering regulatory discomfort.

That changes the economics of the sector. Companies that can prove restraint and control may gain an advantage with regulators and enterprise buyers, even if they are not first to market with the flashiest conversational features. Safety is becoming a sales argument.
Why “friendliness” is no longer enough
A chatbot that sounds warm may feel better to the user, but warmth alone does not make a product trustworthy. In fact, a highly personable interface can make errors more dangerous because users are less likely to question the answer. That is why the next phase of competition may reward systems that are less performative and more bounded.

Providers may increasingly compete on:
- Better transparency about uncertainty.
- Clearer refusal behavior for risky topics.
- Stronger parental and age controls.
- More robust citations and provenance.
- Easier reporting and correction tools.
The platform effect
There is also a platform effect at work. If major search engines and messaging platforms continue embedding AI summaries or assistants into core experiences, users may encounter chatbot behavior without consciously choosing it. Ofcom’s research suggests that AI is already surfacing in search and routine online life, which makes design choices by dominant platforms especially consequential.

That creates a second-order competition issue. Smaller providers may struggle to match the distribution power of large platforms, but large platforms may face heavier scrutiny because their tools reach more people by default.
Consumer Behavior, Culture, and the New Normal
The cultural significance of Ofcom’s findings should not be underestimated. A technology becomes normal not when people talk about it constantly, but when they stop noticing that they are using it. That appears to be happening with AI tools in the UK, especially among adults who now use them as part of ordinary online behavior.

There is a social irony here. People often describe AI as impersonal, yet many are turning to it for highly personal uses. That suggests not merely technological adoption, but a change in what kinds of interactions people are willing to outsource to machines.
The social meaning of chatbot use
When users ask chatbots for advice, they are making a judgment about speed, privacy, and convenience. They may also be expressing a subtle lack of confidence in the human alternatives available to them. That is why chatbot adoption can be read as both a tech story and a social one.

The trend also exposes broader pressures in modern life:
- Time scarcity.
- Loneliness and isolation.
- Cost barriers to human support.
- Friction in traditional services.
- Familiarity with always-on digital interfaces.
A cultural shift with limits
Still, it would be a mistake to assume people want AI to replace humans outright. In many cases, they likely want a low-stakes first draft of an answer, not a final authority. That distinction matters, and it should guide both product design and regulation.

The ideal outcome is a system that helps people move toward better decisions without pretending to be the decision-maker. That is a much harder design problem than it sounds.
Strengths and Opportunities
Ofcom’s report is a warning, but it is also an opportunity to make AI safer, more transparent, and more useful. If regulators, companies, and civil society respond well, the UK could end up with a more mature AI ecosystem than one driven purely by hype. The upside lies in shaping the market before dangerous habits become fully entrenched.

- Regulators can set clearer boundaries before harms scale.
- Companies can build trust through stronger safeguards.
- Consumers can benefit from better disclosure and safer defaults.
- Enterprise buyers can demand auditability and risk controls.
- Public debate can shift from novelty to responsible use.
- AI tools can be designed to escalate sensitive issues to humans.
- Better regulation could reward high-quality providers over reckless ones.
Risks and Concerns
The same factors that make chatbots appealing also make them risky. When a tool is easy to use, always available, and emotionally responsive, people may trust it more than they should. That creates a real possibility of harm, especially if the system is used in moments of vulnerability or stress.

- Users may rely on inaccurate or oversimplified advice.
- Emotional attachment can blur the line between tool and companion.
- Children and vulnerable adults may be especially exposed.
- Product design can nudge people toward deeper dependence.
- Companies may underinvest in moderation until after incidents occur.
- Regulation may lag behind rapid feature rollouts.
- Public trust could be damaged by a high-profile chatbot failure.
Looking Ahead
The next phase of this debate will likely be defined by enforcement, product redesign, and more detailed public evidence. Ofcom has already shown that it is willing to connect research findings to regulatory action, and that suggests the UK will not wait long for further scrutiny of AI companions and chatbot advice systems. The central question is no longer whether the technology is being used this way, but whether the rules and products are fit for that reality.

Expect more tension between innovation and restraint. AI vendors want frictionless experiences that keep users engaged, while regulators increasingly want friction in precisely the places where risk is highest. That tension will shape everything from age checks to warning labels to how chatbots respond when a conversation turns personal or sensitive.
What to watch next:
- Further Ofcom guidance on generative AI and chatbots.
- Additional enforcement actions under the Online Safety Act.
- Changes to chatbot age assurance and verification systems.
- Safer design updates from major AI providers.
- New surveys on advice-seeking behavior among UK adults.
- Enterprise policy updates around sanctioned AI use.
- Any evidence that chatbot companionship is displacing human support.
The UK is entering a phase where AI governance must keep pace with everyday intimacy, not just technical capability. If regulators and companies get this right, chatbots could become safer, more honest tools that augment human judgment rather than replace it. If they get it wrong, the country may discover too late that the most persuasive AI systems are also the hardest to control.
Source: MLex, “AI use growing among UK adults, some seek advice from chatbots, regulator says”