The landscape of artificial intelligence chatbots has rapidly evolved, with state-of-the-art models infiltrating everyday life and increasingly turning to voice conversations as a primary interface. Voice mode chatbots on platforms from OpenAI, Microsoft, Google, Meta, and xAI are vying not only to outperform one another on knowledge and informational accuracy, but also to demonstrate personality, empathy, and a more “human” conversation style. The recent experiment by ZDNet’s Lance Whitney, wherein he brought a common but emotional issue—an anxious cat named Mr. Giggles—to five headline chatbots, offers an illuminating snapshot of both the current capabilities and shortcomings of these voice AI assistants.
The Modern Voice Assistant: Convenience Meets Empathy
Talking to machines has long moved beyond the stuff of science fiction. Modern chatbots, equipped with sophisticated voice interfaces, promise more natural, accessible, and emotionally attuned interactions. For many, voice is simply a more convenient way to converse—hands-free, on the go, or when typing feels like a chore. Importantly, these services typically allow users to review transcripts after the fact, merging instant spoken guidance with the archival power of text.

The idea of testing chatbots with a relatable, emotionally nuanced scenario—an owner worried about their nervous pet—was a masterstroke. It’s a common, real-world problem, and the quality of the responses offered more than a window into knowledge and logic; it revealed each AI’s capacity for reassurance, sensitivity, and conversational cadence.
The Experiment: Five AIs, One Anxious Cat
ZDNet’s Whitney tested the voice conversation capabilities of:
- OpenAI’s ChatGPT
- Microsoft Copilot
- Google Gemini
- Meta AI
- xAI’s Grok
1. ChatGPT: The Empathetic Companion
OpenAI’s flagship ChatGPT consistently ranked high not only for the accuracy and practicality of its advice but for its tone—empathic, supportive, and encouraging. The model asked probing questions about environmental triggers and behavioral changes, and suggested a range of practical solutions including calming treats, supplements, diffusers, and collars. Whitney specifically highlighted ChatGPT’s ability to “listen,” cementing it as the virtual equivalent of a sympathetic friend or an attentive vet nurse.

On the technical side, ChatGPT’s voice mode supports a range of accents and personalities, with the reviewer opting for the “Vale” voice—British, inquisitive, and bright. Unlike some chatbots that simply dispensed advice, ChatGPT engaged in a more revealing back-and-forth, probing, validating, and providing comfort alongside practical suggestions.
Critical Analysis:
- Strengths: Highly conversational, emotionally intelligent, broad knowledge, and an engaging tone are standout features. ChatGPT felt like a partner in problem-solving, leveraging empathy and curiosity to build user trust—a key differentiator in the current AI landscape.
- Potential Risks: While ChatGPT excels in forming a virtual rapport, its suggestions should still be independently verified for medical accuracy, especially for pet health issues. Users may conflate AI empathy with actual medical expertise and should consult a qualified professional for serious concerns.
2. Microsoft Copilot: Calm and Supportive
Microsoft Copilot (based on an OpenAI model but with Microsoft’s distinctive touch) offered a parallel experience, both informative and supportive. Copilot’s user flow for activating voice mode was streamlined and included a selection of voices; “Wave” (British-accented) became the reviewer’s choice. Copilot was reassuring, collaborative (“we will figure this out”), and systematic, asking about specific triggers and reiterating solutions such as pheromone diffusers and calming supplements.

Its responses mirrored best-practice advice seen in veterinary and animal behaviorist circles. However, the tone struck a careful balance between supportive attentiveness and professional detachment.
Critical Analysis:
- Strengths: Well-structured, soothing, and approachable, Copilot successfully combined knowledge and empathy. Its clear, articulate voices and easy setup make it a compelling choice for regular use.
- Potential Risks: Copilot’s measured approach could, in some instances, feel less personalized or emotionally rich compared to its rivals. Users should watch for moments where attentive follow-up questions are replaced by polite but formulaic responses—an area Microsoft continues to refine.
3. Google Gemini: Practical, But Reserved
Google’s Gemini voice AI stood out for practicality—immediately acknowledging that vet visits can be stressful and suggesting an array of tools like pheromone diffusers, calming collars, soothing music, and games. Yet what held Gemini back was its conversational brevity. Rather than coaxing details or displaying outward sympathy, it replied in shorter, more matter-of-fact statements. Whitney noted needing to “drag” some information out. Its “Capella” voice was described as soothing, though the AI’s overall demeanor was less warm.

Critical Analysis:
- Strengths: Direct, to-the-point, and with solid advice, Gemini is ideal for users who prefer concise, actionable answers rather than drawn-out chat.
- Potential Risks: This matter-of-fact brevity can hinder rapport, especially in emotionally loaded scenarios. The lack of follow-up or deeper conversational engagement may make Gemini users feel more like customers than collaborators. For sensitive or complex topics, Gemini may frustrate those seeking reassurance over efficiency.
4. Meta AI: Snappy, Informative, Yet Distant
Meta AI brought a unique touch—offering voice options mimicking celebrities, such as Kristen Bell, adding a fun or familiar twist to the exchange. The interface was praised for the real-time transcription visible during and after the conversation. Meta AI performed admirably at offering succinct, on-point advice, but this snappiness came at the cost of warmth. The responses, though accurate, sometimes lacked the softening touches of validation or emotional resonance seen in ChatGPT and Copilot.

Critical Analysis:
- Strengths: Highly responsive and easy to use, with a novelty factor (celebrity voices) that could appeal to broad audiences. The transcription feature increases accessibility.
- Potential Risks: The lack of a sympathetic tone may make Meta AI less suitable for emotional topics. The risk is especially pronounced in scenarios—like caring for anxious pets—where users crave not only information but also comfort.
5. Grok: The Q&A Powerhouse
xAI’s Grok, currently distinguished for its customization features, allowed the reviewer to set the conversational style (sympathetic and understanding, via the “custom” option). Despite these tweaks, Grok tended to overwhelm with information—delivering advice in a technical, one-way burst that felt more like a data dump than a dialogue. For all the utility of Grok’s content, this machine-gun approach fell short of fostering a real sense of conversation.

Notably, Grok supports on-screen transcription and archiving of entire conversations, making it easy to review details later. Its voice, while pleasing enough, is less customizable compared to the competition.
Critical Analysis:
- Strengths: Exceptional information density and customizability. For users who want a quick, comprehensive answer, Grok may prove unbeatable.
- Potential Risks: The risk, as seen in ZDNet’s test, is information overload and missed opportunities for relationship building. Grok’s “Q&A” style could alienate users looking to feel heard or who prefer a slower, more interactive exchange.
Comparing Personality, Empathy, and Practicality Across Chatbots
The essence of a “good” conversational AI is now defined by more than accuracy or breadth of knowledge; it’s about how the information is delivered. Whitney’s multi-AI test exposes fundamental differences in tone, responsiveness, and the feeling of being understood—qualities that can weigh at least as heavily as the information itself.

| AI Assistant | Strengths | Weaknesses | Personality/Energy | Custom Voice Options |
| --- | --- | --- | --- | --- |
| ChatGPT | Empathy, probing dialogue, breadth of advice | Reliance on user verification | Warm, conversational | Yes (accents, tones) |
| Copilot | Soothing, logical, supportive | Sometimes more formal, reserved | Calm, measured | Yes |
| Gemini | Direct, practical, efficient | Can be terse, less warm | Pragmatic, reserved | Yes |
| Meta AI | Succinct, innovative voices | Lacks warmth, less emotionally tuned | Informative, snappy | Yes (celebrity mimic) |
| Grok | Detailed, customizable | Overwhelming info, limited dialogue | Fast, encyclopedic | Limited |
The New Arms Race: Humanizing AI Interactions
What emerges from this experiment is an arms race beyond accuracy—a contest for empathy, relatability, and emotional resonance. OpenAI’s ChatGPT, perhaps reflecting its research priorities, is more natively equipped to provide warmth and curiosity. Microsoft, as a close collaborator with OpenAI, has absorbed many of these conversational strategies, though sometimes with a more corporate restraint. Google prioritizes brevity and usability but, for now, at the cost of overt warmth. Meta AI adds speed and visual pop, but hasn’t found a way to sound truly caring. Grok leans towards information firehoses over handholding.

Notably, all five platforms offered at least adequate practical advice: calming treats, pheromone diffusers, collars, music, toys—recommendations that track with standard veterinary wisdom. This thematic consistency suggests the AIs are drawing on broadly similar data—yet their ability to contextualize and humanize that information varies widely.
Verifying AI Advice: Are Recommendations Safe?
While the chatbots’ advice aligned with general best practices—calming sprays like Feliway, anxiety-reducing supplements, and environmental enrichment—users should remember that AI is not a substitute for a qualified vet. Calming treats and pheromone diffusers are widely regarded as low-risk (as confirmed by organizations like the American Veterinary Medical Association and independent reviews), but supplement composition, allergic reactions, and the underlying cause of anxiety may call for a professional diagnosis.

In an age where people increasingly rely on AI for medical and behavioral advice—pets included—verifying critical claims and acting cautiously is essential. Equally, it would be remiss to overlook privacy concerns, especially when revealing health issues, locations, or other sensitive details to these multinational platforms.
User Experience: Beyond Functional Advice
Ease of use, the speed of setup, and the joy of interaction round out the winning formula. Here, the ability to select a preferred voice or retain a transcript after conversation gets high marks. The subtle touches—custom accents, celebrity impersonations, or a visually pleasing transcript—help build trust and engagement. These features are not just icing on the cake; they create accessibility for users with disabilities or those who prefer to review conversations in detail later.

The Unexpected Lessons: Where AIs Still Miss the Mark
As chatbots strive for emotional intelligence, surprising gaps remain. Most notably, only a few AIs genuinely “listened”—pushing the user for more detail, asking relevant follow-up questions, or showing awareness of context beyond a simple Q&A script. In Whitney’s review, only ChatGPT (and occasionally Copilot) repeatedly demonstrated this skill; others were either brisk, overly technical, or struggled to parse the emotional dimension.

This shortfall is more than a stylistic bug. Effective, therapeutic conversation—whether concerning human or animal anxiety—relies on feeling heard, not simply receiving correct facts. Human therapists, support workers, and even the best customer service agents excel precisely because they listen, reflect, validate, and then offer suggestions. Voice-based chatbots are just beginning this journey.
Privacy and Data Security: An Often Overlooked Concern
When pouring your heart out to a voice assistant about a beloved pet, few users are mindful of where the conversation goes, who is processing it, or what data is stored. Yet with cross-platform deployments and vast datasets, privacy becomes a tangible risk. Meta, Google, Microsoft, and OpenAI all collect voice and conversation data, often for “service improvement,” but the extent of anonymization or data retention is rarely clear. Some studies have raised concerns about inadvertent leaks or model “regurgitation” of user-provided data—risks that warrant consumer vigilance.

It’s also worth noting that differences in the approach to privacy and data security may influence user trust across the big platforms. Before launching into a personal conversation—about pet anxiety or anything else—users should familiarize themselves with each chatbot’s privacy policies, voice recording retention, and options for data deletion.
The Next Frontier: Personalized Emotional Support
What ZDNet’s experiment most clearly reveals is that the next leap for voice artificial intelligence won’t be in raw problem-solving, but in deepening the illusion of emotional presence. Adding technical features—richer voice selection, improved transcription, celebrity novelty—may help, but the winner in this space will be the assistant that feels most like a compassionate partner. As algorithms learn to interpret tone, pick up on emotional cues, and develop longer conversational memory, the line between tool and companion will further blur.

Some companies are actively working on this front. ElevenLabs, for instance, has been spotlighted for AI voice assistants aiming to automate tasks with more nuanced conversational skills, hinting at even more personalized, interactive experiences ahead.
Conclusion: Choosing the Right Voice for the Moment
After comparing the conversational prowess, empathy, practical advice, and customization capabilities of five leading voice chatbots, the verdict is clear: all five have their merits, and all five can effectively help worried pet owners explore solutions for anxious animals. However, for users who prioritize feeling heard and engaging in supportive, emotionally intelligent dialogue, OpenAI’s ChatGPT remains the gold standard—at least for now.

Yet, as AI platforms race forward, new updates or approaches may soon produce a chatbot that synthesizes warmth, practicality, and trust better than any before. For the millions turning to these tools not just for information, but for comfort, support, and real human companionship in digital form, the stakes are only increasing.
Ultimately, the best voice assistant is the one that fits your needs—whether that’s efficient Q&A, a soothing presence, or a virtual friend who’ll listen as you fret about Mr. Giggles’ next vet appointment. The world of AI chatbots is no longer one-size-fits-all. Soon, choosing your virtual confidant may be as personal and consequential as choosing your real-life therapist—or vet.
Source: ZDNet I talked to 5 AIs about my cat, Mr. Giggles - and it says a lot about the state of chatbots