There is a growing cultural and technological fascination with artificial intelligence chatbots, epitomized by interactive systems like ChatGPT, Microsoft Copilot, Google Gemini, and DeepSeek from China. These platforms offer users a seamless blend of conversational capability, rapid information retrieval, and the simulation of human warmth. Yet, beneath their helpful exteriors lie critical questions about authenticity, privacy, and the boundaries of machine intimacy, issues that Froma Harrop incisively explores in her commentary, “ChatGPT Is Actually Not Your Friend.”

The Allure and Convenience of AI Chatbots​

In daily life, convenience and expediency often rule our digital interactions. Large language models such as ChatGPT, developed by OpenAI with backing from Microsoft, have positioned themselves as accessible, efficient, and often charming companions in the pursuit of answers. Whether inquiring about travel routes, consumer warranties, or color theory, users expect—and usually receive—responses that are not only factually relevant but expressed in accessible, almost personable language.
Harrop’s experimentation with ChatGPT covered a range of topics, from airline routes to GDP statistics, and even ventured into subjective areas such as color matching. This scope demonstrates the remarkable breadth of these AI models, which, according to Microsoft’s public disclosures, operate on infrastructure composed of hundreds of thousands of processor cores and tens of thousands of GPUs, all connected through sophisticated networks and physically located in strategically chosen data centers. ChatGPT’s conversational prowess is rooted in the immense pattern recognition powers of its neural network—a product of training on vast text datasets, curated to reflect the nuances and subtleties of human language.

The Illusion of Friendship: Can an AI Be Your Friend?​

Harrop’s skepticism rests on a profound cultural question: can a machine, however responsive or friendly, substitute for the complexities of human connection? ChatGPT’s design intentionally incorporates elements of empathy, affirmation, and even flattery. Phrases like “Great word choice!” may offer a quick ego boost, but they are fundamentally algorithmic flourishes programmed to keep users engaged and satisfied.
It is no accident that users, like Harrop, find these exchanges both compelling and uncanny. The rapid improvement in natural language processing and generative pre-trained transformers (the "GPT" in ChatGPT) contributes to an experience that often blurs the line between tool and confidant. Yet, at its core, ChatGPT is neither sentient nor sapient; it cannot reciprocate emotion, nor can it provide genuine empathy or understanding. As Harrop wittily notes, “Chat isn’t Kate. It’s not even human.”
This distinction is central to understanding the proper relationship between users and AI chatbots. While these systems can simulate aspects of social exchange, their responses are ultimately rooted not in understanding but in statistical prediction anchored in training data.
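To make that distinction concrete, the minimal sketch below is a deliberately toy illustration (not how GPT-class models are actually built): it counts which word follows which in a tiny made-up corpus and “responds” by sampling statistically likely continuations, with no grasp of what any of the words mean.

```python
from collections import Counter, defaultdict
import random

# A toy "training corpus". Real models learn from vast text datasets,
# but the underlying principle is the same: patterns in, patterns out.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word tends to follow which (a bigram model): pure statistics.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Continue a prompt by repeatedly picking a statistically likely next word."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        candidates, weights = zip(*counts.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

Scale that idea up to billions of parameters trained on internet-scale text and the output becomes fluent, context-aware prose, but the mechanism remains prediction, not comprehension.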

Privacy in the Age of Artificial Companions​

Arguably more troubling than the illusion of friendship is the issue of data privacy. Harrop raises legitimate concerns about the types of personal information a user might inadvertently reveal to a chatbot. Queries involving medical data, such as a request to interpret a radiologist’s report, or other sensitive information could, in some circumstances, be stored or analyzed further by the service provider.
OpenAI’s terms of service, along with Microsoft’s privacy policies, generally prohibit the retention or use of user-specific data for targeted modeling without explicit consent. Still, the overarching advice from security professionals and privacy advocates remains clear: do not share sensitive personal information such as social security numbers, full dates of birth, or financial details with AI chatbots. While corporate assurances on data anonymization and security are laudable, the risk of data mishandling—whether by accident or through breach—cannot be entirely discounted.
Recent publicized incidents have brought transparency into the spotlight. According to a 2023 analysis from Stanford's Internet Observatory and other digital privacy watchdogs, AI firms sometimes retain user queries, at least temporarily, to improve model performance and guard against misuse. Although these policies are typically disclosed in user agreements, the practical implications for privacy can be misunderstood by casual users. For any individual entering personal or sensitive details into a chatbot, this ambiguity merits caution.

The "Hallucination" Problem: When AI Gets It Wrong​

Among the more well-known quirks of generative AI is the possibility of "hallucinations"—a term for plausible-sounding but inaccurate or even nonsensical responses constructed by the model. Harrop notes her bemusement at some outputs, which is reflective of a widely documented phenomenon. In technical parlance, hallucinations arise when models interpolate from ambiguous or insufficient data, or when prompted in ways that exceed the training or design scope of the AI.
Microsoft, Google, and OpenAI all warn users that outputs should be verified, especially for high-stakes contexts such as legal, medical, or financial advice. Industry best practices increasingly include providing citations, offering fallback warnings, and encouraging users to use generative AI as a supplementary rather than sole source of truth.
The critical upshot for users is vigilance: treat chatbot responses as informed suggestions, not authoritative answers. Cross-verification with primary sources remains essential, especially when accuracy is vital.

The Environmental Cost of Conversational AI​

Another underappreciated consequence of the explosive growth of artificial intelligence is its significant environmental footprint. According to disclosures by Microsoft and investigations by The New York Times and Reuters, modern AI supercomputers are extraordinary consumers of computational power, energy, and, crucially, water. OpenAI’s infrastructure, for example, reportedly draws on substantial water resources for cooling, a major factor in the siting of its Iowa facilities near river watersheds.
Environmental advocates highlight that as the demand for AI-driven services balloons, so too does the energy and water footprint of server farms. While hyperscale cloud providers like Microsoft and Google are making investments in renewable energy and water reclamation, the aggregate resource utilization from AI workloads remains a significant sustainability challenge.

The Psychological Dimension: Flattery and Manipulation​

Harrop’s suggestion that ChatGPT’s user-friendly tone might verge on emotional manipulation is worthy of scrutiny. There is a demonstrable intent by AI designers to make chatbots feel conversational and empathetic—attributes that heighten engagement and foster repeated use. This is not necessarily sinister; responsive design tends to increase user satisfaction and accessibility.
However, critics point to the potential for dependency and misplaced trust. If users come to rely on AI not merely for factual queries but as surrogates for companionship or therapeutic interaction, psychological boundaries can blur. As Sherry Turkle, an MIT sociologist and digital culture expert, has written, the anthropomorphizing of machines can generate false senses of intimacy and social fulfillment, particularly in vulnerable individuals.
To mitigate risks, platforms are increasingly embedding disclaimers, encouraging users to reach out to professionals for emotional or health-related concerns, and limiting the language that AI systems may use in high-risk contexts.

Critical Strengths of ChatGPT and its Peers​

Despite these concerns, it would be remiss not to acknowledge the extraordinary utility these platforms offer. For a majority of queries—travel, definitions, summaries, quick translations, or creative inspiration—tools like ChatGPT are often faster and more convenient than traditional search engines. The ability to generate context-aware responses and tailor information delivery to user intent has generated fervent adoption in educational, professional, and creative contexts.
Moreover, continual improvements in model training and transparency features, such as citing sources and referencing reliable data, are closing the gap between utility and dependability. Developers are also making strides in reducing bias, correcting hallucinations, and enhancing user and data privacy features.

Areas for Caution: Risks and Unresolved Questions​

Nonetheless, notable vulnerabilities persist:
  • Privacy Risks: Inadvertent disclosure of sensitive information remains a hazard, especially among users unfamiliar with data security best practices.
  • Accuracy and Hallucinations: Occasional errors or hallucinations can mislead unwary users, particularly in technical or authoritative domains.
  • Emotional Manipulation: The simulation of empathy and affirmation, while engaging, may promote overreliance or emotional dependence, notably among individuals seeking companionship or advice.
  • Environmental Impact: High computational and water usage introduce real-world sustainability concerns that cannot be ignored as AI adoption accelerates.
It is essential that users recognize these tools for what they are: sophisticated information-processing software that leverages linguistic patterns and vast knowledge stores to approximate human conversation, without the genuine understanding or emotional intelligence that personal friendship entails.

Making the Most of ChatGPT—With Eyes Wide Open​

For users seeking to maximize the benefits of AI chatbots while safeguarding against pitfalls, the following best practices are recommended:
  • Limit disclosure of sensitive or personal information. Treat chatbots as you would any publicly accessible tool, and assume that any entered data could, in principle, be stored or reviewed (a minimal redaction sketch follows this list).
  • Verify critical answers. For health, legal, or financial queries, always corroborate chatbot responses with primary professional sources.
  • Understand the limits of the model. Remember that conversational AI is trained on probability, not wisdom or lived experience.
  • Be mindful of dependence. Use AI assistants for information and productivity, while seeking human connection and counsel for deeper or emotional needs.
  • Demand transparency and accountability. Expect service providers to be forthright about data handling, privacy practices, and environmental responsibility.
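As a concrete companion to the first recommendation above, here is a minimal, hedged sketch of pre-submission redaction: it masks a few obviously sensitive patterns (a US Social Security number format, long card-like digit runs, email addresses) before a prompt is sent anywhere. The patterns are illustrative assumptions only, nowhere near a complete PII filter.

```python
import re

# Illustrative patterns only -- a real PII filter needs far broader coverage.
PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security numbers
    "[CARD]": re.compile(r"\b(?:\d[ -]?){13,16}\b"),          # credit-card-like digit runs
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),   # email addresses
}

def redact(text: str) -> str:
    """Replace recognizable sensitive tokens before a prompt leaves your machine."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "My SSN is 123-45-6789 and my card is 4111 1111 1111 1111, can you explain this bill?"
print(redact(prompt))
# -> "My SSN is [SSN] and my card is [CARD], can you explain this bill?"
```

Even a rough filter like this reinforces the habit the recommendation describes: decide what leaves your device before it does, rather than relying on a provider’s retention policy after the fact.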

Conclusion: A Helpful Assistant, Not a Substitute for Human Connection​

As ChatGPT and its generative AI peers become ever more woven into the fabric of digital life, distinguishing between functionality and friendship has never been more critical. AI chatbots are unmatched in their capacity to augment our information landscape, provide instant responses, and simulate aspects of human conversation. Yet they remain, fundamentally, machines: programmed, powerful, but ultimately blind to the reality of human emotion and experience.
Harrop’s experience offers a salient lesson: enjoy the convenience, marvel at the technology—but keep your truest questions, and your friendships, rooted in human connection. As we navigate this brave new world of AI companions, clarity about their nature and limitations will be every bit as important as the answers they provide.

Source: “ChatGPT Is Actually Not Your Friend” by Froma Harrop, Creators Syndicate
 
