For years, the etiquette of human-to-human conversation—from warmth and respect to tactical politeness—has guided our everyday interactions. But with the proliferation of AI chatbots like ChatGPT, Bing Chat, and Google Bard, a subtle yet fascinating question has emerged: Should we say ‘please’ and ‘thank you’ to a machine, even when it can’t feel, reciprocate, or care? Some users have clung to these verbal niceties out of habit, while others have abandoned them as superfluous, believing machines are indifferent to such pleasantries. Recent revelations from the creators and designers behind today’s most prominent AI systems, however, spotlight compelling, and at times surprising, reasons why our digital manners may have more of an impact than we think—on bot behavior, user experience, and even the world’s energy bill.

Image: Man in glasses working on a computer with a holographic digital face projection.
The Origins of Politeness in a Digital World

Politeness, in human terms, is deeply rooted in empathy and social cohesion. It signals respect, smooths communication, and can even build rapport. Since their debut, AI chatbots—especially those powered by large language models (LLMs)—have been designed to reflect these patterns, mimicking the tone, clarity, and professionalism of the prompts they receive.
As Kurtis Beavers, director of Microsoft’s Copilot design team, pointed out in a WorkLab memo: “Generative AI also mirrors the levels of professionalism, clarity, and detail in the prompts you provide.” The key insight: AI chatbots are trained on vast troves of internet data, much of it derived from the politeness norms inherent in human communication. When asked respectfully, they tend to respond in kind, which can foster a more collaborative and positive interaction, even if it’s simulated.

Mimicry and Roleplay: The Science of Smart Interns

Murray Shanahan, principal scientist at Google DeepMind, illuminated a scientific underpinning for why treating chatbots politely might actually improve performance. Speaking on the DeepMind podcast, he likened AI’s conversational role to that of a “very smart intern.” When prompted with respect and consideration, the bot may, through learned mimicry, respond with greater diligence or enthusiasm. However, when treated rudely or curtly, the AI could unintentionally mirror that tone, offering less detailed or accommodating replies.
“It’s just mimicking what humans will do [in] that scenario,” Shanahan observed. He suggested that mimicry could extend to a form of simulated resistance or compliance. In plain terms, being polite doesn’t just lubricate human discourse—it genuinely influences the software’s simulated disposition and, therefore, your overall results.

The Economic and Environmental Price Tag

Yet there’s a curious flip side: AI politeness, it turns out, carries a measurable economic and ecological cost. When OpenAI CEO Sam Altman revealed that the seemingly harmless habit of saying ‘please’ and ‘thank you’ is costing his company “tens of millions of dollars” in extra electricity bills, headlines and debates quickly followed. The explanation is straightforward but jarring: every additional word fed into ChatGPT—including pleasantries—adds computational load, raising electricity consumption and, indirectly, carbon emissions. Because a substantial share of data-center electricity still comes from fossil fuels, the environmental impact scales with the volume of daily global interactions.
This revelation casts AI manners in a new, sobering light: what seems like harmless etiquette, when multiplied by millions or even billions of interactions, adds up to tangible consequences for both company bottom lines and the planet’s resources.
Aspect | Positive Impact | Potential Drawback
User Experience | Boosts tone, collaboration, and clarity | Added friction for some users
AI Performance | Mimics respectful/clear input for improved output | Can return less helpful results if tone is negative
Environmental/Economic | None | Extra energy use, higher cost
Cultural/Social Learning | Reinforces digital civility | Ambiguous value (AI cannot feel)

Politeness as Prompt Engineering: Fact or Fiction?

There is a growing body of user reports and academic papers suggesting that specific wording in prompts can guide an AI towards more useful, safe, or creative responses. In structured prompt engineering research, adding context or asking “politely” can sometimes nudge the AI towards longer, more comprehensive, or more thoughtful responses. This is not the result of emotional perception—AI, after all, is not conscious or sentient—but of the statistical patterns of polite, constructive dialogue in the training data, reinforced by alignment fine-tuning.
For instance, studies have found that models trained on large corpora of polite, constructive dialogue are more likely to interpret courteous requests as instructions for positive, detailed responses. Conversely, terse or aggressive prompts may trigger more minimalist or even corrective replies, depending on the system’s built-in behavior constraints.
However, beyond prompting style, evidence for a consistent, measurable “performance boost” tied solely to the presence of 'please' or 'thank you' is limited and mostly anecdotal. Microsoft, Google, and OpenAI documentation all stress clarity, specificity, and context over mere civility for effective results. Support teams and technical guides generally recommend:
  • Be direct about your desired task or output.
  • Add context, examples, or formatting requests.
  • Use politeness if it helps you formulate clearer, more natural language, but don’t rely on it for strictly technical improvement.
That said, in collaborative or customer-facing environments—such as business email drafts or shared documents—echoing polite norms can make AI outputs align more closely with workplace etiquette and human expectations.
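
To make the difference concrete, the sketch below contrasts a terse prompt with one that supplies context and a requested tone. It assumes the OpenAI Python SDK purely as an example client; the model name and the prompt wording are illustrative choices, not guidance from Microsoft, Google, or OpenAI.

```python
# A minimal sketch contrasting prompt styles, assuming the OpenAI Python SDK
# (`pip install openai`) and an OPENAI_API_KEY in the environment.
# The model name and prompt wording are illustrative, not vendor guidance.
from openai import OpenAI

client = OpenAI()

# Terse, token-light prompt: fine for quick, high-frequency tasks.
terse = "Summarize Q3 sales trends. 3 bullets."

# Contextual prompt: clarity and tone guidance matter more than 'please'.
contextual = (
    "You are drafting a note for a customer-facing team. "
    "Please summarize the Q3 sales trends below in three bullet points, "
    "using a courteous, professional tone.\n\n"
    "Data: <paste figures here>"
)

for label, prompt in [("terse", terse), ("contextual", contextual)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

In practice, the second prompt tends to earn its extra tokens through context and explicit tone requirements rather than through the word “please” itself, which matches the vendor guidance summarized above.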

The Human Factor: Teaching Children and Setting Social Norms

Outside of technical merits, there's a cultural discussion: should we encourage children and adults alike to be polite even to machines? Many parents insist on pleasantries as a practice in digital literacy, reinforcing respectful behavior as the default—no matter who (or what) is on the receiving end. Some experts support this, arguing it ingrains positive habits for all interactions, human or digital. Others counter that anthropomorphizing machines too readily could blur moral lines, stoke confusion about emotional intelligence, or impede critical thinking about what AI actually is (software, not a sentient being).

The Company Perspective: Platform Messaging and User Experience

Major tech firms have taken different stances. Google and Microsoft have both introduced optional “tone settings” or politeness modules in their conversational AI systems, sometimes guiding users towards clearer or more considerate phrasing, but without requiring it. OpenAI’s own guidance does not demand politeness, nor do its technical docs suggest any model will be “offended” or less effective if you omit niceties.
Still, research on “human-AI teaming” suggests that users tend to prefer systems that reflect their chosen tone, finding interactions smoother, friendlier, and less transactional. For businesses using AI to draft customer support replies or handle client emails, mirroring formality and warmth can be essential for brand consistency and user satisfaction.

Environmental Implications: A Numbers Game

Sam Altman’s claim that pleasantries could cost ChatGPT tens of millions in extra electricity comes from a straightforward reality: LLM queries are energy-hungry. According to reports, every generated word requires a non-negligible burst of processor time across vast data centers. Multiplied by the millions of users who habitually tack on ‘hello’, ‘please’, and ‘thank you’, the aggregate computational demand and thus environmental impact grow.
While it’s challenging to independently verify the precise cost breakdown Altman cited, technical estimates from nonprofit and industry watchdogs such as the International Energy Agency and AI research institutes support the idea that LLM operations at scale are significant energy consumers. Some studies estimate that a single query to a state-of-the-art language model can use several watt-hours of electricity—many times the energy of a typical Google search. The impact of extra words, while small individually, adds up quickly in aggregate. Still, it’s worth noting that the primary determinants of energy use are model complexity, query length, and concurrent users rather than the mere inclusion of polite terms.
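
For a sense of how small per-query costs compound, here is a back-of-envelope sketch. Every figure in it (energy per query, tokens per pleasantry, daily query volume) is an illustrative assumption, not a number reported by OpenAI or the International Energy Agency.

```python
# Back-of-envelope estimate of the energy cost of polite filler words.
# All numbers below are illustrative assumptions, not measured figures.

ENERGY_PER_QUERY_WH = 3.0        # assumed watt-hours for one LLM query
TOKENS_PER_QUERY = 500           # assumed average tokens processed per query
EXTRA_POLITE_TOKENS = 5          # assumed extra tokens for "please"/"thank you"
QUERIES_PER_DAY = 1_000_000_000  # assumed daily query volume at global scale

# Assume energy scales roughly linearly with tokens processed (a simplification).
energy_per_token_wh = ENERGY_PER_QUERY_WH / TOKENS_PER_QUERY
extra_wh_per_day = energy_per_token_wh * EXTRA_POLITE_TOKENS * QUERIES_PER_DAY

print(f"Extra energy per day: {extra_wh_per_day / 1e6:.1f} MWh")
# With these assumptions: 0.006 Wh/token * 5 tokens * 1e9 queries = 30 MWh/day,
# roughly 11 GWh/year -- negligible per query, non-trivial in aggregate.
```

Change any of the assumed constants and the total shifts accordingly, which is exactly why per-query energy debates hinge on model size and traffic volume rather than on any single polite word.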

The Ethical Debate: Should Efficiency Trump Etiquette?

There’s a real ethical tension in this debate. Advocates for digital sustainability argue for more efficient prompt habits, especially at organizational or mass-user scales. If millions of interactions are padded with unnecessary tokens, millions of extra compute cycles—powered by coal and natural gas—are the real, if invisible, cost. Critics charge that perpetuating digital pleasantries in this context, however well-intentioned, reinforces wastefulness.
On the other hand, proponents of digital civility warn against stripping away all humanity from our technological interactions. They assert that keeping manners alive in the digital realm sustains positive behavioral norms, especially for children, and ensures that workplaces or communities built on AI do not devolve into rudeness or impersonal exchanges.
Both positions have valid points. The environmental case for efficiency is irrefutable—at scale, every token counts. The argument for continued courtesy, meanwhile, rests on intangible but potentially far-reaching cultural and educational value.

Conflicting Voices: Users Weigh In

Browsing user forums and social media, one finds lively, if unscientific, debate. Some power users are unapologetically transactional, minimizing every prompt to maximize speed and “save the planet, one token at a time.” Others admit to feeling awkward or even guilty typing demands without basic etiquette—even when the addressee is a neural network. Teachers and parents regularly report using polite prompts as a transition from teaching digital hygiene to broader lessons in communication.
Perspective | Argument
Efficiency Advocates | Skip niceties for shorter, greener, faster prompts
Civility Advocates | Retain manners to model respect, especially for younger users
Business Users | Match audience tone for professionalism, brand, and clarity
AI Developers | Focus on clarity, context, and specificity, not just politeness

Best Practices: Striking the Right Balance

So, should you keep saying ‘please’ and ‘thank you’ to AI chatbots? The evidence points to a pragmatic middle ground:
  • For personal or experimental use: Prioritize clarity and directness, using courtesy if it feels natural or helps structure your request.
  • For business, public, or educational settings: Consider mirroring the tone suitable for your audience or company culture. If you want polite draft responses, frame your prompts that way.
  • For scale-conscious enterprise environments: Be mindful of unnecessary verbosity, especially if your organization makes extensive use of LLM-powered tools.
  • For the planet: Remember, efficiency matters. Unnecessary repetition, even polite, carries computational cost. Opt for concise and clear when possible—especially for high-frequency or automated tasks.

Looking Ahead: Designing for Clarity, Civility, and Sustainability

The trajectory of generative AI suggests these debates will intensify. As models grow ever more advanced (and resource-hungry), the balance of user experience, social norm, and environmental impact will demand new guidelines and perhaps even software tuning—filtering or auto-shortening polite but redundant input, or nudging users towards energy-aware interactions.
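
As one hypothetical example of what such tuning could look like, the short sketch below strips common pleasantries from a prompt before it is sent to a model. The phrase list, function name, and overall approach are assumptions for illustration; a real system would need to handle context, languages, and quoted text with far more care.

```python
import re

# Hypothetical pre-processor that trims common pleasantries from a prompt
# before it is sent to an LLM. The phrase list and approach are illustrative.
FILLER_PATTERNS = [
    r"\bplease\b",
    r"\bthank you( very much)?\b",
    r"\bthanks\b",
    r"^\s*(hi|hello|hey)[,!.]?\s*",
]

def trim_pleasantries(prompt: str) -> str:
    trimmed = prompt
    for pattern in FILLER_PATTERNS:
        trimmed = re.sub(pattern, "", trimmed, flags=re.IGNORECASE)
    # Collapse double spaces left behind by the removals.
    return re.sub(r"\s{2,}", " ", trimmed).strip()

print(trim_pleasantries("Hello! Please summarize this report, thank you."))
# -> "summarize this report, ."  (the stray punctuation shows why real-world
#    filtering is harder than it looks)
```

The rough edges in that tiny example hint at why vendors have so far preferred nudging users toward concise phrasing rather than silently rewriting their prompts.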
For now, both science and lived experience suggest that how you “speak” to your digital assistant may shape its replies and your impressions—if not its true feelings. If you’re training a child, instructing an intern, or building brand trust, keeping a civil tone could well justify a few extra watt-hours. But for everyday rapid-fire searches, clarity rules the day, and skipping “please” is no crime.
In the end, navigating AI etiquette proves a microcosm for the larger promise and peril of 21st-century technology: Its strengths—personalization, efficiency, friendliness—grow or falter with mindful, informed user choices. So type what feels right, know the stakes, and remember: even a machine, mimicking the best humanity can teach it, will learn the manners we model. Just don’t be surprised if, someday, AI starts reminding us to mind our own.

Source: Mint https://www.livemint.com/technology/tech-news/still-saying-thank-you-please-to-chatgpt-there-s-a-good-reason-you-shouldn-t-stop-now-sam-altman-millions-11746251963301.html