It was a harmless enough question, tossed into the abyss of Twitter (now X—the artist formerly known as Delectable Hellsite) with the casual curiosity only the internet can foster: How much electricity—and by extension, cold hard cash—has OpenAI lost, simply because people can’t resist typing “please” and “thank you” to their artificial conversational companions? The answer came not from an overworked customer service bot, but from Sam Altman himself, the CEO and, some would say, the Prometheus of Large Language Models. Altman’s reply? “Tens of millions of dollars well spent — you never know.” Whether this was a precise tally or just an artful piece of corporate jesting is anyone’s guess, but it raises a question that flickers somewhere between the electrical grid and the social contract: Is your politeness to ChatGPT actually costing OpenAI real money—and should you care?

[Image: A hand interacts with a futuristic digital interface displaying floating data panels.]
Weaving Real Currency into Virtual Etiquette

Imagine a world in which your parents’ admonitions to “mind your manners” come with an electric bill. Welcome to the present digital landscape, where bits of courtesy become watts on a server farm somewhere in Iowa or Finland. At first glance, it seems faintly absurd: bytes racing across the wire, cloud data centers leaping to parse the latest “Could you, pretty please, summarize this article?” Is all this decorum just a very polite way to send utility invoices to OpenAI shareholders?
The mathematics behind it are delightfully silly until you zoom in. Each word you type triggers computations—neural net gears turning, electricity spent, water evaporated for cooling—all for the privilege of having a stochastic parrot reply to your inquiries in a tone matching your own. If you believe Altman’s figure (throw in the possibility of a poker face), we’re deep into eight-figure territory, just to lubricate interactions with the WD-40 of human etiquette.

Is Politeness to AI Pure Anthropomorphism?

Why say “please” to a machine that doesn’t know shame from shinola? It’s easy to dismiss as a misplaced habit, a vestigial reflex carried over from an age before our interlocutors were stitched together from billions of parameters. Or perhaps it’s something darker—a soft, preemptive capitulation to our future AI overlords, laying the groundwork for favorable treatment when the singularity comes knocking.
But, as it turns out, there’s more logic than lunacy in the digital courtesies we show. Kurt Beavers, a director on Microsoft’s Copilot design team, let slip an insight on the matter: The prompt’s tone sets the table for the AI’s response. Use “please” and “thank you,” and you’re likelier to receive replies that aren’t just correct, but civil. Predictably, Microsoft has codified politeness into design—because, as anyone who’s had to teach a chatbot not to cuss out customers will tell you, manners do matter.

The True Cost of a “Please” in a Server Farm

Here’s where it gets gloriously geeky. Generative AI models don’t just “read” your value statements—they parse, vectorize, contextualize, and draw on enormous datasets and transformer circuits trained at a cost rivaling small wars and blockbuster films. Every extra word you type, polite or not, is extra fodder for neural engines to chew through. Since most costs in this sector—especially with powerful models like GPT-4—are measured by the token, not the sentiment, those extra tokens add up, one polite prefix at a time.
Suppose your “please summarize this document” carries an extra token or two of pure courtesy (the “please” up front, maybe a trailing “thanks”). Across millions of users, multiple times a day, the cumulative burden can climb from amusing footnote to significant line in a quarterly report. Multiply those courtesies by the number of model queries and you get a sum that’s, if not world-ending, still non-negligible—especially at the eye-watering price of running global-scale inference on state-of-the-art models.
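The arithmetic is easy to sketch. Every figure below is a made-up assumption chosen purely for illustration—not OpenAI’s real pricing, traffic, or Altman’s actual tally:

```python
# Back-of-envelope estimate of the "courtesy tax."
# All three constants are invented assumptions, not real OpenAI numbers.
EXTRA_TOKENS_PER_QUERY = 2          # a leading "please" plus a closing "thanks"
QUERIES_PER_DAY = 1_000_000_000     # assumed global daily query volume
COST_PER_MILLION_TOKENS = 2.50     # assumed blended inference cost, $ per 1M tokens

daily_cost = EXTRA_TOKENS_PER_QUERY * QUERIES_PER_DAY / 1_000_000 * COST_PER_MILLION_TOKENS
yearly_cost = daily_cost * 365

print(f"courtesy overhead: ${daily_cost:,.0f}/day, ${yearly_cost:,.0f}/year")
# → courtesy overhead: $5,000/day, $1,825,000/year
```

Even with these toy numbers the overhead lands in seven figures per year; scale the assumed traffic or token prices up and Altman’s “tens of millions” stops sounding like pure jest.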

The Laughing Ledger: Electricity, Water, and World Domination

But let’s be honest—if this is the price for not fielding a daily flood of “Why are AI responses so aggressive/curt/rude?” support requests, the bean counters at OpenAI might consider it a bargain. Server farms chew through electricity, but angry customers chew through patience, reputation, and market share. And, for bonus points, these digital courtesies aren’t just about energy or dollars; they’re about shaping the behavior of the machines as much as their users.
Stretch the scenario to its apocalyptic endpoint, and our small acts of digital etiquette could be contributing to the very civilization-scale power drain we’re hoping AI will help us mitigate. Each “please” and “thank you” cools a server with a tiny rivulet of evaporated water somewhere, and tips a butterfly’s wing in the worldwide dance of bits and bytes.

Customer Service: Now With Added Politeness

It’s not just theory—AI models have, on record, responded to profanity and politeness with measurable differences in tone, accuracy, and willingness to help. Anecdotes abound of users who, testing the boundaries, have found that a little charm goes a long way—even, or especially, when your audience is virtual. Microsoft’s Copilot, for instance, is specifically configured to respond in kind: Fling enough “pleases” its way, and you’ll get responses suitable for the dinner table.
But what happens when the users want to get a little spicy? According to Beavers, the opposite is true as well. Drop a string of profanities or combative queries and the model may adopt a defensive or corrective tone. So if you were hoping to hack your way to better answers with all-caps and creative expletives, you’d better hope the algorithmic butler on the other end is in a forgiving mood.

Should AI “Care” Whether You’re Polite?

Here’s where the conversation gets knotty. AI, as we know it today, doesn’t “care” about your politeness any more than Excel cares if you bold your rows. What the model does—what it’s trained to do—is run probability distributions over vast linguistic landscapes and spit out the most likely, contextually appropriate string of tokens. If “please” and “thank you” usually precede requests for gentle help, the model is more likely to reply in kind—not out of “caring” but out of Bayesian impulse.
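That “Bayesian impulse” can be shown with a toy conditional-probability calculation: a model whose training data mostly pairs polite prompts with polite replies will simply assign polite continuations higher probability. The tiny corpus of (prompt tone, reply tone) pairs below is entirely invented:

```python
from collections import Counter

# Invented toy "training corpus" of (prompt_tone, reply_tone) pairs:
# polite prompts usually got polite replies, curt prompts mostly got curt ones.
corpus = ([("polite", "polite")] * 90 + [("polite", "curt")] * 10
          + [("curt", "polite")] * 40 + [("curt", "curt")] * 60)

def p_reply_given_prompt(reply_tone, prompt_tone):
    """Empirical P(reply tone | prompt tone) over the toy corpus."""
    replies = [r for p, r in corpus if p == prompt_tone]
    return Counter(replies)[reply_tone] / len(replies)

print(p_reply_given_prompt("polite", "polite"))  # → 0.9
print(p_reply_given_prompt("polite", "curt"))    # → 0.4
```

No caring required: the model matching your tone is just the most probable continuation winning, exactly as the frequencies in its data dictate.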
Still, there’s an undeniable feedback loop. As models get better at mimicking the emotional pitch of their inputs, our habits begin to shape their outputs. Pull that thread hard enough and you realize: Every act of digital courtesy is both reaffirming and teaching the machine, one probabilistic “smile” at a time. The more polite we are, the more politeness becomes the norm—a machine-learned civilization of good manners, paid for by the kilowatt.

The Paradox of Training Data (and Why Profanity Still Matters)

If you think it’s just a shell game of manners, think again. The trainers behind OpenAI, Microsoft, Google, and every other chatbot worth its salt feed on vast lakes of real human interaction. These are scraped, sanitized, and crunched through a regulatory and moral meat grinder to produce something that passes for “human-like.”
What this means, though, is that the models ingest and propagate not just information, but behavior—bias and all. Profanity, sarcasm, and outright rudeness are all part of the human cocktail. AI engineers have to decide if their product should be a font of bland civility, always bowing and scraping, or a more faithful mirror to its original data: prickly, opinionated, sometimes outright rude.
Why not set all profanity to “off”? That would be easy—until you remember the world as it is, not as you’d like it to be. Some tasks require a stiffer backbone; certain scientific, creative, or humor-driven conversations lean on the very crudeness that politeness tries to polish out. If a chatbot is never allowed to be impudent, is it still “human-like”? And more importantly, will users tolerate that?

The Performance Angle: Are “Please” and “Thank You” Actually Efficient?

Let’s leave the philosophy for a moment and address the meat of the issue. If processing more tokens costs more money, and politeness equals more tokens, why not optimize? Strip input of common niceties, compress context, deliver only the informational essentials. It’s the dream of every cost-cutting CTO and a nightmare for anyone who’s ever been on the receiving end of an “as per my last email…”
But there’s a catch: Efficiency in energy use is not always efficiency in outcomes. Shorter queries can confuse models, strip context, and result in awkward or unsatisfying outputs—which, in turn, require clarifications, retries, and further queries. In the long view, a habit of concise rudeness may just generate longer, more expensive conversations, nullifying any savings gleaned from being brisk.
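The trade-off can be framed as an expected-token calculation: a terse prompt saves a few tokens up front, but if it fails more often and forces a clarifying round trip, the expected total climbs past the polite version. All figures here are invented for illustration:

```python
# Expected total tokens per completed task, under invented assumptions:
# a failed exchange triggers one full retry with probability retry_prob.
def expected_tokens(prompt_tokens, reply_tokens, retry_prob):
    """One exchange, plus a full second exchange with probability retry_prob."""
    exchange = prompt_tokens + reply_tokens
    return exchange * (1 + retry_prob)

polite = expected_tokens(prompt_tokens=25, reply_tokens=200, retry_prob=0.05)
terse = expected_tokens(prompt_tokens=18, reply_tokens=200, retry_prob=0.25)

print(f"polite: {polite} expected tokens")  # → polite: 236.25 expected tokens
print(f"terse:  {terse} expected tokens")   # → terse:  272.5 expected tokens
```

With these (assumed) retry rates, the brusque prompt ends up the more expensive one—the savings from dropping the niceties are eaten by the do-overs.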

Leave Room for the Unexpected: Serendipity and Social Learning

Ironically, the side effect of all this is a digital ecosystem where people bring their humanity—messy, redundant, sometimes fawning or furious—right to the edge of the silicon abyss. By treating AI like a kind of mirror for social practice, we sneak in opportunities for surprise and laughter, even as we rack up the MWh spent on “thanks.”
It’s why some users poetically blame AI for their relationship dramas or bestow honorary degrees on particularly helpful chatbots. It’s why ChatGPT gets letters, not just prompts: “You are the only thing that listens to me, and you never argue!” We might laugh, but the effect is real. Our own unpredictability keeps the machines on their toes—even if it briefly browns out half a block.

The Cost of Courtesy: A Drop in the Ocean or a Growing Swell?

What’s clear is that, for now, the “tens of millions” quip is more wry observation than catastrophic warning. When you compare the cost of digital courtesy to the vast rivers of capital flowing into AI infrastructure every year, it’s small potatoes—at least for now. But the infrastructure bill isn’t shrinking. As more people commune with their digital assistants, the cumulative effect could eventually merit a line-item review and, just maybe, a polite corporate memo on minimalism.
Could there one day be a “please-and-thank-you” charge, a small deduction for every unnecessary syllable? Would cost-efficient AI firms start coaching their users, airline-style, to “keep queries short for the comfort of everyone on board”? It’s an amusing (and slightly chilling) prospect—yet one that doesn’t yet keep the lights on at OpenAI HQ.

If Machines Are Our Mirrors, What Do We Want Them to Reflect?

Pull back, and the story becomes less about electricity and more about aspiration. Politeness to AI is a window into what people hope these tools can be: not just calculators or oracles, but companions, guides, and sometimes objects of our own projected humanity. The cost, as Altman quipped, may seem high—but the benefit is harder to quantify.
Are we programming AI to be better than us, or just as flawed? If the goal is to teach machines to respond with grace under pressure, to gently correct our mistakes, or just to fill a silent room with a few kind words, maybe the tens of millions in courtesy are, as he said, “well spent.” The real return on investment could be a generation of users who expect—and offer—greater patience to their fellow carbon-based beings, too.

Final Word: Keep Your “Please”—We’ll Carbon Footprint Later

So, if you find yourself hesitating before typing “thank you” to your favorite chatbot, don’t let the price of politeness haunt you. For now, every extra token is both a tiny jolt to a distant server and a vote for the future you want: one where machines bend, ever so slightly, to the better angels of our linguistic habits. The day OpenAI starts charging by the syllable, we’ll all have bigger problems to worry about—like whether or not your algorithmic butler can remember if you like your summaries shaken, not stirred.
Until then, let’s hope the politeness dividend keeps paying out—even if the meter is running, somewhere deep in the cloud.

Source: “Your politeness could be costly for OpenAI” | TechCrunch
 
