Thursday, March 5, 2026

Do You Have to Be Polite to AI?

When a group of researchers decided to test whether "positive thinking" made AI chatbots more accurate, it led to some surprising results. As they asked various chatbots questions, they tried calling the AIs "smart", encouraged them to think carefully and even ended their questions with "This will be fun!" None of it made a consistent difference, but one technique stood out. When they made an artificial intelligence pretend it was on Star Trek, it got better at basic maths. Beam me up, I guess.

People have all sorts of bizarre strategies to get better responses from large language models (LLMs), the AI technology behind tools like ChatGPT. Some swear AI does better if you threaten it, others think chatbots are more cooperative if you're polite and some people ask the robots to role-play as experts in whatever subject they're working on. The list goes on. It's part of the mythology around "prompt engineering" or "context engineering" – different ways to construct instructions to make AI deliver better results. Here's the thing: experts tell me that a lot of accepted wisdom about prompting AI simply doesn't work. In some cases, it could even be dangerous. But the way you talk to an AI does matter, and some techniques really will make a difference. [...]

How to talk to your chatbot

There are some very real problems with AI, from ethical concerns to the environmental impact it can have. Some people refuse to engage with it altogether. But if you are going to use LLMs, learning to get what you want faster and more efficiently will be better for you and, probably, for the energy consumed in the process. These tips will get you started.

Ask for multiple options

"The first thing I tell people is don't ask for one answer, ask for three or five," White says. If you want help with a piece of writing, for example, tell the AI to give you multiple options that vary in some important way. "This forces the human being to re-engage and think about what they like and why."

Give examples

Provide the AI with a sample whenever possible. "For instance, I see people ask an LLM to write an email and then get frustrated because they're like 'that doesn't sound like me at all'," White says. The natural impulse is to respond with a list of instructions, "do this" and "don't do that". White says it's much more effective to say "here are 10 emails I've sent in the past, use my writing style".
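In code, White's advice amounts to a few-shot prompt: prepend real writing samples instead of abstract instructions. A hedged sketch (the separator and phrasing are assumptions, not a prescribed format):

```python
def style_prompt(past_emails: list[str], request: str) -> str:
    """Prepend real writing samples so the model imitates your style,
    rather than listing abstract do's and don'ts."""
    samples = "\n\n---\n\n".join(past_emails)
    return (
        "Here are emails I've sent in the past. Use my writing style.\n\n"
        f"{samples}\n\n---\n\n"
        f"Now: {request}"
    )

prompt = style_prompt(
    ["Hi Sam, quick one: can we push the call to 3pm? Cheers, Alex",
     "Morning! Draft attached, tell me what's off. Alex"],
    "Write an email declining a meeting invitation.",
)
```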

Ask for an interview

"Let's say you want to generate a job description. Tell the AI 'I want you to ask me questions, one at a time, until you've gathered enough information to write a compelling job listing," White says. "By doing it one question at a time, it can adapt to your answers."

Be careful about role-playing

"There used to be this thought that if you told the AI it was a maths professor, for example, it would actually have higher accuracy when answering maths questions," says Sander Schulhoff, an entrepreneur and researcher who helped popularise the idea of prompt engineering. But when you're looking for information or asking questions with one right answer, Schulhoff and others say role-playing can make AI models less accurate.

"That can actually be dangerous," Battle says. "You're actually encouraging hallucination because you're telling it it's an expert, and it should trust its internal parametric knowledge." Essentially, it can make the AI act too confident.

But for wide open tasks with no single answer, role-playing is effective (think advice, brainstorming and creative or exploratory problem solving). If you're nervous about job interviews, telling a chatbot to imitate a hiring manager could be good practice – just consult other resources, too.
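For those open-ended uses, a role-play prompt can be templated the same way. A sketch under the article's caveat that this suits practice and brainstorming, not factual questions:

```python
def role_play_prompt(role: str, task: str) -> str:
    """Frame an open-ended task as role play -- good for practice
    and brainstorming, risky for questions with one right answer."""
    return (
        f"Pretend you are {role}. {task} "
        "Stay in character and ask me follow-up questions."
    )

prompt = role_play_prompt(
    "a hiring manager at a mid-sized software company",
    "Conduct a mock job interview with me for a junior developer role.",
)
```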

Stay neutral

"Don't lead the witness," Battle says. If you're trying to decide between two cars, don't say you're leaning towards the Toyota. "Otherwise, that's the answer you're likely to get."Pleases and thank yous

According to a 2019 Pew Research Center survey, more than half of Americans say "please" when they're talking to their smart speakers. That trend seems to have continued. A 2025 survey by the publisher Future found 70% of people are polite to AI when they use it. Most said they're nice because it's simply the right thing to do, though 12% said they do it to protect themselves in case of robot uprisings.

Politeness may not protect you from angry robots or make LLMs more accurate, but there are other reasons to keep doing it.

"The bigger thing for me is saying 'please' and 'thank you' might make you more comfortable interacting with the AI," says Schulhoff. "It's not helping the performance of the model, but if it's helping you use the model more because you're more comfortable, then it's useful."

There's also the tenderness of your own human nature to consider. The philosopher Immanuel Kant argued that one reason you shouldn't be cruel to animals is that it's also damaging to yourself. Essentially, being unfriendly to anything makes you a harsher person. You can't hurt an AI's feelings because it doesn't have any, but maybe you should be nice anyway. It's a habit that could benefit other parts of your life.

by Thomas Germain, BBC/Future |  Read more:
Image: Serenity Strull
[ed. See also: I hacked ChatGPT and Google's AI - and it only took 20 minutes (BBC).]