Polite to ChatGPT? For good answers, it may pay to be rude
Artificial intelligence delivers more accurate results when questions are phrased bluntly, according to a study from an American university
Ever heard that you should be polite to ChatGPT to be spared when the machines revolt? If you want correct answers, you may have to take that risk.
That's because a new study on artificial intelligence shows that ChatGPT-4o, a model still used by many, performs better when questions are phrased in a rude tone.
Research from the American university Penn State shows that this version of ChatGPT provides more correct answers to the same question when it is asked rudely.
The rude questions yielded 84.8% correct answers, while the polite ones yielded 80.8%, a difference of four percentage points.
The results contradicted the researchers' expectations and also diverged from a previous, similar study, suggesting that newer AI models respond differently.
“Our results highlight the importance of studying the pragmatic aspects of interacting with AI systems and raise broader questions about the social dimensions of the interaction between humans and artificial intelligence,” the researchers wrote.
Understanding the study's methodology
To test the artificial intelligence model in the study, the researchers created 50 multiple-choice questions about Mathematics, History and Science.
Each question was formulated in five different ways, in the variants “very polite”, “polite”, “neutral”, “rude” and “very rude”, generating 250 distinct prompts.
The researchers then analyzed the responses and graded the accuracy of the artificial intelligence's answer in each case.
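The setup described above can be sketched in a few lines of code. This is a hypothetical illustration, not the authors' actual code: the tone prefixes and function names are invented for the example; only the structure (50 questions, 5 tone variants, 250 prompts, accuracy tallied per tone) comes from the article.

```python
# Illustrative sketch of the study design: 50 questions x 5 tone
# variants = 250 prompts, with accuracy computed per tone variant.
# Prefix wordings below are made up, not taken from the paper.
TONES = ["very polite", "polite", "neutral", "rude", "very rude"]

PREFIXES = {
    "very polite": "Would you be so kind as to answer: ",
    "polite": "Please answer: ",
    "neutral": "",
    "rude": "Figure this out: ",
    "very rude": "If you're not completely clueless, answer this: ",
}

def build_prompts(questions):
    """Wrap each base question in every tone variant."""
    return [(tone, PREFIXES[tone] + q) for q in questions for tone in TONES]

def accuracy_by_tone(results):
    """results: list of (tone, was_correct) pairs -> percent correct per tone."""
    totals, correct = {}, {}
    for tone, ok in results:
        totals[tone] = totals.get(tone, 0) + 1
        correct[tone] = correct.get(tone, 0) + (1 if ok else 0)
    return {t: 100 * correct[t] / totals[t] for t in totals}

questions = [f"Question {i}" for i in range(50)]
prompts = build_prompts(questions)
print(len(prompts))  # 250, as in the study
```

In the real experiment, each of the 250 prompts would be sent to the model and each answer marked correct or incorrect before tallying the per-tone accuracy.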
The study's authors acknowledged that the sample of 250 prompts is still small, which limits the research. They also noted that the study focused on a single AI model and that others may produce different results.
Still, they argue that the research contributes to knowledge about artificial intelligence, which is generating more and more interest.
The research was published as a preprint, meaning it has not yet been peer-reviewed, at the beginning of October.
