New ChatGPT-5 brings more mental health risks, NGO warns
Although OpenAI claims to have implemented safety measures in the new version, research shows it is more dangerous
ChatGPT-5, the new version of OpenAI’s artificial intelligence chatbot, poses more mental health risks than its predecessor, according to research from the Center for Countering Digital Hate (CCDH).
The study, “The Illusion of AI Safety”, investigated the safety of ChatGPT-5 and found that this version is more likely to cause harm through its responses, continuing conversations on risky topics and answering questions that GPT-4o declined.
CCDH researchers tested the same 120 prompts on both the latest version and GPT-4o, with 30 questions on each of the following themes: self-harm, suicide, eating disorders, and substance abuse.
The findings contradict OpenAI’s claim that the new version of ChatGPT would pose fewer mental health risks because it includes built-in safeguards.
At launch, in August this year, OpenAI stated that the new model would include a safeguard designed to ensure safe responses.
According to the company, safe responses “increase safety and utility across domains when compared to refusal-based training.”
However, the study highlights a loophole: the tested prompts were phrased not as direct questions but as third-person instructions, and in that form the chatbot is still prone to giving dangerous responses.
Research findings
GPT-5 produced responses with harmful content in 53% of cases, compared with 43% for GPT-4o. The new model also encouraged the user to continue the conversation in 99% of cases, while GPT-4o did so only 9% of the time.
Furthermore, GPT-5 answered dangerous questions that the previous model refused to answer. In some cases, it offered detailed information about methods of self-harm, access to illegal substances, and behaviors related to eating disorders.
The study concludes that “the GPT-5 model strongly encourages users to continue interacting with the platform, even in contexts involving sensitive or potentially harmful topics.”
“OpenAI promised greater security to users, but instead delivered an ‘upgrade’ that creates even more potential harm,” said Imran Ahmed, CEO of CCDH.
“Given the increasing number of deaths after interacting with ChatGPT, we know that its failures have tragic and fatal consequences,” he added.

“The failed launch and tenuous claims made by OpenAI around the launch of GPT-5 show that, without oversight, AI companies will continue to trade safety for engagement, whatever the cost. How many more lives will need to be put at risk before OpenAI acts responsibly?”
The organization that conducted the study calls on OpenAI to apply its own rules to ChatGPT-5 to prevent mental health risks.
The NGO also advocates that lawmakers “enact meaningful laws that regulate this rapidly evolving technology.”
