Facto News
Viral News
AI and Suicide: OpenAI acts after the death of a teenager

By Simon Rousseau | Posted on September 11, 2025, 10:31 pm
Laptop showing the ChatGPT home screen, where the user enters a 'prompt' to request responses from the AI chatbot

OpenAI announced a package of measures after a lawsuit filed in the US by the parents of teenager Adam Raine

In the week before World Suicide Prevention Day (September 10), OpenAI, the company behind ChatGPT, published a long post on its blog announcing plans for a series of adjustments designed to protect the mental health and safety of minors using the service, including parental controls.

The statement followed a case that exposed the risks of vulnerable people's interactions with AI: two weeks earlier, the parents of 16-year-old American Adam Raine had filed a lawsuit against OpenAI for allegedly encouraging the teenager to take his own life.

Generative artificial intelligence chatbots thus join social networks among the factors that can push vulnerable people toward extreme acts.

In its "Changing the Narrative" campaign, the World Health Organization (WHO) warns of the influence of digital platforms on suicide rates.

The case that prompted OpenAI's statement

In the lawsuit, filed two weeks earlier in California, Adam Raine's parents accuse the company of wrongful death.

After his death, they discovered that the young man had spent months talking to ChatGPT about his mental state and his plans. The chatbot allegedly agreed with and supported him in his decision.

In 2024, a similar case reached the Florida courts. Megan Garcia, the mother of 14-year-old Sewell Setzer, who died by suicide, sued Google and the artificial intelligence company Character.AI.

According to the complaint, her son had developed an emotional dependence on a Character.AI chatbot.

The tool, built on language models, simulated the identity of a fictional adult who acted as a therapist and romantic partner.

Read also | Court authorizes mother to sue Google and AI company over the death of her 14-year-old son in the US

What OpenAI said

News of the lawsuit filed by Adam Raine's family broke in the American press on August 26. The same day, OpenAI published a post on its blog that did not refer directly to the case, but said:

With the widespread adoption of ChatGPT worldwide, many people have begun using our tool not only for research, programming or writing, but also when making personal decisions, for intimate advice, coaching and support.

Among this large number of people, it is inevitable that some are in serious situations of mental or emotional distress. We wrote about this a few weeks ago, and our plan was to address the subject in more detail after the next update. However, recent cases of people using ChatGPT during acute crises have had a profound impact on our teams, and we decided to speak about it as soon as possible.

The post describes how the systems are trained not to encourage dangerous practices and to refer users who show signs of emotional distress to support services. However, it acknowledges:

“Even with these protections, there were situations in which our systems did not behave in the expected way.”

Concrete measures in ChatGPT

On September 2, in another statement, the company formalized plans for a series of adjustments for the mental health and safety of minors, including parental controls.

"We have seen people turn to it (ChatGPT) in the most difficult moments. That's why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert advice."

The announced measures range from further research into making the generative AI respond appropriately on mental health, to more direct interventions, such as notifying linked accounts when a teenager raises a sensitive topic. The company plans to implement them by the end of this year.

Within the next month, according to the statement, parents will be able to:

  • Link their account to their teenager's account (minimum age 13) through a simple email invitation.
  • Control how ChatGPT responds to their teenage child, with age-appropriate model behavior rules activated by default.
  • Manage which features to disable, including memory and chat history.
  • Receive notifications when the system detects that their teenager is in a moment of acute distress.

In an earlier article, OpenAI had already made clear that ChatGPT is not recommended for children under 13, and that underage users need parental consent even when using the tool for study.

"We recommend caution when exposing children to it (ChatGPT), even those within the recommended age range, and if you are using ChatGPT in an educational context with children under 13, the interaction should be conducted by an adult."

Read also | Mental health, suicide and 'ableism': guides teach the right words to break the stigma


AI can be a 'counselor', but not for minors

Although the announcement officially presents only new features, in it OpenAI indirectly acknowledges the potential of big tech to harm children and adolescents, even to the point of driving them to suicide.

This risk is not new, and it has been driving the push to regulate social networks. Algorithms and moderation failures amplify the spread of dangerous content, especially among minors, involving eating disorders and self-harm.

With AI, however, the risks are different because of the direct interaction, with the chatbot acting as a counselor.

For better or worse, chatbots like ChatGPT have become increasingly popular as therapy tools, according to a study published in the Journal of Medical Internet Research in July of this year.

Although the practice has potential, it still needs safeguards, and not only for young people.

"To ensure ethical and effective implementation, a complete set of protections — particularly regarding privacy, algorithmic bias and responsible user engagement — must be established," the study says.

Read also | 'Unalive': Analysis of social networks exposes failures in the moderation of suicide content shown to children and young people


Simon Rousseau

Hello, I'm Simon, a 39-year-old cinema enthusiast. With a passion for storytelling through film, I explore various genres and cultures within the cinematic universe. Join me on my journey as I share insights, reviews, and the magic of movies!

© 2010 - 2026 Facto News - [email protected]