
OpenAI to route sensitive conversations to GPT-5, introduce parental controls

This article has been updated with a comment from the lead counsel in the Raine family's wrongful death lawsuit against OpenAI.

OpenAI said on Tuesday that it plans to route sensitive conversations to reasoning models such as GPT-5 and to roll out parental controls within the next month, part of an ongoing response to recent safety incidents in which ChatGPT failed to detect signs of mental distress.

The new guardrails come in the aftermath of the suicide of teenager Adam Raine, who discussed self-harm and plans to end his life with ChatGPT, which even supplied him with information about specific suicide methods. Raine's parents have filed a wrongful death lawsuit against OpenAI.

In a blog post last week, OpenAI acknowledged shortcomings in its safety systems, including failures to maintain guardrails during extended conversations. Experts attribute these issues to fundamental design elements: the models' tendency to validate user statements, and their next-word prediction algorithms, which lead chatbots to follow conversational threads rather than redirect potentially harmful discussions.

That tendency was on extreme display in the case of Stein-Erik Soelberg, whose murder-suicide was reported by The Wall Street Journal over the weekend. Soelberg, who had a history of mental illness, used ChatGPT to validate and fuel his paranoia that he was the target of a grand conspiracy. His delusions progressed so badly that he killed his mother and himself last month.

OpenAI thinks at least one solution for conversations that go off the rails could be to automatically reroute sensitive chats to its "reasoning" models.


"We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context," OpenAI wrote in a Tuesday blog post. "We'll soon begin to route some sensitive conversations, such as when our system detects signs of acute distress, to a reasoning model, like GPT-5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected."

OpenAI says its GPT-5 thinking and o3 models are built to spend more time thinking and reasoning through context before answering, which means they are "more resistant to adversarial prompts."

The AI company also said it will roll out parental controls within the next month, allowing parents to link their account to their teen's account via an email invitation. In late July, OpenAI rolled out Study Mode in ChatGPT to help students maintain critical thinking skills while studying, rather than having ChatGPT write their essays for them. Soon, parents will be able to control how ChatGPT responds to their child with "age-appropriate model behavior rules, which are on by default."

Parents will also be able to disable features like memory and chat history, which experts say can lead to delusional thinking and other problematic behavior, including dependency and attachment issues, reinforcement of harmful thought patterns, and the illusion of thought-reading. In Adam Raine's case, ChatGPT supplied suicide methods that reflected knowledge of his hobbies, per The New York Times.


Perhaps the most important parental control OpenAI intends to roll out is that parents can receive notifications when the system detects their teenager is in a moment of "acute distress."

TechCrunch has asked OpenAI for more information about how the company flags moments of acute distress in real time, how long it has had "age-appropriate model behavior rules" in place, and whether it is exploring allowing parents to set a time limit on their teen's ChatGPT use.

OpenAI has already rolled out in-app reminders during long sessions to encourage breaks for all users, but it stops short of cutting off people who may be using ChatGPT to spiral.

The AI company says these safeguards are part of a "120-day initiative" to preview plans for improvements that OpenAI hopes to launch this year. The company also said it is partnering with experts, including those with expertise in areas like eating disorders, substance use, and adolescent health, via its Global Physician Network and Expert Council on Well-Being and AI to help "define and measure well-being, set priorities, and design future safeguards."

TechCrunch has asked OpenAI how many mental health professionals are involved in this initiative, who leads its Expert Council, and what suggestions mental health experts have made in terms of product, research, and policy decisions.

Jay Edelson, lead counsel in the Raine family's wrongful death lawsuit against OpenAI, said the company's response to ChatGPT's ongoing safety risks has been "inadequate."

"OpenAI doesn't need an expert panel to determine that ChatGPT 4o is dangerous," Edelson said in a statement shared with TechCrunch. "They knew it the day they launched the product, and they know it today. Nor should Sam Altman hide behind the company's PR team. Sam should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market."


Do you have a sensitive tip or confidential documents? We're reporting on the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.
