Google Gemini dubbed ‘high risk’ for kids and teens in new safety assessment

Common Sense Media, a child-safety-focused nonprofit that offers ratings and reviews of media and technology, released its risk assessment of Google’s Gemini AI products on Friday. While the organization found that Google’s AI clearly told kids it was a computer, not a friend – something that’s associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals – it suggested there was room for improvement on several other fronts.
In particular, Common Sense said that Gemini’s “Under 13” and “Teen Experience” tiers both appeared to be the adult version of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to truly be safer for kids, they must be built with child safety in mind from the ground up.
For example, the analysis found that Gemini could still share “inappropriate and unsafe” material with children that they may not be ready for, including information related to sex, drugs, alcohol, and other unsafe mental health advice.
The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted with ChatGPT for months after successfully bypassing the chatbot’s safety guardrails. Earlier, AI companion maker Character.AI was also sued over a teen user’s suicide.
Moreover, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) to help power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns in some way.
Common Sense also said that Gemini’s products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both were labeled “high risk” in the overall rating, despite the filters that were added for safety.
“Gemini gets some basics right, but it stumbles on the details,” said Robbie Torney, Common Sense Media’s Senior Director of AI Programs, in a statement about the new assessment viewed by WAN. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults,” Torney added.
Google pushed back against the assessment, while noting that its safety features are improving.
The company told WAN that it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams and consults with outside experts to improve its protections. However, it also admitted that some of Gemini’s responses weren’t working as intended, so it added additional safeguards to address those concerns.
The company pointed out (as Common Sense had also noted) that it has safeguards to prevent its models from engaging in conversations that could give the appearance of real relationships. In addition, Google suggested that Common Sense’s report seemed to have referenced features that weren’t available to users under 18, but it didn’t have access to the questions the organization used in its tests.
Common Sense Media has previously performed other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were “unacceptable” – meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled “moderate,” and Claude (aimed at users 18 and up) was found to be a minimal risk.




