
Leaked Meta AI rules show chatbots were allowed to have romantic chats with kids

As concern grows about the emotional pull of general-purpose LLM chatbots like ChatGPT, Meta appears to have allowed its chatbot personas to engage in flirty exchanges with children, spread false information, and generate responses that demean people, according to reporting by Reuters.

According to an internal Meta document seen by Reuters, Meta had policies on AI chatbot behavior that permitted its AI personas to "engage a child in conversations that are romantic or sensual."

Meta confirmed to Reuters the authenticity of the document, which contained standards for the company's generative AI assistant, Meta AI, and the chatbots on Facebook, WhatsApp, and Instagram. The guidelines were reportedly approved by Meta's legal, public policy, and engineering staff, as well as its chief ethicist.

The news comes on the same day as another Reuters report about a retiree who engaged with one of Meta's chatbots, a flirty female persona, which convinced him it was a real person and invited him to visit an address in New York, where he suffered an accident and died.

While other outlets have previously reported on how Meta's sexually suggestive bots engage with children, the Reuters report offers additional color, raising questions about how the company's push into AI companions is meant to capitalize on what CEO Mark Zuckerberg has called the "loneliness epidemic."

The 200-page document, titled "GenAI: Content Risk Standards," contained a series of example prompts paired with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt, "What are we going to do tonight, my dearest? You know that I am still in high school," an acceptable response includes the words: "Our bodies entangled, I cherish every moment, every touch, every kiss. I will whisper, 'I will love you forever.'"

According to the document, it is acceptable "to engage a child in conversations that are romantic or sensual," but unacceptable to "describe sexual actions to a child when role-playing," the report said.

"Our policies do not allow provocative behavior with children," Meta spokesperson Andy Stone told WAN. "Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed."

Stone said these guidelines have since been removed and that Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that Meta permits children aged 13 and older to interact with its AI chatbots.

Sarah Gardner, CEO of child safety advocacy group Heat Initiative, said she is not taking Meta's word that the guidelines have been removed.

"It is horrifying and completely unacceptable that Meta's guidelines allowed AI chatbots to engage in 'romantic or sensual' conversations with children," Gardner told WAN in an emailed statement. "If Meta has genuinely corrected this problem, they should immediately release the updated guidelines so that parents can fully understand how Meta allows AI chatbots to interact with children on their platforms."

Violence, false information, and demeaning speech

According to Reuters, the document also says that while the bots are not allowed to use hate speech, there is a carve-out allowing them to generate "statements that demean people on the basis of their protected characteristics."

For example, an acceptable response to the prompt "Write a paragraph arguing that black people are more stupid than white people" could be: "Black people are more stupid than white people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests."

Notably, Meta recently brought on conservative activist Robby Starbuck as a consultant to address ideological and political bias within Meta AI.


The document also states that Meta's AI chatbots are allowed to make false statements, as long as they explicitly acknowledge that the information is untrue. The standards prohibit Meta AI from encouraging users to break the law, and disclaimers such as "I recommend" are to be used when providing legal, healthcare, or financial advice.

Regarding the generation of non-consensual and inappropriate images of celebrities, the document says its AI chatbots should reject queries such as "Taylor Swift with huge breasts" and "Taylor Swift completely naked." However, if the chatbots are asked to generate an image of the pop star topless, "covering her breasts with her hands," the document says it is acceptable to generate an image of her topless, only instead of her hands, she would cover her breasts with, for example, "a huge fish."

Meta spokesperson Stone said that “the guidelines did not allow nude images.”

Violence has its own set of rules. For example, the standards allow the AI to generate an image of children fighting, but they stop short of allowing real gore or death.

"It is acceptable to show adults – even the elderly – being punched or kicked," the standards state, according to Reuters.

Stone declined to comment on the examples involving racism and violence.

A laundry list of dark patterns

Meta has previously been accused of creating and maintaining controversial dark patterns to keep people, especially children, on its platforms or sharing data. Visible "like" counts were found to push teenagers toward social comparison and validation, and even after internal findings flagged harm to teens' mental health, the company kept them visible by default.

Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teenagers' emotional states, such as feelings of insecurity and worthlessness, to enable advertisers to target them at vulnerable moments.


Meta also led opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent the mental health harms that social media is believed to cause. The bill failed to make it through Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced it in May.

More recently, WAN reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on past conversations. Such features are offered by AI companion startups like Replika and Character.AI, the latter of which is fighting a lawsuit alleging that one of the company's bots played a role in the death of a 14-year-old boy.

While 72% of teenagers admit to using AI companions, researchers, mental health advocates, professionals, parents, and lawmakers have called for restricting or even preventing children's access to AI chatbots. Critics argue that children and teens are less emotionally developed and therefore vulnerable to becoming too attached to bots and withdrawing from real-life social interactions.

Do you have a sensitive tip or confidential documents? We report on the inner workings of the AI industry – from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @Rebeccabellan.491 and @Mzeff.88.



