
xAI blames Grok’s obsession with white genocide on an ‘unauthorized modification’

xAI blamed an "unauthorized modification" for a bug in its AI-powered Grok chatbot that caused it to repeatedly refer to "white genocide in South Africa" when invoked in certain contexts on X.

On Wednesday, Grok began replying to dozens of posts on X with information about white genocide in South Africa, even in response to unrelated topics. The strange replies stemmed from Grok's X account, which responds to users with AI-generated posts whenever someone tags "@grok."

According to a post on Thursday from xAI's official X account, a change was made Wednesday morning to the Grok bot's system prompt (the high-level instructions that guide the bot's behavior) directing Grok to provide a "specific response" on a "political topic." xAI says the tweak "violated [its] internal policies and core values," and that the company "has conducted a thorough investigation."

It's the second time xAI has publicly acknowledged an unauthorized change to Grok's code causing the AI to respond in controversial ways.

In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, xAI's billionaire founder and the owner of X. Igor Babuschkin, an xAI engineering lead, said a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump spreading misinformation, and that xAI reverted the change as soon as users began pointing it out.


xAI said Thursday that it will make several changes to prevent similar incidents from occurring in the future.

Starting today, xAI will publish Grok's system prompts on GitHub, along with a changelog. The company says it will also "put in place additional checks and measures" to ensure xAI employees can't modify the system prompt without review, and will establish a "24/7 monitoring team to respond to incidents with Grok's answers that are not caught by automated systems."

Despite Musk's frequent warnings about the dangers of AI run amok, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably cruder than AI like Google's Gemini and ChatGPT, cursing without much restraint.

A study by SaferAI, a nonprofit aimed at improving the accountability of AI labs, found that xAI ranks poorly on safety among its peers, owing to its "very weak" risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.
