AI

X users treating Grok like a fact-checker spark concerns over misinformation

Some users on Elon Musk’s X are turning to Musk’s AI bot Grok for fact-checking, raising concerns among human fact-checkers that this could fuel misinformation.

Earlier this month, X enabled users to call on xAI’s Grok and ask it questions about various things. The move was similar to Perplexity, which has been running an automated account on X to offer a similar experience.

Shortly after xAI created Grok’s automated account on X, users started experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target specific political beliefs.

Fact-checkers are concerned about using Grok, or any other AI assistant of this kind, in this way because the bots can frame their answers to sound convincing even when they are not factually correct. Instances of Grok spreading fake news and misinformation have been seen in the past.

In August last year, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the US election.

Other chatbots, including OpenAI’s ChatGPT and Google’s Gemini, were also seen generating inaccurate information about the election last year. Separately, disinformation researchers found in 2023 that AI chatbots, including ChatGPT, could easily be used to produce convincing text with misleading narratives.

“AI assistants, like Grok, are really good at using natural language and give an answer that sounds like a human being said it. And in that way, the AI products have this claim on naturalness and authentic-sounding responses, even when they’re potentially very wrong. That would be the danger here,” Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter, told TechCrunch.

Grok was asked by a user on X to fact-check claims made by another user

Unlike AI assistants, human fact-checkers use multiple, credible sources to verify information. They also take full accountability for their findings, with their names and organizations attached, to ensure credibility.

Pratik Sinha, co-founder of India’s non-profit fact-checking website Alt News, said that although Grok currently appears to have convincing answers, it is only as good as the data it is supplied with.

“Who is going to decide what data it gets supplied with, and that is where government interference, etc., will come into the picture,” he noted.

“There is no transparency. Anything that lacks transparency will cause harm, because anything that lacks transparency can be molded in any which way.”

“Could be misused – to spread misinformation”

In one of the responses posted earlier this week, Grok’s account on X acknowledged that it “could be misused – to spread misinformation and violate privacy.”

However, the automated account does not display any disclaimers to users when they receive its answers, leaving them open to being misinformed if it has, for instance, hallucinated the answer – the potential downside of AI.

Grok’s response to the question of whether it can spread misinformation (translated from Hinglish)

“It may make up information to provide a response,” Anushka Jain, a research associate at Goa-based multidisciplinary research collective Digital Futures Lab, told TechCrunch.

There is also the question of how much Grok uses posts on X as training data, and what quality-control measures it takes to verify such posts. Last summer, X pushed out a change that appeared to allow Grok to consume X user data by default.


What sets AI assistants such as Grok, which are accessible through social media platforms, apart is that they deliver their information publicly – in contrast to ChatGPT or other chatbots that are used privately.

Even if a user is well aware that the information they receive from the assistant could be misleading or not entirely correct, others on the platform might still believe it.

This could cause serious social harm. Such cases were seen earlier in India, when misinformation spread over WhatsApp led to mob lynchings. However, those severe incidents occurred before the arrival of GenAI, which has made generating synthetic content even easier and more realistic-seeming.

“If you see a lot of these Grok answers, you’re going to say, hey, well, most of them are right, and that may be so, but there are going to be some that are wrong. And how many? It’s not a small fraction. Some of the research studies have shown that AI models are subject to 20% error rates … and when it goes wrong, it can go really wrong with real-world consequences,” Holan said.

AI vs. real fact-checkers

While AI companies, including xAI, are refining their AI models to make them communicate more like humans, they still are not, and cannot be, a replacement for humans.

In recent months, technology companies have been exploring ways to reduce their reliance on human fact-checkers. Platforms including X and Meta have begun embracing the new concept of crowdsourced fact-checking through so-called Community Notes.

Naturally, such changes also cause concern among fact-checkers.

Sinha of Alt News is optimistic that people will learn to differentiate between machines and human fact-checkers, and will come to value the accuracy of humans more.


“We’re going to see the pendulum swing back eventually toward more fact-checking,” said IFCN’s Holan.

However, she noted that in the meantime, fact-checkers will likely have more work to do as AI-generated information spreads rapidly.

“A lot of this issue depends on whether you really care about what is actually true or not. Are you just looking for the veneer of something that sounds and feels true without actually being true? Because that’s what AI assistance will get you,” she said.

X and xAI did not respond to our request for comment.
