People struggle to get useful health advice from chatbots, study finds

With long waiting lists and rising costs in overburdened healthcare systems, many people are turning to AI-powered chatbots like ChatGPT for medical self-diagnosis. About one in six American adults already use chatbots for health advice at least monthly, according to one recent survey.

But placing too much trust in chatbots’ outputs can be risky, in part because people struggle to know what information to give chatbots to get the best possible health recommendations, according to a recent Oxford-led study.

“The study revealed a two-way communication breakdown,” Adam Mahdi, director of graduate studies at the Oxford Internet Institute and a co-author of the study, told WAN. “Those using [chatbots] didn’t make better decisions than participants who relied on traditional methods like online searches or their own judgment.”

For the study, the authors recruited around 1,300 people in the U.K. and gave them medical scenarios written by a group of doctors. Participants were tasked with identifying potential health conditions in the scenarios and using chatbots, as well as their own methods, to figure out possible courses of action (e.g. seeing a doctor or going to the hospital).

Participants used the default AI model powering ChatGPT, GPT-4o, as well as Cohere’s Command R+ and Meta’s Llama 3, which once underpinned the company’s Meta AI assistant. According to the authors, the chatbots not only made participants less likely to identify a relevant health condition, but also made them more likely to underestimate the severity of the conditions they did identify.


Mahdi said that participants often omitted key details when querying the chatbots, or received answers that were difficult to interpret.

“[T]he responses they received [from the chatbots] often combined good and poor recommendations,” he added. “Current evaluation methods for [chatbots] do not reflect the complexity of interacting with human users.”

The findings come as tech companies increasingly push AI as a way to improve health outcomes. Apple is reportedly developing an AI tool that can dispense advice related to exercise, diet, and sleep. Amazon is exploring an AI-based way to analyze medical databases for “social determinants of health.” And Microsoft is helping build AI to triage the messages patients send to care providers.

But as WAN has previously reported, both professionals and patients are mixed on whether AI is ready for higher-risk health applications. The American Medical Association recommends against physicians using chatbots like ChatGPT for help with clinical decisions, and major AI companies, including OpenAI, warn against making diagnoses based on their chatbots’ outputs.

“We would recommend relying on trusted sources of information for healthcare decisions,” Mahdi said. “Current evaluation methods for [chatbots] do not reflect the complexity of interacting with human users. Like clinical trials for new medications, [chatbot] systems should be tested in the real world before being deployed.”
