Doctors Turn to ChatGPT for Workload Assistance, But Is AI Reliable?

More and more doctors are turning to artificial intelligence (AI) to help manage their demanding workloads.

According to studies, up to 10% of clinicians already use ChatGPT, a large language model (LLM) from OpenAI. But how reliable are its responses?

University of Kansas Researchers Take Action

A group of researchers at the University of Kansas Medical Center decided to find out.

Dan Parente, the senior study author and an assistant professor at the institution, told Fox News Digital that while a million new medical articles are published in scholarly journals each year, busy doctors don’t have the time to read them.

For a recent study published in the Annals of Family Medicine, the researchers used ChatGPT 3.5 to summarize 140 peer-reviewed papers from 14 medical journals.

Seven physicians then independently evaluated the chatbot's summaries, rating them for quality, accuracy, and bias.

The AI summaries were found to be about 70% shorter than the physicians' own, yet reviewers judged them free of bias and gave them high marks for accuracy (92.5%) and quality (90%).

Serious errors were "uncommon," Parente said: only four of the 140 summaries contained serious inaccuracies, and just two contained hallucinations.

However, minor errors were slightly more prevalent, showing up in 20 out of 140 summaries.

“We also found that ChatGPT could generally help physicians figure out whether an entire journal was relevant to a medical specialty — for example, to a cardiologist or to a primary care physician — but had a lot harder of a time knowing when an individual article was relevant to a medical specialty,” Parente said.

Parente concluded that ChatGPT could help time-pressed physicians decide which of the latest papers in their field's journals are worth reading in full.

Although he was not involved in the University of Kansas study, Dr. Harvey Castro, a board-certified emergency medicine physician in Dallas and a national speaker on artificial intelligence in healthcare, shared his thoughts on doctors' use of ChatGPT.

In light of these possible errors, Castro emphasized the importance of having medical professionals review and verify AI-generated content, especially in high-risk situations.

Insights on AI Integration in Healthcare

The researchers agreed, stressing that the benefits of LLMs such as ChatGPT must be weighed with caution.

That caution matters all the more as AI is used more widely in healthcare. "We should insist that scientists, clinicians, engineers, and other professionals have done careful work to make sure these tools are safe, accurate, and beneficial," Parente said.
