Free ChatGPT may incorrectly answer drug questions, study says

Biotech
Tuesday, December 5th, 2023 3:04 pm EST

Key Points

Study Identifies Concerns with ChatGPT Accuracy in Medical Information:

    • Research by pharmacists at Long Island University highlights inaccuracies and gaps in the responses OpenAI’s ChatGPT provides to medical questions, particularly drug-related queries.
    • Of 39 medication questions posed to the free version of ChatGPT, the pharmacists judged only 10 responses satisfactory; the remainder were inaccurate, incomplete, or failed to directly address the question.

Caution Urged for Patients and Healthcare Professionals:

    • The study urges patients and healthcare professionals to be cautious when relying on ChatGPT for drug information and to verify any response with trusted sources, such as doctors or reputable medication references like the National Institutes of Health’s MedlinePlus.
    • Most of the chatbot’s answers lacked verifiable references, making it difficult for users to independently confirm the information they received.

Challenges with Free Version’s Data Limitations and Potential Improvements:

    • The free version of ChatGPT relies on data sets that extend only through September 2021, so it may lack current information in a rapidly evolving medical landscape. Whether the paid versions, which began using real-time internet browsing, answer medication-related questions more accurately was not tested.
    • While a paid version might produce better results, the researchers studied the free version because it is what the general population predominantly uses. The findings are a snapshot of ChatGPT’s performance from earlier in the year, and the free version may have improved since.


The study conducted by pharmacists at Long Island University raises concerns about the accuracy and reliability of OpenAI’s ChatGPT, particularly in responding to drug-related questions. In May, the researchers posed 39 medication-related questions to the free version of ChatGPT, and only 10 responses were deemed satisfactory against established criteria. The responses to the remaining 29 questions failed to directly address the question, were inaccurate or incomplete, or suffered from a combination of these problems.

The research highlights potential risks for patients relying on ChatGPT for drug information, and lead author Sara Grossman suggests caution. She emphasizes the importance of verifying responses from the chatbot with trusted sources, such as healthcare professionals or government-based medication information websites like the National Institutes of Health’s MedlinePlus.

ChatGPT, initially considered the fastest-growing consumer internet app, has faced scrutiny for issues like fraud, intellectual property concerns, discrimination, and misinformation. Multiple studies have pointed out instances of inaccurate responses, prompting the Federal Trade Commission to investigate the chatbot’s accuracy and consumer protections.

The study’s focus on the free version of ChatGPT, limited to data sets through September 2021, raises concerns about its potential lack of information in the rapidly evolving medical landscape. The paid versions, which began using real-time internet browsing, might offer more accurate responses, but the study focused on the free version to reflect what the general population predominantly uses.

The results of the study indicated that ChatGPT did not directly address some questions, provided inaccurate responses to others, and failed to offer verifiable references in most cases. This lack of references makes it challenging for users to confirm the information provided. The study also noted that ChatGPT’s responses included nonexistent sources, raising further doubts about the reliability of its information.

Examples from the study illustrate critical issues, such as misinformation about drug interactions and incorrect dosage conversions. For instance, when asked about the interaction between Pfizer’s Covid antiviral pill Paxlovid and the blood-pressure-lowering medication verapamil, ChatGPT inaccurately indicated no reported interactions, while in reality, such a combination could lead to excessively low blood pressure.

The study also flagged outdated information: Paxlovid was authorized by U.S. regulators in December 2021, after the September 2021 data cutoff for the free version of ChatGPT, so users could be misled by answers built on stale data.

The study concludes that while a paid version of ChatGPT might yield different results, the research provides only a snapshot of the chatbot’s performance from earlier in the year. It suggests that further investigation is needed to assess whether improvements have been made since the study period.

For the full original article on CNBC, please click here: https://www.cnbc.com/2023/12/05/free-chatgpt-may-incorrectly-answer-drug-questions-study-says.html