
Technology

Over 40 Million Use ChatGPT for Healthcare, But Is It Safe?

Editorial


More than 40 million people globally are turning to ChatGPT for healthcare-related inquiries, according to a recent report from OpenAI shared with Axios. This usage represents over 5% of all messages sent to the chatbot, with users seeking advice on symptoms and insurance issues. The report highlights a growing reliance on AI for medical guidance, raising questions about the safety and accuracy of such information.

The analysis indicates that ChatGPT fields approximately 125 million healthcare-related questions daily, based on the roughly 2.5 billion total prompts per day the company reported as of July 2025. This substantial volume underscores the chatbot's role in providing support outside traditional medical hours: around 70% of these conversations occur when clinics are closed. Users are turning to the AI not only for symptom assessments but also for navigating issues like insurance denials and billing discrepancies.
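The daily-volume figure follows directly from the two reported numbers; a quick back-of-the-envelope check (the share and the total are the article's reported figures, not independent data):

```python
# Reported inputs: ~2.5 billion prompts/day, of which ~5% are health-related.
total_prompts_per_day = 2.5e9
health_share = 0.05

health_prompts_per_day = total_prompts_per_day * health_share
print(f"{health_prompts_per_day:,.0f}")  # ~125 million per day
```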

As many Americans face rising healthcare costs, particularly following the expiration of pandemic-era subsidies under the Affordable Care Act, the popularity of ChatGPT has surged. Reports indicate that the roughly 20 million ACA marketplace enrollees face an average premium increase of 114%. Younger and healthier individuals may opt to rely on chatbots for medical advice rather than traditional healthcare services, especially as costs escalate.

While ChatGPT offers convenience, its use in healthcare is not without risks. A study posted in July by a team of physicians to the preprint server arXiv found that leading AI chatbots, including OpenAI's GPT-4 and Meta's Llama, often provide inaccurate medical information: both models generated unsafe responses in 13% of test cases, highlighting the potential dangers of relying on AI for medical advice.

Despite these concerns, OpenAI is actively working to improve the safety and reliability of its models when addressing health-related queries. The urgency of this task is underscored by the fact that many individuals are seeking help for sensitive medical issues through an automated platform.

Users are advised to approach generative AI cautiously, treating it similarly to platforms like WebMD. While it can be a starting point for understanding health conditions or navigating insurance complexities, it should not replace professional medical guidance. Given the propensity for inaccuracies, individuals are encouraged to critically evaluate AI-generated responses, particularly when addressing serious health concerns.

The increasing use of AI in healthcare illustrates a significant shift in how people seek medical advice. As technology evolves, the intersection of healthcare and artificial intelligence will likely continue to shape the landscape of patient support, making it imperative to balance accessibility with the need for accurate, reliable information.



