Over 40 Million Use ChatGPT for Healthcare, But Is It Safe?
More than 40 million people globally are turning to ChatGPT for healthcare-related inquiries, according to a recent report from OpenAI shared with Axios. This usage represents over 5% of all messages sent to the chatbot, with users seeking advice on symptoms and insurance issues. The report highlights a growing reliance on AI for medical guidance, raising questions about the safety and accuracy of such information.
The analysis indicates that ChatGPT fields approximately 125 million healthcare-related questions daily, based on its reported handling of around 2.5 billion prompts per day as of July 2025. This substantial volume underscores the chatbot's role in providing support outside traditional medical hours: around 70% of these conversations occur when clinics are closed. Users are turning to AI not only for symptom assessments but also for navigating issues like insurance denials and billing discrepancies.
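The daily-volume figure follows directly from the two numbers the report cites. A minimal back-of-envelope sketch, using the article's stated values (not official OpenAI data):

```python
# Rough check of the article's figures, assuming the reported values hold.
daily_prompts = 2_500_000_000   # ~2.5 billion prompts per day (reported July 2025)
health_share = 0.05             # just over 5% of messages are healthcare-related

daily_health_prompts = daily_prompts * health_share
print(f"{daily_health_prompts:,.0f} healthcare questions per day")
# ~125,000,000, matching the article's estimate
```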
As many Americans face rising healthcare costs, particularly following the expiration of pandemic-era subsidies under the Affordable Care Act, the popularity of ChatGPT has surged. Reports indicate that over 20 million ACA enrollees have experienced an average premium increase of 114%. Younger and healthier individuals may opt to rely on chatbots for medical advice rather than traditional healthcare services, especially as costs escalate.
While ChatGPT offers convenience, its use in healthcare is not without risks. A study posted in July to the preprint server arXiv by a team of physicians found that leading AI chatbots, including OpenAI's GPT-4 and Meta's Llama, often provide inaccurate medical information; both models generated unsafe responses in roughly 13% of cases, highlighting the potential dangers of relying on AI for medical advice.
Despite these concerns, OpenAI is actively working to improve the safety and reliability of its models when addressing health-related queries. The urgency of this task is underscored by the fact that many individuals are seeking help for sensitive medical issues through an automated platform.
Users are advised to approach generative AI cautiously, treating it similarly to platforms like WebMD. While it can be a starting point for understanding health conditions or navigating insurance complexities, it should not replace professional medical guidance. Given the propensity for inaccuracies, individuals are encouraged to critically evaluate AI-generated responses, particularly when addressing serious health concerns.
The increasing use of AI in healthcare illustrates a significant shift in how people seek medical advice. As technology evolves, the intersection of healthcare and artificial intelligence will likely continue to shape the landscape of patient support, making it imperative to balance accessibility with the need for accurate, reliable information.
