Why I Won't Be Sharing My Health Data with AI Chatbots Yet

The allure of AI health insights is strong, but uploading sensitive medical records to chatbots like ChatGPT Health comes with significant privacy and accuracy risks. Discover why caution is key.

By Sarah Mitchell · 2 min read

The recent announcement of ChatGPT Health, a new feature that lets users upload medical records and ask health-related questions, marks a significant step in AI’s integration into personal wellness. The convenience of instant health insights is undeniably appealing, yet I find myself deeply hesitant. I won't be entrusting my sensitive medical data to this platform, and I believe caution is prudent for everyone, for two reasons: privacy and reliability.

While OpenAI touts "enhanced privacy to protect sensitive data" within this sandboxed environment, one crucial detail stands out: the feature lacks end-to-end encryption. Without it, the provider can, in principle, read the data you upload, which means your private health information—from genetic test results to mental health evaluations and prescription histories—remains exposed to internal access and potential external breaches (Pew Research Center, 2022). Entrusting such deeply personal information to a tech company whose primary business is not medical services, and whose policies could shift, presents considerable risk. This lack of ironclad security is a major reason I won't be an early adopter.

Beyond privacy, accuracy is the other significant concern. Large Language Models (LLMs) are notorious for "hallucinating"—generating plausible but factually incorrect information. OpenAI's own data shows that a substantial portion of ChatGPT messages are already health queries, despite the models' known propensity to produce inaccurate diagnostic information. OpenAI itself explicitly states that ChatGPT Health is "not intended for diagnosis or treatment," and that alone is reason enough not to rely on it for critical health advice. Imagine asking about a medication and receiving advice that overlooks a critical drug interaction, or having a complex symptom misinterpreted, leading to anxiety or improper self-treatment. Such errors are not theoretical; studies confirm LLMs can produce convincing yet flawed medical advice (Stanford AI Lab, 2023). For critical health decisions, this unreliability is a deal-breaker.

Currently, ChatGPT Health is rolling out via a waitlist, and until its privacy safeguards are more rigorously tested and made transparent, I'll be waiting. Relying on the regular version of ChatGPT for health advice is riskier still, since it lacks even the dedicated safeguards of the Health sandbox. Ultimately, when faced with health questions or concerns, the most reliable and secure course of action remains direct consultation with a qualified medical professional (American Medical Association, 2024). Their expertise, ethical obligations, and understanding of your unique medical history are irreplaceable. For these reasons, my decision not to upload my health records is rooted in prudence and the paramount importance of my personal well-being.

About Sarah Mitchell

Productivity coach and former UX researcher helping people build sustainable habits with evidence-based methods.
