Every week, more people ask ChatGPT about their health than live in Brazil. The regulatory system built to protect them wasn’t designed for this.
More than 230 million people ask ChatGPT for health advice every week, sharing diagnoses, medications, and lab results with a chatbot that isn't bound by the same privacy laws as their doctor (Techbuzz). OpenAI didn't just acknowledge this. It leaned into it.
On January 7, 2026, OpenAI announced plans to launch ChatGPT Health, a new product that lets users connect their health records and wellness apps to the chatbot for tasks like reviewing medical records, summarizing doctor visits, and providing nutrition advice (Healthlawpolicy).
The launch put a long-simmering question front and center: is ChatGPT a medical device?
The Line OpenAI Is Walking
OpenAI sidesteps the question by stating explicitly that ChatGPT Health "is not intended for diagnosis and treatment" and should be used "in close collaboration with physicians" (Techbuzz).
That language is deliberate. Products designed for diagnosis and treatment are classified as medical devices by the FDA and must go through rigorous clinical trials and safety monitoring. By stating that ChatGPT Health isn't meant for that purpose, OpenAI keeps the product outside the FDA's jurisdiction (Techbuzz).
But critics say the disclaimer doesn't reflect reality. When OpenAI encourages users to upload lab results, track health behaviors, and reason through treatment decisions, the line between "informational tool" and "medical device" starts to blur. "If a product is doing this, one could reasonably argue it might fall under the US definition of a medical device," researcher van Kolfschooten told The Verge (Techbuzz).
What the FDA Actually Said
On January 6, 2026, the FDA published guidance that reduces oversight of certain digital health products, including AI-enabled software and wearable devices. The guidance clarifies that minimal-risk products, such as fitness wearables and health-tracking apps, generally do not require FDA regulatory oversight (Telehealth.org).
Software whose recommendations a clinician can independently review and understand falls outside the agency's medical device oversight. Recommendations that do not serve as the sole or primary basis for clinical action would not trigger FDA review (Telehealth.org).
FDA Commissioner Marty Makary put it plainly: "If something is simply providing information like ChatGPT or Google, we're not going to go in there and say, 'There's one result that is inaccurate, therefore we've got to shut this down.' We have to promote these products and, at the same time, just guard against major safety concerns" (Fox Business).
The Privacy Gap Nobody Is Closing
The regulatory question is only part of the problem. The other is privacy.
Unlike healthcare providers bound by HIPAA, consumer AI tools rely on privacy policies that companies can change. Legal experts say users have "very limited protection" beyond OpenAI's word (Techbuzz).
Uploading medical records to ChatGPT Health does not make OpenAI a covered entity or business associate under HIPAA; as a consumer health product, it sits entirely outside HIPAA's regulatory scope (Healthlawpolicy).
OpenAI has outlined data protections for the product — including encryption, data isolation, and user options to delete chats — but those are commitments, not legal obligations.
OpenAI also launched two similar-sounding products: ChatGPT Health for consumers, and ChatGPT for Healthcare, an enterprise version with HIPAA compliance. The naming creates confusion about which product carries stronger safeguards (Techbuzz).
What Doctors Are Saying
Physicians aren’t opposed to AI in medicine. They’re cautious about how fast it’s moving without guardrails.
Mayo Clinic cardiologist Paul Friedman told The Wall Street Journal: "It's not that I don't ask ChatGPT medical questions, but when I do, I always look for the references, click on them and read the abstracts — at a minimum" (Health Exec).
Memorial Sloan Kettering pathologist Anthony Cardillo flagged a different concern: "Any time I outsource my thoughts to something that isn't my own brain, I'm worried I'm going to lose that muscle memory" (Health Exec).
Health systems, meanwhile, see real pressure to adopt. Hospitals are looking for ways to deal with persistent worker shortages that burn out clinicians and delay care (Health Exec). AI tools that handle administrative load, summarize visits, or flag patterns in patient data are already being deployed across major systems.
Where This Is Headed
OpenAI has said it will release a full healthcare policy blueprint suggesting five ways AI could address healthcare's hardest challenges, including calls to clarify the regulatory pathway for AI-powered consumer medical devices and to define the scope of medical device regulation so that innovation is encouraged without blocking AI services that support doctors (Health Exec).
The FDA, for its part, says it will focus oversight on products that present meaningful risks, while giving lower-risk technologies room to develop. The regulatory shift is occurring alongside broader modernization efforts, including a joint FDA and CMS pilot program launched in late 2025 designed to expand access to digital health technologies for chronic disease care while collecting real-world safety data (Telehealth.org).
The framework is moving. The technology is moving faster. And 230 million people aren’t waiting for either to catch up.