The rapid rise of chatbots available online has brought some clear advantages. They can organize and summarize vast amounts of information in a very short time. There is also a downside, however: they can create the impression that their answers are always accurate. In healthcare, artificial intelligence is still far from having all the answers, and relying on information that has not been validated by a doctor can lead to medical complications that could have been easily identified and treated at an early stage.
Every day, millions of people ask chatbots about medical conditions. They seek advice on symptoms, treatments, and medical tests, often treating chatbots as if they were real doctors. It can feel more personal, more convenient, faster, and, of course, less expensive. Unfortunately, chatbots intended for the general public are not medical experts, however convincing they may sound.
At the same time, the chatbots used
by the public are not always the same as those used by healthcare
professionals. Doctors may rely on AI systems specifically developed for
medical support, designed to assist clinical decision-making from a scientific perspective.
For example, there are AI models created for healthcare professionals, such as Articulate Medical Intelligence Explorer (AMIE), developed by Google, or MedFound, created by researchers at Beijing University of Posts and Telecommunications. Even in these cases, there is still significant room for improvement.
Chatbot misdiagnosis rates between 40% and 80%
In a recent study published in JAMA Network Open, a journal of the American Medical Association, researchers analyzed 21 AI language models developed by OpenAI, Google, xAI, DeepSeek, and Anthropic, all of them widely accessible to the general public. The study found that more than 80% of the diagnoses the chatbots generated during the early evaluation of medical cases were incorrect. Why? There are several reasons. One major limitation is that users often provide incomplete or vague information when describing their symptoms. Without sufficient medical detail, the chatbot has to interpret a limited amount of information and may quickly reach an inaccurate conclusion.
Researchers also found that when the same chatbot receives complete and accurate information, and the medical case is described thoroughly, the error rate can drop to around 40%. However, the obvious question remains: can a patient accurately provide all relevant clinical details without first being evaluated by a doctor? In most cases, no.
The chatbot may reassure you, but it does not cure you!
Another
reason why chatbots cannot replace medical diagnosis is that many AI systems
are designed to provide responses that feel helpful and agreeable. In some
cases, they may reinforce what the user already believes rather than challenge
assumptions that could be medically incorrect.
A chatbot may appear empathetic, but
that does not mean it fully understands the patient’s condition. It does not
have access to the patient’s medical history, previous investigations, test
results, or ongoing treatments. All of these are essential for making a proper
medical assessment.
There is another important limitation: AI systems have no clinical experience. They cannot examine a patient and cannot apply medical judgment the way a doctor does, even when their responses sound detailed and persuasive.
AI cannot always distinguish real information from false information
Current AI systems, especially those
available for free or through low-cost subscriptions, generate answers based on
patterns found in available data. That data may be incomplete, outdated, or
even entirely false.
A recent article published in Nature highlighted one such example. Researchers from the University of Gothenburg designed an experiment to test the limitations of AI. They published information online about a fictional skin disease called bixonimania and uploaded fake research papers attributed to a fictional scientist from a fictional institution called Starfleet Academy.
Although the fake studies contained
obvious signs that they were not real, several chatbots, including Microsoft
Copilot, Google Gemini, and ChatGPT, treated the information as genuine. These
systems began generating detailed responses about bixonimania as if it were a
real condition, even claiming that it affects one in every 90,000 people.
“The therapeutic act happens between the patient and the doctor”
Dr. Ciprian Ene, coordinating
physician at Quantum Therapy Integrative Medicine Center in Bucharest,
acknowledges some benefits of AI, but emphasizes that direct interaction
between doctor and patient remains essential for appropriate and effective
treatment.
“Recently, we have seen an
exponential development of AI, and naturally it is beginning to influence the
doctor-patient relationship. Within certain limits, this can be useful, as
patients understandably want to be informed and seek answers to their health
concerns. However, the information they receive should come from reliable
sources, such as clinical studies and academic medical publications. AI may
help translate this information into a more accessible format, but in the end,
it still needs to be discussed and interpreted together with a specialist,
precisely to avoid the inaccuracies that can appear in these searches.”
“Health is not found on the internet”
“The doctor can also use AI, for
example in imaging or in processing large volumes of data. But ultimately, the
doctor must remain the one who correctly informs the patient and helps them
apply that information in practice. The doctor remains the authority in
medicine, but also someone who should build a partnership with the patient,
understand the information they may have gathered from various sources, assess
it critically, and help them use it in the most effective way possible.”
Dr. Ene points out that many patients
now arrive at consultations with lists of questions and answers already
generated by AI. “We explain to patients that, regardless of how advanced
technology becomes and how applicable it may be in medicine, the therapeutic
act still takes place between the patient and the doctor. Technology can
support this relationship, just as developments in fields like physics or
computer science can support medicine. But in both conventional and
complementary medicine, treatment needs to be personalized, and that is
something only a doctor can provide.”
“The causes of illness are unique to
each patient. Symptoms may reflect physical or biochemical conditions, but they
may also be linked to psycho-emotional factors. These aspects can only be
identified through clinical reasoning combined with empathy.” Dr. Ene believes that a
better-informed patient is often a better-treated patient, as long as that
information is accurate and grounded in a realistic understanding of disease
causes and treatment options.
“At the same time, modern medicine
should also train doctors, including medical students, to communicate better
with patients and to allocate enough time for the explanations they need. This
is one of the reasons why many people turn to AI. Patients should receive
answers after a discussion with a doctor, because medicine is both science and
art. It is not a search engine, and health is not found on the internet.”
In conclusion, a chatbot can help
users better understand medical test results, clarify a diagnosis already made
by a doctor, or explain medical documents in simpler terms. However, a chatbot
cannot replace a medical consultation, professional recommendations, or a
diagnosis established by a qualified doctor.