HepSA Community News

Is it Safe to Ask AI for Health Advice?

People are increasingly relying on “AI” sources for health information, but is it safe to do so?

Worldwide, one in four queries to ChatGPT is health-related, while almost one in ten Australians reported that they had asked ChatGPT about their health1.

However, independent studies consistently show that large language model (LLM) tools – commonly referred to as “AI” – give unsafe health advice.

One study found that, in over 50 per cent of cases where emergency care was needed, ChatGPT Health failed to recommend it. At the same time, it suggested emergency care to 64 per cent of people who had a non-urgent problem2.

Recently, Google removed several AI Overviews on health topics after a Guardian investigation found the advice about blood test results was dangerous and misleading.

Similarly, the BBC reports that a study by researchers at the University of Oxford, published in Nature Medicine3, found that seeking medical advice from AI chatbots is “dangerous” due to their “tendency to provide inaccurate and inconsistent information”.


According to the BBC report, study co-author Dr Rebecca Payne, who is also a GP, said “despite all the hype, AI just isn’t ready to take on the role of the physician.”

“Patients need to be aware that asking a large language model about their symptoms can be dangerous,” she said, pointing out that the chatbots were giving wrong diagnoses and failing to recognise when urgent help was needed.

Clearly, AI tools don’t merit the level of trust that people are placing in them.

The Sydney Health Literacy Lab has developed an education resource for using AI more safely. It suggests only asking AI questions that have a straightforward factual answer, where credible information is readily available, and preferably checking other sources as well.

They point out that ChatGPT will:

  • Sound authoritative and confident, even when it is wrong
  • Make up references, or
  • Provide real references that do not actually match the information it gives4.


In addition to the risks from inaccurate information, there is the risk of disclosing personal information.

As one community sector IT administrator explained: “There are concerns around data security and confidentiality. A lot of generative LLM systems use data from user input and prompts to feed into the model training, unless you opt out. There have been issues with sensitive data and personal identification information leakage from these LLMs.”

So before asking Grok, ChatGPT or Gemini, check out resources and sites with answers provided by real (human) intelligence. In Australia, a number of high-quality, credible health resources available online provide a safer alternative to AI tools.

These include healthdirect and the Better Health Channel (both government funded), as well as the healthdirect phone line, which allows people to speak to a nurse about their concerns. There are also many helplines that provide support and information about specific health conditions.

  1. https://theconversation.com/chatgpt-health-promises-to-personalise-health-information-it-comes-with-many-risks-273699
  2. https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies
  3. https://www.nature.com/articles/s41591-025-04074-y
  4. https://www.sydneyhealthliteracylab.org.au/onlinehealthlit-chatgpt

Last updated 1 May 2026
