Google AI Overviews criticised for giving incorrect medical advice: report
Google’s AI Overviews feature has come under scrutiny after it was reported to have shared incorrect and potentially harmful medical information, raising concerns at a time when artificial intelligence adoption in healthcare is accelerating.
According to an investigation by The Guardian, the AI-powered search summary tool provided erroneous advice in response to specific medical questions.
In one instance, when asked about dietary recommendations for people with pancreatic cancer, AI Overviews reportedly advised avoiding high-fat foods. This contradicts professional medical guidance, which recommends a high-fat diet for such patients; failing to follow it carries serious health risks.
The report also found that AI Overviews displayed misleading information about normal ranges for liver blood tests. The figures shown did not account for factors such as age, gender, nationality, or ethnicity, potentially leading users to believe their results were normal when they were not. Information related to women’s cancer tests was also said to be inaccurate, with genuine symptoms being dismissed.
A Google spokesperson told The Guardian that the examples were based on incomplete screenshots but said the linked sources were well-known and reputable. After the issue gained attention, the reported responses were removed, and the AI-generated summaries were no longer visible for some of the tested medical queries.
The incident has raised concerns because AI Overviews appear prominently at the top of Google Search results, and many users rely on them for quick answers. The issue emerges as companies such as OpenAI and Anthropic continue to push healthcare-focused AI tools, where even small errors could have serious consequences.