Nearly half of AI chatbot responses on cancer treatment ‘problematic’: study


A new study has found that major AI chatbots frequently provide problematic advice on cancer treatment, raising concerns about potential risks to patients.

Researchers from the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center evaluated five widely used chatbots: Gemini, Meta AI, ChatGPT, Grok, and DeepSeek. The findings, published in BMJ Open, showed that 49.6 percent of responses related to cancer treatments were rated "problematic" by expert reviewers.

Of these, 30 percent were classified as somewhat problematic and 19.6 percent as highly problematic. The study found no major differences in overall response quality between platforms, though Grok produced a higher-than-expected share of highly problematic answers.

To test reliability, researchers used a method called “straining,” prompting chatbots with misleading or high-risk queries. These included claims linking cancer to 5G or antiperspirants, as well as questions about vaccines and anabolic steroids. In some cases, chatbots suggested alternative treatments instead of standard therapies like chemotherapy.

Lead author Nick Tiller said the study simulated how typical users interact with AI tools, often treating them like search engines. "A lot of people are asking exactly those questions," he said, noting that users' own biases can shape the responses they receive.

The findings add to growing evidence that AI tools can struggle with medical accuracy.

A separate study published in JAMA Network Open earlier this month found that AI chatbots misdiagnosed conditions in more than 80 percent of early clinical cases.

Researchers warned that while AI systems can perform well in structured tests, they often fall short in real-world medical reasoning, potentially leading to misleading or incomplete advice.
