Man develops psychiatric disorder after following ChatGPT health advice

A 60-year-old man with no prior medical or psychiatric history was hospitalised after developing paranoia, hallucinations, and psychosis — symptoms later traced to bromism, a rare neuropsychiatric disorder caused by bromide poisoning.

According to a case study published in Annals of Internal Medicine: Clinical Cases, the man arrived at an emergency department convinced his neighbour was trying to poison him.

However, tests and input from Poison Control revealed the real cause: months of self-administered sodium bromide, taken on the advice of ChatGPT.

The patient told doctors he had read online about the harmful effects of table salt (sodium chloride) and sought alternatives through ChatGPT. The AI reportedly suggested bromide as a substitute, a dangerous recommendation: sodium bromide is sometimes used for cleaning and industrial purposes, not for human consumption.

“For three months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide,” the report stated.

Bromism develops after prolonged ingestion of bromide salts, leading to symptoms including confusion, paranoia, and hallucinations.

The authors noted they did not have access to the patient’s conversation history with ChatGPT, meaning the exact advice he received will likely remain unknown, since the AI’s responses vary based on previous inputs.

The case report comes shortly after OpenAI launched GPT-5, which the company described as its “best model yet for health-related questions.”

TAGS: AI, Artificial Intelligence, ChatGPT Risks