WHO warns against use of AI tools ChatGPT, Bard, Bert in health care

Geneva: The World Health Organisation (WHO) warned on Tuesday that the risks of using artificial intelligence (AI) tools such as ChatGPT, Bard and Bert in healthcare must be carefully examined, IANS reported.

Though the organisation appreciates the appropriate use of such technologies, it said, "...there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with large language model tools (LLMs)."

WHO said in a statement, "This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision, and rigorous evaluation."

"It is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people's health and reduce inequity," it added.

The "precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies," it said.

WHO is concerned that the data used to train the AI tools could be biased and could therefore generate misleading or inaccurate information, posing risks to health, equity and inclusiveness.

WHO also said that AI may not protect sensitive data (including health data) and can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to distinguish from reliable health content.

"WHO proposes that these concerns be addressed, and clear evidence of benefit be measured before their widespread use in routine health care and medicine -- whether by individuals, care providers or health system administrators and policy-makers," the statement read.
