Cutting-edge AI in healthcare sparks WHO warning on potential risks

WHO warns of risks as generative AI revolutionizes healthcare.

GENEVA, Jan 18: Generative artificial intelligence (AI) is making waves in healthcare, promising advances such as faster drug development and expedited diagnoses. However, the World Health Organization (WHO) issued a stern warning on Thursday, emphasizing the need for increased scrutiny of the potential risks associated with this cutting-edge technology.

The WHO has undertaken a thorough examination of the benefits and dangers posed by large multi-modal models (LMMs), a relatively novel form of AI that is rapidly gaining traction in health. LMMs are a type of generative AI capable of processing various data inputs, including text, images, and videos, and of generating outputs that are not confined to the type of data initially fed into the algorithm.

The organization predicts that LMMs will find extensive applications in healthcare, scientific research, public health, and drug development. Highlighting five key areas of application, the WHO envisions LMMs contributing to diagnosis (addressing patients’ written queries), scientific research and drug development, medical and nursing education, clerical tasks, and patient-guided use (such as symptom investigation).

Despite this promising potential, the WHO cautioned against documented risks, including the possibility that LMMs produce false, inaccurate, biased, or incomplete outputs. Concerns also extend to the quality of the data used for training, which may carry biases related to race, ethnicity, ancestry, sex, gender identity, or age.

“As LMMs gain broader use in healthcare and medicine, errors, misuse, and ultimately harm to individuals are inevitable,” warned the WHO. In response to these challenges, the organization issued recommendations on the ethics and governance of LMMs, targeting governments, tech firms, and healthcare providers to ensure the safe utilization of this technology.

Jeremy Farrar, WHO chief scientist, emphasized the necessity for transparent information and policies to manage the design, development, and use of LMMs. The organization stressed the need for liability rules to compensate users harmed by LMMs and underscored the importance of involving medical professionals and patients in the development process.

While AI is already embedded in diagnosis and clinical care, with applications in radiology and medical imaging, the WHO emphasized that LMMs present unique risks that societies and health systems may not yet be fully prepared to address. This includes concerns about compliance with existing regulations, especially regarding data protection, and the potential entrenchment of tech giants’ dominance due to the substantial resources required for LMM development.

In conclusion, the WHO recommended collaborative development involving scientists, engineers, medical professionals, and patients. The organization also highlighted the vulnerability of LMMs to cybersecurity risks that could compromise patient information and the trustworthiness of healthcare provision. It further suggested the appointment of regulators to approve LMM use in healthcare, along with auditing and impact assessments to mitigate risks associated with this transformative technology.
