The World Health Organization (WHO) urges caution in the use of artificial intelligence (AI) in order to protect human safety and health, according to a statement the organization published on May 16.
The WHO fears that the hasty deployment of untested systems such as ChatGPT could lead to medical errors, harm patients, erode trust in AI, and slow the adoption of such technologies around the world.
The WHO's arguments are as follows:
– AI generates answers that may appear authoritative and plausible to the user but may be completely wrong or contain serious errors, especially on matters of health;
– AI may be trained on data used without consent, and may fail to protect the sensitive information (for example, about health) that users provide in order to obtain a response;
– AI can be used to produce "persuasive disinformation" in the form of text, audio, or video that audiences will find difficult to distinguish from authentic medical content.
The WHO proposes that these concerns be examined, and clear evidence of benefit established, before AI is widely deployed in medicine and healthcare.