

WHO guidelines for regulation of AI tools’ use in healthcare

Press Release

Tue. 24 October 2023


GENEVA: The World Health Organization (WHO) has recently published a comprehensive report outlining key regulatory considerations for the integration of artificial intelligence (AI) into healthcare. The report aims to ensure the safety and efficacy of AI systems, to expedite their availability to those in need, and to promote dialogue among stakeholders, including developers, regulators, manufacturers, healthcare professionals, and patients.

Given the growing accessibility of healthcare data and rapid advances in analytical techniques, whether based on machine learning, logic or statistics, AI tools hold the potential to transform the healthcare sector.

WHO acknowledges the immense potential of AI to improve health outcomes by strengthening clinical trials, enhancing medical diagnosis and treatment, promoting self-care and patient-centric care, and augmenting the knowledge, skills and competencies of healthcare professionals.

For instance, AI applications can be particularly advantageous in underserved areas facing a scarcity of medical specialists, aiding in the interpretation of retinal scans and radiology images, among other functions.

Nevertheless, the swift deployment of AI technologies, including large language models, without a comprehensive understanding of their performance, poses the risk of harming end-users, including healthcare professionals and patients.

Because AI systems that utilise health data may gain access to sensitive personal information, robust legal and regulatory frameworks are needed to protect privacy, security and data integrity, a goal that the publication aims to help achieve and sustain.

Dr Tedros Adhanom Ghebreyesus, WHO Director-General, remarked, “Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation. This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks.”

In response to the pressing need for responsible management of the rapid proliferation of AI health technologies, the report identifies six key areas for the regulation of AI in the healthcare domain.

One of the primary recommendations is to emphasise transparency and documentation to foster trust. This can be achieved through comprehensive documentation of the entire product lifecycle and tracking of development processes.

Another crucial aspect highlighted in the report involves comprehensive risk management. It suggests thorough consideration of various factors such as intended use, continuous learning, human interventions, model training, and cybersecurity threats. The emphasis is on simplifying the models as much as possible to ensure effective management of potential risks.

Furthermore, the report underscores the significance of external validation of data and clarity regarding the intended use of AI. This is seen as crucial in ensuring safety and facilitating effective regulation within the healthcare sector.

The importance of committing to data quality is also stressed, emphasising rigorous evaluation of systems prior to release. This is aimed at preventing the amplification of biases and errors that can arise from inadequate data quality measures.

The report also addresses the challenges posed by complex regulations such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States, recommending a focus on understanding jurisdictional scope and consent requirements in order to safeguard privacy and data protection.

Lastly, the report recommends encouraging collaboration among regulatory bodies, patients, healthcare professionals, industry representatives, and government partners. This collaborative effort aims to ensure that AI products and services comply with regulations throughout their life cycles, promoting a responsible and well-regulated AI healthcare landscape.

AI systems are intricate and rely not only on the code upon which they are constructed but also on the data used for their training, which is often derived from clinical settings and user interactions. Improved regulation can help manage the risks of AI amplifying biases in training data. For instance, ensuring that AI models accurately represent the diversity of populations can be challenging, as failure to do so may result in biases, inaccuracies, or malfunction. Regulations can be instrumental in mandating the reporting of attributes such as gender, race, and ethnicity in the training data, thereby ensuring intentional representativeness.
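The representativeness reporting described above could, in practice, take the form of a simple demographic audit of a training dataset. The following Python sketch is purely illustrative and is not part of the WHO guidance; the column names, groups and reference proportions are assumptions made for the example.

```python
# Illustrative sketch: comparing the demographic make-up of a training dataset
# against a reference population. All names and figures here are hypothetical.
import pandas as pd


def representation_report(df: pd.DataFrame, attribute: str,
                          reference: dict) -> pd.DataFrame:
    """Report the observed share of each group versus its expected share."""
    observed = df[attribute].value_counts(normalize=True)
    rows = []
    for group, expected_share in reference.items():
        observed_share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "expected_share": expected_share,
            "observed_share": round(observed_share, 3),
            "gap": round(observed_share - expected_share, 3),
        })
    return pd.DataFrame(rows)


# Example usage with made-up data: a gender column audited against a 50/50 reference.
training_data = pd.DataFrame(
    {"gender": ["female", "male", "male", "female", "male", "male"]}
)
print(representation_report(training_data, "gender",
                            {"female": 0.5, "male": 0.5}))
```

A report of this kind could accompany regulatory submissions to document how closely the training data reflect the population the AI system is intended to serve.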

The new WHO publication aims to establish fundamental principles that governments and regulatory authorities can follow when developing new guidance or adapting existing guidance on AI at the national or regional level.
