BD.20.010 – Enabling responsible AI in healthcare: establishing boundary conditions and developing actionable solutions

Route: Creating Value through responsible access to and use of big data

Cluster question 105: How can Big Data and technological innovation (e-health) contribute to health care?

Problem: Digital technologies have transformed society at many levels. In particular, artificial intelligence (AI), a technology capable of making use of large amounts of data (big data), has great potential to revolutionise healthcare, yet its application is still limited. Several obstacles stand between AI innovation and its application in the health sector. These barriers must be overcome to harness AI’s full potential while making responsible use of the technology.

Primary objective: Establish boundary conditions and develop actionable solutions for AI prediction algorithms in healthcare in order to achieve responsible medical AI. These solutions, which can also be understood as a knowledge infrastructure, will enable the translation of current soft guidelines and legal frameworks into practice, helping to close the gap between theory and implementation. Specifically, this project will address the following ‘four key issues’ faced by medical AI: bias and unintended results, lack of explainability and transparency, absence of adequate evidence of cost-effectiveness, and insufficient proof of causality.

Strategy and deliverables: Based on real use cases, the ‘four key issues’ of medical AI will be investigated. By combining scientific research with the engagement of multiple stakeholders to find solutions to technical, societal, ethical and legal issues, we will approach medical AI in a multidisciplinary and inclusive way. Stakeholders include the AI community (i.e. developers and manufacturers), governmental bodies, regulators, health insurance companies, healthcare institutions, clinicians, healthcare professionals, patients and the general public. This approach will allow us to design actionable solutions and to develop an open-source tool for the whole community, in order to achieve responsible medical AI.

Impact: This project aims to benefit all stakeholders involved and, ultimately, to contribute to a fair and just society that embraces technological advances in a responsible way.

Keywords

AI, Artificial Intelligence, Big Data, boundaries, health care, medical, Responsible AI

Other organisations

Clinical Artificial Intelligence and Research Lab (CAIRElab), Leids Universitair Medisch Centrum (LUMC); pacmed.ai

Submitter

Organisation National eHealth Living Lab (NeLL), Leids Universitair Medisch Centrum (LUMC).
Name Prof. dr. N.H. (Niels) Chavannes
E-mail N.H.Chavannes@lumc.nl
Website https://nell.eu/, https://www.lumc.nl/