Healthcare Anywhere

Anywhere: Through advanced AI, specialized health support will increasingly be delivered outside of hospitals: anywhere and at any time. AI-powered prediction and scheduling tools will protect scarce resources and direct them, in a timely manner, to where they are most effective in the care chain. For example, the surgeon of the future will be supported by real-time imaging and precision robotics at locations that minimize waiting time and maximize throughput.

Nowhere: Care will move from today’s fractured business model, often characterized by limited patient contact, to a new holistic approach in which clinicians and caregivers continuously manage each person’s health remotely: scalable and personalized preventive services will reduce the need for hospital care in the first place. Such services will also amplify wellbeing in local communities by promoting participation in social activities.

Contributing projects

Artificial intelligence for medical image registration

Project description

Patients undergoing radiotherapy treatment can never lie completely still and their anatomy typically changes over time, for example due to breathing, bowel movements and changes in tumor size. To compensate for these changes, consecutive images are acquired of the patient using low-dose computed tomography (CT) or magnetic resonance imaging (MRI). The anatomical structures in these images subsequently need to be aligned with those in the planning image using a technique called image registration.

Classical methods for image registration typically suffer from being slow. Methods based on artificial intelligence (AI), on the other hand, have the potential to perform image registration in real-time, enabling important applications in radiotherapy. Small-scale studies on AI for image registration have demonstrated astonishing results. The clinical application, however, remains limited due to several challenges (such as dealing with large displacements and limited availability of annotated training data).
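The core idea of registration, classical or AI-based, is to find the spatial transform that best aligns a daily image with the planning image. As a minimal sketch (not the project's actual method, which targets deformable, real-time registration), the toy example below recovers a simple integer translation between two synthetic 2D images by brute-force minimization of the mean squared error:

```python
import numpy as np

def register_translation(fixed, moving, max_shift=5):
    """Brute-force search for the integer translation that best aligns
    `moving` to `fixed` by minimizing mean squared error (toy example)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.mean((fixed - shifted) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# Synthetic "planning" image: a bright square on a dark background.
fixed = np.zeros((32, 32))
fixed[10:20, 10:20] = 1.0
# Synthetic "daily" image: the same structure shifted by (3, -2).
moving = np.roll(np.roll(fixed, 3, axis=0), -2, axis=1)

print(register_translation(fixed, moving))  # → (-3, 2), the aligning shift
```

Real clinical registration must handle non-rigid deformations and 3D volumes; AI methods replace this exhaustive search with a learned network that predicts the transform in a single forward pass.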

Personalized healthcare processes using AI for improved treatment

Health care has developed significantly in the past decades. Advances in medicine and technology have made it possible to treat more and more diseases, and to treat them more effectively, with visible improvements in both the duration and the quality of life of individuals. At the same time, however, the complexity and cost of health care systems have grown exponentially. Nowadays, multiple treatment options often exist for a particular disease, and determining the best option for the patient at hand is a challenging problem in which many factors have to be taken into account. In recent years, significant efforts have been made to develop standardized care pathways representing best practices for a number of treatment processes, with the goal of standardizing and improving the quality of care. However, several studies have shown that the proposed care pathways are often either not used or used only as guidelines, while the actual treatment processes can deviate quite significantly from them.
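The deviation between a prescribed care pathway and the treatment processes actually observed can be quantified. As a minimal, illustrative sketch (the activity names and traces are hypothetical, and the similarity measure is a simple sequence comparison rather than a full process-mining conformance check):

```python
from difflib import SequenceMatcher

# Hypothetical standardized care pathway (activity names are illustrative).
pathway = ["intake", "diagnosis", "treatment_plan", "therapy", "evaluation"]

# Imaginary observed patient traces, e.g. extracted from a hospital event log.
traces = [
    ["intake", "diagnosis", "treatment_plan", "therapy", "evaluation"],
    ["intake", "therapy", "diagnosis", "evaluation"],
]

def conformance(trace, pathway):
    """Similarity of an observed trace to the pathway, in [0, 1]."""
    return SequenceMatcher(None, trace, pathway).ratio()

for trace in traces:
    print(round(conformance(trace, pathway), 2))
```

The first trace follows the pathway exactly (score 1.0); the second skips and reorders activities, yielding a lower score, which is the kind of deviation the studies cited above report.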

Cognitive Models as Surrogate Models for Explainable AI

Project description

Biological intelligence is explained with the help of cognitive models: mathematical or computational models that reproduce capacities such as visual categorization, language learning, and decision making. Can cognitive models also be used to explain the behavior of “black box” artificial intelligence? In this project, we evaluate the usefulness of different cognitive modeling frameworks for understanding, predicting, and intervening on the behavior of “black box” systems such as deep neural networks. We consider different stakeholders, from expert developers to end-users and external regulators, and determine the extent to which models that center on mental representations, complex dynamics, and/or statistical inference can be used to satisfy these stakeholders' explanatory needs. In this way, we aim to identify normative guidelines and best-practice methods for Explainable AI, and facilitate comparisons between human and artificial intelligence.

Assisting medical decision with Explainable AI

Project description

Modern laboratory experiments in biomedicine generate large amounts of structured, heterogeneous data that can be used to build models and assist critical decisions in clinical environments. Due to the delicate nature of these decisions, in real-world scenarios, models must be interpretable or explainable, i.e., domain experts must be able to inspect and understand the rationale underlying decisions.
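One simple, model-agnostic way to let domain experts inspect a model's rationale is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. A minimal sketch on made-up "biomarker" data (the data and the stand-in classifier are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "biomarker" data: only the first feature actually drives the label.
X = rng.normal(size=(400, 3))
y = (X[:, 0] > 0).astype(int)

def model(X):
    """Stand-in for a trained classifier (here: thresholding feature 0)."""
    return (X[:, 0] > 0).astype(int)

def permutation_importance(model, X, y):
    """Accuracy drop when each feature is shuffled: features whose
    shuffling hurts accuracy are the ones the model relies on."""
    base = np.mean(model(X) == y)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(base - np.mean(model(Xp) == y))
    return drops

print(permutation_importance(model, X, y))
```

The output shows a large accuracy drop for the first feature and none for the others, telling a clinician exactly which measurement the model's decisions hinge on.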


EAISI program management HEALTH

Pieter Van Gorp (main contact person)
Charly Bastiaansen
Marieke van Beurden
Paul Merkus
Carmen van Vilsteren
