Reshaping Patient Interaction in Health Care

AI@Work Abstract Submission

Koen V. Hindriks, Vrije Universiteit Amsterdam, Computer Science (Social Artificial Intelligence)

Current developments in health care, driven in particular by a growing aging population, have increased the administrative burden on our health care system. Value-Based Health Care (VBHC) has been proposed as a health care delivery model that rewards added value instead of volume, with the quadruple aim of obtaining better health outcomes, reducing costs, and improving the experience of both patients and health care professionals (HCPs). Measuring patient outcomes is essential for the success of VBHC and for improving the average life expectancy and quality of life of the elderly. To this end, patients are asked to complete self-reported outcome questionnaires called Patient Reported Outcome Measures (PROMs). PROMs enable a data-driven health care approach for assessing quality of life, which supports doctors and nurses in delivering personalized health care, and institutions in monitoring the effectiveness and efficiency of their services. In practice, however, response rates for PROMs are low, while the administration of PROMs by HCPs requires considerable effort and thus puts a large burden on today's health care system.

To handle the capacity problems the health care system faces, health care providers have turned to blended care on a massive scale and have begun to offer eHealth services to their patients. These services aim to empower patients and promote self-management. Self-management support is the systematic provision of education and supportive interventions to increase patients' skills and confidence in managing their health problems, including regular assessment of progress and problems, goal setting, and problem-solving support. The idea is that, by making use of digital solutions, patients keep control of their own health and remain informed of their clinical picture.
From a capacity point of view, these self-management approaches and newly launched clinics have the additional benefit that patients need to attend in person less often, as traditional consultations are replaced with online contact. Artificial Intelligence (AI) can play a key role in delegating patient data gathering and educational and supportive interventions to digital solutions implemented in the cloud and to interactive technology such as (ro)bots. For example, we have developed a social robot for the personalized and cost-effective robot-mediated administration of PROMs [Boumans, 2019; Hindriks, 2018]. Philips has developed the VitalHealth platform, which supports questionnaire administration and person-to-person chat functionality for remote support. Patients, however, require a personalized approach, and HCPs often find it difficult to implement one because they lack the time for in-depth conversations with patients. Here, conversational AI can support HCPs in identifying and addressing patient issues and in increasing patients' self-management skills. AI can thus rapidly transform the way patient interaction is done in health care.

The introduction of AI and the delegation of administrative tasks and other health care related interventions to AI systems introduce new requirements, of which we highlight three here. First, delegation requires increasing levels of automation: intelligent systems that can operate without the need for human intervention (autonomy). Autonomy is important if only because the need for human support defeats the purpose of reducing the workload of HCPs. At the same time, however, we should realize that the capabilities of AI systems are limited and that such systems cannot cater for all patient needs. It is therefore important, in the design and evaluation of these systems, to clearly identify the limits of what a system can do for a patient and to implement mechanisms (using AI again) that are able to indicate and trigger the need for HCP support. Second, patient interaction requires transparent handling of data and guarantees that the data collected are valid (reliability). Reliability requires careful evaluation and comparison of data collected by an AI system against a gold standard (typically, a best practice for collecting the data by a human). Moreover, AI systems should also aim to collect contextual information that can help HCPs understand and interpret the data collected. As we cannot require HCPs to go through all automatically collected data, however, we need to ensure that AI systems are able to identify cues (using AI again) that indicate the need for an HCP to look at specific data that a patient has provided. Third, although most patients appear open to the use of new technology, patients only accept technology if it is sufficiently capable of serving their needs (acceptance). We need to better understand the factors that identify which patients are open to using AI systems and how they can benefit most from these systems.
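The first two requirements both call for an automatic check that routes a patient to an HCP when the system reaches its limits. The idea can be illustrated with a minimal sketch; all names, cues, and thresholds below are hypothetical illustrations, not part of any deployed system described here:

```python
# Hypothetical sketch of an escalation check for robot-administered PROMs.
# Question ids, distress cues, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PromResponse:
    question_id: str
    score: int             # e.g., a 1-5 Likert answer (1 = worst)
    asr_confidence: float  # speech-recognition confidence for spoken answers
    free_text: str = ""    # optional open comment by the patient

DISTRESS_CUES = {"pain", "alone", "afraid", "worse"}

def needs_hcp_review(response: PromResponse,
                     score_alert_threshold: int = 2,
                     min_confidence: float = 0.6) -> bool:
    """Return True when a response should be routed to a health care
    professional instead of being processed fully automatically."""
    if response.asr_confidence < min_confidence:
        return True  # data may be unreliable: a human should verify it
    if response.score <= score_alert_threshold:
        return True  # a poor self-reported outcome warrants attention
    words = set(response.free_text.lower().split())
    return bool(words & DISTRESS_CUES)  # contextual cue in the free text

# Example: a low-confidence spoken answer is escalated rather than trusted.
r = PromResponse("eq5d_pain", score=4, asr_confidence=0.4)
print(needs_hcp_review(r))  # True
```

In a real system each of these hand-written rules would itself be a learned component (using AI again), but the structure is the same: the system must know when to hand a patient over rather than proceed autonomously.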
As we have seen above, personalization of AI systems so that they can adapt to individual patients (using AI again) may be an important tool for increasing patient acceptance. The lessons learned from our work indicate the need for a new methodology that takes human factors and values into account when introducing AI systems and new interactive technology. We believe it is not sufficient to rely on new design methods that emphasize taking values into account; rather, a more integral empirical cycle is needed that explores and identifies the effects of new AI technology and also identifies its limits. We need a design methodology that supports developers in reflecting on the effects and limits of their systems. To this end, we need an empirical design cycle that uses exploratory empirical research (using, e.g., ethnographic methods or techniques from theater) to establish the needs of users and to identify which human factors are relevant (e.g., is a more personal communication style preferred over a more formal one?). Careful, continuous evaluation in the context of application is needed to identify whether the desired effects are realized, but also which potential side effects are introduced. This requires continuous, iterative design and development cycles in which prototypes are evaluated, new AI techniques are introduced to address (side) effects, and the limits of AI systems are identified, to make sure that HCPs remain in the loop when patients need them most.


Boumans, R., van Meulen, F., Hindriks, K. V., Neerincx, M. A., & Olde Rikkert, M. G. M. (2019). Robot for health data acquisition among older adults: a pilot randomised controlled crossover trial. BMJ Quality & Safety, 28, 793-799. DOI: 10.1136/bmjqs-2018-008977.

Hindriks, K. V., Boumans, R., van Meulen, F., Neerincx, M., & Olde Rikkert, M. (2018). An interview robot for collecting patient data in a hospital. ERCIM News, 114, 20-21.