Human-Centered Reinforcement
Learning for Nutritional Coaching

Innovating health coaching with human-centered reinforcement learning

Nutritional Coaching

Health coaching is a service that empowers patients to work toward healthier lifestyles with the assistance of clinical experts. Since a patient's socioeconomic situation influences their access to traditional means of health coaching, there is an active effort to make this service more accessible through conversational agents (CAs) such as chatbots. Patients can receive health advice from a CA so long as they have a device with Internet access.

Since a majority of CAs in healthcare are designed to respond to predetermined prompts, they have a limited ability to engage with patients in open-ended conversations. Integrating machine learning into CAs addresses this limitation by allowing them to learn how to interact when presented with unfamiliar topics.

Reinforcement Learning + Applications

Reinforcement learning (RL) is a machine learning approach rooted in the idea of modifying behavior depending on how interactions with the surrounding environment are rewarded or penalized. An RL agent trained to play a video game, for example, would learn to distinguish which in-game actions lead to favorable outcomes and which lead to undesirable ones.
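
To make this reward-and-penalty loop concrete, the sketch below trains a tabular Q-learning agent on a toy corridor environment. Everything in it (the environment, reward values, and hyperparameters) is an illustrative assumption for exposition, not part of this project:

    import random

    # Toy environment: the agent walks positions 0..4 and is rewarded for
    # reaching position 4 (the goal), with a small penalty for each step.
    N_STATES, GOAL = 5, 4
    ACTIONS = [-1, +1]  # move left or move right

    def step(state, action):
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else -0.1
        return next_state, reward, next_state == GOAL

    # Q[s][a] estimates the long-run value of taking action a in state s.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

    for episode in range(200):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: usually exploit the best-known action, occasionally explore.
            a = random.randrange(2) if random.random() < epsilon \
                else max((0, 1), key=lambda i: Q[state][i])
            next_state, reward, done = step(state, ACTIONS[a])
            # Nudge the estimate toward the reward plus discounted future value.
            Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
            state = next_state

    print(Q)  # after training, the "+1" action dominates in every state

After a few hundred episodes, the learned values favor moving right in every state: the agent's behavior has been shaped purely by which actions were rewarded or penalized, exactly the mechanism described above.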

Since current RL-powered CAs typically measure the efficiency of a conversation by the time it takes to convey information, they are prone to producing terse dialogue that users perceive as random. This, coupled with the difficulty of explaining RL algorithms' decisions, motivates developing RL-based CAs around human reasoning and expectations.
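
To illustrate the issue (the function names, signals, and weights below are hypothetical, not drawn from any deployed system), compare a purely time-based reward with one that also weights human-facing signals:

    def efficiency_only_reward(turns_taken: int, info_conveyed: bool) -> float:
        # Efficiency-driven design: every extra turn is penalized, so the
        # agent drifts toward abrupt dialogue that can feel random to users.
        return (1.0 if info_conveyed else 0.0) - 0.1 * turns_taken

    def human_centered_reward(turns_taken: int, info_conveyed: bool,
                              coherence: float, user_rating: float) -> float:
        # Hypothetical human-centered variant: efficiency still matters, but
        # coherence with the prior dialogue and user feedback (both scored
        # in [0, 1]) discourage responses that feel disconnected.
        return ((1.0 if info_conveyed else 0.0) - 0.05 * turns_taken
                + 0.5 * coherence + 0.5 * user_rating)

Under the first reward, an agent is paid only for brevity; under the second, a terse but incoherent reply scores worse than a slightly longer reply that follows the flow of the conversation.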

Our Approach

We will address the limitations detailed above in two complementary ways. First, we will develop a more general, data-driven approach to nutritional health coaching that does not rely on hardcoding appropriate responses for an RL algorithm. At the same time, we will identify ways to align RL-based conversations with human reasoning and to generate human-understandable explanations of RL inferences and recommendations.
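
As one illustrative direction for the explanation goal (a sketch only; the action names, value estimates, and phrasing are assumptions, not the project's actual method), an agent's learned value estimates could be verbalized into a comparison a patient can follow:

    def explain_recommendation(q_values: dict) -> str:
        # Hypothetical explanation generator: rank candidate coaching actions
        # by their learned value estimates and verbalize the comparison, so a
        # patient can see why one suggestion was preferred over another.
        ranked = sorted(q_values.items(), key=lambda kv: kv[1], reverse=True)
        (best_action, best_value), (alt_action, alt_value) = ranked[0], ranked[1]
        return (f"Suggested '{best_action}' (estimated benefit {best_value:.2f}) "
                f"over '{alt_action}' ({alt_value:.2f}).")

    print(explain_recommendation(
        {"suggest a meal plan": 0.82, "ask about allergies": 0.64, "end session": 0.10}))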

Acknowledgements

This project is funded in part by an award from the National Science Foundation (Award #2306690).