ComPsy-FoMo:

Computational Psychology & Foundation Models

Thursday, Nov 30, 6650 Rue Saint-Urbain, 5th floor, #500, Montreal


Registration | Remote call-in link

11:45 am - 12:30 pm  Towards an Inter-Personalized Computational Psychiatry  Guillaume Dumas (UdeM/Mila)

Mental health is a multidimensional challenge, marked by rapid brain changes and essential interpersonal relationships. This complicates the application of traditional statistical methods and of neuropsychiatric approaches classically focused on isolated individuals. The combination of artificial intelligence and multi-brain neuroscience offers an innovative path towards a holistic understanding that encompasses neurobiological, behavioral, and social scales. These advanced technologies facilitate the analysis of dynamics and interactions, whether between genes, neurons, or even individuals. This more integrative perspective paves the way for an inter-personalized computational psychiatry. Combining mathematical tools and models with an ecosocial perspective promises not only to reinvent the detection and treatment of psychiatric disorders but also to open the way to medicine beyond the individual.
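As a purely illustrative aside (not material from the talk), the sketch below computes one standard inter-individual measure from multi-brain neuroscience: the phase-locking value between two participants' band-passed signals. The synthetic signals, frequency band, and sampling rate are all assumptions made for the example.

```python
# Illustrative sketch: inter-brain phase synchrony via the phase-locking value (PLV),
# a common measure in hyperscanning analyses of social interaction.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, low, high, fs, order=4):
    """Zero-phase band-pass filter to isolate the frequency band of interest."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def plv(x, y, low=8.0, high=12.0, fs=250.0):
    """Phase-locking value between two signals in the alpha band (8-12 Hz, assumed)."""
    phase_x = np.angle(hilbert(bandpass(x, low, high, fs)))
    phase_y = np.angle(hilbert(bandpass(y, low, high, fs)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Synthetic example: two noisy 10 Hz signals sharing a common component.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)
sig_a = shared + 0.5 * np.random.randn(t.size)
sig_b = shared + 0.5 * np.random.randn(t.size)
print(f"inter-brain PLV (alpha band): {plv(sig_a, sig_b, fs=fs):.2f}")
```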

12:30 - 1:30 pm  The Doctor Will Convince You Now  Guillermo Cecchi (IBM Research)

The phenomenal capabilities displayed by LLMs represent significant opportunities and challenges for biomedical applications. In a collaboration with the MIT Media Lab and Stanford Medical, we investigated people's perception of AI-generated medical advice compared to that provided by human doctors. 300 naïve participants evaluated responses either written by medical professionals on an online healthcare platform or generated by an LLM (GPT-3), each labeled by medical experts as highly or poorly accurate. Participants struggled to differentiate between AI-generated and doctors' responses and showed a preference for LLM advice, rating highly accurate LLM responses as more valid, trustworthy, and satisfactory. Low-accuracy LLM responses performed similarly to or better than doctors' responses in participants' evaluations. Interestingly, participants were more trusting of high-accuracy LLM responses if they believed the response came from a doctor, showcasing a bias toward LLM advice perceived as doctor-endorsed. Both experts and non-experts displayed biases favoring LLM responses for their perceived thoroughness and accuracy but still valued doctors' involvement in medical advice delivery. The study emphasizes the need for collaboration between AI systems and medical professionals to mitigate misinformation risks while leveraging the benefits of advanced technology in healthcare delivery.
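For a concrete feel of this kind of evaluation, the snippet below is a hedged, self-contained sketch (not the study's analysis code): it compares synthetic trust ratings for doctor-written versus LLM-generated responses with a two-sample t-test. The column names, rating scale, group sizes, and numbers are all invented.

```python
# Hypothetical sketch: comparing trust ratings of doctor vs. LLM responses
# on simulated data with a two-sample t-test.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
ratings = pd.DataFrame({
    "source": ["doctor"] * 150 + ["llm_high_acc"] * 150,      # response origin (assumed labels)
    "trust": np.concatenate([rng.normal(4.8, 1.0, 150),        # invented 7-point-scale ratings
                             rng.normal(5.2, 1.0, 150)]).clip(1, 7),
})

doctor = ratings.loc[ratings["source"] == "doctor", "trust"]
llm = ratings.loc[ratings["source"] == "llm_high_acc", "trust"]
t, p = ttest_ind(llm, doctor)
print(f"mean trust: doctor={doctor.mean():.2f}, LLM={llm.mean():.2f}, t={t:.2f}, p={p:.3f}")
```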

1:30 - 2:30 pm  RL Psychotherapy AI Companion  Djallel Bouneffouf (IBM Research)

We introduce a Reinforcement Learning Psychotherapy AI Companion that generates topic recommendations for therapists based on patient responses. The system uses Deep Reinforcement Learning (DRL) to generate multi-objective policies for four different psychiatric conditions: anxiety, depression, schizophrenia, and suicidal cases. We present our experimental results on the accuracy of recommended topics using three different scales of working alliance ratings: task, bond, and goal. We show that the system captures the real data (historical topics discussed by the therapists) relatively well, and that the best-performing models vary by disorder and rating scale. To gain interpretable insights into the learned policies, we visualize policy trajectories in a 2D principal component analysis (PCA) space, along with their transition matrices. These visualizations reveal distinct patterns in policies trained with different reward signals and on different clinical diagnoses. Our system's success in generating Disorder-Specific Multi-Objective Policies (DISMOP) and interpretable policy dynamics demonstrates the potential of DRL in providing personalized and efficient therapeutic recommendations.
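To make the general recipe concrete, the following is a minimal, hypothetical sketch in the spirit of the abstract (not the DISMOP implementation): a tabular Q-learning topic recommender trained against a scalarized multi-objective "working alliance" reward, followed by a 2D PCA projection of its state-action trajectory. The state/topic counts, reward weights, and session dynamics are all invented for illustration.

```python
# Toy sketch: multi-objective RL topic recommendation + PCA view of the policy trajectory.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_states, n_topics = 6, 8                              # toy patient-response clusters and topics
weights = {"task": 0.4, "bond": 0.3, "goal": 0.3}      # assumed multi-objective mixing weights

# Synthetic per-(state, topic) alliance sub-scores standing in for clinical ratings.
alliance = {k: rng.uniform(0, 1, size=(n_states, n_topics)) for k in weights}

def reward(s, a):
    """Scalarized multi-objective reward: weighted sum of task/bond/goal sub-scores."""
    return sum(w * alliance[k][s, a] for k, w in weights.items())

# Tabular Q-learning over a toy session dynamic (next state drawn at random).
Q = np.zeros((n_states, n_topics))
alpha, gamma, eps = 0.1, 0.9, 0.2
state, trajectory = 0, []
for step in range(5000):
    action = rng.integers(n_topics) if rng.random() < eps else int(Q[state].argmax())
    r = reward(state, action)
    next_state = rng.integers(n_states)
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    trajectory.append(np.concatenate([np.eye(n_states)[state], np.eye(n_topics)[action]]))
    state = next_state

# Project the (state, recommended-topic) trajectory onto 2 principal components,
# analogous in spirit to inspecting learned policy dynamics in a 2D PCA space.
coords = PCA(n_components=2).fit_transform(np.array(trajectory))
print("greedy topic per state:", Q.argmax(axis=1))
print("first trajectory points in PCA space:\n", coords[:3].round(2))
```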

Discussion