Optimizing Human-AI Collaboration: Expert Interpretation of AI Results and Approaches for Accuracy and Efficiency
The rapid advancement of artificial intelligence (AI) has ushered in a transformative era across industries, prominently reshaping research and development (R&D) processes. In healthcare, AI has become a powerful tool for enhancing decision-making. One critical domain within healthcare is pharmacovigilance, where the detection of adverse events (AEs) related to pharmaceutical products is essential for ensuring patient safety. In collaboration with a leading European pharmaceutical company, our research aims to integrate AI effectively into expert decision-making.
Our project builds on improvements to our AI model for AE detection, which is based on Large Language Models (LLMs). Using this model, we conduct an in-depth study of the strengths and weaknesses of AI in the recommendation process. We also identify human experts' strengths and weaknesses by comparing their historical decisions with ground truth, namely FDA label changes. We test the performance of human experts and AI across disease domains and data-quality scenarios. In addition, we explore whether human experts exhibit systematic biases relative to FDA label-change decisions. Identifying human and AI biases will inform the design of better human-AI collaboration solutions.
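The comparison described above can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the study's actual pipeline or data schema): it scores binary AE-detection decisions against FDA label-change ground truth, and uses the gap between false positives and false negatives as a simple directional bias measure.

```python
# Hypothetical sketch: scoring expert and AI decisions against FDA label changes.
# All data and thresholds here are illustrative assumptions, not study results.

def score(decisions, ground_truth):
    """Compare binary AE flags (1 = flagged) against label-change ground truth."""
    tp = sum(d and g for d, g in zip(decisions, ground_truth))
    fp = sum(d and not g for d, g in zip(decisions, ground_truth))
    fn = sum(not d and g for d, g in zip(decisions, ground_truth))
    tn = sum(not d and not g for d, g in zip(decisions, ground_truth))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    # Positive bias = over-flagging relative to label changes (false alarms);
    # negative bias = under-flagging (missed AEs).
    bias = (fp - fn) / len(decisions)
    return {"sensitivity": sensitivity, "specificity": specificity, "bias": bias}

# Toy example: 1 = FDA label changed / AE flagged, 0 = not.
ground_truth = [1, 1, 0, 0, 1, 0, 0, 1]
expert =       [1, 0, 0, 1, 1, 0, 0, 0]  # misses two AEs, one false alarm
ai =           [1, 1, 1, 1, 1, 0, 0, 1]  # catches all AEs, two false alarms

print(score(expert, ground_truth))  # under-flagging: negative bias
print(score(ai, ground_truth))      # over-flagging: positive bias
```

In a real evaluation, these per-decision scores would be stratified by disease domain and data-quality scenario to locate where each decision-maker's systematic errors concentrate.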
The main contribution of our project is to identify an accurate and efficient way to integrate AI into the experts' AE detection process. By running single-blind field experiments with the company's human experts, we aim to test a set of hypotheses derived from theoretical models.
(Work in progress)