The use of Artificial Intelligence (AI) is becoming ubiquitous in technologies for health and wellbeing. However, many AI-based interventions overlook the critical role of patients, healthcare professionals, and other key stakeholders, and do not involve them early or inclusively enough in the design process. We therefore propose a full-day hybrid workshop to discuss the challenges and opportunities of using co-design as a critical methodology for meaningfully involving key stakeholders from the very beginning of the design process, taking a multidisciplinary and democratic approach to human-centred AI design. We will invite a multidisciplinary group of participants from academia and industry, as well as practitioners, to discuss their experiences of, barriers to, and facilitators for using co-design to create culturally appropriate AI-based technology in diverse healthcare and wellbeing contexts.
The integration of Artificial Intelligence (AI) into the design of healthcare technology has increased in recent years (Andersen et al. 2021; Rajpurkar et al. 2022; Knoche et al. 2023). In particular, AI has already shown great potential for making healthcare more accurate and efficient, especially in clinical practice (Zając et al. 2023), for example in disease prediction and diagnosis (Khalifa et al. 2024). AI algorithms have detected pathologies in medical images more accurately (Rajpurkar et al. 2022), predicted major depressive disorder (Schnyer et al. 2017), supported the detection of diabetic retinopathy (Gulshan et al. 2016; Beede et al. 2020) and cancer (Lehman et al. 2019), and enabled the construction of data-driven clinical pathways (Bettencourt-Silva et al. 2015). AI could also help patients more directly by providing insights that support their quality of life (Shaheen 2021) and by promoting mental health, for example through chatbots or generative AI that offer advice and support (Dritsa et al. 2024).
Yet, the development of AI algorithms for digital health interventions is mainly carried out by developers, professionals, and practitioners who tend not to have direct contact with key stakeholders such as patients, caregivers, and healthcare professionals (Zytko et al. 2022; Till et al. 2023; Cajamarca et al. 2023). As a result, the lay end-users and stakeholders who are affected by this type of technology are excluded, even though they should have a say in how these technologies ought to be designed, especially in relation to ethical concerns such as data privacy and accountability. Indeed, research has shown that AI can negatively impact users when their perspectives, values, and needs are not considered while developing technology that may affect them, directly or indirectly (Choi et al. 2023). For instance, biases can be propagated through the data selected to train AI algorithms (Wang et al. 2019). There are also ethical concerns regarding the use of large amounts of sensitive personal data to train AI models (Reddy et al. 2020), as well as regarding accountability for what AI algorithms do (Morley et al. 2020). Without careful consideration of end-user perspectives, AI-based systems may do more harm than good, especially when deployed without contextually appropriate, personalised support, as seen in applications such as mental health (Dritsa et al. 2024). As AI becomes increasingly prevalent and influential across many aspects of daily life, it is critical that the end-users and other stakeholders most affected by these technologies participate proactively in shaping how they are built, from the earliest stages and throughout the entire design process, so that AI-based health technologies are culturally appropriate (Sultana et al. 2025).
This entails moving away from a technology-centred approach and instead adopting a human-centred approach, which is currently gaining increased attention in HCI (Andersen et al. 2023; Capel & Brereton 2023).
Co-design has been a key methodology for designing with and for vulnerable populations (Harrington et al. 2019) and for building more contextually situated technologies (Till et al. 2025). For example, Maestre et al. (2023) co-designed technologies that would not further stigmatise people living with chronic illness when seeking social support, while Stawarz et al. (2023) co-designed a decision-making system for people with Type 1 diabetes. Co-design therefore appears to be a valuable approach for addressing several potential barriers, including uncertainty about how to involve and engage end-users and key stakeholders in a meaningful way to foreground their needs and shape the design of health technologies that will affect them. This, in turn, could help establish guidelines, regulations, and policy to inform a more ethical design process for AI-based technology (Feijóo et al. 2020).
Given the benefits of bringing together multidisciplinary stakeholders when developing personal health technologies, especially those involving AI (Ayobi et al. 2021), we propose to organise a workshop that synergises cross-disciplinary learning among researchers, academics, practitioners, and students to discuss the challenges and opportunities of applying co-design approaches to create human-centred AI-based technology in a more ethical and inclusive way in health and wellbeing contexts.
References:
Andersen, T. O., Nunes, F., Wilcox, L., Kaziunas, E., Matthiesen, S., & Magrabi, F. (2021). Realizing AI in healthcare: challenges appearing in the wild. In Extended abstracts of the 2021 CHI conference on human factors in computing systems (pp. 1-5).
Andersen, T. O., Nunes, F., Wilcox, L., Coiera, E., & Rogers, Y. (2023). Introduction to the special issue on human-centred AI in healthcare: Challenges appearing in the wild. In ACM Transactions on Computer-Human Interaction (Vol. 30, pp. 1–12). ACM New York, NY.
Ayobi, A., Stawarz, K., Katz, D., Marshall, P., Yamagata, T., Santos-Rodriguez, R., Flach, P., & O’Kane, A. A. (2021). Co-Designing Personal Health? Multidisciplinary Benefits and Challenges in Informing Diabetes Self-Care Technologies. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), Article 475.
Beede, E., Baylor, E., Hersch, F., Iurchenko, A., Wilcox, L., Ruamviboonsuk, P., & Vardoulakis, L. M. (2020). A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1-12).
Bettencourt-Silva, J. H., Clark, J., Cooper, C. S., Mills, R., Rayward-Smith, V. J., & De La Iglesia, B. (2015). Building data-driven pathways from routinely collected hospital data: a case study on prostate cancer. JMIR medical informatics, 3(3), e4221.
Cajamarca, G., Proust, V., Herskovic, V., Cádiz, R. F., Verdezoto, N., & Fernández, F. J. (2023). Technologies for managing the health of older adults with multiple chronic conditions: A systematic literature review. Healthcare, 11(21), 2897.
Capel, T., & Brereton, M. (2023). What is human-centered about human-centered AI? A map of the research landscape. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–23.
Choi, Y., Kang, E. J., Lee, M. K., & Kim, J. (2023). Creator-friendly algorithms: Behaviors, challenges, and design opportunities in algorithmic platforms. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–22.
Dritsa, D., Van Renswouw, L., Colombo, S., Väänänen, K., Bogers, S., Martinez, A., Holbrook, J., & Brombacher, A. (2024). Designing (with) AI for Wellbeing. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 1–7.
Feijóo, C., Kwon, Y., Bauer, J. M., Bohlin, E., Howell, B., Jain, R., Potgieter, P., Vu, K., Whalley, J., & Xia, J. (2020). Harnessing artificial intelligence (AI) to increase wellbeing for all: The case for a new technology diplomacy. Telecommunications Policy, 44(6), 101988.
Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., ... & Webster, D. R. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA, 316(22), 2402–2410.
Harrington, C., Erete, S., & Piper, A. M. (2019). Deconstructing community-based collaborative design: Towards more equitable participatory design engagements. Proceedings of the ACM on Human-Computer Interaction.
Khalifa, M., & Albadawy, M. (2024). AI in diagnostic imaging: Revolutionising accuracy and efficiency. Computer Methods and Programs in Biomedicine Update, 100146.
Knoche, H., Abdul-Rahman, A., Clark, L., Curcin, V., Huo, Z., Iwaya, L. H., ... & Ziadeh, H. (2023). Identifying challenges and opportunities for intelligent data-driven health interfaces to support ongoing care. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-7).
Lehman, C. D., Yala, A., Schuster, T., Dontchos, B., Bahl, M., Swanson, K., & Barzilay, R. (2019). Mammographic breast density assessment using deep learning: clinical implementation. Radiology, 290(1), 52-58.
Maestre, J. F., Groves, D. V., Furness, M., & Shih, P. C. (2023). “It’s like With the Pregnancy Tests”: Co-design of Speculative Technology for Public HIV-related Stigma and its Implications for Social Media. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–21.
Morley, J., Machado, C. C., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2020). The ethics of AI in health care: a mapping review. Social Science & Medicine, 260, 113172.
Rajpurkar, P., Chen, E., Banerjee, O., & Topol, E. J. (2022). AI in health and medicine. Nature Medicine, 28(1), 31–38.
Reddy, S., Allan, S., Coghlan, S., & Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3), 491–497.
Schnyer, D. M., Clasen, P. C., Gonzalez, C., & Beevers, C. G. (2017). Evaluating the diagnostic utility of applying a machine learning algorithm to diffusion tensor MRI measures in individuals with major depressive disorder. Psychiatry Research: Neuroimaging, 264, 1-9.
Shaheen, M. Y. (2021). Applications of Artificial Intelligence (AI) in healthcare: A review. ScienceOpen Preprints.
Stawarz, K., Katz, D., Ayobi, A., Marshall, P., Yamagata, T., Santos-Rodriguez, R., Flach, P., O’Kane, A.A. (2023). Opportunities for Human-Centred Machine Learning in Supporting Type 1 Diabetes Decision-Making Beyond Self-Tracking. International Journal of Human-Computer Studies.
Sultana, S., Mahzabin Chowdhury, H., Sultana, Z., & Verdezoto, N. (2025). ‘Socheton’: A Culturally Appropriate AI Tool to Support Reproductive Well-being. Accepted to the 2025 ACM SIGCHI Conference on Designing Interactive Systems (DIS).
Till, S., Mkhize, M., Farao, J., Shandu, L., Muthelo, L., Coleman, T., Mbombi, M., Bopape, M., Klingberg, S., van Heerden, A., Mothiba, T., Densmore, M., Verdezoto Dias, N., & CoMaCH Network (2023). Digital health technologies for maternal and child health in Africa and other low- and middle-income countries: Cross-disciplinary scoping review with stakeholder consultation. Journal of Medical Internet Research, 25, e42161.
Till, S., Verdezoto Dias, N., & Densmore, M. (2025). Fostering co-design readiness in South Africa. Interacting with Computers, iwaf005.
Wang, F., & Preininger, A. (2019). AI in health: state of the art, challenges, and future directions. Yearbook of Medical Informatics, 28(1), 016–026.
Zając, H. D., Li, D., Dai, X., Carlsen, J. F., Kensing, F., & Andersen, T. O. (2023). Clinician-facing AI in the Wild: Taking Stock of the Sociotechnical Challenges and Opportunities for HCI. ACM Transactions on Computer-Human Interaction, 30(2), 1–39.
Zytko, D., Wisniewski, P. J., Guha, S., Baumer, E. P. S., & Lee, M. K. (2022). Participatory design of AI systems: Opportunities and challenges across diverse users, relationships, and application domains. CHI Conference on Human Factors in Computing Systems Extended Abstracts, 1–4.