Accepted Papers
*Track 1: Gallery Opening*
How are HCI researchers using LLMs as research tools today?
Leveraging Large Language Models (LLMs) to Support Collaborative Human-AI Online Risk Data Annotation, Jinkyung Park, Pamela Wisniewski, Vivek Singh
Apprentices to Research Assistants: Advancing Research with Large Language Models, Mohammad Namvarpour, Afsaneh Razi
Integrative Understanding of Image and Text in Large Language Vision Models: Evidence from News Image Captions, Yanru Jiang, Hongjing Lu, Rick Dale
How countries learn from each other: Evidence from National AI Strategies using LLMs, Eunji Emily Kim
Leveraging Large Language Models for Collective Decision-Making, Marios Papachristou, Longqi Yang, Chin-Chia Hsu
Improving Academic Paper Comprehension Efficiency and Accuracy with Large Language Model-Based Annotation Training Tool, Yifei Hu, Tianyi Li, Tatiana Ringenberg, Nathan Kim, Edgar Babajanyan, Julia Rayz
QuaLLM: An LLM-based Framework to Extract Quantitative Insights from Online Forums, Varun Nagaraj Rao, Eesha Agarwal, Samantha Dalal, Dan Calacci, Andrés Monroy-Hernández
ThemeViz: Human-AI Collaboration in Iterative Theme Refinement with an LLM-enhanced Interactive Visual System, Daye Kang, Zhuolun Han, Jiahe Tian, Muhan Zhang, Jeff Rzeszotarski
Utilizing ChatGPT for Taiwan News Headline Categorization of Topic and Sentiment, Yu-Chen Yang, Rebecca Ping Yu, Wan-Yun Yu, Meichi Pan, Chi Hu, Poting Yeh, Jouyun Ho, Yung-Ju Chang
Practical Strategies for Labeling Qualitative Data Using Large Language Models, Marianne Aubin Le Quéré, Travis Lloyd, Madiha Zarah Choksi
Student Reflections on Self-Initiated GenAI Use in HCI Education, Hauke Sandhaus, Maria Teresa Parreira, Wendy Ju
LLMs in HCI Data Work: Bridging the Gap Between Information Retrieval and Responsible Research Practices, Neda Taghizadeh Serajeh, Iman Mohammadi, Vittorio Fuccella, Mattia De Rosa
Chain of Thought Prompting for Large Language Model-driven Qualitative Analysis, Courtni Byun, Piper Vasicek, Kevin Seppi
*Track 2: The Brushstrokes*
How can we empirically validate our new methodological tools?
Assessing the Reliability of Vision Language Models for Inferring Phone Activity from Smartphone Screenshots: A Preliminary Case Study with GPT-4V, Yung-Ju Chang, Yu-Chun Chen, Yu-Jen Lee, Kuei-Chun Kao, Mu-Jung Cho, Yikun Chi, Byron Reeves, Nilam Ram
Decoding Complexity: Exploring Human-AI Concordance in Qualitative Coding, Elisabeth Kirsten, Annalina Buckmann, Abraham Mhaidli, Steffen Becker
A Brief Summary of the Study “If in a Crowdsourced Data Annotation Pipeline, a GPT-4”, Zeyu He, Chieh-Yang Huang, Chien-Kuang Cornelia Ding, Shaurya Rohatgi, Ting-Hao ‘Kenneth’ Huang
Identifying Basic Human Values in Social Media Posts with Large Language Models, Isabel Gallegos, Ziv Epstein, Farnaz Jahanbakhsh, Tiziano Piccardi, Dora Zhao, Johan Ugander, Michael Bernstein
ThemeViz: Human-AI Collaboration in Iterative Theme Refinement with an LLM-enhanced Interactive Visual System, Daye Kang, Zhuolun Han, Jiahe Tian, Muhan Zhang, Jeff Rzeszotarski
AI-Mediated Annotation: Just put a human in the loop?, Hope Schroeder
*Track 3: Critical Reception*
Can we use LLMs as research tools ethically and thoughtfully?
The Illusion of Artificial Inclusion, William Agnew, A. Stevie Bergman, Jennifer Chien, Mark Díaz, Seliem El-Sayed, Jaylen Pittman, Shakir Mohamed, Kevin R. McKee
Large Language Models Cannot Replace Human Participants Because They Cannot Portray Identity Groups, Angelina Wang, Jamie Morgenstern, John P. Dickerson
Leveraging the Strengths of Qualitative Analysis to Improve Data Annotation, Ruyuan Wan, Jie Gao
The Shiny Scary Future of Automated Research Synthesis in HCI, Katja Rogers
A Framework For Discussing LLMs as Tools for Qualitative Analysis, James Eschrich, Sarah Sterman