Singapore HCI Meetup 2024
🗓 26 April 2024
⏰ 10AM to 5PM
📍 NUS SDE3 LT421 (Level 4)
👋 A gathering for the HCI community from all over Singapore.
This event is organized by NUS CUTE Center,
with generous support from NUS Smart Systems Institute.
Huge shoutout to Ye Qian, Sonne Chen Yang, Yong Zhen Zhou, Janghee Cho, Tony Tang, EJ Lee, and Foong Pin Sym for helping to organize the event!
⏳ Schedule
09:45 Arrival and Check-ins
10:15 Welcome
10:30 Paper Talks Session A
12:00 Lunch + Poster Session
13:00 Poster + Demo Session
14:30 Paper Talks Session B
16:00 Breakout Discussions: What is SG HCI to you?
16:45 Closing
🎤 Paper Talks Session A
Session Chair: Yi-Chieh (EJ) Lee (School of Computing, NUS)
E-Acrylic: Electronic-Acrylic Composites for Making Interactive Artifacts
Han Bo (CUTE Center & Interactive Materials Lab, SSI, NUS) et al.
Electronic composites incorporate computing into physical materials, expanding the materiality of interactive systems for designers. In this research, we investigated acrylic as a substrate for electronics. Acrylic is valued for its visual and structural properties and is used widely in industrial design. We propose e-acrylic, an electronic composite that incorporates electronic circuits with acrylic sheets. Our approach to making this composite is centered on acrylic making practices that industrial designers are familiar with. We outline this approach systematically, including leveraging laser cutting to embed circuits into acrylic sheets, as well as different ways to shape e-acrylic into 3D objects. With this approach, we explored using e-acrylic to design interactive artifacts. We reflect on these applications to surface a design space of tangible interactive artifacts possible with this composite. We also discuss the implications of aligning electronics to an existing making practice, and working with the holistic materiality that e-acrylic embodies.
Designing for Caregiver-facing Values Elicitation Tools
Natasha Ureyang (NUS) et al.
In serious illness contexts, caregivers are often tasked to make values-based decisions for patients without decision-making capacity. However, most existing values elicitation tools are designed for patient use, which might not address caregivers’ unique needs. In this study, we developed low-fidelity prototypes as probes to explore the design requirements for caregiver-facing values elicitation tools with 12 caregivers. Our findings indicate that caregivers need more support in reconciling various conceptions of patient values and their own values. Caregivers wanted to use the tools to build consensus among family members, but may prefer to use the online tool on their own rather than share the interface with other caregivers. Lastly, there is a prevalent lack of understanding of the importance of values in decision-making. From these insights, we draw implications for the design of online tools for caregiver-facing values elicitation.
Help Me Reflect: Leveraging Self-Reflection Interface Nudges to Enhance Deliberativeness on Online Deliberation Platforms
Yeo Shun Yi (SUTD) et al.
The deliberative potential of online platforms has been widely examined. However, little is known about how various interface-based reflection nudges impact the quality of deliberation. This talk presents two user studies, with 12 and 120 participants respectively, that investigate the impacts of different reflective nudges on the quality of deliberation. In the first study, we examined five distinct reflective nudges: persona, temporal prompts, analogies and metaphors, cultural prompts, and storytelling. Persona, temporal prompts, and storytelling emerged as the preferred nudges for implementation on online deliberation platforms. In the second study, we assessed the impacts of these preferred reflectors more thoroughly. Results revealed a significant positive impact of these reflectors on deliberative quality. Specifically, persona promotes a deliberative environment for balanced and opinionated viewpoints, while temporal prompts promote more individualised viewpoints. Our findings suggest that the choice of reflectors can significantly influence the dynamics and shape the nature of online discussions.
PANDALens: Towards AI-Assisted In-Context Writing on OHMD During Travels
Runze Cai (NUS) et al.
While effective for recording and sharing experiences, traditional in-context writing tools are relatively passive and unintelligent, serving more like instruments than companions. This reduces enjoyment of the primary task (e.g., travel) and hinders high-quality writing. Through a formative study and iterative development, we introduce PANDALens, a Proactive AI Narrative Documentation Assistant built on an Optical See-Through Head-Mounted Display that transforms the in-context writing tool into an intelligent companion. PANDALens observes multimodal contextual information from user behaviors and the environment to confirm interests and elicit contemplation, and employs Large Language Models to transform such multimodal information into coherent narratives with significantly reduced user effort. A real-world travel scenario comparing PANDALens with a smartphone alternative confirmed its effectiveness in improving writing quality and travel enjoyment while minimizing user effort. Accordingly, we propose design guidelines for AI-assisted in-context writing, highlighting the potential of transforming such tools from instruments into intelligent companions.
PaperTouch: Tangible Interfaces through Paper Craft and Touchscreen Devices
Ye Qian (CUTE Center & Interactive Materials Lab, SSI, NUS) et al.
Paper and touchscreen devices are two common objects found around us, and we investigated the potential of their intersection for tangible interface design. In this research, we developed PaperTouch, an approach to designing paper-based mechanisms that translate a variety of physical interactions into touch events on a capacitive touchscreen. These mechanisms act as switches that close during interaction, connecting the touchscreen to the device’s ground bus. To develop PaperTouch, we explored different types of paper along with the making process around them. We also built a range of applications to showcase different tangible interfaces facilitated with PaperTouch, including musical instruments, educational dioramas, and playful products. By reflecting on this exploration, we uncovered the emerging design dimensions that consider the interactions, materiality, and embodiment of PaperTouch interfaces. We also surfaced the tacit know-how gained during our design process as annotations for others to refer to.
The Other Me (TOM): Towards Intelligent Wearable Proactive Assistants
Nuwan Janaka (Synteraction Lab, SSI, NUS) et al.
Advanced digital assistants can significantly enhance task performance, reduce user burden, and provide personalized guidance to improve users’ abilities. However, the development of such intelligent digital assistants presents a formidable challenge. To address this, we introduce TOM, a conceptual architecture and software platform designed to support the development of intelligent wearable assistants that are contextually aware of both the user and the environment. We showcase several proof-of-concept assistive services and discuss the challenges involved in developing such services.
🎤 Paper Talks Session B
Session Chair: Cho Janghee (Division of Industrial Design, NUS)
Towards Human-AI Collaborative Systems for Physical Stroke Rehabilitation Practices
Lee Min Hun (SMU) et al.
Rapid advances in artificial intelligence (AI) and machine learning (ML) have made these technologies increasingly applicable to supporting healthcare practices. However, deploying such AI systems remains a challenge. In this talk, I will present findings from ongoing studies to design, develop, and evaluate a human-AI collaborative decision support system for physical stroke rehabilitation assessment.
How People Prompt to Create Interactive VR Scenes
Zhang Tianyi (SMU) et al.
Generative AI tools can give people the ability to create virtual environments and scenes with natural language prompts. Yet, how people will formulate such prompts is unclear, particularly when they inhabit the environment that they are designing. For instance, a person might say, “Put a chair here,” while pointing at a location. If such linguistic and embodied features are common in people’s prompts, we need to tune models to accommodate them. In this work, we present a wizard-of-oz elicitation study with 22 participants, in which we studied people’s implicit expectations when verbally prompting programming agents to create interactive VR scenes. Our findings show that when people prompted the agent, they had several implicit expectations: (1) that it should have embodied knowledge of the environment; (2) that it should understand embodied prompts from users; (3) that it should recall previous states of the scene and the conversation; and (4) that it should have a commonsense understanding of objects in the scene. Based on these explorations, we outline new opportunities and challenges for conversational programming agents that create VR environments.
Visualization Recommendation Reasoning
Alexander Zhang (SMU) et al.
In an era where data size is growing exponentially, human cognitive capacity remains unchanged, necessitating tools that can bridge this gap. Data visualization, through the use of common graphics such as charts and infographics, offers a means to gain deep insights into complex datasets. Despite its potential, creating effective visualizations typically demands professional expertise and significant manual effort. This raises the question: Can we develop a visualization recommendation system that minimizes manual input yet ensures high explainability? Our proposed methods leverage AI models and algorithms to enhance the efficiency and effectiveness of visualization design, offering a promising direction for research and application in data science and HCI.
AudioXtend: Assisted Reality Visual Accompaniments for Audiobook Storytelling During Everyday Routine Tasks
Felicia Tan (AH Lab, NUS) et al.
The rise of multitasking in contemporary lifestyles has positioned audio-first content as an essential medium for information consumption. We present AudioXtend, an approach to augment audiobook experiences during daily tasks by integrating glanceable, AI-generated visuals through optical see-through head-mounted displays (OHMDs). Our initial study showed that these visual augmentations not only preserved users’ primary task efficiency but also dramatically enhanced immediate auditory content recall by 33.3% and 7-day recall by 32.7%, alongside a marked improvement in narrative engagement. Through participatory design workshops involving digital arts designers, we crafted a set of design principles for visual augmentations that are attuned to the requirements of multitaskers. Finally, a 3-day take-home field study further revealed new insights for everyday use, underscoring the potential of assisted reality (aR) to enhance heads-up listening and incidental learning experiences.
Process and outcomes of trust in generative AI: An emotional, relational, and psychological perspective
Sheryl Ng (CNM, NUS) et al.
Research in human-machine communication (HMC) has directed much effort towards improving the capabilities of artificial agents, also referred to as chatbots, conversational agents, or interactive agents, to respond in as human-like a manner as possible. Leveraging advancements in artificial intelligence, interactions with these agents are increasingly human-like in terms of language and sociality. This project sought to uncover the process and outcomes of trust in communicative agents, with a focus on their unique capabilities as generative AI machines that allow them to display emotional and relational competency. This project also paid attention to the possibility of a boomerang effect, whereby initial benefits from interacting with generative AI could lead to subsequent detriments. Through an empirical investigation conducted over a two-week period, we found that emotional and relational competency had a positive impact on trust in generative AI, which contributed to feelings of emotional support from the AI. While this could alleviate stress, it could also lead to psychological dependence on AI.
Sound Designer-Generative AI Interactions: Towards Designing Creative Support Tools for Professional Sound Designers
Purnima Kamath (AHLab, SSI, NUS) et al.
The practice of sound design involves creating and manipulating environmental sounds for music, films, or games. Recently, an increasing number of studies have adopted generative AI to assist in sound design co-creation. Most of these studies focus on the needs of novices, and less on the pragmatic needs of sound design practitioners. In this paper, we aim to understand how generative AI models might support sound designers in their practice. We designed two interactive generative AI models as Creative Support Tools (CSTs) and invited nine professional sound design practitioners to apply the CSTs in their practice. We conducted semi-structured interviews and reflected on the challenges and opportunities of using generative AI in mixed-initiative interfaces for sound design. We provide insights into sound designers' expectations of generative AI and highlight opportunities to situate generative AI-based tools within the design process. Finally, we discuss design considerations for human-AI interaction researchers working with audio.
📜 Posters
Birds of a Feather Flock Together: How Similarity Influences People’s Social Support Seeking From AI Chatbots
Zhu Zicheng (NUS) et al.
Understanding Social Stigma towards Mental Illness: from a Conversational Perspective
Meng Han (NUS) et al.
Exploring the Impact of Social Media on Self-Diagnosis: The Role of Availability Bias
Zhang Junti (NUS) et al.
Anesth-on-the-Go: Designing Portable Game-based Anesthetic Simulator for Education
Yuki Onishi (SMU) et al.
On BVI Individuals’ Situation Awareness in Indoor Spaces: A Formative Study
Smitha Sheshadri (SMU) et al.
Using AI in Social Work
Tan Yugin (NUS) et al.
Beyond "Inspiration Porn": Exploring the Motivations, Challenges, and Monetization Strategies of Blind and Visually Impaired Content Creators on YouTube
Ann Chen (NUS) et al.
Embodied AI in Guidance Interface
Neil Chulpongsatorn (SMU) et al.
ControlChat
Alexander Ivanov (SMU) et al.
Creating and Delivering Videos' Audio Descriptions for Blind People
Rosiana Natalie (SMU) et al.
Bridging Language Gaps: Enhancing Transcripts for Multilingual Communication
Qin Peinuan (NUS) et al.
🕹️ Demos
The Other Me (TOM): Towards Intelligent Wearable Proactive Assistants
Nuwan Janaka (NUS) et al.
Robotic Guide Dog
Shaojun Cai (NUS) et al.
PANDALens: Towards AI-Assisted In-Context Writing on OHMD During Travels
Runze Cai (NUS) et al.
Robi Butler: Multimodal Remote Interaction with Household Robotic Assistants
Anxing Xiao (NUS) et al.
🤝 Participants List
To facilitate further conversations and collaborations, here is the list of participants who signed up for this event!