Project Snapshot
Problem
People increasingly turn to AI for advice and emotional support—but can AI actually deliver high-quality support, and what does that mean for UX design?
My Role
Study 1: Research Collaborator
Study 2: Principal Investigator
Methods
Experiments (in-person & online)
Qualitative coding & categorization
Quantitative analysis (inter-coder reliability, regression)
Outcome
Research-driven design principles for building empathetic, ethical, and trustworthy AI-powered support experiences.
Why This Project
AI systems are already embedded in mental health apps, education platforms, and customer support tools. However, poorly designed AI support can feel dismissive, unsafe, or untrustworthy—especially in emotionally sensitive contexts.
UX Challenge: How might we design AI-generated support experiences that feel helpful, respectful, and emotionally appropriate?
Research Goals
I approached this problem by focusing on user perception, not just technical capability.
Core Questions
How do people evaluate the quality of support they receive from AI?
What makes AI advice feel trustworthy, empathetic, and actionable?
How does personalization change how users seek and interpret support?
Study 1: Evaluating AI Advice Quality
Overview
Research Question: When people seek advice from large language models, do those models provide high-quality support?
My Role: Research collaborator
Methods
Controlled experiment
Qualitative coding of AI-generated responses
Inter-coder reliability testing
Regression analysis (see the analysis sketch below)
Note: This is an ongoing project. Detailed protocol information is available upon reasonable request.
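To give a sense of the two quantitative steps named above, here is a minimal, purely illustrative Python sketch; the labels, scores, and variable names are hypothetical placeholders, not the study's data or actual analysis pipeline.

# Hypothetical sketch: inter-coder reliability and a simple regression.
# All labels, scores, and variable names are placeholders, not study data.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import cohen_kappa_score

# Category labels assigned to the same AI responses by two independent coders
coder_a = ["actionable", "emotional", "actionable", "informational", "emotional"]
coder_b = ["actionable", "emotional", "informational", "informational", "emotional"]

# Cohen's kappa corrects raw percent agreement for chance agreement
print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")

# Does the presence of an explanation predict perceived advice quality?
has_explanation = np.array([1, 0, 1, 1, 0, 1, 0, 1])
perceived_quality = np.array([5.1, 3.2, 4.8, 4.5, 2.9, 5.0, 3.5, 4.7])

X = sm.add_constant(has_explanation)          # intercept + predictor
print(sm.OLS(perceived_quality, X).fit().summary())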
Key Findings
AI models can provide clear, actionable advice for stressful situations
Explanations that justify recommendations increase perceived quality
Respectful, autonomy-supportive language improves user receptivity
Study 2: Seeking Support from Personalized AI
Research Questions:
How do people seek support from AI systems?
How does personalization affect perceived support quality?
My Role: Principal Investigator
Methods
In-person experiments
Qualitative coding and categorization
Note: Data collection is ongoing. Detailed protocol information is available upon reasonable request.
Early Observations
Users approach AI differently when it feels more personalized
Expectations for high-quality support increase with personalization
Design Implications
Based on findings across both studies, I translated insights into design principles for AI-generated support experiences; a brief illustrative sketch follows the three principles below.
1. Design for Empathy, Not Just Accuracy
Conversational tone matters as much as informational correctness
Interfaces should acknowledge emotional context before offering advice
2. Make Reasoning Visible
Users trust AI more when the model explains why a suggestion is made
Simple, transparent explanations increase follow-through
3. Support User Autonomy
Frame suggestions as options, not directives
Respect users’ existing plans and agency
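To make these principles concrete, here is a small, hypothetical Python sketch (not a design artifact from either study) of how a support response could be structured and rendered: acknowledge emotional context first, attach a visible rationale to each suggestion, and frame suggestions as options rather than directives. All field names and copy are illustrative assumptions.

# Hypothetical response structure illustrating the three principles above.
# Field names and wording are placeholders, not designs from the studies.
support_response = {
    "acknowledgement": "That sounds like a stressful week. It makes sense to feel stretched thin.",
    "options": [
        {"suggestion": "Block out two short breaks tomorrow.",
         "rationale": "Brief recovery breaks can reduce perceived stress."},
        {"suggestion": "Keep your current plan and revisit it on Friday.",
         "rationale": "Sticking with your own plan preserves your sense of control."},
    ],
}

def render(response):
    # Acknowledge emotional context before offering any advice (Principle 1)
    lines = [response["acknowledgement"], "A few options you might consider:"]
    for option in response["options"]:
        # Pair every suggestion with a visible reason (Principle 2),
        # phrased as a choice rather than a directive (Principle 3)
        lines.append(f'- {option["suggestion"]} (Why: {option["rationale"]})')
    return "\n".join(lines)

print(render(support_response))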
Impact & Reflection
This project shaped how I think about designing AI systems in emotionally sensitive spaces.