Google Scholar / Github / LinkedIn / Twitter
I am a researcher at Microsoft exploring whether AI actually works for the person using it (evaluation), helps them see what they'd miss (discovery), and serves everyone equitably (fairness).
Before diving into my research, a few selected highlights from outside of it:
What do I believe in?
We evaluate AI for system verification: how accurate is the output? How good is the retrieval quality? But these are metrics for the system, not for the human. Users don't just want data; they want food for thought, agency, and empowerment. If there's a gap between an AI's output and a person's ability to act on it, the system has failed. We just don't measure that.
And even a "perfect" system ignores the most important variable: the evolving human. We don't just consume information; we transform it. I study how to build AI that supports that transformation.
What am I working on?
Human-Centric Evaluation
Evaluation breaks when we measure systems instead of their usefulness to the person. I work on moving beyond thumbs up/down toward assessments designed around what users actually need.
Helping People Discover What They'd Miss
Systems that push past attention biases to surface the ideas, papers, and collaborators you wouldn't find on your own.
If these themes resonate with you, I’d love to connect. You can find my full portfolio of papers and projects here or reach out on LinkedIn if you'd like to bring these conversations to your university, forum, or organization.
Through talks and initiatives, I've mentored K-12 students, undergraduates, and professionals on AI and how to use it effectively in their work. I'm also passionate about supporting women and underrepresented voices in STEM; if that's you, please don't hesitate to say hello.