  • Home
  • Research
    • Evaluations
    • Discovery & Augmentation
    • Responsible AI
    • Others
  • Giving Back to the Community
 
Google Scholar / Github / LinkedIn / Twitter

I am a researcher at Microsoft exploring whether AI actually works for the person using it (evaluation), helps them see what they'd miss (discovery), and serves everyone equitably (fairness).

Before diving into my research, allow me to share a few selected accolades from outside it:

Recognition from the Indian Prime Minister & the Health Minister of Singapore

LinkedIn Post

Fun fact: This was my origin story of 'building for and with people'.

Government-sponsored scholarship to study Entrepreneurship at Carleton University, Canada

LinkedIn Post

Here I learned how to approach user problems to create impact.

Named an Innovation Ambassador by the former Deputy High Commissioner of Canada to India and the Chairman of AICTE (India's technical education body)

LinkedIn Post


Outstanding Young Engineers Award from Adobe for demonstrated impact and innovation

LinkedIn Post

What I believe in

We evaluate AI for system verification: How accurate is the output? How good is the retrieval quality? But these are metrics for the system, not the human. Users don't just want data; they want food for thought, agency, and empowerment. If there's a gap between an AI's output and a person's ability to act on it, the system has failed. We just don't measure that.

And even the "perfect" system ignores the most important variable: the evolving human. We don't just consume information, we transform it. I study how to build AI that supports that.

What I'm working on

Human-Centric Evaluation


Evaluation breaks when we measure systems instead of usefulness to the person. I'm moving beyond thumbs up/down toward assessments designed around what users actually need.

[See related work →]

Helping People Discover What They'd Miss

Systems that push past attention biases to surface ideas, papers, and collaborators you might not find on your own.

[See related work →]

Responsible AI


Who does AI serve, and whose data does it protect?

[See related work →]

If these themes resonate with you, I’d love to connect. You can find my full portfolio of papers and projects here or reach out on LinkedIn if you'd like to bring these conversations to your university, forum, or organization.


Through talks and initiatives, I've guided K-12 students, undergraduates, and professionals on AI and how to use it effectively in their work. I'm also passionate about supporting women and underrepresented voices in STEM; if that's you, please don't hesitate to say hello.

Last updated: February 2026
