ashwinram[at]u[dot]nus[dot]edu

 Google Scholar | Github | ORCID 

NEWS

Feb 2024 Two papers accepted at CHI 2024!!

Jan 2024 On a research visit at the UCL Multi-Sensory Devices Group under Prof. Sriram Subramaniam.

Aug 2023 Our work on VidAdapter: Adapting Blackboard-Style Videos for Ubiquitous Viewing has been accepted at IMWUT 2023!!

Apr 2023 Our work on Mindful Moments has been accepted at DIS 2023 and also received an honourable mention!

Hi! I'm Ashwin Ram

I'm a postdoctoral researcher at the National University of Singapore, School of Computing, working with Prof. Shengdong Zhao at the Synteraction lab. I graduated with a PhD from the same lab. Prior to this, I completed my Bachelor’s in Electronics and Communication Engineering at NIT-Trichy, India.

My research interests are primarily in the domains of HCI, AR/VR, and wearable computing. Broadly, my research seeks to explore the following question:

How should wearable AR (input/output) be designed to seamlessly blend users' computing needs with their daily activities?

My thesis focused on how the visual output of dynamic information (e.g. videos) should be adapted for see-through smart glasses to facilitate its use in ubiquitous situations (e.g. while walking or commuting). In these situations, users' attention is fragmented between the content and their surroundings, and information should be delivered in the right manner to balance their attention.

This research has been applied to a variety of digital applications such as e-learning, mental wellbeing, and personal information management, to blend them more seamlessly with a user's daily routine.

Outside of research, I like learning new languages. Currently, I have native proficiency in Malayalam, Tamil, and English. I have elementary proficiency in French, Hindi, and Arabic, and I am working on improving my fluency in them. My other passion is music. I had the opportunity to learn Carnatic singing in my childhood, and I play the guitar and compose music in my free time.

CURRENT PROJECTS

Adapting videos for learning on smart glasses while walking 

Can smart glasses support learning from video lectures while walking better than phones can? Our research suggests so, showing that with an adapted video presentation style, glasses can improve recall while allowing faster walking speeds than phones. Click on the image to learn more, or check out the related publications at IMWUT 21 and CHI 22.

Blending mindfulness seamlessly into our daily routine

Mental wellbeing is essential, but we just don't seem to have the time and space to practice it. We present Mindful Moments, a breath-based guidance technique on smart glasses that guides your breathing and helps reduce stress without disrupting your daily routine. You can read more about this in our DIS 23 article, which received an honourable mention.

Efficient ubiquitous interactions for smart glasses 

What's a more suitable input technique for multitasking with information during our daily tasks, even when our hands are busy? We compared different subtle interaction techniques in various everyday multitasking contexts on smart glasses and found thumb-index-finger interaction to be a promising cross-scenario alternative. You can read more about this in our MOBILEHCI 21 article.

PAST PROJECTS

Image-based localization and navigation of robots using semantic floor maps

Efficient modelling of rumor propagation on Twitter

Sound source localization using neural networks 

Effective decision-making on group chats using chat summaries