I am responsible for the technical development of an agentic system that simulates AI scientists. I am building a multi-agent pipeline to accelerate scientific experiment execution, and I conduct computational experiments to benchmark our proposed systems and design use cases that demonstrate their usability and effectiveness.
I'm a Research Team Lead at the Visual Intelligence Lab, UMNTC, where I am advised by Prof. Qianwen Wang. My primary research areas are human-computer interaction, visualization, and human-centered AI systems. I also actively work in sub-domains such as cognitive/behavioral modeling, gamification, and educational technology.
As a Research Team Lead, I have successfully completed three research and development projects and produced two full papers, one short paper, and one workshop presentation, all as first author. My major works include an interaction tool for GNNs, an educational game for LLMAS, and a cognitive modeling toolkit for scientific writing.
Beyond my scientific research, my responsibilities include interviewing and mentoring new members, project management, and planning.
I was responsible for the technical development of an internal crowdsourcing system for asteroid classification, which supports the training of our machine learning models. I also developed several internal tools for the TURBO Telescopes, including an image-processing pipeline, an internal watchdog system, and an image storage module.
I was responsible for the technical development of a motion capture interface in VR on the Oculus platform, using tools such as Unity 3D, Blender, and the OpenXR Toolkit. The project included a humanoid model integrated with inverse kinematics and a motion capture system.