Tai Inui
Junior Undergraduate Researcher
I'm Tai Inui, a Mechanical Engineering undergrad at Waseda University with a minor in Computer Science & Communications, currently obsessed with the question: how can we make remote action feel truly "mine" instead of merely supervised?
I care about telepresence that preserves dignity and authorship: a nurse remotely feeling tissue boundaries, a person with limited mobility still able to set the table with family, or teammates prototyping physical artifacts together across cities. Agency, for me, isn't just a nice feeling. It's what keeps people engaged, accountable, and willing to trust remote systems.
Previously, I was a collaborating visitor at Carnegie Mellon University, advised by Prof. Jean Oh, and a visiting student researcher at KAIST, advised by Prof. Jee-Hwan Ryu.
In my spare time, I really enjoy playing golf. I'm also a huge fan of video games, to the point where I wrote a research paper on them for fun...
We proposed SoftBiT, a teleoperation interface that enhances user awareness through real-time visualization of soft robot finger shapes in extended reality (XR). The system combines proprioceptive sensing with an XR headset (Meta Quest 2) to give users intuitive visual feedback about finger deformations during manipulation tasks. SoftBiT's key innovation is a real-time sim-to-real pipeline that estimates and visualizes soft finger shapes, helping users better understand robot-object interactions even when direct visual feedback is limited.
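Schematically, a pipeline like this reduces to a tight sensing-to-rendering loop. The sketch below is purely illustrative: every name in it (read_proprioception, shape_estimator, xr_renderer) is a placeholder I made up, not a released API.

    import time

    def softbit_loop(read_proprioception, shape_estimator, xr_renderer, hz=60):
        """Stream estimated soft-finger meshes to the XR view at a fixed rate."""
        period = 1.0 / hz
        while True:
            signals = read_proprioception()       # raw per-finger sensor values
            vertices = shape_estimator(signals)   # (N, 3) mesh vertices from a model
                                                  # trained in sim, run on the real finger
            xr_renderer.update_mesh(vertices)     # overlay on the operator's view
            time.sleep(period)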
We introduced TriForce Band, a wrist-worn force-myography interface that uses a dense 4×4 triaxial tactile array to capture rich signals from the wrist musculature in a compact, wearable form factor. Paired with an MLP-Mixer + SE model, TriForce Band achieves 93.0% accuracy on 10 gesture classes and shows that adding shear components boosts performance by 2.9 pp over normal-force-only input.
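For readers curious what an MLP-Mixer + SE model over a 4×4 triaxial array can look like, here is a minimal PyTorch sketch; the layer sizes and layout are my own illustration, not the published architecture.

    import torch
    import torch.nn as nn

    class MixerBlock(nn.Module):
        def __init__(self, tokens=16, dim=64, token_hidden=32, chan_hidden=128):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.token_mlp = nn.Sequential(           # mixes across the 16 taxels
                nn.Linear(tokens, token_hidden), nn.GELU(), nn.Linear(token_hidden, tokens))
            self.norm2 = nn.LayerNorm(dim)
            self.chan_mlp = nn.Sequential(            # mixes across feature channels
                nn.Linear(dim, chan_hidden), nn.GELU(), nn.Linear(chan_hidden, dim))

        def forward(self, x):                         # x: (B, 16 taxels, dim)
            x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
            return x + self.chan_mlp(self.norm2(x))

    class SEGate(nn.Module):
        def __init__(self, dim=64, r=4):
            super().__init__()
            self.fc = nn.Sequential(nn.Linear(dim, dim // r), nn.ReLU(),
                                    nn.Linear(dim // r, dim), nn.Sigmoid())

        def forward(self, x):                         # reweight channels per sample
            return x * self.fc(x.mean(dim=1)).unsqueeze(1)

    class TriForceNet(nn.Module):
        def __init__(self, classes=10):
            super().__init__()
            self.embed = nn.Linear(3, 64)             # (Fx, Fy, Fz) per taxel
            self.mixer = MixerBlock()
            self.se = SEGate()
            self.head = nn.Linear(64, classes)

        def forward(self, x):                         # x: (B, 16, 3) triaxial forces
            x = self.se(self.mixer(self.embed(x)))
            return self.head(x.mean(dim=1))           # (B, 10) gesture logits

The token MLP shares information across taxels, the channel MLP across features, and the SE gate reweights features per sample before the classification head.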
We introduced SoftNash, an entropy-regularized Nash-game formulation of non-fighting virtual fixtures for teleoperation. By inflating the robot's effort weight with a single softness parameter τ, SoftNash smoothly dials assistance from classic high-gain fixtures to agency-preserving, pass-through behavior, maintaining accuracy while reducing conflict and workload in a 6-DoF haptic study.
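One illustrative way to write the robot's side of such a game (the notation is mine, not necessarily the paper's): against the human input u_h, the robot chooses u_r by minimizing

    J_r(u_r;\, u_h) \;=\; \|x^{+} - x_{\mathrm{ref}}\|_Q^2 \;+\; (1+\tau)\,\|u_r\|_R^2 \;-\; \varepsilon\,\mathcal{H}(\pi_r),
    \qquad x^{+} = f(x,\; u_h + u_r)

Setting τ = 0 recovers a stiff, high-gain fixture, while τ → ∞ drives the robot's best response toward u_r = 0 (pass-through); the entropy term H(π_r) keeps the assistance from fighting the human over equally good corrections.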
We introduced MicCheck, a plug-and-play acoustic contact sensor that repurposes an off-the-shelf Bluetooth pin microphone for low-cost, easily integrated contact sensing in robot grippers. Despite its simplicity, MicCheck achieves 92.9% accuracy on a 10-class material benchmark and boosts imitation-learned manipulation success (e.g., picking-and-pouring from 0.40 to 0.80), enabling contact-rich tasks like unplugging and sound-based sorting.
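As a hedged sketch of what acoustic material classification of this kind can look like: featurize each short contact clip and feed it to a lightweight classifier. The log-mel features and random forest below are illustrative choices, not necessarily the paper's recipe.

    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    def contact_features(y: np.ndarray, sr: int = 16000) -> np.ndarray:
        """Summarize a contact-sound clip as a fixed-length log-mel vector."""
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
        logmel = librosa.power_to_db(mel)
        # Mean and std over time -> 128-dim descriptor per clip.
        return np.concatenate([logmel.mean(axis=1), logmel.std(axis=1)])

    def train_material_classifier(clips, labels, sr=16000):
        """clips: list of 1-D waveforms; labels: material class per clip."""
        X = np.stack([contact_features(y, sr) for y in clips])
        return RandomForestClassifier(n_estimators=200).fit(X, labels)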
We introduced Geodiffussr, a flow-matching-based pipeline that generates realistic, text-prompt-guided texture maps while strictly respecting an input Digital Elevation Map (DEM). We fuse geometry into the UNet at coarse-to-fine resolutions with Multi-scale Content Aggregation (MCA), allowing the model to respect global landforms and local ridgelines alike.
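The flow-matching objective underneath a pipeline like this is compact enough to show. Below is a generic (linear-path) flow-matching training step in PyTorch; the DEM- and text-conditioned UNet is a placeholder with an assumed signature, and the MCA fusion lives inside it, so it is not shown.

    import torch
    import torch.nn.functional as F

    def flow_matching_loss(model, texture, dem, text_emb):
        """One training step of linear-path flow matching.

        texture:  (B, 3, H, W) target texture maps
        dem:      (B, 1, H, W) digital elevation maps (condition)
        text_emb: (B, D) text-prompt embeddings (condition)
        """
        noise = torch.randn_like(texture)                 # x_0 ~ N(0, I)
        t = torch.rand(texture.size(0), device=texture.device).view(-1, 1, 1, 1)
        x_t = (1 - t) * noise + t * texture               # point on the straight path
        target_v = texture - noise                        # constant velocity of that path
        pred_v = model(x_t, t.flatten(), dem, text_emb)   # UNet predicts the velocity
        return F.mse_loss(pred_v, target_v)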
We present an unsupervised slide-quality-assessment pipeline that combines seven expert-inspired visual design metrics (whitespace, colorfulness, edge density, brightness contrast, text density, color harmony, layout balance) with CLIP-ViT embeddings, using Isolation Forest-based anomaly scoring to evaluate presentation slides. Our results show that augmenting low-level design cues with multimodal embeddings closely approximates audience perceptions of slide quality, enabling scalable, objective feedback in real time.
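As a rough sketch of how the scoring stage can be wired up: concatenate the seven handcrafted metrics with a CLIP embedding per slide, then score with an Isolation Forest. Here design_metrics and clip_embed are placeholders standing in for the pipeline's actual feature extractors.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    def score_slides(slides, design_metrics, clip_embed):
        # Each slide -> [7 design cues] ++ [CLIP-ViT embedding]
        X = np.stack([
            np.concatenate([design_metrics(s), clip_embed(s)]) for s in slides
        ])
        forest = IsolationForest(n_estimators=200, random_state=0).fit(X)
        return forest.score_samples(X)   # lower score = more anomalous design

Because the forest is fit without labels, "quality" falls out as typicality: slides whose combined features sit far from the bulk of the corpus get flagged.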
During my internship at Tokyo Robotics, I collaborated with a friend to deploy a Vision-Language-Action (VLA) model on a humanoid robot, which can now perform pick-and-place tasks from language instructions.
Made a tabletop-sized telepresence robot that mimics your movements via a webcam, to add a little more spice to the boring video calls I'm on almost every day.
Kuma Lab and Google Developer Group on Campus collaborated to host a hands-on workshop on AI. We introduced the basic concepts of AI through its applications in 3 fields: Computer Vision, Robot Learning, and 3D Reconstruction.
Under Google Developer Group on Campus, we hosted a hands-on workshop on game development with Unity. After introducing the basic concepts of Unity, we recreated Flappy Bird with the audience.
The core members of Kuma Lab hosted a workshop on "Robotic Art", where we explored how two seemingly distant fields, Robotics and Art, can come together to form an interesting topic of research.
I helped demonstrate our lab's caregiving humanoid robot at the Osaka World Expo 2025, which drew more than 25 million visitors in total.
Student ambassador for 技育祭 2024 (Geek SAI 2024), the biggest regional tech conference, featuring guests like the creators of Ruby and 2chan.