Oyama Lab is engaged in research on human-machine interfaces (HMIs) using biological signals and computer vision.
As an HMI for people with disabilities and those with throat disorders, we measure electromyography (EMG) signals from electrodes mounted under the jaw. We then analyze the EMG with machine learning methods to estimate tongue movements and to recognize silently uttered words.
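As an illustration only, the sketch below shows the general shape of such a pipeline: band-pass filtering sub-jaw EMG epochs, extracting simple amplitude features, and training a standard classifier. The sampling rate, feature set, and classifier (an SVM from scikit-learn) are assumptions for the example, not the laboratory's actual method.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 1000  # EMG sampling rate in Hz (assumed)

def bandpass(emg, low=20.0, high=450.0, fs=FS, order=4):
    """Band-pass filter to keep the typical surface-EMG band."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, emg, axis=-1)

def features(epoch):
    """Simple per-channel features: RMS amplitude and waveform length."""
    rms = np.sqrt(np.mean(epoch ** 2, axis=-1))
    wl = np.sum(np.abs(np.diff(epoch, axis=-1)), axis=-1)
    return np.concatenate([rms, wl])

# epochs: (n_trials, n_channels, n_samples) EMG segments, one per silently uttered word
# labels: (n_trials,) integer word IDs -- both are placeholders for real recordings
epochs = np.random.randn(120, 4, 2 * FS)
labels = np.arange(120) % 5

X = np.array([features(bandpass(e)) for e in epochs])
clf = SVC(kernel="rbf", C=1.0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```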
Brain-computer interface (BCI) is a general term for technologies that estimate a person's state and intentions from brain activity acquired with a measuring device such as an electroencephalograph and use that information as an interface. In our laboratory, we study BCIs based on the steady-state visual evoked potential (SSVEP) and motor imagery (MI) in electroencephalograms (EEGs) measured with a low-cost, simple electroencephalograph.
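As a minimal sketch of the SSVEP side, the example below assumes each on-screen target flickers at a known frequency and picks the target whose fundamental and second harmonic carry the most power in a short EEG segment. The sampling rate and stimulus frequencies are placeholders, and practical systems often use stronger detectors such as canonical correlation analysis.

```python
import numpy as np

FS = 256  # EEG sampling rate in Hz (assumed)
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]  # flicker frequencies of the targets (assumed)

def band_power(eeg, freq, fs=FS, bw=0.5):
    """Mean FFT power of a single-channel EEG segment in a narrow band around `freq`."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    f = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (f >= freq - bw) & (f <= freq + bw)
    return spec[mask].mean()

def classify_ssvep(eeg):
    """Pick the stimulus whose fundamental + 2nd harmonic carry the most power."""
    scores = [band_power(eeg, f0) + band_power(eeg, 2 * f0) for f0 in STIM_FREQS]
    return STIM_FREQS[int(np.argmax(scores))]

# eeg: a few seconds of occipital-channel EEG (placeholder noise here)
eeg = np.random.randn(4 * FS)
print("Attended target (Hz):", classify_ssvep(eeg))
```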
Devices such as smartwatches have a built-in pulse-wave sensor that can measure the photoplethysmogram (PPG). In this study, we are investigating a method for estimating the type of hand gesture from the change in the PPG that appears when the gesture is performed.
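A minimal sketch of that idea, with an assumed sampling rate and window lengths: compare the PPG just after a gesture onset with the resting baseline before it, and feed the resulting change features to a classifier such as the one in the EMG example above.

```python
import numpy as np

FS = 100  # PPG sampling rate in Hz (assumed)

def gesture_change_features(ppg, onset, fs=FS, pre=1.0, post=2.0):
    """Compare the PPG just after a gesture onset with the resting baseline before it.

    `onset` is the sample index where the gesture starts; the returned features
    describe how much the waveform level, variability, and amplitude changed.
    """
    base = ppg[int(onset - pre * fs):onset]
    act = ppg[onset:int(onset + post * fs)]
    return np.array([
        act.mean() - base.mean(),             # shift of the mean level
        act.std() / (base.std() + 1e-9),      # relative change in variability
        np.ptp(act) / (np.ptp(base) + 1e-9),  # relative change in peak-to-peak amplitude
    ])

# ppg: a continuous wrist PPG recording; onset: gesture start sample (both placeholders)
ppg = np.random.randn(10 * FS)
print(gesture_change_features(ppg, onset=5 * FS))
```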
Monocular depth estimation is a general term for techniques that estimate depth (i.e., generate a depth image) from a single RGB image. In our laboratory, we develop and apply such methods using deep learning, and we are also investigating the effect of super-resolution on depth images.
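For illustration, the sketch below runs a publicly released monocular depth model (MiDaS, loaded via torch.hub) on an RGB frame and then upsamples the predicted depth map bicubically back to the input resolution as a naive stand-in for depth super-resolution. The image path is a placeholder, and the laboratory's own models may differ.

```python
import cv2
import torch

# Load a publicly available monocular depth model (MiDaS small) and its input transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)  # "frame.jpg" is a placeholder
batch = transforms.small_transform(img)

with torch.no_grad():
    depth = midas(batch)  # low-resolution depth prediction, shape (1, H', W')
    # Naive upsampling back to the RGB resolution, as a baseline for super-resolution.
    depth = torch.nn.functional.interpolate(
        depth.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze()

print("Depth map shape:", tuple(depth.shape))
```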
We are conducting research on systems that support coaching, mainly for rugby. We are studying methods to extract specific scenes such as scrums, lineouts, and kickoffs from recorded game video, and to track the players and ball and map them onto a two-dimensional plane.
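As one concrete piece of the mapping step, the sketch below uses an OpenCV homography estimated from four known pitch landmarks to project tracked image positions onto a 2-D pitch plane. All coordinates are illustrative placeholders rather than real calibration data.

```python
import cv2
import numpy as np

# Image coordinates of four known pitch landmarks (e.g., line intersections) in a video
# frame, and their positions on a 100 m x 70 m pitch in metres -- all values are
# placeholders, not calibration data from an actual match.
img_pts = np.float32([[120, 430], [1180, 410], [980, 660], [250, 690]])
pitch_pts = np.float32([[0, 0], [100, 0], [100, 70], [0, 70]])

H = cv2.getPerspectiveTransform(img_pts, pitch_pts)

def to_pitch(xy):
    """Project a tracked image position (x, y) onto the 2-D pitch plane."""
    p = cv2.perspectiveTransform(np.float32([[xy]]), H)
    return tuple(p[0, 0])

# Example: where a player detected at pixel (640, 520) stands on the pitch.
print(to_pitch((640, 520)))
```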
We are researching a walking-support device that could take the place of a guide dog for visually impaired people. Specifically, we are developing a device that detects Braille blocks, signs, traffic lights, obstacles, and so on in images of the surroundings captured with a wearable camera, and conveys their type and direction to the user.
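One small piece of such a device is turning a detector's output into a message the user can act on. The sketch below converts a hypothetical detection (label plus bounding box) into a left/ahead/right direction, assuming a given camera field of view; the object detector itself is not shown.

```python
def direction_message(label, bbox, frame_width, fov_deg=90.0):
    """Convert a detection into a spoken-style message with its horizontal direction.

    `bbox` is (x1, y1, x2, y2) in pixels; the bearing is the horizontal angle of the
    box centre relative to straight ahead, for an assumed camera field of view.
    """
    cx = (bbox[0] + bbox[2]) / 2.0
    bearing = (cx / frame_width - 0.5) * fov_deg
    if bearing < -10:
        side = "to your left"
    elif bearing > 10:
        side = "to your right"
    else:
        side = "ahead"
    return f"{label} {side} ({bearing:+.0f} degrees)"

# Placeholder detections as an object detector might return them: (label, bbox).
detections = [("braille block", (300, 500, 420, 700)), ("traffic light", (900, 80, 960, 200))]
for label, bbox in detections:
    print(direction_message(label, bbox, frame_width=1280))
```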
Using a small 3D motion-sensor device, we are working on the recognition of sign language, finger spelling, and aerial handwriting.