DiffWave, a diffusion-based model, generates high-quality, stable audio for noise cancellation
Conditional DiffWave uses pressure-affected anti-noise and a novel frequency-weighted loss
Low- and mid-frequency focus improves ANC comfort by reducing pressure-related effects
Combining the original and novel losses (sketched below) ensures stable training and mitigates ANC pressure
Tests show strong ANC pressure reduction and waveform recovery, though variability remains
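A minimal sketch of how the frequency-weighted loss could be combined with the original loss, assuming batched PyTorch waveforms and an illustrative 2 kHz low/mid cutoff; the original diffusion loss is stood in for by a plain L1 waveform term, and all names and values are hypothetical:

```python
import torch
import torch.nn.functional as F

def frequency_weighted_loss(pred, target, sample_rate=16000, n_fft=512,
                            cutoff_hz=2000.0, low_mid_weight=4.0):
    """Spectral L1 loss that up-weights low/mid-frequency bins.
    pred, target: (batch, samples); cutoff and weight are illustrative."""
    window = torch.hann_window(n_fft, device=pred.device)
    P = torch.stft(pred, n_fft, window=window, return_complex=True).abs()
    T = torch.stft(target, n_fft, window=window, return_complex=True).abs()
    freqs = torch.linspace(0.0, sample_rate / 2, n_fft // 2 + 1, device=pred.device)
    weights = torch.ones_like(freqs)
    weights[freqs <= cutoff_hz] = low_mid_weight      # emphasize pressure-relevant bands
    return (weights.view(1, -1, 1) * (P - T).abs()).mean()

def combined_loss(pred, target, lam=0.5):
    """Original (waveform) loss plus the frequency-weighted term for stable training."""
    return F.l1_loss(pred, target) + lam * frequency_weighted_loss(pred, target)
```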
Proposal of an indoor location estimation pipeline that uses a single RGB image as input.
Consists of camera anchor detection, monocular depth estimation and distance scaling, and circle intersection-based coordinate calculation.
The anchor detection stage showed stable generalization performance, and the distance scaling stage effectively restored distance patterns close to the ground truth via Random Forest regression.
In the final location estimation stage, the pipeline achieved an average error of 100–130 cm, demonstrating practical accuracy.
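A short sketch of the circle intersection-based coordinate step, assuming 2-D anchor coordinates, per-anchor scaled distances, and a naive average over pairwise intersections (a real pipeline would filter inconsistent intersection candidates); function names are hypothetical:

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles (anchor position, estimated distance)."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                   # no usable intersection
    a = (r1**2 - r2**2 + d**2) / (2 * d)            # distance from c1 to the chord midpoint
    h = np.sqrt(max(r1**2 - a**2, 0.0))             # half chord length
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return [mid + h * perp, mid - h * perp]

def estimate_position(anchors, distances):
    """Average all pairwise circle intersections as a simple position estimate."""
    points = []
    for i in range(len(anchors)):
        for j in range(i + 1, len(anchors)):
            points += circle_intersections(anchors[i], distances[i],
                                           anchors[j], distances[j])
    return np.mean(points, axis=0) if points else None
```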
Lightweight, four-electrode EEG pipeline that reliably decodes covert spatial attention using a compact hybrid encoder (CNN-LSTM-MHSA) with leak-safe preprocessing and robustness-oriented training.
The system achieves 0.695 online accuracy with ~2 s end-to-end latency, outperforming baseline CNN and CNN-LSTM models.
This approach enables accurate, real-time, gaze-independent BCI control.
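A compact sketch of what a CNN-LSTM-MHSA hybrid encoder over 4-channel EEG windows could look like; layer sizes, kernel widths, and the two-class head are assumptions, not the exact architecture above:

```python
import torch
import torch.nn as nn

class HybridEEGEncoder(nn.Module):
    """CNN front-end, bidirectional LSTM, multi-head self-attention, linear head."""
    def __init__(self, n_channels=4, n_classes=2, hidden=64, heads=4):
        super().__init__()
        # Temporal convolution acts as a learned filter bank over the raw EEG.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=25, padding=12),
            nn.BatchNorm1d(hidden), nn.ELU(), nn.AvgPool1d(4),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                    # x: (batch, channels, samples)
        h = self.cnn(x).transpose(1, 2)      # -> (batch, time, hidden)
        h, _ = self.lstm(h)                  # temporal dynamics
        a, _ = self.attn(h, h, h)            # self-attention over time steps
        return self.head(a.mean(dim=1))      # pooled logits (e.g. left vs. right attention)
```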
Designing & implementing an AI model for BLE-based angle detection.
Employing a deep learning approach (GRU & linear regression) on low-level I/Q sample data (sketched below).
Standardization: submitting new specifications to the Bluetooth SIG (DFWG) and holding the associated IPR
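A minimal sketch of the GRU plus linear regression head over raw I/Q samples; antenna count, window length, and the angle output in degrees are assumptions for illustration:

```python
import torch
import torch.nn as nn

class IQAngleRegressor(nn.Module):
    """GRU over interleaved I/Q samples, linear head regressing the arrival angle."""
    def __init__(self, n_antennas=4, hidden=128):
        super().__init__()
        # Each time step carries the I and Q values from every antenna.
        self.gru = nn.GRU(input_size=2 * n_antennas, hidden_size=hidden, batch_first=True)
        self.regressor = nn.Linear(hidden, 1)     # predicted angle (degrees)

    def forward(self, iq):                        # iq: (batch, time, 2 * n_antennas)
        _, h = self.gru(iq)
        return self.regressor(h[-1])

model = IQAngleRegressor()
angle = model(torch.randn(8, 160, 8))             # dummy batch of I/Q windows
loss = nn.functional.mse_loss(angle, torch.zeros(8, 1))
```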
Driver Emotion-Aware Personalized AI Music Composition
Generate latent vectors for melody, rhythm, chords, and musical instruments
Build a pattern-based classification model and a relatedness-based clustering model
AI music composition based on reinforcement learning
Apply reinforcement learning so that the generated music can be played for longer
Using Double DQN or Dueling DQN
Build a music relationship regressor for arrangement recommendation and emotion-based search
Define the MRF score, which indicates the relationship between pieces of music
Classify lightness, saturation, and color, which represent the user's emotion
Define a loss function over pitch and density in the music
After pre-training the latent vector z in MusicVAE, train z using an actor-critic method with this loss function (sketched below)
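A rough sketch of the actor-critic fine-tuning step on the pre-trained latent vector z; the reward function (e.g. built from the pitch/density loss or the MRF score), the dimensions, and the exploration noise are placeholders:

```python
import torch
import torch.nn as nn

class LatentActorCritic(nn.Module):
    """Actor proposes a refined latent z, critic estimates its value."""
    def __init__(self, z_dim=256, hidden=512):
        super().__init__()
        self.actor = nn.Sequential(nn.Linear(z_dim, hidden), nn.Tanh(), nn.Linear(hidden, z_dim))
        self.critic = nn.Sequential(nn.Linear(z_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def step(self, z, reward_fn, optimizer, sigma=0.1):
        """One policy-gradient update: perturb z, score it, update actor and critic.
        reward_fn maps a latent batch to a per-sample reward tensor (placeholder)."""
        dist = torch.distributions.Normal(self.actor(z), sigma)
        z_new = dist.sample()
        reward = reward_fn(z_new)                              # (batch,)
        value = self.critic(z).squeeze(-1)
        advantage = (reward - value).detach()
        log_prob = dist.log_prob(z_new).sum(-1)
        loss = -(log_prob * advantage).mean() + (reward.detach() - value).pow(2).mean()
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        return z_new.detach()
```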
Music Source Separation & Automatic Music Transcription using Deep Learning
Fashion Component Classification using Mask R-CNN
& Fashion Image Generation using GAN
Indoor position recognition in a BLE RSSI environment using deep learning
(CNN, Auto-Encoder, ...)
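A small sketch of the kind of models implied here: an autoencoder that denoises a beacon RSSI fingerprint and a 1-D CNN that classifies the position over a grid of cells; the beacon count and cell count are assumptions:

```python
import torch
import torch.nn as nn

class RSSIAutoEncoder(nn.Module):
    """Compress and reconstruct a BLE RSSI fingerprint (one value per beacon)."""
    def __init__(self, n_beacons=16, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_beacons, 32), nn.ReLU(), nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, n_beacons))

    def forward(self, rssi):                  # rssi: (batch, n_beacons)
        return self.decoder(self.encoder(rssi))

class PositionCNN(nn.Module):
    """1-D CNN over the (denoised) fingerprint that predicts a grid-cell position."""
    def __init__(self, n_beacons=16, n_cells=25):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * n_beacons, n_cells),
        )

    def forward(self, rssi):                  # rssi: (batch, n_beacons)
        return self.net(rssi.unsqueeze(1))    # logits over position cells
```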