Existing ANC systems operate on fixed, hand-designed algorithms.
As a result, they are effective against stationary, patterned noise but have limitations in handling complex or highly variable noise.
To overcome these algorithmic limitations, we propose a new framework utilizing generative models.
This approach aims to mitigate the occlusion effect and enhance ANC performance in complex noise environments.
Designing & implementing an AI model for BLE-based angle detection.
Employing a deep-learning approach (GRU & linear regression) on low-level I/Q sample data.
Standardization: submitting new specifications to the Bluetooth SIG (DFWG) & securing the related IPR
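The lines above describe learning the angle directly from raw I/Q samples. As a point of reference, a classical two-antenna phase-difference baseline (not the GRU model itself) can be sketched in numpy; the antenna spacing `d` and `wavelength` values here are illustrative assumptions, roughly matching BLE's 2.4 GHz band:

```python
import numpy as np

def phase_diff_aoa(iq_a, iq_b, d=0.05, wavelength=0.125):
    """Estimate the angle of arrival (degrees) from two antennas' I/Q samples.

    iq_a, iq_b: complex I/Q sample arrays from antennas spaced d metres apart.
    wavelength: carrier wavelength in metres (~0.125 m at 2.4 GHz).
    """
    # Average phase difference between the two antenna streams.
    dphi = np.angle(np.mean(iq_a * np.conj(iq_b)))
    # Plane-wave model: dphi = 2*pi*(d/wavelength) * sin(theta)
    s = np.clip(dphi * wavelength / (2 * np.pi * d), -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```

A learned model such as the GRU described above would, in effect, refine this geometric estimate under multipath and noise.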
Driver Emotion-Aware Personalized AI Music Composition
Generate latent vectors for melody, rhythm, chords, and instrumentation
Build a pattern-based classification model and a relatedness-based clustering model
AI music composition based on reinforcement learning
Perform reinforcement learning so that the generated music can be played for longer
Using Double DQN or Dueling DQN
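Double DQN decouples action selection (online network) from action evaluation (target network) to reduce overestimation bias. A minimal numpy sketch of the target computation, with Q-value arrays standing in for network outputs (the function name and shapes are illustrative assumptions):

```python
import numpy as np

def double_dqn_targets(q_online, q_target, rewards, dones, gamma=0.99):
    """Compute Double DQN bootstrap targets for a batch of transitions.

    q_online, q_target: arrays of shape (batch, n_actions) holding the
    online and target networks' Q-values for the *next* states.
    """
    # Online network selects the action; target network evaluates it.
    best_actions = np.argmax(q_online, axis=1)
    next_q = q_target[np.arange(len(rewards)), best_actions]
    # Terminal transitions bootstrap nothing.
    return rewards + gamma * next_q * (1.0 - dones)
```

A Dueling DQN would change only how `q_online`/`q_target` are produced (value + advantage streams); this target computation stays the same.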
Build a music-relationship regressor for arrangement recommendation and emotion-based search
Define the MRF score, which indicates the relationship between pieces of music
Classify lightness, saturation, and mean color to infer the user's emotion
Define a loss function over pitch and note density in the music
After pre-training the latent vector z in MusicVAE, train z using an actor-critic method with this loss function
Music Source Separation & Automatic Music Transcription using Deep Learning
Fashion Component Classification using Mask R-CNN
& Fashion Image Generation using GANs
Indoor Position Recognition in a BLE RSSI Environment using Deep Learning
(CNN, Auto-Encoder, ...)
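Alongside the deep-learning models, a common RSSI-fingerprinting baseline for this task is weighted k-NN over a survey map. A minimal numpy sketch (the beacon counts, coordinates, and function name are illustrative assumptions):

```python
import numpy as np

def knn_locate(fingerprints, positions, rssi, k=3):
    """Weighted k-NN indoor positioning over a BLE RSSI fingerprint map.

    fingerprints: (n_points, n_beacons) RSSI vectors recorded at known spots.
    positions:    (n_points, 2) x/y coordinates of those spots.
    rssi:         observed RSSI vector to localize.
    """
    d = np.linalg.norm(fingerprints - rssi, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)   # closer fingerprints weigh more
    return (positions[idx] * w[:, None]).sum(axis=0) / w.sum()
```

A CNN or autoencoder model would replace the raw Euclidean comparison with learned features that are more robust to RSSI noise and multipath fading.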