We are currently focusing on the following research topics:
Human-like recall in neural networks is our own research proposal, which focuses on enabling neural networks to store and retrieve information in a manner similar to human memory. This capability is important for building intelligent systems that can operate in complex, dynamic environments and adapt to new information and experiences flexibly and efficiently.
We aim to develop novel algorithms and methods for human-like recall in neural networks. This involves studying the mechanisms of memory consolidation and retrieval in the human brain, and translating these insights into algorithms that allow neural networks to store and retrieve information flexibly and efficiently.
Continual learning in neural networks, also known as lifelong learning or learning without forgetting, is an important and challenging problem in the field of artificial intelligence. The ability to learn and adapt to new information without forgetting previously learned knowledge is essential for building intelligent systems that can operate in dynamic and unpredictable environments.
Our research focuses on developing novel algorithms and methods for enabling neural networks to learn continually, without experiencing catastrophic forgetting. This involves studying the mechanisms of memory consolidation and interference in the brain, and translating these insights into efficient and effective learning algorithms for neural networks.
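To make the problem setting concrete, one widely used baseline against catastrophic forgetting is rehearsal: a small buffer of past examples is interleaved with new data during training. The sketch below shows a reservoir-sampling replay buffer in plain Python; it is a generic, simplified illustration (class and variable names are ours), not our proposed method.

```python
import random

class ReservoirReplayBuffer:
    """Fixed-size buffer holding a uniform random sample of the stream seen so far."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Keep the new example with probability capacity / seen, so every
            # example in the stream is retained with equal probability.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        """Draw up to k stored examples to mix into the current mini-batch."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

# While training on a new task, each mini-batch combines fresh data with
# replayed examples from earlier tasks:
buf = ReservoirReplayBuffer(capacity=100)
for x in range(1000):   # stand-in for a stream of training examples
    buf.add(x)
replayed = buf.sample(32)
```

Rehearsal is simple but memory-bound, which is precisely why we also study consolidation-inspired alternatives that do not require storing raw past data.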
Interactive explainable AI (XAI) is a research area that focuses on developing intelligent systems that can provide explanations for their decisions and actions, and that can interact with humans in a natural and intuitive way. This is important for building trust and transparency in AI, and for enabling humans to understand and interact with complex AI systems.
This area involves studying the mechanisms of human cognition and communication, and proposing new algorithms that can enable AI systems to generate explanations that are relevant, understandable, and actionable for humans.
Federated learning is a distributed machine learning paradigm that enables multiple parties to collaboratively train a shared model without sharing their data. This approach has the potential to enable the training of larger and more accurate models, while preserving the privacy of the individual data owners.
We focus on studying the challenges and limitations of current approaches, and proposing new solutions that can improve the performance, convergence, and robustness of federated learning algorithms.
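As a concrete reference point, the canonical federated averaging (FedAvg) aggregation step can be sketched in a few lines: each client trains locally and sends only its updated parameters, and the server forms a data-size-weighted average. This is a minimal illustration of the standard baseline, not our proposed solution; the model is represented as a flat list of floats for simplicity.

```python
def fedavg(client_weights, client_sizes):
    """One round of federated averaging: weight each client's parameters
    by its share of the total training data."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_w = [0.0] * n_params
    for w, n in zip(client_weights, client_sizes):
        for i in range(n_params):
            global_w[i] += (n / total) * w[i]
    return global_w

# Three clients with different amounts of local data; only their updated
# parameters (never the raw data) reach the server.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [25, 25, 50]
global_model = fedavg(clients, sizes)
print(global_model)  # [3.5, 4.5]
```

In practice, the challenges we study (non-IID client data, partial participation, communication cost) arise precisely because this simple average can converge slowly or drift when clients' data distributions differ.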
Neural network compression is a research area that focuses on reducing the size and computational complexity of trained neural networks, while preserving their performance. This is important for deploying neural networks on resource-constrained devices, such as mobile phones, embedded systems, and edge devices, where the storage and computation resources are limited.
Our research focuses on developing novel algorithms and methods for neural network compression. This involves studying the trade-offs between model size, performance, and computational efficiency, and proposing new techniques that can achieve significant reductions in the size and complexity of neural networks, without sacrificing their accuracy.
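To illustrate the size-versus-accuracy trade-off, the sketch below combines two standard compression techniques, magnitude pruning (zeroing the smallest-magnitude weights) and uniform quantization (snapping the rest to a coarse grid). It is a generic textbook illustration with our own function names, not our specific compression method.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the given fraction of smallest-magnitude weights."""
    k = int(sparsity * len(weights))
    # Indices sorted by absolute magnitude, smallest first.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

def quantize_uniform(weights, n_bits=8):
    """Snap each weight to the nearest point on a uniform grid
    with 2**n_bits levels spanning [min(weights), max(weights)]."""
    lo, hi = min(weights), max(weights)
    levels = (1 << n_bits) - 1
    scale = (hi - lo) / levels
    if scale == 0.0:
        return list(weights)
    return [lo + round((w - lo) / scale) * scale for w in weights]

w = [0.05, -1.2, 0.3, 0.9, -0.02, 0.6]
pruned = magnitude_prune(w, 0.5)       # 3 of 6 weights become exactly 0.0
compressed = quantize_uniform(pruned)  # survivors snap to a 256-level grid
```

Pruned weights can be stored sparsely and quantized weights in fewer bits, which is where the storage savings come from; the research question is how aggressive these steps can be before accuracy degrades.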
Our research topics above are currently funded in full by the following research projects of the Korean government (IITP, NRF, etc.):
Deep Total Recall: Continual Learning for Human-Like Recall of Artificial Neural Networks (2022-present, Human-Centric AI Tech Funding, IITP)
Artificial Intelligence Convergence Innovation Human Resources Development at Inha Univ. (인공지능융합혁신대학원) (2022-present, IITP)
Brain Korea (BK) 21 program for Artificial Intelligence of Inha Univ. (2022-present, NRF)
HD-DNA Lab: Innovation Assistive Engineering Laboratory of Digital Human Cloning based on Human Behavior Stem Information (2022-2025, BRL, NRF)
Querying AI: Query-driven Interpretation of Deep Learning Models for Interactive XAI Service (2021-2024, NRF)
Artificial Intelligence Convergence Research Center at Inha Univ. (인하대 인공지능융합연구센터) (2020-2022, IITP)
Deep Partition-and-Merge: Merging and Splitting Deep Neural Networks on Smart Embedded Devices for Real Time Inference (2019-2021, IITP)
Intelligent Mobile Edge Cloud Solution for Connected Car (2019-2021, IITP)
Real-time movement data analytic technologies for enabling a fully-automated transportation service based on self-driving vehicles (2018-2021, NRF)
Human Mobility Pattern Analysis using a Massive Volume of Semantic Trajectories (2018-2019, Inha Univ.)
Autonomic BigData Cloud Computing: Enhancing Efficiency of BigData Processing using Various Cloud Computing Resources (2018-2020, IITP)