Research Projects
Spectral Multi-Dimensional Emotional Dynamics for Human-Centric Digital Twin Foundations (Sejong University, 2026~2027)
This project develops a spectral multi-dimensional emotional dynamics framework as a mathematical foundation for future human-centric digital twin systems. Moving beyond traditional low-dimensional representations, the framework models emotions as structured vector–matrix dynamical systems in which emergent emotional outcomes arise from interactions among multiple emotional components and contextual factors.
By incorporating spectral decomposition techniques such as eigenvalue and singular value analysis, the framework identifies dominant emotional modes that govern stability, amplification, and oscillatory behavior.
Rather than treating emotions as isolated variables, this approach interprets emotional evolution through latent dynamical structures, enabling a deeper understanding of personalized emotional profiles. The proposed framework establishes a principled and interpretable foundation for predictive emotional modeling, with long-term implications for digital twin technologies, adaptive AI systems, and metaverse-based human interaction environments.
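The spectral idea above can be illustrated with a minimal sketch: for a linear state-transition model of stacked emotional components, the eigenvalues of the transition matrix determine stability (spectral radius below one), amplification (above one), and oscillation (complex-conjugate pairs). The matrix and component names here are illustrative assumptions, not the project's actual model.

```python
import numpy as np

# Hypothetical linear emotional dynamics: e_{t+1} = A @ e_t, where e_t stacks
# emotional components (e.g., valence, arousal, tension). A is illustrative.
A = np.array([
    [0.90,  0.10, 0.00],
    [-0.20, 0.85, 0.05],
    [0.00,  0.10, 0.70],
])

eigvals, eigvecs = np.linalg.eig(A)

# Spectral radius < 1: emotions decay back to baseline (stable dynamics);
# > 1: the dominant emotional mode is amplified over time.
spectral_radius = np.max(np.abs(eigvals))
print("stable:", spectral_radius < 1.0)

# Complex-conjugate eigenvalue pairs indicate oscillatory emotional modes.
oscillatory = np.any(np.abs(eigvals.imag) > 1e-12)
print("oscillatory modes:", oscillatory)

# The eigenvector of the largest-magnitude eigenvalue is the dominant
# latent mode: the mixture of components that persists the longest.
dominant = eigvecs[:, np.argmax(np.abs(eigvals))].real
```

For nonlinear or data-driven variants, the same analysis would apply to a locally linearized or learned transition operator rather than a fixed matrix.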
Soft Computing-Based Causal Analysis and Fusion Study on Deep Learning and FNN for TAI (inTerpretable AI) and XAI (eXplainable AI) (National Research Foundation, 2024~2028)
This research project focuses on developing interpretable and explainable AI frameworks by integrating deep learning with soft computing–based neural models through causal analysis and model fusion.
The goal is to move beyond black-box prediction and enable AI systems whose decision processes can be understood, analyzed, and trusted.
We study how learning dynamics, internal representations, and decision outcomes in deep neural networks can be causally explained and enhanced by combining them with mathematically structured, human-interpretable neural models.
Through this fusion, the project aims to provide theoretical insight, transparency, and robustness in AI systems while maintaining strong predictive performance.
The outcomes of this research contribute to the foundations of inTerpretable AI (TAI) and eXplainable AI (XAI), supporting reliable deployment of intelligent systems in high-stakes, real-world applications.
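One common building block for the kind of explainability described above is a surrogate analysis: sample a trained black-box model and fit a human-interpretable model to its predictions, then measure how faithfully the interpretable model reproduces the black-box behavior. The sketch below uses a fixed function as a stand-in for a trained deep network and a linear surrogate; both are assumptions for illustration, not the project's actual fusion method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained deep network's prediction function (illustrative
# only; the project fuses deep networks with soft-computing neural models).
def black_box(X):
    return np.tanh(X @ np.array([1.5, -2.0, 0.5])) + 0.1 * X[:, 0] * X[:, 1]

# Sample the black box over its input space.
X = rng.normal(size=(500, 3))
y = black_box(X)

# Fit a human-interpretable linear surrogate by least squares: y ~ X @ w + b.
Xb = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
w, b = coef[:-1], coef[-1]

# Fidelity (R^2): how much of the deep model's behavior the
# interpretable surrogate explains. Signs of w indicate each
# input's direction of influence on the black-box output.
resid = y - (X @ w + b)
r2 = 1.0 - resid.var() / y.var()
print("surrogate weights:", np.round(w, 2))
print("fidelity R^2:", round(r2, 3))
```

A low fidelity score signals that the black box relies on interactions or nonlinearities the surrogate cannot express, which is itself useful diagnostic information when deciding how to structure an interpretable fused model.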