Research Areas

We study artificial intelligence systems that exploit multimodal data and benefit users through various forms of human-computer interaction. Our research areas are described below.

Human Data and Behavior Modelling

This line of research aims to understand human behavior, that is, how people think, feel, and respond, by learning from and interpreting the data they generate. Many kinds of data can be used in these studies: usage logs, sensor data, measurements, and even physiological signals. The AI models built in these studies can be applied directly to intelligent interactions, including inference and user-assistance features, drawing on the background data and knowledge we have surveyed and gathered.
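As a rough illustration of this kind of behavior modeling, the sketch below fits a simple classifier to synthetic usage-log features. The features, labels, and model choice are hypothetical placeholders, not our actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-session features: taps per minute, mean dwell time,
# and night-time usage ratio (stand-ins for real usage-log measurements).
X = rng.normal(size=(500, 3))
# Hypothetical binary label, e.g. "user re-engages the next day".
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```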

Multimedia AI and Interaction Systems

We are interested in a variety of multimodal and multimedia interaction scenarios in which AI participates and benefits users. One immediate benefit of multimodal data is sensory and data redundancy, which improves the reliability and power of AI results, including inference.
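To make the redundancy argument concrete, here is a minimal late-fusion sketch: the class probabilities of two modality-specific models are averaged, so a confident, correct modality can compensate for a misled one. The scores are invented for illustration.

```python
import numpy as np

# Hypothetical softmax outputs from independent audio and visual models
# for one sample over three classes.
p_audio = np.array([0.70, 0.20, 0.10])   # confident and correct
p_visual = np.array([0.30, 0.60, 0.10])  # misled, e.g. by occlusion

# Averaging the redundant estimates lets one modality absorb the
# other's error: the fused prediction recovers class 0.
p_fused = (p_audio + p_visual) / 2
print(p_fused, "->", int(p_fused.argmax()))  # [0.5 0.4 0.1] -> 0
```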

For example, our AI-Driven Automatic Content Generation study learns from and analyzes audio and visual data to create immersive multimedia experiences at home through automatically generated 4D effects, applicable to current streaming services.
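A minimal sketch of one step such a system might take is shown below: deriving vibration or motion cues from a soundtrack's onset strength. It assumes the librosa library and a hypothetical input file, and the thresholding rule is illustrative rather than our actual method.

```python
import librosa

# "scene.wav" is a placeholder for the soundtrack of a video scene.
y, sr = librosa.load("scene.wav", sr=22050)

# Onset strength as a rough proxy for impactful moments in the audio.
onset_env = librosa.onset.onset_strength(y=y, sr=sr)
times = librosa.times_like(onset_env, sr=sr)

# Emit a vibration/motion cue wherever onset strength spikes.
threshold = onset_env.mean() + 2 * onset_env.std()
for t, s in zip(times, onset_env):
    if s > threshold:
        print(f"effect cue at {t:6.2f}s, intensity {s / onset_env.max():.2f}")
```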

Affective Computing & Interaction Systems

We are also working on affective computing and interaction systems. Using data from sensors, user behavior, and prior knowledge of human emotion, we develop artificial intelligence that is aware of users' emotions and interacts with them to steer and regulate their mood. Contextual data, including a user's current mental and emotional state, are often inferred or extracted in such scenarios.
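The sketch below illustrates the inference step in miniature: a regressor mapping synthetic physiological features to an arousal score. The features, labels, and model are stand-ins, not our actual affective pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical per-window features: mean heart rate, heart-rate
# variability, and skin-conductance level.
X = rng.normal(size=(300, 3))
# Synthetic arousal score in [-1, 1], loosely tied to the features.
arousal = np.tanh(0.8 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(scale=0.3, size=300))

model = RandomForestRegressor(random_state=0).fit(X[:250], arousal[:250])
print("predicted arousal:", model.predict(X[250:253]).round(2))
```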

We expect applications in medical settings, among others. Combined with our multimodal AI interaction systems, we aim to develop a full-stack system that benefits users' mental health.

Human-Centered AI and Assistive Systems

Pursuing a “warmhearted,” human-centric AI technology, we are working on AI that helps users. Examples include human-computer and human-robot collaborative work schemes, and AI-mediated motor learning, in which AI teaches sensorimotor skills to users. AI can also mediate human-human or human-robot communication in a smooth manner.

These ideas can be applied to assistive technology. Examples include extending current translation algorithms with audio-haptic explanations of web images for blind and low-vision users, and translating music into haptics for hearing-impaired users.
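As one illustrative take on the music-to-haptics direction, the sketch below maps short-time audio energy to a vibration amplitude envelope. The input file and the 8-bit actuator range are assumptions; a real system would target specific haptic hardware.

```python
import librosa
import numpy as np

# "song.wav" stands in for any piece of music.
y, sr = librosa.load("song.wav", sr=22050)

# Short-time RMS energy as a simple loudness proxy.
rms = librosa.feature.rms(y=y)[0]

# Map energy to a hypothetical 8-bit vibration amplitude (0-255).
amp = (np.clip(rms / rms.max(), 0.0, 1.0) * 255).astype(int)
print(amp[:20])  # first 20 haptic frames
```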