Current Projects

6G Flagship Strategic Research Area in Distributed Intelligence: Multimodal sensing and modelling (2022-2026)

Funded by: 6G Flagship, Academy of Finland, University of Oulu

6G relies on multimodal sensor data to detect and model its surroundings. New sensors and actuators, together with high-speed connectivity and low-cost computational processing, have made real-time, distributed intelligent applications feasible. The challenge is to make sense of all the data. Uncertainty quantification and propagation offer automated ways to improve operational data quality and privacy, and the trustworthiness of smart decision support systems can be increased by improving the visibility of data from several sources and by making the functions and logic behind their judgments understandable. 6G is intended to natively support radio-based sensing and ultra-dense sensor and actuator networks, enabling hyper-local, real-time sensing, communication, and interaction. Both the physical and the programmed world require multidimensional orchestration.
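To make the uncertainty propagation idea concrete, the following is a minimal Monte Carlo sketch in Python. The sensors, readings, noise levels and fusion rule are hypothetical illustrations, not part of the research area's actual pipeline.

```python
# Minimal Monte Carlo sketch of uncertainty propagation for fused sensor
# readings. All sensor names, noise levels, and the fusion rule are
# hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(seed=0)
N = 10_000  # number of Monte Carlo samples

# Hypothetical range readings from two sensors with known noise (std. dev.)
radar_range_m = rng.normal(loc=12.4, scale=0.3, size=N)   # radar: 12.4 m +/- 0.3
camera_range_m = rng.normal(loc=12.1, scale=0.5, size=N)  # camera: 12.1 m +/- 0.5

# Inverse-variance weighted fusion of the two range estimates
w_radar, w_camera = 1 / 0.3**2, 1 / 0.5**2
fused = (w_radar * radar_range_m + w_camera * camera_range_m) / (w_radar + w_camera)

# The spread of the fused samples quantifies the propagated uncertainty
print(f"fused range: {fused.mean():.2f} m +/- {fused.std():.2f} m")
```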

AIRR / ViHReA / ViVi / ParkX: Artificial Intelligence Applied to Mobile Phone Video Data to Determine Medical Conditions (2020-2024)

Funded by: Swedish Research Council, Karolinska Institutet, Johns Hopkins University, Malaria Consortium, VTT, PROFI-5 DigiHealth, University of Oulu

AIRR, ViHReA and ViVi form a set of projects that aim to develop and assess computer vision methods for the analysis of mobile phone video data to detect respiratory problems in children under seven years of age, engaging with healthcare providers, key stakeholders and caregivers both in Finland and in Low- and Middle-Income Countries (LMICs). The ParkX project aims to use mobile video to assess Parkinson's disease symptoms. The objectives of these projects are: 1) to develop and assess the diagnostic agreement of AI algorithms on existing videos against the gold standards, 2) to determine the appropriateness and acceptability of an automated video-based mHealth app amongst healthcare providers, and 3) to develop a pragmatic testing protocol covering duration of measurement, video quality and optimal camera positioning.
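As a toy illustration of objective 1, diagnostic agreement between a classifier and a gold-standard reference is commonly summarized with Cohen's kappa. The sketch below uses made-up labels, not project data.

```python
# Minimal sketch of assessing diagnostic agreement between an AI classifier
# and a gold-standard reference using Cohen's kappa. The labels below are
# made up for illustration; they are not project data.
from sklearn.metrics import cohen_kappa_score

# 1 = respiratory distress present, 0 = absent (hypothetical cases)
gold_standard = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
ai_prediction = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]

kappa = cohen_kappa_score(gold_standard, ai_prediction)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```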

Transforming primary healthcare using 6G-enabled computer vision (2021-2025) 

Funded by: PROFI-5 Data Insight for High-Dimensional Dynamics (HiDyn), University of Oulu

This project focuses on the creation of a sensing platform able to "see beyond human eye capabilities" by leveraging 6G wireless communications to improve and combine three underutilized technologies: thermal infrared imaging, Eulerian video magnification, and high-speed tracking of facial expressions. By analyzing clinically labelled video data, the project has the opportunity to move assistive diagnosis technology from tightly controlled laboratory conditions to real-world scenarios, implementing the technology in four use cases related to major primary healthcare challenges (namely pain, depression, pneumonia and stroke). This will enable the results to be broadly adopted by doctors and practitioners at primary healthcare facilities, reducing examination time and costs. The project is expected to produce novel solutions for vision-based medical diagnosis on embedded devices, bringing the technology into practice and potentially changing the way healthcare is delivered.
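For readers unfamiliar with Eulerian video magnification, the sketch below shows the core idea under simplifying assumptions: a single spatial scale instead of a full image pyramid, and a synthetic grayscale video as input. It is an illustration of the technique, not the project's implementation.

```python
# Minimal sketch of the Eulerian video magnification idea: spatially smooth
# each frame, temporally band-pass filter every pixel around the expected
# pulse frequency, amplify that band, and add it back to the input. Real
# implementations use multi-scale pyramids; 'frames' is a hypothetical
# grayscale video array of shape (time, height, width) with values in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt

def magnify(frames, fps, f_lo=0.8, f_hi=3.0, alpha=50.0):
    # Spatial low-pass: keep the coarse intensity variations per frame
    smooth = np.stack([gaussian_filter(f, sigma=5) for f in frames])
    # Temporal band-pass around typical heart-rate frequencies (0.8-3 Hz)
    b, a = butter(2, [f_lo, f_hi], btype="band", fs=fps)
    band = filtfilt(b, a, smooth, axis=0)
    # Amplify the filtered band and add it back to the original video
    return np.clip(frames + alpha * band, 0.0, 1.0)

video = np.random.rand(120, 64, 64)   # stand-in for 4 s of 30 fps video
amplified = magnify(video, fps=30.0)
```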

MAALI: Multisensory automation for assisted living (2022-2026)

Funded by: Infotech Oulu, University of Oulu, Focus area spearhead projects

The MAALI project investigates the creation of an intelligent, autonomous, multi-sensory stratified system to support assisted living. The project researches technologies that will aid early diagnosis and the monitoring of activity changes for ageing people living at home. The objective is to impact the automation of a range of assisted living and healthcare needs by employing pattern recognition across a network of low-cost networked sensors in a home environment. The project aims to enhance existing assistive technologies by providing the capability to automatically monitor and reason over activity and vital-sign data, and to communicate these data to support actors in case of emergency. The sensors inform each other about activities in the household and aim to extract the best possible information at each instant of time by employing signal processing and artificial intelligence methods.
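As a toy illustration of the kind of reasoning such a system might perform, the sketch below flags prolonged inactivity from timestamped motion-sensor events so a support actor could be notified. The threshold, timestamps and alert path are all hypothetical.

```python
# Toy sketch of one assisted-living reasoning rule: flag prolonged
# inactivity from timestamped motion-sensor events. Event times, the
# threshold, and the alert message are hypothetical illustrations.
from datetime import datetime, timedelta

INACTIVITY_THRESHOLD = timedelta(hours=6)

def check_inactivity(last_event: datetime, now: datetime) -> bool:
    """Return True if no activity was sensed for longer than the threshold."""
    return now - last_event > INACTIVITY_THRESHOLD

last_motion = datetime(2024, 1, 15, 2, 30)   # last hallway motion event
now = datetime(2024, 1, 15, 10, 0)

if check_inactivity(last_motion, now):
    print("ALERT: no activity for over 6 h, notify support contact")
```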

CoMuSe/MACHS 6G: Coordinated Multimodal Sensing for 6G Applications (2023-2024)

Funded by: 6G Flagship, IndFiCore programme, FARIA programme, University of Texas at Austin, IIIT Bangalore, University of Oulu

The CoMuSe-6G/MACHS set of projects aims at combining the expertise of CMVS at the University of Oulu, the Multimodal Perception Lab at IIIT Bangalore and the VITA group at the University of Texas at Austin to create and release the first public multimodal image+radio dataset obtained with multiple cameras and sensors in different environments, at first in laboratory conditions. The projects intend to collect synchronized data of different people's regular activities in indoor settings, which will be used to create machine learning models able to discriminate between persons and to assess their relative stationarity for further measurements (such as vital signs). A set of baseline algorithms and models will be released alongside the dataset itself.
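One practical ingredient of such a dataset is time alignment across modalities. The sketch below pairs each camera frame with the nearest-in-time radio sample using pandas; all timestamps and values shown are synthetic, and real recordings would rely on a shared clock or hardware trigger.

```python
# Minimal sketch of time-aligning two sensor streams when building a
# synchronized multimodal dataset: each camera frame is paired with the
# nearest-in-time radio sample. All values below are synthetic.
import pandas as pd

camera = pd.DataFrame({"t": pd.to_datetime([0.00, 0.033, 0.066], unit="s"),
                       "frame_id": [0, 1, 2]})
radio = pd.DataFrame({"t": pd.to_datetime([0.01, 0.05, 0.07], unit="s"),
                      "rssi_dbm": [-51.0, -52.5, -50.8]})

# merge_asof pairs each frame with the closest radio sample in time,
# leaving NaN where no sample falls within the tolerance window
paired = pd.merge_asof(camera, radio, on="t", direction="nearest",
                       tolerance=pd.Timedelta("20ms"))
print(paired)
```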

Past Projects


Vision Sensing Technologies for Healthcare Diagnosis

Funded by: Infotech Oulu, University of Oulu, Focus area spearhead projects

This project explores recent advances in computer vision together with medical evidence indicating correlations between observable symptoms on the human face and certain medical conditions. The project aims at developing computational models for detecting abnormalities reflective of disease in a person's facial structures and expressions, based mainly on visual information. This would help in designing futuristic, unobtrusive technologies for assistive diagnosis and monitoring that people can effortlessly use in their daily lives without any contact. As practical use cases, the project focuses on the automatic estimation of pain and depression levels from videos. The methodologies explored include novel deep learning architectures, semi-supervised learning, multimodal signal processing and video-based extraction of biosignals.
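As a simplified illustration of video-based biosignal extraction, the sketch below estimates heart rate from the green-channel trace of a face region (the classic remote photoplethysmography pipeline). The input is synthetic and the pipeline is a minimal stand-in, not the project's actual method.

```python
# Minimal sketch of video-based biosignal extraction (remote
# photoplethysmography): average the green channel over a face region,
# band-pass filter the resulting trace, and read the heart rate from the
# dominant spectral peak. 'roi_frames' is a hypothetical stand-in for a
# face crop of shape (time, height, width, 3 RGB channels).
import numpy as np
from scipy.signal import butter, filtfilt

fps = 30.0
roi_frames = np.random.rand(300, 64, 64, 3)          # 10 s of synthetic video

green_trace = roi_frames[..., 1].mean(axis=(1, 2))   # mean green per frame
b, a = butter(2, [0.7, 3.0], btype="band", fs=fps)   # 42-180 bpm band
pulse = filtfilt(b, a, green_trace - green_trace.mean())

freqs = np.fft.rfftfreq(len(pulse), d=1 / fps)
peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(pulse)))]
print(f"estimated heart rate: {peak_hz * 60:.0f} bpm")
```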

Other Projects

Assessment of mental health

Real-time vital sign detection