Le Nguyen is working in the MAALI project (at the Multimodal Sensing Lab, University of Oulu) to create an intelligent autonomous multi-sensory system to support assisted living. He is also involved with Distributed Intelligence Research at 6G Flagship. His main duty is to design, implement, and evaluate machine learning models on multimodal data, using GPU-accelerated environments at CSC – IT Center for Science. He obtained the Doctor of Science (Technology) degree from Aalto University. During his doctoral studies, he participated in the Adaptive Ambient Backscatter Communications for Ultra-low-power Systems (ABACUS) project, funded by the Academy of Finland (now the Research Council of Finland), where he proposed and implemented novel optimization algorithms that train machine learning models on vertically distributed data by leveraging superimposed signals. In 2017–2018, he participated in the EIT Digital High Impact Initiative project Advanced Connectivity Platform for Vertical Segments, developing security and privacy modules to manage IoT devices. In addition, he has worked at Vietnam National University Ho Chi Minh City – University of Science (VNU-HCMUS), Viet Nam; the National Institute of Informatics (NII), Japan; the Center for Information and Neural Networks (CiNet), Japan; the University of St Andrews, UK; Technische Universität Braunschweig, Germany; and the Technical Research Centre for Dependency Care and Autonomous Living (CETpD), Spain.
Email: [firstname].[lastname][AT]oulu.fi
Google Scholar: https://scholar.google.com/citations?user=tow0oVoAAAAJ&hl=en
Open Researcher and Contributor ID: https://orcid.org/0000-0001-7765-1483
Cañellas ML, Casado CÁ, Nguyen L, López MB. A self-supervised multimodal framework for 1D physiological data fusion in remote health monitoring. Information Fusion. 2025.
We propose a multimodal self-supervised learning architecture with data augmentation for 1D physiological data, focusing on heart and breathing activity signals, integrating waveforms obtained from a mmWave radar (Texas Instruments IWR1443) and a camera (Intel RealSense D435). For implementation and experiments, we used Python 3.8 with PyTorch and SciPy. The classification models included ResNet, Transformer, Recurrent Neural Network, and XGBoost.
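The paper's augmentation pipeline is not reproduced here; the sketch below only illustrates typical 1D augmentations (jitter, scaling, random time shift) that produce two "views" of the same signal for self-supervised pre-training. Function names, parameters, and the sampling rate are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def jitter(x, sigma=0.03, rng=None):
    """Add Gaussian noise to a 1D signal."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=None):
    """Multiply the whole signal by a random factor around 1."""
    rng = rng or np.random.default_rng()
    return x * rng.normal(1.0, sigma)

def time_shift(x, max_shift=50, rng=None):
    """Circularly shift the signal by a random offset."""
    rng = rng or np.random.default_rng()
    return np.roll(x, rng.integers(-max_shift, max_shift + 1))

# Two augmented views of one breathing waveform (positive pair).
fs = 100                                  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
breathing = np.sin(2 * np.pi * 0.25 * t)  # ~15 breaths per minute
view_a = jitter(scale(breathing))
view_b = time_shift(jitter(breathing))
```

In a contrastive setup, the encoder would be trained to map `view_a` and `view_b` to nearby embeddings while pushing apart views of other signals.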
Nguyen LN, Findling RD, Poikela M, Zuo S, Sigg S. Transient Authentication from First-Person-View Video. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 2025.
We propose PassFrame, which utilizes first-person-view videos to generate personalized authentication challenges based on human memory of event sequences. We used blurriness-based and eye-tracking methods to extract representative scenes; these ran faster per frame than VGG16, ResNet18, and MobileNetV2 (PyTorch). The Android app was developed using OpenCV and OpenIMAJ for image processing. The AI-generated images and textual descriptions were produced with LAVIS. We evaluated the proposed mechanism in terms of entry time, number of attempts, the System Usability Scale (SUS), and the NASA Task Load Index (NASA-TLX).
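The exact blurriness score is not detailed above; a common choice for such frame ranking is the variance of the Laplacian (higher variance means more high-frequency detail, i.e. a sharper frame), which is assumed in this minimal sketch.

```python
import numpy as np
from scipy import ndimage

def sharpness(frame):
    """Variance of the Laplacian: a simple sharpness score."""
    return ndimage.laplace(frame.astype(float)).var()

def pick_representative(frames, k=3):
    """Return indices of the k sharpest frames, in frame order."""
    scores = [sharpness(f) for f in frames]
    return sorted(np.argsort(scores)[-k:])

# Synthetic check: a blurred copy of a frame scores lower.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = ndimage.gaussian_filter(sharp, sigma=2.0)
```

Such a score is cheap (one convolution per frame), which is consistent with it outrunning CNN-based frame scoring in per-frame runtime.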
Nguyen N, Nguyen L, Li H, López MB, Casado CÁ. Evaluation of video-based rPPG in challenging environments: Artifact mitigation and network resilience. Computers in Biology and Medicine. 2024.
In this article, we perform a systematic and comprehensive investigation of spatial, temporal, and visual occlusion artifacts as well as their detrimental effects on the quality of remote photoplethysmography (rPPG) measurements. We also propose mitigation strategies to enhance the rPPG quality. The experiments were implemented on seven publicly available rPPG databases. We used MediaPipe for face detection and segmentation. We selected three non-learning rPPG algorithms (OMIT, CHROM, and POS) and four deep learning rPPG models (EfficientPhys, ContrastPhys, PhysFormer, and MTTS-CAN).
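For intuition, CHROM (one of the three non-learning algorithms above) projects temporally normalised mean RGB traces onto two chrominance axes and combines them to cancel specular and motion distortions. The sketch below is a minimal version that omits the band-pass filtering and overlapping windows used in practice.

```python
import numpy as np

def chrom(rgb):
    """CHROM-style projection of mean RGB traces (T x 3) to a pulse signal."""
    rgb = np.asarray(rgb, dtype=float)
    n = rgb / rgb.mean(axis=0)                      # normalise each channel
    xs = 3 * n[:, 0] - 2 * n[:, 1]                  # chrominance axis 1
    ys = 1.5 * n[:, 0] + n[:, 1] - 1.5 * n[:, 2]    # chrominance axis 2
    alpha = xs.std() / ys.std()
    s = xs - alpha * ys
    return s - s.mean()

# Synthetic skin-colour traces carrying a 1.2 Hz (72 bpm) pulse.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)
rgb = np.stack([1 + 0.010 * pulse,      # pulse is strongest in green,
                1 + 0.030 * pulse,      # as in real skin reflectance
                1 + 0.005 * pulse], axis=1)
s = chrom(rgb)
freqs = np.fft.rfftfreq(len(s), 1 / fs)
f_peak = freqs[np.argmax(np.abs(np.fft.rfft(s)))]   # dominant frequency
```

On this clean synthetic input the spectral peak of `s` recovers the 1.2 Hz pulse rate; the occlusion artifacts studied in the article corrupt exactly these RGB traces.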
Kularatne SD, Casado CÁ, Rajala J, Hänninen T, López MB, Nguyen L. FireMan-UAV-RGBT: A novel UAV-based RGB-thermal video dataset for the detection of wildfires in the Finnish forests. IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). 2024.
This paper introduces the new FireMan-UAV-RGBT dataset, comprising UAV-captured RGB and thermal video data to advance wildfire detection methodologies. The dataset includes high-resolution images of boreal forests that have been carefully annotated both manually and using a semi-automatic method that leverages thermal information for improved RGB image segmentation. The utility of the dataset is assessed by applying machine learning models (PyTorch ResNet50 and Ultralytics YOLOv8) for binary classification (‘Fire’ and ‘No Fire’).
Nguyen LN, Sigg S, Lietzen J, Findling RD, Ruttik K. Camouflage learning: Feature value obscuring ambient intelligence for constrained devices. IEEE Transactions on Mobile Computing. 2021.
We present Camouflage learning, a distributed machine learning mechanism that obscures the feature values and the trained model via signal superimposition. We demonstrated the feasibility of the approach on backscatter communication prototypes and showed that Camouflage learning is more energy-efficient than traditional schemes (centralized training with scikit-learn).
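The superimposition principle can be illustrated with a linear score over vertically partitioned features: each party transmits only its local partial sum, and the channel's superimposition delivers the aggregate, so the receiver never observes individual feature values. This is a minimal numeric sketch; the actual backscatter signalling, quantization, and training protocol in the paper are more involved, and the party names below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Vertically partitioned data: each party holds a disjoint feature subset.
features = {"party_a": rng.random(4),
            "party_b": rng.random(3),
            "party_c": rng.random(5)}
weights = {p: rng.random(x.size) for p, x in features.items()}

# Each party transmits only its local partial sum; concurrent transmissions
# superimpose on the channel, so the receiver decodes just the aggregate.
partials = [weights[p] @ features[p] for p in features]
superimposed = float(np.sum(partials))

# Equivalent centralised computation (would require raw feature access).
x_full = np.concatenate([features[p] for p in features])
w_full = np.concatenate([weights[p] for p in features])
centralised = float(w_full @ x_full)
```

The aggregate alone suffices for inference and gradient-style updates of a linear model, which is what lets constrained backscatter devices participate without revealing their data.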
Bruesch A, Nguyen N, Schürmann D, Sigg S, Wolf L. Security properties of gait for mobile device pairing. IEEE Transactions on Mobile Computing. 2019.
We present a comprehensive discussion of the security properties of gait-based device pairing schemes, including quantization, statistical analysis of the generated sequences, and attack surfaces, together with the first empirical demonstration that video-based observation poses a significant threat to gait-based security.
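As a toy illustration of the quantization step, a gait signal can be reduced to a binary fingerprint by thresholding window means; devices on the same body then agree in most bits, while an observer with an uncorrelated signal disagrees in roughly half. The concrete quantizer below is an assumption for illustration, not one of the schemes analysed in the paper.

```python
import numpy as np

def quantize_gait(signal, n_bits=64):
    """Binary fingerprint: one bit per window, set when the window
    mean exceeds the median of all window means."""
    windows = np.array_split(np.asarray(signal, dtype=float), n_bits)
    means = np.array([w.mean() for w in windows])
    return (means > np.median(means)).astype(int)

def hamming(a, b):
    """Number of disagreeing fingerprint bits."""
    return int(np.sum(a != b))

# Two devices on the same body see correlated acceleration; an observer
# with an independent signal produces an uncorrelated fingerprint.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 1000)
gait = np.sin(2 * np.pi * 2 * t)                  # ~2 steps per second
dev_a = gait + 0.05 * rng.normal(size=t.size)
dev_b = gait + 0.05 * rng.normal(size=t.size)
observer = rng.normal(size=t.size)
```

The video attack in the paper works precisely because an attacker who can *see* the gait no longer produces an uncorrelated signal.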
Nguyen N, Sigg S, Huynh A, Ji Y. Pattern-based alignment of audio data for ad hoc secure device pairing. International Symposium on Wearable Computers (ISWC). 2012.
We propose an approximative pattern matching algorithm to achieve audio synchronization independently on each device. Our experiments showed that the proposed method improved the similarity of the audio fingerprints, which were used to generate the secret key for spontaneous proximity-based device pairing.
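For intuition about the alignment problem, the sketch below estimates the relative offset of two recordings of the same ambient audio by cross-correlation. Note that this baseline needs both raw signals in one place, whereas the pattern matching proposed in the paper lets each device align independently; the sampling rate and variable names here are illustrative.

```python
import numpy as np

def align_offset(a, b, fs):
    """Estimate the lag (in seconds) of recording b relative to a
    from the peak of their cross-correlation."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)
    return lag / fs

# Two devices record the same ambient audio, but device B starts 0.15 s late.
fs = 2000
rng = np.random.default_rng(3)
ambient = rng.normal(size=fs * 2)   # 2 s of shared ambient sound
shift = 300                         # 0.15 s at 2 kHz
rec_a = ambient[:-shift]
rec_b = ambient[shift:]
offset = align_offset(rec_a, rec_b, fs)
```

Once the offset is known, both devices fingerprint the same audio segment, which is what makes the fingerprints similar enough to derive a shared secret key.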
Web co-chair at UbiComp/ISWC 2025 (also a presenter and a panelist)
Keynote speaker at LongevIoT (ACM IoT 2024): Designing communication devices and learning algorithms for sustainable IoT systems
Virtual Experience co-chair at IEEE PerCom 2022
Frequent reviewer for the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), Pervasive and Mobile Computing, ECML PKDD, and IJCAI