López Nava, I. H. (2018). Complex action recognition from human motion tracking using wearable sensors [Doctoral dissertation, Instituto Nacional de Astrofísica, Óptica y Electrónica].
This thesis proposes the development of machine learning and deep learning models for recognizing activities of daily living (ADLs) through the integration of multimodal information, including inertial signals, human pose keypoints, objects detected in the scene, and environmental sensor readings. The project will address scenarios involving concurrent and interleaved activities in continuous domains, exploring hierarchical fusion strategies, temporal modeling, and self-supervised learning to enhance robustness and generalization in both structured and naturalistic environments.
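Of the fusion strategies to be explored, the simplest baseline is late fusion: average per-modality class scores into one decision. A minimal sketch under that assumption; the modality names, activity labels, and scores below are purely illustrative, not part of the proposal:

```python
def late_fusion(per_modality, weights=None):
    """Weighted average of per-modality class scores (late fusion,
    the simplest baseline among the fusion strategies to explore)."""
    weights = weights or {m: 1.0 for m in per_modality}
    total = sum(weights.values())
    classes = next(iter(per_modality.values())).keys()
    fused = {c: sum(weights[m] * scores[c]
                    for m, scores in per_modality.items()) / total
             for c in classes}
    return max(fused, key=fused.get), fused

# Hypothetical per-modality ADL scores (IMU vs. pose keypoints)
scores = {"imu":  {"cooking": 0.7, "walking": 0.3},
          "pose": {"cooking": 0.6, "walking": 0.4}}
activity, fused = late_fusion(scores)  # fused scores favor "cooking"
```

Hierarchical and intermediate fusion would replace this averaging with learned combination layers, but the interface (per-modality scores in, one decision out) stays the same.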
https://en.ids-imaging.com/casestudies-detail/items/vertebra-by-vertebra.html
This research proposes the development of a multimodal system based on inertial sensors and video-based pose estimation for the automated measurement of spatiotemporal and kinematic gait parameters. The project will integrate probabilistic and deep learning models for gait event detection and the extraction of digital biomarkers aimed at assessing progression in neuromuscular diseases and rehabilitation strategies, incorporating uncertainty analysis and clinical validation.
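The gait event detection step named above is often bootstrapped from a rule-based peak detector on the vertical acceleration channel; the probabilistic and deep models in the proposal would replace this baseline. A minimal sketch on a synthetic signal (the threshold, gap, and waveform are illustrative assumptions):

```python
import math

def detect_gait_events(accel, threshold=1.2, min_gap=20):
    """Naive heel-strike detector: local maxima above a threshold,
    separated by at least `min_gap` samples (illustrative baseline)."""
    events = []
    for i in range(1, len(accel) - 1):
        if (accel[i] > threshold
                and accel[i] >= accel[i - 1]
                and accel[i] > accel[i + 1]
                and (not events or i - events[-1] >= min_gap)):
            events.append(i)
    return events

# Synthetic vertical acceleration: 1 Hz steps sampled at 100 Hz,
# baseline gravity (1 g) plus one impact-like peak per stride
fs = 100
signal = [1.0 + 0.8 * max(0.0, math.sin(2 * math.pi * t / fs)) ** 8
          for t in range(3 * fs)]
events = detect_gait_events(signal)  # one event per synthetic stride
```

Spatiotemporal parameters such as stride time then fall out as differences between consecutive event indices divided by the sampling rate.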
https://learnopencv.com/ai-fitness-trainer-using-mediapipe/
This thesis proposes the design of a system based on 2D motion tracking for the automated evaluation of motor function and fall risk in older adults and patients with neuromuscular conditions. The system will integrate kinematic feature extraction, statistical modeling, and machine learning techniques to estimate scores associated with standardized clinical scales (e.g., Berg Balance Scale, Timed Up & Go), prioritizing interpretability, robustness, and validation with clinical specialists.
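One kinematic feature central to such a system, a joint angle from 2D keypoints, is pure geometry once three landmarks are tracked; the hip/knee/ankle pixel coordinates below are hypothetical and independent of any particular pose estimator:

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by 2D points a-b-c,
    e.g. hip-knee-ankle for knee flexion."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp to guard against floating-point overshoot outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# Hypothetical pixel coordinates: a fully extended leg
hip, knee, ankle = (100, 50), (105, 150), (110, 250)
angle = joint_angle(hip, knee, ankle)  # → 180.0 (collinear landmarks)
```

Time series of such angles during a Timed Up & Go trial would feed the statistical models and score estimators described above.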
Direct Interpretation of Sign Language as a Memory Recognition Task [In press]
This thesis proposes the development of a computational model for the recognition and retrieval of dynamic signs in Mexican Sign Language formulated as temporal patterns stored within a distributed associative memory. The system will aim to recognize, reconstruct, and validate complete signs from partial fragments or noisy sequences by integrating spatiotemporal representations and structural retrieval mechanisms that prioritize interpretability and computational efficiency over purely discriminative approaches.
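The retrieval-from-fragments idea can be illustrated with the classic Hopfield-style associative memory: store bipolar patterns via a Hebbian rule, then recover a full pattern from a corrupted cue. This sketch uses toy 6-bit codes standing in for sign representations; it is a conceptual baseline, not the proposed distributed memory:

```python
def train_hopfield(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Synchronous updates toward a stored pattern."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0
                 else -1 for i in range(n)]
    return state

# Two stored toy codes standing in for sign representations
p1 = [1, 1, 1, -1, -1, -1]
p2 = [-1, -1, 1, 1, 1, -1]
w = train_hopfield([p1, p2])
restored = recall(w, [1, -1, 1, -1, -1, -1])  # p1 with one flipped bit
```

Here the noisy cue converges back to p1, the memory analogue of reconstructing a complete sign from a partial or noisy fragment.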
https://www.youtube.com/watch?v=f60CkU2B0zU
This thesis proposes the development of transfer learning strategies for sign recognition across different sign languages (e.g., LSM, ASL, LIBRAS), aiming to improve performance in scenarios with limited labeled data. The research will explore latent space alignment, supervised and unsupervised adaptation, and few-shot and zero-shot learning schemes, analyzing which representations (image-based, keypoint-based, or temporal embeddings) are most transferable across interlinguistic domains and how they impact model generalization.
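Few-shot schemes of the kind mentioned are commonly approached with prototype-style classifiers: average the embeddings of the few labeled examples per sign and assign queries to the nearest prototype. A minimal sketch with toy 3-D embeddings and hypothetical LSM sign labels (real embeddings would come from the image, keypoint, or temporal encoders under study):

```python
def prototypes(support):
    """Mean embedding per class from a few labeled examples."""
    protos = {}
    for label, vecs in support.items():
        dim = len(vecs[0])
        protos[label] = [sum(v[i] for v in vecs) / len(vecs)
                         for i in range(dim)]
    return protos

def classify(query, protos):
    """Assign the query to the nearest prototype (squared Euclidean)."""
    def dist(p):
        return sum((q - c) ** 2 for q, c in zip(query, p))
    return min(protos, key=lambda lbl: dist(protos[lbl]))

# Toy embeddings for two hypothetical LSM signs, two shots each
support = {"HOLA":    [[0.9, 0.1, 0.0], [1.1, 0.0, 0.1]],
           "GRACIAS": [[0.0, 1.0, 0.2], [0.1, 0.9, 0.0]]}
label = classify([1.0, 0.1, 0.0], prototypes(support))  # → "HOLA"
```

Cross-lingual transfer then amounts to asking whether prototypes built in one sign language's embedding space remain discriminative after alignment to another's.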
https://www.dazeddigital.com/life-culture/article/40076/1/self-diagnosis-mental-health-anxiety-online
This research proposes the development of natural language processing and machine learning models for the early identification of patterns associated with depression and anxiety in data extracted from social media platforms. The project will explore deep textual representations, multimodal embeddings, and robust classification techniques, integrating interpretability analysis and ethical evaluation oriented toward digital health applications.
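As a point of reference for the deep textual representations to be explored, the simplest baseline is a count-based bag-of-words vector; the vocabulary and post below are purely illustrative:

```python
import re
from collections import Counter

def bow_vector(text, vocab):
    """Count-based bag-of-words vector over a fixed vocabulary --
    the baseline that learned embeddings would replace."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

# Hypothetical vocabulary and post, both purely illustrative
vocab = ["tired", "alone", "happy", "worried"]
vec = bow_vector("I feel tired and alone, so tired", vocab)  # → [2, 1, 0, 0]
```

Interpretability analysis is straightforward at this baseline (each dimension is a word); part of the proposed work is retaining comparable transparency with deep and multimodal embeddings.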
https://www.astro.uu.se/~oleg/di_mag.html
This thesis proposes the application of deep learning techniques for the detection, classification, and characterization of astronomical objects from large-scale observational datasets. The research will explore supervised and unsupervised neural architectures, transfer learning strategies, and dimensionality reduction methods to optimize pattern identification in spectral data and astronomical images.
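Before the neural architectures, a useful sanity-check baseline for pattern identification in binned spectra is nearest-neighbor matching in feature space; the 4-bin "spectra" and class labels below are toy assumptions:

```python
def nearest_neighbor(query, labeled):
    """1-NN over flattened feature vectors (e.g. binned spectra):
    a sanity-check baseline, not the proposed deep models."""
    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(query, v))
    return min(labeled, key=lambda item: dist(item[0]))[1]

# Toy 4-bin "spectra" with hypothetical class labels
labeled = [([0.9, 0.8, 0.1, 0.0], "star"),
           ([0.1, 0.2, 0.9, 0.8], "galaxy")]
label = nearest_neighbor([0.8, 0.7, 0.2, 0.1], labeled)  # → "star"
```

The dimensionality reduction methods in the proposal would shrink the raw spectral bins into the compact feature vectors that a matcher like this (or a learned classifier) consumes.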
https://fishi.ph/
This thesis proposes the development of a computer vision and deep learning system for the automated monitoring of marine species using aerial and underwater video. The project will integrate detection, segmentation, tracking, and identification of individuals, as well as the estimation of population-level attributes, incorporating temporal modeling to enhance robustness under environmental variability, occlusions, and challenging lighting conditions.
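The tracking stage typically rests on associating detections across frames by bounding-box overlap (intersection-over-union); a minimal greedy sketch with hypothetical box coordinates:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Associate last-frame tracks with new detections by best IoU
# (hypothetical boxes; real systems add motion models and gating)
prev = {1: (10, 10, 50, 50), 2: (100, 100, 140, 140)}
new = [(12, 11, 52, 51), (101, 99, 141, 139)]
matches = {tid: max(range(len(new)), key=lambda j: iou(box, new[j]))
           for tid, box in prev.items()}  # → {1: 0, 2: 1}
```

The temporal modeling in the proposal is what keeps such associations stable when occlusions, murky water, or lighting changes break the frame-to-frame overlap assumption.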