Publications

An Analysis of the Impact of Gender and Age on Perceiving and Identifying Sexist Posts [pdf]

Jimenez-Martinez, M. P., Lopez-Nava, I. H., & Montes-y-Gómez, M. (2024, June). An Analysis of the Impact of Gender and Age on Perceiving and Identifying Sexist Posts. In Mexican Conference on Pattern Recognition (pp. 308-318). Cham: Springer Nature Switzerland.

This research addresses the challenge of detecting sexism in Spanish-language tweets on social media. Our analysis explores labeling differences among annotators with diverse sociodemographic attributes, emphasizing their relevance in automated model development. Using a dataset enriched with labels from six diverse profiles, our study revealed nuanced perceptions of sexism across different genders and age ranges. Although there is considerable agreement between genders, instances of disagreement persist. Similarly, while consensus is greater across age groups, disagreements still arise. We use a RoBERTuito model fine-tuned for sexism identification, reaching an F1-score of 0.856 when trained only on the labels of the oldest age profile. These instances underscore the necessity for continuous model refinement to effectively capture subtle language variations.

Mapping Activities onto a Two-Dimensional Emotions Model for Dog Emotion Recognition Using Inertial Data [pdf]

Garcia-Loya, E. Y., Urbina-Escalante, M., Reyes-Meza, V., Pérez-Espinosa, H., & Lopez-Nava, I. H. (2024, June). Mapping Activities onto a Two-Dimensional Emotions Model for Dog Emotion Recognition Using Inertial Data. In Mexican Conference on Pattern Recognition (pp. 107-118). Cham: Springer Nature Switzerland.

Understanding animal reactions is essential for animal welfare, but accurately interpreting dogs’ emotions, despite their bond with humans, is challenging and often yields subjective results from human observers. Emotions manifest through physiological changes, such as heart rate fluctuations, or behavioral patterns, such as dog movements. In the present study, we measured and analyzed the movements of a group of dogs during four activities mapped onto two dimensions of emotion: arousal and valence. These activities (frustration, toy, abandonment, petting) were performed in natural settings while the dogs wore the PATITA capture device. Statistical and temporal features were derived from acceleration signals and used to train various classification models. An average F1-score of 0.92 (0.05) was achieved when classifying the four emotions with the ExtraTrees classifier. This work contributes to a more accurate and consistent understanding of canine emotional states using dog movements, which has potential applications in shelters, day-care centers, and even homes, where dogs often spend a lot of time alone.

An Open Framework for Nonverbal Communication in Human-Robot Interaction [pdf]

Lozano, E. A., Sánchez-Torres, C. E., López-Nava, I. H., & Favela, J. (2023, November). An Open Framework for Nonverbal Communication in Human-Robot Interaction. In International Conference on Ubiquitous Computing and Ambient Intelligence (pp. 21-32). Cham: Springer Nature Switzerland.

Nonverbal communication plays a vital role in human interaction. In the context of Human-Robot Interaction (HRI), social robots are designed primarily for verbal communication with humans, making nonverbal communication an open research area. We present a flexible, open framework designed to facilitate nonverbal interactions in HRI. Among its components is a P2P Browser-Based Computational Notebook, leveraged to code, run, and share reactive programs. Machine-learning models can be included for real-time recognition of gestures, poses, and moods, employing protocols such as MQTT. Another key component is a broker for distributing data among different physical devices, such as the robot, wearables, and environmental sensors. We demonstrate this framework’s utility through two interaction scenarios: (i) one employing proxemics and gaze direction to initiate an impromptu encounter, and (ii) another incorporating object recognition and a Large Language Model to suggest meals to be cooked based on available ingredients. These scenarios illustrate how the framework’s components can be seamlessly integrated to address new situations in which robots need to infer nonverbal cues from users.
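
The broker component follows a topic-based publish/subscribe pattern (the paper mentions MQTT). As a rough, hypothetical sketch of that pattern only — the topic names and message shape below are illustrative, not the framework's actual API — an in-process version looks like this:

```python
from collections import defaultdict

# Minimal in-process sketch of topic-based publish/subscribe, the pattern a
# broker such as MQTT provides. Topic names and payloads are hypothetical.

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to be invoked for every message on `topic`."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver `message` to all callbacks subscribed to `topic`."""
        for callback in self._subscribers[topic]:
            callback(message)

# Example: a robot-side component reacts to gesture events produced by a
# recognition model elsewhere in the system.
broker = Broker()
events = []
broker.subscribe("user/gesture", events.append)
broker.publish("user/gesture", {"gesture": "wave", "confidence": 0.93})
```

In a real deployment the broker runs as a separate service and devices connect over the network; the decoupling shown here (publishers never reference subscribers directly) is what lets new sensors or models be added without changing existing components.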

DAKTILOS: An Interactive Platform for Teaching Mexican Sign Language (LSM) [pdf]

Gortarez-Pelayo, J. J., Morfín-Chávez, R. F., & Lopez-Nava, I. H. (2023, November). DAKTILOS: An Interactive Platform for Teaching Mexican Sign Language (LSM). In International Conference on Ubiquitous Computing and Ambient Intelligence (pp. 264-269). Cham: Springer Nature Switzerland.

This paper presents DAKTILOS, an interactive platform for teaching Mexican Sign Language (LSM) based on Artificial Intelligence (AI) models. The platform was developed with recent Web technologies that natively integrate real-time hand tracking from 2D images. It was designed to recognize and score static and dynamic LSM alphabet signs made by users. Once the 21 hand keypoints were extracted with the MediaPipe AI model, they were dynamically compared to the target manual configurations (letters) using the FingerPose classifier and scored to determine whether each sign was performed correctly, providing visual feedback. DAKTILOS lets users explore 27 letters, indicating the correct configuration with a 3D hand. Preliminary tests with eight subjects were conducted to assess the functionality of the platform. This teaching tool can help bridge the communication gap between the deaf and hearing communities.

Fingerspelling Recognition in Mexican Sign Language (LSM) Using Machine Learning [pdf]

Morfín-Chávez, R. F., Gortarez-Pelayo, J. J., & Lopez-Nava, I. H. (2023, November). Fingerspelling Recognition in Mexican Sign Language (LSM) Using Machine Learning. In Mexican International Conference on Artificial Intelligence (pp. 110-120). Cham: Springer Nature Switzerland.

Sign languages allow deaf people to express their thoughts, emotions, and opinions in a complex and complete way, just like oral languages. Each sign language is unique and has its own grammar, syntax, and vocabulary. Mexican Sign Language (LSM) is characterized by rich gestural and facial expression that gives it a great communicative and linguistic capacity. In the study of LSM, two main components have been identified: (i) fingerspelling, and (ii) ideograms. The first is similar to spelling in oral languages and is used to communicate proper names, technical terms, or words for which there are no specific signs or which are little known to the deaf community. In this paper, we propose a method for recognizing the LSM alphabet using machine learning-based techniques capable of classifying the signs made by 10 test subjects. Twenty-one hand keypoints were extracted using the MediaPipe library to provide a better representation for the classification models. The results when classifying the 21 letters exceeded an F1-score of 0.98 with 3 of the 4 trained classifiers, with fewer than 3 letters scoring below 0.95. Tools such as the one proposed in this work can facilitate seamless communication by translating Spanish into LSM and vice versa, allowing both communities to engage effectively in various settings.
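
The pipeline above can be pictured in miniature: 21 hand keypoints per frame become a feature vector, and a classifier assigns a letter. The sketch below uses synthetic keypoints and a simple nearest-centroid rule as a stand-in for the paper's trained classifiers (which are not specified here), so it illustrates the framing, not the actual method:

```python
import numpy as np

# Hypothetical sketch: 21 hand keypoints (x, y, z), as a library such as
# MediaPipe would extract, flattened into a feature vector and classified.
# A nearest-centroid rule stands in for the paper's trained classifiers.

def flatten_keypoints(keypoints):
    """Turn a (21, 3) array of hand keypoints into a 63-d feature vector,
    translated so the wrist (keypoint 0) sits at the origin."""
    kp = np.asarray(keypoints, dtype=float)
    return (kp - kp[0]).reshape(-1)   # wrist-centering: translation invariance

class NearestCentroid:
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.stack([
            np.asarray([x for x, t in zip(X, y) if t == lab]).mean(axis=0)
            for lab in self.labels_])
        return self

    def predict(self, X):
        dists = np.linalg.norm(
            np.asarray(X)[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.labels_[i] for i in dists.argmin(axis=1)]
```

Wrist-centering makes the representation insensitive to where the hand appears in the image, which is one reason keypoint features transfer better across subjects than raw pixels.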

Multidisciplinary and Interinstitutional Collaboration during the COVID-19 Pandemic: from basic research to the technological development of Exergames for Rehabilitation [pdf]

Perez-Sanpablo, A. I., Rodriguez, M. D., Meneses-Peñaloza, A., López-Nava, I. H., García-Vázquez, J. P., & Armenta-García, J. A. (2023, September). Multidisciplinary and Interinstitutional Collaboration during the COVID-19 Pandemic: from basic research to the technological development of Exergames for Rehabilitation. In 2023 Mexican International Conference on Computer Science (ENC) (pp. 1-6). IEEE.

The COVID-19 pandemic significantly impacted technological research and development. The pandemic encouraged remote collaboration and data collection. Here, experiences on opportunities and challenges encountered in developing two studies on exergames for physical rehabilitation in older adults and children are analyzed and shared. The first study deals with fundamental research, where a scoping review was conducted to investigate the use of exergames in older adults. Methodological challenges included establishing a standard frame of reference, coordinating multidisciplinary efforts, and adapting to remote collaboration during the pandemic. The second study focused on developing a virtual reality video game, BioGAIT, for gait rehabilitation in pediatric patients. Methodological challenges included remote collaboration between students and researchers, limited access to resources and infrastructure, and evaluations and tests. In general, the authors show the importance of teamwork, communication, time management, and selection of work tools. They stress the need for clear goals, shared leadership, trust-building, technology-enabled remote collaboration, multidisciplinary approaches, and peer support to address coordination and motivation.

A Modular Framework for Modelling and Verification of Activities in Ambient Intelligent Systems [pdf]

Konios, A., Khan, Y. I., Garcia-Constantino, M., & Lopez-Nava, I. H. (2023, July). A Modular Framework for Modelling and Verification of Activities in Ambient Intelligent Systems. In International Conference on Human-Computer Interaction (pp. 503-530). Cham: Springer Nature Switzerland.

There is a growing need to introduce and develop formal techniques for computational models capable of faithfully modelling highly complex and concurrent systems, such as ambient intelligent systems. This article proposes an efficient framework for the automated modelling and verification of behavioural models capturing the daily activities that occur in ambient intelligent systems, based on the modularity and compositionality of Petri nets. The framework consists of different stages that incorporate Petri net techniques such as composition, transformation, unfolding, and slicing. These techniques facilitate the modelling and verification of the system activities under consideration by allowing modelling in different Petri net classes, and verification of the produced models either by applying model checking directly or by applying Petri net slicing to alleviate the state explosion problem that may emerge in very complex behavioural models. Illustrative examples of ambient intelligent systems applied to health and other sectors are provided to demonstrate the practicality and effectiveness of the proposed approach. Finally, to show the flexibility of the proposed framework in terms of verification, an evaluation and a comparison of the state space required for property checking are conducted with respect to the typical model checking and slicing approaches, respectively.

Estimation of Stokes Parameters Using Deep Neural Networks [pdf]

Raygoza-Romero, J. M., Lopez-Nava, I. H., & Ramírez-Vélez, J. C. (2023, June). Estimation of Stokes Parameters Using Deep Neural Networks. In Mexican Conference on Pattern Recognition (pp. 159-168). Cham: Springer Nature Switzerland.

Magnetic fields play a very important role in stellar evolution and vary depending on the evolutionary stage. To understand how stellar magnetic fields evolve, it is necessary to measure and map the magnetic fields over the stellar surface. This can be done through spectropolarimetric observations of the four Stokes parameters (I, Q, U, and V). In this work, we propose a deep-learning approach to estimate the Stokes parameters based on eight input parameters (the dipolar moment strength, m; the magnetic dipole position inside the star; the rotation phase, p; the magnetic geometry of the dipolar configuration; and the inclination angle of the stellar rotation axis with respect to the line of sight, i), using a synthetic dataset generated by COSSAM. Different configurations of a neural network were experimented with: the number of layers and neurons; the scaling of the input and output parameters; the size of the training data; and estimating the output parameters separately and jointly. The best configuration of the neural network model scores a mean squared error of 1.4e-7, 2.4e-8, 1.5e-8, and 1.3e-7 for Stokes I, Q, U, and V, respectively. In summary, the model effectively estimated Stokes I and V, which correspond to the total intensity and circular polarization of the light emitted by magnetic stars; however, it struggled with Stokes Q and U, which represent linear polarization components and are generally very small for small m. Overall, our work presents a promising avenue for advancing our understanding of stars that host a magnetic field.
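
The regression setup — a network mapping a handful of input parameters to a target signal, scored by mean squared error — can be illustrated with a toy example. The architecture, data, and hyperparameters below are stand-ins, not the paper's (which trained on COSSAM-synthesized spectra):

```python
import numpy as np

# Toy illustration of the regression framing only: a one-hidden-layer network
# mapping 8 input parameters to a scalar target, trained by plain gradient
# descent on synthetic data. Sizes and learning rate are arbitrary stand-ins.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 8))          # 8 stellar/geometry inputs
target = np.sin(X @ rng.normal(size=(8, 1)))   # stand-in for a Stokes profile

W1 = rng.normal(0, 0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.01

losses = []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)                   # hidden layer
    Y = H @ W2 + b2                            # linear output
    err = Y - target
    losses.append(float(np.mean(err ** 2)))    # mean squared error
    # Backpropagation by hand (gradients of the MSE loss).
    dY = 2.0 * err / len(X)
    dH = dY @ W2.T * (1.0 - H ** 2)            # tanh derivative
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)
```

The paper's experiments over layer/neuron counts, input/output scaling, and joint vs. separate outputs are all variations on this same loop; only the architecture and data change.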

Analysis of Accelerometer Data for Personalised Mood Detection in Activities of Daily Living [pdf]

Altamirano-Flores, Y. V., Konios, A., Lopez-Nava, I. H., Garcia-Constantino, M., Ekerete, I., & Mustafa, M. A. (2023, March). Analysis of Accelerometer Data for Personalised Mood Detection in Activities of Daily Living. In 2023 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops) (pp. 200-205). IEEE.

This paper proposes a novel approach to identify moods in Activities of Daily Living (ADLs) using accelerometer sensor data from 15 participants over 7 sessions each. Monitoring ADLs and detecting moods are of particular importance due to the potential life-changing consequences. The ADLs considered relate to preparing and drinking a hot beverage, and they were segmented into four sub-activities: (i) entering the kitchen, (ii) preparing the beverage, (iii) drinking the beverage, and (iv) exiting the kitchen. The accelerometer was attached to the participants’ wrists, and prior to collecting the data, they were asked about their current mood. Two approaches were considered in the analysis according to the moods reported by the participants (happy, calm, tired, stressed, excited, sad, and bored): first using all trials, and second using a balanced sample of the data. A set of statistical, temporal, and spectral features was extracted from the acceleration data, and personalised classification models were built and evaluated using the Random Forest algorithm. The experimental results showed that the average F-measure for all personalised classifiers was 0.75 considering all data, and 0.76 using balanced data. The best classification results were obtained with the “preparing” and “drinking” activities, and with the “happy”, “calm”, and “stressed” moods. This suggests that the use of accelerometers, such as those incorporated into smartwatches or activity trackers, may be useful in detecting moods in ADLs.
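
The feature-extraction step can be sketched as follows. This is a hypothetical, minimal subset of the statistical, temporal, and spectral features the abstract mentions, computed over one single-axis accelerometer window; the actual feature set and sampling rate of the study are not specified here:

```python
import numpy as np

# Hypothetical sketch of per-window feature extraction from one accelerometer
# axis. Feature names and the 50 Hz sampling rate are illustrative only.

def window_features(signal, fs=50):
    """Compute a small feature dictionary for one window sampled at `fs` Hz."""
    x = np.asarray(signal, dtype=float)
    feats = {
        "mean": float(x.mean()),                                  # statistical
        "std": float(x.std()),
        "min": float(x.min()),
        "max": float(x.max()),
        "rms": float(np.sqrt(np.mean(x ** 2))),
        "zero_crossings": int((np.diff(np.sign(x)) != 0).sum()),  # temporal
    }
    # Spectral: dominant frequency of the mean-removed window via the FFT.
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    feats["dominant_freq"] = float(freqs[spectrum.argmax()])
    return feats
```

Vectors like this, computed per window and per axis, are the kind of input a per-participant Random Forest would then be trained on.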

CICESE at DA-VINCIS 2023: Violent Events Detection in Twitter using Data Augmentation Techniques [pdf]

Ponce-León, E., & López-Nava, I. H. (2023). CICESE at DA-VINCIS 2023: Violent Events Detection in Twitter using Data Augmentation Techniques. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2023), CEUR Workshop Proceedings. CEUR-WS.org.

This paper describes our participation in the DA-VINCIS shared evaluation campaign at IberLEF 2023. We address the two proposed subtasks, Violent Event Identification (subtask 1) and Violent Event Category Recognition (subtask 2), using multimodal information from tweets (text and images) with a Bidirectional Encoder Representations from Transformers (BERT) model, with and without data augmentation techniques. For text augmentation, the GPT-3 model and prompt engineering were used, while for image augmentation, related images were retrieved from the web and image captioning was used to handle the visual information. Our approach obtained second place in subtask 1 (F1 = 0.9203) and first place in subtask 2 (F1 = 0.8797) among 16 different teams.

Current state and trends of the research in exergames for the elderly and their impact on health outcomes: a scoping review [pdf]

López-Nava, I. H., Rodriguez, M. D., García-Vázquez, J. P., Perez-Sanpablo, A. I., Quiñones-Urióstegui, I., Meneses-Peñaloza, A., ... & Favela, J. (2023). Current state and trends of the research in exergames for the elderly and their impact on health outcomes: a scoping review. Journal of Ambient Intelligence and Humanized Computing, 14(8), 10977-11009.

In recent years there has been significant interest in assessing the impact of exergames on the healthcare of older adults. This scoping review aims to present an overview of the current state and trends of clinical studies conducted to determine the benefits of using exergames for healthy aging between 2000 and 2019. We included original studies published in English that use exergames to address common diseases associated with subjects over 60 years. The search was conducted in major electronic databases (PubMed, IEEE Xplore Digital Library, Web of Science, ACM Digital Library, and Scopus). At least two reviewers analyzed articles independently based on the inclusion criteria and a pre-defined taxonomy to extract information. Abstracted information was summarized in tables and analyzed by publication date using temporal plots and thematic analysis. A total of 4502 potentially relevant studies were identified and assessed to select 130 articles for analysis. Most studies were randomized controlled trials (70) published in journals (119), in the medical or biological area (101), with the aim of treatment (118), addressing mobility-related conditions (87). Most studies used videogame consoles (56) and commercial games (81). We detected a change in the trend of research with an increasing interest in neurological or mental-related conditions after 2017, accompanied by reports of positive results. Consequently, this represents an important area to continue exploring. We found an opportunity to conduct further analysis and studies to support the benefit of exergaming on improving medical outcomes and their long-term effects.

Approach to a Lower Body Gait Generation Model Using a Deep Convolutional Generative Adversarial Network [pdf]

Carneros-Prado, D., Dobrescu, C. C., Cabañero, L., Altamirano-Flores, Y. V., Lopez-Nava, I. H., González, I., ... & Hervas, R. (2022, November). Approach to a Lower Body Gait Generation Model Using a Deep Convolutional Generative Adversarial Network. In Proceedings of the International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2022) (pp. 419-430). Cham: Springer International Publishing.

Research on gait analysis has become more relevant in recent years, especially as a tool to detect early frailty signs. However, data gathering is often difficult and requires substantial resources. Synthetic data generation is a great complementary tool for data gathering that enables the augmentation of existing datasets. Despite not being a new concept, it has gained popularity in recent years thanks to Generative Adversarial Networks (GANs), a neural network architecture capable of creating data indistinguishable from the original. In this article, deep convolutional GANs have been used to artificially expand a gait dataset containing data from the lower part of the body. The synthetic data have been studied through three approaches: viewing animations of the points and comparing them to the originals; applying a principal component analysis algorithm to both datasets to visually assess how each is distributed; and extracting different features from both datasets to compare their statistical differences. The evaluation showed promising results, which opens a path for using synthetic data generation in the gait analysis domain.
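
The PCA-based comparison of original and synthetic data can be sketched roughly as follows. The data here are synthetic stand-ins (not the lower-body gait recordings used in the paper), and the "GAN output" is simulated by perturbing the originals:

```python
import numpy as np

# Rough sketch of the PCA comparison: fit principal axes on the original
# dataset, project both original and synthetic data into that space, and
# compare how they are distributed. All data below are illustrative stand-ins.

def fit_pca(X, n_components=2):
    """Return the mean and top principal axes of X via SVD."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, axes):
    return (X - mean) @ axes.T

rng = np.random.default_rng(1)
original = rng.normal(size=(500, 30))                            # stand-in gait features
synthetic = original + rng.normal(0, 0.1, size=original.shape)   # stand-in GAN output

mean, axes = fit_pca(original)
po = project(original, mean, axes)
ps = project(synthetic, mean, axes)
# Similar per-component spread suggests the synthetic data follows the
# original distribution in the projected space.
spread_gap = np.abs(po.std(axis=0) - ps.std(axis=0))
```

In practice one would also inspect scatter plots of the two projections, which is the visual assessment the paper describes.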

Emotion Recognition from Human Gait Using Machine Learning Algorithms [pdf]

Altamirano-Flores, Y. V., Lopez-Nava, I. H., González, I., Dobrescu, C. C., & Carneros-Prado, D. (2022, November). Emotion Recognition from Human Gait Using Machine Learning Algorithms. In Proceedings of the International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2022) (pp. 77-88). Cham: Springer International Publishing.

The analysis of human gait has been widely used in the clinical field, e.g., for the early diagnosis of some diseases. On the other hand, it is possible to associate movement patterns during gait with several human behaviors, such as emotions. The main objective of this work is to generate models to classify three discrete emotions: happy, sad, and angry, considering the neutral state as an additional class. A set of features was extracted from the 3D position of the human skeleton during walking sessions. A descriptive analysis of the data was performed in order to select the best subsets of joints for recognizing the emotions. The models were built with the kNN, Random Forest, and meta-classifier (boosting) algorithms. The best results were obtained with boosting, with a mAP of 0.77 for balanced data and 0.79 for unbalanced data. The results were promising when using methods based on shallow machine learning; a deep learning approach is currently being explored.

Validity of Using a Driving Game Simulator to Study the Visual Attention Differences in Young and Older Adults [pdf]

Vera-Uribe, E. M., Rodríguez, M. D., Armenta, J. S., & López-Nava, I. H. (2022, November). Validity of Using a Driving Game Simulator to Study the Visual Attention Differences in Young and Older Adults. In Proceedings of the International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2022) (pp. 26-37). Cham: Springer International Publishing.

Prior research has identified that visual behavior patterns differ between younger and older adults, affecting their driving performance. The lack of intelligent sensing technologies to conduct such research motivated us to develop the Intelligent Multimodal Monitoring System to Infer Points of Visual Attention (SiMIPAV). It infers five classes of visual points of attention (VPoA) from the movements of the driver’s head with 98% accuracy. It also includes components to measure properties of visual attention as a function of head posture (i.e., duration and frequency of looking at specific points in the car’s cockpit). The present study aims to validate the feasibility of measuring visual behaviors with SiMIPAV using a driving simulator, which would facilitate further studies in a safe environment. Through a between-subjects study, we compared the visual behavior properties of 27 young adults (YA) aged 21–31 years and 20 older adults (OA) aged 59–74 years, who participated in either the naturalistic condition (YA = 15; OA = 15) or the simulation condition (YA = 12; OA = 5). We found that the frequency of looking at the road negatively correlates with driving velocity in both conditions. However, road gaze duration and speed are only correlated in the naturalistic condition. In addition, in the simulator, the younger group exhibited riskier behavior than the older adults, looking less frequently at VPoAs critical to driving (i.e., rearview mirrors) and having longer gaze durations at all VPoAs than in naturalistic driving.

Towards Recognition of Driver Drowsiness States by Using ECG Signals [pdf]

Garcia-Perez, S., Rodríguez, M. D., & Lopez-Nava, I. H. (2022, November). Towards Recognition of Driver Drowsiness States by Using ECG Signals. In Proceedings of the International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2022) (pp. 369-380). Cham: Springer International Publishing.

Drowsy driving is one of the leading causes of car accidents that can result in great loss and tragedy, which could be prevented with early warning. Recent work has used behavioral, physiological, and driving-skill traits that are present during drowsiness, such as yawning, closed eyes, decreased heart rate, and sudden steering wheel movements. From these traits, features can be extracted to be used in machine learning (ML) models for the automatic detection of the state of drowsiness. On the other hand, studying fatigue or sleepiness in real settings entails risks by exposing test subjects to states of non-alertness. In the present work, we propose to use a combination of features extracted from physiological signals, captured with a wearable ECG sensor (Polar H10) in a simulated driving environment, to build and evaluate ML-based models that classify different levels of drowsiness. These levels were recorded by self-report using the Karolinska Sleepiness Scale. An accuracy of 76.5% was achieved with kNN when classifying drowsiness into 2 levels, and 70.5% using Random Forest when classifying drowsiness into 3 levels. The results obtained are promising despite the fact that only physiological traits were processed.

Analysis of accelerometer data for personalised abnormal behaviour detection in activities of daily living [pdf]

Garcia-Constantino, M., Konios, A., Lopez-Nava, I. H., Pouliet, P., Ekerete, I., Mustafa, M. A., ... & Morrison, G. (2022, November). Analysis of accelerometer data for personalised abnormal behaviour detection in activities of daily living. In Proceedings of the International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2022) (pp. 302-313). Cham: Springer International Publishing.

This paper proposes a novel approach to identify personalised abnormal behaviour in Activities of Daily Living (ADLs) using accelerometer sensor data. The ADLs considered are: (i) preparing and drinking tea, and (ii) preparing and drinking coffee. Abnormal behaviour identified in the context of these activities can be an indicator of a progressive health problem or the occurrence of a hazardous incident. Monitoring ADLs for detecting abnormal behaviour is of particular importance due to the potential life-changing consequences that could result from not acting in a timely manner. Prior to performing ADLs, the participants were asked six questions related to their well-being and mood. In addition to data collected from accelerometers, data was also collected from contact and thermal sensors, and radar. The work presented is a first step towards a more personalised approach in which individual user profiles are considered, as it is acknowledged that people behave differently from each other. Thus, data was collected seven times for each participant. We have evaluated our approach with accelerometer data collected from 15 participants. The experimental results show that accelerometer data is sufficient to identify the main stages of the ADLs considered, and therefore any unusual changes in the signals and their duration could mean that abnormal behaviour occurred.

Uso de plataforma de videojuegos de conducción para analizar el desempeño visual de los conductores: estudio piloto [pdf]

Vera, E. M., Armenta, J. S., Hernández-Vidal, L. P., Rodríguez, M. D., López-Nava, H., & García-Pérez, S. (2022, August). Uso de plataforma de videojuegos de conducción para analizar el desempeño visual de los conductores: estudio piloto. In 2022 IEEE Mexican International Conference on Computer Science (ENC) (pp. 1-7). IEEE.

“Fitness to drive” is the ability to drive a vehicle safely. It decreases when cognitive dimensions such as processing speed are affected in older adults. The objective way to measure it is through observation to identify crash risks in naturalistic driving, which is costly in time and money. Other objective measures that can be easily monitored need to be investigated. Our project aims to examine the association between the properties of visual attention (i.e., duration and frequency of looking at specific points in the cabin) and drivers' cognitive performance. Previously, we developed the Intelligent Multi-modal Monitoring System to infer Points of Visual Attention (SiMIPAV). It measures the properties of visual attention based on head posture, which we validated with 15 young adults and 15 older adults. This study aims to validate the feasibility of measuring visual behaviors using SiMIPAV with a simulator, which would facilitate data collection in a safe environment. We recruited 12 young adults and 5 adults over the age of 59 who drove in a simulator based on video game platforms. We found that road gaze frequency was negatively correlated with driving speed in both conditions (simulator and naturalistic) and for both age groups. However, road gaze duration and speed are only correlated in the naturalistic condition.

Satellite imagery classification using shallow and deep learning approaches [pdf]

Sainos-Vizuett, M., & Lopez-Nava, I. H. (2021, June). Satellite Imagery Classification Using Shallow and Deep Learning Approaches. In Mexican Conference on Pattern Recognition (pp. 163-172). Springer, Cham.

Recent advances in remote sensing technology and high-resolution satellite imagery offer great possibilities for understanding the Earth’s surface. However, satellite image classification is a challenging problem due to the high variability inherent in satellite data. To this end, two learning approaches are proposed and compared for classifying a large-scale dataset including different types of land-use and land-cover surfaces (EuroSAT). Traditional (shallow) machine learning models and deep learning models are built using a set of features extracted from the satellite images, with the deep models additionally using the RGB images directly. The best F1-score obtained by the shallow approach was 0.87, while for the deep approach it was 0.91. No significant difference was found between these results; however, significant improvements could be made by exploring the deep approach in greater depth.

Adoption of Wearable Devices by People with Dementia: Lessons from a non-pharmacological intervention enabled by a social robot [pdf]

Cruz-Sandoval, D., Favela, J., Lopez-Nava, I. H., & Morales, A. (2021). Adoption of Wearable Devices by Persons with Dementia: Lessons from a Non-pharmacological Intervention Enabled by a Social Robot. In IoT in Healthcare and Ambient Assisted Living (pp. 145-163). Springer, Singapore.

Wearable technology is increasingly being used in healthcare research. Studies involving older adults using these devices are also increasing, but very few have been reported with persons with dementia (PwDs). This is understandable, since there are many barriers to the adoption of this technology by PwDs. Yet, monitoring PwDs’ activities and behaviors is essential for tracking disease progression, assessing the efficacy of interventions, and for safety reasons. This is particularly relevant in nursing homes, which are facing severe challenges with the current health crisis due to COVID-19, since one of the means of dealing with it is remote monitoring and tracking and, in general, instrumenting them as Ambient Assisted Living spaces. We report on a study in which we conducted a non-pharmacological intervention guided by a social robot in a nursing home with the participation of ten PwDs and six caregivers. The cognitive stimulation therapy lasted for nine weeks, during which participants used a wearable device throughout the day. The data gathered from the devices were useful in obtaining a better understanding of how behaviors changed during the intervention. In particular, we report on the adoption of the wearables by PwDs, the efficacy of the strategies we implemented, and lessons learned. We finish the chapter with recommendations for the adoption of wearable devices for activity monitoring in studies involving people with dementia.

Monitoring Behavioral Symptoms of Dementia Using Activity Trackers [pdf]

Favela, J., Cruz-Sandoval, D., Morales-Tellez, A., & Lopez-Nava, I. H. (2020). Monitoring Behavioral Symptoms of Dementia Using Activity Trackers. Journal of Biomedical Informatics, 103520.

Tertiary disease prevention for dementia focuses on improving the quality of life of the patient. The quality of life of people with dementia (PwD) and their caregivers is hampered by the presence of behavioral and psychological symptoms of dementia (BPSD), such as anxiety and depression. Non-pharmacological interventions have proved useful in dealing with these symptoms. However, while most PwD exhibit BPSD, their manifestation (in frequency, intensity, and type) varies widely among patients, hence the need to personalize the intervention and its assessment. Traditionally, instruments to measure behavioral symptoms of dementia, such as the NPI-NH and CMAI, are used to evaluate these interventions. We propose the use of activity trackers as a complement to monitor behavioral symptoms in dementia research. To illustrate this approach, we describe a nine-week Cognitive Stimulation Therapy conducted with the assistance of a social robot, in which the ten participants wore an activity tracker. We describe how data gathered from these wearables complement the assessment of traditional behavior assessment instruments, with the advantage that this assessment can be conducted continuously and thus be used to tailor the intervention to each PwD.

Gait Activity Classification on Unbalanced Data from Inertial Sensors Using Shallow and Deep Learning [pdf]

Lopez-Nava, I. H., Valentín-Coronado, L. M., Garcia-Constantino, M., & Favela, J. (2020). Gait Activity Classification on Unbalanced Data from Inertial Sensors Using Shallow and Deep Learning. Sensors, 20(17), 4756.

Activity recognition is one of the most active areas of research in ubiquitous computing. In particular, gait activity recognition is useful for identifying various risk factors in people's health that are directly related to their physical activity. One of the issues in activity recognition, and gait recognition in particular, is that datasets are often unbalanced (i.e., the distribution of classes is not uniform), and due to this disparity, models tend to assign instances to the majority class. In the present study, two methods for classifying gait activities using accelerometer and gyroscope data from a large-scale public dataset were evaluated and compared. The gait activities in this dataset are: (i) going down an incline, (ii) going up an incline, (iii) walking on level ground, (iv) going down stairs, and (v) going up stairs. The proposed methods are based on conventional (shallow) and deep learning techniques. In addition, three data treatments were evaluated: the original unbalanced data, sampled data, and augmented data. The latter was based on generating synthetic data from segmented gait data. The best results were obtained with classifiers built with augmented data, with F-measure results of 0.812 (σ = 0.078) for the shallow learning approach and 0.927 (σ = 0.033) for the deep learning approach. In addition, the data augmentation strategy proposed to deal with the unbalanced-data problem increased classification performance for both techniques.
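The augmentation idea can be sketched as follows; the jittering and amplitude-scaling operations, window size, and parameter values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_segments(segments, n_new, sigma=0.05, scale_range=(0.9, 1.1)):
    """Generate synthetic gait segments from real ones by adding Gaussian
    jitter and applying a random amplitude scale (hypothetical stand-in
    for the paper's segmented-gait augmentation)."""
    synthetic = []
    for _ in range(n_new):
        base = segments[rng.integers(len(segments))]   # pick a real window
        jitter = rng.normal(0.0, sigma, size=base.shape)
        scale = rng.uniform(*scale_range)
        synthetic.append(base * scale + jitter)
    return synthetic

# Minority-class windows of tri-axial acceleration (128 samples x 3 axes)
real = [rng.normal(size=(128, 3)) for _ in range(10)]
extra = augment_segments(real, n_new=40)
print(len(real) + len(extra))  # → 50 windows after balancing up
```

In a setting like the paper's, such synthetic windows would be added only to the minority gait classes before training the shallow and deep classifiers.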

Prototypical System to Detect Anxiety Manifestations by Acoustic Patterns in Patients with Dementia [pdf]

Hernandez, N., Garcia-Constantino, M., Beltran, J., Hecker, P., Favela, J., Rafferty, J., Cleland, I., Lopez, H., Arnrich, N., & McChesney, I., (2020). Prototypical System to Detect Anxiety Manifestations by Acoustic Patterns in Patients with Dementia. EAI Endorsed Transactions on Pervasive Health and Technology 5(19). 

INTRODUCTION: Dementia is a syndrome characterised by a decline in memory, language, and problem-solving that affects the ability of patients to perform everyday activities. Patients with dementia tend to experience episodes of anxiety that can persist for extended periods, which affects their quality of life. OBJECTIVES: To design AnxiDetector, a system capable of detecting patterns of sounds associated with the manifestation of anxiety in patients with dementia, both before and during episodes. METHODS: We conducted a non-participatory in-situ observation of 70 diagnosed patients and semi-structured interviews with four caregivers at a residential centre. Using the findings from our observation and caregiver interviews, we developed the AnxiDetector prototype and tested it in an experimental setting where we defined nine classes of audio representing two groups of sounds: (i) Disturbance, which includes audio files characterising sounds that trigger anxiety in patients with dementia, and (ii) Expression, which includes audio files characterising sounds expressed by the patients during episodes of anxiety. We conducted two experimental classifications of sounds using (i) a trained Deep Neural Network model and (ii) a Support Vector Machine model. The first evaluation consists of a binary discrimination between the two groups of sounds; the second discriminates among the nine classes of audio. The audio resources were retrieved from publicly available datasets. RESULTS: The qualitative results present the views of the caregivers on the adoption of AnxiDetector. The quantitative results from the binary discrimination show a classification accuracy of 98.1% and 99.2% for the Deep Neural Network and Support Vector Machine models, respectively. When classifying the nine classes of sound, the Deep Neural Network model achieves a classification accuracy of 92.2%, whereas the Support Vector Machine model yields an overall classification accuracy of 93.0%.
CONCLUSION: In this paper, we presented the outcomes from an in-situ observational study at a residential care centre, qualitative findings from interviews with caregivers, the design of AnxiDetector, and preliminary results of a methodology devised to detect relevant acoustic events associated with anxiety in patients with dementia. We conclude by outlining future plans to conduct an in-situ validation of the effectiveness of AnxiDetector for anxiety detection.

Human action recognition based on low- and high-level data from wearable inertial sensors [pdf]

Lopez-Nava, I. H., & Muñoz-Meléndez, A., (2019). Human action recognition based on low- and high-level data from wearable inertial sensors. International Journal of Distributed Sensor Networks 15(12), 1-12. 

Human action recognition supported by highly accurate specialized systems, ambulatory systems, or wireless sensor networks has tremendous potential in the areas of healthcare and wellbeing monitoring. Recently, several studies have focused on the recognition of actions using wearable inertial sensors, in which raw sensor data are used to build classification models; in a few of them, high-level representations directly related to anatomical characteristics of the human body are obtained. This research focuses on classifying a set of activities of daily living, such as functional mobility, and instrumental activities of daily living, such as preparing meals, performed by test subjects in their homes under naturalistic conditions. The joint angles of the upper and lower limbs are estimated using information from five wearable inertial sensors placed on the bodies of five test subjects. One set of features related to human limb motions is extracted from the orientation signals (high-level data) and another from the raw acceleration signals (low-level data), and both are used to build classifiers with four inference algorithms. The features proposed in this work are the number of movements and the average duration of consecutive movements. The classifiers successfully classify the set of actions with up to 77.8% using raw data and up to 93.3% using high-level data. This study enabled a comparison of two data levels for classifying a set of actions performed in daily environments using an inertial sensor network.
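The two proposed features (number of movements and average duration of consecutive movements) can be sketched from a joint-angle signal; the 50 Hz sampling rate and the angular-rate stillness threshold below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def movement_features(angle, still_threshold=2.0, fs=50):
    """Count movements and their average duration in a joint-angle signal.
    A 'movement' is a run of samples whose angular rate exceeds a
    stillness threshold (deg/s); threshold and fs are hypothetical."""
    rate = np.abs(np.diff(angle)) * fs            # approximate deg/s
    moving = rate > still_threshold
    edges = np.diff(moving.astype(int))           # +1 = run start, -1 = run end
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if moving[0]:
        starts = np.r_[0, starts]
    if moving[-1]:
        ends = np.r_[ends, len(moving)]
    durations = (ends - starts) / fs
    avg = float(durations.mean()) if len(starts) else 0.0
    return len(starts), avg

# Synthetic trace: two flexion-like bursts in an otherwise still signal
angle = np.zeros(300)
angle[100:120] = np.linspace(0, 20, 20)
angle[200:230] = np.linspace(0, 30, 30)
n, avg = movement_features(angle)
print(n, avg)  # → 2 movements, 0.5 s average duration
```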

Recognition of Gait Activities using Acceleration Data from a Smartphone and a Wearable Device [pdf]

Lopez-Nava, I. H., Garcia-Constantino, M., & Favela, J., (2019). Recognition of Gait Activities using Acceleration Data from a Smartphone and a Wearable Device. Proceedings 31(1), 60. 

Activity recognition is an important task in many fields, such as ambient intelligence, pervasive healthcare, and surveillance. In particular, the recognition of human gait can be useful for identifying characteristics of the physical spaces in which people move, such as whether a person is walking on level ground or going down stairs. For example, ascending or descending stairs can be a risky activity for older adults because of a possible fall, which can have more severe consequences than a fall on a flat surface. While portable and wearable devices have been widely used to detect Activities of Daily Living (ADLs), few research works in the literature have focused on characterizing only actions of human gait. In the present study, a method is introduced for recognizing gait activities using acceleration data obtained from a smartphone and a wearable inertial sensor placed on the ankle. The acceleration signals were segmented based on the automatic detection of strides, also called gait cycles. Subsequently, a feature vector was extracted from the segmented signals and used to train four classifiers using the Naive Bayes, C4.5, Support Vector Machines, and K-Nearest Neighbors algorithms. Data were collected from seven young subjects who performed five gait activities: (i) going down an incline, (ii) going up an incline, (iii) walking on level ground, (iv) going down stairs, and (v) going up stairs. The results demonstrate the viability of using the proposed method and technologies in ambient assisted living contexts.
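A minimal sketch of stride-based segmentation is a peak picker on the acceleration magnitude; the threshold and minimum peak spacing below are assumed values, and the paper's detector is more elaborate.

```python
import numpy as np

def detect_strides(acc_mag, threshold, min_gap):
    """Return indices of local maxima above `threshold` that are at
    least `min_gap` samples apart (simplified stride/gait-cycle picker)."""
    peaks = []
    for i in range(1, len(acc_mag) - 1):
        is_peak = acc_mag[i] >= acc_mag[i - 1] and acc_mag[i] > acc_mag[i + 1]
        if is_peak and acc_mag[i] > threshold:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

# Synthetic magnitude signal: a "step" spike every 100 samples over noise
rng = np.random.default_rng(0)
signal = 0.2 * rng.standard_normal(1000)
signal[::100] += 3.0
strides = detect_strides(signal, threshold=1.5, min_gap=50)
print(len(strides))  # → 9 (the spike at index 0 has no left neighbour)
```

The windows between consecutive detected peaks would then feed the feature-extraction and classification steps.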

Semi-Automated Data Labeling for Activity Recognition in Pervasive Healthcare [pdf]

Cruz-Sandoval, D., Beltran-Marquez, J., Garcia-Constantino, M., Gonzalez-Jasso, L., Favela, J., Lopez-Nava, I. H., Cleland, I., Ennis, A., Hernandez-Cruz, N., Rafferty, J., Synnott, J., & Nugent, C. (2019). Semi-Automated Data Labeling for Activity Recognition in Pervasive Healthcare. Sensors 19(14), 3035.

Activity recognition, a key component in pervasive healthcare monitoring, relies on classification algorithms that require labeled data of individuals performing the activity of interest to train accurate models. Labeling data can be performed in a lab setting where an individual enacts the activity under controlled conditions. The ubiquity of mobile and wearable sensors allows the collection of large datasets from individuals performing activities in naturalistic conditions. Gathering accurate data labels for activity recognition is typically an expensive and time-consuming process. In this paper, we present two novel approaches for semi-automated online data labeling performed by the individual executing the activity of interest. The approaches have been designed to address two of the limitations of self-annotation: (i) the burden on the user performing and annotating the activity, and (ii) the lack of accuracy due to the user labeling the data minutes or hours after the completion of an activity. The first approach is based on the recognition of subtle finger gestures performed in response to a data-labeling query. The second approach focuses on labeling activities that have an auditory manifestation and uses a classifier to obtain an initial estimation of the activity, together with a conversational agent to ask the participant for clarification or for additional data. Both approaches are described and evaluated in controlled experiments to assess their feasibility, and their advantages and limitations are discussed. Results show that while both studies have limitations, they achieve 80% to 90% precision.

Study Design of an Environmental Smart Microphone System to Detect Anxiety in Patients with Dementia [pdf]

Hernandez-Cruz, N., Garcia-Constantino, M., Beltran-Marquez, J., Cruz-Sandoval, D., Lopez-Nava, I. H., Cleland, I., Favela, J., Nugent, C., Ennis, A., Rafferty, J., & Synnott, J. (2019, May). Study Design of an Environmental Smart Microphone System to Detect Anxiety in Patients with Dementia. In Proceedings of the 13th EAI International Conference on Pervasive Computing Technologies for Healthcare (pp. 383-388). ACM.

Patients with dementia often suffer from stress episodes that escalate to anxiety. This paper presents a feasibility study on using environmental smart microphones to detect anxiety in patients with dementia, based on identified auditory manifestations of anxiety. To gain a better understanding of anxiety manifestations in patients with dementia, 70 diagnosed patients were observed in-situ and 4 caregivers were interviewed. The design of an environmental smart microphone system called AnxiCare is introduced. Feasibility interviews regarding the use of AnxiCare were conducted with caregivers at a care residence in Spain. Results from the observations, interviews, and a preliminary validation are presented.

Semi-Automated Annotation of Audible Home Activities [pdf]

Garcia-Constantino, M., Beltran-Marquez, J., Cruz-Sandoval, D., Lopez-Nava, I. H., Favela, J., Ennis, A., Nugent, C., Rafferty, J., Cleland, I., Synnott, J., & Hernandez-Cruz, N. (2019, March). Semi-Automated Annotation of Audible Home Activities. In Proceedings of the 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops) (pp. 40-45). IEEE.

Data annotation is the process of segmenting and labelling any type of data (images, audio, or text). It is an important task for producing reliable datasets that can be used to train machine learning algorithms for the purpose of Activity Recognition. This paper presents work in progress towards a semi-automated approach for collecting and annotating audio data from simple sounds that are typically produced at home when people perform daily activities, for example, the sound of running water when a tap is opened. We propose the use of an app called ISSA (Intelligent System for Sound Annotation) running on smart microphones to facilitate the semi-automated annotation of audible activities. When a sound is produced, the app tries to classify the activity and notifies the user, who can correct the classification and/or provide additional information such as the location of the sound. To illustrate the feasibility of the approach, an initial version of ISSA was implemented to train an audio classifier in a one-bedroom apartment.

High-Level Features for Recognizing Human Actions in Daily Living Environments Using Wearable Sensors [pdf]

López-Nava, I., & Muñoz-Meléndez, A. (2018). High-Level Features for Recognizing Human Actions in Daily Living Environments Using Wearable Sensors. In Multidisciplinary Digital Publishing Institute Proceedings (Vol. 2, No. 19, p. 1238).

Action recognition is important for various applications, such as ambient intelligence, smart devices, and healthcare. Automatic recognition of human actions in daily living environments, mainly using wearable sensors, is still an open research problem in the field of pervasive computing. This research focuses on extracting a set of features related to human motion, in particular the motion of the upper and lower limbs, in order to recognize actions in daily living environments using time series of joint orientations. Ten actions were performed by five test subjects in their homes: cooking, doing housework, eating, grooming, mouth care, ascending stairs, descending stairs, sitting, standing, and walking. The joint angles of the right upper limb and the left lower limb were estimated using information from five wearable inertial sensors placed on the back, right upper arm, right forearm, left thigh, and left leg. The feature set was used to build classifiers using three inference algorithms: Naive Bayes, K-Nearest Neighbours, and AdaBoost. The average F-measure of the three classifiers built using the proposed feature set, over the ten actions, was 0.806 (σ = 0.163).

Variability Analysis of Therapeutic Movements using Wearable Inertial Sensors [pdf]

López-Nava, I. H., Arnrich, B., Muñoz-Meléndez, A., & Güneysu, A. (2017). Variability Analysis of Therapeutic Movements using Wearable Inertial Sensors. Journal of medical systems, 41(1), 7.

A variability analysis of upper limb therapeutic movements using wearable inertial sensors is presented. Five healthy young adults were asked to perform a set of movements using two sensors placed on the upper arm and forearm. Reference data were obtained from three therapists. The goal of the study is to determine intra- and inter-group differences between a set of given movements performed by young people and the movements of therapists. This effort is directed toward studying other groups characterized by motion impairments, for whom a quantified measure of the quality of movement is relevant to follow a patient's recovery. The sensor signals were processed using two approaches: time-domain features and similarity distances between each pair of signals. The data analysis was divided into classification and variability analyses based on the features and distances calculated previously. The classification analysis was carried out to determine whether the movements performed by the test subjects of the two groups are distinguishable from one another. The variability analysis was conducted to measure the similarity of the movements. According to the results, the flexion/extension movement had a high intra-group variability. In addition, meaningful information was provided in terms of changes in velocity and rotational motions for each individual.

Wearable Inertial Sensors for Human Motion Analysis: A review [pdf]

López-Nava, I. H., & Muñoz-Meléndez, A. (2016). Wearable inertial sensors for human motion analysis: A review. IEEE Sensors Journal, 16(22), 7821-7834.

This paper reviews the research literature on human motion analysis using inertial sensors with the aim of finding out: which configurations of sensors have been used to measure human motion; which algorithms have been implemented to estimate the position and orientation of segments and joints of the human body; how the performance of the proposed systems has been evaluated; and what is the target population with which the proposed systems have been assessed. These questions were used to review the current state of the art and suggest future directions in the development of systems to estimate human motion. A literature search was conducted on eight Internet databases, including medical literature: PubMed and ScienceDirect; technical literature: IEEE Xplore and ACM Digital Library; and all-science literature: Scopus, Web of Science, Taylor and Francis Online, and Wiley Online Library. A total of 880 studies were reviewed based on the inclusion/exclusion criteria. After the screening and full-review stages, 37 papers were selected for the review analysis. According to this analysis, most studies focus on calculating the orientation or position of certain joints of the human body, such as the elbow or knee. Only three works estimate the position or orientation of both upper and lower limbs simultaneously. Regarding the configuration of the experiments, the mean age of the test subjects is 26.2 years (± 3.7), indicating a clear trend to test the systems and methods mainly on young people. Other population groups, such as people with mobility problems, have not been considered in tests so far. Human motion analysis is relevant for obtaining a quantitative assessment of people's motion parameters. This assessment is crucial for, among others, healthcare applications, monitoring of neuromuscular impairments, and activity recognition.
There is a growing interest in developing technologies and methods for human motion analysis, ranging from specialized in-situ systems to low-cost wearable systems.

Comparison between passive vision-based system and a wearable inertial-based system for estimating temporal gait parameters related to the GAITRite electronic walkway [pdf]

González, I., López-Nava, I. H., Fontecha, J., Muñoz-Meléndez, A., Pérez-SanPablo, A. I., & Quiñones-Urióstegui, I. (2016). Comparison between passive vision-based system and a wearable inertial-based system for estimating temporal gait parameters related to the GAITRite electronic walkway. Journal of biomedical informatics, 62, 210-223.

Quantitative gait analysis allows clinicians to assess the inherent gait variability over time, which is a functional marker that aids in the diagnosis of disabilities or diseases such as frailty, the onset of cognitive decline, and neurodegenerative diseases, among others. However, despite the accuracy achieved by current specialized systems, there are constraints that limit quantitative gait analysis, for instance, the cost of the equipment, the limited access for many people, and the lack of solutions for consistently monitoring gait on a continuous basis. In this paper, two low-cost systems for quantitative gait analysis are presented: a wearable inertial system that relies on two wireless acceleration sensors mounted on the ankles, and a passive vision-based system that externally estimates the measurements through a structured-light sensor and 3D point-cloud processing. Both systems are compared with a reference clinical instrument using an experimental protocol focused on the feasibility of estimating temporal gait parameters over two groups of healthy adults (five elderly and five young subjects) under controlled conditions. The error of each system with respect to the ground truth is computed. Inter-group and intra-group analyses are also conducted to transversely compare the performance of the two technologies, and of these technologies with respect to the reference system. The comparison under controlled conditions is a required preliminary stage toward adapting both solutions for incorporation into Ambient Assisted Living environments and for providing continuous in-home gait monitoring as part of future work.

Complex human action recognition on daily living environments using wearable inertial sensors [pdf]

López-Nava, I. H., & Muñoz-Meléndez, A. (2016, May). Complex human action recognition on daily living environments using wearable inertial sensors. In Proceedings of the 10th EAI International Conference on Pervasive Computing Technologies for Healthcare (pp. 138-145). ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering).

The aim of this study is to evaluate how similar a set of human actions is when performed under controlled conditions versus when performed under uncontrolled conditions, namely in daily living environments such as the users' homes. This research is important for the automatic recognition of human actions in daily living environments, mainly using wearable sensors, which is still an open research challenge in the field of pervasive computing. Action recognition is important for various applications in this field, such as ambient intelligence, smart devices, and healthcare. In this work, we measure and analyze five complex human actions using wearable sensors in both structured and daily living environments. Three wearable inertial sensor units were used in this study, worn by three healthy young subjects on three points of their upper limbs: the scapula, the upper arm, and the forearm. The complex actions involved in this study are: grooming, cooking, eating, doing housework, and mouth care. The Dynamic Time Warping algorithm was used to measure the intra- and inter-test variability of actions in both environments. Additionally, the results of applying three supervised classification techniques, namely C4.5, Naive Bayes, and Logistic Regression, are compared in terms of true positive rate (TPR), true negative rate (TNR), and F-measure. The classification models were based on time-domain and frequency-domain features extracted from orientation signals. According to the analysis, cooking and eating are the actions with the highest and lowest variability, respectively. Concerning the classification results, Naive Bayes and Logistic Regression obtain a TPR of 0.911 using relevant attributes. Our results provide valuable information for measuring the similarity of a set of complex actions in daily living environments and for classifying them.
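The variability measurement can be illustrated with a plain Dynamic Time Warping implementation; the sinusoidal traces below are synthetic stand-ins for the orientation signals, not data from the study.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW distance between two 1-D signals: cost of the best
    monotonic alignment, tolerant to differences in execution speed."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Same "action" performed at two speeds vs. a genuinely different signal
slow = np.sin(np.linspace(0, np.pi, 80))
fast = np.sin(np.linspace(0, np.pi, 50))
print(dtw_distance(slow, fast) < dtw_distance(slow, -slow))  # → True
```

Because the warping path stretches one signal against the other, two executions of the same movement at different speeds score as similar, which is what makes DTW suitable for intra- and inter-test variability comparisons.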

Estimation of temporal gait parameters using Bayesian models on acceleration signals [pdf]

López-Nava, I. H., Muñoz-Meléndez, A., Pérez Sanpablo, A. I., Alessi Montero, A., Quiñones Urióstegui, I., & Núñez Carrera, L. (2016). Estimation of temporal gait parameters using Bayesian models on acceleration signals. Computer methods in biomechanics and biomedical engineering, 19(4), 396-403.

The purpose of this study is to develop a system capable of calculating temporal gait parameters using two low-cost wireless accelerometers and artificial intelligence-based techniques, as part of a larger research project on human gait analysis. Ten healthy subjects of different ages participated in this study and performed controlled walking tests. Two wireless accelerometers were placed on their ankles. Raw acceleration signals were processed in order to obtain gait patterns from characteristic peaks related to steps. A Bayesian model was implemented to classify the characteristic peaks into steps or non-steps. The acceleration signals were segmented based on gait events of actual steps, such as heel strike and toe-off. Temporal gait parameters, such as cadence, ambulation time, step time, gait cycle time, stance and swing phase time, and single and double support time, were estimated from the segmented acceleration signals. The gait datasets were divided into two age groups to test the Bayesian models for classifying the characteristic peaks. The mean error in calculating the temporal gait parameters was 4.6%. Bayesian models are useful techniques that can be applied to the classification of gait data from subjects of different ages, with promising results.
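The peak-classification step can be sketched with a tiny Gaussian naive Bayes model (one Bayesian formulation among several the paper could use); the two-feature representation of a candidate peak — height and width — is an assumption for illustration.

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Fit per-class mean, std, and prior for a minimal Gaussian naive
    Bayes classifier over peak features."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.std(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict(params, x):
    """Pick the class with the highest Gaussian log-posterior."""
    def log_post(c):
        mu, sd, prior = params[c]
        return np.log(prior) - np.sum(np.log(sd) + 0.5 * ((x - mu) / sd) ** 2)
    return max(params, key=log_post)

# Toy peaks: steps are tall and wide, artifacts are small and narrow
# (features: [peak height in g, peak width in s] -- hypothetical layout)
X = np.array([[2.1, 0.30], [1.9, 0.28], [2.3, 0.35],   # steps (class 1)
              [0.6, 0.05], [0.8, 0.07], [0.5, 0.04]])  # non-steps (class 0)
y = np.array([1, 1, 1, 0, 0, 0])
model = fit_gaussian_nb(X, y)
print(predict(model, np.array([2.0, 0.31])))  # → 1 (a step)
```

Peaks labeled as steps would then anchor the heel-strike/toe-off segmentation from which the temporal parameters are computed.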

Comparison of a vision-based system and a wearable inertial-based system for a quantitative analysis and calculation of spatio-temporal parameters [pdf]

López-Nava, I. H., González, I., Muñoz-Meléndez, A., & Bravo, J. (2015, December). Comparison of a vision-based system and a wearable inertial-based system for a quantitative analysis and calculation of spatio-temporal parameters. In Proceedings of the 1st International Conference on Ambient Intelligence for Health (pp. 116-122). Springer, Cham.

Clinical gait analysis provides an evaluation tool that allows clinicians to assess gait abnormality in patients. There are currently specialized systems to detect gait events and calculate spatio-temporal parameters of human gait, which are accurate and redundant. However, these systems are expensive and limited to very controlled settings. As an alternative, a wearable inertial system and a single depth-camera system are proposed to detect gait events and then estimate spatial and temporal gait parameters. An experimental protocol using both systems is detailed in this paper in order to compare their performance with respect to a specialized human gait system for two age groups, older and younger adults. This research attempts to contribute to the development of clinical decision support technologies by combining vision systems and wearable sensors.

Automatic Measurement of Pronation/Supination, Flexion/Extension and Abduction/Adduction Motion of Human Limbs using Wearable Inertial and Magnetic Sensors [pdf]

López-Nava, I. H., Márquez-Aquino, F., Munoz-Meléndez, A., Carrillo-López, D., & Vargas-Martínez, H. S. (2015, July). Automatic measurement of pronation/supination, flexion/extension and abduction/adduction motion of human limbs using wearable inertial and magnetic sensors. In Proceedings of the 4th International Conference on Global Health Challenges (pp. 55-60). IARIA.

This research deals with the design and programming of devices for automatically measuring human motion using portable and low-cost technologies. The movements studied in this research are pronation/supination, flexion/extension, and abduction/adduction of the upper and lower limbs, which are required for a number of activities of daily living. A home-made attitude and heading reference system based on inertial and magnetic sensors is presented. It was compared with a similar device available on the market, and with respect to a video-camera-based system used as the gold standard. An experimental platform was also built for controlling and replicating experiments. The results obtained with the proposed device are competitive and promising, with overall performance comparable to that of a commercial device.

Exergames as Tools Used on Interventions to Cope with the Effects of Ageing: A Systematic Review [pdf]

Velazquez, A., Campos-Francisco, W., García-Vázquez, J. P., López-Nava, H., Rodríguez, M. D., Pérez-San Pablo, A. I., ... & Favela, J. (2014, December). Exergames as tools used on interventions to cope with the effects of ageing: A Systematic Review. In Proceedings of the International Workshop on Ambient Assisted Living (pp. 402-405). Springer, Cham.

Exergames are currently used as a new tool for medical purposes. In this context, this paper presents an overview of the approaches used to gather evidence about the use and impact of exergame-based interventions on the elderly. In total, 2306 abstracts were returned from a database search, yielding 52 relevant papers. Our analysis found one group of papers, mostly published in engineering forums, with an emphasis on evaluating novel technologies and with evaluations providing low evidence; another group of studies, published mostly in medical journals, uses more conventional technologies but conducts more comprehensive evaluations from which stronger evidence is obtained.

Towards ubiquitous acquisition and processing of gait parameters [pdf]

López-Nava, I. H., & Muñoz-Meléndez, A. (2010, November). Towards ubiquitous acquisition and processing of gait parameters. In Proceedings of the Mexican International Conference on Artificial Intelligence (pp. 410-421). Springer, Berlin, Heidelberg.

Gait analysis is the process of measuring and evaluating spatio-temporal patterns of gait and walking, namely of human locomotion. This process is usually performed on specialized equipment capable of acquiring extensive data and providing a gait assessment based on reference values. Based on gait assessments, therapists and physicians can prescribe medications and provide physical therapy rehabilitation to patients with gait problems. This work is oriented towards supporting the design of ambulatory and ubiquitous technologies for gait monitoring. A probabilistic method to automatically detect human strides from raw signals provided by wireless accelerometers is presented. Local thresholds are extracted from the raw acceleration signals and used to distinguish actual strides from characteristic peaks commonly produced by significant shifts in the acceleration signals. Then, a Bayesian classifier is trained on these peaks to detect and count strides. The proposed method has good precision in classifying strides from raw acceleration signals for both young and elderly individuals. Stride detection is required to calculate gait parameters and provide a clinical assessment.