PCovNet: A Presymptomatic COVID-19 Detection Framework using LSTM-VAE from Wearables Data
While advanced diagnostic tools and healthcare management protocols have been struggling to contain the COVID-19 pandemic, the spread of the contagious viral pathogen before symptom onset has acted as the Achilles’ heel. Although RT-PCR is widely used for COVID-19 diagnosis, it is rarely administered before any visible symptoms appear, which promotes rapid transmission. This study proposes PCovNet, a Long Short-Term Memory Variational Autoencoder (LSTM-VAE)-based anomaly detection framework, to detect COVID-19 infection in the presymptomatic stage from the Resting Heart Rate (RHR) derived from wearable devices, i.e., smartwatches or fitness trackers. The framework was trained and evaluated in two configurations on a publicly available wearable-device dataset consisting of 25 COVID-positive individuals screened out from 5262 patients over a span of four months, including their COVID-19 infection phase. The first configuration of the framework detected RHR abnormality with average Precision, Recall, and F-beta scores of 0.946, 0.234, and 0.918, respectively. The second configuration, however, detected aberrant RHR in 100% of the subjects (25 out of 25) during the infectious period. Moreover, 80% of the subjects (20 out of 25) were detected during the presymptomatic stage. These findings demonstrate the feasibility of using wearable devices with such a deep learning framework as a secondary diagnostic tool to address the presymptomatic COVID-19 detection problem.
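At its core, this kind of framework learns a baseline of a subject's RHR with the LSTM-VAE and flags days whose reconstruction error deviates from that baseline. The following is a minimal sketch of the downstream anomaly-scoring step only; the function name, the sample data, and the mean + k·std threshold rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def flag_anomalies(recon_errors, baseline_errors, k=3.0):
    """Flag days whose LSTM-VAE reconstruction error exceeds a
    threshold derived from a healthy-baseline period (mean + k*std)."""
    threshold = baseline_errors.mean() + k * baseline_errors.std()
    return recon_errors > threshold

# Hypothetical per-day reconstruction errors: a calm baseline week,
# then an RHR aberration on the last two observed days.
baseline = np.array([0.10, 0.12, 0.11, 0.09, 0.10, 0.13, 0.11])
observed = np.array([0.10, 0.11, 0.45, 0.52])
print(flag_anomalies(observed, baseline))  # flags the last two days
```

The threshold multiplier k trades off Precision against Recall, which is why the two reported configurations behave so differently.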
Web-version of the QUCoughScope Application
A web version of the application is available, making it platform-independent so that iPhone users can also use it easily: https://qu-mlg.com/projects/qu-cough-scope
Exovent-Qatar: Negative Pressure Ventilator for COVID-19 Crisis
We are designing a negative pressure ventilator in collaboration with Exovent, UK. We have built an initial prototype, and BBC Arabic interviewed my student, Shahd, about the project.
An all-female engineering team assembles the new exovent device to help coronavirus patients
In recent years, physiological signal-based authentication has shown great promise due to its inherent robustness against forgery. The Electrocardiogram (ECG) signal, being the most widely studied biosignal, has received the highest level of attention in this regard. Numerous studies have shown that by analyzing ECG signals from different persons, it is possible to identify them with acceptable accuracy. In this work, we present EDITH, a deep learning-based framework for an ECG biometric authentication system. Moreover, we hypothesize and demonstrate that Siamese architectures can be used in place of typical distance metrics for improved performance. We evaluated EDITH on 4 commonly used datasets and outperformed prior works using fewer beats. EDITH performs competitively using just a single heartbeat (96∼99.75% accuracy) and can be further enhanced by fusing multiple beats (100% accuracy from 3 to 6 beats). Furthermore, the proposed Siamese architecture reduces the identity verification Equal Error Rate (EER) to 1.29%. A limited case study of EDITH with real-world experimental data also suggests its potential as a practical authentication system.
The GitHub link of the project is https://github.com/nibtehaz/EDITH
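The Equal Error Rate quoted above is the verification operating point where the false-reject rate equals the false-accept rate. A minimal sketch of how an EER can be estimated from genuine and impostor match scores follows; the score values and the simple threshold sweep are illustrative assumptions, not EDITH's evaluation code:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER by sweeping a decision threshold over all
    observed scores and finding where FRR and FAR coincide."""
    best = (1.0, 0.0)  # (frr, far) with the largest gap, as a start
    for t in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < t)    # genuine pairs wrongly rejected
        far = np.mean(impostor >= t)  # impostor pairs wrongly accepted
        if abs(frr - far) < abs(best[0] - best[1]):
            best = (frr, far)
    return (best[0] + best[1]) / 2

# Hypothetical similarity scores (higher = more similar identity)
genuine  = np.array([0.9, 0.8, 0.85, 0.7, 0.95])
impostor = np.array([0.2, 0.3, 0.1, 0.75, 0.25])
print(equal_error_rate(genuine, impostor))  # -> 0.2
```

A lower EER, such as the 1.29% reported for the Siamese architecture, means the verifier separates genuine and impostor score distributions more cleanly.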
Background: The recent pandemic has brought a sudden surge in the demand for mechanical ventilators all over the world. Against this backdrop, there is wide interest in ways to support multiple patients from a single ventilator. Various solutions based upon simple mechanical division of the ventilator tubing have been described. However, multiple international professional societies have recently issued warnings against these plumbing solutions, as they can seriously harm patients.
Methods: We bifurcated the inspiratory and expiratory conduits of a single ventilator with the addition of one-way valves, pressure and flow sensors, and volume and PEEP control. Purpose-built software and a low-cost microcontroller-based control system integrate and display the data in the familiar ventilatory graphic and numerical format on a generic screen. The system was calibrated with simulated lungs of varying compliance. In addition to the standard microbial and heat-and-moisture-exchanger filters, we plan to add UV-C lights at 254-260 nm wavelength in the expiratory channel for their virucidal effect.
Results: The dynamic ventilatory divider system is capable of providing and controlling flow, tidal volume (TV), and Positive End-Expiratory Pressure (PEEP) for each patient individually. Furthermore, it displays the ventilatory parameters of both patients on a single split screen in a familiar format. FiO2 and respiratory rate are still controlled by the mother ventilator.
Conclusions: The prototype system has the potential to provide safe ventilation to at least two individuals from a single ventilator, while maintaining the unique requirements of each patient.
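The per-patient tidal volume that such a divider reports is typically estimated by integrating the flow-sensor signal over the inspiratory phase. The following is a minimal sketch of that computation; the square flow profile and sampling interval are illustrative assumptions, not the prototype's actual firmware:

```python
def tidal_volume_ml(flow_lpm, dt_s):
    """Integrate a flow-sensor trace (in L/min, sampled every dt_s
    seconds) over the inspiratory phase to estimate the delivered
    tidal volume in millilitres."""
    litres = sum(f * (dt_s / 60.0) for f in flow_lpm)  # L/min * min
    return litres * 1000.0

# Hypothetical 1-second square inspiratory flow of 30 L/min,
# sampled every 0.1 s -> 0.5 L, i.e. ~500 ml tidal volume
samples = [30.0] * 10
print(tidal_volume_ml(samples, 0.1))
```

In the real system the same integral would be computed on the microcontroller from each patient's flow sensor, allowing the divider to regulate TV and PEEP per branch.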
QaTa-Cov19 is a joint project developed by researchers at Qatar University and Tampere University, together with doctors from Hamad Medical Corporation, for accurate diagnosis, early detection, and infection map generation of COVID-19. The best deep AI model, trained over the largest X-ray dataset with 119,316 images, is now available for online trials below. This is a free application with no commercial strings attached. Simply upload one or more X-ray images, and the generated infection maps will appear in a few seconds, like the examples shown below.
QaTa-COV19 Dataset: The researchers of Qatar University and Tampere University have compiled the QaTa-COV19 dataset, which consists of 4603 COVID-19 chest X-rays. Of these 4603 images, 2951 have corresponding ground-truth segmentation masks, provided as mask_FILENAME.png.
Early-QaTa-COV19: This dataset is a subset of the QaTa-COV19 dataset, consisting of 1065 chest X-rays showing no or limited signs of COVID-19 pneumonia, for early COVID-19 detection.
The Kaggle link of the dataset is https://www.kaggle.com/aysendegerli/qatacov19-dataset
Hydroponic Project
A team of researchers from Qatar University, Doha, Qatar, and the University of Dhaka, Bangladesh, along with their collaborators from Pakistan and Malaysia, in collaboration with medical doctors, have created a database of chest X-ray images for COVID-19-positive cases along with Normal and Viral Pneumonia images. In the current release, there are 1200 COVID-19-positive images, 1341 normal images, and 1345 viral pneumonia images. We will continue to update this database as soon as new X-ray images of COVID-19 pneumonia patients become available.
Please find the Kaggle link for the dataset and the GitHub link for the MATLAB code and trained models.
Cite:
M.E.H. Chowdhury, T. Rahman, A. Khandakar, R. Mazhar, M.A. Kadir, Z.B. Mahbub, K.R. Islam, M.S. Khan, A. Iqbal, N. Al-Emadi, M.B.I. Reaz, M. T. Islam, “Can AI help in screening Viral and COVID-19 pneumonia?” IEEE Access, Vol. 8, 2020, pp. 132665 - 132676.
A team of researchers from Qatar University, Doha, Qatar, and the University of Dhaka, Bangladesh, along with their collaborators from Malaysia, in collaboration with medical doctors from Hamad Medical Corporation and Bangladesh, have created a database of chest X-ray images for Tuberculosis (TB)-positive cases along with Normal images. In the current release, there are 3500 TB images and 3500 normal images.
The link for the dataset is: https://www.kaggle.com/tawsifurrahman/tuberculosis-tb-chest-xray-dataset
Cite:
T. Rahman, A. Khandakar, M.A. Kadir, K.R. Islam, K.F. Islam, Z.B. Mahbub, M.A. Ayari, M.E.H. Chowdhury, “Reliable Tuberculosis Detection using Chest X-ray with Deep Learning, Segmentation and Visualization,” IEEE Access, Vol. 8, 2020, pp. 191586 - 191601. DOI: 10.1109/ACCESS.2020.3031384.