European Research Council Advanced Research Grant awarded to Prof Cecilia Mascolo
Runtime: 01.10.2019 – 30.09.2024
Role: Participant
Partners: University of Cambridge
This work proposes a systematic framework to link sounds to disease diagnosis and to address the inherent issues raised by in-the-wild sensing: noise and privacy concerns. We exploit audio models in wearable systems, maximizing the use of local hardware resources while optimizing for both power consumption and accuracy. Privacy protection arises as a by-product, since on-device processing removes the need for cloud analytics. The framework will treat diagnostic uncertainty as a first-class citizen, quantifying it within the model itself, and will account for patient context as a confounding factor through cross-sensor-modality components that take advantage of additional sensor input indicative of user behaviour.
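One common way to make a model report its own diagnostic uncertainty is Monte Carlo dropout: the forward pass is repeated with random dropout masks and the spread of the outputs is read as an uncertainty estimate. The following is a minimal illustrative sketch only, not the project's actual model; the classifier, its weights, and the feature vector are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights of a tiny on-device audio classifier
# (one hidden layer; all names and shapes are illustrative).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 1))

def predict_with_uncertainty(features, n_samples=100, p_drop=0.2):
    """Monte Carlo dropout: run the forward pass n_samples times with
    random dropout masks and treat the spread of the outputs as a
    proxy for the model's diagnostic uncertainty."""
    outs = []
    for _ in range(n_samples):
        h = np.maximum(features @ W1, 0.0)         # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop        # random dropout mask
        h = h * mask / (1.0 - p_drop)              # inverted-dropout scaling
        logit = float(h @ W2)
        outs.append(1.0 / (1.0 + np.exp(-logit)))  # sigmoid "disease" score
    outs = np.array(outs)
    return outs.mean(), outs.std()

features = rng.normal(size=16)  # stand-in for extracted audio features
mean, std = predict_with_uncertainty(features)
print(f"score={mean:.3f} uncertainty={std:.3f}")
```

Because everything runs in a single forward-pass loop with no cloud round-trip, a scheme of this shape is compatible with the on-device, privacy-preserving setting the project targets.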
EU H2020 / EFPIA Innovative Medicines Initiative (IMI) 2 Call 3
Runtime: 01.04.2016 – 31.03.2022
Role: Participant
Partners: King’s College London, Provincia Lombardo-Veneta – Ordine Ospedaliero di San Giovanni di Dio – Fatebenefratelli, Lygature, Università Vita-Salute San Raffaele, Fundacio Hospital Universitari Vall D’Hebron, University of Nottingham, Centro de Investigacion Biomedica en Red, Software AG, Region Hovedstaden, Stichting VU-Vumc, University Hospital Freiburg, Stichting IMEC Nederland, Katholieke Universiteit Leuven, Northwestern University, Stockholm Universitet, University of Augsburg, University of Passau, Università degli Studi di Bergamo, Charité – Universitätsmedizin Berlin, Intel Corporation (UK) Ltd, GABO:mi, Janssen Pharmaceutica NV, H. Lundbeck A/S, UCB Biopharma SPRL, MSD IT Global Innovation Center
The general aim is to develop and test a transformative platform of remote monitoring technologies (RMT) of disease state in three CNS diseases: epilepsy, multiple sclerosis and depression. Further aims are: (i) to build an infrastructure to identify clinically useful RMT-measured biosignatures that assist in the early identification of relapse or deterioration; (ii) to develop a platform to identify these biosignatures; (iii) to anticipate potential barriers to translation by initiating a dialogue with key stakeholders (patients, clinicians, regulators and healthcare providers).
EU Horizon 2020 Innovation Action (IA) – 9.3% acceptance rate in the call
Runtime: 01.02.2015 – 31.07.2018
Role: Participant
Partners: Imperial College London, University of Augsburg, University of Passau, PlayGen Ltd, RealEyes
The main aim of SEWA is to deploy and capitalise on existing state-of-the-art methodologies, models and algorithms for machine analysis of facial, vocal and verbal behaviour, and then to adjust and combine them to realise naturalistic human-centric human-computer interaction (HCI) and computer-mediated face-to-face interaction (FF-HCI). This will involve the development of computer vision, speech processing and machine learning tools for the automated understanding of human interactive behaviour in naturalistic contexts. The envisioned technology will build on findings in the cognitive sciences and will comprise a set of audio and visual spatiotemporal methods for the automatic analysis of human spontaneous (as opposed to posed and exaggerated) patterns of behavioural cues, including continuous and discrete analysis of sentiment, liking and empathy.
Industry Cooperation with HUAWEI TECHNOLOGIES
Runtime: 12.11.2016 – 11.11.2018
Role: Participant
Partners: University of Passau, University of Augsburg, HUAWEI TECHNOLOGIES
The research target of this project is to develop state-of-the-art methods for speech enhancement based on deep learning. The aim is to overcome limitations in challenging scenarios posed by non-stationary noise and distant speech, with a potentially moving device and potentially limited power and memory on the device. The project will study how deep-learning-based speech enhancement can be applied successfully to multi-channel input signals. A further important aspect is robustness and adaptation to unseen conditions, such as different noise types.
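A widespread formulation of learned speech enhancement is time-frequency masking: a network predicts a per-bin gain between 0 and 1 that is applied to the noisy spectrogram. The sketch below illustrates only the masking mechanism on a synthetic signal; here the Wiener-style mask is computed from a crude noise-power estimate, whereas in a deep-learning system such as the one targeted by this project a trained network would predict the mask. All signals and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sr, n_fft, hop = 16000, 512, 128

# Synthetic noisy signal: a tone plus white noise
# (a stand-in for distant, noisy speech).
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.5 * rng.normal(size=sr)

def stft(x):
    """Simple short-time Fourier transform with a Hann window."""
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

X = stft(noisy)

# Noise power estimated from the first few frames; a trained network
# would instead predict the mask directly from the noisy spectrogram.
noise_psd = np.mean(np.abs(X[:5]) ** 2, axis=0)
mask = np.abs(X) ** 2 / (np.abs(X) ** 2 + noise_psd)  # Wiener-style mask
X_enh = X * mask                                      # masked spectrogram
```

In the multi-channel setting the project describes, the same masking idea is typically combined with spatial processing (e.g. beamforming across microphones) before or alongside the learned mask.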