I like logical puzzles and get genuinely excited when elegant new theories open the door to solving practical problems. This is why I did my PhD, continued with postdoctoral training, and, after a short excursion into industry, decided to stay in academia. This page describes several research topics I have recently been working on and contributed to.
Visit our group webpage for more information on funded projects and research activities: Embedded Learning and Sensing Systems - Research.
Building efficient ensembles from trained deep models is a fundamental technique for improving prediction accuracy. Model ensembles are built either in the output space or in the weight space. While several related works show that output-space ensembling improves prediction accuracy and robustness, it requires a separate inference pass through each model, making it less attractive for resource-constrained applications. An alternative research direction investigates how to build model ensembles in the weight space by averaging trained solutions. Vanilla averaging of two SGD solutions fails: it yields a low-accuracy model due to a high loss barrier between two models trained independently from different seeds. However, there are cases where weight-space averaging succeeds, e.g., if two models share a part of their optimization trajectory during training. We conjecture that the loss barrier between two independently trained SGD solutions can be removed if permutation invariance is taken into account, and we show how to do this in practice for a number of datasets and architectures.
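To make the idea concrete, here is a minimal PyTorch sketch of weight-space averaging of two trained models. The permutation-alignment step itself (done, e.g., via activation or weight matching in the papers below) is only noted in comments; all names are illustrative, not the implementation from the cited work.

```python
import copy
import torch.nn as nn

def average_weights(model_a: nn.Module, model_b: nn.Module) -> nn.Module:
    """Return a model whose weights are the element-wise mean of a and b."""
    merged = copy.deepcopy(model_a)
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    merged.load_state_dict({k: 0.5 * (state_a[k] + state_b[k]) for k in state_a})
    return merged

# Naively averaging two independently trained models typically lands on a
# high-loss point between the two solutions. The cited papers first permute
# the hidden units of model_b to align them with model_a (exploiting
# permutation invariance) and only then average, which removes the barrier.
```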
Permutation conjecture, ICLR'22 and arXiv
Zero barrier with REPAIR, ICLR'23 and arXiv
REPAIR solves variance collapse, ICLR'23 and arXiv
Traditional deep learning models require extensive computational power and memory, limiting their applicability to embedded systems and edge devices. Research on resource-efficient and adaptive deep learning aims to optimize model performance while ensuring robustness and safety. Efficient model compression techniques, such as quantization and pruning, enable deployment on embedded hardware with minimal loss in accuracy. However, ensuring that these compressed models remain reliable under distribution shifts and adversarial conditions is essential for building safe AI-based systems. Additionally, deep learning models must adapt or reconfigure efficiently for new tasks without excessive retraining, enabling long-term usability in dynamic environments. Advancing these areas requires novel algorithmic approaches that balance efficiency, adaptability, and robustness, ensuring AI systems remain effective and trustworthy across diverse real-world applications.
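As one concrete illustration of such a compression technique, here is a minimal sketch of global magnitude pruning in PyTorch; the model, sparsity level, and layer choices are placeholders, not the specific setup used in the papers listed below.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Placeholder model; a real deployment would use the target architecture.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Prune 80% of the smallest-magnitude weights globally across linear layers.
parameters_to_prune = [
    (m, "weight") for m in model if isinstance(m, nn.Linear)
]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.8,
)

# Bake the pruning masks into the weight tensors permanently.
for module, name in parameters_to_prune:
    prune.remove(module, name)
```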
From LLMs to Edge: Parameter-Efficient Fine-Tuning on Edge Devices, EMERGE EWSN'25
REDS: Resource-Efficient Deep Subnetworks for Dynamic Resource Constraints, IEEE TMC'25
FocusDD: Real-World Scene Infusion for Robust Dataset Distillation, arXiv preprint'25
Forget the Data and Fine-Tuning! Just Fold the Network to Compress, ICLR'25
Sensor-Guided Adaptive Machine Learning on Resource-Constrained Devices, IoT'24
Subspace Configurable Networks, CoLLAs'24 (oral presentation)
REDS: Resource-Efficient Deep Subnetworks for Dynamic Resource Constraints, PML4LRS ICLRw'24
Studying the Impact of Magnitude Pruning on Contrastive Learning Methods, HAET ICMLw'22
Understanding the Effect of Sparsity on Neural Networks' Robustness, OPPO ICMLw'21
Subspace-configurable networks: configuration subspace for D=2 (left) and accuracy achieved for the rotation transformation (right), arXiv and Twitter paper
Subspace-configurable networks, D=8, arXiv and Twitter paper
AI-based systems are increasingly integrated into critical decision-making processes, from healthcare and finance to law enforcement and autonomous systems. However, their reliability, fairness, and robustness remain open challenges. Safety concerns arise when AI systems fail unpredictably, leading to potentially catastrophic consequences, such as autonomous vehicle malfunctions or incorrect medical diagnoses. Bias in AI models can amplify societal inequalities, as seen in hiring algorithms or facial recognition systems that disproportionately misclassify underrepresented groups. Moreover, AI models remain vulnerable to adversarial attacks and data poisoning, exposing them to manipulation and exploitation. Addressing these challenges requires a principled approach to auditing, interpreting, and improving AI models to ensure they are both trustworthy and resilient. Research in this area aims to develop robust learning frameworks and bias mitigation strategies to enhance the safety and fairness of AI-based systems in real-world applications.
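As one concrete example of the vulnerabilities mentioned above, here is a minimal sketch of the fast gradient sign method (FGSM), a standard adversarial attack; the model and perturbation budget are placeholders and not tied to the cited works.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of the input batch x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient ascent step, clipped to the valid input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```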
Breaking the Illusion: Real-world Challenges for Adversarial Patches in Object Detection, EMERGE'24
Studying the Impact of Magnitude Pruning on Contrastive Learning Methods, ICMLw'22
To Share or Not to Share: On Location Privacy in IoT Sensor Data, IoTDI'22
Embracing Opportunities of Livestock Big Data Integration with Privacy Constraints, IoT'19
Low-cost environmental sensors and environmental models present interesting use cases for testing, optimizing, and improving machine learning models. For example, low-cost gas sensors may drift over time, or their measurements may be affected by other environmental processes and dynamics. Sophisticated machine learning models for accurate sensor calibration can help compensate for these effects. On-device sensor data processing, however, may face severe resource constraints, including processing power, memory, and energy budget, that need to be taken into account. Another example is that some types of chemical gas sensors are power-hungry, and machine learning can be used to replace actual measurements with high-quality predictions. Finally, sensor data coming from IoT sensors measuring gas and particle concentrations in the ambient air helps push the limits of today's air quality maps by extending the conventional networks of static high-quality measurement stations with dense IoT measurements. The challenge is to show that it is possible to build high-quality, high-resolution air quality maps using low-cost, less precise, low-resolution, less stable, noisy sensors. In solving this challenge, we improved the accuracy of air quality models by accounting for air pollution transfer, and we used our models to understand the impact of COVID-19 lockdown measures on local air quality.
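As a toy illustration of the calibration problem, here is a hypothetical sketch that maps raw low-cost gas sensor readings, together with temperature and humidity as confounders, to reference concentrations using a simple ridge regression. All data and feature names are synthetic; the cited works use learned models such as SensorFormer for this mapping.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
raw = rng.uniform(0.1, 1.0, n)     # raw low-cost sensor signal (synthetic)
temp = rng.uniform(-5.0, 35.0, n)  # ambient temperature in Celsius
hum = rng.uniform(20.0, 95.0, n)   # relative humidity in percent

# Synthetic "reference" concentration with cross-sensitivities and noise,
# standing in for co-located high-quality measurement station data.
ref = 120.0 * raw - 0.8 * temp + 0.3 * hum + rng.normal(0.0, 2.0, n)

X = np.column_stack([raw, temp, hum])
X_train, X_test, y_train, y_test = train_test_split(X, ref, random_state=0)

calib = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"Calibration R^2 on held-out data: {calib.score(X_test, y_test):.3f}")
```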
PCDCNet: A Surrogate Model for Air Quality Forecasting with Physical-Chemical Dynamics and Constraints, arXiv preprint'25
Geometric Data Augmentations to Mitigate Distribution Shifts in Pollen Classification from Microscopic Images, IEEE ICPADS'23
SensorFormer: Efficient Many-to-Many Sensor Calibration with Learnable Input Sub-Sampling, IoTJ'22
TIP-Air: Tracking Pollution Transfer for Accurate Air Quality Prediction, CPD'21
Compensating Altered Sensitivity of Duty-Cycled MOX Gas Sensors with ML, SECON'21
Understanding the Impact of Lockdown Measures on Local Air Quality, UrbCom'21
Air pollution model: LUR, PerCom'14 / PMC'15
Sensor calibration: SensorFormer, IoTJ'22
Tracking pollution transfer: TIP-Air, CPD'21
Automated pollen sensing: prototype, EWSN'20