Alexandre Graell i Amat
Full Professor
Department of Electrical Engineering
Chalmers University of Technology
SE-412 96 Gothenburg, Sweden
Email: alexandre dot graell at chalmers dot se
Phone: +46 31 772 1753
I am a Full Professor at Chalmers University of Technology, with a strong foundation in coding theory and an expanding focus on artificial intelligence. My research initially centered on applying coding-theoretic principles to areas such as wireless and optical communications, distributed computing, and privacy-preserving data storage and retrieval. Over the past few years, my interests have expanded toward AI, first by leveraging coding-theoretic concepts in AI and, more recently, by engaging in AI research beyond coding applications.
My current work focuses on developing theoretically sound and practically viable methods to enhance AI security and safeguard user privacy. This includes privacy-preserving AI, adversarial robustness, and trustworthy AI, with a particular emphasis on graph-based learning methods and federated learning.
Despite my increasing focus on AI, I remain actively involved in coding theory, particularly in emerging applications such as DNA storage.
Recent news
02/25: Paper accepted at International Conference on Learning Representations (ICLR) 2025
Our paper "Decoupled subgraph federated learning" has been accepted at ICLR 2025
Abstract: We address the challenge of federated learning on graph-structured data distributed across multiple clients. Specifically, we focus on the prevalent scenario of interconnected subgraphs, where interconnections between different clients play a critical role. We present a novel framework for this scenario, named FedStruct, that harnesses deep structural dependencies. To uphold privacy, unlike existing methods, FedStruct eliminates the necessity of sharing or generating sensitive node features or embeddings among clients. Instead, it leverages explicit global graph structure information to capture inter-node dependencies. We validate the effectiveness of FedStruct through experiments conducted on six datasets for semi-supervised node classification, showcasing performance close to the centralized approach across various scenarios, including different data partitioning methods, varying levels of label availability, and numbers of clients.
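To give a feel for the subgraph setting (this is only an illustrative sketch, not part of the paper; the toy graph, node IDs, and client assignment are made up), the snippet below shows why cross-client edges matter and how a purely structural node descriptor can be computed from the edge list alone, without access to any private node features:

```python
import numpy as np

# Hypothetical global graph: 6 nodes split across 2 clients, undirected edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
client_of = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

intra = [e for e in edges if client_of[e[0]] == client_of[e[1]]]
cross = [e for e in edges if client_of[e[0]] != client_of[e[1]]]
print("edges visible inside a single client:", intra)
print("cross-client edges lost by naive subgraph FL:", cross)

# A structure-only node descriptor (here simply the global degree) can be
# derived from connectivity alone, with no node features being shared.
degree = np.zeros(6, dtype=int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
print("structure-only node descriptor (global degree):", degree)
```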
02/25: Paper accepted at IEEE Transactions on Information Forensics and Security
Our paper "FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation" has been accepted for publication at IEEE Transactions on Iformation Forensics and Security
Abstract: Federated learning (FL) has emerged as a promising approach for collaboratively training machine learning models while preserving data privacy. Due to its decentralized nature, FL is vulnerable to poisoning attacks, where malicious clients compromise the global model through altered data or updates. Identifying such malicious clients is crucial for ensuring the integrity of FL systems. This task becomes particularly challenging under privacy-enhancing protocols such as secure aggregation, creating a fundamental trade-off between privacy and security. In this work, we propose FedGT, a novel framework designed to identify malicious clients in FL with secure aggregation while preserving privacy. Drawing inspiration from group testing, FedGT leverages overlapping groups of clients to identify the presence of malicious clients via a decoding operation. The clients identified as malicious are then removed from the model training, which is performed over the remaining clients. By choosing the size, number, and overlap between groups, FedGT strikes a balance between privacy and security. Specifically, the server learns the aggregated model of the clients in each group--vanilla federated learning and secure aggregation correspond to the extreme cases of FedGT with group size equal to one and the total number of clients, respectively. The effectiveness of FedGT is demonstrated through extensive experiments on three datasets in a cross-silo setting under different data-poisoning attacks. These experiments showcase FedGT's ability to identify malicious clients, resulting in high model utility. We further show that FedGT significantly outperforms the private robust aggregation approach based on the geometric median recently proposed by Pillutla et al. and the robust aggregation technique Multi-Krum in multiple settings.
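To convey the group-testing idea in code (a minimal sketch only; the assignment matrix, the flagging rule, and the simple combinatorial COMP decoder below are illustrative choices, not necessarily the group design or decoding used in the paper), the server assigns clients to overlapping groups, tests each group's aggregate, and clears every client that appears in at least one clean group:

```python
import numpy as np

def comp_decode(assignment, flagged):
    """Combinatorial group-testing decoder (COMP).

    assignment: (n_groups, n_clients) 0/1 matrix; entry (g, c) = 1 if
                client c contributes to the aggregate of group g.
    flagged:    (n_groups,) 0/1 vector; 1 if group g's aggregated model
                was deemed anomalous (e.g., poor validation performance).

    Every client appearing in at least one non-flagged group is cleared;
    the remaining clients are reported as suspect.
    """
    suspect = np.ones(assignment.shape[1], dtype=bool)
    for g, bad in enumerate(flagged):
        if not bad:  # a clean group clears all of its members
            suspect[assignment[g] == 1] = False
    return np.flatnonzero(suspect)

# Hypothetical example: 5 clients, 4 overlapping groups, client 2 is malicious.
A = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
flagged = np.array([0, 1, 1, 0])   # only the groups containing client 2 fail
print(comp_decode(A, flagged))     # -> [2]
```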
09/24: Paper at European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD)
We presented the paper "Secure Aggregation is Not Private Against Membership Inference Attacks" at this year's ECML-PKDD
Abstract: Secure aggregation (SecAgg) is a commonly-used privacy-enhancing mechanism in federated learning, affording the server access only to the aggregate of model updates while safeguarding the confidentiality of individual updates. Despite widespread claims regarding SecAgg's privacy-preserving capabilities, a formal analysis of its privacy is lacking, making such presumptions unjustified. In this paper, we delve into the privacy implications of SecAgg by treating it as a local differential privacy (LDP) mechanism for each local update. We design a simple attack wherein an adversarial server seeks to discern which update vector a client submitted, out of two possible ones, in a single training round of federated learning under SecAgg. By conducting privacy auditing, we assess the success probability of this attack and quantify the LDP guarantees provided by SecAgg. Our numerical results unveil that, contrary to prevailing claims, SecAgg offers weak privacy against membership inference attacks even in a single training round. Indeed, it is difficult to hide a local update by adding other independent local updates when the updates are of high dimension. Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection, in federated learning.
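The toy simulation below sketches the distinguishing game described above under simplifying assumptions (the other clients' updates are modeled as independent Gaussian vectors, and the dimension, number of clients, and candidate updates are made up); it is not the paper's auditing procedure, but it illustrates why a high-dimensional update is hard to hide behind the aggregate alone:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, n_trials = 1_000, 10, 1_000  # update dimension, clients, trials

# Two candidate updates the target client may submit (unit norm).
v0 = rng.standard_normal(d); v0 /= np.linalg.norm(v0)
v1 = rng.standard_normal(d); v1 /= np.linalg.norm(v1)

correct = 0
for _ in range(n_trials):
    b = rng.integers(2)                    # secret bit: which update was sent
    target = v1 if b else v0
    # SecAgg reveals only the sum; the other clients' updates act as "noise".
    others = rng.standard_normal((n_clients - 1, d)) / np.sqrt(d)
    y = target + others.sum(axis=0)
    # The server guesses the candidate closer to the aggregate (maximum
    # likelihood under the isotropic-Gaussian model of the other updates).
    guess = int(np.linalg.norm(y - v1) < np.linalg.norm(y - v0))
    correct += (guess == b)

print(f"attack accuracy: {correct / n_trials:.3f}  (0.5 = random guessing)")
```

In this simplified model the attack succeeds almost every time, mirroring the abstract's observation that independent local updates do little to mask a high-dimensional update.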