Alexandre Graell i Amat
Full Professor
Department of Electrical Engineering
Chalmers University of Technology
SE-412 96 Gothenburg, Sweden
Email: alexandre dot graell at chalmers dot se
Phone: +46 31772 1753
I am a Full Professor at Chalmers University of Technology, with a strong foundation in coding theory and an expanding focus on artificial intelligence. My research initially centered on applying coding-theoretic principles to areas such as wireless and optical communications, distributed computing, and privacy-preserving data storage and retrieval. Over the past few years, my interests have expanded toward AI, first by leveraging coding-theoretic concepts in AI and, more recently, by engaging in AI research beyond coding applications.
My current work focuses on developing theoretically sound and practically viable methods to enhance AI security and safeguard user privacy. This includes privacy-preserving AI, adversarial robustness, and trustworthy AI, with a particular emphasis on graph-based learning methods and federated learning.
Despite my increasing focus on AI, I remain actively involved in coding theory, particularly in emerging applications such as DNA storage.
Recent news
09/2025: Two papers accepted at NeurIPS!
We are happy to share that we had two papers accepted to Neural Information Processing Systems (NeurIPS) 2025:
M. Lassila, J. Östman, K.-H. Ngo, and A. Graell i Amat, "Practical Bayes-Optimal Membership Inference Attacks"
Abstract: We develop practical and theoretically grounded membership inference attacks (MIAs) against both independent and identically distributed (i.i.d.) data and graph-structured data. Building on the Bayesian decision-theoretic framework of Sablayrolles et al., we derive the Bayes-optimal membership inference rule for node-level MIAs against graph neural networks, addressing key open questions about optimal query strategies in the graph setting. We introduce BASE and G-BASE, computationally efficient approximations of the Bayes-optimal attack. G-BASE achieves superior performance compared to previously proposed classifier-based node-level MIAs. BASE, which is also applicable to non-graph data, matches or exceeds the performance of prior state-of-the-art MIAs, such as LiRA and RMIA, at a significantly lower computational cost. Finally, we show that BASE and RMIA are equivalent under a specific hyperparameter setting, providing a principled, Bayes-optimal justification for the RMIA attack.
The submitted version of the paper is available on arXiv, and the camera-ready version, with additional discussions and results, will follow soon.
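For readers curious about the flavor of this line of work, here is a minimal, heavily simplified sketch of a loss-based membership score in the spirit of Bayes-optimal MIAs. It is not the BASE/G-BASE implementation from the paper; the function name, threshold, and numbers are illustrative only.

```python
import numpy as np

def loss_based_mia_score(target_loss, reference_losses):
    """Toy membership score in the spirit of loss-based Bayesian MIAs:
    a sample is more likely to be a member if its loss under the target
    model is unusually low compared to losses under reference models
    trained without that sample."""
    ref = np.asarray(reference_losses, dtype=float)
    mu, sigma = ref.mean(), ref.std() + 1e-12
    # Standardized gap: large positive score -> loss unusually low -> likely member.
    return (mu - target_loss) / sigma

# Hypothetical usage with made-up loss values; a real attack would calibrate
# the decision threshold on held-out data.
score = loss_based_mia_score(target_loss=0.08, reference_losses=[0.9, 1.1, 0.7, 1.3])
print(score, score > 1.0)
```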
J. Aliakbari, J. Östman, A. Panahi, and A. Graell i Amat, "Subgraph Federated Learning via Spectral Methods"
Abstract: We consider the problem of federated learning (FL) with graph-structured data distributed across multiple clients. In particular, we address the prevalent scenario of interconnected subgraphs, where interconnections between clients significantly influence the learning process. Existing approaches suffer from critical limitations, either requiring the exchange of sensitive node embeddings, thereby posing privacy risks, or relying on computationally intensive steps, which hinders scalability. To tackle these challenges, we propose FEDLAP, a novel framework that leverages global structure information via Laplacian smoothing in the spectral domain to effectively capture inter-node dependencies while ensuring privacy and scalability. We provide a formal analysis of the privacy of FEDLAP, demonstrating that it preserves privacy. Notably, FEDLAP is the first subgraph FL scheme with strong privacy guarantees. Extensive experiments on benchmark datasets demonstrate that FEDLAP achieves competitive or superior utility compared to existing techniques.
The camera-ready version of the paper will follow soon.
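For intuition on the core operation, the toy snippet below shows plain Laplacian smoothing of node features (a textbook operation, not the actual FEDLAP code); the graph, features, and smoothing strength are made up.

```python
import numpy as np

def laplacian_smooth(adj, features, lam=1.0):
    """Toy Laplacian smoothing: solves (I + lam * L) X_s = X with L = D - A,
    pulling each node's features toward those of its neighbors."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj   # combinatorial graph Laplacian
    n = adj.shape[0]
    return np.linalg.solve(np.eye(n) + lam * lap, np.asarray(features, dtype=float))

# Tiny 3-node path graph with one scalar feature per node.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
X = np.array([[1.0], [0.0], [-1.0]])
print(laplacian_smooth(A, X, lam=0.5))
```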
02/25: Paper accepted at International Conference on Learning Representations (ICLR) 2025
Our paper "Decoupled subgraph federated learning" has been accepted at ICLR 2025
Abstract: We address the challenge of federated learning on graph-structured data distributed across multiple clients. Specifically, we focus on the prevalent scenario of interconnected subgraphs, where interconnections between different clients play a critical role. We present a novel framework for this scenario, named FedStruct, that harnesses deep structural dependencies. To uphold privacy, unlike existing methods, FedStruct eliminates the necessity of sharing or generating sensitive node features or embeddings among clients. Instead, it leverages explicit global graph structure information to capture inter-node dependencies. We validate the effectiveness of FedStruct through experimental results conducted on six datasets for semi-supervised node classification, showcasing performance close to the centralized approach across various scenarios, including different data partitioning methods, varying levels of label availability, and different numbers of clients.
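As a rough illustration of learning from structure alone (rather than from shared node features or embeddings), here is a toy snippet that builds node representations purely from random-walk statistics of the adjacency matrix. It is only a sketch under strong simplifications, not the FedStruct method itself.

```python
import numpy as np

def structural_embeddings(adj, k=3):
    """Toy structure-only node representations: concatenate the 1..k-step
    random-walk landing probabilities, computed purely from the adjacency
    matrix -- no node features or embeddings need to be shared."""
    adj = np.asarray(adj, dtype=float)
    P = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1.0)  # transition matrix
    Pk, blocks = np.eye(adj.shape[0]), []
    for _ in range(k):
        Pk = Pk @ P
        blocks.append(Pk)
    return np.concatenate(blocks, axis=1)   # shape: (n_nodes, k * n_nodes)

# 4-node cycle graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(structural_embeddings(A, k=2).round(2))
```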
02/25: Paper accepted at IEEE Transactions on Information Forensics and Security
Our paper "FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation" has been accepted for publication at IEEE Transactions on Iformation Forensics and Security
Abstract: Federated learning (FL) has emerged as a promising approach for collaboratively training machine learning models while preserving data privacy. Due to its decentralized nature, FL is vulnerable to poisoning attacks, where malicious clients compromise the global model through altered data or updates. Identifying such malicious clients is crucial for ensuring the integrity of FL systems. This task becomes particularly challenging under privacy-enhancing protocols such as secure aggregation, creating a fundamental trade-off between privacy and security. In this work, we propose FedGT, a novel framework designed to identify malicious clients in FL with secure aggregation while preserving privacy. Drawing inspiration from group testing, FedGT leverages overlapping groups of clients to identify the presence of malicious clients via a decoding operation. The clients identified as malicious are then removed from the model training, which is performed over the remaining clients. By choosing the size, number, and overlap between groups, FedGT strikes a balance between privacy and security. Specifically, the server learns the aggregated model of the clients in each group--vanilla federated learning and secure aggregation correspond to the extreme cases of FedGT with group size equal to one and the total number of clients, respectively. The effectiveness of FedGT is demonstrated through extensive experiments on three datasets in a cross-silo setting under different data-poisoning attacks. These experiments showcase FedGT's ability to identify malicious clients, resulting in high model utility. We further show that FedGT significantly outperforms the private robust aggregation approach based on the geometric median recently proposed by Pillutla et al. and the robust aggregation technique Multi-Krum in multiple settings.
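To give a feel for the group-testing ingredient, here is a toy COMP-style decoder on a made-up assignment of clients to groups; the actual FedGT group design, outcome tests, and decoding are considerably more involved.

```python
import numpy as np

def comp_decode(assignment, outcomes):
    """Toy COMP-style group-testing decoder: assignment[g, c] = 1 if client c
    belongs to group g, outcomes[g] = 1 if group g's aggregate is flagged as
    poisoned. Any client appearing in at least one clean group is declared
    benign; everyone else is flagged as potentially malicious."""
    assignment = np.asarray(assignment, dtype=bool)
    outcomes = np.asarray(outcomes, dtype=bool)
    in_clean_group = (assignment & ~outcomes[:, None]).any(axis=0)
    return ~in_clean_group        # True -> flagged as malicious

# 4 groups, 4 clients; client 2 is malicious, so its two groups test positive.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]])
y = np.array([0, 1, 1, 0])
print(comp_decode(A, y))          # -> [False False  True False]
```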
09/24: Paper at European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD)
We presented the paper "Secure Aggregation is Not Private Against Membership Inference Attacks" at this year's ECML-PKDD.
Abstract: Secure aggregation (SecAgg) is a commonly used privacy-enhancing mechanism in federated learning, affording the server access only to the aggregate of model updates while safeguarding the confidentiality of individual updates. Despite widespread claims regarding SecAgg's privacy-preserving capabilities, a formal analysis of its privacy is lacking, making such presumptions unjustified. In this paper, we delve into the privacy implications of SecAgg by treating it as a local differential privacy (LDP) mechanism for each local update. We design a simple attack wherein an adversarial server seeks to discern which update vector a client submitted, out of two possible ones, in a single training round of federated learning under SecAgg. By conducting privacy auditing, we assess the success probability of this attack and quantify the LDP guarantees provided by SecAgg. Our numerical results unveil that, contrary to prevailing claims, SecAgg offers weak privacy against membership inference attacks even in a single training round. Indeed, it is difficult to hide a local update by adding other independent local updates when the updates are of high dimension. Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection, in federated learning.
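For those who want the gist of the audit, the toy Monte Carlo simulation below plays the "which of two updates?" game against a simulated aggregate and converts the empirical success rate into a crude local-DP lower bound. All dimensions, distributions, and client counts are made up, and the auditing methodology in the paper is more careful than this sketch.

```python
import numpy as np

def audit_secagg_distinguishing(dim=1000, n_other=9, trials=2000, seed=0):
    """Toy audit of the distinguishing game under secure aggregation: the
    server sees u_b plus the sum of the other clients' updates and guesses b
    by picking the closer hypothesis. The empirical success rate p gives a
    crude lower bound on the local-DP epsilon via eps >= log(p / (1 - p))."""
    rng = np.random.default_rng(seed)
    u0, u1 = rng.normal(size=dim), rng.normal(size=dim)
    correct = 0
    for _ in range(trials):
        b = rng.integers(2)
        others = rng.normal(size=(n_other, dim)).sum(axis=0)  # "masking" by other updates
        agg = (u1 if b else u0) + others
        guess = int(np.linalg.norm(agg - u1) < np.linalg.norm(agg - u0))
        correct += int(guess == b)
    p = correct / trials
    eps_lower = float("inf") if p >= 1.0 else np.log(p / (1 - p))
    return p, eps_lower

# In high dimensions the attack succeeds almost always, i.e., the other
# updates do little to hide the target update.
print(audit_secagg_distinguishing())
```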