This work investigates the environmental impact of GNN-based recommender systems, an aspect that has been largely overlooked in the literature. Specifically, we conduct a comprehensive analysis of the carbon emissions associated with training and deploying GNN models for recommendation tasks. We evaluate the energy consumption and carbon footprint of different GNN architectures and configurations, considering factors such as model complexity, training duration, hardware specifications and embedding size.
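As a minimal sketch of how such measurements can be taken, the snippet below wraps a training loop with the open-source codecarbon tracker; codecarbon, the loss function and the hyperparameters are illustrative assumptions, not the paper's actual instrumentation.

```python
# Hypothetical sketch: measuring the energy/emissions of one GNN training run
# with the codecarbon package (an assumption; the paper does not name its tool).
import torch
from codecarbon import EmissionsTracker

def train_and_measure(model, loader, epochs=50, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    tracker = EmissionsTracker(project_name="gnn-recsys-carbon")  # reports kWh and kg CO2eq
    tracker.start()
    for _ in range(epochs):
        for batch in loader:
            optimizer.zero_grad()
            loss = model.loss(batch)   # placeholder: BPR or another ranking loss
            loss.backward()
            optimizer.step()
    emissions_kg = tracker.stop()      # total estimated emissions for the run
    return emissions_kg
```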
Our work addresses the reproducibility problems in the domain of Sequential Recommendation Systems (SRSs) by standardising data pre-processing and model implementations, providing a comprehensive code resource, including a framework for developing SRSs, and establishing a foundation for consistent and reproducible experimentation. We conduct extensive experiments on several benchmark datasets, comparing various SRSs implemented in our resource. We challenge prevailing performance benchmarks, offering new insights into the SR domain.
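Below is a sketch of the kind of pre-processing commonly standardised in this setting (iterative k-core filtering followed by a chronological leave-one-out split); the function and column names are hypothetical and do not reflect the resource's actual API.

```python
# Illustrative sequential-recommendation pre-processing: k-core filtering + leave-one-out split.
import pandas as pd

def preprocess(interactions: pd.DataFrame, k: int = 5):
    """interactions: columns [user_id, item_id, timestamp]."""
    df = interactions.copy()
    # Iterative k-core filtering: keep users and items with at least k interactions.
    while True:
        user_counts = df["user_id"].value_counts()
        item_counts = df["item_id"].value_counts()
        mask = df["user_id"].map(user_counts).ge(k) & df["item_id"].map(item_counts).ge(k)
        if mask.all():
            break
        df = df[mask]
    # Chronological leave-one-out: last item per user -> test, second-to-last -> validation.
    df = df.sort_values(["user_id", "timestamp"])
    test = df.groupby("user_id").tail(1)
    rest = df.drop(test.index)
    valid = rest.groupby("user_id").tail(1)
    train = rest.drop(valid.index)
    return train, valid, test
```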
The proposed library can be used by researchers and practitioners to streamline research in the field of Recommendation Systems. For details, refer to the paper.
In this work, we propose a solution integrating a cutting-edge model inspired by category theory: Sheaf4Rec. Our approach takes advantage of sheaf theory and results in a more comprehensive representation that can be effectively exploited during inference. Our proposed model exhibits a noteworthy relative improvement of up to 8.53% on F1-Score@10 and an impressive increase of up to 11.29% on NDCG@10, outperforming existing state-of-the-art models such as NGCF, KGTORe and other recently developed GNN-based models. Sheaf4Rec also shows remarkable efficiency gains: we observe substantial runtime improvements ranging from 2.5% up to 37% compared to other GNN-based competitor models.
All the code is written in Python and builds on PyTorch and PyTorch Geometric, with Weights & Biases (wandb) used for logging.
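A minimal sketch of that stack, assuming a toy two-layer GCN and illustrative project and metric names rather than the repository's actual code:

```python
# PyTorch Geometric model trained in plain PyTorch, with metrics streamed to wandb.
import torch
import torch.nn.functional as F
import wandb
from torch_geometric.nn import GCNConv

class TwoLayerGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

def train(model, data, epochs=100):
    wandb.init(project="sheaf4rec-example")  # hypothetical project name
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(epochs):
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()
        wandb.log({"epoch": epoch, "train/loss": loss.item()})
```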
This is our approach to the identification of persuasion techniques in text, a subtask of SemEval-2023 Task 3 on the multilingual detection of genre, framing, and persuasion techniques in online news. The subtask is multi-label at the paragraph level, and the inventory considered by the organizers covers 23 persuasion techniques.
Our solution is based on an ensemble of a variety of pre-trained language models fine-tuned on the propaganda dataset.
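The general recipe can be sketched as follows: several encoders configured for multi-label classification over the 23 techniques, whose sigmoid scores are averaged (soft voting). The checkpoint names, threshold and inference details below are illustrative assumptions rather than our exact pipeline; in practice the checkpoints would point to the fine-tuned models.

```python
# Sketch of multi-label ensemble inference over the 23 persuasion techniques.
import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_TECHNIQUES = 23
CHECKPOINTS = ["roberta-large", "microsoft/deberta-v3-large"]  # example members; use fine-tuned checkpoints

def ensemble_predict(paragraph: str, threshold: float = 0.5):
    probs = []
    for ckpt in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(
            ckpt,
            num_labels=NUM_TECHNIQUES,
            problem_type="multi_label_classification",  # sigmoid + BCE head
        )
        inputs = tokenizer(paragraph, truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.sigmoid(logits).squeeze(0).numpy())
    avg = np.mean(probs, axis=0)           # soft-voting ensemble
    return np.where(avg >= threshold)[0]   # indices of predicted techniques
```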
The official evaluation shows our solution ranks 1st in English and attains high scores in all the other languages, i.e. French, German, Italian, Polish, and Russian.
The aim of this project is to develop a safe navigation framework for the TIAGo robot moving in a human crowd. Our approach is based on the paper by Vulcano et al., which presents a sensor-based scheme. This scheme consists of two modules, the Crowd Prediction and Motion Generation modules, which run sequentially during every sampling interval. Our setup is implemented in Python using ROS, and we validate it through multiple Gazebo experiments in scenarios of varying complexity.
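A structural sketch of that per-interval loop as a rospy node; CrowdPredictor, MotionGenerator, the topic name and the rate are hypothetical stand-ins, not our actual implementation.

```python
# Two-module loop (Crowd Prediction -> Motion Generation), one pass per sampling interval.
import rospy
from geometry_msgs.msg import Twist

class CrowdPredictor:
    def predict(self):
        """Return predicted human trajectories from the latest sensor data (stub)."""
        return []

class MotionGenerator:
    def compute_command(self, predicted_crowd):
        """Return a safe velocity command given the predicted crowd motion (stub)."""
        return Twist()

def main():
    rospy.init_node("safe_navigation")
    cmd_pub = rospy.Publisher("/mobile_base_controller/cmd_vel", Twist, queue_size=1)
    predictor, generator = CrowdPredictor(), MotionGenerator()
    rate = rospy.Rate(10)  # one iteration per sampling interval (10 Hz here)
    while not rospy.is_shutdown():
        crowd = predictor.predict()             # Crowd Prediction module
        cmd = generator.compute_command(crowd)  # Motion Generation module
        cmd_pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```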
Data augmentation techniques are used to increase the size and variability of training data for learning visual tasks. These techniques are well-known in computer vision and include rotation, cropping, scaling and other transformations to increase the size of a dataset. However, modeling variations in the sensor domain has received little attention. This paper proposes an automatic, physically-based, and straightforward augmentation pipeline to simulate, on real images, multiple effects which arise from non-ideal optics, such as spherical aberration, defocus, astigmatism, and coma. Introducing these effects on a real dataset can improve the ability to perform multiple computer vision tasks on it. We validate this assumption on two popular computer vision tasks, object detection and semantic segmentation, by introducing sensor effects into the PASCAL VOC 2012 dataset. Finally, we show that these techniques improve the performance of our models on the detection task while reaching very similar results on segmentation.
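To illustrate one effect such a pipeline can simulate, the sketch below applies synthetic defocus by convolving an image with a disk-shaped point spread function; the kernel construction and radius are assumptions, and the paper's pipeline models the remaining aberrations physically rather than with this simplification.

```python
# Simple defocus-blur augmentation via convolution with a disk PSF.
import cv2
import numpy as np

def disk_kernel(radius: int) -> np.ndarray:
    """Normalized circular PSF approximating a defocused lens."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x**2 + y**2 <= radius**2).astype(np.float32)
    return kernel / kernel.sum()

def apply_defocus(image: np.ndarray, radius: int = 3) -> np.ndarray:
    """Apply synthetic defocus to a real image (e.g. a PASCAL VOC sample)."""
    return cv2.filter2D(image, -1, disk_kernel(radius))
```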