Learning Representations for Structured Data
special session @ IJCNN 2020
Learning Representations for Structured Data is a special session of the 2020 International Joint Conference on Neural Networks (IJCNN), held as part of the IEEE World Congress on Computational Intelligence in Glasgow (UK), on July 19-24, 2020.
Call for papers
Structured data, such as sequences, trees, and graphs, are a natural representation for compound information made of atomic pieces (the nodes and their labels) and their relationships, represented by the edges (and their labels). Graphs are among the most general and complex forms of structured data: they can represent networks of interacting elements, e.g. in social graphs or metabolomics, as well as data where topological variations influence the feature of interest, e.g. molecular compounds. Being able to process data in these rich structured forms provides a fundamental advantage when identifying data patterns suitable for predictive and/or explorative analyses. This has motivated the machine learning community's recent, growing interest in developing learning models for structured information.
Thanks to the growing availability of computational resources and data, modern machine learning methods promote flexible representations that can be learned end-to-end from data. For instance, recent deep learning approaches to representation learning on structured data complement the flexibility of data-driven methods with structural biases derived from prior knowledge about the problem at hand. Representation learning is also becoming increasingly important in other areas, such as kernel-based and probabilistic models. Moreover, it has been shown that, when the data available for the task at hand are limited, it is still beneficial to resort to representations learned in an unsupervised fashion, or on different but related tasks.
This session focuses on learning representations for structured data such as sequences, trees, graphs, and relational data. Topics of interest to this session include, but are not limited to:
- Deep learning and representation learning for graphs
- Learning with network data
- Graph generation (probabilistic models, variational autoencoders, adversarial training, …)
- Graph reduction and pooling in Graph Neural Networks
- Adaptive processing of structured data (neural, probabilistic, kernel)
- Recurrent, recursive and contextual models
- Tensor methods for structured data
- Reservoir computing and randomized neural networks for structures
- Relational deep learning
- Learning implicit representations
- Applications of adaptive structured data processing, e.g. natural language processing, machine vision (e.g. point clouds as graphs), materials science, chemoinformatics, computational biology, and social networks
Important dates
Paper Submissions: January 15, 2020
Paper Acceptance Notifications: March 30, 2020
Conference: July 19-24, 2020
Organizers
Davide Bacciu, University of Pisa
Nicolò Navarin, University of Padova
Filippo Maria Bianchi, Norwegian Research Centre
Thomas Gärtner, TU Wien
Alessandro Sperduti, University of Padova
For any enquiries, please write to: bacciu [at] di.unipi.it or nnavarin [at] math.unipd.it