SDM'19 Workshop on

Deep Learning for Graphs

Introduction


Graphs (a.k.a. networks) are a universal data structure for representing relationships between interconnected objects. They are ubiquitous across disciplines ranging from computer science, social science, economics, and medicine to bioinformatics. Representative examples of real-world graphs include social networks, knowledge graphs, protein-protein interaction networks, and molecular structures. Graph analysis techniques support a variety of applications, such as recommending friends to users in a social network, predicting the roles of proteins in a protein-protein interaction network, and predicting the properties of molecular structures for discovering new drugs.

One of the most fundamental challenges in analyzing graphs is representing them effectively, which largely determines the performance of many downstream tasks. This workshop aims to discuss the latest progress in graph representation learning and its applications in different fields. We aim to bring together researchers from communities such as machine learning, network science, natural language understanding, recommender systems, and drug discovery. We especially invite submissions related to toolkits and frameworks that make it easy to apply deep learning on graphs. The topics of interest include, but are not limited to:

  • Unsupervised node representation learning
  • Learning representations of entire graphs
  • Graph neural networks
  • Graph generation
  • Adversarial attacks on graph representation methods
  • Heterogeneous graph embedding
  • Knowledge graph embedding
  • Graph alignment
  • Dynamic graph representation
  • Graph matching
  • Graph representation learning for relational reasoning
  • Graph anomaly detection
  • Applications in recommender systems
  • Applications in natural language understanding
  • Applications in drug discovery
  • Toolkits and frameworks which make it easy to apply deep learning on graphs
  • Other applications

Speakers


  1. Stephan Günnemann: Professor of data mining & analytics at Technische Universität München
  2. Danai Koutra: Assistant Professor, University of Michigan, Ann Arbor
  3. Pietro Liò: Professor of Computational Biology at the University of Cambridge
  4. Yizhou Sun: Associate Professor at University of California, Los Angeles
  5. Srinivasan Parthasarathy: Professor at The Ohio State University


Call for Papers

The SDM'19 Deep Learning for Graphs workshop encourages submissions presenting both original results and preliminary or existing work. We welcome extended abstracts introducing preliminary work and ideas, as well as research recently published at top conferences. The workshop accepts both full papers (4 to 8 pages) and extended abstracts (1 to 2 pages) for published or ongoing work. Papers should be submitted as PDFs in the SIAM conference proceedings style, available here, and submitted via EasyChair.


The organizers can be reached at event-dlg@googlegroups.com.

Important Dates

Submission deadline: March 22, 2019

Acceptance Notification: March 30, 2019

Workshop date: May 4, 2019

Target Audience


This workshop should interest researchers in a variety of fields, including those working on fundamental research in representation learning (especially graph representation learning) and those in application domains of graph representation learning such as network science, recommender systems, natural language understanding, and drug discovery.

Program

The workshop will take place in Imperial 1. The agenda for the workshop is as follows:


  • 9:30 am - 9:35 am: Opening Remarks
  • 9:35 am - 10:20 am : Invited Talk: Srinivasan Parthasarathy
  • 10:20 am - 10:50 am: Presentation of the following papers:
      • Euler: a framework for deep learning on large-scale graphs - Yan Zhang, Shuai Li, Yi Ren, Siran Yang, Yuan Wei, Genbao Chen, Xu Tian, Shiyang Wen, Wei Lin, Di Zhang and Jinhui Li. (10 minutes)
      • Introducing Graph Smoothness Loss for Training Deep Learning Architectures - Myriam Bontonou, Carlos Eduardo Rosar Kos Lassance, Ghouthi Boukli Hacene, Vincent Gripon, Jian Tang and Antonio Ortega. (10 minutes)
      • Using Embeddings of Line Graph Powers to Retrieve Item Substitutes - Brooke Fitzgerald, Dora Jambor and Putra Manggala. (10 minutes)
  • 10:50 am - 11:35 am: Invited Talk: Danai Koutra


  • 11:35 am - 1:30 pm: Lunch Break


  • 1:30 pm - 2:15 pm: Invited Talk: Stephan Günnemann
  • 2:15 pm - 2:45 pm: Presentation of the following papers:
      • Geometric Scattering for Graph Data Analysis - Feng Gao, Guy Wolf and Matthew Hirn. (10 minutes)
      • A Unified Deep Learning Formalism For Processing Graph Signals - Myriam Bontonou, Carlos Eduardo Rosar Kos Lassance, Jean-Charles Vialatte and Vincent Gripon. (10 minutes)
      • Graph Laplacian Problems on Graph Neural Networks - Tse-Yu Lin and Yen-Lung Tsai. (10 minutes)

  • 2:45 pm - 3:20 pm: Break
  • 3:20 pm - 4:05 pm: Invited Talk: Yizhou Sun
  • 4:05 pm - 4:10 pm: Closing Remarks

Invited Talks


Dr. Srinivasan Parthasarathy - Benchmarking Graph Representation Learning Methods on Two Tasks: Insights and Renewed Bearing


Abstract (from the speaker): In this talk I will discuss an ongoing effort to benchmark popular graph representation learning methods on a range of datasets for both node classification and link prediction. Our survey covers 12 popular algorithms and 14 datasets. For both tasks we also develop a heuristic approach (without using representation learning) that serves as an important baseline, drawn from task-specific advances. Initial results suggest that classical methods (e.g., Laplacian Eigenmaps) and simple baseline heuristics offer competitive performance on certain types of datasets. We also find that, if memory is not an issue, methods based on matrix factorization appear to offer an advantage over random-walk-based methods. We conclude with a set of open questions and ideas. This is joint work with S. Gurukar, P. Vijayan, B. Ravindran and others.
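To illustrate what a non-learned, task-specific heuristic baseline for link prediction can look like, here is a minimal sketch of common-neighbor scoring, a classical heuristic of this kind. The toy graph and function name below are illustrative assumptions, not the speaker's actual benchmark code.

```python
# Minimal sketch: score candidate (non-)edges by their number of common
# neighbors -- a classical link-prediction heuristic with no learning.
from itertools import combinations

def common_neighbors_scores(adj):
    """adj: dict mapping node -> set of neighbors. Returns {(u, v): score}
    for every non-adjacent pair, scored by shared-neighbor count."""
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:  # only score candidate links (missing edges)
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores

# Toy graph: triangle 0-1-2 plus a pendant node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
scores = common_neighbors_scores(adj)
# Candidate links (0, 3) and (1, 3) each share one neighbor (node 2).
```

Ranking candidate pairs by such a score gives a link-prediction baseline that uses no representation learning at all, which is the role the heuristic baselines play in the benchmark described above.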


Speaker Bio: Dr. Srinivasan Parthasarathy received his PhD in Computer Science from the University of Rochester, New York, USA. He is a Professor in the Computer Science and Engineering Department at The Ohio State University (OSU). He directs the data mining research laboratory at OSU and co-directs the university-wide undergraduate program in data analytics. His research interests are broadly in the areas of data mining, databases, bioinformatics, and high-performance computing. He has an h-index of over 50, and his work has been cited over 10,000 times. He is a recipient of an Ameritech Faculty Fellowship in 2001, an NSF CAREER award in 2003, a DOE Early Career Award in 2004, and multiple grants or fellowships from IBM, Google, and Microsoft. His papers have received eight best paper awards or similar honors from leading conferences in the field, including the SIAM International Conference on Data Mining (SDM), the IEEE International Conference on Data Mining (ICDM), the Very Large Data Bases conference (VLDB), ACM Knowledge Discovery and Data Mining (SIGKDD), ACM Bioinformatics, and ISMB. He is currently completing his final term as SDM steering chair.


Dr. Danai Koutra - Pocket-Size Structural Embeddings in Large-Scale Networks


Abstract (from the speaker): Networks naturally capture a host of real-world interactions, from social interactions and email communication to web browsing and brain activity. Over the past few years, representation learning over networks has been shown to be successful in a variety of downstream tasks, such as classification, link prediction, and visualization. Most existing approaches seek to learn node representations that capture node proximity. In this talk, I will discuss our recent work on a different class of node representations that aim to preserve the structural similarity between nodes. I will present lessons learned from designing efficient structural embedding methods for large-scale heterogeneous data, including ways to overcome the computational challenges and massive storage requirements that many existing techniques face. Throughout the talk, I will discuss applications to professional role discovery, entity resolution, entity linking across data sources, and more.


Speaker Bio: Danai Koutra is an Assistant Professor in Computer Science and Engineering at the University of Michigan, where she leads the Graph Exploration and Mining at Scale (GEMS) Lab. Her research focuses on practical and scalable methods for large-scale real networks, with applications in neuroscience, organizational analytics, and the social sciences. She won an NSF CAREER award and an Amazon Research Faculty Award in 2019, an ARO Young Investigator award and an Adobe Data Science Research Faculty Award in 2018, the 2016 ACM SIGKDD Dissertation Award, and an honorable mention for the SCS Doctoral Dissertation Award (CMU). She is the Program Director of the SIAG on Data Mining and Analytics, an Associate Editor of ACM TKDD, a tutorial co-chair for KDD'19, and a demo co-chair for CIKM'19. At the University of Michigan, she leads the "Explore Graduate Studies in CSE" workshop, which aims to broaden participation in computer science at the graduate level. She has co-organized 3 tutorials and 3 workshops. She has worked at IBM Hawthorne, Microsoft Research Redmond, and Technicolor Palo Alto/Los Altos. She earned her Ph.D. and M.S. in Computer Science from CMU in 2015 and her diploma in Electrical and Computer Engineering from the National Technical University of Athens in 2010.



Dr. Stephan Günnemann - Adversarial Robustness of Machine Learning Models for Graphs


Abstract (from the speaker): Graph neural networks and node embedding techniques have recently achieved impressive results in many graph learning tasks. Despite their proliferation, studies of their robustness properties are still very limited -- yet, in domains where graph learning methods are often used, e.g. the web, adversaries are common. In my talk, I will shed light on the aspect of adversarial robustness for state-of-the-art graph-based learning techniques. I will highlight the unique challenges and opportunities that come with the graph setting and introduce different perturbation approaches showcasing the methods' vulnerabilities. I will conclude with a short discussion of methods for improving robustness.
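To make the idea of a structural perturbation attack concrete, the following is a minimal toy sketch (an illustrative assumption, not one of the speaker's actual methods): a brute-force search for the single edge insertion that most shifts a target node's one-step neighbor-averaged feature, the aggregation at the heart of many graph neural networks.

```python
# Toy sketch of an adversarial edge insertion against one step of
# mean-neighbor feature aggregation (a simplified GNN layer).
def propagate(adj, feats, node):
    """One mean-aggregation step: average the features of node's neighbors."""
    nbrs = adj[node]
    return sum(feats[n] for n in nbrs) / len(nbrs)

def worst_edge_insertion(adj, feats, target):
    """Try inserting each missing edge at `target`; return the insertion
    that moves its aggregated feature furthest from the clean value."""
    clean = propagate(adj, feats, target)
    best, best_shift = None, 0.0
    for v in adj:
        if v != target and v not in adj[target]:
            adj[target].add(v); adj[v].add(target)        # apply the flip
            shift = abs(propagate(adj, feats, target) - clean)
            adj[target].remove(v); adj[v].remove(target)  # undo it
            if shift > best_shift:
                best, best_shift = v, shift
    return best, best_shift

# Toy graph: target node 0 is linked to nodes 1 and 2; node 3 is an
# isolated outlier with a very different feature value.
adj = {0: {1, 2}, 1: {0}, 2: {0}, 3: set()}
feats = {0: 0.0, 1: 0.1, 2: 0.2, 3: 5.0}
best, shift = worst_edge_insertion(adj, feats, 0)
# Linking the target to the outlier (node 3) shifts its aggregate the most.
```

Real attacks on graph neural networks of course operate on learned multi-layer models and search the perturbation space far more cleverly, but the sketch captures the basic setting: a small structural change can substantially alter what a node aggregates from its neighborhood.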


Speaker Bio: Stephan Günnemann is a Professor at the Department of Informatics, Technical University of Munich. He received his doctoral degree in computer science in 2012 from RWTH Aachen University, Germany. From 2012 to 2015 he was at Carnegie Mellon University, USA, initially as a postdoctoral fellow and later as a senior researcher. He has been a visiting researcher at Simon Fraser University, Canada, and a research scientist at the Research & Technology Center of Siemens AG. His main research interests include the development of robust and scalable machine learning techniques for graphs and network data. His work on subspace clustering on graphs and his analysis of the adversarial robustness of graph neural networks received best research paper awards at ECML-PKDD 2011 and KDD 2018, respectively.


Organizers

  • Jian Tang has been an assistant professor at the Montreal Institute for Learning Algorithms (Mila) and HEC Montreal since December 2017. He finished his Ph.D. at Peking University in 2014, was a researcher at Microsoft Research from 2014 to 2016, and was a postdoctoral fellow at the University of Michigan and Carnegie Mellon University from 2016 to 2017. His research focuses on graph representation learning with applications in natural language understanding, recommender systems, and drug discovery. Most of his papers are published in top-tier machine learning and data mining venues (ICML, KDD, WWW, and WSDM). He co-organized a tutorial on graph representation learning at KDD 2017 and published one of the first papers on node representation learning (LINE). One of his papers on learning extremely low-dimensional node representations for graph visualization (LargeVis) was nominated for the best paper award at WWW 2016. He also received a best paper award at ICML 2014 for a constructive theoretical analysis of statistical topic models.
  • Shagun Sodhani has been an MSc student at the Montreal Institute for Learning Algorithms (Mila) since September 2017, under the supervision of Dr. Jian Tang. Prior to that, he worked with the Machine Learning team at Adobe Systems, where he was awarded the Outstanding Young Engineer Award. His research focuses on applications of graph representation learning.
  • William L. Hamilton is a Visiting Researcher at Facebook AI Research and will be joining McGill University's School of Computer Science as an Assistant Professor in January 2019. He completed his PhD at Stanford University in 2018 under the supervision of Jure Leskovec and Dan Jurafsky, and prior to that he completed an MSc at McGill University under the supervision of Joelle Pineau. His research focuses on graph representation learning as well as large-scale computational social science. He has published papers on graph representation learning in top-tier machine learning and network science venues (NIPS, ICML, KDD, and WWW) and co-organized a tutorial on the topic at WWW 2018 (i.e., TheWebConf). He is the lead developer of GraphSAGE, a state-of-the-art open-source framework for graph representation learning. He was an SAP Stanford Graduate Fellow from 2014 to 2018, received the Cozzarelli Best Paper Award from the Proceedings of the National Academy of Sciences (PNAS) in 2017, and his work has been featured in numerous media outlets, including Wired, The New York Times, and the BBC.
  • Reihaneh Rabbany is an Assistant Professor at the School of Computer Science, McGill University. She is a member of Mila (Quebec's artificial intelligence institute) and a Canada CIFAR AI Chair. Her research sits at the intersection of network science, data mining, and machine learning, with a focus on analyzing real-world interconnected data and on social-good applications. She has contributed to more than 20 peer-reviewed research papers published in top-tier conferences and journals, including KDD, NeurIPS, AAAI, ECML/PKDD, DMKD, and PLOS ONE. Before joining McGill, she was a postdoctoral fellow at the School of Computer Science, Carnegie Mellon University, and completed her Ph.D. in the Department of Computing Science at the University of Alberta. She was recognized as a top female graduate in electrical engineering and computer science in the 2016 Rising Stars program and was a recipient of the Queen Elizabeth II Graduate Scholarship during her graduate studies.
  • Vincent Gripon is a permanent researcher with IMT-Atlantique (Institut Mines-Télécom), Brest, France. He obtained his M.S. from École Normale Supérieure de Cachan and his Ph.D. from Télécom Bretagne. His research interests lie at the intersection of graph signal processing, machine learning, and neural networks. He has co-authored more than 70 papers in these domains. Since October 2018, he has been an invited professor at Université de Montréal.