Graphs (a.k.a. networks) are a universal data structure for representing relationships between interconnected objects. They are ubiquitous across disciplines and domains ranging from computer science and social science to economics, medicine, and bioinformatics. Representative examples of real-world graphs include social networks, knowledge graphs, protein-protein interaction graphs, and molecular structures. Graph analysis techniques support a variety of applications, such as recommending friends to users in a social network, predicting the roles of proteins in a protein-protein interaction network, and predicting the properties of molecular structures for discovering new drugs.
One of the most fundamental challenges in analyzing graphs is representing them effectively, which largely determines the performance of many downstream tasks. This workshop aims to discuss the latest progress on graph representation learning and its applications in different fields. We aim to bring together researchers from different communities, such as machine learning, network science, natural language understanding, recommender systems, and drug discovery. We especially invite submissions related to toolkits and frameworks that make it easy to apply deep learning on graphs. The topics of interest include but are not limited to:
The SDM'19 Deep Learning for Graphs workshop encourages submissions presenting both original results and preliminary or existing work. We welcome extended-abstract submissions introducing preliminary work and ideas, as well as research recently published at top conferences. The workshop accepts both full papers (4 to 8 pages) and extended abstracts (1 to 2 pages) for published or ongoing work. Papers should be submitted as PDF, using the SIAM conference proceedings style, available here, and submitted via EasyChair.
The organizers can be reached at event-dlg@googlegroups.com.
This workshop will be of interest to researchers working on fundamental problems in representation learning (especially graph representation learning), as well as to researchers in application domains of graph representation learning such as network science, recommender systems, natural language understanding, and drug discovery.
The workshop will take place in Imperial 1. The agenda for the workshop is as follows:
2:45 pm - 3:20 pm: Break
3:20 pm - 4:05 pm: Invited talk: Yizhou Sun
4:05 pm - 4:10 pm: Closing Remarks
Abstract (from the speaker): In this talk I will discuss an ongoing effort to benchmark popular graph representation learning methods on a range of datasets for both node classification and link prediction. Our survey covers 12 popular algorithms and 14 datasets. For both tasks we also develop a heuristic approach (without using representation learning) that serves as an important baseline, drawn from task-specific advances. Initial results suggest that classical methods (e.g., Laplacian Eigenmaps) and simple baseline heuristics offer competitive performance on certain types of datasets. We also find that, if memory is not an issue, methods based on matrix factorization appear to offer an advantage over random-walk-based methods. We conclude with a set of open questions and ideas. This is joint work with S. Gurukar, P. Vijayan, B. Ravindran, and others.
Speaker Bio: Dr. Srinivasan Parthasarathy received his PhD in Computer Science from the University of Rochester, New York, USA. He is a Professor in the Computer Science and Engineering Department at the Ohio State University (OSU). He directs the data mining research laboratory at OSU and co-directs the university-wide undergraduate program in Data Analytics. His research interests are broadly in the areas of data mining, databases, bioinformatics, and high-performance computing. He has an h-index of over 50, and his work has been cited over 10,000 times. He is a recipient of an Ameritech Faculty Fellowship in 2001, an NSF CAREER award in 2003, a DOE Early Career Award in 2004, and multiple grants or fellowships from IBM, Google, and Microsoft. His papers have received eight best paper awards or similar honors from leading conferences in the field, including ones at the SIAM International Conference on Data Mining (SDM), the IEEE International Conference on Data Mining (ICDM), the Very Large Data Bases conference (VLDB), ACM Knowledge Discovery and Data Mining (SIGKDD), ACM Bioinformatics, and ISMB. He is currently completing his final term as SDM steering chair.
Abstract (from the speaker): Networks naturally capture a host of real-world interactions, from social interactions and email communication to web browsing and brain activity. Over the past few years, representation learning over networks has been shown to be successful in a variety of downstream tasks, such as classification, link prediction, and visualization. Most existing approaches seek to learn node representations that capture node proximity. In this talk, I will discuss our recent work on a different class of node representations that aim to preserve the structural similarity between nodes. I will present the lessons learned from designing efficient structural embedding methods for large-scale heterogeneous data, including ways to overcome the computational challenges and massive storage requirements that many existing techniques face. Throughout the talk, I will discuss applications to professional role discovery, entity resolution, entity linking across data sources, and more.
Speaker Bio: Danai Koutra is an Assistant Professor in Computer Science and Engineering at the University of Michigan, where she leads the Graph Exploration and Mining at Scale (GEMS) Lab. Her research focuses on practical and scalable methods for large-scale real networks, with applications in neuroscience, organizational analytics, and the social sciences. She won an NSF CAREER award and an Amazon Research Faculty Award in 2019, an ARO Young Investigator award and an Adobe Data Science Research Faculty Award in 2018, the 2016 ACM SIGKDD Dissertation Award, and an honorable mention for the SCS Doctoral Dissertation Award (CMU). She is the Program Director of the SIAG on Data Mining and Analytics, an Associate Editor of ACM TKDD, a tutorial co-chair for KDD'19, and a demo co-chair for CIKM'19. At the University of Michigan, she leads the "Explore Graduate Studies in CSE" workshop, which aims to broaden participation in computer science at the graduate level. She has co-organized 3 tutorials and 3 workshops. She has worked at IBM Hawthorne, Microsoft Research Redmond, and Technicolor Palo Alto/Los Altos. She earned her Ph.D. and M.S. in Computer Science from CMU in 2015 and her diploma in Electrical and Computer Engineering at the National Technical University of Athens in 2010.
Abstract (from the speaker): Graph neural networks and node embedding techniques have recently achieved impressive results in many graph learning tasks. Despite their proliferation, studies of their robustness properties are still very limited -- yet, in domains where graph learning methods are often used, e.g., the web, adversaries are common. In my talk, I will shed light on the aspect of adversarial robustness for state-of-the-art graph-based learning techniques. I will highlight the unique challenges and opportunities that come along with the graph setting and introduce different perturbation approaches showcasing the methods' vulnerabilities. I will conclude with a short discussion of methods for improving robustness.
Speaker Bio: Stephan Günnemann is a Professor at the Department of Informatics, Technical University of Munich. He received his doctoral degree in computer science in 2012 from RWTH Aachen University, Germany. From 2012 to 2015 he was at Carnegie Mellon University, USA, initially as a postdoctoral fellow and later as a senior researcher. Stephan Günnemann has been a visiting researcher at Simon Fraser University, Canada, and a research scientist at the Research & Technology Center of Siemens AG. His main research interests include the development of robust and scalable machine learning techniques for graphs and network data. His work on subspace clustering on graphs, as well as his analysis of the adversarial robustness of graph neural networks, received best research paper awards at ECML-PKDD 2011 and KDD 2018.