
KDD17 Tutorial: Learning Representations of Large-scale Networks

Abstract:
Large-scale networks such as social networks, citation networks, the World Wide Web, and traffic networks are ubiquitous in the real world. Networks can also be constructed from text, time series, behavior logs, and many other types of data. Mining network data attracts increasing attention in academia and industry, covers a variety of applications, and influences the methodology of mining many types of data. A prerequisite to network mining is to find an effective representation of networks, which largely determines the performance of downstream data mining tasks. Traditionally, a network is represented by its adjacency matrix, which suffers from data sparsity and high dimensionality. Recently, there has been fast-growing interest in learning continuous and low-dimensional representations of networks. This is a challenging problem for multiple reasons: (1) network data (nodes and edges) are sparse, discrete, and globally interactive; (2) real-world networks are very large, usually containing millions of nodes and billions of edges; and (3) real-world networks are heterogeneous: edges can be directed, undirected, or weighted, and both nodes and edges may carry different semantics.

In this tutorial, we will introduce recent progress on learning continuous and low-dimensional representations of large-scale networks. This includes methods that learn node embeddings, methods that learn representations of larger graph structures (e.g., an entire network), and methods that lay out very large networks in extremely low-dimensional (2D or 3D) spaces. We will introduce methods for learning different types of node representations: representations that can be used as features for node classification, community detection, link prediction, and network visualization. We will also introduce end-to-end methods that learn the representation of an entire graph with deep neural networks by directly optimizing tasks such as information cascade prediction, chemical compound classification, and protein structure classification. We will highlight open source implementations of these techniques; a minimal node-embedding sketch in the style of DeepWalk [10] is given below.
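To make the node-embedding idea concrete, here is a minimal sketch in the spirit of DeepWalk [10] and node2vec [4]: sample truncated random walks from the graph, then treat each walk as a "sentence" and train a skip-gram model on it. This is an illustrative sketch, not the authors' released code; the function names, the toy graph, and all hyperparameters are arbitrary choices, and it assumes networkx and gensim 4.x are installed.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walk(graph, start, walk_length):
    """Sample one truncated random walk starting from `start`."""
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:  # dead end (possible in directed graphs)
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]  # gensim expects string tokens

def deepwalk_embeddings(graph, walks_per_node=10, walk_length=40, dim=64):
    """DeepWalk-style embeddings: random walks + skip-gram (gensim 4.x API)."""
    walks = []
    for _ in range(walks_per_node):
        nodes = list(graph.nodes())
        random.shuffle(nodes)  # start walks from every node, in random order
        walks.extend(random_walk(graph, node, walk_length) for node in nodes)
    model = Word2Vec(walks, vector_size=dim, window=5, min_count=1,
                     sg=1, workers=4, epochs=5)  # sg=1 selects skip-gram
    return model.wv  # maps node id (as a string) to a dim-dimensional vector

# Toy example: embed Zachary's karate club network.
G = nx.karate_club_graph()
embeddings = deepwalk_embeddings(G)
print(embeddings[str(0)][:5])  # first 5 dimensions of node 0's embedding
```

node2vec [4] follows the same recipe but replaces the uniform transition in the walk with a biased second-order walk controlled by a return parameter p and an in-out parameter q, interpolating between breadth-first and depth-first exploration.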

Tutors:
Jian Tang, Cheng Li, Qiaozhu Mei

Contact: Jian Tang, tangjianpku@gmail.com

Slides


References:
  • [1]  Mukund Balasubramanian and Eric L Schwartz. 2002. The Isomap algorithm and topological stability. Science 295, 5552 (2002), 7–7.
  • [2]  Mikhail Belkin and Partha Niyogi. 2001. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, Vol. 14. 585–591.
  • [3]  Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems. 3837–3845.
  • [4]  Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 855–864.
  • [5]  Joseph B Kruskal. 1964. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika 29, 1 (1964), 1–27.
  • [6]  Cheng Li, Xiaoxiao Guo, and Qiaozhu Mei. 2016. DeepGraph: Graph Structure Predicts Network Growth. arXiv preprint arXiv:1610.06251 (2016).
  • [7]  Cheng Li, Jiaqi Ma, Xiaoxiao Guo, and Qiaozhu Mei. 2017. DeepCas: An End-to-end Predictor of Information Cascades. In Proceedings of the 26th International Conference on World Wide Web.
  • [8]  Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. 3111–3119.
  • [9]  Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. 2016. Learning convolutional neural networks for graphs. In Proceedings of the 33rd International Conference on Machine Learning.
  • [10]  Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 701–710.
  • [11]  Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. 2011. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research 12, Sep (2011), 2539–2561.
  • [12]  Jian Tang, Jingzhou Liu, Ming Zhang, and Qiaozhu Mei. 2016. Visualizing Large-scale and High-dimensional Data. arXiv preprint arXiv:1602.00370 (2016).
  • [13]  Jian Tang, Meng Qu, and Qiaozhu Mei. 2015. PTE: Predictive text embedding through large-scale heterogeneous text networks. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1165–1174.
  • [14]  Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web. ACM, 1067–1077.
Software:




