Network Representation Learning

Scalable Graph Convolution Networks with Gated Attentive Propagation (GAP) kernels for Collective Classification

Key contributors: Priyesh V.

Domain: Network representation learning, deep learning, and factorization machines

Description:

Learning algorithms for node classification on network data, which comprises collections of entities (nodes) joined by relationships (edges), differ from conventional machine learning algorithms, which ignore relational information and assume samples (entities) are drawn i.i.d. Network data can also contain one or more contextual features (attributes) associated with nodes or relations. Learning algorithms should adhere to the structure in the data explicitly and capture the complex correlations between a node and its neighbors for the classification task.

Existing works enforce or extract relational information by means of spectral kernels, which apply the same neighborhood weighting mechanism across all entities, fixed by the choice and parameterization of the kernel. This limits their capacity to capture relational information in networks where many related nodes are irrelevant to the end task, such as high-density networks with low homophily and noisy edges.

To overcome this, we propose a recurrent cell with a Gated Attentive Propagation (GAP) kernel for Graph Convolutional Networks (GCNs), which learns to weight each node's individual relationships using attention mechanisms. Leveraging attention alleviates the problems associated with homophilic kernels without exploding the number of parameters required, thereby allowing us to scale to higher-order relational information without succumbing to an exploding set of irrelevant neighbors as neighborhood depth increases.
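The core idea of weighting each neighbor individually via attention can be illustrated with a minimal NumPy sketch. This is a generic GAT-style attention aggregation, not the authors' exact GAP kernel; the function name, parameter shapes, and the LeakyReLU scoring are illustrative assumptions.

```python
import numpy as np

def attention_aggregate(H, adj, W, a):
    """One attention-weighted propagation step (illustrative GAT-style sketch).

    H   : (n, d) node feature matrix
    adj : (n, n) binary adjacency matrix (assumed to include self-loops)
    W   : (d, d') shared feature projection
    a   : (2*d',) attention parameter vector
    """
    Z = H @ W                                   # project node features
    n = Z.shape[0]
    scores = np.full((n, n), -np.inf)           # -inf masks non-edges
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                # score depends on both endpoints, so each node weighs
                # each of its neighbors individually
                e = np.concatenate([Z[i], Z[j]]) @ a
                scores[i, j] = np.maximum(0.2 * e, e)  # LeakyReLU
    # softmax over each node's neighborhood turns scores into weights
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ Z                            # attention-weighted sum
```

Because the softmax normalizes per node, adding more (possibly irrelevant) neighbors does not add parameters: the same vector `a` scores every edge, which is what keeps the parameter count from growing with neighborhood depth.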

Suggested Readings:

Reduce bias in random walk based embedding models

Key contributors: Mohan Bhambhani.

Domain: Network representation learning

Description:

Random-walk-based embedding models such as DeepWalk and node2vec are not very complex and typically have high bias. My work has been an attempt to reduce the bias in these models.
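For context, the walk-generation step these models share can be sketched in a few lines. This is a uniform DeepWalk-style walker (node2vec would bias the transition probabilities); the function name and adjacency-list format are assumptions for illustration.

```python
import random

def random_walks(adj_list, num_walks, walk_len, seed=0):
    """Generate uniform random walks (DeepWalk-style, illustrative).

    adj_list  : dict mapping node -> list of neighbor nodes
    num_walks : walks started per node
    walk_len  : maximum length of each walk
    Returns a list of walks (node sequences) that can be fed to a
    skip-gram model as "sentences".
    """
    rng = random.Random(seed)
    walks = []
    nodes = list(adj_list)
    for _ in range(num_walks):
        rng.shuffle(nodes)          # randomize start order each pass
        for start in nodes:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj_list[walk[-1]]
                if not nbrs:        # dead end: stop the walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks
```

Each walk is treated as a sentence and the nodes as words, so node co-occurrence within a window plays the role that word co-occurrence plays in word2vec.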

Suggested Readings:

Label guided and node embedding guided community structure discovery to learn community preserving network embedding.

Key contributors: Anasua Mitra.

Domain: Network representation learning

Description:

Recent works have shown the benefits of network representation learning that also preserves the community structure of networks. Modularity-maximization-based community discovery, with the discovered structure incorporated into the embeddings, is one of the most recent works in this direction. Presently we are trying to learn communities from the network using simple intuitions: neighboring nodes with similar labels should belong to the same community, and neighboring nodes with similar representations should belong to the same community.
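The modularity objective these methods maximize is standard and easy to state concretely. Below is a minimal NumPy implementation of Newman's modularity Q for a given partition; the function name and input format are illustrative.

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity Q of a partition (standard formula).

    adj         : (n, n) symmetric binary adjacency matrix
    communities : length-n integer array of community labels

    Q = (1/2m) * sum_ij [ A_ij - k_i * k_j / 2m ] * delta(c_i, c_j)
    where k_i is the degree of node i and m the number of edges.
    """
    k = adj.sum(axis=1)            # node degrees
    two_m = k.sum()                # 2m = total degree
    # delta(c_i, c_j): 1 when i and j share a community
    same = communities[:, None] == communities[None, :]
    return ((adj - np.outer(k, k) / two_m) * same).sum() / two_m
```

For example, two disconnected triangles labeled as two communities give Q = 0.5, the maximum for that graph. Label-guided discovery can reuse the same quantity with `communities` derived from node labels or from clustering the learned representations.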

Suggested Readings: