Accepted Papers

Mohammad Khoshneshin and Nick Street. A graphical model for multi-relational social network analysis
Jie Liu and David Page. Structure Learning of Undirected Graphical Models with Contrastive Divergence
Tanmoy Mukherjee, Vinay Pande and Stanley Kok. Extracting New Facts in Knowledge Bases: A Matrix Tri-Factorization Approach
Christopher Aicher, Abigail Z. Jacobs and Aaron Clauset. Adapting the Stochastic Block Model to Edge-Weighted Networks
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston and Oksana Yakhnenko. Irreflexive and Hierarchical Relations as Translations
Luiz Gomes-Jr, Rodrigo Jensen and André Santanchè. Query-based inferences in the Complex Data Management System
Tommi Kerola, Linus Hermansson, Fredrik Johansson, Vinay Jethava and Devdatt Dubhashi. Entity Disambiguation in Anonymized Graphs Using Graph Kernels
Bert Huang, Ben London, Ben Taskar and Lise Getoor. Empirical Analysis of Collective Stability
Anthony Coutant, Philippe Leray and Hoel Le Capitaine. Learning Probabilistic Relational Models using co-clustering methods
Yang Chen and Daisy Zhe Wang. Web-Scale Knowledge Inference Using Markov Logic Networks
Daniil Mirylenka and Andrea Passerini. Learning to Grow Structured Visual Summaries for Document Collections
Joseph Pfeiffer, Jennifer Neville and Paul Bennett. Combining Active Sampling with Parameter Estimation and Prediction in Single Networks
Vinay Prabhu, Rohit Negi and Miguel Rodrigues. Bipartisan cloture roll call vote prediction using the joint press release network in US Senate
Jay Pujara, Hui Miao, Lise Getoor and William Cohen. Large-Scale Knowledge Graph Identification using PSL
David Arbour, James Atwood, Ahmed El-Kishky and David Jensen. Agglomerative Clustering of Bagged Data Using Joint Distributions
Zhe Zhang and Munindar P. Singh. ReNew: A Semi-Supervised Self-Learning Framework for Sentiment Flow Analysis
Maximilian Nickel and Volker Tresp. Logistic Tensor Factorization for Multi-Relational Data


Mohammad Khoshneshin and Nick Street. A graphical model for multi-relational social network analysis
Abstract: In this paper, we propose a graphical model for multi-relational social network analysis based on latent variable models. Latent variable models are among the most successful approaches for social network analysis: they assume a latent variable for each entity and model the probability distribution over relationships between entities as a function of those latent variables. Here, we use latent feature networks (LFN), a general-purpose framework for multi-relational learning via latent variable models. Experimental results show that incorporating side information via the proposed model substantially improves link prediction in a social network.


Jie Liu and David Page. Structure Learning of Undirected Graphical Models with Contrastive Divergence
Abstract: Structure learning of Markov random fields (MRFs) is generally NP-hard (Karger & Srebro, 2001). Many structure learners and theoretical results rely on the correlation decay assumption: for any two nodes i and k, the information about node i captured by node k is less than that captured by node j, where j is the neighbor of i on the shortest path between i and k (Netrapalli et al., 2010). In this paper, we propose to learn the structure of MRFs with contrastive divergence (Hinton, 2002) and demonstrate that our structure learner can recover the structure of MRFs that violate correlation decay.
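
A minimal sketch of the general idea, assuming a pairwise binary MRF and reading edges off the learned weights with a crude hard threshold; the toy data, learning rate, and threshold below are invented for illustration, and the authors' learner and its sparsity mechanism may differ:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def gibbs_sweep(X, W, b):
        # One Gibbs sweep per sample over a pairwise binary MRF with
        # energy E(x) = -b.x - 0.5 * x'Wx  (x in {0,1}, W symmetric, zero diagonal).
        X = X.copy()
        for i in range(X.shape[1]):
            p = sigmoid(b[i] + X @ W[:, i])
            X[:, i] = (rng.random(X.shape[0]) < p).astype(float)
        return X

    def cd1_structure_learning(X, n_iters=200, lr=0.05, threshold=0.05):
        # CD-1: approximate the model expectation with a one-step reconstruction;
        # edges are the weights whose magnitude survives a hard threshold
        # (a crude sparsifier used here only for illustration).
        n, d = X.shape
        W, b = np.zeros((d, d)), np.zeros(d)
        for _ in range(n_iters):
            X_recon = gibbs_sweep(X, W, b)                    # negative phase
            grad_W = (X.T @ X - X_recon.T @ X_recon) / n      # <xi xj>_data - <xi xj>_recon
            np.fill_diagonal(grad_W, 0.0)
            W += lr * grad_W
            b += lr * (X.mean(0) - X_recon.mean(0))
        edges = [(i, j) for i in range(d) for j in range(i + 1, d)
                 if abs(W[i, j]) > threshold]
        return W, edges

    # Toy data: variables 0 and 1 always agree, variable 2 is independent noise.
    X = rng.integers(0, 2, size=(500, 3)).astype(float)
    X[:, 1] = X[:, 0]
    W, edges = cd1_structure_learning(X)
    print("recovered edges:", edges)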

Tanmoy Mukherjee, Vinay Pande and Stanley Kok. Extracting New Facts in Knowledge Bases: A Matrix Tri-Factorization Approach
Abstract: Knowledge bases offer the benefit of organizing knowledge in relational form, but suffer from incompleteness with respect to new entities and relationships. Prior work has focused on extending KBs from unannotated text, which can introduce incorrect information due to noise in the corpora. Here we introduce a matrix tri-factorization model that predicts new relationships which can be incorporated into the knowledge base.
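
As a rough illustration of the generic tri-factorization idea X ≈ U S V^T for completing a relational matrix, the sketch below scores unobserved entries by their low-rank reconstruction, using a truncated SVD as the simplest stand-in; the toy matrix, rank, and objective are assumptions, not the authors' model:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy binary matrix: rows = entity pairs, columns = relations;
    # 1 means the fact is already in the knowledge base.
    X = (rng.random((30, 8)) < 0.2).astype(float)

    # Truncated SVD gives the generic tri-factorization X ~ U S V^T.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 3                                   # latent rank (illustrative choice)
    X_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Rank unobserved cells by reconstructed score: high-scoring cells are
    # candidate new facts to add to the knowledge base.
    candidates = [(i, j, X_hat[i, j])
                  for i in range(X.shape[0]) for j in range(X.shape[1])
                  if X[i, j] == 0]
    candidates.sort(key=lambda t: -t[2])
    print("top candidate new facts:", candidates[:5])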


Christopher Aicher, Abigail Z. Jacobs and Aaron Clauset. Adapting the Stochastic Block Model to Edge-Weighted Networks
Abstract: We generalize the stochastic block model to the important case in which edges are annotated with weights drawn from an exponential family distribution. This generalization introduces several technical difficulties for model estimation, which we solve using a Bayesian approach. We introduce a variational algorithm that efficiently approximates the model's posterior distribution for dense graphs. In numerical experiments on edge-weighted networks, this weighted stochastic block model outperforms the common approach of applying a single threshold to all weights and then fitting the classic stochastic block model, a procedure that can obscure latent block structure in networks. This model will enable the recovery of latent structure in a broader range of network data than was previously possible.
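
A minimal sketch of the modeling idea, assuming normal edge weights and plug-in maximum-likelihood estimates per block pair; the toy network and block assignments are invented, and the paper's variational estimation procedure is not reproduced here:

    import numpy as np
    from collections import defaultdict
    from scipy.stats import norm

    def weighted_sbm_loglik(W, z):
        # Log-likelihood of an edge-weight matrix W under a weighted SBM with
        # normal edge weights, given block assignments z (per-block-pair MLE).
        groups = defaultdict(list)
        n = W.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                groups[(min(z[i], z[j]), max(z[i], z[j]))].append(W[i, j])
        ll = 0.0
        for weights in groups.values():
            w = np.asarray(weights)
            mu, sigma = w.mean(), w.std() + 1e-6
            ll += norm.logpdf(w, mu, sigma).sum()
        return ll

    # Toy weighted network: two blocks, heavier weights inside blocks.
    rng = np.random.default_rng(2)
    n = 20
    z_true = np.array([0] * 10 + [1] * 10)
    W = rng.normal(1.0, 0.3, (n, n))
    W[:10, :10] += 2.0
    W[10:, 10:] += 2.0
    W = (W + W.T) / 2

    print("true blocks  :", weighted_sbm_loglik(W, z_true))
    print("random blocks:", weighted_sbm_loglik(W, rng.integers(0, 2, n)))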


Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston and Oksana Yakhnenko. Irreflexive and Hierarchical Relations as Translations
Abstract: We consider the problem of embedding entities and relations of knowledge bases in low-dimensional vector spaces. Unlike most existing approaches, which are primarily efficient for modeling equivalence relations, our approach is designed to explicitly model irreflexive relations, such as hierarchies, by interpreting them as translations operating on the low-dimensional embeddings of the entities. Preliminary experiments show that, despite its simplicity and a smaller number of parameters than previous approaches, our approach achieves state-of-the-art performance according to standard evaluation protocols on data from WordNet and Freebase.
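
To make the translation idea concrete, here is a small sketch of the scoring function and a margin ranking loss over corrupted triples; the embedding dimension, toy triples, and margin are illustrative assumptions, not the authors' training setup:

    import numpy as np

    rng = np.random.default_rng(3)
    dim, n_entities, n_relations = 20, 100, 5

    # Entity and relation embeddings (randomly initialized here).
    E = rng.normal(size=(n_entities, dim))
    R = rng.normal(size=(n_relations, dim))

    def score(h, r, t):
        # Lower is better: distance between the translated head and the tail.
        return np.linalg.norm(E[h] + R[r] - E[t])

    def margin_loss(pos, neg, margin=1.0):
        # Margin ranking loss over positive triples and corrupted negatives.
        return sum(max(0.0, margin + score(*p) - score(*n))
                   for p, n in zip(pos, neg))

    pos = [(0, 1, 2), (3, 0, 4)]                  # observed triples (h, r, t)
    neg = [(0, 1, 7), (3, 0, 9)]                  # corrupted tails
    print("loss on random embeddings:", margin_loss(pos, neg))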


Luiz Gomes-Jr, Rodrigo Jensen and André Santanchè. Query-based inferences in the Complex Data Management System
Abstract: This paper describes how the Complex Data Management System (CDMS) enables query-based inferences over structured and unstructured data represented in its graph model. The CDMS offers querying and management mechanisms for data typical of complex networks. It enables flexible querying based on combinations of correlation metrics that capture properties of the topology of the underlying graph. This flexibility supports a range of information retrieval applications. Here we show preliminary work on how the CDMS infrastructure can also be used for learning tasks. We envision a framework in which, for certain tasks, learning is indistinguishable from the conventional evolution of the database. Feature extraction and management are based on the CDMS's mapper mechanism. Learned models are represented as queries, with combinations of metrics and parameters fitted to the training data. We show preliminary experiments based on real data for a health diagnosis task.


Tommi Kerola, Linus Hermansson, Fredrik Johansson, Vinay Jethava and Devdatt Dubhashi. Entity Disambiguation in Anonymized Graphs Using Graph Kernels
Abstract: This paper presents a method for entity disambiguation in anonymized graphs based on local neighborhood structure. Existing approaches leverage node information, which might not be available in several contexts due to privacy concerns. We consider this problem in the supervised setting, where we are provided a base graph and a set of nodes labelled as ambiguous or unambiguous. We characterize the similarity between two nodes based on their local neighborhood structure using graph kernels, and solve the resulting classification task using SVMs. We also present extensions of two graph kernels, namely the direct product kernel and the shortest path kernel, with significant computational benefits. We show empirical evidence on two real-world datasets highlighting the advantages of our approach.
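
A sketch of the overall pipeline (local neighborhood graph, graph kernel, SVM with a precomputed kernel), assuming a plain shortest-path-length histogram kernel over ego-nets rather than the accelerated kernels the paper contributes; the graph and the degree-based labels are synthetic stand-ins:

    import networkx as nx
    import numpy as np
    from sklearn.svm import SVC

    def sp_histogram(G, max_len=5):
        # Histogram of pairwise shortest-path lengths in a (small) graph.
        hist = np.zeros(max_len + 1)
        for _, lengths in nx.shortest_path_length(G):
            for d in lengths.values():
                if 0 < d <= max_len:
                    hist[d] += 1
        return hist

    def node_kernel_matrix(G, nodes, radius=2):
        # Shortest-path kernel between the ego-nets of the given nodes.
        feats = np.array([sp_histogram(nx.ego_graph(G, v, radius)) for v in nodes])
        return feats @ feats.T

    # Toy graph and labels (1 = "ambiguous" node, 0 = "unambiguous"); the labels
    # here are a synthetic stand-in derived from degree, purely for illustration.
    G = nx.karate_club_graph()
    nodes = list(G.nodes())
    y = np.array([1 if G.degree(v) > 5 else 0 for v in nodes])

    K = node_kernel_matrix(G, nodes)
    clf = SVC(kernel="precomputed").fit(K, y)
    print("training accuracy:", clf.score(K, y))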


Bert Huang, Ben London, Ben Taskar and Lise Getoor. Empirical Analysis of Collective Stability
Abstract: When learning structured predictors, collective stability is an important factor for generalization. London et al. (2013) provide the first analysis of this effect, proving that collectively stable hypotheses produce less deviation between empirical risk and true risk, i.e., defect. We test this effect empirically using a collectively stable variant of max-margin Markov networks. Our experiments on webpage classification validate that increasing the collective stability reduces the defect and can thus lead to lower overall test error.


Anthony Coutant, Philippe Leray and Hoel Le Capitaine. Learning Probabilistic Relational Models using co-clustering methods
Abstract: Probabilistic Relational Models (PRMs) are probabilistic graphical models that define a factored joint distribution over a set of random variables in the context of relational datasets. While regular PRMs define probabilistic dependencies between objects' descriptive attributes, an extension called PRM with Reference Uncertainty (PRM-RU) additionally manages link uncertainty between objects by adding random variables called selectors. To avoid problems due to large variable domains, selectors are associated with partition functions, which map objects to a set of clusters, and the selectors' distributions are defined over these clusters. In PRM-RU, the definition of partition functions constrains us to learn them with flat (i.e., non-relational) clustering algorithms. However, many relational clustering techniques show better results in this context. Among them, co-clustering algorithms, applied to binary relationships, cluster the objects of both entities simultaneously so as to use as much of the information available in the relationship as possible. In this paper, we present work in progress on a new extension of PRMs, called PRM with Co-Reference Uncertainty, which associates with each class containing reference slots a single selector and a single co-partition function learned using a co-clustering algorithm.
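
The co-clustering step the proposal relies on can be illustrated with an off-the-shelf algorithm; spectral co-clustering is used below purely as a stand-in (the paper does not commit to a specific algorithm in this summary), and the toy relationship matrix is invented:

    import numpy as np
    from sklearn.cluster import SpectralCoclustering

    rng = np.random.default_rng(4)

    # Toy binary relationship matrix: rows = objects of class A,
    # columns = objects of class B, 1 = a link between them.
    R = (rng.random((40, 30)) < 0.1).astype(float)
    R[:20, :15] = (rng.random((20, 15)) < 0.6).astype(float)   # a dense co-cluster
    R[20:, 15:] = (rng.random((20, 15)) < 0.6).astype(float)   # another one

    model = SpectralCoclustering(n_clusters=2, random_state=0).fit(R)

    # A co-partition function would map each object to its cluster:
    row_partition = model.row_labels_       # cluster of each class-A object
    col_partition = model.column_labels_    # cluster of each class-B object
    print(np.bincount(row_partition), np.bincount(col_partition))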


Yang Chen and Daisy Zhe Wang. Web-Scale Knowledge Inference Using Markov Logic Networks
Abstract: In this paper, we present our on-going work on ProbKB, a PROBabilistic Knowledge Base constructed from web-scale extracted entities, facts, and rules represented as a Markov logic network (MLN). We aim at web-scale MLN inference by designing a novel relational model to represent MLNs and algorithms that apply rules in batches. Errors are handled in a principled and elegant manner to avoid error propagation and unnecessary resource consumption. MLNs infer from the input a factor graph that encodes a probability distribution over extracted and inferred facts. We run parallel Gibbs sampling algorithms on GraphLab to query this distribution. Initial experimental results show promising scalability of our approach.
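
The "apply rules in batches" idea amounts to grounding a rule with a relational join over a facts table rather than fact by fact; below is a minimal sketch with pandas, where the facts and the rule are invented for illustration and confidences and error handling are omitted:

    import pandas as pd

    # Extracted facts as one relational table (subject, predicate, object).
    facts = pd.DataFrame(
        [("Alice", "bornIn", "Paris"),
         ("Bob",   "bornIn", "Lyon"),
         ("Paris", "cityIn", "France"),
         ("Lyon",  "cityIn", "France")],
        columns=["s", "p", "o"],
    )

    # Rule: bornIn(x, y) AND cityIn(y, z) => bornInCountry(x, z).
    born = facts[facts.p == "bornIn"]
    city = facts[facts.p == "cityIn"]

    # One join grounds the rule against every matching pair of facts at once.
    inferred = born.merge(city, left_on="o", right_on="s", suffixes=("_b", "_c"))
    new_facts = pd.DataFrame({
        "s": inferred["s_b"],
        "p": "bornInCountry",
        "o": inferred["o_c"],
    })
    print(new_facts)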


Daniil Mirylenka and Andrea Passerini. Learning to Grow Structured Visual Summaries for Document Collections
Abstract: In this paper we propose a method for summarizing collections of documents with concise topic hierarchies, and show how it can be applied to visualization and browsing of academic search results. The proposed method consists of two steps: building the graph of topics relevant to the documents, and selecting the optimal subgraph thereof. In the first step, we map documents to a universal topic hierarchy and extract a graph of relevant topics. In the second step, we learn how to build summaries of the extracted topic graph using a structured output prediction approach. We describe how to build topic graphs based on the network of articles and categories of Wikipedia, cast the graph summarization problem as sequential prediction, and apply DAgger (dataset aggregation) to incrementally grow graph summaries. Initial experiments suggest that our method is able to learn how to grow good topic summaries from a small number of examples.


Joseph Pfeiffer, Jennifer Neville and Paul Bennett. Combining Active Sampling with Parameter Estimation and Prediction in Single Networks
Abstract: A typical assumption in network classification methods is that the full network is available both to learn the model and to apply it for prediction. This assumption is often appropriate (e.g., publicly visible friendship links in social networks); in other domains, however, the underlying relational structure exists but there is a cost associated with acquiring the edges. In this preliminary work we explore the problem of active sampling, where the goal is to maximize the number of positive (e.g., fraudulent) nodes identified while simultaneously querying for network structure that is likely to improve estimates. We outline the problem formally and discuss five problem cases that are likely to arise in real-world scenarios. Our key finding is that parameter estimates learned from the distribution of labeled samples can be biased with respect to the parameters of the distribution of unlabeled samples, which can reduce the number of positive instances recalled. We further demonstrate that the estimate of the generative distribution obtained from the labeled samples is also biased.


Vinay Prabhu, Rohit Negi and Miguel Rodrigues. Bipartisan cloture roll call vote prediction using the joint press release network in US Senate
Abstract: Bipartisan cloture vote prediction is deemed extremely challenging because senators tend to exhibit less allegiance to their party and state affiliations during these votes. In this paper, we harness the joint press release network of these senators as an Ising prior and demonstrate its usefulness in increasing the accuracy of vote prediction. We also compare the accuracy of the Maximum A Posteriori (MAP) and Maximum Posterior Marginal (MPM) solutions obtained using the loopy belief propagation (LBP) approximate inference algorithm.
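
The difference between the MAP and MPM decision rules is easiest to see on a model small enough for exact enumeration; the sketch below uses a toy four-node Ising prior with invented biases and couplings (the paper uses LBP precisely because exact inference does not scale to the Senate network):

    import itertools
    import numpy as np

    # Toy Ising model over 4 senators' votes x_i in {-1, +1}:
    # p(x) proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j)
    h = np.array([0.2, -0.1, 0.3, -0.4])             # per-senator bias (invented)
    J = np.zeros((4, 4))
    J[0, 1] = J[1, 2] = J[2, 3] = 0.8                 # hypothetical network ties

    def log_potential(x):
        return h @ np.asarray(x) + sum(J[i, j] * x[i] * x[j]
                                       for i in range(4) for j in range(i + 1, 4))

    states = list(itertools.product([-1, 1], repeat=4))
    logp = np.array([log_potential(s) for s in states])
    p = np.exp(logp - logp.max())
    p /= p.sum()

    # MAP: the single most probable joint vote configuration.
    x_map = states[int(np.argmax(p))]

    # MPM: each senator's vote chosen from his or her posterior marginal.
    marg_plus = np.array([sum(p[k] for k, s in enumerate(states) if s[i] == 1)
                          for i in range(4)])
    x_mpm = tuple(1 if m > 0.5 else -1 for m in marg_plus)

    print("MAP:", x_map)
    print("MPM:", x_mpm)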


Jay Pujara, Hui Miao, Lise Getoor and William Cohen. Large-Scale Knowledge Graph Identification using PSL
Abstract: Building a web-scale knowledge graph, which captures information about entities and the relationships between them, represents a formidable challenge. While many large-scale information extraction systems operate on web corpora, the candidate facts they produce are noisy and incomplete. To remove noise and infer missing information in the knowledge graph, we propose knowledge graph identification: a process of jointly reasoning about the structure of the knowledge graph, utilizing extraction confidences and leveraging ontological information. Scalability is often a challenge when building models in domains with rich structure, but we use probabilistic soft logic (PSL), a recently-introduced probabilistic modeling framework which easily scales to millions of facts. In practice, our method performs joint inference on a real-world dataset containing over 1M facts and 80K ontological constraints in 12 hours and produces a high-precision set of facts for inclusion into a knowledge graph.
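
The core mechanic PSL builds on is the Lukasiewicz relaxation of a logical rule into a hinge loss over soft truth values in [0, 1]; below is a minimal sketch of that relaxation (not the PSL system or its API), with an invented rule and invented extraction confidences:

    def lukasiewicz_and(*vals):
        # Soft conjunction: max(0, sum(vals) - (n - 1)).
        return max(0.0, sum(vals) - (len(vals) - 1))

    def rule_distance_to_satisfaction(body_vals, head_val, weight=1.0):
        # Weighted hinge loss for a rule "body => head" under soft truth values.
        # The rule is satisfied (loss 0) whenever the head is at least as true
        # as the soft conjunction of the body.
        return weight * max(0.0, lukasiewicz_and(*body_vals) - head_val)

    # Example (hypothetical): extracted(X, livesIn, Paris) AND cityIn(Paris, France)
    #                         => fact(X, livesInCountry, France)
    body = [0.9, 0.8]      # confidences of the two body atoms
    print(rule_distance_to_satisfaction(body, head_val=0.4))   # 0.3 -> penalized
    print(rule_distance_to_satisfaction(body, head_val=0.8))   # 0.0 -> satisfied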


David Arbour, James Atwood, Ahmed El-Kishky and David Jensen. Agglomerative Clustering of Bagged Data Using Joint Distributions
Abstract: Current methods for hierarchical clustering of data either operate on features of the data or make limiting model assumptions. We present the hierarchy discovery algorithm (HDA), a model-based hierarchical clustering method based on explicit comparison of joint distributions via Bayesian network learning for predefined groups of data. HDA works on both continuous and discrete data and offers a model-based approach to agglomerative clustering that does not require pre-specification of the model dependency structure.
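
One agglomerative step of the general idea, comparing groups by a divergence between their fitted joint distributions and merging the closest pair; note that a multivariate Gaussian fit with symmetrized KL stands in here for the Bayesian-network comparison the abstract describes, and the bags are synthetic:

    import numpy as np

    rng = np.random.default_rng(5)

    def gaussian_kl(mu0, S0, mu1, S1):
        # KL(N(mu0, S0) || N(mu1, S1)) for multivariate Gaussians.
        d = len(mu0)
        S1_inv = np.linalg.inv(S1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    def bag_divergence(A, B):
        # Symmetrized KL between Gaussian fits of two bags of rows.
        fit = lambda X: (X.mean(0), np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1]))
        muA, SA = fit(A)
        muB, SB = fit(B)
        return gaussian_kl(muA, SA, muB, SB) + gaussian_kl(muB, SB, muA, SA)

    # Three bags: the first two share a joint distribution, the third differs.
    bags = [rng.normal(0, 1, (100, 2)),
            rng.normal(0, 1, (100, 2)),
            rng.normal(3, 1, (100, 2))]

    # One agglomerative step: merge the pair of bags with the smallest divergence.
    pairs = [(i, j) for i in range(len(bags)) for j in range(i + 1, len(bags))]
    i, j = min(pairs, key=lambda ij: bag_divergence(bags[ij[0]], bags[ij[1]]))
    print("merge bags", i, "and", j)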


Zhe Zhang and Munindar P. Singh. ReNew: A Semi-Supervised Self-Learning Framework for Sentiment Flow Analysis
Abstract: The sentiment contained in opinionated text provides interesting and valuable information for building and improving social-based services. However, due to the complexity and diversity of linguistic representations, it is challenging to build a framework that can accurately detect and process the sentiment expressed in opinionated text. In this paper, we propose a semi-supervised framework for sentiment flow analysis that exhibits an iterative learning structure. To capture the idea of sentiment flow, instead of inferring the polarity at the word level, our framework focuses on the segment level. Experiments show that our framework performs well in sentiment classification on a review dataset.


Maximilian Nickel and Volker Tresp. Logistic Tensor Factorization for Multi-Relational Data
Abstract: Tensor factorizations have become increasingly popular approaches for various learning tasks on structured data. In this work, we extend the RESCAL tensor factorization, which has shown state-of-the-art results for multi-relational learning, to account for the binary nature of adjacency tensors. We study the improvements that can be gained via this approach on various benchmark datasets and show that the logistic extension can improve prediction results significantly.
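
The extension described above amounts to passing the bilinear RESCAL score a_i^T R_k a_j through a logistic function so that each adjacency-tensor entry is modeled as a Bernoulli probability; here is a scoring sketch with randomly initialized factors and invented toy triples (not the authors' fitting code):

    import numpy as np

    rng = np.random.default_rng(6)
    n_entities, n_relations, rank = 50, 4, 10

    A = rng.normal(scale=0.1, size=(n_entities, rank))          # entity factors
    R = rng.normal(scale=0.1, size=(n_relations, rank, rank))   # per-relation cores

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def p_triple(i, k, j):
        # P(entity i is related to entity j under relation k):
        # a logistic link applied to the bilinear RESCAL score.
        return sigmoid(A[i] @ R[k] @ A[j])

    def bernoulli_loglik(triples, labels):
        # Log-likelihood of observed 0/1 tensor entries under the model.
        return sum(y * np.log(p_triple(*t)) + (1 - y) * np.log(1 - p_triple(*t))
                   for t, y in zip(triples, labels))

    triples = [(0, 1, 2), (3, 0, 4), (5, 2, 6)]
    labels = [1, 1, 0]
    print("log-likelihood:", bernoulli_loglik(triples, labels))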