Previous Work

ACM SIGMOD '21 Publication

Link to Conference

Link to Codebase

This paper presents a novel system that automatically generates language bias for relational learning. Developing language bias is typically a manual process that requires deep domain expertise. Our system circumvents that process by using information in the training examples and the database to generate language bias automatically, achieving effectiveness similar to, and in some cases better than, manually developed language bias. The system works in conjunction with Castor, the relational learning system discussed previously.

ACM SIGMOD '20 Publication

Link to Paper

Link to Codebase

Our current research aims to remove the data cleaning layer from the machine learning pipeline. In this publication we present a novel approach to learning efficiently and effectively over dirty data with very little preprocessing and oversight. Specifically, we focus on a single machine learning algorithm: relational learning. In the future we plan to extend the general idea to other learning algorithms.