research

Understanding the knowledge encoded in black-box models is important to their users for several reasons: to validate prediction accuracy, to understand why a particular prediction was right or wrong, to further leverage the model's predictive abilities, to gain knowledge about the phenomenon under study, and to gain knowledge about the model itself. My research focuses on understanding what kind of knowledge such black-box models encode in order to support these goals. The papers below present my most recent work on this endeavor.

First author:

    • Talk (slides) at the AAAI Spring Symposium 2015 at Stanford for the paper Towards Extracting Faithful and Descriptive Representations of Latent Variable Models [pdf]
    • Workshop paper at the Cognitive Computation workshop at NIPS 2015. Extracting Interpretable Models from Matrix Factorization Models [pdf]
    • Short paper and talk at EACL 2017. How Well Can We Predict Hypernyms from Word Embeddings? A Dataset-Centric Analysis [pdf]
    • Long paper and talk at NAACL 2018. Behavior Analysis of NLI Models: Uncovering the Influence of Three Factors on Robustness [pdf]

Co-author:

    • Sebastian Riedel, Sameer Singh, Guillaume Bouchard, Tim Rocktäschel, Ivan Sanchez. Towards Two-Way Interaction with Reading Machines. In Statistical Language and Speech Processing, pages 1-7. Springer International Publishing, 2015.

Conference Proceedings (co-author):

    • Luis Pineda et al. The Golem Team, RoboCup@Home 2011. In Proceedings of RoboCup 2011 [pdf].
    • Luis Pineda et al. The Golem Team, RoboCup@Home 2012. In Proceedings of RoboCup 2012 [pdf].