Building Interpretable Models: From Bayesian Networks to Neural Networks (PhD thesis). Viktoriya Krakovna, 7 September 2016.
Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models. Viktoriya Krakovna, Finale Doshi-Velez. International Conference on Machine Learning (ICML) Workshop on Human Interpretability in Machine Learning (WHI), 23 June 2016 (ArXiv). Neural Information Processing Systems (NIPS) Workshop on Interpretable Machine Learning for Complex Systems, 9 Dec 2016 (ArXiv, poster).
A Minimalistic Approach to Sum-Product Network Learning for Real Applications. Viktoriya Krakovna, Moshe Looks. International Conference on Learning Representations (ICLR) workshop track, 2 May 2016. (ArXiv, OpenReview, poster)
Interpretable Selection and Visualization of Features and Interactions Using Bayesian Forests. Viktoriya Krakovna, Jiong Du, Jun S. Liu. New England Statistics Symposium (NESS), 25 April 2015. (ArXiv, poster, R package, code)
A Generalized-Zero-Preserving Method for Compact Encoding of Concept Lattices. Matthew Skala, Victoria Krakovna, Janos Kramar, Gerald Penn. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1512–1521, Uppsala, Sweden, 11-16 July 2010.