Publications

Highly Refereed Conference Proceedings

Erin J. Talvitie, Zilei Shao, Huiying Li, Jinghan Hu, Jacob Boerma, Rory Zhao, and Xintong Wang. Bounding-Box Inference for Error-Aware Model-Based Reinforcement Learning. In Proceedings of the First Reinforcement Learning Conference (RLC), 2024.

Zaheer Abbas, Samuel Sokota, Erin J. Talvitie, and Martha White. Selective Dyna-Style Planning Under Limited Model Capacity. In Proceedings of the Thirty-seventh International Conference on Machine Learning (ICML), 2020.

E. Talvitie. Learning the Reward Function for a Misspecified Model. In Proceedings of the Thirty-fifth International Conference on Machine Learning (ICML), 2018.

E. Talvitie. Self-Correcting Models for Model-Based Reinforcement Learning. In Proceedings of the Thirty-first AAAI Conference on Artificial Intelligence (AAAI), 2017.

Yitao Liang, Marlos C. Machado, E. Talvitie, and Michael Bowling. State of the Art Control of Atari Games Using Shallow Reinforcement Learning. In Proceedings of the Fifteenth International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2016. Nominated for best paper award.

E. Talvitie. Agnostic System Identification for Monte Carlo Planning. In Proceedings of the Twenty-ninth AAAI Conference on Artificial Intelligence (AAAI), 2015.

Sriram Srinivasan, E. Talvitie, and Michael Bowling. Improving Exploration in UCT Using Local Manifolds. In Proceedings of the Twenty-ninth AAAI Conference on Artificial Intelligence (AAAI), 2015.

Ujjwal Das Gupta, E. Talvitie, and Michael Bowling. Policy Tree: Adaptive Representation for Policy Gradient. In Proceedings of the Twenty-ninth AAAI Conference on Artificial Intelligence (AAAI), 2015.

E. Talvitie. Model Regularization for Stable Sample Rollouts. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI), 2014.

Marc G. Bellemare, Joel Veness, and E. Talvitie. Skip Context Tree Switching. In Proceedings of the Thirty-first International Conference on Machine Learning (ICML), 2014.

E. Talvitie. Learning Partially Observable Models Using Temporally Abstract Decision Trees. In Advances in Neural Information Processing Systems 25 (NIPS), 2012.

E. Talvitie and Satinder Singh. Maintaining Predictions Over Time Without a Model. In Proceedings of the Twenty-first International Joint Conference on Artificial Intelligence (IJCAI), 2009.

E. Talvitie and Satinder Singh. Simple Partial Models for Complex Dynamical Systems. In Advances in Neural Information Processing Systems 22 (NIPS), 2009.

E. Talvitie and Satinder Singh. An Experts Algorithm for Transfer Learning. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI), 2007. 

Journals

Farzane Aminmansour, Taher Jafferjee, Ehsan Imani, Erin J. Talvitie, Michael Bowling, and Martha White. Mitigating Value Hallucination in Dyna-Style Planning via Multistep Predecessor Models. Journal of Artificial Intelligence Research 80:441-473, 2024.

Marlos C. Machado, Marc G. Bellemare, E. Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents. Journal of Artificial Intelligence Research 61:523-562, 2018.

E. Talvitie and Satinder Singh. Learning to Make Predictions in Partially Observable Environments Without a Generative Model. Journal of Artificial Intelligence Research 42:353-392, 2011.


Refereed Workshops and Symposia

Muhammad Zaheer, Samuel Sokota, Erin Talvitie, and Martha White. Selectively Planning with Imperfect Models via Learned Error Signals. Presented at The NeurIPS Optimization Foundations for Reinforcement Learning Workshop, 2019.

G. Zacharias Holland, E. Talvitie, and Michael Bowling. The Effect of Planning Shape on Dyna-Style Planning in High-Dimensional State Spaces. Presented at The Fourth International Conference on Reinforcement Learning and Decision Making (RLDM), 2019.

G. Zacharias Holland, E. Talvitie, and Michael Bowling. The Effect of Planning Shape on Dyna-Style Planning in High-Dimensional State Spaces. Presented at The ICML Workshop on Prediction and Generative Models for Reinforcement Learning (PGMRL), 2018.

E. Talvitie. Self-Correcting Models for Model-Based Reinforcement Learning. Presented at The Third International Conference on Reinforcement Learning and Decision Making (RLDM), 2017. Best paper award.

E. Talvitie and Michael Bowling. Pairwise Offset Features for Atari 2600 Games. Presented at The AAAI Workshop on Learning for General Competency in Video Games, 2015.

E. Talvitie, Britton Wolfe, and Satinder Singh. Building Incomplete But Accurate Models. In Proceedings of the Tenth International Symposium on Artificial Intelligence and Mathematics (ISAIM), 2008.