Masters, P., Kuhn, G., & Luff, P. (2025). Towards a Framework for Detecting Deceptive Contextual and Behavioural Signals. In Proceedings of the IEEE Statistical Signal Processing Workshop, Edinburgh, UK, June 8-11, 2025 (pp. 403-407).
Masters, P., Gallagher, D., Moreau, L., & Vered, M. (2025). Rethinking Explainable AI: Explanations can be Deceiving. (Extended abstract.) In Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems, Detroit, Michigan, USA, May 19-23, 2025.
Smith, W., Masters, P., Liu, J. J., Kuhn, G., & Kelly, R. M. (2024). Explaining the inexplicable: A study of people's reactions to futuristic AI. In Proceedings of OzCHI '24, Nov 30-Dec 4, 2024. ACM, New York, NY, USA. 10 pages.
Price, A., Pereira, R. F., Masters, P., & Vered, M. (2023, May). Domain-Independent Deceptive Planning. In Proceedings of the 22nd International Conference on Autonomous Agents and MultiAgent Systems (pp. 95-103).
Masters, P., Smith, W., & Kirley, M. (2021). Extended goal recognition: Lessons from magic. Frontiers in Artificial Intelligence, 4, 730990 (pp. 1-17).
Masters, P., Kirley, M., & Smith, W. (2021). Extended goal recognition: a planning-based model for strategic deception. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (pp. 871-879).
Masters, P., Smith, W., Sonenberg, L., & Kirley, M. (2021). Characterising deception in AI: A survey. In Deceptive AI. First International Workshop, DeceptECAI 2020, Santiago de Compostela, Spain, August 30, 2020 and Second International Workshop, DeceptAI 2021, Montreal, Canada, August 19, 2021, Proceedings 1 (pp. 3-16). Springer International Publishing.
Liu, Z., Yang, Y., Miller, T., & Masters, P. (2021). Deceptive reinforcement learning for privacy-preserving planning. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (pp. 818-826).
Masters, P., & Sardina, S. (2017, August). Deceptive Path-Planning. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 4368-4375).
Masters, P., & Vered, M. (2021). What's the context? Implicit and explicit assumptions in model-based goal recognition. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 4516-4523).
Masters, P., & Sardina, S. (2021). Expecting the unexpected: Goal recognition for rational and irrational agents. Artificial Intelligence, 297, 103490.
Masters, P., & Sardina, S. (2019, May). Goal recognition for rational and irrational agents. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (pp. 440-448).
Masters, P., & Sardina, S. (2019). Cost-based goal recognition in navigational domains. Journal of Artificial Intelligence Research, 64, 197-242.
Masters, P., & Sardina, S. (2017, May). Cost-based goal recognition for path-planning. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems (pp. 750-758).
Weerawardhana, S., Akintunde, M. E., Masters, P., Roberts, A., Kefalidou, G., Lu, Y., ... & Moreau, L. (2024, August). More Than Trust: Compliance in Instantaneous Human-robot Interactions. In 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 1556-1563). IEEE.
Wall, E., Matzen, L., El-Assady, M., Masters, P., Hosseinpour, H., Endert, A., ... & Padilla, L. (2024, April). Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization. In 2024 IEEE 17th Pacific Visualization Conference (PacificVis) (pp. 22-31). IEEE.
Masters, P., Weerawardhana, S., & Young, V. (2023, March). Towards an Open Source Library and Taxonomy of Benchmark Usecase Scenarios for Trust-Related HRI Research. In International Conference on Human-Robot Interaction (HRI) Workshop on Advancing HRI Research and Benchmarking Through Open-Source Ecosystems (pp. 1-4).
Masters, P., Young, V., Chamberlain, A., Weerawardhana, S., Mckenna, P. E., Lu, Y., ... & Moreau, L. (2023, July). A Practical Taxonomy of TAS-related Usecase Scenarios. In Proceedings of the First International Symposium on Trustworthy Autonomous Systems (pp. 1-6).
See also my Google Scholar page for a complete, up-to-date list.