Related Work
Key Publications
Coman, A., & Aha, D.W. (2018). AI rebel agents. AI Magazine, 39(3), 16-26. Link
Mirsky, R., & Stone, P. (2021). The Seeing-Eye Robot Grand Challenge: Rethinking Automated Care. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (pp. 28-33). Link
Hadfield-Menell, D., Russell, S.J., Abbeel, P., & Dragan, A. (2016). Cooperative inverse reinforcement learning. Advances in Neural Information Processing Systems, 29, 3909-3917.
For more on the importance of alignment between the objectives of AI systems and those of their users, see Stuart Russell's talk.
Additional Related Work
Aha, D.W., & Coman, A. (2017). The AI rebellion: Changing the narrative. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (pp. 4826-4830). San Francisco, CA: AAAI Press.
Amos-Binks, A., Dannenhauer, D., & Aha, D.W. (2019). Intention dynamics of rebel agent behavior. In Proceedings of the Seventh Annual Conference on Advances in Cognitive Systems. Cambridge, MA: Cognitive Systems Foundation.
Amos-Binks, A., Dannenhauer, D., & Aha, D.W. (2019). Computational models of rebel agent behavior for interactive narrative. In L.H. Gilpin, D. Holmes, & J.C. Macbeth (Eds.) Story-Enabled Intelligence: Papers from the AAAI Spring Symposium (Technical Report SS-19-06). Stanford, CA: AAAI Press.
Apker, T., Johnson, B., & Humphrey, L. (2016). LTL Templates for Play-Calling Supervisory Control. In Proceedings of the 54th AIAA Science and Technology Forum and Exposition. Red Hook, NY: Curran Associates.
Arnold, T., Kasenberg, D., & Scheutz, M. (2017). Value Alignment or Misalignment: What Will Keep Systems Accountable? In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence.
Banks, J. (2020). Good Robots, Bad Robots: Morally Valenced Behavior Effects on Perceived Mind, Morality, and Trust. International Journal of Social Robotics, 1-18.
Boggs, J., Dannenhauer, D., Floyd, M.W., & Aha, D.W. (2018). The ideal rebellion: Maximizing task performance in rebel agents. In M. Molineaux, D. Dannenhauer, & M. Roberts (Eds.) Goal Reasoning: Papers from the IJCAI Workshop. Stockholm, Sweden. Link
Borenstein, J., & Arkin, R. (2016). Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being. Science and Engineering Ethics, 22(1), 31-46.
Briggs, G., McConnell, I., & Scheutz, M. (2015). When Robots Object: Evidence for the Utility of Verbal, but Not Necessarily Spoken Protest. In Social Robotics: Seventh International Conference (Lecture Notes in Artificial Intelligence, pp. 83-92). Berlin: Springer.
Briggs, G., & Scheutz, M. (2015). "Sorry, I Can't Do That": Developing Mechanisms to Appropriately Reject Directives in Human-Robot Interactions. In B. Hayes & M. Gombolay (Eds.) Artificial Intelligence for Human-Robot Interaction: Papers from the AAAI Fall Symposium (Technical Report FS-15-01). Palo Alto, CA: AAAI Press.
Briggs, G., Williams, T., Jackson, R. B., & Scheutz, M. (2021). Why and How Robots Should Say ‘No’. International Journal of Social Robotics, 1-17.
Coman, A., & Aha, D.W. (2017). Cognitive support for rebel agents: Social awareness and counternarrative intelligence. In Proceedings of the Fifth Conference on Advances in Cognitive Systems. Troy, NY: Cognitive Systems Foundation.
Dannenhauer, D., Floyd, M., Magazzeni, D., & Aha, D.W. (2018). Explaining rebel behavior in goal reasoning agents. In S. Biundo, P. Langley, D. Magazzeni, & D. Smith (Eds.) Explainable AI Planning: Papers of the ICAPS Workshop. Delft, The Netherlands. Link
Fisac, J.F., Gates, M.A., Hamrick, J.B., Liu, C., Hadfield-Menell, D., Palaniappan, M., Malik, D., Sastry, S.S., Griffiths, T.L., & Dragan, A.D. (2020). Pragmatic-pedagogic value alignment. In Robotics Research (pp. 49-57). Springer, Cham.
Gregg-Smith, A., & Mayol-Cuevas, W.W. (2015). The Design and Evaluation of a Cooperative Handheld Robot. In 2015 IEEE International Conference on Robotics and Automation. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Kress-Gazit, H., Eder, K., Hoffman, G., Admoni, H., Argall, B., Ehlers, R., ... & Sadigh, D. (2021). Formalizing and guaranteeing human-robot interaction. Communications of the ACM, 64(9), 78-84.
Lewis, C., & Norman, D. A. (1995). Designing for error. In Readings in Human–Computer Interaction (pp. 686-697). Morgan Kaufmann.
Milli, S., Hadfield-Menell, D., Dragan, A., & Russell, S. (2017). Should Robots Be Obedient? arXiv preprint arXiv:1705.09990 [cs.AI].
Mirsky, R., & Stone, P. (2021). Intelligent Disobedience and AI Rebel Agents in Assistive Robotics. In ASIMOV Workshop at the International Conference on Intelligent Robots and Systems (IROS). Link
Mohammad, Z. (2021). A Rebellion Framework with Learning for Goal-Driven Autonomy [Master's thesis, Wright State University]. OhioLINK Electronic Theses and Dissertations Center. Link
Murphy, R., & Woods, D.D. (2009). Beyond Asimov: The three laws of responsible robotics. IEEE Intelligent Systems, 24(4), 14-20.
Sarathy, V., Arnold, T., & Scheutz, M. (2019). When exceptions are the norm: Exploring the role of consent in HRI. ACM Transactions on Human-Robot Interaction (THRI), 8(3), 1-21.