9:30 - 10:30 "Automated Moral Decision Making by Learning from Humans: Why and How", by Vincent Conitzer (Carnegie Mellon University)
Bio: Vincent Conitzer is Professor of Computer Science (with affiliate/courtesy appointments in Machine Learning, Philosophy, and the Tepper School of Business) at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab (FOCAL). He is also Head of Technical AI Engagement at the Institute for Ethics in AI and Professor of Computer Science and Philosophy at the University of Oxford. Prior to joining CMU, Conitzer was the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University. Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC).
Abstract: Ethics is complex, and we do not know how to boil it down to a simple formula. Consequently, at this point in time, probably the best way to have AI systems make moral decisions is to have them learn from humans. One may well wonder what the point of this even is: if, at best, we get AI systems to perform at a human level, should we not simply leave moral decisions to humans? I will discuss various reasons why we may want AI systems to make moral decisions, and what the corresponding risks are. I will then go over our work on kidney exchanges as a detailed case study from this viewpoint.
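The abstract does not spell out the learning method, but work in this area typically elicits pairwise judgments from people about which patient profile should be prioritized and fits a preference model whose learned weights then help prioritize matches. The sketch below is a hypothetical illustration under that assumption, not the speaker's actual pipeline; the feature names and judgment data are invented.

```python
# Hypothetical sketch: learning patient-prioritization weights from human
# pairwise judgments, in the spirit of "learning moral decisions from humans"
# for kidney exchange. Features and data below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["age_under_30", "rare_blood_type", "prior_donor"]

# Each entry: (profile_a, profile_b, 1 if respondents prioritized a, else 0).
judgments = [
    (np.array([1, 0, 0]), np.array([0, 1, 0]), 1),
    (np.array([0, 0, 1]), np.array([1, 0, 0]), 1),
    (np.array([0, 1, 0]), np.array([0, 0, 1]), 0),
]

# Bradley-Terry-style reduction: each pairwise preference becomes a logistic
# regression example on the feature difference (a - b), labeled 1 if a won.
X = np.array([a - b for a, b, _ in judgments])
y = np.array([label for _, _, label in judgments])

model = LogisticRegression(fit_intercept=False).fit(X, y)
print("Learned prioritization weights:", dict(zip(FEATURES, model.coef_[0])))

def priority_score(profile: np.ndarray) -> float:
    """Score a patient profile; higher scores receive priority when the
    matching algorithm must break ties between otherwise feasible exchanges."""
    return float(profile @ model.coef_[0])
```

In a full kidney exchange pipeline, such learned scores would plausibly only adjust weights or break ties within the matching optimization rather than override medical compatibility constraints.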
14:30 - 15:30 "Exploring Human-AI Co-Creativity under Human Control: Framing, Reframing, Brainstorming, and Future Challenges", by Michael Muller (IBM Research)
Bio: Michael Muller works at IBM Research in Cambridge, MA, USA (on the historical and contemporary lands stewarded by the Wampanoag and Massachusett Peoples). His work takes place at the intersection of human-computer interaction, AI, and social justice. Michael is known for early work in participatory design, and for co-proposing and co-leading the CHI conference program subcommittee on Critical and Sustainable Computing and Social Justice. His more recent work has explored possible futures for human interactions with generative AI applications. Michael has led and participated in mentorship programs for students and early-career scholars at multiple ACM SIGCHI conferences, including the CHI Early Career Symposium and the CSCW Student Reviewer Mentoring program. ACM recognizes Michael as a Distinguished Scientist. Michael co-chairs the SIGCHI CARES committee, and is a member of the SIGCHI Research Ethics committee. He holds membership in Fempower.tech and AccessSIGCHI, and has begun a term of service on the US National Academies Board on Human-Systems Integration.
Abstract: Generative AI has the potential to support human creativity. In our work, we investigate how one or more humans can collaborate with an AI agent to co-create their contributions while maintaining human control over process and outcomes. In earlier work, we developed a conversational UI to large language models (LLMs) for software engineering tasks. Ross and colleagues showed that a well-tuned UI could make a back-end LLM behave in a humble, polite, and highly supportive way. We reused this architecture to explore creativity and co-creativity opportunities through careful prompt engineering. After surveying human-human co-creativity strategies, we first experimented with the well-known strategy of framing a problem with a productive representation. Next, we explored the more powerful concept of reframing a problem after the initial frame had been found to be flawed or insufficient in some way. The conversational UI allowed the human to control how the conversation developed, and which aspects of the conversation would be preserved in an analogy-based design. In our third experiment, we moved from specialist methods to the more widely adopted process of brainstorming. A human was able to guide the UI+LLM in exercises based on divergent thinking, convergent thinking, summarization, and structured organization and re-organization of outcomes. While these initial experiments were successful, we were only able to implement a dialog between one human and one AI. Our next projects will use a specialized environment in which multiple humans can interact with the UI+LLM configuration, with preservation of each human's identity, thus adding aspects of Mutual Theory of Mind to the co-creative exercises. After that, we hope to revisit multi-agent symbiotic cognitive computing architectures for a richer configuration of multiple humans and multiple AI agents. Throughout this work, we have focused on principles of IBM's Augmented Human Intelligence, in which AI is used to support and extend the work of humans, not to replace them. Following a recent debate between Shneiderman and Muller, we label all AI conversational turns with an "AI" or "APP" marker; that is, we explicitly avoid any so-called Turing-test confusion about who or what is speaking or acting. We maintain human control of both process and outcomes. As we showed in a recent CHIWORK paper, these are design choices: it is possible to create interactive AI solutions that channel and control the work of humans, and recent work by many researchers has documented the potential and actual harms of such systems. We make a different choice: we design AI applications that support, educate, and enable human abilities and human agency.
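Two of the design choices the abstract describes, labeling every machine turn explicitly and letting the human decide which parts of the conversation are preserved, can be illustrated with a minimal sketch. The code below is a hypothetical rendering, not the actual UI+LLM system; the function names, the brainstorming prompt, and the stand-in `generate` back end are all assumptions for illustration.

```python
# Hypothetical sketch of a human-controlled co-creative loop: every model turn
# is labeled "AI", and the human selects which turns survive into the next
# phase (e.g., from divergent to convergent thinking). `generate` is a stand-in
# for whatever back-end LLM call the real conversational UI uses.
from typing import Callable, List, Tuple

BRAINSTORM_PROMPT = (
    "You are a brainstorming partner. Offer several divergent ideas, "
    "politely and without overriding the human's direction."
)

def run_turn(history: List[Tuple[str, str]],
             human_text: str,
             generate: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Append one human turn and one explicitly labeled AI turn."""
    history = history + [("HUMAN", human_text)]
    prompt = BRAINSTORM_PROMPT + "\n" + "\n".join(
        f"{who}: {text}" for who, text in history)
    reply = generate(prompt)
    # Label the machine's contribution so there is no ambiguity about who is speaking.
    return history + [("AI", reply)]

def keep_selected(history: List[Tuple[str, str]],
                  keep: List[int]) -> List[Tuple[str, str]]:
    """The human chooses which turns are preserved for the next phase."""
    return [history[i] for i in keep]

if __name__ == "__main__":
    fake_llm = lambda prompt: "Idea 1 ... Idea 2 ... Idea 3 ..."  # stand-in back end
    transcript = run_turn([], "Help me brainstorm uses for recycled sails.", fake_llm)
    for who, text in transcript:
        print(f"[{who}] {text}")
    # The human keeps only the AI ideas it wants for the convergent-thinking step.
    kept = keep_selected(transcript, [1])
```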