We solicit papers on goal reasoning: autonomously deciding what should be done, and when, in the absence of complete authoritative guidance. Intelligent actors (including humans and computational agents) are often given goals, utility functions, or reward structures by others, and these serve as a guide for subsequent decisions, actions, and other behavior. Sometimes, however, that guidance is unavailable or insufficient, and actors must determine for themselves what to do in a decision-making context. This may be due to missing implementation advice, vagueness in goal setting, ethical constraints that conflict with goals, and so on. Work on utility functions, reward structures, or other representations of behavioral fitness is entirely in scope, as long as the agents make higher-level decisions about what to do in an environment. This includes managing long-term behavior by setting goals, self-aligning decision-making to match observed human behavior, anticipating the future to formulate rewards or utilities, trading off motivations when selecting appropriate current goals, and generating expectations that help monitor progress. As a result, the broad topic of goal reasoning is studied in diverse subfields of AI, such as motivated systems, cognitive science, automated planning, and agent-oriented programming. We aim to bring together researchers from these sometimes-distinct subfields to encourage cross-disciplinary discussion of goal reasoning.
Topics include, but are not limited to:
Foundations:
Theoretical models of goal reasoning or comparisons to other models of autonomy
Studies of implicit or inferable goals, rewards, or utility functions
Interpretation and acquisition of ethical rules
Goal management, including formulation, selection, or optimization of reward/goal/utility structures
Online goal resolution (e.g., goal deferment, re-goaling, reward modification, utility update)
Reasoning about refusal to follow human guidance (rebellion)
Learning, evaluation, or analysis of goal reasoning systems
Alignment of goal selection to external perception
Systems:
Goals in self-motivated systems, hybrid systems, Belief-Desire-Intention systems, or Goal-Driven Autonomy
Multi-agent or distributed goal management
Demonstrations or applications of goal reasoning systems
Alignable decision makers
Human Interaction:
Alignment to observed human behavior
Interactive goal reasoning and human-machine goal reasoning
Explanation and diagnosis of notable objects or events impacting goals
Challenges in interpreting goals and constraints presented in natural language
Defining and accomplishing social/interpersonal goals
Out of Scope:
Work on alignment of large language models for question answering/classification/etc.
Ethical frameworks that do not provide a justification for actions
Approaches that only consider how to achieve a pre-defined, static goal
We welcome existing publications from other venues that are appropriate for discussion at this workshop.