We aim to share and promote new paradigms and evaluation protocols to advance the state of the art in human-AI collaboration on open-ended, complex tasks where, oftentimes, there is no structured ground-truth output. Part of this objective includes understanding where governance lies within AI system workflows and identifying patterns that emerge across deployed use cases of AI systems in the global majority. These workflows and patterns can be visualized through diagrams and graphical models that depict where oversight and governance lie and how stakeholders interact with the system. Most existing projects capture incidents or failures of AI systems, not successes. There is thus a gap in knowledge about what responsible use and governance of AI systems look like, especially for the global majority.
We invite one-page summaries of a deployed use case of an AI system. We expect a discussion of the context of use, governance, intention, and risks, along with an accompanying workflow diagram. Specific instructions and an example can be found here. Feel free to submit multiple use cases!