Registration & Submissions:
February 10, 2026
Notification of Acceptance:
February 27, 2026
Workshop Date:
April 13-17, 2026 (specific date TBA)
*All deadlines are 23:59 AoE (Anywhere on Earth).
For any other questions, feel free to contact us via email:
user.interface.workshop@gmail.com.
To participate in our workshop, please register through the Google Form link.
While the workshop itself is free, you must also complete the official CHI 2026 conference registration to attend; the early registration deadline is Wednesday, March 4, 2026.
PLEASE NOTE: We welcome all interested participants to join the workshop, with or without a submission. If the number of registrations exceeds our capacity, we may prioritize participants who have submitted position papers or video demos.
However, if you wish to present your work and contribute to the workshop discussions through a short impulse talk, you must submit either:
A 2–6 page position paper in the double-column CHI Extended Abstract format (excluding references).
A video demo of up to 10 minutes, if the main idea is more easily conveyed in this format.
Submissions should be uploaded through the same registration link above.
Submissions will undergo a peer-review process, with each reviewed by at least two committee members or organizers. Selection will be based on quality and relevance. Authors of accepted submissions will give a short impulse talk (about 5 minutes) to seed discussions and activities. Accepted papers will be made available on the workshop website with the authors' consent. At least one author of each accepted position paper must register for and attend the workshop, either in person or remotely.
Submissions can cover, but are not limited to, the following topics:
GUI Agents as Collaborative Partners Across Interfaces:
How autonomous agents perceive, understand, and manipulate UIs to accomplish user tasks on behalf of users. This topic explores agents' capabilities in navigating complex interfaces, from web applications to mobile apps and operating systems. Examples include agents that execute multi-step workflows across applications (e.g., OpenAI Operator, Anthropic Computer Use), agents that assist novice users through complex interfaces, and agents that provide accessible UI overviews for users with disabilities.
Designing Interfaces for Agent Understanding:
What makes a UI "agent-readable"? This topic investigates how UI design across modalities can support agent interpretation and operation. Key questions include how to represent UI semantics, structure, and affordances in ways that both humans and agents can understand. Examples include semantic UI structures that agents can interpret, accessible designs that benefit both humans and agents, and consistent interaction patterns that enable reliable agent operation.
Human-AI-UI Collaboration Patterns:
In the Human ←→ Agent ←→ UI triangle, agents act as mediators between human intent and interface actions. This topic examines interaction patterns where agents interpret user goals and translate them into UI operations, while UIs must be designed to support both human oversight and agent execution. Key questions include: How do users communicate intent to agents? How do agents demonstrate their understanding of UI state? How can UIs support seamless handoff between human and agent control? Examples include mixed-initiative systems where users can intervene in agent tasks, and UI designs that make agent actions transparent, reversible, and correctable through user demonstration or feedback.
Agents in Spatial and XR Environments:
Unique challenges emerge when agents operate in embodied, spatial, and hybrid interface contexts. This topic explores how agents can understand and interact with 3D/AR/VR interfaces, where UI elements have spatial relationships and context-dependent meanings. Examples include agents that navigate spatial UI layouts in XR, context-aware AR interfaces, and agent-driven assistance in mixed reality tasks.