AI systems that work collaboratively with humans and enhance user capabilities provide greater value by emphasizing human-centered design and following responsible AI principles. These systems can be classified as Hybrid Human-Artificial Intelligence (HHAI) systems, or HAI-based systems for short. Building HAI-based systems and keeping them up to date invites software teams to rethink and reshape their core activities to follow ethical principles such as fairness, transparency, and accountability, ensuring these systems benefit people and society. Unlike similar events, we focus on bringing human perspectives, under the umbrella of responsible AI, into the software development process.
New methods and practices for building, maintaining, and continuously evolving HAI-based systems should ensure that these systems complement human abilities, highlighting the importance of adaptive, collaborative, responsible, interactive, and human-centered intelligence and reinforcing the need for Responsible AI Engineering. Both Software Engineering for Human-Artificial Intelligence (SE4HAI) and Human-Artificial Intelligence for Software Engineering (HAI4SE) should be grounded in solid principles of fairness, reliability, privacy, transparency, sustainability, accountability, and explainability. High-quality HAI-based systems contain one or more AI modules or components that responsibly improve and enhance the user experience, leveraging their interactions so that human and artificial intelligence coevolve continuously.
This workshop addresses that gap by discussing technical papers on how to redesign software development practices to create responsible, user-focused AI systems; how to develop AI tools and models that work alongside humans fairly and ethically; and how to address key challenges such as governing AI systems responsibly, ensuring quality, and minimizing environmental impact. Moreover, a keynote talk and a panel will foster debate among researchers and practitioners on strategies to (re)shape and (re)think SE4HAI and HAI4SE practices.
The AI software lifecycle introduces shifts in traditional software engineering practices. AI software development involves iterative processes such as model training, testing, validation, and deployment, which differ significantly from conventional software development. Traditional approaches generally emphasize well-formed requirements and deterministic algorithms, whereas AI relies on probabilistic models and continuous learning from historical and user-interaction data. Furthermore, the popularization of AI in recent years, along with software teams' use of AI tools and LLMs such as GitHub Copilot, ChatGPT, and Gemini, also highlights changes in software development and evolution processes. Effectively integrating AI techniques and tools into software processes requires engineers to adopt agile methodologies, focus on data quality and governance, and incorporate ethical considerations to ensure fairness, reliability, privacy, transparency, sustainability, accountability, and explainability. The evolution from software engineering to AI engineering is crucial for harnessing AI's capabilities and driving long-term innovation responsibly and sustainably.
The goal of WoRTH_AI -- Workshop on Responsible Technology and Human-Centered AI Engineering -- is to share, discuss, debate, and propose advances in both SE4HAI and HAI4SE, emphasizing the premise that responsible Human-Artificial Intelligence (HAI) based software should improve and facilitate human activities rather than replace the human workforce. For this one-day program, we invite researchers and practitioners to submit technical papers on SE4HAI, HAI4SE, or both, covering topics such as:
Software processes to develop and evolve responsible HAI systems.
Responsibility, ethics, fairness, transparency, accountability, sustainability, reliability, and explainability in developing and evolving responsible HAI systems.
Governance of HAI software ecosystems.
Impact of the responsible use of LLMs in software process activities, such as requirements elicitation, modeling, design, coding, testing, and deployment of HAI systems.
Quality assurance of HAI software.
HAI software requirements engineering.
HAI software UI/UX design.
HAI software modeling and designing.
HAI software architecture.
HAI software testing.
CI/CD, DevOps, MLOps, and AIOps of HAI.
Energy sustainability of the HAI system lifecycle.
Other related topics.
The paper submission process for WoRTH 2025 follows the submission instructions for the main conference, which are available at HHAI2025 Submission Instructions.
All submissions must be written in English and prepared for double-blind review. Thus, the paper must not contain any trace of the authors' identities, such as names, affiliations, acknowledgments, or identifying references. All papers must be original and not simultaneously submitted to another journal or conference.
Papers must be between 10 and 14 pages, including references. The abstract must not exceed 200 words. Authors can find the LaTeX template here.
All submissions should adhere to the CEUR-WS guidelines, as we intend to publish the WoRTH proceedings with CEUR-WS.
Authors should submit their work in PDF format via EasyChair (see the button below). Only contributions that include both an abstract and a paper will be considered in the review process.
Abstract registration deadline: April 28, 2025 (extended from April 11, 2025)
Submission deadline: April 28, 2025 (extended from April 11, 2025)
Author notification: May 9, 2025 (extended from May 2, 2025)
Workshop date: June 10, 2025
All deadlines are in the Anywhere on Earth (AoE) timezone.
To be announced.
Humberto Torres Marques-Neto, PUC Minas, Brazil
Jussara M. Almeida, UFMG, Brazil
Davide Bacciu, University of Pisa, Italy
Daniele Quercia, Nokia Bell Labs, United Kingdom & Politecnico di Torino, Italy