AI is evolving rapidly and entering a pivotal phase that will shape the future of society. As AI becomes part of everyday life, its development must align with the broader goal of serving humanity, a shift toward what we call AI for social good.
The workshop will focus on two major aspects of AI for social good:
Directing AI research toward urgent societal challenges: We will examine how work ranging from foundational algorithms to deployed systems can address public-health surveillance, environmental sustainability, critical-infrastructure protection, and global cooperation. Topics include early-warning diagnostics in underserved regions, intelligent food-supply orchestration, climate-risk monitoring, equitable humanitarian-aid allocation, and multi-agent negotiation support for participatory governance.
Ensuring everyday AI operates responsibly: As AI-enabled services permeate finance, healthcare, mobility, and media, the community must ensure that these systems are lawful, ethical, and trustworthy. Discussion will therefore cover governance frameworks and regulation, comparing the EU AI Act's risk-based obligations for high-risk systems, the NIST AI Risk Management Framework 1.0's guidance on trustworthiness, and the ISO/IEC 42001 management-system standard for organisational oversight, alongside global principles from the OECD and UNESCO. It will also cover technical safeguards such as bias detection and mitigation, robustness and adversarial testing, privacy-preserving learning, continual monitoring, and red-teaming of generative models to prevent harmful or misleading outputs. We will further examine socio-technical evaluation and participatory design methods, including impact assessments, model cards, datasheets, and community co-creation processes that embed human agency, accessibility, and equity throughout the AI life cycle, and reflect on deployment case studies that audit recommender systems, certify clinical decision support, verify autonomous-vehicle perception, or establish content-authenticity pipelines.
The workshop invites high-quality original papers on technical innovations, deployments in social contexts, open challenges, and related themes, including, but not limited to, the following:
Human-centered AI for policy-making and civic engagement
AI for agriculture, food security, and supply chains
AI for smart and sustainable energy management
AI in digital humanities and cultural preservation
Tools, datasets, testbeds, standards, and case studies on AI for Social Good
AI for climate adaptation, environmental resilience, and sustainability
AI in education for equitable and inclusive learning
AI for social welfare, justice, and equality
Ethical, fair, and accountable AI systems
AI for healthcare, wellbeing, and public health