Program
8:30 - 9:20 : Distinguished Inaugural Address
How Can We Know if You're for Real? On the Evaluation of Ethics Signaling in AI
Justin Biddle, Georgia Institute of Technology
Dr. Justin B. Biddle is an Associate Professor in the School of Public Policy at the Georgia Institute of Technology, Director of the Georgia Tech Ethics, Technology, and Human Interaction Center, and a Faculty Affiliate in the Georgia Tech Center for Machine Learning. His research focuses on the role of values in the design of technological systems and on the ethics of emerging technologies, with a particular focus on artificial intelligence (AI) systems. He is part of the Georgia Artificial Intelligence Manufacturing Technology Corridor (Georgia AIM), where he leads a team facilitating the early identification and management of potential ethical and societal consequences of AI-enabled manufacturing systems. He is also co-lead of the Ethical AI Thrust of the National AI Institute on Advances in Optimization. He received his PhD in History and Philosophy of Science from the University of Notre Dame.
9:20 - 9:40 : Decision Procedures for Artificial Moral Agents
Tyler Cooke
Whenever one considers the possibility of designing ethical artificial intelligence (AI), it is tempting to think that the success of such a project would depend on whether systems could be built to implement the same kinds of ethical decision-making procedures as the ones we regard as appropriate for humans. This paper calls into question the foregoing line of thought. It argues that (i) the appropriateness of a decision procedure for a given moral agent depends on the nature of the agent’s capacities; (ii) AIs and humans possess capacities that differ in their nature; and (iii) if (i) and (ii), then the appropriate decision procedures for AIs are different from the ones that are appropriate for humans. The temptation to design ethical AI that employs the same decision procedures as humans should be resisted, lest we miss out on the benefits that could be gained from AI that utilizes distinct procedures.
9:40 - 10:00 : Eight Years of Philosophy @HLRS — Reflections on the Past, Present and Future of a Trans-Disciplinary Project
Nico Formánek, Michael Resch, Andreas Kaminski
The High Performance Computing Center Stuttgart (HLRS) has been operating a Philosophy of Computational Sciences group since 2016. Its collaboration with HLRS and external simulation scientists has covered many topics ranging from ethics and epistemology of simulations, to sociological aspects of HPC, modeling for policy, and philosophy of science of simulations. This talk will give a peek into three topics from the past, present and future of the group's work, reflecting on the opportunities and challenges of a highly trans-disciplinary collaboration.
10:00 - 10:30 : Coffee Break
Refreshments and snacks available in the common space outside the meeting room.
10:30 - 10:50 : Beyond Biomedical Simulations in Supercomputing: Ethical Challenges and Regulatory Obstacles with Boundary Conditions in Healthcare
Luka Poslon, Johannes Gebert
Achieving trustworthy AI systems with easy usability for all stakeholders in the healthcare sector is challenging, as trustworthiness has many facets. It has been shown that even when physicians lack knowledge or understanding, patients are usually willing to use drugs that are demonstrably safe and effective (Boddington, 2017). Reducing the opacity of black-box AI systems is crucial for healthcare AI applications because of the moral and professional responsibility of physicians to provide reasons and explanations for their decisions (Holzinger et al., 2019). However, black-box models are common in AI and are generally thought to pose a problem for trustworthiness. Even though robotic surgical systems can perform as well as physicians, many patients still trust a human surgeon more than a robotic system (Longoni, 2019). This talk explores the challenges in healthcare simulations, emphasizing the need for ethical frameworks and adaptive regulatory mechanisms to address data requirements and privacy concerns.
10:50 - 11:10 : Towards Understanding the Societal Impact of Open Science
Hena Ahmed, Jay Lofstead
Open science has been a central priority of U.S. federal research and policy goals in the 21st century. "Open science" is understood as an umbrella term covering various issues of "openness" in scientific practice and knowledge sharing, including democratic participation in scientific research (e.g., citizen science), equitable science communication, and fair intellectual property laws for digitized artifacts. This study will situate federal research agendas and policy frameworks for open science alongside the evolution and nationwide adoption of HPC resources. We also pay significant attention to AI/ML as a disruptive technology in both science/technology research and policy-making. The resulting contribution will be an analysis of the societal impacts of today's open science movement, grounded in an evaluation of risks and benefits posed by evolving scientific practices, paradigms, communities, and cultures.
11:10 - 11:30 : Private and Equitable Access for Large-Scale Systems
Ana Luisa Veroneze Solórzano, Devesh Tiwari, Rohan Basu Roy, Benjamin Schwaller, Sara Petra Walton, Jim M. Brandt
Large-scale systems collect enormous amounts of data and metadata about users located around the globe. Access to such systems faces regulatory, political, economic, and technological barriers, while privacy constraints prevent data sharing. We discuss operational practices for redesigning private and equitable HPC workflows in which privacy and utility can coexist. We explore anonymization techniques in the HPC context and discuss challenges in incorporating them into large-scale systems. We refer to current data protection regulations to address data transparency, processing, and management responsibilities for systems administrators.
11:30 - 11:50 : Fostering Ethics, Equity, and Accessibility in High Performance Computing: A Call for Advocacy
Elaine M. Raybourn
HPC is at the forefront of present and future generations' biggest social, economic, epistemological, and ethical challenges related to sustainability, equity, accessibility, and justice. At its best, HPC is a powerful tool that can be used for the social good of humanity by governments, for-profit and not-for-profit organizations, and teams of dedicated scientists and practitioners. In the words of Phil Roth, SC24 General Chair, "it matters what we do with it." In this presentation we identify opportunities to exercise moral imagination, conceive value propositions for including ethics in HPC, introduce practical approaches to fostering three types of advocacy (self, individual, system), and demonstrate several techniques and tools that can be used to hone moral imagination and develop ethical mindsets for the practice of advocacy.
11:50 - 12:00 : Summary and Discussion
Javier Gomez-Lavin
A chance to thank everyone for participating, summarize major themes, and plan for the future.