Artificial Intelligence

Workshop 2022

Christianity and Artificial Intelligence Workshop

Are you someone who...

  • researches or implements AI-related technology, wanting to grow your knowledge of the related ethics and theology?

  • studies AI ethics or theology, working to translate your knowledge for applied technical settings?



Passion Talks offers a shared space to explore these crucial questions in community. Join us for our main event, an online poster session and conversation; we would love to hear from you and learn from your expertise! You’ll connect with other professionals in related areas and get a chance to grow your public engagement skills.

The artificial intelligence session will bring together two themes.

We welcome AI researchers and ethics scholars to workshop their preliminary sketches, bridging ethical considerations with practical AI and adding an interdisciplinary approach to existing work.

Theme 1: Building a Future with AI


Submissions in this theme focus on general topics relating to bringing a Christian perspective to recent innovations in technology and AI. Note that we welcome personal AI research that provokes ethical considerations but may not have a clear conclusion.

Mission: Our goal is to empower AI and technology practitioners to present their ongoing research, providing real-world questions and perspectives.


To bring together a diversity of perspectives and ideas, we invite submissions presenting research on topics including but not limited to the following:

  • Ethical AI (Data Ethics, Ethical Algorithm Design)

  • Explainable AI and AI Safety

  • ML Fairness and Bias in Language Models

  • Responsible AI and AI for Social Good

  • Bible translation and faith-based AI applications

  • Personal AI research with motivating ethical or faith-based implications


We acknowledge that Tech and AI are rapidly evolving fields, and so we welcome interdisciplinary submissions that do not fit neatly into existing categories, as well as work that addresses the social impact of machine learning.

Theme 2: Christianity and AI


Submissions in this theme focus on general topics relating to the implications of AI for the Christian faith, and the possibility for Christianity to update its practices given the rapid pace of AI advancement.

Mission: Our goal is to empower ethics researchers to present their ongoing research, providing ethical and academic questions and perspectives.


To bring together a diversity of perspectives and ideas, we invite submissions presenting research on topics including but not limited to the following:

  • The trajectory of AI: Is it a path to autonomy, or is it a tool subdued by man to serve the common good of society?

  • What ethical frameworks need to be put in place to prevent the potential harm and abuse of AI?

  • Potential theological disruptions as a result of AI

  • Interactions of AI, humanity, and the Holy Spirit

  • Emotional and conscious AI


We acknowledge that Tech and AI are rapidly evolving fields, and so we welcome interdisciplinary submissions that do not fit neatly into existing categories, as well as work that addresses the social impact of machine learning.

Call for Participation

November 5, 2022, 9–11 am Pacific | Virtual Poster Session on GatherTown

This year's theme is shaping impactful spaces around our passions. We will be meeting virtually in a poster session format with interactive sessions in a game-based world. A virtual poster event is an interactive forum where researchers, enthusiasts, practitioners, and students present the latest developments, ongoing projects, and open needs. Poster presenters discuss their work, receive feedback, inspire and find inspiration from others, and network with like-minded community members.


SUBMISSIONS DUE: 9/30/22

Accepted Talks

Righteous AI: The Christian voice in the Ethical AI conversation

Gretchen Huizinga


The field of AI ethics is dominated by a materialist worldview. While religious traditions provide a wealth of wisdom concerning human moral behavior, religious perspectives have been marginalized and ethics has been framed in humanistic rather than transcendent terms. My thesis is that humanistic ethical principles, even if codified into laws and regulations, are insufficient to ensure robust and beneficial AI. Further, acknowledgment of divine intelligence, along with an ordinate understanding of human intelligence, is foundational to robust and beneficial AI. While materialist thought seeks to compel us to be good without transcendent reason or power, the Christian faith speaks clearly about the role of God as originator, motivator, and sustainer of human moral behavior, compelling us to look beyond ethics and toward righteousness that cannot be accomplished by our own will or power.


I am a Research Fellow at AI and Faith and a podcast host/PI for the Beatrice Institute’s initiative Being Human in an Age of AI. Previously, I was the executive producer/host of the Microsoft Research Podcast, where I interviewed more than 100 scientists about their work in technology research. This exposed me to the latest innovations in AI and also to emergent work in AI ethics where researchers were voicing concerns about the dangers of AI. The discourse revolved primarily around legal and political issues. Missing were religious voices. Since 84% of the world identifies with some form of religious belief, I decided to conduct research to add a missing voice and bring viewpoint diversity to the discourse. The result was a PhD from the University of Washington and a dissertation titled Righteous AI: The Christian voice in the Ethical AI conversation, which I’m currently expanding into a book.


Spiritual Strivings in a Sociotechnical World

Mark Graves


Repeated close engagement with technology affects human spirituality. AI can increase those effects as a powerful artifact, complex medium, or engaged agent. Studying those effects benefits from a model of sociotechnical spirituality. I propose that such models include human striving and commitments as well as AI activity and agency. This builds upon Christian spirituality scholarship, the psychology of religion and spirituality, and pragmatic philosophy. Unpacking the dimensions of human spirituality creates a framework to study how AI activity and agency affect sociotechnical spirituality. Creating a model of sociotechnical spirituality facilitates its study. It also identifies directions that could advance how people use technology to develop further in their personal spiritual commitments.


Mark Graves holds a PhD in computer science from the University of Michigan and an MA in theology from the Jesuit School of Theology and the Graduate Theological Union in Berkeley, and has completed fellowships in genomics at Baylor College of Medicine, moral psychology at Fuller Seminary, and moral psychology and theology at the University of Notre Dame. His work has included developing AI and data solutions in the biotech, pharmaceutical, and healthcare industries, as well as teaching and scholarship on the relationships between neuroscience, spirituality, and the soul. His current research focuses on using NLP techniques for understanding and modeling human morality, ethical approaches to data science and ML, and philosophical and psychological foundations for constructing moral AI. Mark publishes extensively in peer-reviewed publications on these topics and has written three books.


In awe of creation through bioinspired robotics

Heiko Kabutz


The development of millimeter-scale robotics takes significant inspiration from insects and spiders (Araneae). Through a deeper understanding of the beauty of nature at the small scale, the vastness of creation and the greatness of God's power are perceived. In building autonomous robots whose capabilities are still only fractions of any animal's, the complexity of nature becomes clear. Bioinspired robotics can thus be used to teach the beauty and complexity of nature.


I am a PhD student in Mechanical Engineering at the University of Colorado Boulder. Before coming to CU, I received my BEng in Mechanical Engineering from the University of Pretoria, South Africa, in 2019. My research interest is in the mechanical design, manufacturing, and control of robust legged movement mechanisms for robotics. My current focus is on applying bio-inspiration from spider and cockroach locomotion to small-scale robotics.


Active Fairness through Diverse Bias Beliefs

Richard Zhang


The increasing influence of technology and AI has rightly prompted a concerted response to remove bias in its use and promote fairness outcomes in society. However, defining fairness and achieving equal outcomes for all parties seem rather elusive, especially in light of results in fairness impossibility. In this article, we examine the diversity of beliefs about bias and introduce the active fairness framework. Instead of fully eliminating negative sources of bias, active fairness attempts to systemically introduce positive sources of bias, such as the presumption of innocence in legal applications, with minimal risk. Furthermore, it attempts to elucidate the nuance of societal fairness efforts by providing more visibility into fairness metrics and remediation efforts, as well as the tradeoffs considered. Active and passive fairness work together to promote positive societal outcomes for all.
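
The fairness-impossibility tension the abstract alludes to can be illustrated with a minimal, hypothetical sketch (not drawn from the talk itself): when two groups have different base rates, one classifier generally cannot equalize selection rate, true-positive rate, and precision across groups at once, which is why surfacing several metrics, as the abstract suggests, makes the tradeoffs visible. All data and numbers below are synthetic assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def metrics(y_true, y_pred):
    """Return selection rate, true-positive rate, and precision for one group."""
    selection_rate = y_pred.mean()            # demographic-parity component
    tpr = y_pred[y_true == 1].mean()          # equalized-odds component
    precision = y_true[y_pred == 1].mean()    # predictive-parity component
    return selection_rate, tpr, precision

# Two synthetic groups with different base rates of the positive outcome.
y_a = rng.binomial(1, 0.6, size=10_000)   # group A: 60% positive
y_b = rng.binomial(1, 0.3, size=10_000)   # group B: 30% positive

# The same imperfect predictor applied to both groups
# (each label is flipped with 20% probability).
pred_a = np.abs(y_a - rng.binomial(1, 0.2, size=y_a.size))
pred_b = np.abs(y_b - rng.binomial(1, 0.2, size=y_b.size))

for name, y, p in [("A", y_a, pred_a), ("B", y_b, pred_b)]:
    sel, tpr, prec = metrics(y, p)
    print(f"group {name}: selection={sel:.2f}  TPR={tpr:.2f}  precision={prec:.2f}")

# True-positive rates match across groups (~0.80), but selection rates
# (~0.56 vs ~0.38) and precisions (~0.86 vs ~0.63) diverge, so no single
# metric tells the whole fairness story.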


Richard Zhang is a Senior Research Engineer at Google Brain in Pittsburgh, where he leads research efforts on hyperparameter optimization, Bayesian methods, and theoretical deep learning. He also spearheads faith-based diversity initiatives within Google to empower people of faith to bring their unique perspectives to research discussions and core initiatives in fairness, responsibility, and ethics. He is grateful to have graduated with a PhD in Applied Mathematics and Computer Science from UC Berkeley; before that, he graduated in the Great Class of 2014 from Princeton University, where he first personally experienced God in a wave of revival.


AI and the Stories We Tell Ourselves: The Authorial Leverage of Ideology

Sherol Chen


We present the evaluative framework of Authorial Leverage [1], which holds the following tenets: (1) Artificial Intelligence (AI) is built with human expressive intent as its goal, (2) the development of AI is motivated by what it means to be human, and (3) AI is meant to facilitate how we can be better at being human. In particular, we look at how our ideologies shape the pursuit of AI, from its history [2] and its formal and symbolic foundations [3] to the rising machine learning paradigms [4]. The importance of evaluation and benchmarking is not limited to testing the capabilities of our tools [4] but also, and more importantly, extends to the ends we wish to achieve. Building AI requires us to represent the world, whether by data or by rules, whether human-centered or with a human in the loop, and AI has become an extension of our human intelligence and literacy.


Sherol Chen has studied Artificial Intelligence for over a decade. Currently, she is part of Google Research working on building and understanding large Machine Learning models. At Google, Sherol has advised on Machine Learning for Cloud enterprises as a subject matter expert, worked in Research at Google Brain for Machine Learning in Music and Creativity for project Magenta, and built algorithmic search results for YouTube. She's taught Artificial Intelligence for Stanford University Pre-Collegiate and around the world in six different countries. Her PhD work is in Computer Science, researching storytelling and Artificial Intelligence at the Expressive Intelligence Studio. Sherol is also a founding member of the Google Inter-Belief Network and served as an inaugural steering committee member for the Google Christian Fellowship.


Why Augmented Cognition: The Opportunity Cost of ASI

Joanna Ng


There are many possible trajectories for AI going forward. Given the limited resources available to advance AI, this talk makes a case for prioritizing AI trajectories that bless people over trajectories that aim to dominate people, and for why it is important to avoid paying the opportunity cost of augmented cognition in pursuit of Artificial Super Intelligence. This talk will also assert that smart is not smart enough unless it is cognitive, drawing on one of the speaker's recently granted patents in the field.


Joanna Ng is a technologist and an inventor. She has 49 patents granted to her name and has published more than 20 peer-reviewed academic papers. Joanna runs her own startup focused on augmented cognition assistance. Prior to that, Joanna worked for IBM, where she held a seven-year tenure as Head of Research and Director of the Centre for Advanced Studies, IBM Canada, and attained the title of IBM Master Inventor.


Why Compassion is No Longer Optional in the Era of AI

Benny Xian


We believe technology is a force for doing good. But technology without compassion leads to division, and division leads to harm. We created Project AI+Compassion because we believe compassion should be an important part of AI, business, and entrepreneurship, with the mission to bring compassion to tech. As we enter the 5th industrial revolution with the rapid advances in AI, doing good for humanity is paramount. The mindset of compassion is an essential first step. It has a profound impact on everything we do, from the products we design and the company culture we foster to the communities we build around us. We thank you for your interest in the work that we are doing. We invite you to join our community!


Benny is passionate about building innovative technology ventures spanning hardware, software, advanced/predictive analytics (AI), consumer, and enterprise, many of which focus on disruptive innovations as coined by Clayton Christensen at Harvard. Previously, he has played key roles from product engineering (Actel) and product (Transmeta IPO: TMTA) to operations (Midori Linux developed by Linus Torvalds), co-founding and investing (BeyondCore, acquired by Salesforce), and creating Voyadi and Project AI+C. He became an amateur social psychologist after spending too much time with Elliot Aronson and his lifelong work on cognitive dissonance. Elliot is arguably the most important living social psychologist of our time. He brought together the intellectual brilliance of Leon Festinger at Stanford and the compassionate heart of Abraham Maslow. Benny graduated from Stanford University with an MS in electrical engineering and a BSEE with high honors from the University of Florida.

Target Audience

  • Technologists

  • Practitioners

  • Scientists of the Christian faith, and of other beliefs

  • Theologians interested in thinking through the theological implications of AI

  • Pastors and full-time ministers wanting to make sense of AI

Belief/Values: We’re recruiting speakers who identify with Christianity as defined by the Nicene Creed. Guests of all belief systems are very welcome to come and interact with speakers.