Organizers
(ordered alphabetically)
William Agnew (he/him)
William is a Ph.D. student in Computer Science at the University of Washington. He is advised by Pedro Domingos and Sidd Srinivasa and supported by an NDSEG Fellowship. His research focuses on developing human priors for reinforcement learning, with projects in object-oriented reinforcement learning. He was an organizer of the ICML 2019 Generative Modeling and Model-Based Reasoning for Robotics and AI workshop. He is a co-founder of Queer in AI and chaired the inaugural Queer in AI @ NeurIPS 2018 workshop. He has organized numerous other Queer in AI workshops and events, including the ICML 2019, ICML 2020, and NeurIPS 2019 workshops. As oSTEM’s Vice President of External STEM Partnerships, he currently directs its outreach efforts to academia, including Queer in AI.
Abeba Birhane (she/her)
Abeba Birhane is a PhD candidate in cognitive science at the School of Computer Science, University College Dublin, Ireland, and Lero. Her research explores questions of ethics, justice, and bias that arise with the design, development, and application of artificial intelligence.
Micah Carroll (he/him)
Micah is a PhD candidate in computer science at UC Berkeley, where he is advised by Anca Dragan. His research explores the dynamics of interactions between humans and AI systems, and in particular how they relate to broader societal consequences.
Elliot Creager (he/him)
Elliot is a PhD candidate at the University of Toronto and the Vector Institute, where he is supervised by Richard Zemel. He works on a variety of topics within machine learning, especially algorithmic bias and representation learning. He is also a student researcher at Google Brain in Toronto.
Agata Foryciarz (she/her)
Agata is a Ph.D. student in the Computer Science Department at Stanford University, where she is advised by Nigam Shah. She works on clinical applications of machine learning, focusing on algorithmic bias and dataset shift. She is a founder and organizer of Stanford Computer Science and Civil Society, and serves as a data science advisor to the Panoptykon Foundation. She co-organized the 2019 HAI-AI Index Workshop on Measurement in AI Policy at Stanford.
Pratyusha Ria Kalluri (she/they)
Ria is a Ph.D. student at the Stanford Artificial Intelligence Laboratory (SAIL), where they are co-advised by Dan Jurafsky and Stefano Ermon. They work on detecting and inducing concepts in otherwise opaque machine learning models, as well as examining how current AI concentrates power in the hands of a few, supporting the dreaming and building of Radical AI instead. They co-created Stanford Inclusion in AI (IIAI) and the decentralized Radical AI Network. They were honored to be selected for the NSF Fellowship, the Open Philanthropy AI Fellowship, and the PD Soros Fellowship for New Americans.
Sayash Kapoor (he/him)
Sayash is an incoming PhD student in Computer Science at Princeton University and a software engineer at Facebook. His research focuses on questions of fairness and bias in machine learning systems.
Suzanne Kite (she/her)
Suzanne Kite is an Oglala Lakota performance artist, visual artist, and composer. She is a PhD candidate at Concordia University, a Research Assistant for the Initiative for Indigenous Futures, and a 2019 Trudeau Scholar. Her research is concerned with contemporary Lakota epistemologies through research-creation, computational media, and performance practice. Recently, Kite has been developing a body interface for movement performances, carbon fiber sculptures, and immersive video and sound installations.
Raphael Gontijo Lopes (he/him)
Raphael is a Research Associate at Google Brain. He works on computer vision, robustness, and out-of-distribution generalization. He is a co-founder of Queer in AI and has organized numerous Queer in AI workshops and events, including the NeurIPS 2018, ICML 2019, and NeurIPS 2019 workshops. He is also an organizer of the Radical AI Network.
Manuel Sabin (they/them)
Manuel is finishing their Ph.D. in theoretical computer science at UC Berkeley, advised by Shafi Goldwasser and Christos Papadimitriou, and is next joining Radboud University as a postdoc to work with Mireille Hildebrandt. Their work ranges from complexity theory to analyzing the implicit politics encoded into technology and its socio-technical effects in redistributing power in the world. They believe, however, that including more marginalized voices, and creating an environment in which they can bring their full selves, is the most impactful way to make a field evolve. To this end, they founded and organized the QTPOC Reclaiming Education and STEM (QTPRES) Conference for the Queer, Trans, and POC community in the SF Bay Area, reimagining an academia and STEM that do not structurally silo themselves off from other disciplines, from their impact on society, from other ways of knowing and communicating, and from participation and ownership by the most marginalized communities (postponed due to COVID-19).
Marie-Therese Png (she/her)
Marie-Therese Png is a PhD candidate at the Oxford Internet Institute, whose research occupies the space between algorithmic coloniality in high-level AI governance discourse and contestations led by technology activists and civil society actors. She was previously Technology Advisor to the UN Secretary General’s High Level Panel on Digital Cooperation, working across technology policy domains including digital inclusion, lethal autonomous weapons, cybersecurity, and algorithmic racial discrimination, with a special focus on multi-stakeholder coalition building and advocating for the representation of low- and middle-income member states. Marie-Therese has worked with Google DeepMind on cross-cultural AI value alignment, co-authoring the academic publication Decolonial Theory as Socio-technical Foresight in Artificial Intelligence Research. Additional research projects include case studies on facial recognition systems in Singapore through a lens of digital human rights, engaging ‘non-Western’ perspectives in AI governance and building on her work as a member of the IEEE Ethically Aligned AI Classical Ethics Committee.
Marie-Therese works collaboratively at the intersection of technology ethics and systemic harms: as a Research Affiliate at the MIT Media Lab, where she founded implikit.org and co-organised the MIT BioSummit global biohacking movement, and as a Research Associate at the Harvard Artificial Intelligence Initiative, where she co-led the Global Civic Debate on AI, the US-China AI Summit, and the first AI Roundtable at the World Government Forum. She is currently a co-organiser of the 2020 iteration of the Rhodes Must Fall Oxford movement. Marie-Therese holds an undergraduate degree in Human Sciences from Oxford and a Master’s in Mind, Brain, and Education from Harvard.
Maria Skoularidou (she/her)
Maria is a PhD student at the University of Cambridge working on probabilistic machine learning. Prior to this workshop, she co-organised "Advances and Challenges in ML Languages" and the "Symposium on Causal Inference". Moreover, she is the founder of {Dis}Ability in AI, a group that aims to support and advocate for people with disabilities in AI.
Mattie Tesfaldet (they/them)
Mattie is a computer vision researcher and artist based in Montréal, Canada. They are pursuing their PhD at McGill University and Mila, co-supervised by Derek Nowrouzezahrai and Christopher Pal, researching generative models for visual content creation and differentiable image parameterizations. Mattie most recently interned as a researcher at Element AI, researching novel meta-learning methods for few-shot image generation. Outside of academia, they like to apply their research to explore the intersection of human creativity and artificial intelligence, in particular by developing new AI-based mediums for communication, expression, and sharing of visual imagery.
Mariya Vasileva (she/her)
Mariya is an Applied Scientist at Amazon based in Los Angeles, CA. She received her PhD from the University of Illinois at Urbana-Champaign, advised by Prof. David Forsyth, where she worked on computer vision applications in the fashion domain, representation learning for visual search, zero-shot retrieval, and generative models for clothing. Apart from her research interests, she is passionate about diversity and inclusion in the machine learning community and a proponent of responsible and socially conscious use of AI. Mariya hosted an open discussion on the role of AI systems in critical societal functions like healthcare, welfare allocation, and criminal justice at the Women in Machine Learning 2020 workshop, and is currently studying the intersection of fairness and machine learning for policy making.
Ramon Vilarino (he/him)
Ramon is a Data Scientist at Experian DataLab for Latin America in São Paulo, Brazil, where he works on innovative designs for credit scoring systems, focusing on explainability and ethical implications. While preparing to apply for a PhD, he has worked on multiple outreach initiatives designed to bring marginalized communities closer to the debate around science and technology and to raise awareness of technology’s interconnections with society within the Brazilian data science community, organizing local meet-ups and fostering tech job opportunities for women and people of color.
Rose E. Wang (she/her)
Rose recently graduated with a Bachelor's degree in Electrical Engineering and Computer Science from MIT. She is an incoming Stanford PhD student and will be supported by the NSF. She previously worked with Professor Joshua Tenenbaum, Professor Jonathan How, and Google Brain Robotics on topics including multi-agent systems and verification algorithms. During her undergrad, she sought to create a more inclusive CS community for underrepresented students by leading efforts within MIT’s Women in EECS group (WiEECS), creating a conference fund for female undergraduates and establishing a professional development branch for undergraduates to explore industry through internships and academia through research projects. She is a co-organizer of the ICML Women in Machine Learning 2020 session on continual reinforcement learning.