2024 Participant Bios & Abstracts

Marc Aidinoff

"Which Democracy Do Computers Make?"

Abstract: Computers have been fashioned as tools both to operationalize democracy and to theorize it. This talk seeks to particularize the seemingly co-constitutive relationship between liberal democracy and network computing in three steps. First, it begins in the 1990s as U.S. politicians and theorists, including historians of science, conceptualized computers as the mechanism par excellence for establishing consensus and order from disunity. Second, it turns to technocrats who hoped networked computing could effectively constrain democratic action by replacing an unruly rights-based liberalism with a disciplined contract-based liberalism. Using the example of welfare automation, it explores how technocrats saw key technical similarities between computing and contracts. For them, computing networks were, like contracts, immanent, self-contained systems without recourse to fundamental claims outside of the epistemological schema of the system. Finally, the talk asks how responses to AI, including “Collective Constitutional AI” and an “AI Bill of Rights,” both reify and reject the shortcomings of these earlier attempts at actualizing a democratic dream.

Bio: Marc Aidinoff is a historian of twentieth-century technology, social policy, and state epistemology in the United States. He is currently a postdoctoral research associate at the Institute for Advanced Study in Princeton, NJ, and will be an assistant professor of history of science at Harvard University in fall 2025. Aidinoff’s current book project, Rebooting Liberalism: The Computerization of the Social Contract from 1974 to 2004, offers an alternative history of neoliberalism anchored in the realities of the late twentieth-century welfare office, where potential welfare recipients struggled to make themselves legible to computerized case management systems. Aidinoff traces the work of liberals who came to believe that they could make the welfare state popular with white voters by computerizing the mechanisms of governance. Aidinoff recently served as Chief of Staff and Senior Advisor in the Biden-Harris White House Office of Science and Technology Policy, where he helped lead a team of 150 policymakers on key initiatives including the Blueprint for an AI Bill of Rights. He earned his Ph.D. from the Massachusetts Institute of Technology and his B.A. from Harvard College.

Cameron Buckner

"Generative DNNs as models of imagination, creativity, and planning"

Abstract: In current debates over neural-network-based AI, neural network researchers have donned the mantle of philosophical empiricism and associationism, while their critics have taken up the side of philosophical nativism and rationalism. This dynamic is vividly illustrated in a centuries-displaced debate between David Hume and Jerry Fodor over the role that the imagination plays in rational cognition, which centers on whether statistical learning procedures could be bootstrapped to perform the forms of creativity characteristic of the human mind. Fodor alleged that empiricists are unable to explain how minds synthesize exemplars for use in reasoning, compose novel concepts from simpler concepts, engage in consequence-sensitive planning, or distinguish between causal and intentional relations amongst thoughts. Fodor applauded Hume for agreeing that rational cognition involves all these creative operations, but criticized Hume’s delegation of these operations to the faculty of the imagination. In this talk I show how a variety of different "generative" architectures help explain how an empiricist imagination could perform such operations. I conclude with more general morals about the prospects of abstraction and reasoning in empiricist theorizing, about a mutually beneficial interaction between philosophy and science in this context, and about how philosophers and scientists should think about the staggering power of these new systems—such as the “sparks of general intelligence” recently displayed by GPT-4—more generally.

Bio: Cameron Buckner is an Associate Professor in the Department of Philosophy at the University of Houston. His research primarily concerns philosophical issues that arise in the study of non-human minds, especially animal cognition and artificial intelligence. He began his academic career in logic-based artificial intelligence. That research inspired an interest in the relationship between classical models of reasoning and the (usually very different) ways that humans and animals actually solve problems, which led him to the discipline of philosophy. He received a PhD in Philosophy from Indiana University in 2011 and held an Alexander von Humboldt Postdoctoral Fellowship at Ruhr-University Bochum from 2011 to 2013. Recent representative publications include “Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks” (2018, Synthese) and “Rational Inference: The Lowest Bounds” (2017, Philosophy and Phenomenological Research)—the latter of which won the American Philosophical Association's Article Prize for the period of 2016–2018. He just published a book with Oxford University Press that uses empiricist philosophy of mind (from figures such as Aristotle, Ibn Sina, John Locke, David Hume, William James, and Sophie de Grouchy) to understand recent advances in deep-neural-network-based artificial intelligence.

Miles Cranmer

Details to come.

Kathleen Creel

"Fairness and Randomness in Predictive Systems"

Abstract: Algorithmic monocultures occur when many decision-makers use the same predictive system. A monoculture ensures consistency, but also amplifies the weaknesses, biases, and idiosyncrasies of the original predictive system. Algorithmic monoculture can lead to consistent ill-treatment of individual people by homogenizing the decision outcomes they experience. When the same person re-encounters the same or similar models, she might be wrongly rejected again and again.  We propose stochastic procedures that reduce the homogeneity of decision outcomes.  In doing so, we take a step back from the predictive performance or fairness of individual models to focus instead on the fairness of the decision ecosystem as a whole.

Bio: Kathleen Creel is an assistant professor at Northeastern University, cross-appointed between Philosophy and Computer Science. Her research explores the moral, political, and epistemic implications of machine learning as it is used in automated decision-making and in science. She received her PhD from the University of Pittsburgh’s History and Philosophy of Science Department and received the Herbert Simon Award from the International Association for Computing and Philosophy in 2023.

Stephanie Dick

"Ethics and Epistemology in the History of AI"

Abstract: Whenever someone announces that a computer has accomplished a given task - that it has recognized a face, diagnosed a disease, won a game of chess, proved a mathematical theorem, or composed a poem - the task in question has been redefined. It has been, of necessity, transformed into the sort of thing that computers can do: into their operations, data structures, formalisms, and terms. The focus of my scholarship has been on the epistemic and political stakes of these transformations, especially in the areas of automated theorem-proving and early automation in American policing. What is happening to our problems and problem domains, our questions and answers, as more and more inquiry is scaffolded by computational tools? The field of “AI ethics” has been widely critiqued because it demands that social and political values, like ‘fairness’ or ‘equality’, be transformed into constraints on classifiers; this has proven more difficult and more controversial than many at first hoped, while also failing to adhere to many communities’ sense of what is “ethical”. This talk will situate the present proliferation of machine learning and AI ethics in a longer history of epistemic and political concern surrounding “artificial intelligence”, using examples from the introduction of data-driven prediction to domestic policing.

Bio: Stephanie Dick is an Assistant Professor of Communication at Simon Fraser University. She has a PhD in History of Science from Harvard University. Her work focuses on the history of mathematics, computing, and the mind. Her projects explore attempts to automate mathematical intelligence and theorem-proving in the 20th century; the introduction of centralized computer databases to American policing in the 1960s; and, in her most recent work, debates about the character of the human mind at the intersection of logic, philosophy, and the occult. She is co-editor, with Janet Abbate, of Abstractions and Embodiments: New Histories of Computing and Society; she co-edits the “Mining the Past” column at the Harvard Data Science Review; she is co-editor of the most recent British Journal for the History of Science "Themes" issue, Histories of AI: A Genealogy of Power; and she sits on the editorial board of the IEEE Annals of the History of Computing. Before returning home to Canada to join the faculty at SFU, Stephanie was an Assistant Professor of History and Sociology of Science at the University of Pennsylvania and a Junior Fellow with the Harvard Society of Fellows.

Paul Ginsparg

"Ask not what ML can do for Physics, ..."

Abstract: I interpret the question assigned to this session as: Will scientists of the not-too-distant future be outperformed by Newton-, Einstein-, and Bohr-bots as easily as their counterparts in chess, Jeopardy, and Go have been, and forced to welcome our new computer overlords? Or: Will an AI see further by standing on the shoulders of giant datasets? We cannot definitively answer these questions without a more detailed characterization of where high-level human creative processes land within the larger computational complexity landscape, but we can try to reason about whether there is any fundamental obstacle to superhuman AI performance in theoretical science. As the title suggests, I will also endeavor to touch on the inverse question.

Bio: I received a B.A. in Physics from Harvard University (1977) and a doctorate in theoretical particle physics from Cornell University (1981), advised by Kenneth G. Wilson. I was in the Society of Fellows at Harvard from 1981 to 1984, then a faculty member in the physics department at Harvard University until 1990 and a staff member in the theoretical division of Los Alamos National Laboratory from 1990 to 2001, and I have been Professor of Physics and Computing & Information Science at Cornell University since 2001. I've authored papers in quantum field theory, string theory, conformal field theory, and quantum gravity, and in 1991 started the e-print archives (now arXiv.org, with over 2.4M submissions). I've served on too many committees and advisory boards to remember, and have received various awards, including MacArthur, Radcliffe, and Simons Fellowships, and most recently the AIP Karl T. Compton Medal for outstanding statesmanship in science and the Einstein Foundation Award for Promoting Quality in Research. As a member of the Information Science department at Cornell, I employed a minimal grounding in finite-dimensional vector spaces over the real numbers to benefit from over two decades of weekly Artificial Intelligence and Machine Learning Seminars and Computer Science Colloquia. This provided an amazingly detailed albeit passive exposure to the rapid developments in deep learning, from over a decade ago through current developments in text/image/audio generative AI. This spring I am teaching a course that requires infosci students to build a character-level Shakespeare generator from scratch using the transformer architecture. In the fall I employed a minimal grounding in finite-dimensional vector spaces over the complex numbers to teach a physics course in Quantum Information, including use of current cloud quantum computing resources, quantum error correction, measures of entanglement, quantum sample complexity, and some recent developments in the use of generative ML to model quantum data. It is my subjective experience that enthusiasm for a subject area can frequently substitute for substantive background.

Zoë Hitzig

"Contextual Confidence and Generative AI"

Abstract: Generative AI models perturb the foundations of effective human communication. They present new challenges to contextual confidence, disrupting participants’ ability to identify the authentic context of communication and their ability to protect communication from reuse and recombination outside its intended context. A broad range of challenges around privacy and authenticity (e.g. data protection, surveillance, misinformation, disinformation, impersonation, etc.) can be understood as challenges to contextual confidence. The framework I outline suggests that the future of AI depends on the strategies we develop to stabilize communication in the face of threats to contextual confidence. Time permitting, I will discuss a few stabilizing strategies like provenance tracking, federated learning and proof of personhood.

Bio: Zoë Hitzig is a Junior Fellow at the Harvard Society of Fellows. She is interested in normative issues in the design of algorithms, markets and contracts – especially around privacy, transparency and equity. She holds a PhD in Economics from Harvard and an MPhil in History and Philosophy of Science from Cambridge.

Matthew Jones

Details to come.

Peter Norvig

"Two Cultures of Statistics"

Abstract: Science has long used observations about the world (data) to confirm or refute existing theories, and to help create new theories. We have about 400 years of experience in doing that for relatively simple theories involving a handful of variables (as with Kepler's laws of planetary motion). But we have only about 20 years of experience for complex theories involving millions or billions of variables derived from data science and artificial intelligence. What will we need to do to gain facility in working with these tools, and confidence in trusting their results?

Bio: Peter Norvig is a Fellow at Stanford's Human-Centered AI Institute and a research director at Google Inc.; previously he directed Google's core search algorithms and Research groups. He is co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field, and co-teacher of an Artificial Intelligence class that signed up 160,000 students, helping to kick off the current round of massive open online classes. He is a fellow of the AAAI, ACM, California Academy of Sciences, and American Academy of Arts & Sciences.

Fabian Offert

"This Is Your Brain on ImageNet: Embeddings as Trading Zone and Cultural Technique"

Abstract: "Embedding" is one of the most important techniques in the machine learning toolbox. Polemically, in natural language processing and computer vision, any useful knowledge is embedded knowledge. While the technique itself is hardly more than an advanced form of compression, it is the universality of embeddings that renders them interesting from an epistemological perspective: universal faculties – such as "seeing" in the case of computer vision, which I will focus on in this talk – are extrapolated from particular datasets and represented in an exclusively relational manner. Exactly because of their universality, embeddings live on, sometimes well beyond the lifespan of the datasets that they represent. "Historical" deep convolutional neural network features, for instance, still inform the training of newer generative models by way of perceptual distance metrics that determine the realism of generated images. By becoming just another part of the training pipeline, however, they cease to appear as distinctive epistemic structures. More importantly, this "historical opacity" of embeddings obfuscates what I propose to understand as a major epistemic shift: scientific knowledge, at least if it relates to the visual, is often generated with the help of cultural data. Embeddings, then, can be seen as a cultural technique, and as a trading zone that spans, surprisingly, not only branches of the natural sciences, but the sciences and the humanities.

Bio: Fabian Offert is Assistant Professor for the History and Theory of the Digital Humanities at the University of California, Santa Barbara. His research and teaching focus on the visual digital humanities, with a special interest in the epistemology and aesthetics of computer vision and machine learning. His current book project focuses on "Machine Visual Culture" in the age of foundation models. At UCSB, he is affiliated with the Department of German, the Media Arts and Technology program, the Comparative Literature program, the Mellichamp Initiative in Mind & Machine Intelligence, and the Center for Responsible Machine Learning. He is also principal investigator of the international research project "AI Forensics" (2022-25), funded by the Volkswagen Foundation, and was principal investigator of the UCHRI multi-campus research group "Critical Machine Learning Studies" (2021-22).

Emily Sullivan

"Machine Learning in science: Dimensions of understanding"

Abstract: More and more sciences are turning to machine learning (ML) technologies to solve long-standing problems or make new discoveries—ranging from medical science to fundamental physics. At the same time, the very same modeling technologies are used across society, from determining what news we see on social media to fraud detection and criminal risk assessment. The ever-growing fingerprint that ML modeling leaves on the production of scientific and social knowledge brings opportunities as well as pressing challenges. In this talk, I discuss how philosophy of science and epistemology can help us understand the potential and limits of ML used for science and society. Specifically, I will draw on themes regarding the nature of scientific modeling, understanding, explanation, and idealization.

Bio: Emily Sullivan is an Assistant Professor of philosophy at Utrecht University. She is a fellow in the ESDiT research consortium and an Associate Editor for The British Journal for the Philosophy of Science. Her research sits at the intersection of philosophy and data and computer science, exploring the ways that technology mediates our knowledge. Her work can be found in the Australasian Journal of Philosophy, Philosophical Studies, Oxford Studies in Experimental Philosophy, Philosophy of Science, and more. Her paper ‘Understanding from Machine Learning Models’ received an honorable mention for the 2022 Popper Prize. She is currently the PI on an NWO Veni project (2021-2024) on the explainability of machine learning systems.

Leslie Valiant

"Educability"

Abstract: We seek to define the capability that has enabled humans to develop the civilization we have, and that distinguishes us from other species. For this it is not enough to identify a distinguishing characteristic - we want a capability that is explanatory of humanity's achievements. "Intelligence" does not work here because we have no agreed definition of what intelligence is or how an intelligent entity behaves. We need a concept that is behaviorally better defined. The definition will need to be computational in the sense that the expected outcomes of exercising the capability need to be both specifiable and computationally feasible. This formulation is related to the goals of artificial intelligence research but is not synonymous with it, leaving out, for example, the many capabilities we share with other species. We make a proposal for this essential human capability, which we call "educability." It synthesizes abilities to learn from experience, to learn from others, and to chain together what we have learned at different times. It starts with the now standard notion of learning from examples, as formalised in the Probably Approximately Correct model and exploited in the practice of machine learning. The recent demonstrations of Large Language Models learning to generate smoothly flowing prose provide a clue that pursuing computationally well-defined tasks in this way constitutes a promising approach toward capturing broader human capabilities. The basic question then is: What are these broader human capabilities? This is what the educability notion attempts to answer. What we ask computers to do, in the main, reflects human capabilities. Hence a better understanding of human capabilities can be expected to provide goals for our future technology.

Bio: Leslie Valiant was educated at King's College, Cambridge; Imperial College, London; and at Warwick University where he received his Ph.D. in computer science in 1974. He is currently T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics in the School of Engineering and Applied Sciences at Harvard University, where he has taught since 1982. Before coming to Harvard he had taught at Carnegie Mellon University, Leeds University, and the University of Edinburgh.