8:30 - Opening remarks
8:40 - Nori Jacoby
Title: Mapping the geometry of internal representations through sampling with people
Abstract: Human perception is rich, multi-dimensional, and contextual. (Consider, for example, the ways in which emotion is conveyed by the voice: by pitch, volume, and many other parameters.) Yet behavioral methods, constrained by their reliance on one-dimensional and simplified stimulus spaces, typically produce an impoverished understanding of human perception. Inspired by Markov chain Monte Carlo (MCMC) techniques borrowed from machine learning and physics, my research program addresses this gap by developing new adaptive sampling methods, in which each successive stimulus depends on the subject's response to the previous stimulus. Such processes allow us to sample from the complex, high-dimensional joint distribution associated with internal representations and to obtain high-resolution maps of perceptual spaces. In this talk, I demonstrate how this method sheds light on fundamental questions, such as how high-dimensional representations are efficiently structured in the mind and how information constraints shape internal representational spaces. This work bridges novel experimental approaches in psychophysics with computational models of the mind.
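The adaptive sampling loop the abstract describes can be sketched in a few lines. This is an illustrative toy, not the actual experimental pipeline: a simulated participant (here with a Gaussian "internal distribution") stands in for real human choices, and the function names are invented for the sketch.

```python
import math
import random

random.seed(0)  # reproducible toy run

def mcmc_with_people(choose, propose, x0, n_steps):
    """MCMC with a participant in the loop (sketch).

    `choose(current, proposal)` returns whichever stimulus the
    participant prefers; those choices implement the accept/reject
    step, so the chain converges toward the participant's internal
    distribution over stimuli.
    """
    x = x0
    samples = []
    for _ in range(n_steps):
        y = propose(x)    # next stimulus depends on the current one
        x = choose(x, y)  # participant's choice acts as acceptance
        samples.append(x)
    return samples

def simulated_choice(x, y):
    """Participant whose internal distribution is Gaussian(0, 1),
    choosing by a Barker/Luce rule: each option is picked with
    probability proportional to its subjective probability."""
    p = lambda z: math.exp(-z * z / 2)
    return y if random.random() < p(y) / (p(y) + p(x)) else x

samples = mcmc_with_people(simulated_choice,
                           lambda x: x + random.gauss(0, 0.5),
                           x0=3.0, n_steps=5000)
mean = sum(samples) / len(samples)  # should hover near 0
```

The Barker choice rule makes the chain reversible with respect to the participant's internal distribution, which is why a sequence of binary preference judgments can map that distribution.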
9:05 - Alison Gopnik [CANCELED]
Title: Empowerment Gain as Causal Model Construction
Abstract: Learning about the causal structure of the world is a fundamental problem for human cognition. Causal models, and especially causal learning, have proved difficult for large models trained with standard deep-learning techniques. In contrast, cognitive scientists have applied advances in our formal understanding of causation in computer science, particularly within the Causal Bayes Net formalism, to understand human causal learning. In the very different tradition of reinforcement learning, researchers have described an intrinsic reward signal called “empowerment,” which maximizes the mutual information between actions and their outcomes. Empowerment may be an important bridge between classical Bayesian causal learning and reinforcement learning, and may help to characterize causal learning in humans and enable it in machines. If an agent learns an accurate causal world model, it will necessarily increase its empowerment; conversely, increasing empowerment will lead to a more accurate causal world model. Empowerment may also explain distinctive empirical features of children’s causal learning, as well as providing a more tractable computational account of how that learning is possible. In an empirical study, we systematically test how children and adults use cues to empowerment to infer causal relations, design effective causal interventions, and appropriately generalize to new contexts.
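The quantity behind empowerment, mutual information between actions and their outcomes, can be illustrated with a toy computation. The channels and names below are invented for the sketch, not taken from the study; for these symmetric channels the uniform action distribution attains the channel capacity, so the value computed equals the empowerment.

```python
import math

def mutual_information(p_a, p_s_given_a):
    """I(A; S) in bits, for action distribution p_a and an
    outcome channel p_s_given_a[a][s] = P(s | a)."""
    n_s = len(p_s_given_a[0])
    p_s = [sum(p_a[a] * p_s_given_a[a][s] for a in range(len(p_a)))
           for s in range(n_s)]
    mi = 0.0
    for a, pa in enumerate(p_a):
        for s, ps_a in enumerate(p_s_given_a[a]):
            if pa > 0 and ps_a > 0:
                mi += pa * ps_a * math.log2(ps_a / p_s[s])
    return mi

uniform = [0.5, 0.5]
# Actions fully determine outcomes: maximal empowerment (1 bit)
deterministic = [[1.0, 0.0], [0.0, 1.0]]
# Outcomes ignore actions: zero empowerment
noisy = [[0.5, 0.5], [0.5, 0.5]]
```

An agent whose actions reliably control outcomes (the deterministic channel) has high empowerment; one whose actions make no difference (the noisy channel) has none, which is the sense in which empowerment rewards learning an accurate causal world model.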
9:30 - Mimi Liljeholm
Title: Information Theory & Computational Cognitive Modeling
Abstract: I will review different uses of Information Theory in cognitive modeling and extend the discussion to include Bayesian Inference and Reinforcement Learning. I will argue that, while powerful, these generic frameworks are neither necessary nor sufficient for explaining cognitive phenomena at Marr’s ‘What/Why’ level. Moreover, when construed as ‘unifying accounts’ they often obscure rather than clarify important lines of inquiry in Psychology and Neuroscience. I conclude that they are best understood as tools through which scientific concepts can be translated into quantitative predictions and applications.
9:55 - Coffee Break
10:30 - Terry Regier
Title: Boas, Shannon, and the origin of semantic categories
Abstract: Cross-language variation in semantic categories (e.g. word meanings) has been explained in terms of information-theoretic principles. Central to such accounts is a prior distribution over meanings that need to be conveyed. It has often been assumed that this distribution is the same for different speech communities, but loosening that assumption allows us to connect information-theoretic explanations to classic proposals about the relation of language and culture.
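As a toy illustration of the role the prior plays in such accounts (invented for this sketch, not from the talk): the communicative cost of a category system depends on the assumed need distribution over meanings, so communities with different priors can be led to different optimal systems.

```python
import math

def expected_cost(prior, system):
    """Expected surprisal (bits) of a category system: the speaker
    names each meaning's category; the listener reconstructs the
    meaning from the prior restricted to that category."""
    cost = 0.0
    for m, p_m in enumerate(prior):
        cat = system[m]
        p_cat = sum(p for mm, p in enumerate(prior) if system[mm] == cat)
        cost += p_m * -math.log2(p_m / p_cat)
    return cost

# Four meanings, two possible two-word systems (category labels 0/1)
split_low = [0, 0, 1, 1]  # one word for {0,1}, another for {2,3}
split_odd = [0, 1, 0, 1]  # one word for {0,2}, another for {1,3}

uniform_prior = [0.25] * 4
skewed_prior = [0.4, 0.4, 0.1, 0.1]  # meanings 0 and 1 needed often
```

Under the uniform prior the two systems are equally good, but under the skewed prior the system that assigns the two frequently needed meanings to different words is cheaper, since the listener can then tell them apart.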
11:00 - Panel Discussion
12:00 - Closing remarks