Past Talks

Recordings of talks may be available upon request. Please contact the organizers.

Dr. Rune Nguyen Rasmussen

Postdoctoral Researcher,

University of Copenhagen

10th of June 2022, 14:00-15:00 (BST)

On the contributions of retinal direction selectivity to cortical motion processing in mice


Cells preferentially responding to visual motion in a particular direction are said to be direction-selective, and these were first identified in the primary visual cortex. Since then, direction-selective responses have been observed in the retina of several species, including mice, indicating motion analysis begins at the earliest stage of the visual hierarchy. Yet little is known about how retinal direction selectivity contributes to motion processing in the visual cortex. In this talk, I will present our experimental efforts to narrow this gap in our knowledge. To this end, we used genetic approaches to disrupt direction selectivity in the retina and mapped neuronal responses to visual motion in the visual cortex of mice using intrinsic signal optical imaging and two-photon calcium imaging. In essence, our work demonstrates that direction selectivity computed at the level of the retina causally serves to establish specialized motion responses in distinct areas of the mouse visual cortex. This finding thus compels us to revisit our notions of how the brain builds complex visual representations and underscores the importance of the processing performed in the periphery of sensory systems.


Rune received his PhD in 2021 from Aarhus University, DK, where he studied the contributions of retinal direction selectivity to cortical motion processing in the lab of Keisuke Yonehara. There, he combined genetic tools with intrinsic signal optical imaging and two-photon microscopy to characterize the effects of disrupting retinal direction selectivity on the response properties of direction-selective cells in different visual cortical areas. In 2021, Rune joined Maiken Nedergaard’s lab at the University of Copenhagen as a postdoctoral fellow to study state-dependent interactions between astrocytes and neurons in the visual system. To this end, he employs two-photon microscopy and whole-cell patch-clamp electrophysiology in awake behaving mice.

Dr. Charlotte Arlt

Editor at Nature Neuroscience,

previously Postdoctoral Researcher

Harvard Medical School

22nd of April, 2022 14:00-15:00 (GMT)

Cognitive experience alters cortical involvement in navigation decisions


The neural correlates of decision-making have been investigated extensively, and recent work aims to identify under what conditions cortex is actually necessary for making accurate decisions. We discovered that mice with distinct cognitive experiences, beyond sensory and motor learning, use different cortical areas and neural activity patterns to solve the same task, revealing past learning as a critical determinant of whether cortex is necessary for decision tasks. We used optogenetics and calcium imaging to study the necessity and neural activity of multiple cortical areas in mice with different training histories. Posterior parietal cortex and retrosplenial cortex were mostly dispensable for accurate performance of a simple navigation-based visual discrimination task. In contrast, these areas were essential for the same simple task when mice were previously trained on complex tasks with delay periods or association switches. Multi-area calcium imaging showed that, in mice with complex-task experience, single-neuron activity had higher selectivity and neuron-neuron correlations were weaker, leading to codes with higher task information. Therefore, past experience is a key factor in determining whether cortical areas have a causal role in decision tasks.

Charlotte received her PhD in 2017 from UCL, UK, where she studied the circuitry of the cerebellum in the lab of Michael Häusser. There, she combined two-photon microscopy and electrophysiology to characterize the interactions between molecular layer interneurons, Purkinje cells, and their excitatory afferents. In 2017, Charlotte joined Christopher Harvey's lab at Harvard Medical School as a postdoctoral fellow to study neocortical networks for navigation decisions and how they are shaped by experience. To this end, she employed multi-area optogenetic perturbations and large-scale two-photon microscopy of cortical association areas while mice performed navigation-based decision tasks in virtual reality. Since March 2021, Charlotte has been an Associate Editor at Nature Neuroscience based in Berlin, Germany.

Dr. Tiago Marques

Postdoctoral Researcher

MIT Department of Brain and Cognitive Sciences

PhRMA Foundation Fellow

January 24, 2022 14:00-15:00 (GMT)

What does the primary visual cortex tell us about object recognition?


Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these are thought to be derived from low-level stages of visual processing, this has not yet been shown directly. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how well their single neurons approximate those in macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition builds on these low-level representations. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1, followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.

How does hierarchical processing in neuronal networks in the brain give rise to sensory perception, and can we use this understanding to develop more human-like computer vision algorithms? Answering these questions has been the focus of Tiago’s research in recent years. He first encountered the problem of visual perception during his PhD at Champalimaud Research, where he studied visual cortical processing in the mouse. Under the supervision of Leopoldo Petreanu, Tiago developed a head-fixed motion discrimination task for mice and established a causal link between activity in the primary visual cortex (V1) and motion perception. Following that project, he studied the functional organization of cortical feedback and showed that feedback inputs in V1 relay contextual information to matching retinotopic regions in a highly organized manner. In 2019, Tiago joined the lab of Prof. James DiCarlo at MIT to continue his training, where he is currently a PhRMA Foundation Postdoctoral Fellow. His current research uses artificial neural networks (ANNs) to study primate object recognition behavior. He has continued to focus on early visual processing and implemented a set of novel benchmarks to evaluate how well different ANNs match primate V1 at the single-neuron level. More recently, he started to develop new computer vision models, constrained by neurobiological data, that are more robust to image perturbations.

Dr. Anne Urai

Assistant Professor

Cognitive Psychology Unit

Leiden University

December 6, 2021 15:15-16:15 (GMT)

Choice history bias as a window into cognition and neural circuits


Perceptual choices not only depend on the current sensory input, but also on the behavioral context, such as the history of one’s own choices. Yet, it remains unknown how such history signals shape the dynamics of later decision formation. In models of decision formation, it is commonly assumed that choice history shifts the starting point of accumulation towards the bound reflecting the previous choice. I will present results that challenge this idea. By fitting bounded-accumulation decision models to behavioral data from perceptual choice tasks, we estimated bias parameters that depended on observers’ previous choices. Across multiple animal species, task protocols and sensory modalities, individual history biases in overt behavior were consistently explained by a history-dependent change in the evidence accumulation, rather than in its starting point. Choice history signals thus seem to bias the interpretation of current sensory input, akin to shifting endogenous attention towards (or away from) the previously selected interpretation. MEG data further pinpoint a neural source of these biases in parietal gamma-band oscillations, providing a starting point for linking across species.
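
For readers less familiar with bounded-accumulation models, the toy simulation below illustrates the distinction the abstract draws between the two ways a choice-history signal could enter the decision process: as a shift in the accumulator's starting point or as a bias added to the evidence accumulation (drift) itself. This is a minimal sketch with arbitrary, assumed parameter values, not the models fitted in this work.

```python
# Minimal drift-diffusion sketch: history bias as a starting-point shift vs. a
# drift (evidence accumulation) bias. All parameters are illustrative assumptions.
import numpy as np

def simulate_ddm(n_trials=5000, drift=0.0, start_bias=0.0, drift_bias=0.0,
                 bound=1.0, noise=1.0, dt=0.01, max_t=5.0, seed=0):
    """Return choices (+1/-1, 0 = no decision) and reaction times (s)."""
    rng = np.random.default_rng(seed)
    choices = np.zeros(n_trials)
    rts = np.full(n_trials, np.nan)
    for i in range(n_trials):
        x = start_bias * bound                      # history shifts the starting point
        for step in range(int(max_t / dt)):
            x += (drift + drift_bias) * dt + noise * np.sqrt(dt) * rng.normal()
            if abs(x) >= bound:                     # decision bound reached
                choices[i], rts[i] = np.sign(x), (step + 1) * dt
                break
    return choices, rts

# Bias the decision toward the (fictive) previous choice, coded as +1:
for label, kwargs in [("starting-point bias", dict(start_bias=0.2)),
                      ("drift bias", dict(drift_bias=0.2))]:
    c, rt = simulate_ddm(**kwargs)
    decided = ~np.isnan(rt)
    print(f"{label:>20s}: P(repeat) = {np.mean(c[decided] == 1):.2f}, "
          f"mean RT = {np.nanmean(rt):.2f} s")
```

In simulations like this, a starting-point shift mostly biases fast decisions, since its influence is washed out by continued accumulation, whereas a drift bias keeps acting throughout the trial and also biases slow decisions; it is this kind of behavioral signature that allows model fitting to distinguish the two accounts.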

Dr Anne Urai studied cognitive neuroscience and philosophy at University College Utrecht, Xiamen University in China, University College London and École Normale Supérieure, Paris. During her doctoral research in the lab of Tobias Donner at the Universitätsklinikum Hamburg-Eppendorf and the University of Amsterdam, she investigated how our previous choices bias the way we interpret later information, and how this process is affected by the confidence in our decisions. She then joined Cold Spring Harbor Laboratory in New York as a postdoctoral fellow, investigating the neurophysiology of decision-making using high-density neural recordings in the mouse brain. During this time she was a core member of the International Brain Laboratory collaboration, working as part of a global team of systems and computational neuroscientists. Currently she is an Assistant Professor at Leiden University leading the CoCoSys lab. Her research focuses on the neural basis of decision-making across mammalian species, the interaction between learning and perception, and the neural basis of cognitive aging.

Dr. Jennifer Sun

Lecturer

Institute of Ophthalmology

University College London

November 22, 2021 12:00-13:00 (GMT)

Wiring & rewiring: circuit development and plasticity in the sensory cortices

To build an appropriate representation of the sensory world, neural circuits are wired according to both intrinsic factors and external sensory stimuli. Moreover, brain circuits have the capacity to rewire in response to an altered environment, both during early development and throughout life.

In this talk, I will give an overview of my past research on the dynamic processes underlying functional maturation and plasticity in rodent sensory cortices. I will also present data on the current and future research in my lab – that is, the synaptic and circuit mechanisms that mature brain circuits employ to regulate the balance between stability and plasticity. By applying chronic two-photon calcium imaging and closed-loop visual exposure, we studied circuit changes at single-neuron resolution and show that running concurrent with the visual stimulus is required to drive neuroplasticity in the adult brain.

Dr Sun recently joined University College London as a Lecturer to lead the Visual Plasticity Lab based at the Institute of Ophthalmology. Jennifer obtained her PhD from the University of Southern California, studying sensory cortical development and circuit computation using computational and systems approaches. During her postdoctoral work at UCSF, she focused on the cellular and circuit mechanisms of neuroplasticity in the developing and adult brain. At UCL, her group, the Visual Plasticity Lab, investigates how neuroplasticity in the visual system is regulated by visual and non-visual cues. To this end, they apply state-of-the-art imaging techniques, together with molecular, physiological, and computational approaches, to understand the biological basis of visual cortical plasticity.

Dr. Alexandra Keinath

Postdoctoral Fellow

The Brandon Lab

McGill University

October 18, 2021 14:00-15:00 (BST)

Dynamic maps of a dynamic world


Extensive research has revealed that the hippocampus and entorhinal cortex maintain a rich representation of space through the coordinated activity of place cells, grid cells, and other spatial cell types. Frequently described as a ‘cognitive map’ or a ‘hippocampal map’, these maps are thought to support episodic memory through their instantiation and retrieval. Though often a useful and intuitive metaphor, a map typically evokes a static representation of the external world. However, the world itself, and our experience of it, are intrinsically dynamic. In order to make the most of their maps, a navigator must be able to adapt to, incorporate, and overcome these dynamics. Here I describe three projects addressing how hippocampal and entorhinal representations do just that. In the first project, I describe how boundaries dynamically anchor entorhinal grid cells and human spatial memory alike when the shape of a familiar environment is changed. In the second project, I describe how the hippocampus maintains a representation of the recent past even in the absence of disambiguating sensory cues and explicit task demands, a representation which causally depends on intrinsic hippocampal circuitry. In the third project, I describe how the hippocampus preserves a stable representation of context despite ongoing representational changes across a timescale of weeks. Together, these projects highlight the dynamic and adaptive nature of our hippocampal and entorhinal representations, and set the stage for future work building on these techniques and paradigms.

I am a neuroscientist using a diverse mix of techniques, species, and computational approaches to help build a multilevel understanding of hippocampal contributions to cognition and navigation. I completed my PhD at the University of Pennsylvania working with Drs. Isabel Muzzio, Vijay Balasubramanian, and Russell Epstein, and am now a postdoctoral researcher at McGill University and the Douglas Mental Health Research Institute working with Dr. Mark Brandon. My work has been supported by numerous fellowships, including a Banting postdoctoral fellowship through which I am currently funded. I am currently searching for a permanent position as an independent investigator.

Mr. Zaki Ajabi

PhD Candidate

The Brandon Lab

McGill University

October 4, 2021 13:00-14:00 (BST)

Population dynamics of the thalamic head direction system during drift and reorientation


The head direction (HD) system is classically modeled as a ring attractor network which ensures a stable representation of the animal’s head direction. This unidimensional description popularized the view of the HD system as the brain’s internal compass. However, unlike a globally consistent magnetic compass, the orientation of the HD system is dynamic, depends on local cues and exhibits remapping across familiar environments. Such a system requires mechanisms to remember and align to familiar landmarks, which may not be well described within the classic one-dimensional framework. To search for these mechanisms, we performed large population recordings of mouse thalamic HD cells using calcium imaging, during controlled manipulations of a visual landmark in a familiar environment. First, we find that realignment of the system was associated with a continuous rotation of the HD network representation. The speed and angular distance of this rotation were predicted by a second dimension of the ring attractor which we refer to as network gain, i.e. the instantaneous population firing rate. Moreover, the 360-degree azimuthal profile of network gain, during darkness, maintained a ‘memory trace’ of a previously displayed visual landmark. In a second experiment, brief presentations of a rotated landmark revealed an attraction of the network back to its initial orientation, suggesting a time-dependent mechanism underlying the formation of these network gain memory traces. Finally, in a third experiment, continuous rotation of a visual landmark induced a similar rotation of the HD representation which persisted following removal of the landmark, demonstrating that HD network orientation is subject to experience-dependent recalibration. Together, these results provide new mechanistic insights into how the neural compass flexibly adapts to environmental cues to maintain a reliable representation of head direction.
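
As a rough aid to intuition (not the analysis used in this work), the sketch below shows how the two population-level quantities discussed above can be read out from simulated HD-cell activity: the orientation of the activity bump on the ring (the represented head direction) via a population vector, and the "network gain" as the instantaneous mean population rate. Cell counts, tuning shapes and the gain manipulation are all assumed for illustration.

```python
# Toy readout of head direction and network gain from simulated HD cells.
# All parameters and the gain manipulation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 60
pref_dirs = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)   # preferred HDs

t = np.arange(0, 60, 0.1)                       # time (s)
true_hd = (0.2 * t) % (2 * np.pi)               # slowly rotating bump
true_gain = 1.0 - 0.5 * (np.abs(t - 30) < 5)    # transient drop in network gain
kappa = 4.0                                     # tuning-curve sharpness
rates = true_gain[:, None] * np.exp(kappa * (np.cos(true_hd[:, None] - pref_dirs) - 1))
activity = rng.poisson(rates * 5.0)             # noisy population activity

# Population-vector decoding of the bump orientation; gain as mean activity.
pv = activity @ np.exp(1j * pref_dirs)
decoded_hd = np.angle(pv) % (2 * np.pi)
network_gain = activity.mean(axis=1)

hd_err = np.angle(np.exp(1j * (decoded_hd - true_hd)))
print(f"median |HD error|: {np.degrees(np.median(np.abs(hd_err))):.1f} deg")
print(f"network gain, baseline vs. dip: "
      f"{network_gain[np.abs(t - 10) < 5].mean():.2f} vs. "
      f"{network_gain[np.abs(t - 30) < 5].mean():.2f}")
```

Treating the overall population rate as a quantity in its own right, alongside bump orientation, is one simple way to capture the idea of a "network gain" axis on top of the classic one-dimensional ring description.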

I am a PhD candidate in Dr. Mark Brandon's lab (McGill University). I study the head direction system (i.e. the brain's internal compass), in mice, using miniaturized microscopes and calcium imaging of thalamic head direction cells, during free navigation. I have special interests in mathematical models of neural systems and applications of statistical methods in the discovery of low-dimensional latent manifold structures from high-dimensional neural data. Part of my PhD work has been in collaboration with the Center for Theoretical Neuroscience (Columbia University) where I worked with Prof. Liam Paninski and Dr. Xue-Xin Wei (now at UT Austin). I hold a Bachelor's degree in electrical engineering from Université de Montreal/Ecole Polytechnique de Montreal and a Master's degree in telecommunications engineering from CentraleSupélec (Paris). I was a visiting student at MIT in Prof. Mehmet Fatih Yanik's lab where I did my Master's project in neuroengineering before moving to Dr. Mehrdad Jazayeri's lab where I worked as a research assistant.

Dr. Amir Behbahani

Postdoctoral Fellow

Dickinson Lab

California Institute of Technology, USA

September 20, 2021 16:00-17:00 (BST)

“Wasn’t there food around here?”: An Agent-based Model for Local Search in Drosophila


The ability to keep track of one’s location in space is a critical behavior for animals navigating to and from a salient location, and its computational basis is now beginning to be unraveled. Here, we tracked flies in a ring-shaped channel as they executed bouts of search triggered by optogenetic activation of sugar receptors. Unlike experiments in open field arenas, which produce highly tortuous search trajectories, our geometrically constrained paradigm enabled us to monitor flies’ decisions to move toward or away from the fictive food. Our results suggest that flies use path integration to remember the location of a food site even after it has disappeared, and flies can remember the location of a former food site even after walking around the arena one or more times. To determine the behavioral algorithms underlying Drosophila search, we developed multiple state transition models and found that flies likely accomplish path integration by combining odometry and compass navigation to keep track of their position relative to the fictive food. Our results indicate that whereas flies re-zero their path integrator at food when only one feeding site is present, they adjust their path integrator to a central location between sites when experiencing food at two or more locations. Together, this work provides a simple experimental paradigm and theoretical framework to advance investigations of the neural basis of path integration.
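
To make the modeling approach concrete, here is a deliberately simplified agent-based sketch (all state definitions, parameters and numbers are assumptions for illustration, not the models fitted in this work): an agent on a ring-shaped channel path-integrates its signed distance from a remembered food encounter and reverses direction whenever that integrator exceeds a search bound, so its search stays centered on the now-absent food site.

```python
# Minimal agent-based sketch of local search on a ring channel via path
# integration. Parameters and the two-state rule are illustrative assumptions.
import numpy as np

circumference = 100.0     # length of the ring channel (arbitrary units)
food_pos = 25.0           # location of the (fictive) food encounter
search_bound = 15.0       # integrator value at which the agent turns around
step = 0.5                # distance travelled per time step

pos, heading = 0.0, +1    # location on the ring and current direction
integrator = None         # odometric distance from food (None = not yet fed)
reversals = []            # positions where the agent turned around

for _ in range(4000):
    pos = (pos + heading * step) % circumference
    if integrator is not None:
        integrator += heading * step              # odometry updates the estimate
    if integrator is None and abs(pos - food_pos) < step:
        integrator = 0.0                          # food encounter zeroes the integrator
    if integrator is not None and abs(integrator) > search_bound:
        heading *= -1                             # turn back toward the remembered site
        reversals.append(pos)

reversals = np.array(reversals)
center = (np.angle(np.exp(1j * 2 * np.pi * reversals / circumference).mean())
          * circumference / (2 * np.pi)) % circumference
print(f"{len(reversals)} reversals; search centered at {center:.1f} "
      f"(food was at {food_pos})")
```

Even this two-rule agent keeps returning to the former food location after it has disappeared; the state transition models described in the abstract additionally combine odometry with compass navigation to capture the structure of real fly trajectories.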

Amir Behbahani is a mechanical engineer turned neuroscientist/bioengineer. He received his Ph.D. in Mechanical Engineering from UCLA, developing a first-ever wafer-level post-fabrication modification technique for ring-type resonators used in MEMS gyroscopes. His technique significantly reduced the manufacturing cost of these devices without sacrificing their quality, earning him the outstanding student award. For his postdoctoral project at Caltech, he is studying flies’ foraging behavior using sophisticated genetic tools and agent-based dynamical modeling. In addition to his work on scientific projects, he is deeply committed to building diversity and equity in our community. To that end, he has led and initiated many outreach and advocacy programs during his time as a student and postdoc, including organizing exploreCaltech as the chair of the Caltech Postdoc Association and publishing articles about difficulties the postdoc community faced during the pandemic, which were covered by Science.

Dr. Hannah Haberkern

Postdoctoral Fellow

Jayaraman Lab

HHMI Janelia Research Campus, USA

August 16, 2021 14:00-15:00 (BST)

Neural circuits that support robust and flexible navigation in dynamic naturalistic environments


Tracking heading within an environment is a fundamental requirement for flexible, goal-directed navigation. In insects, a head-direction representation that guides the animal’s movements is maintained in a conserved brain region called the central complex. Two-photon calcium imaging of genetically targeted neural populations in the central complex of tethered fruit flies behaving in virtual reality (VR) environments has shown that the head-direction representation is updated based on self-motion cues and external sensory information, such as visual features and wind direction. Thus far, the head direction representation has mainly been studied in VR settings that only give flies control of the angular rotation of simple sensory cues. How the fly’s head direction circuitry enables the animal to navigate in dynamic, immersive and naturalistic environments is largely unexplored. I have developed a novel setup that permits imaging in complex VR environments that also accommodate flies’ translational movements. I have previously demonstrated that flies perform visually-guided navigation in such an immersive VR setting, and also that they learn to associate aversive optogenetically-generated heat stimuli with specific visual landmarks. A stable head direction representation is likely necessary to support such behaviors, but the underlying neural mechanisms are unclear. Based on a connectomic analysis of the central complex, I identified likely circuit mechanisms for prioritizing and combining different sensory cues to generate a stable head direction representation in complex, multimodal environments. I am now testing these predictions using calcium imaging in genetically targeted cell types in flies performing 2D navigation in immersive VR.

Hannah Haberkern is a neuroscientist interested in understanding how animals orient and navigate in complex naturalistic environments. Currently, as a postdoc in the Jayaraman lab at HHMI Janelia, she investigates how environmental context and past experiences shape navigation in walking fruit flies. To do this, she combines virtual reality techniques and two-photon calcium imaging with connectomic analysis as well as behavioral studies and modeling. Previous projects focused on different aspects of navigation in adult and larval fruit flies as well as crickets. Originally from Germany, Hannah first studied biomedicine at the University of Würzburg, before transitioning to a Master’s degree in Computational Biology and Bioinformatics at ETH Zurich/University of Zurich. For her PhD she worked with Berthold Hedwig at the University of Cambridge and Vivek Jayaraman at HHMI Janelia as part of a joint graduate program.

Dr. Mehran Ahmadlou

Postdoctoral Fellow

Heimel Lab

Netherlands Institute for Neuroscience

Currently:

Hofer lab, Sainsbury Wellcome Centre London UK

July 12, 2021 13:00-14:00 (BST)

A brain circuit for curiosity


Motivational drives are internal states that can differ even in similar interactions with external stimuli. Curiosity, the motivational drive for novelty seeking and investigation of the surrounding environment, is as essential and intrinsic for survival as hunger. Curiosity, hunger, and appetitive aggression drive three different goal-directed behaviors—novelty seeking, food eating, and hunting—but these behaviors are composed of similar actions in animals. This similarity of actions has made it challenging to study novelty seeking and distinguish it from eating and hunting in non-articulating animals. The brain mechanisms underlying this basic survival drive, curiosity, and the resulting novelty-seeking behavior have remained unclear. Despite well-developed techniques for studying mouse brain circuits, the field of motivational behavior contains many controversial and conflicting results. This has left the functions of motivational brain regions such as the zona incerta (ZI) uncertain. The lack of a transparent, non-reinforced, and easily replicable paradigm is one of the main causes of this uncertainty. Therefore, we chose a simple solution for our research: giving the mouse the freedom to choose what it wants—double free-access choice. By examining mice in an experimental battery of object free-access double-choice (FADC) and social interaction tests—using optogenetics, chemogenetics, calcium fiber photometry, multichannel recording electrophysiology, and multicolor mRNA in situ hybridization—we uncovered a cell type–specific cortico-subcortical brain circuit underlying curiosity and novelty-seeking behavior. We found in mice that inhibitory neurons in the medial ZI (ZIm) are essential for the decision to investigate an object or a conspecific. These neurons receive excitatory input from the prelimbic cortex that signals the initiation of exploration. This signal is modulated in the ZIm by the level of investigatory motivation. Increased activity in the ZIm instigates deep investigative action by inhibiting the periaqueductal gray region. A subpopulation of inhibitory ZIm neurons expressing tachykinin 1 (TAC1) modulates the investigatory behavior.

Mehran’s publication record began at age 18, when he published mathematical Olympiad books for high school students. After completing his Bachelor’s and Master’s degrees in bio-electric engineering, he moved to Alexander Heimel’s lab in Amsterdam, where he received his PhD in neuroscience. During his PhD he worked on subcortical processing and plasticity in the visual system and was awarded the Dutch Neuroscience Thesis Prize in 2019. He then transitioned to behavioral neuroscience and conducted postdoctoral research on brain circuits of exploratory and defensive behaviors. He is currently a postdoctoral fellow in Sonja Hofer’s lab at the Sainsbury Wellcome Centre at University College London. His main interest is to understand the brain circuits and mechanisms underlying instinctive behaviors.

Dr. Kohitij Kar

Research Scientist

DiCarlo Lab

McGovern Institute for Brain Research

Massachusetts Institute of Technology (MIT), USA

June 14, 2021 14:00-15:00 (BST)

Towards a neurally mechanistic understanding of visual cognition


I am interested in developing a neurally mechanistic understanding of how primate brains represent the world through their visual systems and how such representations enable a remarkable set of intelligent behaviors. In this talk, I will primarily highlight aspects of my current research, which focuses on dissecting the brain circuits that support core object recognition behavior (primates’ ability to categorize objects within hundreds of milliseconds) in non-human primates. On the one hand, my work empirically examines how well computational models of the primate ventral visual pathways embed knowledge of visual brain function (e.g., Bashivan*, Kar*, DiCarlo, Science, 2019). On the other hand, my work has led to various functional and architectural insights that help improve such brain models. For instance, we have exposed the necessity of recurrent computations in primate core object recognition (Kar et al., Nature Neuroscience, 2019), computations that are strikingly missing from most feedforward artificial neural network models. Specifically, we have observed that the primate ventral stream requires fast recurrent processing via ventrolateral PFC for robust core object recognition (Kar and DiCarlo, Neuron, 2021). In addition, I am currently developing various chemogenetic strategies to causally target specific bidirectional neural circuits in the macaque brain during multiple object recognition tasks to further probe their relevance during this behavior. I plan to transform these data and insights into tangible progress in neuroscience through my collaboration with various computational groups and by building improved brain models of object recognition. I hope to end the talk with a brief glimpse of some of my planned future work!

Kohitij Kar (“Ko”) is currently a Research Scientist at the McGovern Institute for Brain Research at MIT working in the lab of Dr. James DiCarlo. He completed his Ph.D. in the Department of Behavioral and Neural Sciences at Rutgers University in New Jersey (PhD advisor: Bart Krekelberg). His current research lies in the intersection of neurophysiological investigations of visual intelligence in the non-human primates and artificial intelligent systems.

Dr. Daniel Zaldivar

Postdoctoral Fellow

Section of Cognitive Neurophysiology and Imaging

National Institute of Mental Health (NIMH), USA

May 18, 2021 13:00-14:00 (BST)

Whole-brain fMRI mapping of neural activity recorded from a single voxel


The temporal correlation of fMRI fluctuations between two brain regions is often taken as a measure of functional connectivity. However, individual neurons within a given region exhibit diverse, and sometimes uncorrelated, activity patterns relative to their neighbouring neurons, raising the question of how to interpret area-based fMRI correlations. Using simultaneous fMRI and electrophysiology in non-human primates, we investigated how multiple, simultaneously recorded neural signals from a single voxel map onto spontaneous fMRI fluctuations elsewhere in the brain. I will show how single units within <1 mm³ of cortical tissue participate in multiple anatomical-functional domains, even under conditions of minimal stimulation. This local functional diversity cannot be ascertained from either the local LFP or fMRI activity.

Daniel Zaldivar completed his MD degree at the National Polytechnic Institute in Mexico, and his PhD in systems neuroscience at the Max Planck Institute for Biological Cybernetics in Germany, under Prof. Nikos Logothetis. He is currently a postdoctoral fellow in the Section of Cognitive Neurophysiology and Imaging at the National Institute of Mental Health, in the USA. At the NIH, Daniel’s goal is to generate a functional map of basal forebrain projections across the entire brain, using fMRI, optogenetics and electrophysiology in non-human primates, and to understand ascending neuromodulatory pathways.

Dr. Tomomi Karigo

Postdoctoral Fellow

David Anderson Lab,

California Institute of Technology, USA

April 26, 2021 16:00 – 17:00 (GMT)

Hypothalamic control of internal states underlying social behaviors in mice


Social interactions such as mating and fighting are driven by internal emotional states. How can we study internal states of an animal when it cannot tell us its subjective feelings? Especially when the meaning of the animal’s behavior is not clear to us, can we understand the underlying internal states of the animal? In this talk, I will introduce our recent work in which we used male mounting behavior in mice as an example to understand the underlying internal state of the animals.

In many animal species, males exhibit mounting behavior toward females as part of the mating behavior repertoire. Interestingly, males also frequently show mounting behavior toward other males of the same species. It is not clear what the underlying motivation is - whether it is reproductive in nature or something distinct.

Through detailed analysis of video and audio recordings during social interactions, we found that while male-directed and female-directed mounting behaviors are motorically similar, they can be distinguished by both the presence of ultrasonic vocalization during female-directed mounting (reproductive mounting) and the display of aggression following male-directed mounting (aggressive mounting). Using optogenetics, we further identified genetically defined neural populations in the medial preoptic area (MPOA) that mediate reproductive mounting and in the ventrolateral ventromedial hypothalamus (VMHvl) that mediate aggressive mounting. In vivo microendoscopic imaging in MPOA and VMHvl revealed distinct neural ensembles that mainly encode either a reproductive or an aggressive state, during which male- or female-directed mounting occurs. Together, these findings demonstrate that internal states are represented in the hypothalamus and that motorically similar behaviors exhibited under different contexts may reflect distinct internal states.

Tomomi received her PhD at the University of Tokyo, where she studied the neuroendocrinological mechanisms of reproduction using fish as a model to explore evolutionarily conserved regulatory mechanisms. As an HFSP postdoctoral fellow, she transitioned to systems neuroscience, studying neural mechanisms of innate social and emotional behaviors in mice with David Anderson at Caltech.

Dr. Ching-Lung Hsu

Assistant Research Fellow

Institute of Biomedical Sciences, Neuroscience Program of Academia Sinica (NPAS), Academia Sinica, Taiwan

https://www.ibms.sinica.edu.tw/ching-lung-hsu/

March 29, 2021 12:00 – 13:00 (GMT)

Rapid synaptic plasticity contributes to the emergence of task-relevant place-cell firing in a visually guided behavioral task

Animals use visual cues to guide complex behavior. In learning goal-directed, memory-dependent spatial tasks, the brain forms neural codes that require localizing the animal’s position and planning subsequent actions, likely dictated by immediate and prior visual cues. The cellular mechanisms supporting such conjunctive, spatially dependent codes during learning are poorly understood.

In the hippocampus, place cells have been considered critical for both spatial representation and navigation. A subset of place cells, called splitter cells, exhibit place-dependent firing modulated by behavioral motor trajectories, but the exact plasticity mechanisms that shape this behavior-dependent spatial code remain unknown. Here, we applied whole-cell patch-clamp recording to CA1 pyramidal neurons in awake mice performing a visually cued two-choice task in virtual reality, which requires a functionally intact dorsal hippocampus. With precise control of visual cues, we found that calcium plateau potentials can rapidly and robustly trigger the emergence of splitter cells in CA1. Further experiments showed that a specific cue-reward association is necessary for this cellular learning rule to engage the influence of prior visual cues over place-cell firing. Finally, I will discuss a possible model based on ideas of attractor networks representing cues, and a couple of potential directions for future work.

Ching-Lung holds a Bachelor’s degree in Zoology (major) and Electrical Engineering (minor) and received his PhD from National Taiwan University. After studying synaptic plasticity of corticothalamic synapses in the rodent somatosensory system, he moved to Nelson Spruston’s lab at the Janelia Research Campus of HHMI to work on cellular and biophysical mechanisms that may contribute to hippocampal functions related to spatial navigation and memory. Ching-Lung recently started his lab at Academia Sinica in Taiwan. His lifelong interests include understanding how the algorithmic properties required for dynamic cognitive processes can be supported by the computations of individual neurons, and playing with his daughter. He plans to pursue the former goal in a more systematic manner, using electrophysiology, brain slices, mouse virtual reality and kilohertz frame-rate two-photon imaging of synapses.

Dr. Marcia Becu

Postdoctoral Fellow

Christian Doeller Lab

Kavli Institute for Systems Neuroscience, NTNU

March 2, 2021 13:00 – 14:00 (GMT)

Modulation of landmark and geometry spatial coding in healthy aging


My research investigates the behavioral consequences of visual and cognitive aging within the spatial cognition framework. In this talk, I will present evidence suggesting that spatial representations are preferentially anchored to geometric cues with advancing age, while landmark information fails to be bound to the cognitive map of space. I will argue against the traditional view that attributes age-related navigation difficulty to an impairment of allocentric representations, by showing that older adults are as efficient as younger adults at using complex allocentric strategies, provided that their preferred cue (i.e. geometry) is available at the time of the navigation decision. I will present a short series of experiments combining eye-tracking and ecological navigation that highlight the importance of the ground plane and corners for the visual extraction of geometric cues and suggest that the way we visually explore a given environment is predictive of our own spatial coding preference.

Marcia has been trained in cognitive and experimental psychology. She holds a Master’s degree in neuropsychology and neuroscience from the University of Grenoble (France). She then obtained her PhD from the French National Centre for Scientific Research (CNRS) and the National Institute of Health and Medical Research (Inserm), in the lab of Angelo Arleo. She is currently a postdoctoral fellow in the lab of Christian Doeller at the Kavli Institute for Systems Neuroscience in Trondheim (Norway) and a guest researcher at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig (Germany). She studies the links between spatial cognition and vision in the framework of human aging. She is especially interested in how we perceive and memorize environmental information and how this knowledge guides our behavior. She addresses these questions using eye tracking combined with motion tracking, virtual reality and neuroimaging, with particular emphasis on ecological and immersive techniques, which allow navigation to be tested in natural, yet controlled conditions.

Dr. Alina Peter

Postdoctoral fellow

Pascal Fries Lab

Ernst Strüngmann Institute (ESI) for Neuroscience

January 18, 2021 12:00 – 13:00 (GMT)

Insights from stimulus repetition, color and context on primate V1 gamma and firing rates

Repeated encounters with stimuli offer the visual system an opportunity to alter and optimize its responses. A frequently observed result of repeated stimulation on short time scales is a reduction in firing rates. Here, we observe that responses in awake primate V1 decrease in strength but increase in gamma-band synchrony, for both natural and grating stimuli. These effects are stimulus-specific, build up over tens of repetitions, show some persistence over minutes, and are specific to the stimulated location in the visual field. Furthermore, the effects are stimulus-dependent: stimuli that show strong gamma-band responses to begin with tend to show the strongest increases with repetition. Time permitting, I will connect these findings to my work on the role of predictable spatial context, color and stimulus drive in generating gamma-band responses to both artificial and natural images.

Alina was originally trained as a psychologist and obtained a Master’s degree in Neuroscience with Peter de Weerd in Maastricht and at the Donders Institute in Nijmegen. She then joined the group of Pascal Fries at the ESI in Frankfurt, where she completed her PhD last February, studying adaptive and contextual modulation of primate V1 gamma activity and firing rates. She has continued as a postdoc at the ESI, and next month she will start a new position in the lab of Jim DiCarlo at MIT.

Dr. Madineh Sedigh-Sarvestani

David Fitzpatrick Lab

Max Planck Florida Institute for Neuroscience, U.S.A.

www.msarvestani.com

December 14, 2020 14:00 – 15:00 (GMT)

A sinusoidal transform of the visual field

The retinotopic maps of many visual cortical areas are thought to follow the fundamental principles that have been described for primary visual cortex (V1), where nearby points on the retina map to nearby points on the surface of V1, and orthogonal axes of the retinal surface are represented along orthogonal axes of the cortical surface. We've found a striking departure from this conventional mapping in the secondary visual area (V2) of the tree shrew. Although local retinotopy is preserved, orthogonal axes of the retina are represented along the same axis of the cortical surface, an unexpected geometry explained by an orderly sinusoidal transform of the retinal surface. This sinusoidal topography is ideally suited for achieving uniform coverage in an elongated area like V2, is predicted by mathematical models designed to achieve wiring minimization, and provides a novel explanation for stripe-like patterns of intra-cortical connections and stimulus response properties in V2. Our findings suggest that cortical circuits flexibly implement solutions to sensory surface representation, with dramatic consequences for the large-scale layout of topographic maps.

Madineh was trained as an engineer, and she got her PhD in Biomedical Engineering at Penn State University where she studied the interaction between sleep and epilepsy networks in rodents. She transitioned to systems neuroscience at U Penn, working with Diego Contreras and Larry Palmer on thalamocortical circuits in the cat visual system. From there she moved to Max Planck Florida to study the visual system of tree shrews with David Fitzpatrick. Madineh has been involved in teaching computational neuroscience at Cold Spring Harbor Summer Schools, and recently at Neuromatch Academy.