How experience shapes the brain
During development, maturation and experience work jointly to provide optimal neural representations of the environment to cope with future needs. By studying typically developing individuals, sensory deprivation, and sensory restoration (e.g., cochlear implantation for hearing), the SEED group explores the mechanisms underlying functional brain development and organization. We are particularly interested in critical and sensitive periods: limited windows during early development when the brain is especially receptive to experience and certain abilities (such as vision, hearing, or language) must be developed. These phases represent both opportunities for learning and periods of vulnerability if typical input is not provided. The neural architecture shaped in infancy and childhood continues to guide much of our behavior as adults.
"So we beat on, boats against the current, borne back ceaselessly into the past."
The Great Gatsby, F. Scott Fitzgerald
Research is conducted at the interface of cognitive neuroscience, biological psychology, and developmental psychology. We apply multiple methods, including computational approaches, electrical neuroimaging, functional magnetic resonance imaging, and psychophysics, to elucidate complex neural dynamics.
Our research focuses on understanding the development and functioning of sensory and cognitive systems within a multisensory framework.
ONGOING PROJECTS
How the brain learns to process natural speech signals in typical development and when hearing is provided late, as in individuals using cochlear implants
Brain organization and plasticity in sensory deprivation (blindness and deafness)
Short-term plasticity of audio-visual integration
Extraction of speech from noise
Lip-Reading: Advances and Unresolved Questions in a Key Communication Skill
Martina Battista, Francesca Collesei, Eva Orzan, Marta Fantoni, Davide Bottari (2025).
Journal of Audiology Research.
Lip-reading, i.e., the ability to recognize speech using only visual cues, plays a fundamental role in audio-visual speech processing, intelligibility, and comprehension. This capacity is integral to language development and functioning; it emerges in early development, and it slowly evolves. By linking psycholinguistics, psychophysics, and neurophysiology, the present narrative review explores the development and significance of lip-reading across different stages of life, highlighting its role in human communication in both typical and atypical development, e.g., in the presence of hearing or language impairments. We examined how relying on lip-reading becomes crucial when communication occurs in noisy environments and, on the contrary, the impacts that visual barriers can have on speech perception. Finally, this review highlights individual differences and the role of cultural and social contexts for a better understanding of the visual counterpart of speech.
Resilience and vulnerability of neural speech tracking after hearing restoration
Alessandra Federici, Marta Fantoni, Francesco Pavani, Giacomo Handjaras, Evgenia Bednaya, Alice Martinelli, Martina Berto, Franco Trabalzini, Emiliano Ricciardi, Elena Nava, Eva Orzan, Benedetta Bianchi & Davide Bottari (2025).
Communications Biology.
https://doi.org/10.1038/s42003-025-07788-4
The role of early auditory experience in the development of neural speech tracking remains an open question. To address this issue, we measured neural speech tracking in children with or without functional hearing during their first year of life after their hearing was restored with cochlear implants (CIs), as well as in hearing controls (HC). Neural tracking in children with CIs is unaffected by the absence of perinatal auditory experience. CI users and HC exhibit a similar neural tracking magnitude at short timescales of brain activity. However, neural tracking is delayed in CI users, and its timing depends on the age of hearing restoration. Conversely, at longer timescales, speech tracking is dampened in participants using CIs, thereby accounting for their speech comprehension deficits. These findings highlight the resilience of sensory processing in speech tracking while also demonstrating the vulnerability of higher-level processing to the lack of early auditory experience.
Disentangling nonverbal communicative signals in the brain by combining communicative features alteration and neural tracking
Francesca Collesei, Marta Fantoni, Davide Bottari (2024).
Journal of Applied Psycholinguistics, 2, pp. 45-62.
Social communication entails the processing of a myriad of signals. While it may seem effortless, humans need to process multitudes of only partially correlated multimodal events, such as speech and nonverbal communicative cues from the face (e.g., mouth and eye movements), head nods, and hand gestures. Studying how humans make sense of this information requires investigating how the brain processes this multitude of complex, intertwined, and continuous events. With the aim of better estimating the specific roles of nonverbal communicative cues in face-to-face communication, we review recent works stemming from two main approaches. The first relies on contexts or experimental protocols that hinder specific communicative features to estimate the impact of their absence: for instance, studies conducted in naturalistic contexts in which obstacles affect nonverbal communicative cues, or in experimentally manipulated contexts in which specific stimulus features are altered. The second leverages the neural tracking technique, which makes it possible to characterize how the brain encodes the wide gamut of continuous information associated with nonverbal signals through the continuous measurement of the associated brain activity. Integrating these approaches might offer new perspectives for disentangling sensory-based components of social communication.
Brain Encoding of Naturalistic, Continuous, and Unpredictable Tactile Events
Nicolò Castellani, Alessandra Federici, Marta Fantoni, Emiliano Ricciardi, Francesca Garbarini and Davide Bottari (2024).
eNeuro.
https://doi.org/10.1523/ENEURO.0238-24.2024
Studies employing EEG to measure somatosensory responses have typically been optimized to compute event-related potentials in response to discrete events. However, tactile interactions involve continuous processing of nonstationary inputs that change in location, duration, and intensity. To fill this gap, this study aims to demonstrate the possibility of measuring the neural tracking of continuous and unpredictable tactile information. Twenty-seven young adults (15 females) were continuously and passively stimulated with a random series of gentle brushes on single fingers of each hand, which were covered from view. Thus, tactile stimulations were unique for each participant and stimulated finger. An encoding model measured the degree of synchronization between brain activity and the continuous tactile input, generating a temporal response function (TRF). Brain topographies associated with the encoding of each finger's stimulation showed a contralateral response at central sensors starting at 50 ms and peaking at ∼140 ms of lag, followed by a bilateral response at ∼240 ms. A series of analyses highlighted that a reliable tactile TRF emerged after just 3 min of stimulation. Strikingly, topographical patterns of the TRF allowed discriminating digit lateralization across hands and digit representation within each hand. Our results demonstrate for the first time the possibility of using EEG to measure the neural tracking of naturalistic, continuous, and unpredictable stimulation in the somatosensory domain. Crucially, this approach allows the study of brain activity following individualized, idiosyncratic tactile events to the fingers.
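The TRF approach described above is, at its core, a time-lagged regression between a continuous stimulus feature and the recorded brain signal. The following is a minimal, self-contained sketch (not the authors' analysis pipeline) of how a TRF can be estimated via ridge regression on synthetic data; the function name and parameters are illustrative, and only NumPy is assumed.

```python
import numpy as np

def estimate_trf(stimulus, eeg, fs, tmin, tmax, alpha=1.0):
    """Estimate a temporal response function (TRF) by time-lagged
    ridge regression: eeg(t) ~ sum_k w[k] * stimulus(t - lag_k)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n = len(stimulus)
    # Build the lagged design matrix: one column per stimulus lag.
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:n + lag, j] = stimulus[-lag:]
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w

# Synthetic demo: a known response kernel convolved with a random
# "stimulus", plus noise; the TRF should recover the kernel's peak lag.
rng = np.random.default_rng(0)
fs = 100                                      # sampling rate (Hz)
stim = rng.standard_normal(fs * 60)           # 60 s of stimulus signal
kernel = np.zeros(30); kernel[14] = 1.0       # true response peaks at 140 ms
eeg = np.convolve(stim, kernel, mode="full")[:len(stim)]
eeg += 0.1 * rng.standard_normal(len(stim))   # add measurement noise
lag_times, trf = estimate_trf(stim, eeg, fs, 0.0, 0.3)
print(lag_times[np.argmax(trf)])              # recovered peak lag (s)
```

With clean synthetic data the estimated TRF peaks at the lag of the simulated kernel (0.14 s here), mirroring how, in the study above, the measured tactile TRF peaked at ∼140 ms.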
The impact of face masks on face-to-face neural tracking of speech: auditory and visual obstacles
M. Fantoni, A. Federici, I. Camponogara, G. Handjaras, A. Martinelli, E. Bednaya, E. Ricciardi, F. Pavani, D. Bottari (2024).
Heliyon, 10(15).
DOI: 10.1016/j.heliyon.2024.e34860
Face masks provide fundamental protection against the transmission of respiratory viruses but hamper communication. We estimated the auditory and visual obstacles generated by face masks on communication by measuring the neural tracking of face-to-face speech. To this end, we recorded the EEG while participants were exposed to naturalistic audio-visual speech, embedded in multi-talker noise, in three contexts: (i) no mask (audio-visual information was fully available), (ii) virtual mask (occluded lips, but intact audio), and (iii) real mask (occluded lips and degraded audio). The neural tracking of lip movements and of the sound envelope of speech was measured through backward modeling, that is, by reconstructing stimulus properties from neural activity. Behaviorally, face masks increased listening (phonological) errors in speech content retrieval and perceived listening difficulty. At the neural level, we observed that the occlusion of the mouth abolished lip tracking and dampened neural tracking of the speech envelope at the earliest processing stages. Degraded acoustic information due to face mask filtering instead altered neural tracking at later processing stages. Finally, a consistent link emerged between the increase in perceived listening difficulty and the drop in reconstruction performance of the speech envelope when attending to a speaker wearing a face mask. Results clearly dissociated the visual and auditory impacts of face masks on the face-to-face neural tracking of speech. While face masks hampered the ability to predict and integrate audio-visual speech, the auditory filter generated by face masks impacted the neural processing stages typically associated with auditory selective attention. The link between perceived difficulty and the neural tracking drop provides evidence of a major impact of face masks on the metacognitive levels subtending speech processing.
Distinguishing Fine Structure and Summary Representation of Sound Textures from Neural Activity
Martina Berto, Emiliano Ricciardi, Pietro Pietrini, Nathan Weisz & Davide Bottari (2023).
eNeuro.
https://doi.org/10.1523/ENEURO.0026-23.2023
The auditory system relies on both local and summary representations; acoustic local features exceeding system constraints are compacted into a set of summary statistics. Such compression is pivotal for sound-object recognition. Here, we assessed whether computations subtending local and statistical representations of sounds could be distinguished at the neural level. A computational auditory model was employed to extract auditory statistics from natural sound textures (i.e., fire, rain) and to generate synthetic exemplars where local and statistical properties were controlled. Twenty-four human participants were passively exposed to auditory streams while the electroencephalography (EEG) was recorded. Each stream could consist of short, medium, or long sounds to vary the amount of acoustic information. Short and long sounds were expected to engage local or summary statistics representations, respectively. Data revealed a clear dissociation. Compared with summary-based ones, auditory-evoked responses based on local information were selectively greater in magnitude in short sounds. Opposite patterns emerged for longer sounds. Neural oscillations revealed that local features and summary statistics rely on neural activity occurring at different temporal scales, faster (beta) or slower (theta-alpha). These dissociations emerged automatically without explicit engagement in a discrimination task. Overall, this study demonstrates that the auditory system developed distinct coding mechanisms to discriminate changes in the acoustic environment based on fine structure and summary representations.
Altered neural oscillations underlying visuospatial processing in cerebral visual impairment.
Federici, A., Bennett, C. R., Bauer, C. M., Manley, C. E., Ricciardi, E., Bottari, D., & Merabet, L. B. (2023).
Brain Communications.
https://doi.org/10.1093/braincomms/fcad232
Visuospatial processing deficits are commonly observed in individuals with cerebral visual impairment, even in cases where visual acuity and visual field functions are intact. Cerebral visual impairment is a brain-based visual disorder associated with the maldevelopment of central visual pathways and structures. However, the neurophysiological basis underlying higher-order perceptual impairments in this condition has not been clearly identified, which in turn poses limits on developing rehabilitative interventions. Using combined eye tracking and EEG recordings, we assessed the profile and performance of visual search on a naturalistic virtual reality-based task. Participants with cerebral visual impairment and controls with neurotypical development were instructed to search, locate, and fixate on a specific target placed among surrounding distractors at two levels of task difficulty. We analysed evoked (phase-locked) and induced (non-phase-locked) components of broadband (4-55 Hz) neural oscillations to uncover the neurophysiological basis of visuospatial processing. We found that visual search performance in cerebral visual impairment was impaired compared to controls (as indexed by outcomes of success rate, reaction time, and gaze error). Analysis of neural oscillations revealed markedly reduced early-onset evoked theta [4-6 Hz] activity (within 0.5 s) regardless of task difficulty. Moreover, while induced alpha activity increased with task difficulty in controls, this modulation was absent in the cerebral visual impairment group, identifying a potential neural correlate of deficits in visual search and distractor suppression. Finally, cerebral visual impairment participants also showed a sustained induced gamma response [30-45 Hz]. We conclude that impaired visual search performance in cerebral visual impairment is associated with substantial alterations across a wide range of neural oscillation frequencies.
This includes both evoked and induced components, suggesting the involvement of feedforward and feedback processing, as well as local and distributed levels of neural processing.
Crossmodal plasticity following short-term monocular deprivation.
Federici, A., Bernardi, G., Senna, I., Fantoni, M., Ernst, M. O., Ricciardi, E., & Bottari, D. (2023).
NeuroImage.
A brief period of monocular deprivation (MD) induces short-term plasticity of the adult visual system. Whether MD elicits neural changes beyond visual processing is yet unclear. Here, we assessed the specific impact of MD on neural correlates of multisensory processes. Neural oscillations associated with visual and audio-visual processing were measured for both the deprived and the non-deprived eye. Results revealed that MD changed neural activities associated with visual and multisensory processes in an eye-specific manner. Selectively for the deprived eye, alpha synchronization was reduced within the first 150 ms of visual processing. Conversely, gamma activity was enhanced in response to audio-visual events only for the non-deprived eye within 100–300 ms after stimulus onset. The analysis of gamma responses to unisensory auditory events revealed that MD elicited a crossmodal upweighting for the non-deprived eye. Distributed source modeling suggested that the right parietal cortex played a major role in the neural effects induced by MD. Finally, visual and audio-visual processing alterations emerged for the induced component of the neural oscillations, indicating a prominent role of feedback connectivity. Results reveal the causal impact of MD on both unisensory (visual and auditory) and multisensory (audio-visual) processes and their frequency-specific profiles. These findings support a model in which MD increases excitability to visual events for the deprived eye and to audio-visual and auditory input for the non-deprived eye.
A modality-independent proto-organization of human multisensory areas.
Setti, F., Handjaras, G., Bottari, D., Leo, A., Diano, M., Bruno, V., Tinti, C., Cecchetti, L., Garbarini, F., Pietrini, P., Ricciardi, E. (2023).
Nature Human Behaviour.
The processing of multisensory information is based upon the capacity of brain regions, such as the superior temporal cortex, to combine information across modalities. However, it is still unclear whether the representation of coherent auditory and visual events requires any prior audiovisual experience to develop and function. Here we measured brain synchronization during the presentation of an audiovisual, audio-only or video-only version of the same narrative in distinct groups of sensory-deprived (congenitally blind and deaf) and typically developed individuals. Intersubject correlation analysis revealed that the superior temporal cortex was synchronized across auditory and visual conditions, even in sensory-deprived individuals who lack any audiovisual experience. This synchronization was primarily mediated by low-level perceptual features, and relied on a similar modality-independent topographical organization of slow temporal dynamics. The human superior temporal cortex is naturally endowed with a functional scaffolding to yield a common representation across multisensory events.
Neuroplasticity following cochlear implants.
Pavani, F., & Bottari, D. (2022).
In Handbook of Clinical Neurology (Vol. 187, pp. 89-108). Elsevier.
Davide Bottari
December 2025 (McGill University), keynote lecture at EEG-Workshop event
December 2024 (Scuola Normale Superiore, SNS), invited seminar
April 2024 SIOP (Trieste), keynote lecture
March 2024 Paris Lodron University Salzburg, invited seminar
February 2024 University College of London (UCL), invited seminar
December 2023 IRCN (Tokyo), invited seminar
Alessandra Federici
July 2025 IMRF (International Multisensory Research Forum)
June 2025 ESPCI (European Symposium on Pediatric Cochlear Implantation)
June 2025 Milano Bicocca University, invited seminar
September 2024 WCA (World Conference of Audiology)
September 2024 AIP (Associazione Italiana di Psicologia)
September 2024 SIPF (Società Italiana di Psicofisiologia e Neuroscienze Cognitive)
October 2023 Trinity College Institute of Neuroscience (TCIN), invited seminar
Benedetta Bianchi (Meyer Children Hospital, Florence, Italy)
Eva Orzan (Burlo Children Hospital, Trieste, Italy)
Alessandro Scorpecci (Bambin Gesù Hospital, Rome, Italy)
Francesca Garbarini (Università di Torino, Italy)
Linda Polka (McGill University, Canada)
Nathan Weisz (University of Salzburg, Austria)
Takao Hensch (Harvard University and IRCN, Tokyo)
Stefan Debener (University of Oldenburg, Germany)
Marc Ernst (University of Ulm, Germany)
Lotfi Merabet (Harvard Medical School, USA)
Elena Nava (Università Milano-Bicocca, Italy)
Francesco Pavani (University of Trento, Italy)
Twitter: @SEED_IMTLucca
E-mail: seedlabimt@gmail.com