More than 1600 full-term, healthy children from families whose primary language is English will be recruited across 13 collaborating labs, with at least 200 children at each of 8 ages (3, 6, 12, 18, 24, 36, 48, 60 months).
There are three sources of data:
The “cross-sectional sample” refers to these 1600 children (200 independently recruited at each of 8 ages). These data will be used to derive preliminary norms. See Table 1 for details.
The “longitudinal groups” will be a subset of the above sample. Three groups of 150 children will be followed longitudinally for 3 time points/waves each. One will be tested at 6, 12, and 18 months, a second (separate) group at 18, 24, and 36 months, and a third at 36, 48, and 60 months. The first assessment of each group (at 6, 18, and 36 months) is also part of the cross-sectional sample. See Table 2 for details.
An independent “8-wave longitudinal sample” (N = 100) will also be available from a completely within-subject longitudinal design spanning 3 to 60 months from our current R01, plus longitudinal follow-ups at 48 and 60 months. These data will be pooled with the longitudinal groups, yielding a final sample of about 250 longitudinal children.
Each lab will contribute 80 or more participants, and the aggregate dataset will provide a mix of high, middle, and low SES participants from homes where English is spoken to the child, with a small group (from 3 of our locations: Miami, Ithaca, and Los Angeles) hearing a second language (not English) less than 30% of the time. All participants will complete a common background questionnaire detailing race, ethnicity, demographic information, SES, % exposure to different languages, sibling information, and other variables to be used as covariates and/or to screen data.
Ages: 3, 6, 12, 18, 24, 36, 48, 60 months
Home Language: English (spoken to the child at least 70% of the time)
Inclusion Criteria (a screening sketch follows this list):
Gestational Age:
3 and 6 months: 38 to 41 weeks
12 months and older: 37 to 41 weeks
Birthweight: 5 lbs or more
APGAR: scores of 8 or higher (if available)
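The criteria above can be expressed as a simple screening check. The sketch below is illustrative only; the field names (age_months, gestational_weeks, birthweight_lbs, pct_english, apgar) are placeholders, not MultiNet variable names.

```python
# Minimal eligibility screen based on the inclusion criteria above.
# All field names are illustrative placeholders, not MultiNet variable names.

def is_eligible(age_months, gestational_weeks, birthweight_lbs,
                pct_english, apgar=None):
    """Return True if a child meets the stated inclusion criteria."""
    if age_months not in (3, 6, 12, 18, 24, 36, 48, 60):
        return False
    # English spoken to the child at least 70% of the time
    if pct_english < 70:
        return False
    # Gestational age: 38-41 weeks at 3 and 6 months; 37-41 weeks at 12+ months
    min_weeks = 38 if age_months in (3, 6) else 37
    if not (min_weeks <= gestational_weeks <= 41):
        return False
    # Birthweight of 5 lbs or more
    if birthweight_lbs < 5:
        return False
    # APGAR of 8 or higher, applied only when a score is available
    if apgar is not None and apgar < 8:
        return False
    return True
```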
At each age (3, 6, 12, 18, 24, 36, 48, and 60 months), each participant will receive the MAAP (~8 min) and IPEP (~8 min) in a single visit, as well as age-appropriate standardized language and social outcome measures (12 months and older). Each participant will also receive an assessment of nonverbal cognitive functioning to characterize the sample and serve as a covariate (not an outcome). See Table 3 for details.
We strongly encourage all MultiNet sites to film all child-based assessments (i.e., not parental questionnaires like the CDI or ITSEA). Filming protects against data loss and provides a source for additional coding. We also hope to upload videotapes of all assessments to Databrary.
In response to the COVID-19 pandemic and the anticipated difficulties of conducting in-person data collection, we have transitioned MultiNet assessments to remote administration using online data collection.
The Multisensory Attention Assessment Protocol (MAAP; Bahrick, Todd, & Soska, 2018), adapted from infant 2-choice intermodal preference protocols, presents two side-by-side visual events, one in synchrony with a natural soundtrack, along with a central distractor event. It integrates, into a single test, assessments of three fundamental “building blocks” of attention that support typical development of social and communicative functioning. The MAAP assesses 1) duration of looking (attention maintenance), 2) accuracy of intersensory processing (synchrony matching/detection), and 3) speed of attention shifting to audiovisual social and nonsocial events under high competition (requiring disengagement from a distractor event) or low competition (no disengagement required). The MAAP thus provides a uniquely integrated picture of a variety of interrelated attention skills typically studied separately. For an example video clip, please visit https://nyu.databrary.org/volume/326.
There are 24 15-s trials across 2 blocks of 12 trials: one block of social events (women speaking with positive affect) and one of nonsocial events (objects dropping into a clear container). On each trial, a 3-s central visual stimulus (dynamic geometric pattern) is followed by two side-by-side dynamic lateral events for 12 s, one woman/object in synchrony with its natural, centrally located soundtrack and the other out of synchrony. On half the trials the central distractor remains on while the lateral events are presented, providing irrelevant competing stimulation (high competition); on the other half it is turned off when the lateral events appear (low competition, requiring no disengagement from the central event). Sessions are videotaped and coded live by trained observers blind to the lateral positions of the events. The MAAP shows excellent interobserver reliability (n = 11; duration: r = .90; accuracy: r = .91; speed: r = .90).
MAAP measures: Three measures are assessed on each trial (a computational sketch follows this list):
Duration of looking (attention maintenance): Proportion of available looking time (PALT) to the two lateral events
Accuracy of intersensory processing: Proportion of total looking time (PTLT) to the sound-synchronous lateral event
Speed of shifting/disengaging: Latency or reaction time (RT) to shift attention to either lateral event
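As a rough illustration of how these three measures map onto trial-level data, the sketch below assumes hypothetical per-trial looking-time records; the 12-s lateral event period follows the MAAP description above, and the field names are placeholders rather than MultiNet coding variables.

```python
# Sketch of per-trial MAAP measures under the assumptions stated above.

LATERAL_DURATION = 12.0  # seconds of side-by-side lateral events per trial

def maap_trial_measures(look_sync, look_async, rt_to_lateral):
    """look_sync / look_async: total looking (s) to the sound-synchronous and
    asynchronous lateral events; rt_to_lateral: latency (s) to first shift to
    either lateral event (None if the child never shifted)."""
    total_lateral_looking = look_sync + look_async
    # Duration of looking: proportion of available looking time (PALT)
    palt = total_lateral_looking / LATERAL_DURATION
    # Accuracy of intersensory processing: proportion of total looking time
    # (PTLT) to the sound-synchronous event (undefined with no lateral looking)
    ptlt_sync = (look_sync / total_lateral_looking
                 if total_lateral_looking > 0 else None)
    # Speed of shifting/disengaging: reaction time to either lateral event
    return {"PALT": palt, "PTLT_sync": ptlt_sync, "RT": rt_to_lateral}
```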
Whereas the MAAP focuses on 3 measures under high and low competition, the Intersensory Processing Efficiency Protocol (IPEP; Bahrick, Soska, & Todd, 2018) provides a more fine-grained measure of intersensory processing: the speed and accuracy of detecting a sound-synchronous target event amidst 5 asynchronous distractors. The IPEP assesses two measures of accuracy, 1) accuracy (frequency) of finding the target event and 2) accuracy (duration) of attending to the target event after it has been found, as well as 3) speed of finding the target event, for both social and nonsocial events. It is a more difficult and sensitive measure of intersensory processing and successfully indexes developmental change from infancy through adulthood. For an example video clip, please visit https://nyu.databrary.org/volume/336.
On each trial, participants view a 3 x 2 grid of 6 dynamic visual events along with the synchronized soundtrack to one (target) event. There are 48 8-s trials across 4 blocks of 12 trials (2 social blocks, 2 nonsocial blocks). Social events consist of 6 women each reciting a different story with positive affect. Nonsocial events consist of objects striking a surface in an erratic temporal pattern, creating percussive sounds. The soundtrack to each of the 6 faces/objects is played (in one of several random orders) within each block.
IPEP measures: Three measures are assessed on each trial and derived from eye-tracking (a computational sketch follows this list):
Accuracy (frequency): Proportion of total trials on which the target was found (PTTF)
Accuracy (duration): Proportion of total looking time (PTLT) to the target event
Speed: Latency or reaction time (RT) to shift attention to the sound-synchronous target event
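The sketch below illustrates how these measures could be computed from hypothetical per-trial eye-tracking records holding looking time to the target, total looking time, and latency to the target (None if the target was never fixated). Field names are placeholders, and averaging across trials is one possible aggregation, not a MultiNet specification.

```python
# Sketch of IPEP measures under the assumptions stated above.

def ipep_measures(trials):
    """trials: list of dicts with keys 'look_target', 'look_total', 'rt_target'."""
    found = [t for t in trials if t["rt_target"] is not None]
    # Accuracy (frequency): proportion of total trials on which the target was found (PTTF)
    pttf = len(found) / len(trials) if trials else None
    # Accuracy (duration): per-trial proportion of total looking time to the
    # target (PTLT), averaged over trials with any looking
    ptlts = [t["look_target"] / t["look_total"] for t in trials if t["look_total"] > 0]
    mean_ptlt = sum(ptlts) / len(ptlts) if ptlts else None
    # Speed: mean latency (RT) to shift to the sound-synchronous target,
    # over trials on which it was found
    mean_rt = sum(t["rt_target"] for t in found) / len(found) if found else None
    return {"PTTF": pttf, "mean_PTLT_target": mean_ptlt, "mean_RT": mean_rt}
```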
A 20-40 minute, widely used parent report assessing receptive (words the child understands) and expressive (words the child understands and says) vocabulary. There are 2 age-appropriate CDI forms: Words & Gestures (12 and 18 months) and Words & Sentences (24 months; optional administration at 18 months). Raw scores (vocabulary counts) and percentiles can be calculated.
A 10-15 minute, standardized assessment of expressive vocabulary and word retrieval. The EVT will be administered at 36, 48, and 60 months. Standard scores and percentiles can be calculated.
A 25-30 minute parent report assessing 4 domains (internalizing, externalizing, dysregulation, social competence). The social competence domain comprises 5 subscales (attention, empathy, peer relations, imitation/play, mastery motivation). ITSEA social competence predicts observational measures of attachment, temperament, mastery, emotion regulation, coping, and joint attention. The ITSEA will be administered at 12, 18, and 24 months of age. The ITSEA has strong internal consistency, α = .90, and test-retest reliability, r = .90. Standard scores and percentiles can be calculated.
A 10-25 minute parent report that includes social skills subscales (e.g., cooperation, engagement, self-control, empathy) and (similar to the ITSEA) problem behaviors (externalizing, internalizing). The SSIS will be administered at 48 and 60 months of age. It predicts a range of observational measures (e.g., adaptive functioning, inhibitory control) and has strong internal consistency, α = .95, and test-retest reliability, r = .87. Standard scores and percentiles can be calculated.
The Ages and Stages Questionnaire (ASQ; Squires, Bricker, & Twombly, 2009) is a 10-15 minute parent report assessing 5 domains (communication, gross motor, fine motor, problem solving, and personal-social). The ASQ will be administered at 3, 6, 12, 18, 24, 36, 48, and 60 months. Total scores and developmental cutoffs can be calculated.
Databrary (https://nyu.databrary.org/) is an online data-sharing library and will serve as the central hub for uploading and sharing all data (videos, raw looking/eye-tracking data, processed MAAP and IPEP variables, coding of outcomes, demographic information). MAAP/IPEP variables will be calculated, linked with background information into a single flat file for each participant, and uploaded weekly by each site.
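One way the weekly flat-file linkage could be scripted is sketched below, assuming each site exports processed MAAP/IPEP variables and background information as CSV files keyed by a shared participant ID. The file and column names (including participant_id) are illustrative assumptions, not MultiNet conventions.

```python
# Sketch of building a per-participant flat file for weekly upload,
# under the assumptions stated above.

import pandas as pd

def build_flat_file(maap_csv, ipep_csv, background_csv, out_csv):
    maap = pd.read_csv(maap_csv)              # processed MAAP variables
    ipep = pd.read_csv(ipep_csv)              # processed IPEP variables
    background = pd.read_csv(background_csv)  # demographics, SES, language exposure
    # One row per participant: background joined with MAAP and IPEP summaries
    flat = (background
            .merge(maap, on="participant_id", how="left")
            .merge(ipep, on="participant_id", how="left"))
    flat.to_csv(out_csv, index=False)
    return flat
```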
Our data management team (Co-I Todd, consultant Kasey Soska, postdoc Bret Eschman, and Databrary consultant Rick Gilmore) will oversee compiling, hosting, and sharing the MultiNet dataset on Databrary. This will include:
Overseeing and facilitating the upload and tracking of data to Databrary from remote sites
Ensuring completeness and cleanliness (e.g., uniformity of naming conventions, background information) of uploaded files
Marking sessions for inclusion in MultiNet after quality control has been completed
Collating processed MAAP, IPEP, and outcome data
Generating an aggregated dataset on Databrary for analyses by the MultiNet sites
The Aggregate MultiNet Database will include data from all participants and all of their assessments across all sites. It will contain all useable data appropriate for conducting analyses (e.g., mean RTs across all MAAP trials, total number of words understood from the CDI, standardized scores from the WPPSI) and will be developed to be shared and explored by all MultiNet investigators. From this database we will derive preliminary norms for performance on the MAAP and IPEP. We will create the Aggregate Database by combining data across all Site Databases, which will be uploaded using a common format across sites.
Detail and summary databases. Each measure (MAAP, IPEP, language, social, and cognitive outcomes) will have one “detail” and one “summary” database.
Detail Databases: These databases will consist of data from all participants at the most basic (or lowest) level of detail (e.g., individual looking times from the MAAP, visual fixations derived from eye-tracking on the IPEP, individual item responses from the CDI). They will be useful for inspecting and screening “raw” data.
Summary Databases: Summary Databases will be derived from Detail Databases and will include data at a level that is most useful for analysis (e.g., mean scores across trials on the MAAP, total number of words spoken on the CDI).
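As an illustration of how a Summary Database could be derived from a Detail Database, the sketch below collapses hypothetical trial-level MAAP records (one row per participant per trial) to one summary row per participant; the column names are placeholders rather than MultiNet variable names.

```python
# Sketch of deriving a Summary Database from a Detail Database,
# under the assumptions stated above.

import pandas as pd

def summarize_maap(detail: pd.DataFrame) -> pd.DataFrame:
    """Collapse trial-level MAAP records to one summary row per participant."""
    return (detail
            .groupby(["participant_id", "age_months"], as_index=False)
            .agg(mean_PALT=("PALT", "mean"),
                 mean_PTLT_sync=("PTLT_sync", "mean"),
                 mean_RT=("RT", "mean"),
                 n_useable_trials=("PALT", "count")))
```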
Useable and reject data. Data will need to be clearly identified as Useable or Reject. Categorizing data as useable vs. reject can take place at various levels, including the trial level (e.g., individual MAAP-IPEP trial; individual item on parent questionnaire) or at the protocol level (e.g., entire MAAP-IPEP protocol). A participant may have rejected data at the trial level (e.g., becomes fussy during the last 5 trials of the MAAP protocol) but still have sufficient useable trial data for the protocol to be considered useable.
The following categories will be adopted for categorizing useable vs. reject data at both the trial and protocol level (they were created with MAAP and IPEP data in mind but can be extended to outcome measures as well); a coding sketch follows the list below.
Definite reject (DR): Very fussy, crying, or asleep (for 80% or more of that trial), or extreme parental interference (e.g., turning child away from screen or covering eyes)
Possible reject (PR): Not attending well to the task, somewhat fussy or sleepy (for most of that trial), or some parental interference
Experimenter or Equipment error (ER)
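The sketch below illustrates one way these categories could be applied when deciding whether a protocol is useable. The category codes follow the list above; the minimum number of useable trials is a placeholder, since the protocol-level threshold is not specified here.

```python
# Sketch of trial- and protocol-level useability flagging,
# under the assumptions stated above.

REJECT_CODES = {"DR", "PR", "ER"}   # definite reject, possible reject, error
MIN_USEABLE_TRIALS = 12             # illustrative threshold only, not a MultiNet rule

def protocol_useable(trial_codes, min_useable=MIN_USEABLE_TRIALS):
    """trial_codes: one entry per trial, either a reject code ('DR', 'PR', 'ER')
    or None for a useable trial. Returns True if the protocol is useable."""
    useable = sum(1 for code in trial_codes if code not in REJECT_CODES)
    return useable >= min_useable
```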
Site PIs are encouraged to include any additional measures collected on their MultiNet participants (after first completing all required MultiNet assessments). These additional measures (e.g., fMRI, EEG, temperament scales, additional tests of social, language, or cognitive functioning) will create the MultiNet Supplemental Databases and will be shared with other MultiNet investigators. Complete useable data should be included (with reject data clearly marked using our standardized list).