
Taking apart what brings us together:

a multidimensional approach to the origins of social (dys)functioning

This page describes in detail the behavioral experimental tasks that will be applied in WP1 and the brain networks likely involved in those tasks, as described in WP2.

Experimental Task applied in WP1

Set-up. The figure below (Figure A) illustrates the experimental set-up. (A) Participants will sit comfortably in front of a table, and the experimenter’s confederate, who will act as the interaction partner, will sit on the opposite side. Two pairs of buttons will be located on the table, one pair in front of the experimenter and one in front of the child. The two buttons forming a pair will have the same height (10.5 cm) but different diameters (8 and 2 cm) and will thus afford different movements: (C) the bigger one can be pressed only with a whole-hand press and the smaller one only with a single-finger press. A screen will be located in the middle of the table next to the buttons, on the left from the child’s perspective, so that both the participant and the experimenter will be able to optimally see the visual stimuli appearing on it. Visual stimuli will consist of images of a lion or a frog. Each of the participant’s buttons will be associated with a specific animal: the big button with the lion and the small button with the frog. This association will remain constant for the participant throughout the whole experiment. (B) On each trial, partners play in turn and make the animals appear on the screen.

Figure A. The figure illustrates (A) the experimental set-up, (B) the trial time-line, and (C) the movements afforded by each response button.

JA Task. Participants will play together with the confederate with the aim of making "the animals meet" on the screen. They will be asked to coordinate their actions with the confederate’s actions and make either the same (congruent) or a different (incongruent) animal appear on the screen. For example, a request for the “same animal” implies that if the confederate performs a single-finger press on the small button and thus makes a frog appear on the screen, the participant also has to perform a single-finger press on the small button to make a frog appear; conversely, in the same example, a request for a “different animal” requires the participant to respond with a whole-hand press on the big button to make the lion appear. Thus, participants will have to coordinate with the partner while keeping in mind that they share the goal of making the animals meet on the screen, and will coordinate on the basis of this knowledge. This resembles real-life interactive situations in which the interaction effect is crucial (e.g. when playing a melody together on different instruments).

We will measure the ratio between the congruent and incongruent experimental conditions as an index of JA performance, according to the following rationale. If participants represent the JA as dependent on the shared goal, they can associate their own and the partner’s actions according to it (e.g. associate incongruent actions with obtaining different animals, or congruent actions with obtaining the same animal): this makes both the congruent and the incongruent situation expected. If, on the contrary, participants do not take the shared goal into account, no such association will be made, and incongruent actions are expected to be more difficult because they will interfere with one’s own motor planning. Performance in the JA condition will be compared with that achieved by participants in a control Non-Interactive condition that will be identical in perceptual terms but will differ in the absence of the shared goal. Here, participants will be instructed to make a specific animal (e.g. the lion or the frog) appear on the screen, playing after their partner. In this condition, the partner’s incongruent actions are expected to create interference, as shown by previous studies8,9.
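The congruency-based index described above can be sketched as follows. This is a minimal illustration with made-up reaction times; the function name, data layout, and values are illustrative assumptions, not the project's actual analysis pipeline.

```python
# Hypothetical sketch of the JA performance index: the ratio between
# mean performance in the incongruent and congruent conditions.
# All data below are invented for illustration.

def congruency_ratio(congruent_rts, incongruent_rts):
    """Return mean incongruent RT divided by mean congruent RT.

    A ratio close to 1 suggests no interference from the partner's
    incongruent action (consistent with representing the shared goal);
    a ratio well above 1 suggests interference.
    """
    mean_con = sum(congruent_rts) / len(congruent_rts)
    mean_inc = sum(incongruent_rts) / len(incongruent_rts)
    return mean_inc / mean_con

# Example with made-up reaction times (seconds):
ratio = congruency_ratio([0.52, 0.48, 0.50], [0.61, 0.66, 0.59])
```

A participant who represents the shared goal should show a ratio near 1 in the JA condition but a larger ratio in the Non-Interactive control, where the partner's incongruent actions are expected to interfere.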

Prediction Task. Participants will be required to observe on the screen clips of the partner’s movements in which the action is cut at different lengths (from 10% to 90% of its duration, in steps of 5%). Participants will be asked to indicate which button (the big or the small one) the confederate is going to press on the basis of the observed kinematics, independently of the movement direction (left/right side), which will be randomised. This task resembles real-life situations in which we need to rapidly infer what action other people are performing (e.g. passing a glass of water) in order to respond rapidly and correctly (e.g. grasping the glass).

Perspective Taking Task. Perspective-taking is a skill that has been defined in either perceptual or cognitive terms (see Table 2), and it has thus been investigated with experimental tasks that focus on the ability to adopt another's perspective in either physical (spatial) or mental terms. As, on the basis of the previous literature, it is not possible to decide a priori which of these two connotations of perspective-taking is more relevant to joint action, we will use two experimental paradigms that explore both aspects; we will preliminarily test whether they follow similar or different developmental pathways, before comparing them with those shown by JA and action prediction abilities.

Cognitive Task. Participants will play together with the confederate by making "the animals meet" on the screen. The set-up is identical to that of the JA task. Crucially, however, in some blocks of this task participants will be informed that the association between buttons and animals is inverted for the confederate’s buttons only: her buttons will “work in a different way” than the participant’s, i.e. with an inverted button-animal association. Thus, the participant would observe an action (e.g. a whole-hand press) that for them would lead to one effect (e.g. the lion) but that leads to a different effect (e.g. the frog) for the confederate. To prepare their response in advance, participants then need to “put themselves in the confederate’s shoes” and take into account that the other’s buttons work differently. This task resembles real-life situations in which we need to coordinate with partners who contribute to the shared goal in a way, or via means, different from the ones we would have chosen, e.g. when others mix ingredients in a different order than our favorite one while we are cooking together.
In these instances, we are required to assume the other person’s perspective to correctly predict what he or she is doing and to understand why he or she is doing it, in order to efficiently coordinate with and complement the partner’s action. This requires the flexibility to shift from our own (egocentric) to the other’s (allocentric) perspective (Epley et al., 2004). For this reason, we will use the difference between performance in the reversed vs. non-reversed association condition as an inverse index of the ability to take the partner’s perspective while playing with him/her.

Perceptual Task. We will measure spatial perspective-taking abilities by using the same experimental set-up used in the other tasks, with a small change. Here, a unidirectional dark glass will be placed on the monitor so as to prevent the confederate from seeing half of the screen. Participants will be required to “make sure both you and your partner see the lion/frog on the screen” when pressing their buttons and making the animals appear on the screen. Participants’ button presses will randomly make the animals appear on the part of the screen either visible or invisible to the confederate, and we will test whether participants take this information into account while playing and press the buttons until the partner also sees the required animal on the screen.

Importantly, modified versions of the tasks described above will be applied in WP2 in order to identify the related neural correlates, and in WP3 in order to train subjects' social functioning via either the "JA" or the "Perspective Taking" task (depending on the winning Scenario identified in the pioneer phase, see Project Proposal).

Data Analysis.

In all tasks, participants’ performance will be measured in terms of response accuracy (percentage of correct trials over the total) and reaction times (delay from the go-signal to the participant’s response). Although a general age-dependent improvement (i.e. higher accuracy and faster reaction times) may be expected as children grow up, and a general age-dependent decline in performance (i.e. lower accuracy and slower reaction times) may be expected in old age, we expect additional task-specific effects reflecting different developmental pathways. Analyses. Participants’ performance will be analyzed with generalized linear mixed models, which allow individuals’ performance to be modeled as clustered within age groups. Simple slope analyses will first be performed separately for data recorded in Children (age 2-17), Young adults (age 18-35), Older adults (age 35-59), and Elderly participants (age 60-80), to describe the developmental pathways of (i) JA, (ii) action prediction, and (iii) perspective-taking performance characteristic of each age group. A path analysis will then be performed to identify whether the causal associations between performance in the three tasks change across the lifespan (see Figure B).
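The per-age-group "simple slope" step can be illustrated with a minimal sketch: within each group, estimate the slope of performance on age. The data, group labels, and ordinary-least-squares shortcut below are illustrative assumptions; the actual analysis will use generalized linear mixed models, not this simplified fit.

```python
# Illustrative sketch of simple slopes per age group: within each
# group, fit a least-squares slope of accuracy on age.
# Data and numbers are invented for illustration only.

def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def slopes_by_group(records, groups):
    """records: list of (age, accuracy); groups: {label: (lo, hi)}."""
    out = {}
    for label, (lo, hi) in groups.items():
        pts = [(age, acc) for age, acc in records if lo <= age <= hi]
        if len(pts) >= 2:
            xs, ys = zip(*pts)
            out[label] = ols_slope(xs, ys)
    return out

# Made-up accuracies: improvement in childhood, decline in old age.
data = [(5, 0.60), (10, 0.75), (15, 0.85), (65, 0.80), (75, 0.70)]
groups = {"children": (2, 17), "elderly": (60, 80)}
slopes = slopes_by_group(data, groups)
```

With these toy data the children's slope comes out positive (age-related improvement) and the elderly slope negative (age-related decline), which is the qualitative pattern the simple slope analyses are designed to describe per task.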

Figure B. The figure illustrates the causal relationships that will be tested by applying a path analysis on data collected in WP1.

Neural Correlates of Joint Action, Action Prediction and Perspective Taking (WP2)

In WP2 we will describe the neural correlates of our JA task as identified by functional magnetic resonance imaging (fMRI).

As described in the Project Proposal, we will test whether the JA task (as compared to a control, non-interactive task) recruits prefrontal areas responsible for "abstract" social cognition or rather a fronto-parietal network responsible for motor planning and action prediction. To validate our findings, two localiser tasks will be added to test, with a conjunction analysis in the very same sample, whether the areas recruited during the JA task indeed correspond to areas involved in well-established tasks requiring social cognition and/or action prediction.
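The logic of the conjunction analysis can be sketched as a minimum-statistic conjunction: a voxel counts as shared between the JA task and a localiser only if it exceeds the threshold in both maps. The toy maps and threshold below are invented for illustration; the actual analysis will be run on whole-brain fMRI statistic maps.

```python
# Hedged sketch of a minimum-statistic conjunction between two
# statistic maps (e.g. JA task vs. a localiser task).
# Maps and threshold are toy values, not real fMRI data.

def conjunction(map_a, map_b, threshold):
    """Voxelwise minimum of two statistic maps, thresholded.

    A voxel survives only if it exceeds the threshold in BOTH maps,
    i.e. min(t_a, t_b) > threshold; otherwise it is set to 0.
    """
    return [min(a, b) if min(a, b) > threshold else 0.0
            for a, b in zip(map_a, map_b)]

# Toy t-maps over five "voxels", thresholded at t > 3.1:
ja_map = [4.2, 1.0, 3.5, 5.0, 2.9]
localiser_map = [3.8, 4.5, 1.2, 4.0, 3.3]
overlap = conjunction(ja_map, localiser_map, 3.1)
# Nonzero only where both maps exceed the threshold.
```

In this toy example only the first and fourth voxels survive, because they exceed the threshold in both maps; this is the kind of overlap the conjunction analysis will quantify between the JA task and each localiser.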

Importantly, the results of previous review studies applying a quantitative meta-analytic approach suggest that tasks requiring social cognition and action prediction indeed differ substantially in the brain activations they elicit in fMRI.

On the one hand, Molenberghs and colleagues (2016) applied an activation likelihood estimation (ALE) meta-analysis to 144 datasets (involving 3150 participants) and showed that cognitive tasks requiring social cognition consistently activate the prefrontal cortex bilaterally (both in its dorsal and medial portions), the right temporo-parietal junction, and the middle temporal gyrus bilaterally (see Figure C).

Figure C. The figure illustrates the results of the ALE meta-analysis of studies reporting fMRI activations during tasks requiring social cognition, as described by Molenberghs et al. (2016) "Understanding the minds of others: A neuroimaging meta-analysis". Neuroscience & Biobehavioral Reviews 65:276-291.

On the other hand, using a similar approach, Caspers and colleagues (2010) reviewed 129 fMRI studies on action observation tasks and showed that they consistently activate fronto-parietal brain regions (see Figure D). Importantly, prefrontal regions were not significantly activated in these tasks.

Figure D. The figure illustrates the results of the ALE meta-analysis of studies reporting fMRI activations during action observation tasks, as described by Caspers et al. (2010) "ALE meta-analysis of action observation and imitation in the human brain". NeuroImage 50:1148-1167.

Although the "Perspective Taking" and "Action Prediction" tasks applied in our project are not identical to the ones included in the reviews described above, they likely require cognitive processes similar to those involved in social cognition tasks (in the case of Perspective Taking) and in action observation tasks (in the case of Action Prediction). Thus, the results of these reviews, reported in Figures C and D, illustrate the brain networks expected to be engaged by our two tasks. However, in order to avoid reverse inference, we will test whether this is the case by measuring the neural correlates of these tasks directly in our participants, and we will quantitatively measure the overlap of the related activations with those emerging during the JA task by means of a conjunction analysis.