Analysis

Statistical analysis for object lifting studies

This section presumes you've done an object lifting project examining how the size-weight illusion, fingertip force prediction, or both change as a function of some variable (e.g., fatigue, colour, etc.). It also assumes that you've finished preprocessing and sorting all your grip/load/perceptual data as outlined in the pre-processing section of this wiki.

At this point, you should have a giant rectangle of data in Excel, with every row representing a participant, and each column representing different trials nested within different conditions nested within different dependent variables. You are now ready to copy and paste this massive chunk of data into the stats program of your choice (JASP, JAMOVI, or SPSS). Remember, if you use SPSS you will need to create the variable names manually (with some Excel help) to paste in. If you want to use JASP or JAMOVI, simply save your big rectangle of data as a .csv file, which you'll be able to open directly with either program.
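If you'd rather script this step than do it by hand, here is a minimal sketch using pandas. The file name and the column-naming scheme are just placeholders - swap in whatever your preprocessing actually produced.

```python
import pandas as pd

# Each row is a participant; columns like "small_trial01_gripRate"
# encode trial nested within condition nested within dependent variable.
# "lifting_data.xlsx" is a hypothetical file name.
wide = pd.read_excel("lifting_data.xlsx")

# JASP and JAMOVI can open a plain .csv directly.
wide.to_csv("lifting_data.csv", index=False)
```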

The first thing you want to do is visualize your data. Usually the best way to do this is to run a condition x trial ANOVA (i.e., don't average across your repeated trials within a condition - just plot them all), which will give you a graph that looks a bit like one of the ones from Figure 2 in this paper. This figure is a pretty good representation of what we'd expect to see in a typical weight illusion study - constant differences between the reported heaviness for the large and small objects across trials (perception - top graph). Statistically, this should give you a main effect of object size (the size-weight illusion), a main effect of trial (stuff feels heavier overall the more you lift), and no size x trial interaction (the magnitude of the illusion does not change). By contrast, with the grip and load forces you should see large differences between the force rates applied to each object on the first few trials, which are quickly wiped out (fingertip-force adaptation, as seen in the middle and bottom graphs). Statistically, this pattern of data usually leads to a main effect of size (driven by the large differences in the early trials), sometimes a main effect of trial (which is meaningless), and a strong interaction between size and trial (showing the fingertip force adaptation itself). Graphs like this are usually worth including in a thesis/paper simply because they provide a nice overview of the data and let the reader see that everything is as it should be. But they don't usually address your experimental hypothesis...
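If you prefer scripting to JASP/JAMOVI for this overview step, the sketch below shows the same idea in Python, assuming a long-format table with hypothetical columns named participant, size (small/large), trial, and rating. The pingouin package is one option for the repeated-measures ANOVA; treat the column names and file name as assumptions.

```python
import pandas as pd
import pingouin as pg
import matplotlib.pyplot as plt

# Hypothetical long-format perceptual data: one row per lift.
long = pd.read_csv("perception_long.csv")

# Two-way repeated-measures ANOVA: object size x trial number.
aov = pg.rm_anova(data=long, dv="rating",
                  within=["size", "trial"], subject="participant")
print(aov)

# Plot every trial rather than averaging within a condition.
means = long.groupby(["size", "trial"])["rating"].mean().unstack("size")
means.plot(marker="o")
plt.xlabel("Trial")
plt.ylabel("Mean heaviness rating")
plt.show()
```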

Now let's think about the experiment, and testing our hypotheses. Generally speaking, with object lifting studies we are using manipulations of the stimulus to induce an effect on perception/action (such as size manipulations to induce a size-weight illusion), and then we are looking to see whether this effect is made bigger or smaller by our experimental manipulation.

So first off, perception. Using the size-weight illusion as an example, the difference between the ratings given to the smallest object and the ratings given to the largest object will give you a simple metric of how 'big' the illusion was in that particular condition. Typically the SWI doesn't change over time (no size x trial interaction - verify this with your graph), so we can average the ratings given to each object in each condition across all the repeats (usually 8-12 repeated trials) and then subtract the average rating given to the large object from the average rating given to the small object. Then compare the average difference score (i.e., the strength of the illusion) across your groups/conditions with a simple ANOVA or t-test (depending on how many groups/conditions you have), which will tell you if the strength of the illusion has been affected by your manipulation.
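As a rough sketch of that difference-score calculation in Python (assuming the same hypothetical long-format table as above, now with a condition column, and two within-subject conditions labelled condition_A and condition_B - all of these names are placeholders):

```python
import pandas as pd
from scipy import stats

long = pd.read_csv("perception_long.csv")

# Average each participant's ratings across the repeated trials,
# separately for each object size within each condition.
avg = (long.groupby(["participant", "condition", "size"])["rating"]
           .mean().unstack("size"))

# Illusion magnitude = small-object rating minus large-object rating.
swi = (avg["small"] - avg["large"]).unstack("condition")

# With two within-subject conditions a paired t-test does the job;
# with more conditions (or between-subject groups), swap in the
# appropriate ANOVA or independent-samples test.
t, p = stats.ttest_rel(swi["condition_A"], swi["condition_B"])
print(f"t = {t:.2f}, p = {p:.3f}")
```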

The other side of the coin is action - did your manipulation affect the way that your sensorimotor system planned to pick up the object? This is done in a similar way to the perceptual analysis above, but instead of averaging across all 10 repeats of each object/condition combination, we only want to look at this effect on the first trial. Why? Think back to the middle and lower graphs of Figure 2 of this paper - the effect is really only there on early trials, before automatic error corrections kick in and mess everything up for our attempt to get a peek behind the curtain. So this time you subtract your smallest object's initial force from your largest object's initial force, to get a measure of 'sensorimotor prediction', and see how it varies across conditions - just like the perceptual measures. This number should be largest on trial 1, and get progressively smaller over the course of the experiment. There are a few ways you can go here, but the simplest thing to report is how this trial 1 difference score (the sensorimotor prediction) varies across your groups/conditions, using the same statistical test you performed on the perceptual data.
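The force-side analysis is nearly the same sketch, just restricted to trial 1. Again, the file, column names (participant, condition, size, trial, grip_rate), and condition labels are assumptions - substitute your own peak grip or load force rate measure.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format force data: one row per lift.
forces = pd.read_csv("forces_long.csv")

# Keep only the very first lift of each object in each condition.
first = forces[forces["trial"] == 1]

# Sensorimotor prediction = large-object force rate minus small-object force rate.
rates = (first.groupby(["participant", "condition", "size"])["grip_rate"]
              .mean().unstack("size"))
prediction = (rates["large"] - rates["small"]).unstack("condition")

# Compare across conditions with the same test used for perception.
t, p = stats.ttest_rel(prediction["condition_A"], prediction["condition_B"])
print(f"t = {t:.2f}, p = {p:.3f}")
```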

Most of my papers follow this strategy - after that, it's simply a matter of making sure you point out how your stats test your hypothesis (rather than simply showing all the differences) and nicely visualizing your data. I think this paper of mine is one of my better efforts at doing this.