EEG Data Processing
If necessary, filter out R markers (R means response, as opposed to S for stimulus). These are unnecessary and can be removed through Edit Markers: select all response markers and delete them.
Do a non-specific segmentation: select all relevant markers (encoding, retrieval, and responses) and segment from -200 ms before each marker to ~1800 ms after (this gets rid of unnecessary downtime).
Filter for electrical noise (under Data Filtering, IIR Filters). I used a 40 Hz high cutoff (i.e., passing frequencies up to 40 Hz); you may end up using different settings.
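If you ever want to reproduce these first few steps outside the Analyzer GUI, here is a minimal sketch in MNE-Python. The file name, marker labels, and the exact order of operations are assumptions for illustration, not the Analyzer pipeline itself.

```python
import mne

# Load the raw BrainVision recording; the file name is a placeholder.
raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)

# Low-pass at 40 Hz to attenuate electrical noise, as in the IIR
# filtering step above; your cutoffs may differ.
raw.filter(l_freq=None, h_freq=40.0)

# Build events from the recorded markers and keep only the stimulus
# markers (the "R" response markers are dropped here).
events, event_id = mne.events_from_annotations(raw)
stim_ids = {name: code for name, code in event_id.items()
            if name.startswith("Stimulus")}

# Non-specific segmentation: -200 ms before to ~1800 ms after each marker.
epochs = mne.Epochs(raw, events, event_id=stim_ids,
                    tmin=-0.2, tmax=1.8, baseline=None, preload=True)
```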
Ocular Correction ICA! This is an important step for experiments that involve lots of eye movement, and irrelevant for experiments that don't! Leave the settings unchanged until the Ocular Activity tab. Select channel 66 for VEOG (vertical eye movement) and use the common reference (not "use reference channel"). Select channel 65 for HEOG (horizontal eye movement) and, again, use the common reference. It is very important in the EEG setup that the ground and the electrodes to the left and right of the eyes are part of electrode 65 (the one plugged into the left-hand port on the amplifier), and that the electrodes above and below an eye are plugged into the port on the right (electrode 66). No other settings need to be changed. Once this step has finished running (it is slow), change the view to Correction and Show Original. Then look for blink and horizontal eye movement components and check that they have filtered out the unnecessary signal by turning them on and off.
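Continuing the same sketch, a rough MNE-Python stand-in for the ocular correction; the channel names "HEOG"/"VEOG" and the component count are assumptions, and Analyzer's Ocular Correction ICA does its own component selection rather than this automated one.

```python
from mne.preprocessing import ICA

# Tell MNE which channels are the EOG pair; "HEOG"/"VEOG" are
# placeholder names for channels 65 and 66 in your montage.
epochs.set_channel_types({"HEOG": "eog", "VEOG": "eog"})

# Fit ICA on the segmented data (the component count is a guess).
ica = ICA(n_components=20, random_state=97)
ica.fit(epochs)

# Find the components that correlate with the EOG channels (the blink
# and horizontal eye movement components you would otherwise inspect
# by eye in Analyzer) and remove them.
eog_indices, eog_scores = ica.find_bads_eog(epochs)
ica.exclude = eog_indices
epochs = ica.apply(epochs)
```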
Segment by condition! In the IOR and basic cuing tasks, this means separating into the two localizers, LCatch, RCatch, a grouped valid/invalid, the individual LVal, RVal, etc., and New, OldInv, OldVal. Only the retrieval blocks need any condition-specific filtering: New, for example, should only include trials on which the subject responded "new" within the 5 s period after the target appeared (place this in the Advanced Boolean Expression field as S3 (100, 5000) or S4 (100, 5000), etc.).
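The Advanced Boolean Expression does this trial filtering for you inside Analyzer. If you ever need the same selection in a script, one hypothetical way to express it over the events array from the earlier sketch looks like this; the numeric marker codes are made up for illustration.

```python
import numpy as np

# Sampling rate and the full events array come from the earlier sketch.
sfreq = raw.info["sfreq"]
TARGET_CODE = 10             # hypothetical code for the target marker
NEW_RESPONSE_CODES = {3, 4}  # hypothetical codes for "new" responses (S3/S4)

good_target_samples = []
for sample, _, code in events:
    if code != TARGET_CODE:
        continue
    # Keep the trial only if a "new" response follows within 100-5000 ms.
    lo = sample + int(0.100 * sfreq)
    hi = sample + int(5.000 * sfreq)
    later = events[(events[:, 0] >= lo) & (events[:, 0] <= hi)]
    if any(c in NEW_RESPONSE_CODES for c in later[:, 2]):
        good_target_samples.append(sample)

# Keep only the epochs whose target marker passed the check.
keep = np.where(np.isin(epochs.events[:, 0], good_target_samples))[0]
new_epochs = epochs[keep]
```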
For each condition, do a baseline correction (using only the preceding 200 ms), then artifact rejection (make sure no single channel appears too frequently among the rejections; if one does, it may need to be topographically interpolated).
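A scripted analogue of this step, continuing the sketch; the 100 µV rejection threshold is a placeholder, not the criterion Analyzer applies by default.

```python
# Baseline-correct each segment using the 200 ms preceding the marker.
epochs.apply_baseline((-0.2, 0.0))

# Simple peak-to-peak artifact rejection; 100 microvolts is a placeholder.
epochs.drop_bad(reject=dict(eeg=100e-6))
epochs.plot_drop_log()   # check whether one channel dominates the rejections

# If a single channel keeps causing rejections, mark it bad and
# interpolate it topographically (channel name is hypothetical).
# epochs.info["bads"] = ["P6"]
# epochs.interpolate_bads()
```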
Next create an average reference (Channel Preprocessing, New Reference; include all channels except 65 and 66 in the calculation), then simply average the segments (in the Result Evaluation section).
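In script form these two operations are one-liners; the EOG channels stay out of the average reference because they were typed as EOG earlier in the sketch.

```python
# Average reference over the EEG channels only; channels 65/66 are
# excluded automatically because they are typed as EOG.
epochs.set_eeg_reference("average")

# Average the remaining segments of this condition into an ERP.
evoked = epochs.average()
```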
Finally, include any peak processing (Peak Detection is also under Result Evaluation) that you feel is needed.
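If you prefer scripting, MNE's Evoked objects have a simple peak search; the window, polarity, and channel type here are only illustrative.

```python
# Channel, latency, and amplitude of the largest positive deflection
# in an illustrative 100-300 ms search window.
ch_name, latency, amplitude = evoked.get_peak(
    ch_type="eeg", tmin=0.1, tmax=0.3, mode="pos", return_amplitude=True)
print(ch_name, latency, amplitude)
```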
Before leaving, remember to open the Quality Sheet and input Correct Rejections (correct "new" responses), Hit Cued, Hit Uncued, components removed, etc. If the subject has a sufficiently low number of segments rejected, then they are worth including!
Grand Averages/Exporting
Go to Grand Average (also under Result Evaluation), select the nodes (L_Val, let's say) that you would like to grand average, name the new grand average (Grand Average L_Val), and then select the subjects you would like included. (This is why keeping naming consistent throughout is essential!!)
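The scripted equivalent, assuming one saved Evoked per subject for the condition; the file names and subject numbers are placeholders.

```python
# One averaged Evoked per subject for the same condition, e.g. L_Val.
evokeds = [mne.read_evokeds(f"sub{n:02d}_L_Val-ave.fif")[0]
           for n in range(1, 13)]

grand_avg = mne.grand_average(evokeds)
grand_avg.comment = "Grand Average L_Val"
```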
Once all subjects are done (and you have made grand averages for figure purposes), it's time to export values. Go to Export (the only important tab not under the Transformations heading; it sits under the Export heading instead) and select either export peak (typically not what we've been doing) or export area. If it's export peak, just name a node in which you have used peak detection and name the peak you want exported.
For export area, you input the time window you want examined, the subjects you want examined, and all the nodes you want to examine (for the IOR task I would typically export L_Val, R_Val, L_Inv, and R_Inv all in one file).
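As a sketch of what "export area" amounts to: the mean amplitude in a chosen window for each node and electrode, written to a tab-delimited text file. The window, node names, variable names, and output path below are placeholders.

```python
import csv

# Evoked objects per node, e.g. from the averaging step above;
# the variable names and the analysis window are placeholders.
conditions = {"L_Val": evoked_lval, "R_Val": evoked_rval,
              "L_Inv": evoked_linv, "R_Inv": evoked_rinv}
tmin, tmax = 0.1, 0.3

with open("area_export.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["condition", "channel", "mean_uV"])
    for name, ev in conditions.items():
        cropped = ev.copy().crop(tmin=tmin, tmax=tmax)
        means = cropped.data.mean(axis=1) * 1e6   # volts to microvolts
        for ch, val in zip(cropped.ch_names, means):
            writer.writerow([name, ch, f"{val:.3f}"])
```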
For both peak and area, the exported values will be in a .txt file in the computer's local copy of the EEG data, under Export. Open these files in Excel (when Excel asks whether you would like to break the values up into separate cells, say yes) and you should have either your peak or your area information.
If it is peak, values will only have been exported for the electrodes you specified when doing peak detection in data processing; if area, it will have averages for all electrodes BUT will organize them by data node (so it is easy to find the values at, let's say, P6 for all four conditions).
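If you would rather skip the Excel step, the same tab-delimited file reads straight into Python; the path and separator are assumptions and depend on your export settings.

```python
import pandas as pd

# The exported .txt is delimited text; the path is a placeholder and
# the separator may need adjusting to match the export settings.
values = pd.read_csv("Export/area_export.txt", sep="\t")
print(values.head())
```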
Fixing figures for experiments with laterality issues
Go to the grand average in which you want to rename channels and choose Pooling (under Channel Preprocessing).
Take the channels you want to consolidate (I was using P5/P6) and rename them for each condition so that you have a consistent naming setup throughout. (In L_Val, for example, P5 is the noncued target and P6 is the cued distractor.)
Once both valid (noncued) conditions have had P5 and P6 renamed through pooling, choose one node (it does not matter which) and select Data Comparison.
Choose Sum, find the other node (the other valid/noncued node), and select it.
Use the Formula Evaluator to divide each of these new 'electrodes' (noncued target and cued distractor) by 2 (since the previous step summed them).
Repeat this for the other condition, then (if necessary) rename the nodes again to just Target/Distractor so that you can drag and drop between conditions (electrodes with different names obviously will not be overlaid on each other).
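Scripted, the whole collapse amounts to relabeling P5/P6 as target/distractor per condition and then averaging the two valid conditions. The P5/P6 role mapping for L_Val comes from the example above; the flipped mapping for R_Val and the Evoked variable names are assumptions.

```python
def role_waveforms(evoked, target_ch, distractor_ch):
    """Return the target-side and distractor-side waveforms of one condition."""
    picked = evoked.copy().pick([target_ch, distractor_ch])
    return {"Target": picked.data[picked.ch_names.index(target_ch)],
            "Distractor": picked.data[picked.ch_names.index(distractor_ch)]}

# In L_Val, P5 carries the noncued target and P6 the cued distractor;
# the mapping is assumed to flip for R_Val.
lval = role_waveforms(evoked_lval, target_ch="P5", distractor_ch="P6")
rval = role_waveforms(evoked_rval, target_ch="P6", distractor_ch="P5")

# Sum the two valid conditions and divide by 2, mirroring the
# Data Comparison (Sum) and Formula Evaluator steps above.
pooled = {role: (lval[role] + rval[role]) / 2.0
          for role in ("Target", "Distractor")}
```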