Video Footage
We recorded video footage and audio from every group and task in the study. This footage is a screen recording of the experimenter's desktop PC with the Unity3D window open. An example excerpt is pictured below.
Of note is the top-left portion of the image, which shows a top-down view of the virtual environment shared by all participants. Everything that is visible to participants is also visible in this view.
Note that for anonymity reasons we do not make the video footage publicly available. However, it is still possible to watch a re-simulated replay of the study in Unity; please see this page for details.
We used this footage for the following purposes:
Coding of collaborative phases of discussion and tightly-coupled collaboration
Coding of observations, ranging from individual to group behaviours, that we deemed interesting
Sanity checking of insights found through other methods such as data visualisations
The outputs of the first two purposes are detailed on this page.
VideoCodingData
Three of the authors each coded different groups, mainly looking for:
Discussion phases, where participants would discuss topics relevant to the task regardless of their physical position in the environment
Tightly-coupled collaboration phases, where participants would visibly work closely with one another to solve some task. We did not consider participants individually working on the same task to be tightly-coupled collaboration; it typically involved some form of physical proximity between participants
Presentation phases, where participants presented their findings to the experimenter, specifically during FET. We considered presentation to begin when groups were no longer solving the task and had either begun to rearrange their workspace to present or begun figuring out how to present as a group
For each of these phases, we specified which participants took part, as there were many instances of only two people working together.
We then used R to merge the coded phases of each group and task into a single file:
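The merge script itself is not reproduced here, but the following is a minimal sketch of such a merging step. It assumes one coding CSV per group and task (e.g. coding/G1_DT1.csv) with identical columns; the directory, file names, and column handling are illustrative assumptions, not the actual data format.

```r
# Minimal sketch: combine the per-group, per-task coding CSVs into one file.
# Directory, file names, and columns are assumptions for illustration only.
library(readr)
library(dplyr)

# One coding file per group and task, e.g. "coding/G1_DT1.csv"
files <- list.files("coding", pattern = "\\.csv$", full.names = TRUE)

merged <- files |>
  lapply(function(f) {
    read_csv(f, show_col_types = FALSE) |>
      mutate(source_file = basename(f))  # keep the group/task origin with each row
  }) |>
  bind_rows()

write_csv(merged, "video_coding_merged.csv")
```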
Observations
Individual authors then viewed the video footage more closely to extract a deeper qualitative, contextual understanding of individual and collaborative behaviours. Examples of the behaviours we were looking for include:
Behaviours intrinsically unique to VR that are not possible in desktop-based visualisation systems
Placement of 2D/3D visualisations in relation to the surfaces in the environment and to each other
Movement of participants in the environment in relation to the visualisations, panels, and tables present in the room
How transitions into tightly-coupled collaboration were initiated and what caused them
After annotating the footage for every group and task, we gave similar observations the same behavioural code for organisational purposes, and further grouped these codes into the following broad categories:
Visualisation Interaction - How participants interacted with and made use of visualisations, either individually or as a group
View Management - How participants individually organised their views (visualisations) in the space and surfaces around them
Workspace Organisation - How groups divided and/or shared the entire space amongst each other
Collaboration - How groups collaborated with each other, be it transitioning into collaboration or behaviours during collaboration
Spatial Awareness - How spatially aware participants were of the virtual objects in the environment, treating them like real objects
System Functionality - How participants made use of (or found limitations in) the functionalities provided by the system
Annotation - How participants made use of annotations via the marker
For thoroughness, we also classified and collated general behaviours and results of individuals and groups, in particular:
Individual Behaviours - The manner in which individuals managed their views and presented during FET
Group Behaviours - De facto collaboration strategy of each group
Answers (DT) - Answers given for each of the Directed Tasks
Answers (FET) - All findings presented by groups alongside the specific visualisation used while presenting
The final compilation of observations can be found in the following Google Sheets document: