Trace Data

This is the data which was automatically logged and generated by the system during the study. The original data falls into the following three buckets:

Player Data

This is data pertaining to the player (participant). As there are three players in each group, three files are created per task, one for each player.
This was logged at a rate of 20Hz, synchronised across all players by the server. Each row represents a point in time, measured from the beginning of the task, at which the data was logged.

The values (columns) which we log are:

  • Timestamp

  • Position and rotation of the head

    • HeadPosition.x

    • HeadPosition.y

    • HeadPosition.z

    • HeadRotation.x

    • HeadRotation.y

    • HeadRotation.z

    • HeadRotation.w

  • Position and rotation of the left controller

    • LeftPosition.x

    • LeftPosition.y

    • LeftPosition.z

    • LeftRotation.x

    • LeftRotation.y

    • LeftRotation.z

    • LeftRotation.w

  • Buttons which were being pressed on the left controller

    • LeftTrigger

    • LeftGrip

    • LeftTouchpad

    • LeftTouchpadAngle

  • Position and rotation of the right controller

    • RightPosition.x

    • RightPosition.y

    • RightPosition.z

    • RightRotation.x

    • RightRotation.y

    • RightRotation.z

    • RightRotation.w

  • Buttons which were being pressed on the right controller

    • RightTrigger

    • RightGrip

    • RightTouchpad

    • RightTouchpadAngle

  • Object which the player was gazing at

    • GazeObject

    • GazeObjectOriginalOwner

    • GazeObjectOwner

    • GazeObjectID

  • Object which the player was pointing at with the left controller

    • LeftPointObject

    • LeftPointObjectOriginalOwner

    • LeftPointObjectOwner

    • LeftPointObjectID

  • Object which the player was pointing at with the right controller

    • RightPointObject

    • RightPointObjectOriginalOwner

    • RightPointObjectOwner

    • RightPointObjectID

Action Data

This is data pertaining to the actions which a player (participant) performs. As there are three players in each group, three files are created per task, one for each player.
An action is logged as soon as a player performs it, with its timestamp synchronised to the server. Both discrete actions and continuous actions are logged.

The values (columns) which we log are:

  • Timestamp

  • ObjectType

    • The type of object that the action was performed on, usually a Chart (visualisation) or a Panel

  • OriginalOwner

    • The ID of the player who had originally created the object that the action was performed on

  • Owner

    • The ID of the player who currently owns the object that the action was performed on

  • Name

    • The name of the object that the action was performed on, if relevant, usually a button with text on it

  • TargetID

    • The unique ID of the object that the action was performed on; visualisations are given unique IDs to distinguish them from one another

  • Description

    • The name of the action being performed, ending with start/end for continuous actions

Object Data

This is data pertaining to the objects that are in the scene. The server logs this data independently of the players.
This was logged at a rate of 20Hz. Each row represents one object in the scene at a given point in time since the beginning of the task. That is, when the server logs object data for a given timestamp, a new row is added for each unique object in the scene.

The values (columns) which we log are:

  • Timestamp

  • ObjectType

  • Player who owns the object

    • OriginalOwner

    • Owner

  • Position and rotation of the object

    • Position.x

    • Position.y

    • Position.z

    • Rotation.x

    • Rotation.y

    • Rotation.z

    • Rotation.w

  • Visualisation unique ID and size (vis only)

    • ID

    • Width

    • Height

    • Depth

  • Visualisation Properties (vis only)

    • xDimension

    • yDimension

    • zDimension

    • Size

    • SizeDimension

    • Color

    • ColorDimension

    • FacetDimension

    • FacetSize

    • xNormaliser

    • yNormaliser

    • zNormaliser

Due to oversights, there are some caveats to note with the collected data:

  • Visualisation size values were not logged for Group 1

  • Visualisation properties were not logged for Study Part A

  • Object Gaze/Pointing values were not logged for Study Part A

Using these three main datasets, we generate the following derived datasets, which were used for the majority of the visualisations found on this website:

PlayerData

This is the positional and rotational data of the participants' movements over the course of the tasks. It also includes information such as which buttons they were pressing and which objects they were looking or pointing at.

Code

Our goal was to consolidate the player and action data such that the timestamps at which actions occurred would be visible in the player data. Note that this is not intended to replace the action data as a whole. We also had to merge the four tasks that each group performed into a single file. We decided to do this initial step using C#.
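
A minimal sketch of this step, assuming hypothetical per-task file names and the column order described above (the 0.05 s matching window reflects the 20Hz logging rate):

```csharp
using System;
using System.IO;
using System.Linq;

class MergePlayerActions
{
    static void Main()
    {
        for (int task = 1; task <= 4; task++)
        {
            // Hypothetical per-task file names for one player of one group.
            string[] playerRows = File.ReadAllLines($"Player_Task{task}.csv");
            var actionRows = File.ReadAllLines($"Action_Task{task}.csv")
                                 .Skip(1)                     // skip header
                                 .Select(r => r.Split(','))
                                 .ToList();

            using var writer = new StreamWriter($"PlayerData_Task{task}.csv");
            writer.WriteLine(playerRows[0] + ",Task,ActionDescription");

            foreach (string row in playerRows.Skip(1))
            {
                float t = float.Parse(row.Split(',')[0]);

                // Attach any action whose timestamp falls within this
                // player row's 20Hz logging interval (0.05 s).
                var action = actionRows.FirstOrDefault(
                    a => Math.Abs(float.Parse(a[0]) - t) < 0.05f);

                writer.WriteLine($"{row},{task},{(action == null ? "" : action[6])}");
            }
        }
    }
}
```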

We then append the datasets from each of the 10 groups into a single file using R.
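
A minimal sketch of this appending step, assuming the tidyverse and hypothetical per-group file names:

```r
library(tidyverse)

# Read the consolidated file for each of the 10 groups, tagging each
# row with its group number, then append them into a single data frame.
player_data <- sprintf("PlayerData_Group%d.csv", 1:10) %>%
  set_names(1:10) %>%
  map_dfr(read_csv, .id = "Group")

write_csv(player_data, "PlayerData.csv")
```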

ActionData

This is the event data of all actions performed by the participants. This includes actions such as grabs/grasps, visualisation creations and destructions, brushing, panel interactions, and button clicks.

Each action has a specified start and end time; however, the duration is only relevant for continuous actions (e.g. brushing, grabs). Each action also specifies the owner of the object on which the action was performed.

Code

The first step was to process the actions such that each row represented a single action, regardless of whether it was discrete or continuous. That is, we added new columns for the start and end times, particularly for continuous actions. We decided to do this initial step using C#.
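
A minimal sketch of the pairing logic, assuming hypothetical file names and that the Description column literally ends in " start" or " end":

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

class ProcessActions
{
    static void Main()
    {
        var rows = File.ReadAllLines("Action_Task1.csv")
                       .Skip(1)                     // skip header
                       .Select(r => r.Split(','))
                       .ToList();

        using var writer = new StreamWriter("ActionData_Task1.csv");
        writer.WriteLine("Start,End,ObjectType,OriginalOwner,Owner,Name,TargetID,Description");

        var pending = new List<string[]>();          // unmatched "start" rows
        foreach (var row in rows)
        {
            string desc = row[6];
            if (desc.EndsWith(" start"))
            {
                pending.Add(row);                    // wait for the matching end
            }
            else if (desc.EndsWith(" end"))
            {
                // Match on the base action name and the same target object.
                string name = desc.Substring(0, desc.Length - " end".Length);
                var start = pending.FirstOrDefault(
                    p => p[6] == name + " start" && p[5] == row[5]);
                if (start != null)
                {
                    pending.Remove(start);
                    Emit(writer, start[0], row[0], start, name);
                }
            }
            else
            {
                // Discrete action: start and end share the same timestamp.
                Emit(writer, row[0], row[0], row, desc);
            }
        }
    }

    static void Emit(StreamWriter w, string start, string end, string[] r, string desc) =>
        w.WriteLine($"{start},{end},{r[1]},{r[2]},{r[3]},{r[4]},{r[5]},{desc}");
}
```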

We then append the datasets from each of the 10 groups into a single file using R.
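
The same appending pattern as for the player data, sketched here in base R with hypothetical file names:

```r
# Read each group's processed action file, tag it with the group
# number, and append them all into a single table.
files <- sprintf("ActionData_Group%d.csv", 1:10)
tables <- lapply(seq_along(files), function(i) {
  df <- read.csv(files[i])
  df$Group <- i
  df
})
write.csv(do.call(rbind, tables), "ActionData.csv", row.names = FALSE)
```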

ObjectData

This is the positional and rotational data of the objects in the scene, which include panels, visualisations, markers, and erasers. It also includes the participant who originally created each object.

Due to oversights, only Part B includes information about visualisation properties (e.g. x dimension, y dimension, colour dimension).

Code

We simply needed to consolidate all tasks and groups into a single file. We did this first using C#.
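
A minimal sketch of the per-group C# step, with a hypothetical file layout:

```csharp
using System.IO;
using System.Linq;

class ConsolidateObjectData
{
    static void Main()
    {
        // Merge the four per-task object logs of one group into one file,
        // keeping the header only once and tagging each row with its task.
        using var writer = new StreamWriter("ObjectData_Group1.csv");
        for (int task = 1; task <= 4; task++)
        {
            string[] lines = File.ReadAllLines($"Object_Group1_Task{task}.csv");
            if (task == 1)
                writer.WriteLine("Task," + lines[0]);
            foreach (string line in lines.Skip(1))
                writer.WriteLine($"{task},{line}");
        }
    }
}
```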

Then using R.
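
Appending the per-group files follows the same pattern as the other datasets; a sketch with hypothetical file names:

```r
library(tidyverse)

object_data <- sprintf("ObjectData_Group%d.csv", 1:10) %>%
  set_names(1:10) %>%
  map_dfr(read_csv, .id = "Group")

write_csv(object_data, "ObjectData.csv")
```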

StatisticsData

This calculates certain metrics such as movement and action counts, sourcing data from the other datasets: Player Data, Action Data, and Coding Data.

Code

We enumerated through each group and task, calculating for each one measures such as distance travelled and actions performed. Each of these measures is calculated in its respective function defined at the start of the script. This was done using R.
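
A minimal sketch of two such measures, where the grouping columns (e.g. a Player identifier) and the exact definitions of the measures are assumptions:

```r
library(tidyverse)

player  <- read_csv("PlayerData.csv")
actions <- read_csv("ActionData.csv")

# Distance travelled: sum of Euclidean distances between consecutive
# head positions, per group, task, and player.
distances <- player %>%
  group_by(Group, Task, Player) %>%
  arrange(Timestamp, .by_group = TRUE) %>%
  summarise(Distance = sum(sqrt(diff(HeadPosition.x)^2 +
                                diff(HeadPosition.y)^2 +
                                diff(HeadPosition.z)^2)),
            .groups = "drop")

# Actions performed: a count of action rows per group and task.
action_counts <- count(actions, Group, Task, name = "ActionsPerformed")

statistics <- left_join(distances, action_counts, by = c("Group", "Task"))
write_csv(statistics, "StatisticsData.csv")
```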

LookingAtData

This calculates the durations of time which participants spent looking at visualisations with certain properties: in particular, whether the visualisation was 2D or 3D, and the general location in the room where the visualisation was placed.

These locations are broadly defined as the area immediately around the table, the area immediately around the outer wall, and the remaining "in-between" area. Note that this data only applies to Part B.

Code

To create the dataset, we first use Unity and C# to "replay" the data and calculate these durations. The replay itself is based on the Player Data and Object Data.
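
A minimal sketch of the accumulation such a replay might perform; the class structure, tick method, and location thresholds here are all assumptions rather than the actual replay code:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class LookingAtAccumulator : MonoBehaviour
{
    const float TickInterval = 0.05f;   // matches the 20Hz logging rate

    // Accumulated looking durations, bucketed by (2D/3D, room location).
    private readonly Dictionary<(bool is3D, string location), float> durations
        = new Dictionary<(bool, string), float>();

    // Called once per replayed tick with the visualisation the player
    // was gazing at (reconstructed from Player Data and Object Data).
    public void Tick(Vector3 visualisationPosition, bool is3D)
    {
        var key = (is3D, ClassifyLocation(visualisationPosition));
        durations.TryGetValue(key, out float total);
        durations[key] = total + TickInterval;
    }

    // Classify a position as immediately around the table, immediately
    // around the outer wall, or the remaining in-between area.
    // The distances here are placeholder thresholds.
    private string ClassifyLocation(Vector3 position)
    {
        float fromRoomCentre = new Vector2(position.x, position.z).magnitude;
        if (fromRoomCentre < 1.5f) return "Table";
        if (fromRoomCentre > 3.5f) return "Wall";
        return "InBetween";
    }
}
```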