SightLab Code Attributes and Methods
In SightLab, various attributes and methods can be accessed and modified for experiment customization. Most of these are also adjustable via the GUI. See also "Importing SightLab into Your Existing Code".
You can also see a generic example without the GUI here.
Here's a brief overview of some key attributes and methods:
General Configuration
sightlab.is_GUI: Controls the GUI display (0 = off, 1 = on).
sightlab.objects.append: Adds objects to the tracking list. Always set the environment first.
sightlab.indicatorWindow.visible(viz.OFF): Hides the SightLab indicator window so you can use your own info panels and windows.
Example:
env = vizfx.addChild("utils/resources/environment/dojo.osgb")
sightlab.objects.append(env)
Changing Resources and Media Directory
sightlab.set_env_path("your/custom/path")
sightlab_360.set_media_path("your/custom/path") - For SightLabVR_360
#Note: set the path to the folder where the resource is located, not to the actual file (i.e. if the file is in your root folder, just put ('rootFolderPath/')). Use ../ if it is in a separate folder inside SightLab (i.e. sightlab.set_env_path("../Demos/MyDemo/")). Creating a separate folder is recommended, as all the files in the root folder will be shown.
Experiment Settings
sightlab.sceneConfigDict["trials"]= : Sets the number of trials for the study (0 for infinite trials).
sightlab.sceneConfigDict["record"]= : Toggles session recording (1 = on).
sightlab.setGazeTime(0.3) : Adjusts the dwell time or fixation registration time.
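Taken together, these settings can be grouped in one block near the top of your script, before the trial starts. The values below are illustrative, not defaults:

```python
from utils import sightlab

# Illustrative settings block; adjust values to your study design.
sightlab.sceneConfigDict["trials"] = 3   # run 3 trials (0 = infinite)
sightlab.sceneConfigDict["record"] = 1   # turn session recording on
sightlab.setGazeTime(0.3)                # dwell threshold of 300 ms
```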
Avatar and Gaze Point Configuration
sightlab.sceneConfigDict["avatarHead"], sightlab.sceneConfigDict["avatarrhand"], sightlab.sceneConfigDict["avatarlhand"], and sightlab.sceneConfigDict["gazePoint"]: Change the models used for the avatar's head, hands, and gaze point.
Example: sightlab.sceneConfigDict["avatarHead"] = "utils/resources/avatar/head/Male1.osgb"
Object Tracking and Interaction
sightlab.gazeObjectsDict: Used to add objects of interest.
sightlab.grabObjectsDict: Used to set objects as grabbable.
Note: If you set an object as grabbable but not as an object of interest, you need to add this code for it to show up in the replay (where "basketball" is the name of your object):
sightlab.objects.append(basketball)
Example:
To add a list of objects to these dictionaries, see the following example:
for index in range(0, len(grabbableObjects)):
    sightlab.gazeObjectsDict[objectNames[index]] = grabbableObjects[index]
    sightlab.grabObjectsDict[objectNames[index]] = grabbableObjects[index]
Option 2:
OBJECT_NODE_NAMES = ['Soccerball', 'Baseball', 'Globe']
# Initialize an empty list to store the objects
gazeObjects = []
# Iterate over names to create objects and add them to the gaze objects dictionary
for name in OBJECT_NODE_NAMES:
    gazeObject = env.getChild(name)
    sightlab.gazeObjectsDict[name] = gazeObject
    gazeObjects.append(gazeObject)
If working with a very large scene, you may need to adjust the distance of the gaze point using this code:
gazeMat.getLineForward(10000)
Getting Position of Child Objects
If you are using env.getChild to access your objects, you need to use .getTransform to access the object's position: targetTransform = env.getTransform('targetTransform')
It also works to call the getChild command on the GEODE associated with the object.
Event Flags
sightlab.set_flag(''): Used to send a flag saved in the tracking data file. For example, when an object appears or a user action occurs.
Example
def buttonTrigger():
    sightlab.set_flag('T key pressed')
    print("T key pressed")
vizact.onkeydown('t', buttonTrigger)
Gaze Time Events
onGazeBegin(e), onGazeEnd(e), onGazeTime(e): These functions handle events when gaze starts, ends, or reaches a certain duration (threshold).
viz.callback(eye_tracker_utils.GAZE_BEGIN_EVENT2,onGazeBegin), viz.callback(eye_tracker_utils.GAZE_END_EVENT2,onGazeEnd), viz.callback(eye_tracker_utils.GAZE_TIME_EVENT2,onGazeTime): These lines set up callbacks that trigger the onGazeBegin, onGazeEnd, and onGazeTime functions when the corresponding gaze events occur.
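As a minimal sketch of how these handlers behave, the snippet below defines the three callbacks and simulates a dispatch with a stub event. The `e.name` attribute and the `gaze_log` list are assumptions for illustration only; in a real SightLab script the handlers are registered with the viz.callback lines shown above instead of being called directly:

```python
from types import SimpleNamespace

gaze_log = []  # collects (event, object-name) pairs for later inspection

def onGazeBegin(e):
    # Fires when gaze lands on an object of interest.
    gaze_log.append(('begin', e.name))

def onGazeEnd(e):
    # Fires when gaze leaves the object.
    gaze_log.append(('end', e.name))

def onGazeTime(e):
    # Fires once the dwell threshold (see sightlab.setGazeTime) is reached.
    gaze_log.append(('dwell', e.name))

# Simulated dispatch with a stub event, purely for illustration:
fake_event = SimpleNamespace(name='basketball')
onGazeBegin(fake_event)
onGazeTime(fake_event)
onGazeEnd(fake_event)
print(gaze_log)
```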
Experiment Control Events
viz.sendEvent(TRIAL_START_EVENT): Use this to trigger starting the trial (best placed within a viztask function; see the included SightLabVR.py template).
viz.sendEvent(TRIAL_END_EVENT): Use this to trigger ending the trial (best placed within a viztask function; see the included SightLabVR.py template).
To change the key or event that starts the session, you can either change it in the settings.py file (CONTINUE_EVENT_KEY), or see this example for using either the right hand trigger or the spacebar:
def sightLabExperiment():
    waitTrigger = viztask.waitEvent('triggerPress')
    waitSpacebar = viztask.waitKeyDown(' ')
    yield viztask.waitAny([waitTrigger, waitSpacebar])
    viz.sendEvent(TRIAL_START_EVENT)
Gaze Data Functions
sightlab.gazeTime.getViews(): Retrieves the count of views (or gaze events) for each object in the scene. Useful for analyzing the frequency of gaze towards specific objects.
sightlab.gazeTime.getTotalTimes(): Retrieves the total gaze duration for each object in the scene. Helps identify objects that attracted the most attention.
sightlab.gazeTime.getAvgTimes(): Calculates the average gaze duration for each object in the scene, based on total gaze time divided by number of gaze events. Provides insights into the average gaze duration per object.
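To make the relationship between these three functions concrete, here is a pure-Python sketch. The dict shapes (object name mapped to a view count or a total time in seconds) and the sample values are assumptions for illustration; in a real script you would call the sightlab.gazeTime methods instead of defining the dicts by hand:

```python
# Stand-ins for sightlab.gazeTime.getViews() and getTotalTimes():
views = {'Soccerball': 4, 'Globe': 2}            # gaze events per object
total_times = {'Soccerball': 2.0, 'Globe': 1.0}  # total gaze seconds per object

# Average gaze duration = total gaze time / number of gaze events,
# which is what getAvgTimes() reports per object.
avg_times = {name: total_times[name] / views[name]
             for name in views if views[name] > 0}
print(avg_times)
```

Note that the most-viewed object and the longest-viewed object can differ, which is why all three measures are worth inspecting.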
Experiment Management Variables
sightlab.participant: Used to store and manage participant-related data.
sightlab.timeLine: Manages the experiment timeline.
sightlab.trialNumber: Tracks the current trial number in the experiment.
Changing Starting Position and Getting Current Position
transportNode = vizconnect.getTransport('main_transport').getNode3d()
transportNode.setPosition([0,0,0])
transportNode.setEuler([0,0,0])
#Get the current position and orientation to use when setting the starting point
def getPosition():
    print(transportNode.getPosition())
    print(transportNode.getEuler())
vizact.onkeydown('t', getPosition)
Accessing Raw Data
sightlab.eyeTracker.getMatrix(): The `getMatrix` method provides a 4x4 transformation matrix, which combines rotation and translation information to express the gaze vector's current state.
sightlab.eyeTracker.getMatrix().getEuler() # Euler angles for the eye gaze
env = sightlab.objects[0]: Gets a handle to the environment model; add this after viz.sendEvent(TRIAL_START_EVENT).
sightlab.objects[1].getPosition(): Gets the position of the gaze point (must be called after the experiment starts).
sightlab.objects[2].getPosition(): Gets the user's head position (must be called after the experiment starts).
Note: to see a list of all of the objects you can get a handle to and their indexes, run a session and review the order at the top of the tracking_data_replay.txt file
Using the Sample Template Script
The SightLabVR.py script is one you can build off of to create new, custom experiments. Note: It is recommended to make a copy of the main script (i.e. SightLabVR.py) and work off of that (so you always have an unmodified version). You can add custom code in the sightLabExperiment function to add additional functionality to your experiment, such as additional experiment phases, proximity sensors, animations, toggling visibility, audio, avatars and avatar interactions, enabling physics and much more. To see a full list of options, see the Vizard documentation.
import viz
import vizfx
from utils import sightlab
#Set is_GUI to 0 to disable the GUI from showing (1 shows the GUI)
sightlab.is_GUI = 1
#Run the sightlab experiment. Add code here to run alongside the main sightlab experiment
import viztask
def sightLabExperiment():
    yield viztask.waitKeyDown(' ')
    print('experiment start')
    yield viztask.waitKeyDown(' ')
    print('experiment end')
viztask.schedule(sightlab.experiment)
viztask.schedule(sightLabExperiment)
360 Video
sightlab_360.sphere.setEuler(180,0,0): To access the 360 video sphere object, use sightlab_360.sphere. This code shows how you can change the orientation of the video. Place it after the experiment has started:
def sightLabExperiment():
    yield viztask.waitKeyDown(' ')
    print('experiment start')
    sightlab_360.sphere.setEuler(180,0,0)
The following code shows how to add multiple videos: just create a new media object and then set it as the texture of the sphere object (this continues inside the sightLabExperiment function):
    yield viztask.waitKeyDown('v')
    #Here you can define whether you are changing to an image or monoscopic/stereoscopic video
    mediaType = "video"
    sightlab_360.sphere = panorama_utils.MonoSphere(mediaType)
    mediaObject2 = viz.addVideo("utils/resources/media/football_pass.avi")
    mediaObject2.play()
    sightlab_360.sphere.texture(mediaObject2)
Session Replay
To access the environment object in the SessionReplay, you can use the following code, where objects[0] is the environment:
session_replay_object = session_replay.AvatarReplay(session_replay.replayfileName[session_replay.trialNumber-1],session_replay.trackingfileName[session_replay.trialNumber-1])
env = session_replay_object.objects[0]
For Multi User Session Replay
envTest = session_replay_server.AvatarReplay(session_replay_server.replayfileName[str(session_replay_server.trialNumber)][i],
session_replay_server.trackingfileName[str(session_replay_server.trialNumber)][i], i, session_replay_server.sceneManager)
env = envTest.objects[0]
Remember to check out each object's available methods by typing sightlab. and browsing the autocomplete list that appears.