To explore human behavior in virtual reality (VR), we want to study how people perform a simple point-and-select task under different conditions.
Participants point with controllers and select objects (cubes/spheres) on the sides of a hallway (4 m long, 1.5 m wide) within a limited time. A different audio cue plays when they click inside or outside an object (audio feedback), and the ray cast from the controller is shown (visual feedback).
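The hit test behind that feedback can be sketched as a ray-sphere intersection (a minimal illustration in Python, not the actual engine raycast; all names and geometry here are placeholder assumptions):

```python
def ray_hits_sphere(origin, direction, center, radius):
    """True if a ray from the controller (direction must be normalized)
    intersects a spherical target -> play the 'hit' sound, else 'miss'."""
    # Vector from ray origin to sphere center
    ocx, ocy, ocz = (c - o for c, o in zip(center, origin))
    # Length of its projection onto the ray direction
    t = ocx * direction[0] + ocy * direction[1] + ocz * direction[2]
    if t < 0:
        return False  # target is behind the controller
    # Squared distance from the sphere center to the ray
    d2 = (ocx ** 2 + ocy ** 2 + ocz ** 2) - t ** 2
    return d2 <= radius ** 2
```

In the experiment, the same decision would also trigger the audio feedback and log the attempt as a hit or an error.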
General Notes:
Minimum distance to objects in walking conditions: 50 cm
Direction of virtual walking: down the hallway in a straight line
Direction of selection: direction of the ray cast from the controller
Targets will be placed so that people of average height can see them within the FoV.
We will ask participants to select as many targets as possible while minimizing errors.
Without-walking conditions
two controllers for interacting
one controller for interacting (dominant hand)
Actual walking conditions
two controllers for interacting, actual walking
one controller for interacting (dominant hand), actual walking
Virtual walking conditions
two controllers, both for interacting and virtual walking
two controllers, one for interacting (dominant hand), one for virtual walking (non-dominant hand)
one controller, for interacting and virtual walking (dominant hand)
Before the experiment:
1 minute practice time per condition
Demographics form (age, gender, handedness)
Familiarity with different technologies (Wii Remote, Kinect for Xbox, Kinect for Windows, Leap Motion, Oculus, Vive, HoloLens, PS4 Move controllers) that support gestural interaction
Design
within-subjects
Independent variables
Target width (3 levels): 10, 20, 30
Target density (2 levels): 10, 20
Condition (7 levels)
Dependent variables
Task completion time
Number of attempts
Number of errors
Selection hand
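For logging, each trial could be stored as one record combining the independent-variable levels with these dependent measures (a sketch only; all field names are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class TrialRecord:
    """One row per trial in the results log."""
    participant: int
    condition: int            # 1..7 (see the condition list above)
    target_width: int         # 10 / 20 / 30
    target_density: int       # 10 / 20
    completion_time_s: float  # task completion time in seconds
    attempts: int             # number of selection attempts
    errors: int               # selections that landed outside the target
    selection_hand: str       # "dominant" or "non-dominant"
```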
Number of Participants
5
Number of Trials
5 participants * 7 conditions * 3 widths * 2 densities * 2 repeats = 420
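As a sanity check, the trial count follows directly from fully crossing the factors (a minimal sketch; the specific level values are placeholders from the design above):

```python
from itertools import product

PARTICIPANTS = 5
CONDITIONS = 7        # 2 no-walking + 2 actual-walking + 3 virtual-walking
WIDTHS = [10, 20, 30]
DENSITIES = [10, 20]
REPEATS = 2

# One trial per participant x condition x width x density x repeat
trials = list(product(range(PARTICIPANTS), range(CONDITIONS),
                      WIDTHS, DENSITIES, range(REPEATS)))
print(len(trials))  # 5 * 7 * 3 * 2 * 2 = 420
```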
Order of Trials
Conditions: Latin square
Widths, densities: randomized
Target positions: random positions within a predefined cube, half on the right side and half on the left
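The ordering scheme above can be sketched as follows (a hedged illustration in Python; the cyclic Latin square, cube bounds, and RNG seeding are placeholder assumptions, not the final counterbalancing code):

```python
import random

N_CONDITIONS = 7
WIDTHS = [10, 20, 30]
DENSITIES = [10, 20]
REPEATS = 2

def condition_order(participant):
    """Cyclic Latin square: each participant starts one condition later."""
    return [(participant + j) % N_CONDITIONS for j in range(N_CONDITIONS)]

def block_order(rng):
    """Randomize the width x density combinations (with repeats) within a condition."""
    cells = [(w, d) for w in WIDTHS for d in DENSITIES] * REPEATS
    rng.shuffle(cells)
    return cells

def target_positions(rng, n):
    """n random positions inside a predefined cube, half right / half left.
    Cube bounds are illustrative, in meters."""
    sides = [1] * (n // 2) + [-1] * (n - n // 2)
    rng.shuffle(sides)
    return [(side * rng.uniform(0.5, 0.75),  # lateral offset from hallway center
             rng.uniform(1.0, 1.8),          # height
             rng.uniform(0.0, 4.0))          # along the 4 m hallway
            for side in sides]

rng = random.Random(42)
order = [(cond, w, d)
         for cond in condition_order(participant=0)
         for (w, d) in block_order(rng)]
print(len(order))  # 7 conditions * (3 * 2 * 2) = 84 trials per participant
```

Each participant thus sees every condition exactly once in a rotated order, with the width/density blocks reshuffled per condition.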
After the experiment:
NASA TLX
Short questionnaire about the experience to collect feedback on preferences, etc.
This recent paper shows that in-air gestures are still ineffective and limited:
Pham, H. A. (2018). The challenge of hand gesture interaction in the Virtual Reality Environment: evaluation of in-air hand gesture using the Leap Motion Controller.
Some ray-casting papers for virtual reality:
Mayer, S., Schwind, V., Schweigert, R., & Henze, N. (2018, April). The Effect of Offset Correction and Cursor on Mid-Air Pointing in Real and Virtual Environments. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13).
Lee, S., Seo, J., Kim, G. J., & Park, C. M. (2003, April). Evaluation of pointing techniques for ray casting selection in virtual environments. In Third international conference on virtual reality and its application in industry (Vol. 4756, pp. 38-44). International Society for Optics and Photonics.
A paper that compares tracking-based and controller-based input for virtual environments:
A., Kulik, A., Huckauf, A., & Fröhlich, B. (2007, July). A comparison of tracking- and controller-based input for complex bimanual interaction in virtual environments. In Proceedings of the 13th Eurographics conference on Virtual Environments (pp. 43-52). Eurographics Association.
We expect to answer the following research questions:
We are interested in whether having two controllers with the same functionality is more efficient in interactive VR applications, in particular for pointing and selection.
For the two-controller conditions in which participants can use both controllers for interaction, we are primarily interested in which controller they choose. Do they point and select everything with one controller, or do they switch between controllers depending on whether objects are to the right or left of their position?
The results of this project will inform the following areas:
Gesture-controlled applications for motor rehabilitation
Experiments informing motor behavior science
More intuitive interaction techniques in immersive environments
The target platform is primarily a head-mounted display, in particular the HTC Vive, but we plan to test it in CAVE2 as well.