Selection is when the user indicates their intent. When your finger moves towards a specific elevator button and hovers, you are selecting.
A Selection Box or Marquee selection method takes the form of drawing a geometric outline or silhouette around targets, similar to the rectangle or lasso tools in desktop software.
The Framing method uses a structured frame or object to select the multiple objects contained within it. Imagine a camera viewfinder or a limited viewing window capturing the objects within its boundaries. In many cases, the frame is moved by reorienting the viewport. In this screenshot, a gun's scope selects everything in its field of view.
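Both the marquee and the framing method reduce to the same test in code: project each candidate object to screen space and check whether it falls inside the drawn or framed region. A minimal sketch, assuming objects have already been projected to 2D (the `SceneObject` type and coordinates are illustrative, not from any particular engine):

```python
# Minimal sketch of marquee/frame selection as a screen-space containment test.
# Assumes objects have already been projected to 2D screen coordinates;
# in a real engine the projection would use the camera's view matrix.

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    screen_x: float  # projected horizontal position, pixels
    screen_y: float  # projected vertical position, pixels

def select_in_rect(objects, left, top, right, bottom):
    """Return every object whose projected point lies inside the rectangle.

    For a marquee, the rectangle is drawn by the user; for framing,
    it is the fixed boundary of the frame (e.g. a scope's view).
    """
    return [
        obj for obj in objects
        if left <= obj.screen_x <= right and top <= obj.screen_y <= bottom
    ]

# Usage: the user drags a marquee from (100, 100) to (400, 300).
scene = [
    SceneObject("crate", 150, 200),
    SceneObject("lamp", 500, 250),   # outside the marquee
    SceneObject("chair", 380, 120),
]
print([o.name for o in select_in_rect(scene, 100, 100, 400, 300)])
# -> ['crate', 'chair']
```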
The Swiping method usually uses a "hold" input, such as a depressed button or trigger, to maintain selection mode. A raycast may then select every object it sweeps across while the input is held.
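One way to read the swiping method in code: while the hold input is down, every raycast hit is accumulated into a selection set, and releasing the input ends selection mode. A hedged sketch, where the per-frame hit data stands in for whatever raycast call the engine provides:

```python
# Sketch of swipe selection: accumulate raycast hits while the trigger is held.
# Each frame is simulated as (trigger_held, hit_object); in a real engine the
# hit would come from the engine's raycast query instead.

def swipe_select(frames):
    """frames: iterable of (trigger_held: bool, hit_object: str | None)."""
    selected = set()
    for trigger_held, hit in frames:
        if not trigger_held:
            break              # releasing the trigger ends selection mode
        if hit is not None:
            selected.add(hit)  # every object the ray touches joins the set
    return selected

# Usage: the user sweeps the ray across three objects, missing on one frame.
frames = [
    (True, "bottle"),
    (True, None),
    (True, "can"),
    (True, "jar"),
    (False, None),  # trigger released
]
print(swipe_select(frames))  # -> {'bottle', 'can', 'jar'}
```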
Selection - Here, selection occurs on direct contact: touching an object with your hand or input device so that the two collide. The colliding object can then be grabbed when the grab button is pushed.
Committing - Here committing occurs when the grip button is pressed.
Selection - Bringing the input device near an object, without colliding with it, enables selection, usually highlighting the object from a short distance.
Committing - Here committing occurs when the grip button is pressed.
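Both variants, touch-to-select and near-proximity highlighting, can be expressed as a distance check followed by a commit when the grip button is pressed. A minimal sketch, where the 5 cm highlight radius and the object positions are illustrative assumptions:

```python
# Sketch of proximity selection and grip-button committing.
# Distances are in meters; the highlight radius is an assumed tuning value.

import math

HIGHLIGHT_RADIUS = 0.05  # select (highlight) when the hand is within 5 cm

def nearest_selectable(hand_pos, objects):
    """Return the closest object within the highlight radius, or None.

    With HIGHLIGHT_RADIUS near zero this degenerates to direct-contact
    selection; a larger radius gives the highlight-from-a-distance variant.
    """
    candidates = [
        (math.dist(hand_pos, pos), name)
        for name, pos in objects.items()
        if math.dist(hand_pos, pos) <= HIGHLIGHT_RADIUS
    ]
    return min(candidates)[1] if candidates else None

def update(hand_pos, grip_pressed, objects, held):
    """One frame of the select/commit loop. Returns the held object (or None)."""
    if held is not None:
        return held                                  # already committed to a grab
    hovered = nearest_selectable(hand_pos, objects)  # selection
    if hovered and grip_pressed:
        return hovered                               # committing: grip pressed while hovering
    return None

objects = {"mug": (0.0, 1.0, 0.3), "key": (0.5, 0.9, 0.2)}
held = update(hand_pos=(0.01, 1.0, 0.3), grip_pressed=True, objects=objects, held=None)
print(held)  # -> 'mug'
```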
Committing is when the user confirms their intent. When you press that button you were hovering over, you are committing to that action.
Robo Recall (Epic Games, 2017)
Committing - It occurs multiple times: when the joystick is released to commit to the teleport location and rotation, when grabbing the object from the floor, and when activating the red laser. Committing can be understood as the activation of an action that has been presented to the user as possible.
One example is a virtual handle object appearing on the item you are attempting to move.
Job Simulator (Owlchemy Labs, 2016)
Without additional icons or handles, the object is manipulated by interacting with it directly. While selected, it is the only object that can be interacted with.
A virtual tool is an intermediary object or interface used to control another object, similar to the co-located handle mentioned above. Think of a claw-machine arcade game: you operate the controls to move the claw, and the claw is the intermediary tool.
Involves using both hands to perform roughly the same movements in coordination at the same time to accomplish a task
Eg: Driving a car with both hands on the wheel, or manipulating a rotation as shown.
Involves using both hands to perform roughly the same movements in coordination, but not at the same time, to accomplish a task
Eg: Pulling a rope through a pulley, one hand pulling after the other in alternation (Shadow Point)
Involves using both hands to perform different movements in coordination at the same time to accomplish a task
Eg: Pulling your hands apart and together to affect scale, as sketched after these examples (Google Blocks)
Involves using both hands to perform different movements in coordination, with each hand moving at a different time, to accomplish a task
Eg: Fishing, where one hand controls the rod while the other controls the reel; the movements may or may not happen simultaneously (Gone Fishing)
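The two-handed scaling example above is simple to express numerically: the object's scale is multiplied by the ratio of the current distance between the hands to the distance when the gesture began. A minimal sketch, assuming both grips held starts the gesture (the positions and class name are illustrative):

```python
# Sketch of symmetric two-handed scaling, as in the Google Blocks example:
# the object's scale tracks the ratio of current to initial hand separation.

import math

class TwoHandScale:
    def begin(self, left_pos, right_pos, object_scale):
        """Call once when both grips are pressed to start the gesture."""
        self.initial_separation = math.dist(left_pos, right_pos)
        self.initial_scale = object_scale

    def update(self, left_pos, right_pos):
        """Call each frame while the gesture is held; returns the new scale."""
        ratio = math.dist(left_pos, right_pos) / self.initial_separation
        return self.initial_scale * ratio

gesture = TwoHandScale()
gesture.begin(left_pos=(-0.1, 1.0, 0.3), right_pos=(0.1, 1.0, 0.3), object_scale=1.0)
# Hands pulled apart to twice the initial separation -> object doubles in size.
print(gesture.update(left_pos=(-0.2, 1.0, 0.3), right_pos=(0.2, 1.0, 0.3)))  # -> 2.0
```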
Linear Example
Hyperbolic Example
A beam of light extends to trace a virtual path, pointing at the target surface: a "ray" is "cast" along a line vector.
Linear: Straight line
Hyperbolic: Curved path
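In code, the difference between the two is just the path function: a linear ray marches along a fixed direction, while the curved pointer bends downward over distance (a ballistic arc is one common way to build it, and is what this sketch assumes; the step size, gravity constant, and floor plane are illustrative tuning values):

```python
# Sketch of linear vs. curved pointer paths, sampled as lists of 3D points.
# The curved variant is approximated as a simple ballistic arc.

def linear_ray(origin, direction, length=10.0, steps=50):
    """Points along a straight ray: origin + t * direction."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    return [
        (ox + t * dx, oy + t * dy, oz + t * dz)
        for t in (length * i / steps for i in range(steps + 1))
    ]

def curved_ray(origin, velocity, gravity=-9.8, dt=0.02, floor_y=0.0):
    """Points along an arc that bends down until it reaches the floor plane."""
    x, y, z = origin
    vx, vy, vz = velocity
    points = [(x, y, z)]
    while y > floor_y and len(points) < 500:
        vy += gravity * dt            # gravity bends the path downward
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        points.append((x, y, z))
    return points                     # the last point is roughly the target

arc = curved_ray(origin=(0.0, 1.2, 0.0), velocity=(3.0, 1.0, 0.0))
print(f"arc lands near x = {arc[-1][0]:.2f} m")  # landing spot on the floor
```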
The raycast starts from the head/eye position and is controlled by head movement (as opposed to hand movement). Usually this means the starting point of the raycast cannot be seen, since the point of origin coincides with the eye position. Imagine a flashlight or laser strapped to the forehead.
Oculus Home (Oculus)
Tracked input devices can be things such as handheld controllers or other body trackers. Their visual representation can be a held object (eg. a sword, gun, or boxing glove) or a virtual replica of the real-world controller's shape. The user moves the device to interact with virtual objects, often through tracked motion alone rather than button presses.
Oculus Home Beta (Oculus, 2021)
In the absence of handheld hardware, the hands are tracked directly by the headset's cameras. This requires both the application and the headset to support hand tracking.
Example of how a physical keyboard may look in VR
Some VR applications offer the use of a real-world, physical keyboard (the one you use for desktop computing) inside the virtual world. In other words, instead of a virtual keyboard in the virtual world that you interact with via raycasts or virtual touch, your fingers interact with your regular, plastic keyboard to input characters and commands to the VR application. There may be a virtual representation of the keyboard, but the typing is done in the physical world outside of VR.
Of course, this interaction is not practical in room-scale environments, but it may occur in VR experiences that are stationary.
You may see applications that are able to sense or respond to particular user gestures, regardless of the input method used to capture the gesture or the operations that it triggers.
Beat Saber (Beat Games, 2018)
Spatial gestures may include sweeping motions, pointing, pulling, drawing, dismissing a menu, and so on.
Usually, spatial gestures imitate natural physical laws of cause and effect in 3D space.
Surface-based gestures refer to gestures that take place on a 2D surface or plane within the virtual environment.
They may borrow interaction patterns typically associated with touchscreens, such as taps, slides, and so on.
Symbolic gestures may include tracing a letter shape, waving goodbye, or using sign language.
They have been "assigned" explicit meaning by the application and are recognized as discrete tokens of user input.
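One simple way to turn a traced path into a discrete token, in the spirit of symbolic gestures: resample the stroke to a fixed number of points, normalize it, and compare it against stored templates, keeping the closest match under a threshold. The sketch below is a stripped-down cousin of template matchers such as the $1 recognizer; the template shapes, point count, and threshold are illustrative assumptions:

```python
# Stripped-down symbolic gesture recognizer: a traced 2D stroke is resampled,
# normalized, and matched against named templates by average point distance.

import math

N = 32  # number of resampled points per stroke

def resample(points, n=N):
    """Resample a polyline to n evenly spaced points."""
    total = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    interval, acc, out = total / (n - 1), 0.0, [points[0]]
    pts, i = list(points), 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval and d > 0:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the inserted point
            acc = 0.0
        else:
            acc += d
        i += 1
    out = out[:n]
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def normalize(points):
    """Scale to a unit box and translate so the stroke starts at the origin."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    x0, y0 = points[0]
    return [((x - x0) / w, (y - y0) / w) for x, y in points]

def recognize(stroke, templates, threshold=0.25):
    """Return the best-matching template name, or None if nothing is close."""
    candidate = normalize(resample(stroke))
    best_name, best_score = None, float("inf")
    for name, template in templates.items():
        ref = normalize(resample(template))
        score = sum(math.dist(a, b) for a, b in zip(candidate, ref)) / N
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score < threshold else None

templates = {
    "swipe_right": [(0, 0), (1, 0)],
    "check_mark": [(0, 0), (0.3, -0.3), (1, 0.6)],
}
print(recognize([(0, 0), (0.5, 0.02), (1.0, -0.01)], templates))  # -> 'swipe_right'
```

Once a stroke maps to a token like `swipe_right`, the application can treat it exactly like any other discrete input event, which is what "assigned explicit meaning" amounts to in practice.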
The path a gesture takes is used in nonverbal communication for emphasis or to add detail to the message that a person conveys or expresses.
(e.g. "Put that object over there", "I caught a fish this big")
Affective gestures are used for emotional, non-verbal interactions. Usually these are intuitive and subconscious, but they may require an understanding of emotional nonverbal language interaction for which people may have varying ability. For example, people who have autism spectrum disorder may have difficulty interpreting affective gestures.
Brass Tactics (Hidden Path Entertainment, 2018)
Signifiers can take the form of icons, shaders, animations, external light sources, and text annotations / tooltips. This type of design artwork focuses on calling the user's attention to a possible action, either at hand or nearby; the signifier informs the user that an option is available.
A Visual Shader signifier changes the texture or material of a game object in a way that indicates an action: when placing objects in slots (keys in a door), the object may turn green if it is the correct object, or red if it is misplaced or incorrect. In this case, the golden shader is turned on to indicate selection and the affordance of grabbing that tower.
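The slot-matching signifier described above is essentially a three-state lookup, correct, incorrect, or neutral, mapped to a material tint. A minimal sketch, where the color values and the slot-to-object rules are illustrative assumptions:

```python
# Sketch of a shader/color signifier for slot placement: the held object's
# tint tells the user whether the hovered slot will accept it.

ACCEPT_COLOR = (0.0, 1.0, 0.0)   # green: correct object for this slot
REJECT_COLOR = (1.0, 0.0, 0.0)   # red: misplaced or incorrect object
NEUTRAL_COLOR = (1.0, 1.0, 1.0)  # default material tint

SLOT_RULES = {"door_lock": "brass_key", "torch_sconce": "torch"}

def signifier_color(held_object, hovered_slot):
    """Pick the tint to apply to the held object's material this frame."""
    if hovered_slot is None:
        return NEUTRAL_COLOR                      # not near any slot
    if SLOT_RULES.get(hovered_slot) == held_object:
        return ACCEPT_COLOR                       # signals "this will work"
    return REJECT_COLOR                           # signals "wrong object"

print(signifier_color("brass_key", "door_lock"))  # -> (0.0, 1.0, 0.0)
print(signifier_color("torch", "door_lock"))      # -> (1.0, 0.0, 0.0)
print(signifier_color("brass_key", None))         # -> (1.0, 1.0, 1.0)
```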
Citation:
Joseph J. LaViola Jr., Ernst Kruijff, Ryan P. McMahan, Doug A. Bowman, Ivan Poupyrev. 3D User Interfaces: Theory and Practice, 2nd edition. Addison-Wesley Professional, 2017.
Jason Jerald, Ph.D. The VR Book: Human-Centered Design for Virtual Reality. Association for Computing Machinery and Morgan & Claypool Publishers, 2016.
Mixed reality UX elements - Mixed Reality. (2021, January 8). Microsoft Docs. Retrieved 2021, from https://docs.microsoft.com/en-us/windows/mixed-reality/design/app-patterns-landingpage