This section covers inclusive techniques for incorporating auditory and tactile components into game design, with the goal of providing a multisensory experience to visitors while addressing vision-related needs. It presents basic, intermediate, and advanced game-design techniques as well as specific software and hardware techniques, and it includes accessibility considerations for individuals who use assistive technologies for visual disabilities.
If the game uses field of view (3D engine only), allow a means for it to be adjusted
Ensure sound / music choices for key objects / events are distinct from each other
Give a clear indication that interactive elements are interactive
Ensure manual / website are provided in a screenreader friendly format
Provide separate volume controls or mutes for effects, speech and background/music
Avoid placing essential temporary information outside the player’s eye-line
Sonification of 3D Objects and Information: Sonar-style audio pulses can map key features of a scene into an audio map for the user. The user should be able to enable and disable this feature on command. Notable uses include directional alignment assistance and highlighting the location of key objects or goals. SenseTech Demo
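The sonar-style mapping described above can be sketched as a simple distance-to-sound function: closer objects ping faster and at a higher pitch. This is a minimal illustration, not any particular product's algorithm; all range and frequency defaults here are assumptions.

```python
def sonar_ping_params(distance_m, max_range_m=20.0,
                      min_interval_s=0.1, max_interval_s=1.5,
                      base_pitch_hz=220.0, max_pitch_hz=880.0):
    """Map an object's distance to a ping repetition interval and pitch.

    Closer objects ping faster and higher, mimicking a sonar sweep.
    All ranges and frequencies are illustrative defaults, not standards.
    """
    # Clamp and normalise distance to [0, 1] (0 = touching, 1 = max range).
    t = max(0.0, min(distance_m, max_range_m)) / max_range_m
    interval = min_interval_s + t * (max_interval_s - min_interval_s)
    # Exponential pitch mapping sounds more natural than a linear one.
    pitch = base_pitch_hz * (max_pitch_hz / base_pitch_hz) ** (1.0 - t)
    return interval, pitch

interval, pitch = sonar_ping_params(5.0)  # an object 5 metres away
```

A real implementation would feed these parameters into the audio engine's tone generator and re-query them as the user or the object moves.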
Binaural Audio - Binaural audio is the technique of reproducing the 3D stereo sound sensation a listener would expect in a real 3D environment. This includes directional sound and changes in audio direction that depend on the user's head orientation.
Communication Binaural Audio - This technique is notably used for directional alignment assistance, delivering instructions, and notifying users of the start and end of simulations. It presents audio in a direction-based format: for example, the spoken instruction "turn left" is heard primarily in the left ear. Note that this technique can be delivered both in a user-centric capacity (through a personal device and/or assistive technology) and through surround sound.
Example of this method 1: (Note this example requires stereo headphones)
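One minimal way to realise the direction-based cue above is constant-power stereo panning: a mono speech sample is scaled into left and right channel gains so that "turn left" is heard mostly on the left. This sketch is engine-agnostic and the gain law is an assumption, not a requirement of the technique.

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power stereo gains for a cue at the given azimuth.

    azimuth_deg: -90 (hard left) ... 0 (centre) ... +90 (hard right).
    Returns (left_gain, right_gain); apply them to a mono speech sample
    before mixing into a stereo buffer.
    """
    # Map azimuth to a pan angle in [0, pi/2]; cos/sin keep total power
    # constant so the cue stays equally loud wherever it is panned.
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)

left, right = pan_gains(-90.0)  # "turn left" cue: all energy in left ear
```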
Binaural Surround Sound 3D Soundscapes - Three delivery techniques exist within VR/AR/AI: surround sound speakers, stereo binaural headphones, and 3D audio headsets. The purpose of this audio is to represent the sound of the environment so that it matches the picture. For example, if a car passes from left to right on the screen, a front-facing user hears the audio move exactly left to right, whereas a backwards-facing user hears the car pass from right to left.
Example of this method 1: Projector Based Virtual Reality (Note this example requires stereo headphones)
Example of this method 2: Still image binaural representation (Note this example requires stereo headphones)
Example of this method 3: An example soundscape produced by Coppin and graduate students in Multi-Sensory Studio/Seminar (Inclusive Design, OCAD U) of a street scene depicted by Canadian artist William Kurelek.
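The head-orientation behaviour described above (the car heard left-to-right by a front-facing user but right-to-left by a backwards-facing one) reduces to transforming the source's world position into a head-relative azimuth. A minimal 2D sketch, with an assumed convention of yaw 0 meaning "facing +y":

```python
import math

def head_relative_azimuth(source_xy, listener_xy, listener_yaw_deg):
    """Azimuth of a sound source relative to the listener's facing direction.

    Positive = to the listener's right, negative = to the left (degrees).
    The same world-space car comes out on opposite sides once the
    listener turns around.
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    world_bearing = math.degrees(math.atan2(dx, dy))  # 0 = straight ahead (+y)
    rel = world_bearing - listener_yaw_deg
    return (rel + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)

# A car to the listener's world-left:
front = head_relative_azimuth((-5.0, 0.0), (0.0, 0.0), 0.0)    # heard left
back = head_relative_azimuth((-5.0, 0.0), (0.0, 0.0), 180.0)   # heard right
```

A binaural renderer would feed this azimuth (plus elevation and distance) into its HRTF stage each frame as the head tracker updates the yaw.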
Gaze tracking for HMD VR - Gaze tracking provides visually impaired users with information about their surroundings. With the tap of a button, the user hears an audio cue describing whatever object lies directly along their head gaze, up to a certain distance. This capability can be embodied in AR technologies through AI-based object, text, and face detection.
Example of this method 1: AI Developed by Microsoft
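The button-tap interaction above can be sketched as a gaze raycast followed by a spoken announcement. The `raycast` and `speak` callbacks here are hypothetical stand-ins for whatever the host VR engine and screen reader actually provide:

```python
def describe_gazed_object(raycast, speak, max_distance=10.0):
    """On a button tap, announce whatever object the head gaze points at.

    raycast(max_distance) -> (name, distance) or None, and speak(text)
    are hypothetical callbacks supplied by the host engine and TTS layer.
    Returns the spoken text so callers can log or caption it.
    """
    hit = raycast(max_distance)  # follow the head-gaze ray into the scene
    if hit is None:
        text = "Nothing within range"
    else:
        name, distance = hit
        text = f"{name}, {distance:.1f} metres ahead"
    speak(text)
    return text

# Usage with stubbed-out engine callbacks:
spoken = []
describe_gazed_object(lambda d: ("door", 2.34), spoken.append)
```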
Picture Alterations - An important part of making a digital medium accessible is ensuring there is an option to adjust the visual settings to a level that is comfortable for the user. Users with hypo- or hyper-sensitivity to light have unique needs. The following categories are standard adjustable options within VR/AR/AI platforms:
Brightness: Adjustment of the overall luminance in the visual view. Important for those with hyper- or hypo-sensitivity to light.
Contrast and Picture Sharpness: Adjusts the brightness difference between each pixel and the average of its adjacent pixels. This enhances the picture for users with hypo-sensitivity to details found in the virtual scene.
Recoloring: Important for those with color blindness. For users with hyper-sensitivity to high-fidelity color palettes, these can be interchanged with simple color palettes to reduce sensory overload.
Edge Enhancement: Option to enable a thin outline around all 3D objects or pictures. This helps users determine key information such as orientation and objects of interest, and helps low-vision users learn more about the general shape of the image.
Text Augmentation: Adjust the size, font, and color of text to ensure it can be read by the user.
Magnification Lens: Provide a tool, either automatic or user controlled, that offers close-ups of far away or hard-to-see objects.
Described video - This technique consists of pausing the experience and allowing the user to listen to a description of the virtual environment. Mimicking described-video techniques from TV broadcasting, this method gives the user a clear understanding of the objects, people, and narrative of the picture.
Example of this method 1: 2D video
Example of this method 2: 3D video
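The pause-then-describe flow can be sketched as a small controller object. The `speak` callback is a hypothetical text-to-speech hook; real implementations would also duck game audio while the description plays:

```python
class DescribedScene:
    """Minimal sketch of TV-style described video in a virtual scene:
    freeze the experience, read a stored description, then resume."""

    def __init__(self, description, speak):
        self.description = description
        self.speak = speak          # hypothetical TTS callback
        self.paused = False

    def describe(self):
        self.paused = True          # freeze the experience
        self.speak(self.description)

    def resume(self):
        self.paused = False         # hand control back to the user

spoken = []
scene = DescribedScene("A busy street scene in winter.", spoken.append)
scene.describe()
```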
Option for Removal of Visual Effects - Special effects in VR/AR and AI applications should be used sparingly, especially if the chosen digital medium is a head-mounted display. Effects can greatly disrupt the user's sense of balance or expose the user to an experience that overwhelms them. Examples of these effects include the following:
Flickering Lights: Ability to disable effects for users with hypersensitivity to a rapid change in brightness. This is especially important for users with a health history that includes seizures.
Highly animated or detailed backgrounds: In 3D VR/AR and AI applications, the draw distance of a picture can be reduced or blurred to ensure that only the primary features are conveyed to the user.
Screen overlay effects: Effects such as the screen changing color, blur or bloom (brightness) should be avoided or removable by the user.
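In practice, removability means every such effect is gated behind a user setting. A minimal sketch of that gating; all field and effect names here are illustrative, not taken from any particular engine:

```python
from dataclasses import dataclass

@dataclass
class EffectSettings:
    """User-facing toggles for potentially overwhelming visual effects."""
    flicker_enabled: bool = False       # off by default: seizure safety
    background_detail: float = 1.0      # 0.0 blurs/simplifies backgrounds
    overlay_effects_enabled: bool = True

def effect_allowed(effect_name, settings):
    """Check an effect request against the user's settings before drawing."""
    if effect_name == "flicker":
        return settings.flicker_enabled
    if effect_name in ("color_wash", "blur", "bloom"):
        return settings.overlay_effects_enabled
    return True

safe = EffectSettings(overlay_effects_enabled=False)
```

The key design choice is that hazardous effects such as flicker default to off, so safety never depends on the user finding the setting first.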
Increase Picture Quality: Use equipment that provides a higher pixel density. This increases the fidelity of picture alterations and provides a more detailed experience as a baseline. For projector-based virtual reality, this means purchasing higher-end projectors with a higher pixel count. The technique is best embodied by head-mounted displays, where 8K resolution (4K per eye) is commercially available.
Short Focal Length: For low-vision users, the shorter the distance between the lens and the person's eyes, the greater the perceived picture sharpness. A head-mounted display improves this component, as the lens sits within an inch of the person's eye. For projector-based virtual reality platforms, using ultra-short or normal short-throw projectors allows the user to approach the screen closely and get a better view of the image without masking it with a shadow. Other options include projection onto the reverse plane (transparent screens) or the use of LED monitors.
Auto Calibration of Lens Distance for Head Mounted Display Headsets: Each user's head and eye proportions are different. As a baseline, each headset can be manually adjusted for this property. With face detection software, the lens distance can instead be adjusted automatically to be optimal for each user.
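One way this auto calibration can work is to estimate the interpupillary distance (IPD) from face-detected eye positions and clamp it to the headset's mechanical adjustment range. A sketch under stated assumptions: the pixel-to-millimetre scale would come from camera calibration, and the 58-72 mm range is illustrative:

```python
def auto_lens_separation(left_eye_px, right_eye_px, mm_per_px,
                         min_sep_mm=58.0, max_sep_mm=72.0):
    """Estimate the user's IPD from detected eye centres (in pixels)
    and clamp it to the headset's mechanical adjustment range (mm)."""
    ipd_mm = abs(right_eye_px - left_eye_px) * mm_per_px
    return max(min_sep_mm, min(max_sep_mm, ipd_mm))

# Eyes detected 220 px apart at an assumed 0.29 mm per pixel:
sep = auto_lens_separation(210, 430, 0.29)
```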
Force-Feedback Haptic Glove - A force-feedback glove is a motion-capture controller that can interface with VR/AR and AI applications. This hardware provides the following major features:
Provides force feedback: Restricts the user's fingers when interacting with virtual objects.
Provides haptic (vibration) feedback: Independent vibration cells on each finger and the palm activate when the hand encounters a virtual object.
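A simple control model for both features is to drive each finger's brake force from how far the fingertip has penetrated a virtual object, and fire the vibration cell whenever there is any contact. This is an illustrative spring-like model, not any vendor's actual controller logic:

```python
def finger_feedback(penetration_mm, stiffness=0.2, max_force=1.0):
    """Per-finger feedback for a force-feedback glove (illustrative model).

    Returns (brake_force, vibrate): brake force grows with how far the
    fingertip has pushed into the virtual object (normalised 0-1), and a
    vibration pulse fires whenever there is any contact at all.
    """
    if penetration_mm <= 0.0:
        return 0.0, False            # no contact: finger moves freely
    force = min(max_force, stiffness * penetration_mm)
    return force, True

force, vibrate = finger_feedback(3.0)  # fingertip 3 mm into a virtual object
```

Running this per finger each physics frame gives soft objects a gradual resistance and rigid ones a hard stop once the force saturates at `max_force`.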
Haptic Bodysuit - A haptic bodysuit can transfer sensation from virtual reality to the human body through electric impulses controlled by a mini computer (control unit) with an advanced on-board motion capture system. The technology is based on neuromuscular stimulation, which is widely used in electro-therapy, medicine, and professional sport. [14] Haptic suits incorporate a mesh of sensors that can deliver a wide range of sensations, such as touch, wind, water, heat, and cold, as well as force, through mild electric pulses. The suit can also collect data from the body for real-time motion tracking and various biometric parameters.
Ultrahaptics Surfaces - Focused ultrasound propels air to recreate a 3D object in virtual space. The sound waves are transformed into detectable shear waves that can be felt by the user's skin. These waves activate specific kinds of skin receptors that are tuned to adapt to stimuli in predictable ways: some receptors switch off quickly, while others continue to send neural spikes to the brain as long as the stimulus is present. This allows users to perceive realistic 3D objects that are represented purely by a digital soundscape.
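The focusing behind this technique is phased-array beamforming: each emitter is fired with a delay chosen so that all wavefronts arrive at the focal point simultaneously, concentrating pressure there. A minimal geometric sketch (positions in metres, delays in seconds; a real array controller would convert these to per-channel phase offsets):

```python
import math

def emitter_delays(emitters, focal_point, speed_of_sound=343.0):
    """Firing delays that focus an ultrasound emitter array on one point.

    Nearer emitters wait longer, so every wavefront reaches the focal
    point at the same instant and the pressure peaks there.
    """
    dists = [math.dist(e, focal_point) for e in emitters]
    farthest = max(dists)
    return [(farthest - d) / speed_of_sound for d in dists]

# Three emitters in a line, focusing 10 cm above the middle one:
delays = emitter_delays([(-0.05, 0.0), (0.0, 0.0), (0.05, 0.0)],
                        (0.0, 0.10))
```

Sweeping the focal point along an object's surface, fast enough that the skin's receptors keep responding, is what lets the user trace out the shape in mid-air.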