Format and Topics
The purpose of the workshop is to exchange ideas and experiences. We will have talks followed by time for discussions, roundtable meetings, and focus groups.
Friday night will feature talks from users, such as expert echolocators. Saturday morning will feature talks from scientists.
Major topics will include:
- Human Echolocation
- Scientific Models of Human Perception
- Brain Plasticity and Development (with a focus on perception)
- New Methods of Sensory Substitution/Augmentation
More specifically, the individual talks will cover:
- Friday, 13:00, Daniel Kish: Perspectives on Flash-Sonar.
- Saturday, 9:30, Jenny Read: Depth Perception and Cue Combination. It can be tempting to think of our perceptions as reporting objective truths about the state of the world around us. However, scientists now think of perception as a more active process, with our brains constantly inferring the most likely state of the world, based on sensory inputs and our own previous knowledge and experience of the world. To help with this, we combine all available sources of information, or “cues”, both within a sense (like vision) and across senses (like combining vision and touch). Depth perception is one of the most challenging aspects of vision, as information about distance and 3D structure has to be inferred from a flat retinal image. I will briefly discuss the visual cues we use to achieve this. (A brief textbook sketch of how such cues can be combined appears after this list.)
- Saturday, 10:00, Marko Nardini (co-authors James Negen and Lore Thaler): Augmenting Human Perception. When can an augmented sense (such as human echolocation or perception through a vision-to-sound device) feel and function like a ‘native’ sense? In our research we are asking to what extent the brain treats new senses comparably to more established ones. We are studying three key markers for this. Does the brain use the same methods for dealing with noise and conflicting signals? Does the processing become “automatic”? Is it done using the same sensory brain areas involved in standard perception? Our research aims to find out whether it is possible to meet these criteria, and if so, to find the best ways to do it. I will discuss our recent work, teaching vision-to-sound and touch-to-sound augmentations to sighted people, and sketch some of our future plans to further study the flexibility of human perception.
- Saturday, 10:30, Quoc Vuong: Please DO touch the display! Using next generation touchscreens to enhance users' multimedia experience. In places like art galleries, visitors are often warned "Please DON'T touch the display!". But touch can be a powerful sense to enhance a person's visual experience. In the Haptic Vision project, we are using next-generation touchscreens that can provide users with realistic touch sensations, like the roughness of a surface. I will present current studies that explore how blind and sighted people perform on different tasks with these touchscreens, and how their brains make sense of the sensations from these touchscreens.
- Saturday, 11:30, Amir Amedi: What Longitudinal Sensory Substitution Studies Teach Us about Brain Organization and Neuroplasticity.
- Saturday, 12:00, Lore Thaler (co-authors Liam Norman and Dorothy Cowie): Click-Based Echolocation and Its Use for People with Visual Impairments. When vision is unavailable, information about the space that surrounds us is provided by our other senses. Whilst touch is useful for things that are close by (including with the extension of a tool like a cane), hearing can give us information about things that are out of reach. Whilst for regular source hearing, people rely on the environment itself to make a sound, click-based echolocation gives the user more control, because the user can decide when and how to make clicks to investigate their surroundings. I will talk about some of our research findings, which strongly suggest that the use of click-based echolocation by people with vision impairments is not limited to gaining more sensory information from each and every click, but that there appear to be longer-term benefits on a more general level. The specific examples I will give concern walking, in particular walking paths and walking speed, and brain function, in particular how the brain adapts early visual cortex to process sound.
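As an illustrative aside (not drawn from any of the abstracts above): the cue combination mentioned in Jenny Read's talk is commonly formalised in the perception literature as reliability-weighted averaging, in which each cue's estimate is weighted by the inverse of its variance. The symbols below are generic placeholders rather than the speakers' own notation; for two cues giving estimates \(\hat{s}_1\) and \(\hat{s}_2\) of the same quantity:

\[
\hat{s} = w_1 \hat{s}_1 + w_2 \hat{s}_2,
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2},
\qquad
\sigma_{\mathrm{combined}}^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}.
\]

Because the combined variance is never larger than that of the most reliable single cue, combining cues within a sense or across senses (for example vision and touch, or the auditory cues used in echolocation) can only improve the precision of the resulting estimate under this simple model.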