Speakers


 
Prof. Alan Johnston

Cognitive, Perceptual and Brain Sciences and CoMPLEX, University College London


Talk Title: From local motion to global motion

Abstract: When an object translates rigidly, the local speeds orthogonal to its contours vary with the cosine of the angle between the local contour normal and the global motion direction. An array of Gabor elements whose speeds change with local spatial orientation in accordance with this pattern appears to move as a single surface. This global Gabor array has proved a useful tool for investigating the integration of local motion signals. We have found that integration is influenced by the spatial arrangement of the Gabor elements, that motion integration precedes spatial shifts and motion-drag phenomena, and that the apparent global motion direction can also influence the representation of local motion vectors. We have also compared two theoretical approaches to motion integration – the standard intersection of constraints (IOC) solution and a new combination rule, the harmonic vector average (HVA). The harmonic vector average has the advantage over the vector average of providing a viable speed estimate for global Gabor arrays. Psychophysical experiments using Gabor arrays in which the local orientation distribution is biased with respect to the global motion direction allow us to separate the HVA and IOC predictions. Our data show that perceived direction and speed fall between the IOC and HVA predictions, indicating some role for the harmonic vector average in motion integration.
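
For concreteness, the two combination rules can be sketched numerically. The abstract does not give formulas, so this minimal Python sketch follows the standard published definitions – IOC as the least-squares intersection of the 1D velocity constraints, and HVA as inversion of each component velocity through the unit circle, averaging, and inversion back; the biased orientation sample is illustrative:

    import numpy as np

    def ioc(normals, speeds):
        """Least-squares intersection of constraints: the 2D velocity whose
        projection onto each unit contour normal matches the measured speed."""
        N = np.asarray(normals, float)   # (k, 2) unit normals to contours
        s = np.asarray(speeds, float)    # (k,) speeds along those normals
        return np.linalg.solve(N.T @ N, N.T @ s)

    def hva(normals, speeds):
        """Harmonic vector average: invert each component velocity through
        the unit circle (v / |v|^2), average, and invert the mean back."""
        v = np.asarray(normals, float) * np.asarray(speeds, float)[:, None]
        inv = v / np.sum(v**2, axis=1, keepdims=True)
        m = inv.mean(axis=0)
        return m / np.sum(m**2)

    # Rigid translation at 1 deg/s rightwards, sampled by Gabors whose
    # orientations are biased to one side of the motion direction.
    V = np.array([1.0, 0.0])
    ang = np.deg2rad([10, 20, 30, 40, 50])            # biased normal directions
    normals = np.stack([np.cos(ang), np.sin(ang)], 1)
    speeds = normals @ V                              # the cosine rule above
    print("IOC:", ioc(normals, speeds))               # recovers [1, 0]
    print("HVA:", hva(normals, speeds))               # slower, rotated toward the bias

For an orientation distribution symmetric about the motion direction the two rules agree; biasing the distribution, as in the experiments described above, pulls the HVA away from the IOC in both direction and speed, which is what makes the two predictions separable.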

Biography: Alan Johnston is Research Strategy Director and former Director of CoMPLEX, and was previously Head of the Department of Psychology at UCL. He is a member of the Institute of Cognitive Neuroscience and an Honorary Professor at Nottingham University. He also leads Vision@UCL, an interdisciplinary network. He has 30 years' experience in experimental psychology, the neurobiology of vision and computational modelling. His interests range from the computational modelling of human image-motion processing, through the effects of adaptation on visual time perception, to the capture, representation and perception of facial movement. His work has been supported by the MRC, EPSRC, BBSRC, the Leverhulme Trust, the Wellcome Trust, HFSP, the EU, the ARC and NTT, and he is a co-PI on the CoMPLEX DTC and the 2020 Science programme. He has published over 90 journal articles, including 5 in Current Biology and 3 in Nature.
 

Prof. Steven Dakin

Institute of Ophthalmology, University College London

Talk Title: Visual Crowding

Abstract: Foveal vision deals with only the central 1% of the visual field, leaving our peripheral vision to deal with the remaining 99%. Peripheral vision is limited not by acuity, or by contrast sensitivity, but by crowding – the tendency of clutter to interfere with our ability to recognise objects presented in the periphery. Crowding matters for two reasons. First, understanding crowding promises to reveal the limits of, and by extension the building blocks of, human object recognition. Second, many people with loss of central vision are forced to use their peripheral vision for everyday tasks, and crowding limits, for example, face recognition in such people; by understanding crowding we hope to help patients overcome it, for example through the development of targeted visual aids.
I will describe work in my lab that has uncovered several aspects of crowding:
•    Crowding doesn’t make us uncertain; it actively changes the appearance of our visual world.
•    The regularisation of appearance produced by crowding is equivalent to an averaging or smoothing of information in the periphery (see the sketch after this list).
•    Averaging happens across attributes like orientation, but also across position. (Blending of feature positions explains why letters are so “crowd-able”.)
•    Crowding is all-or-nothing and essentially probabilistic.
•    Crowding is not a limitation of attention.
•    The results of crowding manifest in many cortical areas but appear to be strongest in later visual areas such as primate V4.
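
As a toy illustration of the averaging and all-or-nothing claims above (a sketch of the idea only, not the lab's actual model), consider a trial-by-trial simulation in Python in which crowding either pools the target with its flankers or leaves it untouched; the pooling probability p_crowd is an illustrative parameter:

    import numpy as np

    rng = np.random.default_rng(0)

    def crowded_percept(target, flankers, p_crowd=0.7):
        """Toy all-or-nothing averaging account of crowding: on a proportion
        p_crowd of trials the report is the mean of target and flanker
        features; otherwise the target is seen veridically."""
        if rng.random() < p_crowd:
            return np.mean([target] + list(flankers))  # compulsory averaging
        return target                                  # crowding fails on this trial

    # Target at 0 deg flanked by +20 and -10 deg elements.
    reports = [crowded_percept(0.0, [20.0, -10.0]) for _ in range(10000)]
    print("mean report:", np.mean(reports))

The resulting response distribution is bimodal – a veridical mode plus a pooled mode – rather than a single broadened distribution, the signature that separates probabilistic, all-or-nothing pooling from mere uncertainty.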

Biography: Steven Dakin is Professor of Visual Psychophysics at the UCL Institute of Ophthalmology, part of University College London. He is a member of the Institute of Cognitive Neuroscience, an investigator at the National Institute for Health Research Centre at Moorfields Eye Hospital, and holds an honorary position at Massachusetts Eye and Ear Hospital in Boston, MA. His research interests centre on spatial vision (form perception, texture vision and face recognition), low vision (amblyopia, macular degeneration and glaucoma) and the visual deficits associated with schizophrenia and autism. He primarily uses visual psychophysics, functional brain imaging and computational modelling to explore these areas. His work has been supported by the BBSRC, the Wellcome Trust, Fight for Sight UK, the Special Trustees of Moorfields Eye Hospital and the UK BMRC. He currently sits on the editorial board of the Journal of Vision and has published over 90 articles in journals including Current Biology (5), PNAS (3), and Nature (1).
 

 Dr. Shin’ya Nishida

NTT Communication Science Labs, Nippon Telegraph and Telephone Corp.

Talk Title: Hierarchical processing of motion information by human vision

Abstract: In this talk, I will give a global view of visual motion processing as revealed by human perception studies, including our own. Motion processing consists of multiple stages. First, raw motion signals are extracted. These raw signals are 1D (orientation specific), multi-scale (spatial-frequency specific) and local (position specific). The next stage computes 2D and/or global motion signals through interactions among the raw motion signals. The output of this stage is the spatiotemporal pattern of retinal motion vectors. Finally, through analysis of the pattern of motion vectors, human observers recognise moving objects and events, such as a human walker, running water, and self-motion. Contrary to the earlier belief that visual motion is processed within a specialised module, it has been shown that motion processing interacts tightly with form and colour processing.
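
The abstract does not name a model for the first stage, but a standard candidate for extracting raw, orientation- and spatial-frequency-specific signals is the motion-energy detector. A minimal Python sketch (illustrative parameters throughout) computes opponent energy for a drifting grating from quadrature pairs of filters oriented in space-time:

    import numpy as np

    x = np.linspace(-1, 1, 64)   # one spatial dimension, for brevity
    t = np.linspace(-1, 1, 64)   # time
    X, T = np.meshgrid(x, t)

    def st_gabor(fx, ft, phase):
        """Spatiotemporal Gabor tuned to spatial frequency fx and temporal
        frequency ft; its preferred speed is ft / fx."""
        env = np.exp(-(X**2 + T**2) / (2 * 0.2**2))
        return env * np.cos(2 * np.pi * (fx * X + ft * T) + phase)

    def motion_energy(stim, fx, ft):
        """Quadrature (phase-invariant) energy for one direction: squared
        responses of two filters 90 degrees apart in phase, summed."""
        return sum(np.sum(stim * st_gabor(fx, ft, p))**2 for p in (0, np.pi / 2))

    stim = np.cos(2 * np.pi * (4 * X - 4 * T))        # grating drifting rightwards
    right = motion_energy(stim, 4, -4)                # matched to rightward drift
    left = motion_energy(stim, 4, 4)                  # matched to leftward drift
    print("opponent energy (right - left):", right - left)   # positive

Each such detector signals only the motion component orthogonal to its preferred orientation (the aperture problem), which is why the second, integrative stage described above is needed to recover 2D motion vectors.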

Biography: Shin'ya Nishida is a Senior Distinguished Scientist and Group Leader of the Sensory Representation Research Group, Human Information Science Laboratory, NTT Communication Science Laboratories, Atsugi, Japan. He was born in Osaka in 1962. He received his BA, MA and PhD degrees in Psychology from the Faculty of Letters, Kyoto University. After spending two years as a postdoc at ATR Auditory and Visual Perception Laboratories, he has been working at NTT since 1992. He was an Honorary Research Fellow in the Department of Psychology and Institute of Cognitive Neuroscience, University College London (1997-1998), and a Visiting Professor at Tokyo Institute of Technology (2006-2012), and is a Visiting Professor at the National Institute for Physiological Sciences (2008-). He is an editorial board member of the Journal of Vision and Vision Research, and a member (RENKEI KAIIN) of the Science Council of Japan. He was awarded the JSPS (Japan Society for the Promotion of Science) Prize and the JPA (Japanese Psychological Association) International Prize in 2006. His main research is the psychophysical investigation of the mechanisms of human visual perception, including visual motion perception, material perception, time perception and cross-attribute/cross-modal interactions.
 

Dr. Lewis Griffin

Computer Science and CoMPLEX, University College London


Talk Title: Image Texture: Representation and Applications

Abstract: Within computer vision, methods of representing texture in images have been greatly advanced over the last decade by the use of Bag-of-Textons (BoT) representations, which use a histogram of the local structural features (textons) present in an image. I will present a BoT representation that gives leading performance, across a range of databases, at the task of retrieving from a database the texture that matches a sample. The representation is based on textons defined in terms of Basic Image Features: local symmetry features computed in a principled manner from the responses of a bank of derivative-of-Gaussian linear filters, similar to V1 simple cells.
I will show the effectiveness of this type of image representation in two applications where texture would be expected to be useful: estimation of wind speed from the weathering patterns on sand grains, and identification of cell types and stress levels in phase microscopy images. I will also show that texture is more broadly useful than might be expected, demonstrating this with results on identification of the authorship of handwriting, and on learning the appearance of object categories from the statistics of language.
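
A minimal Python sketch of a Bag-of-Textons pipeline follows, with one substitution flagged: Basic Image Features assign each pixel to a local symmetry class by fixed rules, whereas this sketch quantises the derivative-of-Gaussian responses with a generic k-means codebook. The filter bank, per-pixel labelling and histogram matching are otherwise as described; the data are random stand-ins:

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.cluster.vq import kmeans2

    def dog_responses(img, sigma=2.0):
        """Per-pixel responses of a small derivative-of-Gaussian filter bank
        (first- and second-order derivatives), one feature vector per pixel."""
        orders = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 1)]
        feats = [gaussian_filter(img, sigma, order=o) for o in orders]
        return np.stack(feats, axis=-1).reshape(-1, len(orders))

    def bag_of_textons(img, codebook):
        """Normalised histogram of nearest-texton assignments over all pixels."""
        f = dog_responses(img)
        labels = ((f[:, None, :] - codebook[None, :, :])**2).sum(-1).argmin(1)
        h = np.bincount(labels, minlength=len(codebook)).astype(float)
        return h / h.sum()

    rng = np.random.default_rng(1)
    train = [rng.random((64, 64)) for _ in range(4)]   # stand-in texture images
    codebook, _ = kmeans2(np.vstack([dog_responses(im) for im in train]), 16, seed=1)
    hists = np.array([bag_of_textons(im, codebook) for im in train])
    query = bag_of_textons(train[2], codebook)
    chi2 = ((hists - query)**2 / (hists + query + 1e-12)).sum(1)
    print("best match:", chi2.argmin())                # expect 2

Retrieval then reduces to finding the stored histogram closest to the query's, here with a chi-squared distance.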

Biography: Lewis D. Griffin received a BA in Mathematics & Philosophy in 1988 from Oxford University, UK. He studied for a PhD while a Research Assistant in the Department of Neurology, Guy’s Hospital, London. In 1995, he was awarded a PhD from the University of London for a thesis (“Descriptions of Image Structure”) in the area of computational vision. In 1997, following postdoctoral positions at INRIA Sophia-Antipolis, France, and the University of Surrey, UK, he was appointed Lecturer in Vision Sciences at Aston University, UK. In 2000, he moved to Medical Imaging Sciences, King’s College London. In 2005, he moved to Computer Science, UCL, where he is at present, now a Senior Lecturer. He is a co-director of CoMPLEX, UCL’s Centre for Mathematics and Physics in the Life Sciences and Experimental Biology, and is on the steering committee of the Centre for Forensic Science. His research interests are image structure, colour vision and computer vision.
 

Dr. Xuefeng Liang

Dept. of Intelligence Science and Technology, Kyoto University, Japan

Talk Title: Salient Motion Detection using Potential Surface

Abstract: A challenge in motion segmentation is that different motions are often mixed up and interdependent in real data. Because the 2D representation of such dependent motions is ambiguous, segmentation becomes difficult. In this talk, we will discuss a motion segmentation and recovery method for complex scenarios. Using an invariant interpretation of the motion vector, we first transform the 2D motion vector field into a 3D potential surface, in which different motions are placed on different layers so that they can be segmented much more easily. By applying surface fitting, the potential surfaces of the global and local motions are then estimated. Finally, the recovered motions are obtained by projecting the segmented potential surfaces back to the motion field. Using potential surfaces makes our method able to deal with both independent and dependent, rigid and non-rigid motions without prior knowledge. We will also explore its applications in video alignment and action recognition.
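
The abstract does not define the potential-surface construction, so the Python sketch below is one generic reading: treat the flow as (approximately) the gradient of a scalar potential, recover that potential with an FFT Poisson solve, and separate motions by surface height. All names, the periodic-boundary assumption and the test field are illustrative; the talk's actual construction may differ:

    import numpy as np

    def potential_from_flow(u, v):
        """Least-squares scalar potential phi with grad(phi) ~ (u, v),
        via an FFT Poisson solve (assumes periodic boundaries)."""
        h, w = u.shape
        fy = np.fft.fftfreq(h)[:, None]
        fx = np.fft.fftfreq(w)[None, :]
        # The divergence of a gradient field is the Laplacian of its potential.
        div = np.fft.fft2(np.gradient(u, axis=1) + np.gradient(v, axis=0))
        denom = -(2 * np.pi)**2 * (fx**2 + fy**2)
        denom[0, 0] = 1.0    # avoid dividing by zero at DC (constant offset only)
        phi = np.real(np.fft.ifft2(div / denom))
        return phi - phi.min()

    # Two superimposed motions: a global drift plus a local expanding patch.
    y, x = np.mgrid[0:64, 0:64]
    u = np.full((64, 64), 0.5)                         # global rightward drift
    v = np.zeros((64, 64))
    r2 = (x - 32.0)**2 + (y - 32.0)**2
    u += np.where(r2 < 100, (x - 32.0) * 0.1, 0.0)     # local expansion
    v += np.where(r2 < 100, (y - 32.0) * 0.1, 0.0)
    phi = potential_from_flow(u, v)
    layer = phi > np.median(phi)                       # height separates the layers
    print("pixels on the local-motion layer:", layer.sum())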

Biography: Xuefeng Liang is a G30 Associate Professor in the Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University. He was awarded a PhD degree in information science from the Japan Advanced Institute of Science and Technology, Japan, in 2006. After that, he worked on vision systems for robotics as a postdoc in the Ubiquitous Functions Research Group, ISRI, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan. In 2008, he moved to London and worked on visual perception at both the Vision Lab, Queen Mary University of London, and the Department of Psychology, University College London, UK. In 2010, he was appointed Associate Professor in the Graduate School of Informatics at Kyoto University. His interests include computer vision, pattern recognition, image processing, and computational geometry.