Research Fellow at the Sensory Motor Neuroscience (SyMoN) Laboratory

School of Psychology, University of Birmingham, UK.

Contact Information

E-mail address: a.r.koene_at_bham.ac.uk



Research Interests

My core research interest is to understand how humans and other living creatures perceive and behave: how they respond to and act upon their environment, and how this dynamic interplay shapes us into who we are. In recognition of the broad scope of this question I am a strong believer in the need for inter- and multi-disciplinary approaches, and I have worked in research groups at a wide range of departments and institutions, including university departments of Physics and Psychology, a biomedical research lab, a robotics research laboratory and the RIKEN Brain Science Institute. Though my work has primarily taken the form of computational neuroscience, I have also performed behavioural, psychophysical and Human-Robot Interaction experiments with healthy human subjects, been involved in neural imaging experiments and contributed to the development of a humanoid robot.

Based on the philosophy of 'understanding through creating', I believe that bio-mimetic and biologically inspired computational and robotic engineering can teach us not only how to build more flexible and robust tools, but also how living creatures actually deal with their environment. I am therefore a strong advocate of the fertile exchange of ideas between scientific and engineering research disciplines.

Current areas of investigation include: 
- Developing a Computational Social Science network at the University of Birmingham
- Establishing a Behavior Informatics platform to provide open access to behavior data and models for a deeper, more integrated understanding of human and animal behavior.
- Human-Human object transfer interactions for the purpose of improving Robot-Human interaction. As part of this project I am also running an internet-based experiment on gesture recognition.

Previous and ongoing work includes: 
- Computational modelling of action selection in the Basal Ganglia, hippocampus-based learning of associations between sensory inputs and spatial locations, and amygdala-based acquisition of value associations with these sensory input patterns. Together these form the basis of a robot control system that uses emotion-inspired processing to bias decision making in mobile robots.
- Top-down modulation of sensory processing, auditory stimulus localization and sensori-motor control in humanoid robots, salience of search targets with multiple features, cross-modal (audio-visual) search, auditory modulation of visual object salience, cross-modal facilitation in signal discrimination, bi-stable visual perception, cerebellar control of saccadic eye movements. 

Current Projects

Computational Social Science

Computational Social Science (CSS) lies at the intersection of applied mathematics, statistics, computer science, and the social sciences, combining ideas from each of these to discover and understand patterns of individual and group behaviour. The aim of this work is to establish greater interaction and cooperation between the computational and social science research groups, making use of modern data collection and processing opportunities to gain a better understanding of human behaviour at both the individual and group levels. The first stage in this process is a workshop, which I am currently organizing through the Institute of Advanced Studies, that aims to establish a more permanent Computational Social Science network at the University of Birmingham.

Behavior Informatics

The Behavior Informatics project aims to complement existing Neuroinformatics efforts with a platform focused on the sharing, mining, analysis and modeling of human and animal behavior. Behavior Informatics will integrate behavioral information across multiple scales and disciplines to discover new insights concerning the interaction between low-level actions (or single-agent behavior) and global (or societal) behaviors. For this purpose it will bring together related studies in disciplines ranging from Biology, Neuroscience and Psychology to Computer Science and Robotics, in order to encourage cross-fertilization on the basis of correlations in data patterns, shared problem spaces, and complementary modeling and experimental design. By applying data mining methods as well as wiki-style human collaboration, Behavior Informatics will explore the correlations between different experiments, data sets and models to establish a clear ontology of behaviors and discover new links between behaviors at micro and macro scales.
More information can be found at the dedicated Behavior Informatics project webpages https://www.behaviorinformaticsproject.org.


CogLaboration: Real World Human-Robot Collaboration

The EU FP7-ICT project "CogLaboration" aims to develop "Real World Human-Robot Collaboration: From the Cognition of Human-Human Collaboration to the Cognition of Fluent Human-Robot Collaboration". As part of the CogLaboration project at the SyMoN lab I am using Motion Capture and PHANToM robot experiments to measure the forces, movements and interactions between humans performing simple object transfer tasks. Based on the data we collect from Human-Human interactions, our CogLaboration partners will produce a robot-arm control system for Robot-Human object transfer interaction. This, in turn, will contribute to the development of service robots capable of safe and direct interaction with humans at home or at work.



Previous Projects

Reinforcement learning of temporal context effects 

At the Integrated Theoretical Neuroscience lab of the RIKEN Brain Science Institute I worked on the data analysis and computational modeling of dopamine neuron activity that was recorded in tasks where monkeys learned the temporal pattern of inter-trial reward probability changes (Nakahara et al., Neuron, 41, 269-280, 2004; Bromberg-Martin et al., Neuron, 67, 499-510, 2010).
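As background for how such models are typically built: dopamine responses are commonly modeled as a temporal-difference (TD) reward prediction error. The sketch below is a minimal, generic TD(0) update, not the published model; the number of states, learning rate and discount factor are illustrative assumptions.

```python
import numpy as np

# Minimal TD(0) sketch: a dopamine-like prediction error for one transition.
# V is a table of learned state values; alpha and gamma are illustrative.
def td_update(V, state, next_state, reward, alpha=0.1, gamma=0.95):
    delta = reward + gamma * V[next_state] - V[state]  # prediction error
    V[state] += alpha * delta                          # value update
    return delta

V = np.zeros(10)  # values for 10 hypothetical task states
delta = td_update(V, state=3, next_state=4, reward=1.0)
print(f"TD error (dopamine-like signal): {delta:.3f}")
```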


ICEA: Integrating Cognition Emotion and Autonomy


As part of the ICEA project at the ABRG I am involved in computational modelling of the Basal Ganglia circuit for action selection, as well as combining this circuit with models of the hippocampus and amygdala (developed by other labs that are part of the EU ICEA project) to control a simulated rat (ICEAsim). The different models, written in various programming languages (C, Python, Matlab), and the Webots robot simulation environment are combined using Brahms.
Short videos of the simulated rat performing a plus-maze goal-directed navigation task are available here: [Observer_POV] [Rat_POV]
The moments where the rat freezes at the end of the maze arms are when the rat is receiving its reward; the duration of freezing corresponds to the amount of reward being received.


Computational model of contour integration 

In this project we are modelling the lateral interactions between V1 hypercolumns to produce human-like contour integration (i.e. filling in of gaps between collinear line segments). My role in this project is as advisor and proof-reader, while the actual work is done by Zong-En Yu (PhD student at NTU in Taiwan).
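For intuition, the sketch below shows the kind of collinearity-dependent lateral connection weight such models typically use, in the spirit of 'association field' accounts of contour integration; the functional form and parameters here are illustrative assumptions, not the actual model.

```python
import numpy as np

# Toy association-field style lateral weight between two oriented edge
# elements: strongest when they are collinear, falling off with distance
# and with deviation from a smooth connecting path. All parameters are
# illustrative, not those of the actual model.
def lateral_weight(dx, dy, theta_i, theta_j, sigma_d=2.0, sigma_t=0.5):
    d = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)  # direction from element i to element j
    # Orientation differences wrapped to [0, pi/2] (orientations are 180-deg periodic)
    bend_i = np.abs(np.angle(np.exp(1j * 2 * (theta_i - phi)))) / 2
    bend_j = np.abs(np.angle(np.exp(1j * 2 * (theta_j - phi)))) / 2
    return np.exp(-d**2 / (2 * sigma_d**2)) * \
           np.exp(-(bend_i**2 + bend_j**2) / (2 * sigma_t**2))

# Collinear pair (both horizontal, displaced horizontally) vs. orthogonal pair
print(lateral_weight(2.0, 0.0, 0.0, 0.0))          # high: collinear
print(lateral_weight(2.0, 0.0, np.pi/2, np.pi/2))  # near zero: orthogonal
```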


Context effects on size perception: Psychophysics

Perceived size depends on a combination of retinotopic size (the size of the projection of an object on the retina), prior knowledge concerning the sizes of this type of object and the visual context that the object is presented in. Visual context can provide cues concerning the distance to the object as well as visual references such as objects of known size relative to which judgements can be made. In this study we used psychophysical measurements to determine how different texture backgrounds affect the perceived size of a test disk. This study was done at the National Taiwan University with Prof. Chien-Chung Chen.


Computational model of texture context effects on size perception

In this project I developed a model of size perception that combined bottom-up retinotopic size detectors with top-down contextual modulation in order to replicate and explain the data from our psychophysics experiments. This study was done at the National Taiwan University with Prof. Chien-Chung Chen.
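To illustrate the two ingredients (a toy sketch with made-up numbers, not the published model): perceived size can be read out from a bank of retinotopic size detectors whose population response is reweighted by a top-down contextual gain that depends on the background texture.

```python
import numpy as np

# Illustrative sketch: a bank of retinotopic size detectors (bottom-up)
# whose population response is read out as perceived size, with a top-down
# contextual gain shifting the readout. All values are made up.
preferred_sizes = np.linspace(0.5, 4.0, 40)  # detector tuning, in deg

def perceived_size(true_size, context_gain, sigma=0.4):
    # Gaussian population response of the size detectors
    response = np.exp(-(preferred_sizes - true_size)**2 / (2 * sigma**2))
    response *= context_gain(preferred_sizes)  # top-down modulation
    return np.sum(preferred_sizes * response) / np.sum(response)

uniform = lambda s: np.ones_like(s)        # neutral background
coarse  = lambda s: 1.0 + 0.2 * (s < 1.5)  # boosts small-size detectors

print(perceived_size(1.5, uniform))  # ~1.5: veridical
print(perceived_size(1.5, coarse))   # < 1.5: context shrinks the percept
```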


fMRI study of background context effects on size perception

Using the same basic stimuli that we used in the psychophysics experiments, we employed an event-related fMRI paradigm to record brain activity related to context induced decrease in perceived stimulus disc size. This study was done at the National Taiwan University with Prof. Chien-Chung Chen.


Computational model of visual attention modes: mode selection and interaction

In this project (a collaboration with Dr. J. Moren at ATR, Japan) we aim to develop a visual attention system, for use in humanoid robots, that combines 'scene scanning', 'target tracking' and 'reflexive panic-like' attention modes.


Computational modelling of rapid (saccadic) eye movement control

Development of models of the cerebellar circuit involved in saccade generation based on the experimental data collected by Dr. Denis Pelisson and Dr. Laurent Goffart.


Active audio-visual perception in a humanoid robot: reflex gaze shifts and attention driven saccades

Full awareness of sensory surroundings requires active attentional and behavioural exploration. In visual animals, visual, auditory and tactile stimuli elicit gaze shifts (head and eye movements) to aid visual perception of stimuli. Such gaze shifts can either be top-down attention driven (e.g. visual search) or they can be reflex movements triggered by unexpected changes in the surroundings. We are developing an active vision system with focus on multi-sensory integration and the generation of desired gaze shift commands that will be part of the sensory-motor control of the humanoid robot CB.


Salience from combined feature contrasts, evidence for feature specific interaction suggestive of V1 mechanisms

A target can be salient against a background of distractors due to a unique feature such as color (C), orientation (O), or motion direction (M), or combinations of them. Using subjects’ reports comparing saliencies between two stimuli, Nothdurft (Vision Research, 2000, 40:1183-1201) found that combining features increases salience. Since salience serves visual selection rather than discrimination, reaction times (RT) provide a more direct measure. Krummenacher et al. (J. Experimental Psychology: Human Perception & Performance, 2002, 28(6):1303-1322) measured RTs for detecting targets unique in O, C or the combination O+C, revealing that O+C requires shorter RTs than predicted by a race model, which models RT as the outcome of a race between two independent decision processes on O or C only. We measured RTs to locate targets unique in O, C, or M, or their combinations. Significant (by t-test) violation of the race model by shorter RTs was found in O+M and C+O but not C+M. These results are consistent with some V1 neurons being conjunctively selective to O+M, others to C+O, but almost none to C+M (Horwitz & Albright, Journal of Vision, 2005, 5:525-533; Li, Trends in Cognitive Sciences, 2002, 6:9-16). Comparing the shortest RTs in the single versus double feature conditions corroborated this finding.
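For readers unfamiliar with the race-model test (Miller, 1982): the redundant-target RT distribution must satisfy F_double(t) <= F_a(t) + F_b(t) at every t, and shorter-than-predicted RTs violate this bound. The sketch below checks the inequality on synthetic RTs; the distributions are made up for illustration and are not our data.

```python
import numpy as np

# Race-model (Miller, 1982) inequality check on simulated reaction times.
rng = np.random.default_rng(0)
rt_single_a = rng.lognormal(6.0, 0.2, 500)  # e.g. orientation-only RTs (ms)
rt_single_b = rng.lognormal(6.0, 0.2, 500)  # e.g. motion-only RTs (ms)
rt_double   = rng.lognormal(5.8, 0.2, 500)  # double-feature RTs (ms)

def ecdf(sample, t):
    # Empirical cumulative distribution of `sample` evaluated at times `t`
    return np.mean(sample[:, None] <= t, axis=0)

t = np.linspace(200, 800, 100)
# Race model bound: F_double(t) <= F_a(t) + F_b(t) for all t.
violation = ecdf(rt_double, t) - (ecdf(rt_single_a, t) + ecdf(rt_single_b, t))
print("max violation:", violation.max())  # > 0 means the bound is broken
```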


Cross-modal interactions in audio-visual perception

Audio-visual search experiments revealed that audio-visual synchrony detection follows the pattern of an attentive serial process and not a pre-attentive parallel process. This study was done in collaboration with Dr. S. Nishida and Dr W. Fujisaki of NTT, Japan.

We investigated whether the apparent enhancement of perceived colour contrast when a visual stimulus coincides with an auditory tone (Sheth and Shimojo, 2004) is due to sensory-level interaction between the visual and auditory modalities or due to interaction at the decision level.

We showed a significant improvement in the signal discrimination ability of subjects when the signal is presented in two modalities as compared to uni-modal signals. The bi-modal improvement in signal discrimination was found to persist regardless of whether the signals in the two modalities were presented simultaneously or sequentially. For uni-modal signals, in contrast, there was no improvement in signal discrimination ability even when presentation of the signal was repeated.
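One standard benchmark for such bi-modal improvements (not necessarily the analysis used in this study) is the independent-channels prediction that sensitivities add in quadrature; the d' values below are illustrative.

```python
import numpy as np

# If the auditory and visual channels carry independent Gaussian noise and
# are combined optimally, the bimodal sensitivity is the quadrature sum of
# the unimodal sensitivities. Values are illustrative.
d_audio, d_visual = 1.2, 1.5                     # illustrative unimodal d'
d_bimodal = np.hypot(d_audio, d_visual)          # sqrt(dA^2 + dV^2)
print(f"predicted bimodal d': {d_bimodal:.2f}")  # ~1.92, exceeds either alone
```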

How does the brain integrate signals from different sensory sources within or between modalities to form a coherent percept of the environment?

This work was done in Prof. Alan Johnston’s lab as part of the Human Frontier Science Program funded project: ‘The role of neural synchrony in multi-modal integration’.


Modelling of bi-stable perception (slant rivalry)

Raymond van Ee's group at the Helmholtz Institute recently investigated how much depth is perceived when subjects view a slanted plane in which binocular disparity and monocular perspective provide opposite slant information. Using a metrical experimental paradigm, it was found that for small cue conflicts observers perceived the slant of the plane as an average of the perspective- and disparity-specified slants. When the cue conflict was larger, however, observers experienced bi-stability. In a follow-up experiment they measured the time course of percept changes during bi-stability in slant perception and the effect of voluntary control by the subjects. During this experiment four conditions were tested: natural, hold perspective, hold disparity and speed up. By comparing the normalized histograms of percept durations belonging to the different instructions, the effect of voluntary control could clearly be seen both in the shift of the peak of the distribution and in the mean percept duration.

Traditional bottom-up competition models of bi-stability assume a ‘binary’ process in which the percept must choose one of two alternatives. The transition from an averaging regime to a bi-stable regime as a function of cue conflict is therefore inherently incompatible with traditional competition based bi-stability models.

We therefore developed a new model that uses a combination of spatial activity maps (for the averaging) and winner-take-all competition (for the bi-stability). The effect of voluntary control is included in the model as a top-down process that primes the neurons corresponding to the instructed shift in attention such that they have a heightened response.
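The sketch below illustrates the winner-take-all stage only: two percept populations (perspective vs. disparity) with mutual inhibition, slow adaptation and noise, where voluntary control enters as a small top-down bias. All parameters are illustrative assumptions, not those of the published model.

```python
import numpy as np

# Two competing percept populations with mutual inhibition, adaptation and
# noise; `bias` models the top-down priming from a 'hold' instruction.
def simulate(bias=(0.0, 0.0), steps=20000, dt=0.001, seed=1):
    rng = np.random.default_rng(seed)
    r = np.array([0.5, 0.5])   # firing rates of the two populations
    a = np.zeros(2)            # slow adaptation variables
    dominance = np.empty(steps)
    for i in range(steps):
        drive = 1.0 + np.asarray(bias) - 2.0 * r[::-1] - a
        r = np.clip(r + (dt / 0.01) * (np.maximum(drive, 0.0) - r)
                      + 0.02 * rng.standard_normal(2), 0.0, None)
        a += (dt / 1.0) * (0.6 * r - a)  # adaptation tracks activity slowly
        dominance[i] = r[0] > r[1]
    return dominance.mean()    # fraction of time percept 1 dominates

print(simulate())                    # ~0.5: spontaneous alternation
print(simulate(bias=(0.3, 0.0)))     # > 0.5: 'hold percept 1' instruction
```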


Mechanics of the oculo-motor system and its consequences for eye movement control

The topic of my PhD research was the control of saccadic eye movements. This work was subdivided into four subtopics: 1. Cause of kinematic differences during centrifugal and centripetal saccades; 2. Quantification of saccadic signal modification as a function of eye orientation; 3. Properties of 3D rotations and their relation to eye movement control; 4. Errors resulting from the use of eye plant models that treat agonist-antagonist muscle pairs as a single muscle.
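As a small illustration of why subtopic 3 matters (a generic demonstration, not code from the thesis): 3D rotations do not commute, so the order in which horizontal and vertical eye rotations are applied changes the final eye orientation, which is one reason constraints such as Listing's law are of interest for eye movement control.

```python
import numpy as np

# 3D rotations do not commute: a 90-degree rotation about x followed by one
# about y differs from the same rotations applied in the opposite order.
def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

a = np.pi / 2
print(np.allclose(rot_x(a) @ rot_y(a), rot_y(a) @ rot_x(a)))  # False
```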


Fuzzy-Neural Networks for automated rule extraction from data sets


Initialization and structure learning in fuzzy neural networks for data-driven rule-based modelling

Gradient-based optimization was used to fit a fuzzy-neural-network model to data, and a number of techniques were developed to enhance the transparency of the generated rule base: data-driven initialization, similarity analysis for redundancy reduction, and evaluation of the rules' contributions. The initialization uses flexible hyper-boxes to avoid redundant and irrelevant coverage of the input space. Similarity analysis detects redundant terms, while the contribution evaluation detects irrelevant rules. Both are applied during network training for early pruning of redundant or irrelevant terms and rules, excluding them from further parameter learning (training). The method was tested on a standard example from the literature, the ‘Rosenbrock Valley’.
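To illustrate the similarity analysis step (a generic sketch, not necessarily the measure used in the published method): two Gaussian fuzzy membership functions can be flagged as redundant when their overlap, measured here by a fuzzy Jaccard index, exceeds a threshold.

```python
import numpy as np

# Similarity analysis sketch: near-duplicate Gaussian membership functions
# score high and would be merged; the measure and threshold are generic
# choices for illustration.
def gauss_mf(x, center, sigma):
    return np.exp(-(x - center)**2 / (2 * sigma**2))

def similarity(mf_a, mf_b, x):
    a, b = mf_a(x), mf_b(x)
    return np.sum(np.minimum(a, b)) / np.sum(np.maximum(a, b))  # fuzzy Jaccard

x = np.linspace(-5, 5, 1000)
near = similarity(lambda v: gauss_mf(v, 0.0, 1.0), lambda v: gauss_mf(v, 0.3, 1.0), x)
far  = similarity(lambda v: gauss_mf(v, 0.0, 1.0), lambda v: gauss_mf(v, 3.0, 1.0), x)
print(f"near-duplicate terms: {near:.2f}, distinct terms: {far:.2f}")
# Terms with similarity above a chosen threshold (e.g. 0.8) would be merged.
```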



Personal Interests

  • Amateur theatre acting
  • Baking/Cooking
  • CGI modelling (Blender)
  • Hiking
  • Playing Piano & Violin
  • Writing of Short Stories & Poems




Favourite Links

Behavior Informatics project