The theme of this year's workshop is Policy Approximation, meaning directly storing an approximation to the optimal policy (rather than computing it from, e.g., an approximate value function). Thus we are interested in policy-gradient methods, actor-critic methods, or really any interesting way of generating actions beyond computing them from value functions. For example, the drift-diffusion models of action selection in psychology seem very relevant, as does the idea of storing a mixed value/policy object in dynamic policy programming.
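To make "directly storing an approximation to the optimal policy" concrete, here is a minimal, hypothetical sketch of a policy-gradient (REINFORCE-style) update on a simple stateless bandit. Everything in it (the three-armed bandit, the reward means, the step size) is an invented illustration, not something from the workshop description; the point is only that the parameters `theta` represent the policy itself, with no value function anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3
theta = np.zeros(n_actions)  # the policy is stored directly as these parameters

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Hypothetical stateless bandit: action 2 has the highest mean reward.
def reward(a):
    return rng.normal(loc=[0.0, 0.5, 1.0][a], scale=0.1)

alpha = 0.1  # step size
for _ in range(2000):
    probs = softmax(theta)
    a = rng.choice(n_actions, p=probs)
    r = reward(a)
    # Gradient of log pi(a | theta) for a softmax policy: e_a - probs.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += alpha * r * grad_log_pi  # REINFORCE update: follow r * grad log pi

# After training, softmax(theta) concentrates on the best action (action 2).
```

Nothing here is computed from an estimated value; actions are generated by sampling from the stored policy, which is the distinguishing feature of the policy-approximation methods the theme refers to.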

It is good to have a theme each year, but of course there is always residual interest in previous themes. Some themes from past years that seem to keep recurring are life-long learning, perceptual learning and representational change, state estimation, function approximation, real-time learning, temporal abstraction, and planning. It would not be inappropriate for there to be echoes of these themes in this year's meeting.