Special Session on

Reinforcement Learning, Deep Learning, Curriculum Learning and Attention Models

IJCNN 2020 International Joint Conference on Neural Networks

Call for Papers

This special session will provide a unique platform for researchers from the Deep Learning and Reinforcement Learning communities to showcase and discuss their theory and applications, and to share their experience towards a unified Deep Reinforcement Learning (DRL) framework, allowing this important interdisciplinary branch to advance on solid grounds. It will focus on the potential benefits of different approaches to combining RL, DL, CL and attention mechanisms to harness the best of these fields. The aim is to bring more focus onto the potential of infusing the reinforcement learning framework with deep learning and attention capabilities, allowing it to deal more efficiently with realistic learning applications, including, but not restricted to, online streamed data processing that involves actions, putting the whole connectivity of the web at the fingertips of such powerful models.

Deep Learning (DL) has been the focus of the neural network research community due to its ability to scale well to difficult problems and its performance breakthroughs over other architectural and learning techniques on important benchmark problems. This has mainly taken the form of improved data representation in supervised learning tasks. More recently, more complex architectures, such as generative adversarial networks (GANs), variational autoencoders (VAEs) and autoencoders with attention mechanisms, have played an important role in further advancing learning architectures that can self-generate samples from data; they constitute an important step towards the ultimate goal of a self-sustained learning process.

On the other hand, reinforcement learning (RL) is considered the model of choice for problems that involve learning from interaction, where the target is to optimise a long-term control strategy or to learn an optimal policy. RL has explored the idea of a self-sustained training process since its early days, as exemplified by the famous TD-Gammon. The effectiveness of combining RL and DL (DRL) is typically demonstrated on game-playing problems. However, DRL applications are much wider; for example, DRL can replace any ad hoc process of model tuning and hyperparameter tuning to arrive at the best settings, and it has recently been utilised to find the best architecture for a deep learning process. In addition, DRL applications can involve processing streams of data coming from different sources, ranging from massive central databases to pervasive smart sensors.

At the same time, Curriculum Learning (CL) exists for both deep learning and reinforcement learning, and is gaining traction as an important method for reducing the training time of an RL system through a carefully selected schedule of intermediate tasks, designed to help an agent accomplish a specific final task. Both the selection of intermediate tasks and the task generation process can be automated. Combining DRL with Curriculum Learning has the potential to improve the speed and performance of these models, especially when combined with a self-training scheme. This advantage can be utilised to overcome the issues of training an end-to-end DRL system, which requires massive amounts of data and lengthy training time.
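As a purely illustrative sketch (not part of the session material), a curriculum schedule can be seen as training on tasks ordered from easy to hard, advancing to the next task once the current one is mastered. All names and the toy "environment" below are hypothetical placeholders:

```python
def train_with_curriculum(tasks, train_step, evaluate, threshold=0.9, max_steps=1000):
    """Train on an ordered schedule of tasks, advancing once each is mastered.

    `tasks`, `train_step` and `evaluate` stand in for a real task schedule,
    learner update and evaluation routine; this is only a sketch.
    """
    history = []
    for task in tasks:                         # easiest to hardest
        for step in range(max_steps):
            train_step(task)                   # one learner update on this task
            if evaluate(task) >= threshold:    # task mastered: move on
                break
        history.append((task, step + 1))       # record steps spent on the task
    return history

# Toy demonstration: a task's "difficulty" is just the number of steps it needs.
progress = {}

def train_step(task):
    progress[task] = progress.get(task, 0) + 1

def evaluate(task):
    return progress[task] / task               # reaches 1.0 after `task` steps

schedule = train_with_curriculum([2, 5, 10], train_step, evaluate)
```

The point of the sketch is the control flow: mastery of intermediate tasks, not a fixed step budget, decides when the agent moves towards the final task.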

Despite some good attempts, there is currently no uniform approach that views deep learning and reinforcement learning from the same perspective. For example, GANs have been shown to be related to the conventional actor-critic model, and the two fields may need to converge more closely in the future, with tremendous benefits for both communities. Examples of important open questions are: “How does experience replay relate to batch updates?”, “How can the state-action learning process be made deep?”, “How can the architecture of an RL system be made suitable for deep learning without compromising the interactivity of the system?”. Although there have recently been important advances in dealing with these issues, they remain scattered, with no overarching framework that promotes them in a well-defined and consistent way.
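To make the first open question concrete, here is a minimal, purely illustrative sketch (hypothetical names, not from the session organisers) of how experience replay turns an agent's sequential interaction into minibatches resembling those of supervised deep learning:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer of (state, action, reward, next_state) transitions."""

    def __init__(self, capacity, seed=0):
        self.buffer = deque(maxlen=capacity)   # oldest transitions are evicted
        self.rng = random.Random(seed)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of the trajectory,
        # yielding a minibatch similar to a supervised-learning batch update.
        return self.rng.sample(list(self.buffer), batch_size)

# Fill the buffer with dummy transitions from a fictitious environment.
buf = ReplayBuffer(capacity=100)
for t in range(50):
    buf.push((t, t % 4, float(t), t + 1))      # (state, action, reward, next_state)

batch = buf.sample(8)                          # ready for a deep network update
```

The design choice to sample uniformly from a sliding window is exactly what links replay to batch updates: the deep network sees approximately i.i.d. minibatches even though the agent's experience arrives as a correlated stream.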

Scope and Topics

Topics are of wide interest; they include, but are not limited to, the following:

Novel DRL Algorithms and Architectures

    • Optimising Deep Neural Architecture using RL
    • Transfer Learning for DL and RL
    • Attention mechanism for RL
    • Curriculum Learning for DL and RL
    • Optimisation and Convergence
    • Hierarchical RL

DRL on Cyber Security Applications

    • Facial Recognition
    • Emotion Recognition
    • Agitation Recognition
    • Suspect Image Synthesis

DRL Applications on Control and Sensors

    • Robotics and Control
    • Smart Building and Cities
    • Image Processing
    • Computer Vision

DRL on Data Analytics Applications

    • Athletics Performance and Coaching
    • Sports Analysis
    • Data Streams
    • Time Series
    • Policy Improvement

Important Dates

  • Paper Submission: January 30, 2020
  • Notification of Acceptance: March 15, 2020
  • Camera Ready Deadline: April 15, 2020
  • Conference Dates: July 19-24, 2020

All papers should be prepared according to the IJCNN 2020 policy and submitted electronically via the conference website (https://wcci2020.org/submissions/).

To submit your paper to this special session, use the IJCNN upload link and choose our special session, "Deep Reinforcement and Curriculum Learning with Attention Architectures, Theory and Application", in the research topic list.

All papers accepted and presented at IEEE IJCNN/WCCI 2020 will be included in the conference proceedings published on IEEE Xplore, which are typically indexed by EI.

Session Chairs

Dr Abdulrahman Altahhan, a.altahhan@leedsbeckett.ac.uk, Leeds Beckett University, UK.

Prof Vasile Palade, ab5839@coventry.ac.uk, Coventry University, UK.