Home

Master in Artificial Intelligence and Robotics

Learning in Autonomous Systems

Profs. Luca Iocchi and Giorgio Grisetti

A.Y. 2015/2016

NOTE: Since A.Y. 2016/17 this course has been replaced by Probabilistic Robotics (taught by Prof. Grisetti) and a section of Artificial Intelligence (taught by Prof. Iocchi).

Teachers

Prof. Luca Iocchi (Home page)

Prof. Giorgio Grisetti (Home page)

Dipartimento di Ingegneria informatica automatica e gestionale “Antonio Ruberti”

Università di Roma “La Sapienza”

Via Ariosto 25, Roma 00185, Italy.

Room B115

E-mail: iocchi/grisetti@dis.uniroma1.it

In e-mail messages, please add the prefix "[LAS]" to the subject.

Visiting Professor

Prof. Marc Hanheide (Home page)

School of Computer Science

University of Lincoln, UK

Office: DIAG, Room B114

Schedule

Monday 14:00 - 17:15 - Room B2

Friday 10:15 - 11:45 - Room B2

The last lecture will be on May 16th.

Office hours

Prof. Luca Iocchi - Friday at 12:00 (during the lecture period) or by appointment via e-mail

Prof. Giorgio Grisetti - by appointment via e-mail

Description of the course

The course is worth 6 CFU and is open to any student enrolled in the Master's degrees in Artificial Intelligence and Robotics, Computer Science, and Control Engineering.

Objectives

The goal of the course is to present techniques and tools for machine learning in complex dynamic systems and autonomous agents. In particular, the course covers probabilistic models for representing dynamic systems and autonomous agents, reinforcement learning techniques, learning in graphical models, and state estimation techniques. The course also presents several examples of successful applications of machine learning algorithms in different scenarios.

At the end of the course, students will be able to use these techniques and tools to model and solve learning problems for complex dynamic systems. In particular, they will be able to devise a proper formulation of a problem, make appropriate design and implementation choices, and design and execute effective experiments to evaluate the results obtained.

Syllabus

  1. Introduction

    • Typical problems in robotic applications

    • Basics of probability and linear algebra

  2. Models of dynamic systems

    • General concepts

    • Model taxonomy

    • Markov Decision Processes

    • Hidden Markov Models (forward, backward)

    • Dynamic Bayesian Networks

    • Partially Observable Markov Decision Processes

    • Probabilistic Graphical Models

  3. Reinforcement Learning

    • Q-Learning algorithm (see the sketch after this list)

    • Non-deterministic algorithms

    • Inverse Reinforcement Learning

    • RL in plan space

  4. Bayes Filtering in DBN

    • Discrete filters (forward)

    • Particle filters

  5. Learning in Probabilistic Graphical Models

    • Learning in HMM (Baum-Welch)

    • Learning in DBN: estimating CPDs from supervised data sets

  6. Multi-Agent Learning

    • Multi-source multi-object tracking

    • Multi-agent learning
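
For the Reinforcement Learning part of the syllabus, the fragment below is a minimal Python sketch of tabular Q-Learning on a toy 4x4 grid world. The environment, the epsilon-greedy exploration scheme, and all parameter values (alpha, gamma, epsilon, number of episodes) are illustrative assumptions, not material taken from the course.

    import random
    from collections import defaultdict

    # Toy 4x4 grid world; states are (x, y) pairs, the goal is the top-right cell.
    ACTIONS = ["up", "down", "left", "right"]
    GOAL = (3, 3)

    def step(state, action):
        """Deterministic grid transition with reward 1 on reaching the goal."""
        x, y = state
        if action == "up":
            y = min(y + 1, 3)
        elif action == "down":
            y = max(y - 1, 0)
        elif action == "left":
            x = max(x - 1, 0)
        else:  # "right"
            x = min(x + 1, 3)
        next_state = (x, y)
        reward = 1.0 if next_state == GOAL else 0.0
        return next_state, reward, next_state == GOAL

    def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
        """Tabular Q-Learning with epsilon-greedy exploration."""
        Q = defaultdict(float)  # Q[(state, action)] -> estimated return
        for _ in range(episodes):
            state, done = (0, 0), False
            while not done:
                # epsilon-greedy action selection
                if random.random() < epsilon:
                    action = random.choice(ACTIONS)
                else:
                    action = max(ACTIONS, key=lambda a: Q[(state, a)])
                next_state, reward, done = step(state, action)
                # Update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
                target = reward + gamma * max(Q[(next_state, a)] for a in ACTIONS)
                Q[(state, action)] += alpha * (target - Q[(state, action)])
                state = next_state
        return Q

    Q = q_learning()
    print("Q(start, 'right') =", Q[((0, 0), "right")])

Running the script prints the learned value of moving right from the start cell, which should approach gamma^5 ≈ 0.59, the discounted return of the optimal six-step path to the goal.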

Teaching Material

  • Provided in the Lectures section

Other suggested textbooks

They are or will be available in the DIAG library.