Research Projects

Current Projects:

MuMMER (EC H2020): Multimodal Mall Entertainment Robot (2016-2020)

We will develop a humanoid robot (based on Aldebaran's Pepper platform) able to engage and interact autonomously and naturally in the dynamic environments of a public shopping mall, providing an engaging and entertaining experience to the general public. Using co-design methods, we will work together with stakeholders including customers, retailers, and business managers, to develop truly engaging robot behaviours, including telling jokes or playing games, as well as providing guidance, information, and collecting customer feedback. Crucially, our robot will exhibit behaviour that is socially appropriate, combining speech-based interaction with non-verbal communication and human-aware navigation. To support this behaviour, we will develop and integrate new methods from audiovisual scene processing, social-signal processing, high-level action selection, and human-aware robot navigation. Throughout the project, the robot will be deployed in a large public shopping mall in Finland.

PI: Oliver Lemon

Duration: 48 months

Amazon Alexa Challenge 2017 & 2018

PIs: Verena Rieser, Oliver Lemon

Duration: 11/2016 - 11/2017; 02/2018 - 12/2018

Website: here

Funded by Amazon.

MIRIAM

PI: Helen Hastie

Duration: 1/6/2017 - 12/12/2017

Website: here

Collaborators: SeeByte and Tekever

Funded by Dstl.

LUCINE - Learning User preferenCes by INtEraction (DataLab, EPSRC Impact Acceleration)

PI: Verena Rieser

Duration: 12/2016 - 07/2018

Collaborative Innovation project with EmoTech LTD.

Funded by the DataLab and EPSRC Impact Acceleration.

MaDrIgAL: MultiDimensional Interaction management and Adaptive Learning (EPSRC)

PI: Verena Rieser

Research Co-I: Simon Keizer

Duration: 06/2016 - 05/2019

Grant number: EP/N017536/1

Website: here

Joint project with Prof. Harry Bunt (University of Tilburg, NL) and Dr. Norbert Pfleger (SemVox GmbH, DE).

DILiGENt: Domain-Independent Language Generation (EPSRC)

PI: Verena Rieser

Duration: 03/2015 - 02/2018

Grant number: EP/M005429/1

EPSRC website: here

Project poster: here

Joint project with the London Media Technology Campus (UCL/BBC; Sebastian Riedel, Co-I) and the University of Sheffield (Andreas Vlachos, Co-I).

Past Projects:

Emote (EC FP7)

The EMOTE project will design, develop and evaluate a new generation of artificial embodied tutors that have perceptive capabilities to engage in empathic interactions with learners in a shared physical space.

Co-I: Helen Hastie

Duration: 36 months

Generation for Uncertain Information (EPSRC)

PI: Verena Rieser

Grant number: EP/L026775/1

BABBLE: domain-general methods for learning natural spoken dialogue systems (EPSRC)

PI: Oliver Lemon

Researcher Co-I: Arash Eshghi

Duration: 04/2015 - 10/2017

Grant number: EP/M01553X/1 (EPSRC website)

Website: here

REGIME: REport GeneratIon from Meta-Data (Dstl funded under the ASUR programme)

PI: Helen Hastie

Duration: 11/2015 - 11/2016

Joint project with SeeByte and Thales

The two main barriers to widespread adoption of autonomous unmanned systems are the lack of trust in the systems and the information overload of operators and personnel. REGIME will investigate natural language report generation techniques to address these issues, optimising situation awareness during and after missions and ensuring that the rationale behind autonomous systems' decisions is completely transparent; this is particularly important for underwater vehicles, where bandwidth is limited.

Strategic Conversation: STAC (ERC)

STAC project website

ERC Advanced Grant 2011-2016

Parlance (EC FP7)

This project was on hyperlocal interactive search (2011-2014).

Official website http://www.parlance-project.eu

Collaborative website (needs login)

SpeechCity Impact Acceleration Grant

Main PI: Verena Rieser

Duration: 11/2013 - 11/2014

Website: here

ECHOES (ESRC/EPSRC): virtual characters for children with autism

ECHOES II aims to develop an adventurous technology-enhanced learning environment in which both typically developing children and children with Asperger's Syndrome at Key Stage 1 (ages 5-7) can explore and improve social interaction and collaboration skills. The environment will also serve as a tool for researchers, teachers, parents, and practitioners to investigate problems that children may encounter in specific social contexts and the ways in which those problems may be addressed.

The technology-enhanced learning environment will combine existing technologies in new ways. With the active participation of user groups, we will combine interactive multitouch screens, gesture and gaze tracking, and intelligent agent-based context-sensitive interfaces to create a novel interactive multi-modal environment that can be adapted to the needs of specific individuals, and that can provide new ways of investigating and supporting the development of social skills in children.

ABC-POMDP (EPSRC): belief state compression for POMDP dialogue models

"Scaling up Statistical Spoken Dialogue Systems for real user goals using automatic belief state compression" (2009-2012)

Grant number: EP/G069840/1

SpaceBook (EC FP7): conversational speech applications for smart city exploration

EC FP7 2011-2014

The SB wiki: http://www.spacebook-project.eu/wiki/Main_Page

The email list: spacebook-all@cs.umu.se.

The SB website: http://spacebook-project.eu/

Help4Mood (EC FP7): virtual characters for health-care

EC FP7 2011-2014

Project page (internal)

The H4M "wiki": http://www.help4mood.info/KB/main.aspx

JAMES (EC FP7): socially intelligent Human-Robot Interaction

www.james-project.eu

Project poster

EC FP7 2011-2014, project no. 270435

CLASSiC project (EC FP7): Computational Learning in Adaptive Systems for Spoken Conversation

website

TALK project: Tools for Ambient Linguistic Knowledge (EC FP6): in-car spoken dialogue systems (with BMW, Bosch)

website