Joint Action for Multimodal Embodied Social Systems. An EU FP7 collaborative project: University of Edinburgh (UK), Heriot-Watt University (UK), University of Bielefeld (Germany), FORTISS (Munich, Germany), and FORTH (Crete, Greece). The project involves research on human-machine interaction, robotics, and computer vision, and also aims to develop a proof-of-concept robotic bartender.
Watch this video demo to see the basic initial system in action.
Spatial and Personal Adaptive Communication Environment: Behaviours & Objects & Operations & Knowledge. An EU FP7 collaborative project: University of Edinburgh (UK), Heriot-Watt University (UK), KTH (Stockholm, Sweden), Umeå University (Sweden) and Liquid Media (Stockholm). The project is about pedestrian navigation and exploration using GPS and speech-in speech-out interaction on a mobile device.
Strategic Conversation. An EU FP7 "Ideas" project: University Paul Sabatier (Toulouse, France), University of Edinburgh (UK), Heriot-Watt University (UK). In this project we study non-cooperative, strategic conversation, in particular by developing an artificial player of the game Settlers of Catan.
Computational Learning in Adaptive Systems for Spoken Conversation. An EU FP7 collaborative project between several European partners: University of Edinburgh, France Telecom, University of Geneva, Supelec, and the University of Cambridge.
An EPSRC-funded project, focusing mainly on the Hidden Information State (HIS) POMDP dialogue system.
IMIX / PARADIME
Interactive Multimodal Information eXtraction. Completed in 2007, this research programme was funded by the Dutch national science foundation NWO, involving the collaboration between several Dutch universities. One of the projects within IMIX was PARADIME (PARallel Agent-based DIalogue Management Engine), which focused on multidimensional dialogue management within the context of interactive question-answering.
The general aim of the PARADIME project was to develop a theoretically and experimentally founded framework for dialogue management in a multimodal environment, based on the modelling of cooperativity, rational agency, and social conventions within the context of a dialogue. The starting point of the research was Dynamic Interpretation Theory (DIT), a theoretical framework originally developed for the analysis of information-seeking dialogues, drawing on empirical data from telephone inquiry conversations. DIT defines communicative acts semantically in terms of the changes they bring about in the information state of an addressee who has understood the act, and is therefore in principle quite general. Within PARADIME, the framework was further engineered for multimodal information extraction on the basis of empirical research and computer simulation of dialogue management in the IMIX demonstrator system for the medical domain (in particular, the subdomain of RSI), based on a multi-agent architecture.
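The DIT view of dialogue acts as information-state updates can be illustrated with a minimal sketch. All names here (`InfoState`, `apply_act`, the three act types) are hypothetical examples invented for this illustration, not the actual PARADIME implementation:

```python
from dataclasses import dataclass, field

@dataclass
class InfoState:
    """The addressee's information state: shared beliefs and open questions."""
    beliefs: set = field(default_factory=set)
    pending_questions: list = field(default_factory=list)

def apply_act(state: InfoState, act_type: str, content: str) -> InfoState:
    """Update the information state according to the understood dialogue act."""
    if act_type == "inform":
        state.beliefs.add(content)               # statement becomes a shared belief
    elif act_type == "question":
        state.pending_questions.append(content)  # question goes on the agenda
    elif act_type == "answer":
        state.beliefs.add(content)
        if state.pending_questions:
            state.pending_questions.pop(0)       # oldest open question is resolved
    return state

# A two-turn exchange: a question followed by an answer.
state = InfoState()
apply_act(state, "question", "what causes RSI?")
apply_act(state, "answer", "RSI is caused by repetitive strain")
```

The point of the sketch is that the semantics of each act lives entirely in the update it performs on the addressee's state, which is what makes the account independent of any particular application domain.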
Dialogue systems are computer systems that users can interact with by means of utterances in natural language, for example, in Dutch or English. Although systems with such a natural language interface can be more user-friendly than conventional systems, users may also exhibit unpredictable behaviour: any model for natural language interaction will inevitably leave relevant phenomena uncaptured. Furthermore, the system has only partial information about a user utterance and the context in which it was uttered, so interpretations can never be made with absolute certainty.
In this thesis, the use of Bayesian networks in dialogue systems is discussed, as a way of reasoning under uncertainty in the process of interpreting user utterances. The interpretations are based on the view that utterances can be seen as communicative actions, or dialogue acts. Experiments are presented concerning the task of recognising the dialogue act type of a given utterance in context. Machine learning techniques have been applied to induce different classifiers for this task (Bayesian networks in particular) from the data in an annotated dialogue corpus.
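The classification task described above can be sketched with a toy probabilistic model. The thesis used Bayesian networks trained on an annotated corpus; the following simplified word-based naive Bayes classifier, with an invented four-utterance corpus and made-up act labels, only illustrates the general idea of inducing a dialogue act classifier from labelled data:

```python
import math
from collections import Counter, defaultdict

# Toy annotated corpus: (utterance, dialogue act type). Entirely invented.
corpus = [
    ("what time is it", "question"),
    ("where is the station", "question"),
    ("the station is closed", "inform"),
    ("it is five o'clock", "inform"),
]

# Count word frequencies per act type, plus class priors.
word_counts = defaultdict(Counter)
class_counts = Counter()
for utterance, act in corpus:
    class_counts[act] += 1
    word_counts[act].update(utterance.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(utterance):
    """Most probable act type under naive Bayes with add-one smoothing."""
    best_act, best_score = None, float("-inf")
    for act in class_counts:
        total = sum(word_counts[act].values())
        score = math.log(class_counts[act] / len(corpus))
        for word in utterance.split():
            score += math.log((word_counts[act][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_act, best_score = act, score
    return best_act

print(classify("what is the time"))  # -> "question"
```

A Bayesian network generalises this: instead of conditioning on words alone, it can combine several uncertain features of the utterance and its context (such as the previous dialogue act) in one probabilistic model.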