Amazon Alexa Prize SimBot Challenge

Figure (adapted from the TEACh paper): The Commander has oracle access to the task details (a), object locations (b), and a map (c), as well as egocentric views from both agents. The Follower carries out the task and asks questions (d). The two agents can communicate only via language.

Conversational Embodied AI

Amazon’s Alexa Prize SimBot Challenge asks university teams to create helpful agents that can understand natural language, interact with humans, and accomplish tasks and missions in a futuristic world.

We are the only non-US team to pass the initial selection process, reflecting our world-leading expertise in embodied conversational AI research and teaching. The team will receive a $250,000 grant to support its development costs.


Our team will create EMMA (Embodied Multimodal AI), a next-generation assistant capable of learning continuously. Our research will focus on natural language generation and reasoning, machine perception, navigation, manipulation, and dialogue, further pushing the boundaries of AI.


The SimBot Challenge has two phases. The first is a public benchmark phase, in which teams design a machine-learning model for language-guided visual navigation and task completion. This is followed by a live interaction phase, during which teams’ bots must respond to customer commands and multimodal sensor inputs within a virtual world.


We plan to contribute datasets, models, and other resources to the multimodal machine learning community. Make sure to follow us on GitHub!


UPDATE: EMMA is one of the finalists of the SimBot Challenge. You can read about it here: https://www.amazon.science/alexa-prize/simbot-challenge/one

The EMMA Team

Amit Parekh (team leader, PhD student)

Bhathiya Hemanthage (PhD student)

Malvina Nikandrou (PhD student)

Georgios Pantazopoulos (PhD student)

Dr. Alessandro Suglia (project manager)

Prof. Oliver Lemon (advisor)

Prof. Verena Rieser (advisor)

Dr. Arash Eshghi (advisor)

Dr. Ioannis Konstas (advisor)