This project focuses on achieving Human-Robot Interaction (HRI) built around the concept of Perceptual Anchoring. It consisted of four phases:
1. Distributed Object Recognition (DOR)
2. Perceptual Anchoring
3. Dialogue - Speech processing (NLP)
4. Semantic-Ontology search
With Phase I, Distributed Object Recognition (DOR), achieved using SIFT, the FREAK (Fast Retina Keypoint) algorithm and MongoDB were used to complete Phase II, Perceptual Anchoring.
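The core of perceptual anchoring is maintaining a link between a symbol used in dialogue (e.g. `obj-0`) and a percept (here, a binary descriptor of the kind FREAK produces). The sketch below is a minimal, hypothetical illustration of that acquire/re-acquire loop, not the project's actual implementation: plain tuples stand in for FREAK descriptors, and the `AnchorSpace` class, its `threshold` parameter, and the `obj-N` naming are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Anchor:
    symbol: str        # symbolic name usable in dialogue
    descriptor: tuple  # last observed percept descriptor

def hamming(a, b):
    """Bitwise distance between two binary descriptors (FREAK-style matching)."""
    return sum(x != y for x, y in zip(a, b))

class AnchorSpace:
    """Hypothetical anchor manager: links symbols to percepts over time."""

    def __init__(self, threshold=2):
        self.anchors = {}           # symbol -> Anchor
        self.threshold = threshold  # max distance to re-acquire an anchor
        self._next_id = 0

    def observe(self, descriptor):
        """Re-acquire the closest matching anchor, or acquire a new one."""
        best, best_d = None, None
        for anchor in self.anchors.values():
            d = hamming(anchor.descriptor, descriptor)
            if d <= self.threshold and (best is None or d < best_d):
                best, best_d = anchor, d
        if best is not None:
            best.descriptor = descriptor  # track: update the stored percept
            return best.symbol
        symbol = f"obj-{self._next_id}"   # acquire: create a new symbol
        self._next_id += 1
        self.anchors[symbol] = Anchor(symbol, descriptor)
        return symbol
```

A slightly changed descriptor of the same object re-acquires the existing anchor, while a distant descriptor acquires a fresh symbol:

```python
space = AnchorSpace(threshold=2)
space.observe((0, 0, 1, 1))  # new anchor, e.g. "obj-0"
space.observe((0, 0, 1, 0))  # distance 1: same anchor re-acquired
space.observe((1, 1, 0, 0))  # distance > 2: new anchor acquired
```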
In the dialogue-processing phase, the Google Speech API was used to convert speech to text, and the Stanford Parser was used to analyze the resulting text (dialogue).
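After transcription, the analysis step reduces an utterance to an action and an object reference. The project used the Stanford Parser for full syntactic analysis; the sketch below is only a hypothetical stand-in showing the shape of that step with a tiny hand-written vocabulary (`ACTIONS` and the determiner list are illustrative assumptions, not the project's grammar).

```python
import re

# Illustrative action vocabulary; the real system relied on the
# Stanford Parser rather than keyword matching.
ACTIONS = {"find", "bring", "show"}

def parse_command(transcript):
    """Return (action, object phrase) from a simple imperative utterance,
    or None if no known action word is present."""
    words = re.findall(r"[a-z]+", transcript.lower())
    for i, w in enumerate(words):
        if w in ACTIONS:
            # Keep the remaining words, dropping determiners and pronouns,
            # as a crude object phrase.
            obj = [t for t in words[i + 1:] if t not in {"the", "a", "an", "me"}]
            if obj:
                return w, " ".join(obj)
    return None
```

For example, the transcript "Please bring me the red cup" reduces to the pair `("bring", "red cup")`, which can then be resolved against the anchored objects from Phase II.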
The final phase was Semantic-Ontology search, for which online semantic resources such as WordNet and Sig.ma were used to gather semantic relations, while MongoDB was used to maintain the offline database.
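The online/offline split described above is a cache pattern: relations are fetched from the online resource only on a miss and then served from the local store. A minimal sketch of that pattern, under the assumption that a dict can stand in for the MongoDB collection and with `fetch_online` as a stub for the real WordNet/Sig.ma lookup:

```python
def fetch_online(term):
    """Stub for an online semantic lookup (WordNet/Sig.ma in the project).
    Returns hypernym-like relations from a tiny hard-coded table."""
    fake_ontology = {"cup": ["container", "vessel"], "bottle": ["container"]}
    return fake_ontology.get(term, [])

class SemanticStore:
    """Hypothetical offline cache of semantic relations."""

    def __init__(self):
        self.offline = {}  # stands in for the MongoDB collection
        self.misses = 0    # counts how often the online resource is hit

    def relations(self, term):
        """Return cached relations, going online only on a cache miss."""
        if term not in self.offline:
            self.misses += 1
            self.offline[term] = fetch_online(term)
        return self.offline[term]
```

Repeated queries for the same term hit the offline store, so the online resource is consulted only once per term.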
A detailed yet straightforward report is available here.