This project is a proof of concept for increasing one's spatial awareness using computer vision. The goal is to let a user query the app with voice commands and have the app interpret a live video feed to learn about the user's environment before responding. We will be using the Alexa Voice Service (AVS) API to communicate with the user and the Google Cloud Vision API to gather information about the user's environment in real time.
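As a rough illustration of the vision half of that pipeline, the sketch below sends a single frame from the video feed to Cloud Vision's label detection and returns the detected scene labels. It assumes the official `google-cloud-vision` Python client, and the frame path `frame.jpg` is a placeholder; how frames are actually captured and which annotations feed the spoken response are left open here.

```python
# A minimal sketch of querying Cloud Vision for a coarse description of the
# environment, assuming the google-cloud-vision client is installed and
# credentials are configured. "frame.jpg" is a hypothetical captured frame.
from google.cloud import vision


def describe_frame(frame_path: str) -> list[str]:
    """Return the labels Cloud Vision detects in a single video frame."""
    client = vision.ImageAnnotatorClient()

    with open(frame_path, "rb") as f:
        image = vision.Image(content=f.read())

    # Label detection gives a high-level description of the scene
    # (e.g. "street", "crosswalk", "car") that could be folded into
    # the response spoken back through AVS.
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]


if __name__ == "__main__":
    print(describe_frame("frame.jpg"))
```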
Interacting with the app using AVS
Cloud Vision API not only detects the presence of a sign in this image, but also accurately reads its content
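Reading a sign like the one above maps onto Cloud Vision's text detection. The sketch below shows one way this could look with the same Python client; `sign.jpg` is a placeholder for an image pulled from the video feed.

```python
# A minimal sketch of reading the text on a sign with Cloud Vision's text
# detection, assuming the google-cloud-vision client. "sign.jpg" is a
# hypothetical image path.
from google.cloud import vision


def read_sign(image_path: str) -> str:
    """Return the full text Cloud Vision finds in the image, if any."""
    client = vision.ImageAnnotatorClient()

    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    response = client.text_detection(image=image)
    annotations = response.text_annotations
    # The first annotation is the entire detected block of text; the rest
    # are individual words with their bounding boxes.
    return annotations[0].description if annotations else ""


if __name__ == "__main__":
    print(read_sign("sign.jpg"))
```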