Blogpost 4

Training the Model and Simulating Robot Experience

This week we worked on the person-recognition model that the robot will use, and created a virtual reality environment to help us test and simulate the interaction between the person and the robot.

Training the person-recognition model

During these two weeks, we recorded training data and began training the person-detection model. The plan is for the robot to track the person with its camera to make sure it does not get too far ahead of the user.
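As a rough illustration of the tracking idea, here is a minimal sketch of the "don't get too far ahead" check. It assumes the trained detector outputs a bounding box for the person each frame; the function name, threshold, and the use of box height as a stand-in for distance are all our own hypothetical choices, not the project's actual code.

```python
# Hypothetical sketch: decide when the robot should pause based on the
# apparent size of the person's bounding box in the camera frame.
# A real pipeline would get the box from the trained person detector;
# here a smaller box height means the person is farther behind.

def should_wait(bbox_height_px, frame_height_px, min_ratio=0.25):
    """Return True if the person appears too far behind the robot."""
    return (bbox_height_px / frame_height_px) < min_ratio
```

For example, with a 480 px tall frame, a 100 px tall detection would read as "too far behind" and pause the robot, while a 200 px detection would let it keep moving.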

Testing Experience Using Virtual Reality

We also built a virtual reality simulation to test the interaction between the user and the robot. Virtual reality is a quick way to simulate a real-world environment and program any interaction we might need between humans and robots, and it also lets us test the method we will use to communicate with the user. Specifically, we are testing whether the user can navigate and follow the robot through the corridor using the audio cues that the robot provides. The game engine can simulate this with spatial 3D audio whose position is tracked against the user's head via the VR headset. During the simulation, once the user has selected the desired destination, the VR headset will turn off, and the robot will start to emit sounds to help the user locate it.
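To make the spatial-audio idea concrete, here is a small sketch of how a robot's sound can be panned between the listener's ears from the geometry alone: the azimuth of the robot relative to the direction the user's head is facing drives a constant-power stereo pan. This is a simplified stand-in for what a game engine's 3D audio does internally (engines also apply distance attenuation and HRTF filtering); the coordinate convention and function names are our own assumptions.

```python
import math

def stereo_gains(user_pos, user_yaw, robot_pos):
    """Left/right ear gains for a sound at robot_pos, heard by a user
    at user_pos facing user_yaw radians (0 = +z, 2D ground plane)."""
    dx = robot_pos[0] - user_pos[0]
    dz = robot_pos[1] - user_pos[1]
    # Angle of the robot relative to where the user is facing.
    azimuth = math.atan2(dx, dz) - user_yaw
    # Constant-power pan: dead ahead -> equal gains,
    # to the right -> louder in the right ear.
    pan = max(-1.0, min(1.0, math.sin(azimuth)))
    left = math.cos((pan + 1.0) * math.pi / 4.0)
    right = math.sin((pan + 1.0) * math.pi / 4.0)
    return left, right
```

With the robot straight ahead, both ears receive equal gain; as the user turns their head (the yaw reported by the VR headset), the balance shifts, which is the cue that lets them steer toward the robot.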