ARGOS aims to be a teleconference option that gives both remote and local users an immersive experience more closely resembling the sense of presence of in-person interaction. The current version our team is working on, denoted ARGOS 2.1, is the third iteration of the product. ARGOS is designed for circumstances in which a single individual virtually attends an otherwise in-person meeting. The virtual individual is called the "Remote User" and interacts with the "Remote Device"; the in-person group consists of "Local Users" who interact with the "Local Device."
Project Goals:
Provide a better alternative to video conferencing through use of a telepresence robot
Translate physical gestures and line-of-sight into mechanical movements
Replicate the nuances of face-to-face interactions
Improve upon the designs of previous ARGOS teams
On the local side of ARGOS, a tablet is mounted on a movable stand. The stand has a collapsible center shaft that allows the height of the product to be easily adjusted to fit a variety of local-side setups. The remote user is displayed on the tablet so that they can be seen and heard. Additionally, a local camera and microphone record audio and video to send back to the remote side. By processing instructions sent from the remote side, which are based on the remote user's eye-tracking data, the local device generates motion instructions for the two on-board motors that control the screen's tilt and pan angles. The screen's motion lets the remote user look around the room within a horizontal and vertical range similar to what they would have if they were physically present and simply moving their head. An on-board battery pack located at the base of the stand allows the system to be used even when an outlet isn't within reach.
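As a rough illustration of how gaze data could be turned into motor instructions, the sketch below maps a normalized gaze point on the remote screen to pan and tilt angles for the two on-board motors. The coordinate convention, angle ranges, and function names are all assumptions for illustration, not the actual ARGOS specifications.

```python
# Hypothetical sketch: map normalized gaze coordinates (0.0-1.0, origin at
# the top-left of the remote screen) to pan/tilt angles for the two motors.
# The angle ranges below are illustrative, not measured ARGOS limits.

PAN_RANGE_DEG = 90.0   # total horizontal sweep of the screen (assumed)
TILT_RANGE_DEG = 40.0  # total vertical sweep of the screen (assumed)

def gaze_to_angles(gaze_x: float, gaze_y: float) -> tuple[float, float]:
    """Convert a gaze point on the screen to (pan, tilt) angles in degrees.

    Gaze at the screen center (0.5, 0.5) maps to (0, 0): no movement.
    Gaze at the far left edge maps to a full left pan, and so on.
    """
    pan = (gaze_x - 0.5) * PAN_RANGE_DEG    # negative = left, positive = right
    tilt = (0.5 - gaze_y) * TILT_RANGE_DEG  # negative = down, positive = up
    return pan, tilt
```

A direct position mapping like this makes the screen mirror the remote user's head-like motion: the farther their gaze drifts from the center of the screen, the farther the tablet turns in that direction.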
The remote side consists of a computer with an attached eye tracker, camera, microphone, and speaker. Like the local side, the remote side can both send and receive audio and video. The eye tracker connected to the remote device keeps track of where the remote user is looking on the monitor's screen. This information is wirelessly sent to the local device and used to create instructions for how the local device should move. This system allows the remote user to control the local screen without having to actively think about doing so.
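One way the per-sample gaze data could be packaged for the wireless link is sketched below. The field names, units, and JSON encoding are assumptions for illustration; the source does not specify the actual ARGOS message format or transport.

```python
# Hypothetical sketch of a remote-to-local gaze message. Each gaze sample
# is serialized to a small JSON payload; the resulting bytes would then be
# sent over the wireless link (transport unspecified in the source).
import json

def encode_gaze_sample(gaze_x: float, gaze_y: float, timestamp: float) -> bytes:
    """Serialize one normalized gaze sample (0.0-1.0 per axis) with a
    timestamp, so the local device can discard stale samples."""
    return json.dumps({"x": gaze_x, "y": gaze_y, "t": timestamp}).encode("utf-8")
```

Keeping the message to a few fields keeps per-sample overhead low, which matters because eye trackers typically report gaze many times per second.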
For example, if the remote user is looking at the far left edge of the screen, the local device will automatically turn to the left, allowing the remote user to effortlessly look around the room as if they were actually there.
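The edge-triggered behavior in this example could also be realized with a velocity-style scheme: gaze near a screen edge commands a continuous turn in that direction, while gaze in a central dead zone holds the screen still. The sketch below illustrates this alternative; the dead-zone width and speed limit are made-up values, and the source does not say which control scheme ARGOS actually uses.

```python
# Hypothetical edge-driven pan control. Gaze in the central "dead zone"
# produces no motion; gaze toward an edge ramps the turn speed up linearly.
# All constants are illustrative, not actual ARGOS parameters.

DEAD_ZONE = 0.3         # central fraction of the screen with no motion
MAX_SPEED_DEG_S = 30.0  # assumed peak pan speed in degrees per second

def pan_speed(gaze_x: float) -> float:
    """Return a signed pan speed in deg/s from a normalized horizontal
    gaze coordinate (0.0 = far left edge, 1.0 = far right edge)."""
    offset = gaze_x - 0.5
    if abs(offset) <= DEAD_ZONE / 2:
        return 0.0  # looking near the center: hold still
    # Scale speed by how far past the dead zone the gaze falls.
    edge = (abs(offset) - DEAD_ZONE / 2) / (0.5 - DEAD_ZONE / 2)
    return MAX_SPEED_DEG_S * edge * (1 if offset > 0 else -1)
```

A dead zone like this helps filter out the small involuntary eye movements that would otherwise make the local screen jitter while the remote user reads or watches something near the center.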