Remote Control

The purpose of the remote control app is to allow users to remotely control the UR5 robotic arm during surgery and to receive feedback from the onboard SafeVision system. The app communicates with the Windows server on the host machine via a TCP client. Arm positions and sensor data are sent to the Android app, where they drive a live simulation of the robotic arm's current joint positions and alert the user to any obstructions. The app lets the user move the arm by giving them control of each of the six individual joints. When local control and this remote control issue conflicting commands, local control always takes priority to avoid potential mishaps.
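
The wire format between the server and the app is not spelled out in this section. As a minimal sketch, assuming a fixed-size binary packet, the per-update payload described above might look like the following; the struct name, field names, and layout are hypothetical:

    #include <cstdint>

    // Hypothetical server-to-app update packet; the actual protocol is
    // not documented here, so names and layout are illustrative only.
    #pragma pack(push, 1)
    struct ArmUpdate {
        float   jointAngles[6];    // one angle per joint, in degrees
        uint8_t obstacleDetected;  // 1 if SafeVision reports an obstruction
    };
    #pragma pack(pop)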

The IP address and port fields allow the user to log in to any robot on the same network, and password verification is handled server-side for security.
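
Server-side verification might look like the following Winsock sketch. The recv and send calls are standard Winsock, but the single-read plaintext exchange and the hard-coded comparison are assumptions, since the actual scheme is not described:

    #include <winsock2.h>
    #include <cstring>
    #pragma comment(lib, "ws2_32.lib")

    // Reads one login message from a newly accepted client and replies
    // with the result. The plaintext, single-recv exchange and the
    // placeholder password are stand-ins for the real scheme.
    bool authenticateClient(SOCKET client) {
        char buf[64] = {};
        int n = recv(client, buf, sizeof buf - 1, 0);
        if (n <= 0) return false;                  // client disconnected
        bool ok = std::strcmp(buf, "expected-password") == 0;
        const char* reply = ok ? "OK" : "DENIED";
        send(client, reply, (int)std::strlen(reply), 0);
        return ok;
    }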

There are three different modes for the arm. Stop mode is the default state when neither free-drive nor auto is active. Free-drive allows the arm to be moved locally by hand, but not with the app. Auto allows the app to move the arm as long as it does not override local commands. The current mode cannot be changed directly from the app, again to prevent conflicts between local and remote commands.
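
The mode names above come straight from the text; as a sketch (not the project's actual API), the gating rule they describe reduces to a single check:

    // A remote (app) command is honored only in auto mode, and never
    // when it would override an active local command.
    enum class ArmMode { Stop, FreeDrive, Auto };

    bool remoteCommandAllowed(ArmMode mode, bool localCommandActive) {
        return mode == ArmMode::Auto && !localCommandActive;
    }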

The sliders are non-interactable; the only way to change the joint angles is to press (or hold) the arrow buttons on either side of the sliders. This prevents the user from commanding too large a movement at once: if a slider could be dragged freely, the arm would attempt to cover the jump too fast, which is a safety hazard. The buttons move their respective joints one degree per press, and one degree every 50 milliseconds when held.
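
The hold-to-repeat cadence lives in the Android app, so the following standalone C++ loop is only an illustration of the timing described above, not the app's actual handler:

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Moves one joint by one degree every 50 ms for as long as the
    // button stays held, matching the cadence described above.
    void holdJointButton(std::atomic<float>& jointAngleDeg, float direction,
                         const std::atomic<bool>& buttonHeld) {
        using namespace std::chrono_literals;
        while (buttonHeld.load()) {
            jointAngleDeg = jointAngleDeg.load() + direction; // +/- one degree per tick
            std::this_thread::sleep_for(50ms);                // 50 ms between ticks
        }
    }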

The app is built entirely for Android, while the Windows server is written in C++ using the Windows Sockets API (Winsock). The Winsock server continuously polls the PolyScope API that controls the UR5 arm for changes in the arm's joint positions. When a change is detected, it sends an update to the app to refresh the sliders and the simulation. At the same time, the server listens for updates from the app: it checks the current state of the arm, parses the app's input, and determines whether or not to actually send the update to the arm. The simulation is rendered with OpenGL and models the arm in real time, along with active sensor detections. As expected, if an obstacle is detected, the arm in the app will not move.
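
A hedged sketch of that server loop follows: poll the arm, push state changes to the app, and forward app commands only when the mode and obstacle checks pass. readArmState(), sendJointCommand(), and the ArmState struct stand in for the project's actual PolyScope integration, which is not shown here, and the real server handles sending and receiving concurrently rather than interleaved in one loop:

    #include <winsock2.h>
    #include <cstring>
    #pragma comment(lib, "ws2_32.lib")

    struct ArmState {
        float joints[6];  // current joint angles, degrees
        bool  obstacle;   // SafeVision obstruction flag
        bool  autoMode;   // true when remote commands are accepted
    };

    ArmState readArmState();                       // hypothetical PolyScope query
    void sendJointCommand(const float target[6]);  // hypothetical PolyScope command

    void serveClient(SOCKET app) {
        ArmState last = readArmState();
        for (;;) {
            // Poll the arm and push any state change to the app.
            ArmState now = readArmState();
            bool changed = std::memcmp(now.joints, last.joints, sizeof now.joints) != 0
                        || now.obstacle != last.obstacle;
            if (changed) {
                send(app, reinterpret_cast<const char*>(&now), sizeof now, 0);
                last = now;
            }
            // Receive a command from the app (concurrent in the real
            // server; a blocking recv is used here for brevity).
            float target[6];
            int n = recv(app, reinterpret_cast<char*>(target), sizeof target, 0);
            // Forward only in auto mode and when no obstacle is detected.
            if (n == (int)sizeof target && now.autoMode && !now.obstacle)
                sendJointCommand(target);
        }
    }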