I contributed to the front-end UI and data-acquisition side of the program, designing the UI and the camera module that uploads an image file to Firebase, producing an image URI that the Google Cloud Vision API can access. I also worked on receiving the HTTP POST response and displaying it on the UI.

For the front end, we learned a significant amount about starting from a very simple React JS web-app template and turning it into a cross-platform application that integrates both a storage database and a computer vision API. At first we hoped we could simply connect the computer vision back end to the React JS front end, but we realized this would only work on a single platform because of the way the API was designed. Therefore, to keep our accessibility high, we chose to find a workaround. We first researched how to use a database to store primitive data types, then extended that use so that we could upload images and generate a public HTTP URL that the API could access. The extensive networking features we had to develop for this application were a highly valuable learning opportunity for our group, especially since our experience up to that point had mostly been confined to our local machines.
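The upload step described above could be sketched roughly as follows, assuming the Firebase Web SDK v9 modular storage API; the path prefix and function names here are illustrative placeholders, not the project's actual code:

```javascript
// Hypothetical sketch of the Firebase upload step (SDK v9 modular API).
// Timestamp prefix keeps repeated uploads of the same filename from colliding.
function makeImagePath(fileName) {
  return `uploads/${Date.now()}-${fileName}`;
}

// Upload a File/Blob and return a download URL that an external
// service (here, the Vision API) can fetch over HTTP.
async function uploadAndGetUrl(storage, file) {
  // Imported lazily so this sketch stays self-contained.
  const { ref, uploadBytes, getDownloadURL } = await import("firebase/storage");
  const imageRef = ref(storage, makeImagePath(file.name));
  await uploadBytes(imageRef, file);
  return getDownloadURL(imageRef);
}
```

The returned download URL is what made the workaround possible: it is a plain HTTPS link, so any platform's HTTP client can hand it to the vision back end.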
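The HTTP POST and response-handling side might look something like this sketch, assuming the public Cloud Vision `images:annotate` REST endpoint with label detection; the API key and helper names are assumptions for illustration:

```javascript
// Sketch of calling the Cloud Vision REST API with a public image URL.
const VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate";

// Build the JSON request body for a publicly reachable image URI.
function buildVisionRequest(imageUri) {
  return {
    requests: [
      {
        image: { source: { imageUri } },
        features: [{ type: "LABEL_DETECTION", maxResults: 5 }],
      },
    ],
  };
}

// Pull label descriptions out of a Vision response for display in the UI.
function extractLabels(response) {
  const annotations = response.responses?.[0]?.labelAnnotations ?? [];
  return annotations.map((a) => a.description);
}

// Not executed here: POST the request and parse the response.
async function annotateImage(imageUri, apiKey) {
  const res = await fetch(`${VISION_ENDPOINT}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildVisionRequest(imageUri)),
  });
  return extractLabels(await res.json());
}
```

Because the request only carries a URL rather than raw image bytes, the same call works identically from any platform the front end runs on.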