For the final week of the project I focused on finishing the project website, creating the final presentation, and fixing the last bugs in the application. Given that my team was presenting rather early, we started working on our website and final presentation this week. I decided to add elements to our website that would mirror an actual website for a commercial Android application, giving it more content besides project updates and technical explanations.
As for the bug fixes I finished this week, I revamped the UI of our application to better present the images in our feed. I also changed the feed to include a stylized header for the bin location you are in. Ultimately, the UI was cleaner and provided a smoother experience for the user. The other major bug I fixed was the order of the posts in the feed. There was a significant flaw where our app would show the uploaded posts in a random order; in addition, recently added posts would initially appear first and then jump to the end of the list. I wanted the classic feed sort where the newest items are displayed first. To accomplish this, I added a timestamp to our PhotoPost class that represents each post, and in the backend I ordered the posts by that timestamp. However, Firebase only supports ordering in ascending order. I tried multiplying the timestamps by negative one, but that did not work for us, so in the end I simply reversed the order of the posts client-side.
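For reference, here is a minimal sketch of that approach, assuming the posts live under a "posts" node and PhotoPost exposes a timestamp field; the node name and the adapter hookup are assumptions, not our exact code:

```java
import com.google.firebase.database.*;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Firebase can only order ascending, so query by timestamp and reverse
// the result client-side to get a newest-first feed.
void loadFeedNewestFirst() {
    DatabaseReference postsRef = FirebaseDatabase.getInstance()
            .getReference("posts"); // node name is an assumption
    postsRef.orderByChild("timestamp")
            .addListenerForSingleValueEvent(new ValueEventListener() {
                @Override
                public void onDataChange(DataSnapshot snapshot) {
                    List<PhotoPost> posts = new ArrayList<>();
                    for (DataSnapshot child : snapshot.getChildren()) {
                        posts.add(child.getValue(PhotoPost.class));
                    }
                    Collections.reverse(posts); // newest posts first
                    // hand `posts` to the feed adapter here
                }

                @Override
                public void onCancelled(DatabaseError error) { }
            });
}
```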
By Wednesday night, I had put the final touches on the application. My partner and I simply tested it to make sure we were ready for the final presentation and demo. The last bit of work left is to add some more information to our website. The project was a great learning experience where I was able to brush up on Java programming and learn to develop on a new platform with new APIs.
For this week I worked on the final piece of our functionality, the 'explore' activity. This feature allows users to visit other locations by viewing their respective feeds. However, they are not allowed to post to any of these feeds, since they are not in that particular geographic location. I want to note that we pivoted from our original idea for our third feature. At first we were going to allow users to manage their posts, but after realizing we would have to implement SQLite functionality in our application with the limited amount of time we had left, we decided to pivot. Learning a new tool and language seemed like a large amount of work, so we chose instead to implement the new 'explore' feature. This increases user interaction with our application, as users have more than just their current feed to interact with.
As for the implementation, it was very similar to the main feed I implemented last week. In fact, it was simpler, as I did not have to account for location changes. I created an activity that allows the user to select other locations to view; the current location is passed in from the main feed via an intent. I also created a separate activity that acts as the feed for these other locations. The structure of the XML layout was very similar to that of the main feed, which made it an easy task.
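As an illustration, the handoff between activities looks roughly like the sketch below; the activity names (ExploreActivity, ExploreFeedActivity) and the extra key are hypothetical placeholders, not our real identifiers:

```java
import android.content.Intent;

// In the main feed: open the explore picker, passing along the current bin.
Intent explore = new Intent(this, ExploreActivity.class);
explore.putExtra("bin", currentBinName); // extra key is an assumption
startActivity(explore);

// In ExploreActivity: once the user picks another location, open a
// read-only feed for that bin.
Intent feed = new Intent(this, ExploreFeedActivity.class);
feed.putExtra("bin", selectedBinName);
startActivity(feed);
```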
Finally finished the camera activity that captures a live preview. This activity is fullscreen and has a small capture button at the bottom, so whenever the user wants to capture an image, a new activity is spawned to upload the image to Firebase.
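The capture-to-upload handoff can be sketched roughly as below, using the older android.hardware.Camera API; the view id, the UploadActivity name, and the savePhoto helper are hypothetical stand-ins:

```java
import android.content.Intent;
import android.widget.Button;
import java.io.File;

// On tap, grab a still frame from the live preview and hand the saved
// file off to a separate upload activity.
Button captureButton = findViewById(R.id.capture_button); // id is an assumption
captureButton.setOnClickListener(v ->
        camera.takePicture(null, null, (data, cam) -> {
            File photo = savePhoto(data); // hypothetical helper that writes the JPEG
            Intent upload = new Intent(this, UploadActivity.class);
            upload.putExtra("photo_path", photo.getAbsolutePath());
            startActivity(upload);
        }));
```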
This took much more time than it should have. The camera preview was rendering slightly narrower than the width of the screen in both portrait and landscape. The error was caused by reusing a GraphicOverlay class that was specific to another activity; once that was removed, the camera worked fine. The next step is to add upload functionality.
This week I finished off the main functionality of the main activity of our application. This activity is where users can browse through the images that belong to their particular geographic location. The first task I had to accomplish was connecting our application to our database. For this project we are using Google's Firebase Realtime Database. I set up the project on the Firebase console and connected the application so it could read from the database. I set up the database so that we store only the URI of each photo and then refer to our Firebase Storage for each individual image.
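The shape of that data looks roughly like the sketch below; the field names and the surrounding view code are assumptions:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import com.google.firebase.storage.FirebaseStorage;
import com.google.firebase.storage.StorageReference;

// Each database entry stores only the Storage URI of its image; the image
// bytes themselves live in Firebase Storage.
public class PhotoPost {
    public String uri;     // points into Firebase Storage
    public PhotoPost() { } // empty constructor required by Firebase
}

// Resolve a post's URI back into a Storage reference and fetch the image.
void loadImage(PhotoPost post) {
    final long ONE_MEGABYTE = 1024 * 1024;
    StorageReference imageRef = FirebaseStorage.getInstance()
            .getReferenceFromUrl(post.uri);
    imageRef.getBytes(ONE_MEGABYTE).addOnSuccessListener(bytes -> {
        Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
        imageView.setImageBitmap(bitmap); // imageView is an assumed field
    });
}
```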
Another large part of my work this week was setting up the listeners that check for changes in the database so that they work in sync with the location listeners I set up last week. This is important because, depending on the user's location, we want them to receive data from a specific part of our database. This required me to explore the different states one encounters when jumping from location to location.
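Conceptually, the sync works like the sketch below: when the bin changes, the old database listener is detached and a new one is attached under the new bin's node. The "posts"/bin node layout is an assumption:

```java
import com.google.firebase.database.*;

private DatabaseReference currentRef;
private ValueEventListener feedListener;

// Called by the location code whenever the user's bin changes.
private void onBinChanged(String newBin) {
    if (currentRef != null && feedListener != null) {
        currentRef.removeEventListener(feedListener); // stop stale updates
    }
    currentRef = FirebaseDatabase.getInstance()
            .getReference("posts").child(newBin); // node layout is an assumption
    feedListener = currentRef.addValueEventListener(new ValueEventListener() {
        @Override
        public void onDataChange(DataSnapshot snapshot) {
            // rebuild the feed from snapshot here
        }

        @Override
        public void onCancelled(DatabaseError error) { }
    });
}
```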
At this point, the functionality of our main activity is complete. I am looking to revamp the UI so that it looks more pleasant.
Created a new camera branch, merged master into it, and started development work there. I first created a camera intent so we could use the stock camera application to gather data, but then promptly removed it and built a custom camera similar to the first assignment.
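For the record, the intent-based approach we tried first is roughly this standard pattern, before we replaced it with the custom camera:

```java
import android.content.Intent;
import android.provider.MediaStore;

static final int REQUEST_IMAGE_CAPTURE = 1; // request code is arbitrary

// Delegate capture to the stock camera app via an implicit intent.
void dispatchTakePictureIntent() {
    Intent takePicture = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (takePicture.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePicture, REQUEST_IMAGE_CAPTURE);
    }
}
```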
Finally, we decided that fragments were not the way we wanted to initiate our capture flow, so from then on we will use full activities to capture an image and upload it to Firebase. The camera UI also got a bit of an upgrade: it now uses the full screen via a Full Screen Activity template.
For this week my contributions to the project revolved around the main activity of our application. This is the activity that handles viewing images and videos that have been uploaded around the user's geographic area. Specifically, this week I developed the location-tracking portion of our application. Using both GPS and network connections (we prioritize the network provider and fall back to GPS if it is not responding fast enough), we are now able to retrieve the location of a user.
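A simplified sketch of the provider setup is below. Note that it only falls back to GPS when the network provider is disabled, whereas our actual logic also times out a slow network fix; the update interval and distance are placeholder values:

```java
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

void startLocationUpdates() {
    LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
    LocationListener listener = new LocationListener() {
        @Override
        public void onLocationChanged(Location location) {
            // hand the fix to the bin-assignment code (see below)
        }
        @Override public void onStatusChanged(String p, int s, Bundle b) { }
        @Override public void onProviderEnabled(String p) { }
        @Override public void onProviderDisabled(String p) { }
    };

    // Prefer the network provider for a fast fix; fall back to GPS.
    if (lm.isProviderEnabled(LocationManager.NETWORK_PROVIDER)) {
        lm.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 5000, 10, listener);
    } else {
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000, 10, listener);
    }
}
```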
The second half of my contribution involved using the user's retrieved location to place them in what we call 'bins'. These bins are geographical areas that I have pre-determined on a map. I wrote a couple of classes and an algorithm that take in the user's location and decide which bin the user belongs to. At the moment, the name of the bin is displayed as a simple TextView, but I plan to enhance the UI next week.
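The bin assignment can be sketched as below; modeling bins as rectangular lat/lng regions is my simplification here, not necessarily the exact shapes we used:

```java
import android.location.Location;
import java.util.List;

// A bin is a named geographic region, modeled here as a lat/lng rectangle.
public class Bin {
    public final String name;
    public final double minLat, maxLat, minLng, maxLng;

    public Bin(String name, double minLat, double maxLat,
               double minLng, double maxLng) {
        this.name = name;
        this.minLat = minLat; this.maxLat = maxLat;
        this.minLng = minLng; this.maxLng = maxLng;
    }

    public boolean contains(double lat, double lng) {
        return lat >= minLat && lat <= maxLat
            && lng >= minLng && lng <= maxLng;
    }

    // Linear scan over the pre-determined bins; null if no bin matches.
    public static Bin assign(List<Bin> bins, Location loc) {
        for (Bin bin : bins) {
            if (bin.contains(loc.getLatitude(), loc.getLongitude())) {
                return bin;
            }
        }
        return null;
    }
}
```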
This week was mostly centered around setting up the project and understanding what needs to work together for this project to succeed. Many diagrams and UI mock-ups were drawn until a final version was put together. Finally, I helped set up the GitHub repository with a preliminary application that uses a bottom navigation bar to move between different activities via intents; this serves as the master branch, and we will add different features on separate branches and then merge them into master.
The main progress this week was getting an exact roadmap planned out. The next order of business is understanding how fragments could help us and whether we even need them.