Data collection was an integral part of our project: we collected data from large batches of runs to evaluate the consistency and quality of our code.
Before each code submission (e.g., before the Round 1 submission), we ran the final version of our code repeatedly and analyzed its performance by evaluating the consistency of our scores.
In the Google Sheet below (Run Logs: Qual_Code All Runs, Qualification Code Outliers, Problems pages), we highlighted both low (score < 90) and high (score > 99) outlier scores. We found that the low outliers were caused by failed movement to the plot because we attempted to access the plot from too far away, so we tuned the "dist_to_stop" variable (measured from the center point of the target zone) to achieve more consistent scoring (typically 98-99 before the speed updates and around 107 after them).
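As a rough illustration of the outlier screening described above, the following Python sketch flags low and high outlier scores and summarizes run-to-run consistency. The run_scores list is a placeholder, not our actual log data; only the 90 and 99 cutoffs come from the analysis above.

```python
# Minimal sketch of the outlier screening applied to run logs.
# The scores below are placeholder values, not our actual run data;
# the thresholds (90 and 99) match the cutoffs described above.
from statistics import mean, stdev

LOW_CUTOFF = 90    # runs below this usually meant a failed movement to the plot
HIGH_CUTOFF = 99   # runs above this were flagged as unusually high

run_scores = [98, 99, 97, 84, 98, 100, 99, 96]  # placeholder run log

low_outliers = [s for s in run_scores if s < LOW_CUTOFF]
high_outliers = [s for s in run_scores if s > HIGH_CUTOFF]

print(f"mean score: {mean(run_scores):.1f}, std dev: {stdev(run_scores):.1f}")
print(f"low outliers (score < {LOW_CUTOFF}): {low_outliers}")
print(f"high outliers (score > {HIGH_CUTOFF}): {high_outliers}")
```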
In addition, we collected data on the time it takes to travel between different locations and on the rate of battery depletion to develop our strategies. From this data we concluded that the one-plot method was optimal, as traveling between plots would yield lower scores due to excessive battery and time depletion; a simple model of this trade-off is sketched below.
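To illustrate the reasoning behind that conclusion, the sketch below compares estimated scores for one-plot versus multi-plot strategies under a simple model in which travel time earns no points and drains the battery. All constants (match time, depletion rate, scoring rate, travel time) are hypothetical placeholders, not our measured values.

```python
# Rough sketch of the one-plot vs. multi-plot comparison; all numbers are
# hypothetical placeholders, not measured values from our runs.
MATCH_TIME = 120.0        # total time available (s), assumed
BATTERY = 100.0           # starting battery (%), assumed
DRAIN_PER_SEC = 0.5       # battery depletion while driving (%/s), assumed
POINTS_PER_SEC = 1.2      # scoring rate while working a plot, assumed
TRAVEL_TIME = 15.0        # time to drive between plots (s), assumed

def expected_score(num_plots: int) -> float:
    """Estimate score: time spent traveling between plots earns nothing
    and drains battery, so it directly reduces productive scoring time."""
    travel = (num_plots - 1) * TRAVEL_TIME
    battery_left = BATTERY - travel * DRAIN_PER_SEC
    # Productive time is capped by remaining match time and remaining battery.
    productive = min(MATCH_TIME - travel, battery_left / DRAIN_PER_SEC)
    return max(productive, 0.0) * POINTS_PER_SEC

for plots in (1, 2, 3):
    print(f"{plots}-plot strategy -> estimated score {expected_score(plots):.0f}")
```

Under these placeholder numbers the estimated score drops as more plots are visited, which mirrors the trend we observed in our collected data.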