Before diving into the creation of the extension, we created a paper prototype that served as the brainstorming foundation for our project layout. Since we are building a Chrome extension, we wanted to show three user test cases that could arise during its use. The three are showcased below with their corresponding wireframes.
As a team, we understand that our algorithm will make mistakes that need to be corrected as more and more data is collected and applied. For our first user test case, the extension highlights the detected misinformation and asks the user to rate whether what they are reading is actually misinformation.
The first step in this case is that our algorithm detects misinformation based on the data we have collected over time.
The second step is that Climate Watch presents a link to more accurate information on the same topic. The user is then asked to rate the new article we present to them.
Once the user submits their feedback, we thank them and process the data to make any necessary changes to the information we provide.
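To make this flow concrete, the sketch below shows one way a content script could wrap a detected phrase in a highlight and ask the user for a rating when they click it. This is a minimal illustration rather than our final implementation: the flagged phrases, the CSS class name, and the `rating` message shape are placeholders standing in for the real detection backend.

```typescript
// content-script.ts — minimal sketch of the highlight-and-rate flow.
// `flaggedPhrases` stands in for the output of our detection backend;
// the real extension would fetch flags for the current page.
const flaggedPhrases: string[] = ["climate change is a hoax"];

// Wrap each occurrence of a flagged phrase in a highlighted <mark>.
function highlightPhrase(root: Node, phrase: string): void {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  const matches: Text[] = [];
  while (walker.nextNode()) {
    const text = walker.currentNode as Text;
    if (text.data.toLowerCase().includes(phrase.toLowerCase())) {
      matches.push(text);
    }
  }
  for (const text of matches) {
    const start = text.data.toLowerCase().indexOf(phrase.toLowerCase());
    const target = text.splitText(start); // phrase starts here
    target.splitText(phrase.length);      // trim the tail
    const mark = document.createElement("mark");
    mark.className = "climate-watch-flag"; // class name is a placeholder
    mark.textContent = target.data;
    mark.title = "Climate Watch: possible misinformation — click to rate";
    mark.addEventListener("click", () => askForRating(target.data));
    target.replaceWith(mark);
  }
}

// Ask the user whether the highlight was correct and forward the answer
// to the background script (message shape is an assumption).
function askForRating(phrase: string): void {
  const isMisinfo = window.confirm(
    `Is this statement misinformation?\n\n"${phrase}"`
  );
  chrome.runtime.sendMessage({ type: "rating", phrase, isMisinfo });
}

for (const phrase of flaggedPhrases) {
  highlightPhrase(document.body, phrase);
}
```

A production version would use a styled tooltip instead of `window.confirm`, but the overall structure (highlight, prompt, send feedback) is the same.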
The second user scenario is one where the user identifies that the algorithm has mistakenly flagged a phrase that is not actually misinformation. They are greeted with a pop-up menu that allows them to submit feedback to improve the dataset.
As before, our algorithm detects misinformation based on the data we have collected over time. The user then clicks Report if they identify an erroneous highlight.
After the user clicks Report, they are greeted with a pop-up menu that prompts them to explain why they are reporting this highlight as a misidentification.
Not every report is going to be accurate, so we ask users to include cited sources to support their claims. We collect the reports, review them, and then make the necessary changes.
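Behind this flow, the user's report has to reach us for review. Below is a sketch of how the extension's background script might collect reports; the message shape and the endpoint URL are illustrative assumptions, and nothing changes in the dataset until the team has verified the cited sources.

```typescript
// background.ts — sketch of collecting a misidentification report.
// The Report shape and the API URL are assumptions for illustration.
interface Report {
  type: "report";
  phrase: string;     // the highlighted text being disputed
  reason: string;     // why the user believes it was misidentified
  sources: string[];  // cited sources supporting the claim
}

chrome.runtime.onMessage.addListener((message: Report, sender) => {
  if (message.type !== "report") return;
  // Queue the report for human review rather than editing the
  // dataset directly; reports are applied only after verification.
  fetch("https://climate-watch.example/api/reports", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      ...message,
      pageUrl: sender.tab?.url,            // where the highlight appeared
      reportedAt: new Date().toISOString(),
    }),
  }).catch(console.error);
});
```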
Our Chrome extension also gives users the option to personalize their experience for optimized use. Settings include turning external links on and off, flagging only certain types of misinformation, and personalizing the color of the highlights (for accessibility reasons).
Once the algorithm detects misinformation, the user may want to make some changes for a better browsing experience. A settings icon in the pop-up lets them make any necessary adjustments.
After clicking the settings icon, the user is brought to the settings page, which presents a list of options they can change, such as external links, which misinformation to flag, and highlight colors.
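Under the hood, these options could be persisted with the chrome.storage API so they follow the user across devices. The sketch below shows one way to model that; the field names, categories, and default color are assumptions, not our final schema.

```typescript
// settings.ts — sketch of persisting personalization options with
// chrome.storage.sync. All field names and values are placeholders.
interface Settings {
  showExternalLinks: boolean;   // toggle the "more accurate info" links
  flaggedCategories: string[];  // which types of misinformation to flag
  highlightColor: string;       // CSS color for highlights (accessibility)
}

const DEFAULTS: Settings = {
  showExternalLinks: true,
  flaggedCategories: ["denial", "misleading-statistics"],
  highlightColor: "#ffd54f",
};

// Load saved settings, falling back to the defaults above.
async function loadSettings(): Promise<Settings> {
  const stored = await chrome.storage.sync.get(DEFAULTS);
  return stored as Settings;
}

// Persist a partial update from the settings page.
async function saveSettings(update: Partial<Settings>): Promise<void> {
  await chrome.storage.sync.set(update);
}
```

Using `chrome.storage.sync` rather than local storage means a user who sets a colorblind-friendly highlight color once gets it on every machine they sign into.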
With the feedback we received from testing our paper prototype, we made the following changes.
Our third iteration was a fully functional version of our prototype for user testing, without the styling that would appear in the finished product.
Our finished prototype in action, highlighting detected misinformation in real time.
To give the user the most personalized experience, we included settings that allow them to customize which information is flagged and the highlight color of the warnings.