Our project addresses the problem of finding and reserving study rooms in the Miller Learning Center. Our system has two main functions: displaying the available rooms and allowing rooms to be reserved. In our requirement elicitation survey, we included a question to determine the demand for such a system. Based on the results, we determined that many students found it difficult to find a study room in the Miller Learning Center, with over 75% rating the difficulty at 8 out of 10. In addition, 80% of our respondents said that they would use a system that allowed them to reserve study rooms in the MLC, with only two people saying that they wouldn't. We decided to fill this gap by including two features in our system: a graphical representation of the floors that displays which rooms are currently open, and a system for reserving rooms. We also discovered through our preliminary interviews that many people thought not all of the rooms in the MLC should be available to reserve; some should remain first come, first served. To satisfy both requirements, we designed the system so that all rooms on the third floor are first come, first served, while all rooms on the fourth floor are available for reservation.
For our heuristic evaluation, we asked an evaluator to go through our prototype with a list of heuristics against which to evaluate it. We asked them to identify any problems they found and to assign each a severity rating based on how much they thought it would affect the end user.
The first step in conducting our cognitive walkthrough was to define the tasks that we wanted the user to complete. These are the same tasks we used in our retrospective testing interviews and think-aloud evaluations and can be found under the tasks section of this document. We then subdivided these tasks into the steps that compose each one and asked an evaluator to follow the resulting sequence of steps to complete the tasks. For each step, we asked the evaluator to determine whether the system allowed the user to find the correct action, whether the user could associate that action with the outcome they expected to achieve, and whether, once the correct action was taken, the user could perceive that progress had been made. We recorded what the evaluator said and attempted to find any issues in the design of our prototype.
We performed this evaluation using Fitts's law, which predicts the time it takes to move between the interactive elements of an application. We conducted it using the number of pixels between consecutive clicks, together with a constant for the average delay between clicks.
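For reference, the standard Shannon formulation of Fitts's law is MT = a + b·log2(D/W + 1), where D is the distance to the target, W is the width of the target, and a and b are empirically determined constants; in our simplified model, the constant delay between clicks roughly corresponds to the intercept a, and the pixel distance between clicks corresponds to D.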
For the retrospective testing interview, we asked a series of questions about our application, ranging from how the evaluators felt about using it to what we should do to improve it (i.e., what they did not like). Most importantly, we asked whether or not they would ever use our application, as a final gauge of their overall opinion of it.
For the think-aloud evaluation, we sat down with each of our evaluators and explained the principles of the think-aloud technique. We asked them to narrate exactly what they were doing throughout the execution of each task and to explain their reasoning. We stressed that we were not testing their ability to accomplish these tasks but were instead testing the system's ease of use and performance. We then gave them the prototype and each of the tasks. We recorded important points in their narration, paying particular attention to portions of the prototype that they had trouble navigating or found confusing.
For our questionnaire, we used a Google Form to poll the evaluators from the retrospective testing interview and think-aloud evaluation in order to determine their opinions of the system, gather additional feedback, collect basic statistical information, and determine how well they represent the overall target population. We administered the questionnaire immediately following the interview to ensure that the information we gathered was accurate and that the experience was still fresh in each evaluator's mind. The results of our questionnaire can be found under the results and conclusion section.
Average time spent in MLC per month: 18.5
100% of evaluators live off-campus
List of Majors for Evaluators:
Our results may be skewed, as most of those who completed the questionnaire are upperclassmen who live off campus. Students who live off campus are less likely to come back to campus to study unless it is necessary, and given the MLC's location on campus, underclassmen are more likely to visit since their residence halls are closer. Our interview population did have a good distribution of majors, which should make it representative of the target audience in that respect. However, because we lacked the same distribution across campus housing and school year, we are missing a major portion of the target audience, namely students living on campus and lowerclassmen, and the results of our interviews will not accurately reflect the opinions of these groups.
For the purposes of our retrospective testing interview and our think-aloud evaluation, we had four main benchmarking tasks that we wanted our testers to accomplish.
Check the availability of study rooms on the third floor
The purpose of this task was to examine the implementation of checking study room availability. We were looking in particular for ease of use and readability of the graphical representation. This is one of the two main features of our system and should be the quickest and easiest to use.
Make a reservation on the fourth floor
This task examined the other major feature of our system, making a reservation. It required the user to go through the login process and filter the rooms in order to successfully complete the reservation. We were looking particularly at the ease of use of the room filtering.
Check availability of the third floor then make a reservation on the fourth floor
This task combines the two previous tasks and, while it may seem redundant, provides insight into how easy it is to navigate between the different features of our system. Since each feature fills a distinct role, it is necessary to look at how they integrate into a single program.
Cancel a reservation
This task is linked to making a room reservation: once a reservation is made, there needs to be an efficient way to cancel it. This is important because the purpose of the system is to help groups find study rooms; if a group decides they no longer need a room, they should be able to cancel the reservation and free it up for another group to use.
Our cognitive walkthrough identified a number of issues with our current prototype. The foremost was in the third task, which required the user to check the availability of study rooms on the third floor and then make a reservation. The primary issue arose while moving from checking availability to making a reservation on the fourth floor. What should happen is that the user clicks a button labeled "4th floor," which takes them to the beginning of the reservation system. However, because of the button's context, namely that it is located in the view-occupancy portion of the system and resembles the label showing the third floor's occupancy, the user would interpret it as a button for viewing the occupancy of the fourth floor. This demonstrated a gap between what a user would think the button does and what it actually does.
This application's buttons are located very close to each other, so the time spent moving between clicks is very small. The buttons are also large relative to the distance between them, so the chance of a user missing a click and wasting time is very low. Once a button is clicked, the next state of the page has a similar layout of button sizes and distances, so the application maintains a consistent feel throughout.
Input method: mouse
Distances between clicks: 20 and 35 pixels
Constant delay between clicks: 250 ms
Average time to reserve an MLC room: 11.3 seconds
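To illustrate how these parameters feed into a prediction, the sketch below applies the Shannon form of Fitts's law to a sequence of clicks. The 250 ms constant delay and the 20 px and 35 px distances come from the parameters above; the slope and target width are assumptions chosen for illustration, not values measured from our prototype, and the measured 11.3-second average also includes reading and data-entry time that Fitts's law does not model.

```python
import math

def fitts_time_ms(distance_px: float, width_px: float,
                  a_ms: float = 250.0, b_ms: float = 150.0) -> float:
    """Predicted pointing time in ms using the Shannon form of Fitts's law.

    a_ms is the constant delay between clicks from our evaluation;
    b_ms (slope) and width_px are illustrative assumptions, not measurements.
    """
    index_of_difficulty = math.log2(distance_px / width_px + 1)
    return a_ms + b_ms * index_of_difficulty

# Distances between consecutive clicks in the reservation flow (pixels),
# taken from the 20 px and 35 px figures above.
click_distances_px = [20, 35]
assumed_button_width_px = 40  # assumption for illustration only

total_ms = sum(fitts_time_ms(d, assumed_button_width_px) for d in click_distances_px)
print(f"Predicted pointing time for this click sequence: {total_ms / 1000:.2f} s")
```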
When each user finished testing our prototype, we immediately asked for their thoughts on the application. We wanted to see whether their initial reaction was positive or negative. We then asked whether they had found any issues. One user noted that from the "Occupancy" page, it wasn't exactly clear how to go and book a room on the fourth floor without going back to the home screen. We then asked what they liked. A few users noted that they liked having fewer screens, as it was more efficient and not "a bunch of jumps and hoops to complete the objective." Lastly, we asked whether they would use this application if it were developed. Most said they definitely would, as they often have trouble finding study space in the MLC.
The think-aloud evaluation went well: users explained what they were doing and why as they completed the benchmark tasks we supplied. The tasks were: check the availability of study rooms on the third floor, make a reservation on the fourth floor, check the availability of the third floor and then make a reservation on the fourth floor, and cancel a reservation. As we gave them each task, they narrated what they were doing and where they got stuck. One user noted that from the "Occupancy" page it wasn't clear how to book a room on the fourth floor without going back to the home screen. After a few seconds, she was able to maneuver her way through by clicking the appropriate button. This told us that we might need clearer language or labels so users know they can reserve a room from that screen.
Questions:
Throughout our evaluation we identified a number of areas where our prototype failed to provide a user-friendly, intuitive experience. The biggest issue is the difficulty of transitioning from viewing occupancy to making a reservation. While the home page makes it easy to transition between the two features, we should not require the user to return to the home page in order to move between them. Our current implementation uses a button that resembles the label showing the percentage of the third floor occupied, which implies that the button has a similar function and should switch between floors. There are a couple of solutions to this issue. The first is to dissociate the reservation button from the third-floor occupancy label. However, this is not ideal, since any fourth-floor rooms without reservations can still be filled on a first-come, first-served basis, so users also need to see the fourth floor's occupancy. A better solution would be to add a dedicated reservation button to the occupancy screen, provide an occupancy map for the fourth floor similar to the third floor's, and allow the user to switch between these two maps using the buttons with the percent-occupied labels.
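As a rough sketch of the proposed navigation, here is a minimal screen-state model in which the percent-occupied buttons only switch between floor maps and a separate button starts the reservation flow; the screen and action names are illustrative assumptions, not identifiers from our prototype.

```python
# Minimal sketch of the proposed occupancy-screen navigation.
# Screen and action names are illustrative assumptions.
NAVIGATION = {
    "occupancy_3rd_floor": {
        "view_4th_floor_occupancy": "occupancy_4th_floor",  # percent-occupied button switches maps
        "make_reservation": "reservation_start",            # new, clearly labeled reservation button
    },
    "occupancy_4th_floor": {
        "view_3rd_floor_occupancy": "occupancy_3rd_floor",
        "make_reservation": "reservation_start",
    },
}

def next_screen(current: str, action: str) -> str:
    """Return the screen reached by taking `action` from `current`."""
    return NAVIGATION[current][action]

# The reservation flow is reachable from either occupancy map without
# returning to the home screen.
assert next_screen("occupancy_3rd_floor", "make_reservation") == "reservation_start"
assert next_screen("occupancy_4th_floor", "view_3rd_floor_occupancy") == "occupancy_3rd_floor"
```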
Another issue that many of the evaluators brought up was that it was difficult to tell which rooms were unoccupied on the occupancy map. Part of the problem is that the current map makes it hard to tell which rooms are study rooms and which are not. The best solution would be to color-code all of the unoccupied study rooms as well, similar to how we color-coded the occupied rooms but in green.
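A minimal sketch of how the map coloring could work, assuming each room on the map carries a flag for whether it is a study room and whether it is currently occupied; the data shape and color choices are assumptions for illustration.

```python
def room_fill_color(is_study_room: bool, is_occupied: bool) -> str:
    """Color used to draw a room on the occupancy map (illustrative)."""
    if not is_study_room:
        return "gray"    # non-study rooms stay neutral so they don't distract
    return "red" if is_occupied else "green"

assert room_fill_color(is_study_room=True, is_occupied=False) == "green"
assert room_fill_color(is_study_room=False, is_occupied=False) == "gray"
```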
Several other evaluators brought up the lack of a confirmation screen when cancelling a reservation. It is important to give users feedback on the actions they take; while our reservation flow shows a confirmation page, our cancellation flow does not.
One suggestion from the evaluators was to add a legend to the main page. For color-blind users, the difference between the red occupied rooms and the green unoccupied rooms would be difficult to discern. By adding a legend and using different intensities, such as light green and dark red, users with red-green color blindness would quickly be able to determine which rooms are unoccupied. Another suggestion was to black out dates in the calendar dropdown, when choosing a date to reserve a room, if no rooms are available on that date. This would allow users to quickly identify the days on which they could make a reservation. We could take this a step further by also blacking out dates based on the specific start and end times the user enters.
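A minimal sketch of the date-blackout logic, assuming the prototype can look up, for each calendar date, which rooms still have an open slot; the data structures, room numbers, and dates below are hypothetical examples, not values from our system.

```python
from datetime import date

# Hypothetical availability data: for each calendar date, the set of rooms
# that still have at least one open reservation slot.
open_rooms_by_date: dict[date, set[str]] = {
    date(2024, 4, 11): {"468", "471"},   # example rooms still open
    date(2024, 4, 12): set(),            # fully booked: should be blacked out
}

def blacked_out_dates(candidate_dates: list[date]) -> list[date]:
    """Dates to disable in the calendar dropdown because no rooms are open.

    The same check could be narrowed to a requested start/end time window
    to support the further step suggested above.
    """
    return [d for d in candidate_dates if not open_rooms_by_date.get(d)]

candidates = [date(2024, 4, 11), date(2024, 4, 12)]
assert blacked_out_dates(candidates) == [date(2024, 4, 12)]
```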
Based on the results of our follow-up questionnaire, many users found the system easy to use and said that they would use it regularly. One additional thing we need to take into consideration is that the evaluators said the number of clicks it takes to accomplish a task is the most important attribute of the design. Therefore, we should look for ways to let users make reservations, cancel reservations, and view occupancy more quickly and with fewer steps.