Usability Analysis of Enterprise's Rent-A-Car app
Interaction Design Method - School of Informatics and Computing
October 2022 - December 2022
Mandar Bhoyar, Saransh Gupta, Tinkle Mittal, & Sakshi Shirbhate
Goals
To ascertain people's pain points with Enterprise's Rent-A-Car mobile application
To test the usability and accessibility of the application for first-time and international users
Tools and Skills used:
Qualtrics
Microsoft Excel, Word, and PowerPoint
Google Docs
Google Scholar, ACM Library
FigJam
Literature Review
User Interviews
Think Aloud
System Usability Scale
Correlation
Affinity Mapping
Six participants tested the usability of Enterprise Rent-A-Car's mobile app and gave it an average System Usability Scale (SUS) score of 79.6 (SD = 6.41), placing it in the B+/A- grade range.
We categorized the design recommendations into immediate changes and future scope based on qualitative and quantitative data analysis.
Immediate changes include updating font size and color scheme, improving the location and date-time selection interface, and displaying key legal policies on the confirmation page.
Future scope recommendations include the ability to view prices without selecting a car type, adding a definition of car types, providing a 360° view of cars on the confirmation page, and offering the option of having the car delivered to the user's location.
We suggest implementing these changes and continuing to work with users to ensure a user-centered application.
Interviews:
We conducted semi-structured interviews to delve into users' feelings, experiences, and expectations regarding the Enterprise Rent-A-Car app. For those with previous car rental experiences, we explored their history and past interactions with similar platforms. For those without prior experience, we delved into their preferences and reasons for not renting.
Rationale: Interviews provided a rich source of qualitative data, allowing us to understand users' attitudes and gather feedback about their experiences. This method enabled us to surface the emotional aspects of user interactions and reveal pain points that might not be apparent through quantitative data alone.
Think Aloud:
Think Aloud sessions were used to observe users' thought processes and actions as they attempted to complete specific tasks within the app, such as renting a Standard SUV for a specified period. This method aimed to identify any challenges, issues, or difficulties users encountered while interacting with the app.
Rationale: Think Aloud sessions offered valuable insights into users' real-time interactions with the app, allowing us to capture their frustrations, satisfaction, errors, and general feedback. This method provided a window into the users' cognitive processes and highlighted usability issues.
System Usability Scale (SUS):
The SUS is a standardized questionnaire used to measure the overall usability of a system or application. It consists of ten items, each rated on a five-point scale, and is widely accepted in both academic and industry research.
Rationale: The SUS provided a quantitative measure of the app's usability, allowing for direct comparison with other products and the identification of areas for improvement. It offered a standardized usability score that complemented the qualitative insights gathered through interviews and Think Aloud sessions.
In summary, our methodology selection was driven by the need to gain a comprehensive understanding of the Enterprise Rent-A-Car mobile app's usability and user experiences. By combining qualitative and quantitative methods, we aimed to uncover pain points, usability issues, and opportunities for improvement. The combination of interviews, Think Aloud sessions, the SUS, and data analysis provided a well-rounded perspective on the app's strengths and weaknesses, informing design recommendations for immediate changes and future scope enhancements.
Enterprise Rent-A-Car is a leader in transportation services with over 8,000 locations worldwide. They specialize in car and truck rentals, car sharing, and car sales. People use Enterprise to rent a car when traveling or when their vehicle is being repaired. They offer a variety of vehicles at varying levels of luxury and cost structure.
The Enterprise Rent-A-Car app offers a wide range of features, such as renting and managing a rental car and purchasing a vehicle. The application helps users rent an available car at their preferred location at a set pickup and drop-off time. The application's primary user is a person above 18 years of age with a valid driving license and driver’s insurance. If the user does not have valid driver’s insurance, they can bundle it with the car reservation through Enterprise’s insurance partners. We evaluated their mobile app (iOS and Android), given that most users will use their mobile app to search for and rent a car.
Overall, to understand the extent to which the Enterprise app delivers a consistent experience for young car renters, we conducted a battery of tests – driven by our research questions – such as interviews and Think Aloud sessions. Through these tests, we aimed to explore the expectations of sub-30-year-old renters, discover the limitations and failures of the app, and identify obstacles users face with the app. Specifically, we captured the subjects’ comments, concerns, frustrations, satisfaction, errors, and general feedback.
To drive our insight discovery and data collection process, we derived the following research questions:
RQ1: What are users’ general feelings about the Enterprise Rent-A-Car app?
RQ2: How easily and successfully do users start the car renting process?
RQ3: How well does the application support the paths and goals of the users? That is, how closely does the organization and flow of the application match user expectations?
RQ4: Do users encounter obstacles while completing the process of renting a car?
We believed that testing the usability of the app with international students at Indiana University – Purdue University Indianapolis (IUPUI) would yield the best results for our core testing principles, given that we wanted to test how user-friendly the Enterprise Rent-A-Car app is. Their unfamiliarity with American car rental practices makes them ideal candidates: they can compare the process of renting a car in America against their personal experience in their home countries, and they are less likely to have prior experience with the app.
Thus, we collected data on six IUPUI students (2 female; mean age = 26.4 years, SD = 2.7 years), split evenly between low and high car-renting experience (cutoff = three prior rentals). As hoped, only 33% had used the Enterprise Rent-A-Car app before.
Using the aforementioned research questions as a basis to develop our research design, we decided to use the following methodologies:
We developed a semi-structured interview protocol to query the users on their car renting history. Specifically, if users had rented a car in the past, then we asked questions such as, “When did you last rent?”, “Which platform did you use to rent a car?” and “What was your experience like?” Similarly, if users did not have a history of renting a car, then we asked them about what their preference for renting a car would be and what their reasons were for not renting.
Think Aloud is the gold standard test to understand and observe what users think and do while completing a task. Through this, we aimed to identify any challenges and issues the users encountered while renting a car, their thoughts about the app, and general grievances.
We tasked the users with starting the process of renting a Standard SUV for a four-day period beginning one week from the test date in their zip code area. “Standard SUV” was a criterion defined by the app. To generate meaningful and quantifiable data, we identified nine important, data-driving subtasks (Table 1). These subtasks helped us identify the errors and issues users encountered, define success/failure criteria, and measure the time it took users to complete each task.
Table 1. Subtasks for Think Aloud
The System Usability Scale (SUS) is the gold-standard tool for measuring the usability of a system (Brooke, 1986). It is a 10-item questionnaire that asks users to respond on a five-point scale from “Strongly Agree” to “Strongly Disagree”. Given its widespread use in academic and industry research, a SUS score is directly comparable across products, including the car rental apps of Enterprise’s competitors. The SUS is known to be poor at diagnosing the “why” behind a given score, which is why we used it to back our qualitative findings and to generate research questions for future studies. Overall, 68/100 is considered the average score, with scores of 90 and above earning an “A” letter grade.
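For reference, a SUS score is computed by rescaling the ten item responses and summing them: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to yield a 0-100 score. The Python sketch below illustrates that arithmetic; the example responses are hypothetical and are not taken from our participants.

# Minimal sketch of SUS scoring; the responses below are hypothetical.
def sus_score(responses):
    """Compute a 0-100 SUS score from ten 1-5 Likert responses."""
    assert len(responses) == 10
    total = 0
    for item, rating in enumerate(responses, start=1):
        if item % 2 == 1:        # odd items are positively worded
            total += rating - 1
        else:                    # even items are negatively worded
            total += 5 - rating
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0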
The qualitative data generated to answer RQ1, RQ3, and RQ4 through user interviews and the Think Aloud procedure was analyzed via thematic analysis and affinity diagramming. These methods allowed us to generate the key patterns of thoughts, expectations, and pain points our users had regarding the Enterprise Rent-A-Car app.
The quantitative data generated via task success/failure rates were assessed by comparing them against predefined success and failure thresholds, as identified in Table 1; these thresholds were also our key performance indicators. We also collected time on task to understand how long users took to complete each task and to discover whether there was a relationship between time on task and success or failure. Overall, these data helped answer RQ2, RQ3, and RQ4.
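As an illustration of how these task-level metrics can be summarized, the sketch below computes each subtask's success rate and mean time on task and flags subtasks that fall below a success threshold. The subtask labels, the threshold, and the data are hypothetical placeholders, not our actual observations.

# Minimal sketch of summarizing Think Aloud task metrics; all data are hypothetical.
from statistics import mean

# Each record: (subtask, success as 1/0, time on task in seconds)
observations = [
    ("ST2", 0, 95), ("ST2", 1, 60), ("ST2", 1, 72),
    ("ST3", 0, 110), ("ST3", 1, 64), ("ST3", 1, 80),
]

SUCCESS_THRESHOLD = 0.8  # hypothetical KPI: at least 80% of users succeed

for subtask in sorted({o[0] for o in observations}):
    rows = [o for o in observations if o[0] == subtask]
    rate = mean(r[1] for r in rows)
    avg_time = mean(r[2] for r in rows)
    flag = "OK" if rate >= SUCCESS_THRESHOLD else "below threshold"
    print(f"{subtask}: success = {rate:.0%}, mean time = {avg_time:.0f}s ({flag})")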
Thematic analysis and affinity diagramming (Figure 1) helped us discover users' key thoughts about the Enterprise Rent-A-Car app. We found that the data fit into five categories of grievances with the app: App UI, In-App Issues, Information Shared [with the users], Missing Features, and Enterprise’s Business Model Issues.
Figure 1. Affinity Diagram - Themes in the Qualitative Data
Generally, users were satisfied with the way the app functioned, and the application worked as they expected. However, they did have some grievances. Specifically, the green-on-white color scheme paired with a very small font made it difficult for our users to navigate the app successfully (Figure 2). They also complained about the cramped UI, which, coupled with poor information highlighting and selection cues, meant that key information could be missed. This was particularly visible when a participant said, “Oh shit. I need to click on start reservation first. Okay, I thought it was since there were like hours written there…”
This insight provided evidence of a disorganized information architecture: the user expected the date and time to appear on the same page as the location option, because their mental model had been shaped by similar applications. The flow of information was not intuitive to the users. Furthermore, all six participants were frustrated by the number of steps needed to reach the final confirmation page; they felt that pick-up location, date selection, and car model selection (even after applying a filter) took far too many steps and could have been condensed. This suggests that the presentation of information ran against users' mental models.
Figure 2. Screenshot of the Enterprise App
We were unable to conduct in-depth quantitative data analysis because the small sample size left too little statistical power to detect meaningful causal relationships. However, we did find some interesting trends. Overall, the Enterprise Rent-A-Car app scored well on the SUS, with an average of 79.6 (SD = 6.41), earning a B+/A- grade (highlighted in green in Table 2). This suggests that our users found the app to have good usability. Similarly, users rated the app 6.58 out of 7 when asked how easy it was to navigate (7 = “very easy”).
All users successfully completed the overall task, with 33% needing assistance (more than three prompts). However, a significant number of users failed subtasks ST2, ST3, ST6, and ST9, which involved selecting a pickup location, selecting the rental start date and time, and adding the car type filter. These failure rates, together with the SUS responses, support our finding that the task flow and information architecture need to be improved. It is not a priority change, however, since most participants who had used the application before were able to learn these steps quickly. These failures corroborate the qualitative findings, in which users found the app navigation confusing at times.
Table 2. Quantitative Data: Think Aloud
Table 3. Correlation Matrix
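As a rough illustration of how a matrix like Table 3 can be built from per-participant measures, the sketch below computes pairwise Pearson correlations with pandas. The column names and values are hypothetical and do not reproduce our actual data.

# Sketch of building a correlation matrix like Table 3; values are hypothetical.
import pandas as pd

participants = pd.DataFrame({
    "sus_score":       [72.5, 77.5, 80.0, 82.5, 85.0, 80.0],
    "total_time_s":    [540, 480, 430, 400, 380, 410],
    "failed_subtasks": [3, 2, 2, 1, 1, 2],
})

# Pairwise Pearson correlations between the usability measures
print(participants.corr().round(2))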
The most crucial things a user experience team should consider are how an application looks and how it behaves. Unfortunately, our data suggest both aspects of the app need major rework. Through our analysis of the qualitative and quantitative data, we categorized the design recommendations into two sets: immediate changes that Enterprise should consider and future scope.
The team at Enterprise must update the font size and color scheme to meet modern standards. The current design does not take accessibility needs into account.
Most of our users had issues with the location and date-time selection interface, suggesting that it is in dire need of a refresh. This could be done either by presenting these options on a single page, with a confirmation button appearing only once all the data points are selected, or by simply providing an indicator of how many steps remain before the final goal.
Two of our users asked for the confirmation page to display key legal policies, such as late fees. This would reduce the number of steps needed to find this information and give users peace of mind by making clear exactly what they must do to stay within the terms of their rental agreement.
Our participants highlighted that being able to see prices without first selecting a car type would reduce the number of steps needed to finalize their car options. Users may not know what car type they need; they may be open to different options and limited only by their budget. Hence, it is vital to incorporate sort and filter features based on price. By extension, the ability to compare cars at different price points should also be incorporated.
On the subject of car type, Enterprise needs to add definitions of what it means by the different types of cars. The app lets the user filter car options but does not explain the categories until the second-to-last page, where it shows an example of what a car in that category looks like. Having this information early in the selection process would help users make a quick, decisive choice about which car they need for their travel.
Furthermore, a 360° view of each car's interior and exterior on the final confirmation page would be helpful, as it would give the user a sense of the car's interior dimensions, storage and luggage capacity, and available features. For example, a user may really want Android Auto or Apple CarPlay support but will not know whether the rented car offers it.
Lastly, multiple users requested the option to have the rented car dropped off at, and picked up from, a location of their choice for a fee. This is mainly a convenience feature, since people renting a car may not have an accessible mode of transport to get to the rental location.
We tested the usability of the Enterprise Rent-A-Car mobile app with six participants. Even though the participants' ratings placed the application in the B+/A- grade range, with an average SUS score of 79.6, they mentioned a few changes that would make the Enterprise application more user-friendly.
These changes revolved mainly around task flow, the grouping of information, and the ability to sort options by price. The quantitative data corroborated these findings. Thus, we made design recommendations for two phases: immediate changes and future scope. Having a centralized place to find information is critical to many, if not all, of the participants. Implementing these recommendations and continuing to work with users will ensure a continued user-centered application.