Enhance agents' performance and motivation by applying gamification techniques to call centers that use Amazon Connect, fostering healthy competition and collaboration in the work environment.
Emily Boo
Waynelle Ize-Iyamu
Kevin Li
Jaime Markkern
Namit K. Srivastava
Rosa Thomas
Amazon Connect (AWS)
Sr. Software Development Manager
2 quarters (20 weeks)
For two quarters, we diligently worked on the gamification project in 2-week sprints. Each sprint began and ended with a meeting with our sponsor, Rosa, where we provided updates on our progress, showcased our work, and received valuable feedback. We used our meeting notes to plan the upcoming sprint, aligning with Rosa's expectations and our own review of the project's ethics. If any design work or feature adjustments were needed, we would map it out on Figma before moving on to code development. Throughout the sprint, we maintained constant communication, including weekly stand-ups, quick daily updates, and another weekly meeting dedicated to group work.
Sprint 1: Our team began with only a project description and no initial codebase or design files. We conducted research on motivation, gamification techniques, and Amazon Connect. Additionally, we developed detailed user stories to ensure diverse use cases were considered during development.
Sprints 2-3: After our first meeting with the project sponsor, we presented our understanding of the project and received approval to create low-fidelity mockups. During this sprint, we produced the first iterations of the leaderboard, agent dashboard, and supervisor dashboard. We also explored the Amazon Connect Instance to understand the data it collects and reviewed the Amazon Connect API.
Sprint 4: We began the coding phase with the leaderboard. We needed a framework for building a dynamic, engaging user interface; after weighing several options, we chose React because it was intuitive to learn and structured HTML elements well. Since all five team members were new to the framework, we used the week of spring break to learn it.
Sprint 5: This sprint focused on developing the frontend for the agent dashboard and supervisor dashboard, and fixing bugs in the leaderboard. At the midway point, we felt it was important to conduct an ethics review to identify and address any potential ethical issues with our application.
Sprints 6-8: Our focus during these weeks was on functionality. We developed features that allowed user input to be stored in the backend, retrieved, and displayed on the frontend. Key features completed included adding badges to the leaderboard, allowing supervisors to create and award trophies, calculating and displaying challenge progress bars, and enabling the deletion of trophies and challenges.
Sprint 9: In the final weeks, we concentrated on quality testing, preparing the project for handover to the next development team, and readying our presentation. We added detailed explanatory comments to our code and removed unnecessary lines, such as console log statements.
Since we started the project with no prior codebase or preconceived design ideas, we researched Amazon Connect, workplace motivation, and gamification in the office. After learning about call centers and what data Amazon Connect's API would make available to us, we created user personas based on our understanding of the software's potential users. We also performed a competitive analysis of similar software to explore what the interface of a gamified call center could look like.
In our research on workplace motivation, we found that workers can be motivated by several factors, including working toward a common goal, individual competition, recognition from supervisors, and the ability to help peers. We used this subset of motivational factors to inform our application design: challenges address working toward a common goal and helping peers; trophies address recognition and reward; and the leaderboard creates individual competition while highlighting experts whom others can turn to for help.
Figma: Low- and medium-fidelity designs
Google Drive: Creating and organizing documents
GitHub: Repository to hold code
Discord: Quick communication between group members, in addition to in-person regular and standup meetings
Amazon Connect: Generating and analyzing call center metrics
React: Frontend of our application
Node: Backend of our application
JS/JSX: Programming language used across the frontend and backend
Bootstrap/Material Design: Library used for common icons
Sprint Planning
At the start of our project we iterated using a waterfall method: all the low-fidelity designs, then the high-fidelity designs, then code, and so on for the whole project. This was extremely slow, since it took time to get approval to move to the next step, which left the team feeling stagnant. To address this challenge we worked with our professor to re-examine our sprint planning and adopt an agile approach. From then on, we broke sprint items down into decoupled tasks that could be worked on efficiently, and we focused on having new functionality and working code at the end of every sprint.
Ethical Implications
We were challenged to keep in mind, as we developed, that some call agents may feel pressured by the use of our application in their workplace. We handled this by creating an ethics document.
Our key decisions for handling these ethical implications are as follows. First, only the top five agents appear on the public central leaderboard, so people with lower rankings are not put on display. Second, the leaderboard ranking resets after 90 days, so lower-ranked agents don't feel hopeless and instead feel empowered to rise in the ranks; this also reduces the pressure on the top five to maintain their position for an overly extended period, since they know the leaderboard will reset. Finally, assigned breaks are excluded when retrieving metric data for the ranking, so employees can take their break times with peace of mind, knowing their metrics aren't being tracked and they aren't tempted to boost their performance by overworking.
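The break-exclusion and 90-day-reset rules described above can be sketched as two small filters. This is only an illustration: the data shapes (activity samples with millisecond timestamps, breaks with start/end times) and function names are our assumptions, not the project's actual schema.

```javascript
// Sketch of the ethics rules above; data shapes are illustrative assumptions.

// Drop any activity sample that falls inside an assigned break window,
// so time spent on break never feeds the leaderboard metrics.
function excludeBreaks(samples, breaks) {
  return samples.filter(s =>
    !breaks.some(b => s.timestamp >= b.start && s.timestamp < b.end)
  );
}

// The 90-day reset: only samples recorded since the last reset count
// toward the current ranking period.
function sinceLastReset(samples, lastResetMs) {
  return samples.filter(s => s.timestamp >= lastResetMs);
}
```

Applying both filters before any ranking is computed keeps the tracked window short and break time invisible to the leaderboard.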
Following extensive research, meticulous iterative design, coding, and numerous meetings, we have successfully developed a gamification application for call centers utilizing Amazon Connect. We carefully designed the interfaces of the gamified call center to reflect Amazon's brand identity. This application features four key functionalities and offers comprehensive dashboards designed specifically for both supervisors and agents.
Amazon Connect tracks various metrics, but for our proof of concept, we focused on three key metrics to stay within scope constraints. Agents were ranked based on their performance in each metric to recognize and reward top performers for their hard work and expertise.
Average Non-Talk Time: This metric reflects the quality of the agent's communication, demonstrating how effectively they engage with customers.
Average Handle Time: This metric highlights the agent's efficiency, showcasing the volume of work they manage to complete.
Average Active Time: This metric indicates the agent's level of engagement, illustrating their dedication and integrity in their work.
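The per-metric ranking described above can be sketched as a sort plus a top-five slice. Note this is a sketch, not the project's actual code: whether a lower or higher value is better depends on the metric, so we pass the direction in, and the agent record shape (name plus metric fields) is our own assumption.

```javascript
// Sketch: rank agents on one metric and keep the top five.
// lowerIsBetter flips the sort direction, since e.g. a lower average
// handle time should rank higher while more active time might rank higher.
function topFive(agents, metric, lowerIsBetter) {
  const sign = lowerIsBetter ? 1 : -1;
  return [...agents]                         // copy so the input isn't mutated
    .sort((a, b) => sign * (a[metric] - b[metric]))
    .slice(0, 5)
    .map((agent, i) => ({ rank: i + 1, ...agent }));
}
```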
Trophies are awarded by supervisors in two distinct ways. Firstly, they are given to affirm the progress and success an agent has been demonstrating, acknowledging how well they have been performing over time. This form of recognition serves as a motivational tool, encouraging agents to continue their hard work and dedication. Secondly, trophies are awarded through winning challenges, providing a competitive element that not only highlights individual achievements but also fosters a spirit of healthy competition and excellence among the agents.
Challenges are created by the supervisor to provide a common goal that agents can work towards as a group. This group challenge fosters camaraderie and a sense of purpose, encouraging agents to collaborate and support each other in achieving the shared objective.
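The challenge progress bars mentioned earlier reduce to a simple percentage of the group's combined contributions against the supervisor's goal. The shape below (a numeric goal plus per-agent contributions) is our assumption for illustration, not the project's data model.

```javascript
// Sketch: percent progress toward a team challenge goal.
function challengeProgress(challenge) {
  const total = challenge.contributions.reduce((sum, c) => sum + c.amount, 0);
  // Clamp at 100 so a completed challenge's bar never overflows.
  return Math.min(100, Math.round((total / challenge.goal) * 100));
}
```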
Streaks represent the consecutive days an agent has achieved a specific metric goal set by the supervisor. The current streak indicates the number of consecutive days the goal has been met up to the current date, while the longest streak represents the maximum duration the goal has been consistently achieved.
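The current/longest streak distinction above can be computed in one pass. The input shape, one boolean per day (oldest first) marking whether the supervisor-set goal was met, is our assumption for the sketch, not the project's data model.

```javascript
// Sketch of the streak definitions above, over daily goal-met flags.
function streaks(metDaily) {
  let longest = 0;
  let run = 0;
  for (const met of metDaily) {
    run = met ? run + 1 : 0;        // a missed day resets the running streak
    if (run > longest) longest = run;
  }
  // The run still alive on the most recent day is the current streak.
  return { current: run, longest };
}
```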
The leaderboard is intended to be displayed on a large central screen, showing the top five agents for each metric. This discourages toxicity toward those below the top five while showing struggling agents who may be able to help them improve.
The supervisor can view full details for everyone on the team, which helps them monitor how well the team is doing. They can also create and award trophies to agents, as well as create challenges to promote teamwork. The team leaderboard in the supervisor dashboard displays all of the team members, rather than restricting the view to the top five agents.
Agents can view only their own ranks, streaks, and trophies. They can also view the team-wide challenges the supervisor created, along with their details, and navigate to the leaderboard, which displays only the top five agents.
Understanding that this is a proof of concept, there are key areas that need further development to refine this gamified call center:
Develop a user interface and functionality to set clear challenge goals that take into account real data obtained from each agent.
Update the agent dashboard with trophies upon completion of challenges.
Add trophies to the agent dashboard when a supervisor awards an individual trophy.