UX Design Case Study by Cam Stokes
Project Breakdown:
The Challenge: Providing a basis for a more concrete grading scale and metric system for the Sentiment Analysis Metric (SAM).
My Role: UX Designer, UX Researcher, Customer Service Subject Matter Expert
Project Duration: 6 months
What Brought Me Here:
The Sentiment Analysis Metric, a.k.a. SAM, is an AI-based grading system used by Spectrum, designed to evaluate the tone and effectiveness of customer interactions. Agents must maintain a 75% SAM score for four consecutive months to qualify for advancement, making it one of the most influential measures of success in the call center environment.
While the metric is positioned as a tool to uphold quality and consistency, its AI-driven nature, lack of transparency, and inconsistent application have created a system that feels more punitive than supportive. Understanding the weight SAM carries for agents, supervisors, and overall customer satisfaction is essential to rethinking how it can better serve all stakeholders.
“Despite its significance, SAM is graded inconsistently and has limited visibility for agents.”
Seeing Both Sides:
From a business perspective, Spectrum’s priority is to protect and grow its bottom line. The SAM scoring system, as it currently operates, plays directly into that objective. By setting a performance threshold that many agents consistently fall short of, the company reduces the likelihood of frequent merit raises, promotions, and the training investments tied to advancing staff. This ensures labor costs remain tightly controlled, even as overall performance metrics appear to be managed by AI-driven oversight. While this approach may benefit Spectrum financially in the short term, it also creates friction for agents and supervisors—friction that ultimately impacts morale, trust, and customer experience. A reimagined SAM system presents an opportunity to balance business growth with agents' progress, aligning cost efficiency with a more motivated, high-performing workforce.
Seeking to Understand Better:
The current SAM system is positioned as a performance benchmark but functions more as a source of confusion and frustration for both agents and supervisors. Because the AI-driven grading lacks transparency, agents are often left uncertain about which actions or behaviors contributed to their score. Supervisors, tasked with coaching, provide varying and sometimes contradictory feedback, further weakening consistency across teams. Most agents fall below the required 75% threshold, with scores typically hovering in the 60–70% range and sometimes lower. This not only limits opportunities for career growth but also erodes trust in the evaluation process itself. Without a unified scale or real-time visibility, SAM fails to guide agents toward actionable improvement—ultimately creating inefficiencies that ripple outward to customer interactions and overall satisfaction.
Diving Deeper:
The research behind this case study was designed to uncover the lived experiences of those impacted by SAM. My goals were fourfold: first, to understand how agents currently interpret and navigate their scores; second, to identify the specific barriers preventing consistent success; third, to explore opportunities for real-time visibility and actionable feedback; and finally, to imagine what a fair, unified grading system could look like. These goals were not just about identifying flaws but about framing opportunities to create a system that supports growth, clarity, and alignment between agents and supervisors.
The most eye-opening statistic from my research is that approximately 50% of the agents on my team are not meeting their SAM goals.
Effective Problem-Solving:
The proposed direction is a SAM performance dashboard that integrates real-time scoring visibility, actionable coaching tips, and a standardized grading framework. Instead of waiting for inconsistent feedback after calls, agents would see performance cues during and immediately after interactions, helping them adjust behaviors in the moment. Supervisors would have access to the same standardized data, ensuring alignment in how coaching is delivered. By introducing transparency into the AI-driven scoring system and presenting it in a way that is understandable and actionable, the solution empowers agents while giving leadership a reliable, fairer tool for performance evaluation.
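One way to picture the standardized grading framework is a single rubric that maps a raw SAM score to the same coaching band for every agent and supervisor. The band names, cutoffs, and tips below are illustrative assumptions for this sketch, not Spectrum's actual rubric:

```python
# A minimal sketch of a unified grading scale: every supervisor maps the
# same raw SAM score (0-100) to the same coaching band. Band names,
# cutoffs, and tips are illustrative assumptions.

SAM_BANDS = [
    (90, "Exceeds", "Reinforce current habits; candidate for mentoring."),
    (75, "Meets", "On track for advancement; maintain consistency."),
    (60, "Approaching", "Target one behavior per week in coaching."),
    (0,  "Needs Support", "Schedule a side-by-side review this week."),
]

def grade(score: float) -> tuple[str, str]:
    """Return the (band, coaching tip) shown to both agent and supervisor."""
    for cutoff, band, tip in SAM_BANDS:
        if score >= cutoff:
            return band, tip
    raise ValueError("score must be non-negative")

band, tip = grade(68)  # a typical score from the 60-70% range cited above
```

Because both roles read from the same table, a score can no longer produce contradictory coaching depending on which supervisor reviews it.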
The conceptual user flow and supporting call flow diagram outline the coordinated process followed by both the agent and the system during a customer interaction. Calls begin with a system-triggered inbound connection and proceed through identity verification, issue discovery, problem-solving, and resolution. In parallel, SAM runs in the background, continuously monitoring customer tone and agent response. Real-time sentiment insights are surfaced through a dashboard visible only to the agent, offering contextual prompts that reinforce empathetic communication and professional delivery.
Key checkpoints—such as during verification, issue exploration, and resolution—allow sentiment analysis to track shifts in customer experience across the interaction. At the conclusion of the call, final sentiment data is compiled into structured coaching feedback for supervisors. This integrated flow demonstrates how introducing AI-driven sentiment analysis can elevate live support quality, enhance agent performance, and strengthen overall customer satisfaction.
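The checkpoint flow described above could be sketched roughly as follows. The stage names, thresholds, and prompt wording are hypothetical stand-ins for this case study, not the production system:

```python
# Sketch of the live call flow: SAM records a sentiment reading at each
# stage checkpoint, surfaces a live prompt to the agent, and compiles
# end-of-call coaching data for supervisors. All names are illustrative.
from dataclasses import dataclass, field
from statistics import mean

STAGES = ["verification", "issue_discovery", "problem_solving", "resolution"]

@dataclass
class CallSession:
    """Tracks per-stage sentiment for one customer interaction."""
    checkpoints: dict = field(default_factory=dict)

    def record(self, stage: str, sentiment: float) -> str:
        """Store a 0-100 sentiment reading; return a live prompt for the agent."""
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.checkpoints[stage] = sentiment
        if sentiment < 60:
            return "Tone dipping: acknowledge frustration and restate next steps."
        return "On track: keep the current pace and tone."

    def coaching_summary(self) -> dict:
        """Compile end-of-call data into structured feedback for supervisors."""
        overall = mean(self.checkpoints.values())
        weakest = min(self.checkpoints, key=self.checkpoints.get)
        return {
            "overall_score": round(overall, 1),
            "meets_threshold": overall >= 75,  # the 75% SAM bar
            "focus_stage": weakest,            # where coaching should start
        }

call = CallSession()
call.record("verification", 72)
call.record("issue_discovery", 55)  # would trigger a live coaching prompt
call.record("problem_solving", 80)
call.record("resolution", 88)
summary = call.coaching_summary()
```

The key design point is that the same checkpoint data feeds both audiences: the agent sees it as in-the-moment prompts, while the supervisor sees it as a structured summary pointing at the weakest stage of the call.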
Live call system flow
Measuring the Impact:
Implementing this solution has the potential to improve outcomes across three dimensions. For agents, clearer feedback and real-time visibility lower stress, boost confidence, and make SAM a tool for growth instead of a barrier. For supervisors, a unified grading scale and coaching framework reduce ambiguity, creating consistency across teams and improving coaching effectiveness. For Spectrum as an organization, the improvements translate into higher customer satisfaction, stronger agent performance, and reduced turnover—ultimately benefiting both employee morale and the company's bottom line.
Final Thoughts:
Moving forward, the next phase is to validate these concepts with real users and stakeholders. Prototype testing with agents and supervisors will measure the usability and impact of real-time feedback features. Workshops with the strategy and IT teams will refine the grading framework to ensure it is feasible within existing systems. Iteration will be central to this process—using feedback to adapt the design until it balances business efficiency with agent empowerment. With continued refinement, the solution can evolve from concept into a scalable improvement for Spectrum’s call center environment.
Until Next Time:
This case study highlights how design can reframe a flawed system into a catalyst for growth. By applying UX methodologies—empathizing with stakeholders, defining clear problems, and reimagining solutions—SAM transforms from a source of frustration into a tool for empowerment. The opportunity is not only to make the AI grading system more transparent and consistent but also to strengthen the human side of Spectrum’s operations. In doing so, the business’s bottom line will be aligned with the personal growth of its agents, proving that intentional design can benefit both the company and the people who power it.