ANALYZE
Feedback Labs Members' Community Learning Site
The Analyze step of the feedback loop is where raw feedback starts to become meaningful.
Here, you look across everything you’ve collected and identify patterns, themes, and trends that can guide decisions. In an AI- and tech-enabled loop, tools and large language models can help cluster open-ended responses into common themes, quantify how often each issue shows up, and visualize results over time.
But more analysis is not always better: your choices about methods and tools should be driven by the questions and decisions you surfaced in the buy-in and design steps, the type of data you have (quantitative, qualitative, or both), and your team's actual capacity.
Because AI can misread nuance or reinforce bias, especially across languages and dialects, it's essential to use models trained on diverse linguistic patterns and to have staff review and interpret outputs. Combine automated insights with human judgment and, where possible, with the perspectives of the people whose feedback you're analyzing.
Here is a common challenge that we have found organizations experience in the Analyze stage of their feedback loop:
Response Variation: a large volume of open-ended responses
Here are two ways that we have found organizations using technology tools in the Analyze stage of their feedback loop:
Tools
Quantify qualitative data: sort open-ended question responses into common themes and track the frequency and volume of feedback on each.
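To make the idea concrete, here is a minimal sketch of quantifying open-ended responses into themes and counting frequencies. In practice an LLM or a human coding team would assign the themes; the `THEME_KEYWORDS` lookup below is a simplified, hypothetical stand-in for that step.

```python
from collections import Counter

# Hypothetical theme keywords -- an LLM or human coders would normally
# assign themes; this keyword lookup is a simplified stand-in.
THEME_KEYWORDS = {
    "access": ["location", "transport", "hours"],
    "staff communication": ["staff", "helpful", "explain"],
    "wait times": ["wait", "queue", "slow"],
}

def tag_themes(response: str) -> list[str]:
    """Return every theme whose keywords appear in the response."""
    text = response.lower()
    return [theme for theme, words in THEME_KEYWORDS.items()
            if any(w in text for w in words)]

def theme_frequencies(responses: list[str]) -> Counter:
    """Count how often each theme shows up across all responses."""
    counts = Counter()
    for r in responses:
        counts.update(tag_themes(r))
    return counts

responses = [
    "The wait was far too long",
    "Staff were helpful but the queue was slow",
    "Hard to reach the location without transport",
]
print(theme_frequencies(responses))
# "wait times" appears in two responses; "access" and
# "staff communication" each appear in one.
```

The output is the kind of frequency table the description above refers to: each theme alongside how many responses raised it, which can then be charted or tracked over time.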
Some organizations have found that Large Language Models (LLMs) can help them during the Analyze stage of their listening and feedback strategy.
Large Language Models (LLMs)
Response Categorization: use AI text clustering to categorize open-ended survey responses by theme (e.g., “access,” “staff communication,” “wait times”)
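Text clustering groups similar responses together before anyone names the themes. Real AI clustering typically uses embeddings from an LLM; the sketch below is a tiny stand-in that clusters by word overlap (Jaccard similarity), and the `cluster` function and its `threshold` value are illustrative assumptions, not a specific tool's API.

```python
def tokens(text: str) -> set[str]:
    """Lowercase word set for a crude bag-of-words comparison."""
    return set(text.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Word-overlap similarity between two responses (0 to 1)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(responses: list[str], threshold: float = 0.2) -> list[list[str]]:
    """Greedily add each response to the first cluster whose seed
    response is similar enough; otherwise start a new cluster."""
    clusters: list[list[str]] = []
    for r in responses:
        for c in clusters:
            if jaccard(tokens(r), tokens(c[0])) >= threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

responses = [
    "the wait time was too long",
    "wait time at the clinic was long",
    "staff did not explain the process",
    "staff explain things poorly",
]
for group in cluster(responses):
    print(group)
# The two "wait time" responses land in one cluster and the two
# "staff" responses in another; a reviewer would then label them.
```

Embedding-based clustering follows the same shape: compare each response to existing groups, attach it where it is similar enough, and have staff name and sanity-check the resulting themes.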
Here is a resource to help you consider whether using an LLM in your strategy is the right move for your organization.
Staff, Organizational, and AI Bias: ensure the AI model is trained on diverse linguistic patterns, and have staff review outputs for nuance.
Learn more about ethical considerations and recommendations here.