Exploring the potential of AI-generated music in retail: a study with retail and wholesale sector representatives that revealed a preference for human-made music.
Figure 1: Observed and expected responses concerning attitudes toward AI-powered music. The dashed line represents the expected distribution if there were no preference for any particular response category.
Figure 2: Performance matrix for images (% correct answers).
Figure 3: Performance matrix for sounds (% correct answers).
Keywords
Sound Design, User Research, Data Science, Human-Centered AI.
Links
GitHub page.
Technologies
Youform (online survey data collection), Mentimeter (real-time data collection), Python (Jupyter Notebook for data cleaning, analysis, and visualization).
Background
This project was part of the research initiative 'Sound Environment in Retail: A Cross-Industry Study on Sound Design and Music Strategies'. The goal of this work was to explore how background music and environmental sounds in retail spaces influence employee-customer interactions, as well as satisfaction and well-being.
Aim
Together with representatives from the retail and wholesale sectors, this workshop explored the impact of sound environments on the retail customer experience. A key question addressed was: 'Is generative AI the solution for in-store music, or does it risk overengineering and alienating customers?' Through hands-on activities, participants discussed how music shapes atmosphere and consumer behavior, while considering the role of AI-generated content in this context.
Approach
Pre-Workshop Survey: An online survey was distributed to retail and wholesale professionals to assess their attitudes towards AI-powered music solutions. G-tests were used to analyze categorical data, with Benjamini-Hochberg FDR corrections applied to control for multiple comparisons.
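A minimal sketch of how this survey analysis could look in a Jupyter Notebook with scipy and statsmodels is shown below. The question labels and response counts are hypothetical; the null hypothesis is the uniform "no preference" distribution shown as the dashed line in Figure 1.

```python
# Sketch of the pre-workshop survey analysis (illustrative data; the actual
# response counts and question labels are hypothetical).
import numpy as np
from scipy.stats import power_divergence
from statsmodels.stats.multitest import multipletests

# Observed response counts per survey question (5-point scale, n=42 each).
observed = {
    "ai_vs_human_music": np.array([18, 12, 7, 3, 2]),
    "music_vs_silence":  np.array([2, 4, 6, 14, 16]),
    "personalization":   np.array([5, 8, 12, 10, 7]),
}

p_values = {}
for question, counts in observed.items():
    # G-test (log-likelihood ratio) against a uniform expected distribution,
    # i.e. the "no preference" null.
    expected = np.full_like(counts, counts.sum() / len(counts), dtype=float)
    g_stat, p = power_divergence(counts, f_exp=expected, lambda_="log-likelihood")
    p_values[question] = p

# Benjamini-Hochberg FDR correction across all questions.
reject, p_adj, _, _ = multipletests(list(p_values.values()), method="fdr_bh")
for q, p_raw, p_corr, sig in zip(p_values, p_values.values(), p_adj, reject):
    print(f"{q}: G-test p={p_raw:.4f}, BH-adjusted p={p_corr:.4f}, significant={sig}")
```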
On-Site Quiz: A quiz conducted during the workshop evaluated participants' ability to recognize sounds and images from specific stores and to distinguish human-created from AI-generated music. Cochran's Q test and McNemar's test were employed to assess differences in performance.
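The quiz analysis could be carried out along the following lines; this is a sketch with statsmodels, and the 0/1 correctness data and item groupings are hypothetical.

```python
# Sketch of the on-site quiz analysis (1 = correct answer; rows are the
# 13 participants, columns are item types; values are hypothetical).
import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar

images      = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1])
sounds      = np.array([0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0])
ai_vs_human = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1])

# Cochran's Q: do correctness rates differ across the three item types?
q_result = cochrans_q(np.column_stack([images, sounds, ai_vs_human]))
print(f"Cochran's Q = {q_result.statistic:.2f}, p = {q_result.pvalue:.4f}")

# McNemar's test as a pairwise follow-up (images vs. sounds), built from
# the 2x2 table of concordant/discordant answers.
table = np.array([
    [np.sum((images == 1) & (sounds == 1)), np.sum((images == 1) & (sounds == 0))],
    [np.sum((images == 0) & (sounds == 1)), np.sum((images == 0) & (sounds == 0))],
])
print(mcnemar(table, exact=True))
```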
Survey data was analyzed in real time during the workshop to provide immediate insights and inform discussions.
Findings
Pre-workshop survey results (n=42) provided insights into preferences for AI-powered in-store music solutions. Participants strongly preferred human-made music over AI-generated music. Music in stores was significantly preferred to silence. While there was some interest in personalization, participants also expressed concerns about data privacy.
The on-site quiz (n=13) revealed significant performance differences between image-based and sound-based questions. The performance matrices (Figures 2-3) illustrate this: the image-based matrix shows a clearer diagonal pattern, indicating higher accuracy.
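For reference, a performance matrix of this kind can be drawn as a simple heatmap. The sketch below uses matplotlib; the store labels and percentages are hypothetical placeholders, not the workshop data.

```python
# Sketch of a performance matrix heatmap (hypothetical labels and values).
import numpy as np
import matplotlib.pyplot as plt

stores = ["Store A", "Store B", "Store C"]
# Rows: actual store; columns: participants' answers; values: % of answers.
image_matrix = np.array([
    [85, 10,  5],
    [ 8, 80, 12],
    [ 7, 15, 78],
])

fig, ax = plt.subplots(figsize=(4, 4))
im = ax.imshow(image_matrix, cmap="Blues", vmin=0, vmax=100)
ax.set_xticks(range(len(stores)))
ax.set_xticklabels(stores)
ax.set_yticks(range(len(stores)))
ax.set_yticklabels(stores)
ax.set_xlabel("Participants' answer")
ax.set_ylabel("Actual store")
for i in range(len(stores)):
    for j in range(len(stores)):
        ax.text(j, i, f"{image_matrix[i, j]}%", ha="center", va="center")
fig.colorbar(im, ax=ax, label="% of answers")
plt.tight_layout()
plt.show()
```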
These findings suggest that AI-powered music solutions have potential to enhance in-store experiences, but also that participants still prefer human-made music. Any personalization must be balanced with privacy considerations to ensure consumer trust and satisfaction in such contexts.