Customer Feedback Analysis

GOAL

Understanding customer feedback is fundamental to providing good customer service. However, international companies face two major obstacles to automatically detecting the meanings of customer feedback in a global, multilingual environment. First, there is no widely acknowledged categorization (set of classes) of meanings for customer feedback. Second, it is questionable whether any single meaning categorization applies to customer feedback in multiple languages.

In a joint ADAPT-Microsoft research project, we extracted representative real-world samples of customer feedback from Microsoft Office customers in four languages (English, French, Spanish, and Japanese) and arrived at a five-plus-one-class categorization (comment, request, bug, complaint, and meaningless, plus undetermined) for meaning classification that can be used across languages in customer feedback analysis. In this shared task, participants will have access to multilingual corpora annotated with the proposed meaning categorization scheme and will develop their own systems to determine which class(es) should be assigned to customer feedback sentences in the four languages.

CORPUS

For the shared task on customer feedback analysis, corpora annotated with the proposed categorization of meanings will be provided in four languages: English, French, Japanese, and Spanish. Additional unannotated customer feedback sentences are also prepared for each language, which participants may choose to use in system development, e.g., for semi-supervised methods, as sketched below.
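
As one illustration of how the unannotated sentences could be exploited, here is a minimal self-training sketch in Python. It assumes a scikit-learn-style multi-label classifier exposing fit and predict_proba; the function name, confidence threshold, and number of rounds are our own illustrative choices, not part of the task.

    import numpy as np

    def self_train(model, X_labeled, Y_labeled, X_unlabeled,
                   threshold=0.9, rounds=3):
        """Iteratively add confidently pseudo-labeled sentences to the
        training set. `model` is any classifier with fit/predict_proba
        over a binary tag-indicator matrix (an assumption, not a task
        requirement); `threshold` and `rounds` are arbitrary choices."""
        X, Y = list(X_labeled), np.asarray(Y_labeled)
        pool = list(X_unlabeled)
        for _ in range(rounds):
            model.fit(X, Y)
            if not pool:
                break
            # Per-tag probabilities, shape (n_sentences, n_tags).
            proba = np.asarray(model.predict_proba(pool))
            # Keep sentences whose most confident tag clears the threshold.
            confident = proba.max(axis=1) >= threshold
            if not confident.any():
                break
            pseudo = (proba[confident] >= 0.5).astype(int)
            X += [s for s, keep in zip(pool, confident) if keep]
            Y = np.vstack([Y, pseudo])
            pool = [s for s, keep in zip(pool, confident) if not keep]
        return model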

Samples of the annotated Japanese and Spanish customer feedback sentences are given below. Each sentence is annotated with tags drawn from the six-tag set (the five classes comment, request, bug, complaint, and meaningless, plus undetermined). Each sentence has at least one tag assigned to it and may be annotated with multiple tags. Systems will be trained to predict the tags (output) of unseen customer feedback sentences (input).
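
To make the input/output format concrete, here is a minimal multi-label baseline sketch in Python. The tag set comes from the description above; the toy sentences, the use of scikit-learn, and the character n-gram features are our own assumptions for illustration, not a prescribed baseline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    # The six tags from the annotation scheme described above.
    TAGS = ["comment", "request", "bug", "complaint", "meaningless",
            "undetermined"]

    # Toy sentences standing in for the annotated corpus (hypothetical,
    # not actual task data); a real training set covers all six tags.
    train_sentences = [
        "The new layout looks great.",
        "Please add an option to export to PDF.",
        "The app crashes when I open a large file.",
        "asdf qwerty",
    ]
    train_tags = [["comment"], ["request"], ["bug", "complaint"],
                  ["meaningless"]]

    # Binarize the tag lists: each sentence carries at least one tag
    # and possibly several, so this is multi-label classification.
    mlb = MultiLabelBinarizer(classes=TAGS)
    Y = mlb.fit_transform(train_tags)

    # One-vs-rest logistic regression over character n-grams, which
    # applies to all four languages (including Japanese) without
    # language-specific tokenization.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4)),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    model.fit(train_sentences, Y)

    # Predict tags for an unseen sentence.
    print(mlb.inverse_transform(model.predict(["It keeps freezing."])))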

BACKGROUND

Customer feedback analysis has become an industry of its own: there are dozens of notable internet companies (which we refer to as app companies) that perform customer feedback analysis for other, often much larger, companies. The business model of these app companies is to acquire customer feedback data from their clients, analyze it using internal tools, and deliver reports to the clients periodically (Freshdesk, Nebula).

However, most app companies not only treat the contents of these reports as confidential material, which is understandable, but also regard things such as the categorization of customer feedback as business secrets. To the best of our knowledge, there are three openly available categorizations from these app companies. The first is the commonly used categorization found on many websites, i.e., the five-class Excellent-Good-Average-Fair-Poor (Yin et al., SurveyMonkey). The second is a combined categorization of sentiment and responsiveness, i.e., another five-class scheme, Positive-Neutral-Negative-Answered-Unanswered, used by the app company Freshdesk. The third is used by another app company, Sift, and is a seven-class categorization: Refund-Complaint-Pricing-Tech Support-Store Locator-Feedback-Warranty Info (Sift). There are certainly many other categorizations for customer feedback analysis; however, most of them are not publicly available (Clarabridge, Inmoment, Equiniti).

To provide an open resource for international customer feedback analysis, we prepared a corpus annotated with our proposed five-plus-one-class categorization of meanings. We hope this will serve as a foundation for future work on customer feedback analysis.

RELATED WORK

Bentley, M., & Batra, S. (2016, December). Giving voice to Office customers: Best practices in how Office handles verbatim text feedback. In 2016 IEEE International Conference on Big Data (Big Data) (pp. 3826-3832). IEEE.

Potharaju, R., Jain, N., & Nita-Rotaru, C. (2013, April). Juggling the jigsaw: Towards automated problem inference from network trouble tickets. In NSDI (pp. 127-141).

Burns, M. (2016, February). Kampyle Introduces the NebulaCX Experience Optimizer. Retrieved from http://www.kampyle.com/kampyle-introduces-the-nebulacx-experience-optimizer/

Equiniti. (2017, April). Complaints Management. Retrieved from https://www.equiniticharter.com/services/complaints-management/#.WOH5X2_yt0w

Freshdesk Inc. (2017, February). Creating and sending the Satisfaction Survey. Retrieved from https://support.freshdesk.com/support/solutions/articles/37886-creating-and-sending-the-satisfaction-survey

Inmoment. (2017, April). Software to Improve and Optimize the Customer Experience. Retrieved from http://www.inmoment.com/products/

Keatext Inc. (2016, September). Text Analytics Made Easy. Retrieved from http://www.keatext.ai/

SurveyMonkey Inc. (2017, April). Customer Service and Satisfaction Survey. Retrieved from https://www.surveymonkey.com/r/BHM_Survey

UseResponse. (2017, April). Customer Service & Customer Support are best when automated. Retrieved from https://www.useresponse.com/

Yin, D., Hu, Y., Tang, J., Daly, T., Zhou, M., Ouyang, H., Chen, J., Kang, C., Deng, H., Nobata, C., Langlois, J.-M., & Chang, Y. (2016). Ranking relevance in Yahoo search. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 323-332). ACM.

ORGANIZERS

Chao-Hong Liu <chaohong.liu@adaptcentre.ie>

Declan Groves <degroves@microsoft.com>

Alberto Poncelas <alberto.poncelas3@mail.dcu.ie>

Akira Hayakawa <akira.hayakawa@adaptcentre.ie>

Yasufumi Moriya <yasufumi.moriya@adaptcentre.ie>