This project explores the ability of generative AI to assist with two key aspects of survey development:
1) the writing of robust questions in a primary language, and
2) preparing survey questions for translators.
The project uses a zero-shot, experimental prompting approach. Among working and published studies on generative AI, most address applications in fields such as education, law, and computer programming rather than research applications in the social sciences. Some work examines translation, finding that generative AI can outperform other free online tools, but no work has addressed preparing survey questions for multilingual and multicultural settings. Informed by social science surveys, we contribute exploratory and empirical findings on the systematic use of generative AI to evaluate question wording and to prepare survey questions for translation.
The first paper from the project is currently under submission; a link to an earlier version appears below under Working Papers. We aim to complete the second and final paper of the project by the end of Summer 2025.
Exploring the Potential Role of Generative AI in the TRAPD Procedure for Survey Translation
https://arxiv.org/abs/2411.14472
Abstract: This paper explores and assesses in what ways generative AI can assist in translating survey instruments. Writing effective survey questions is a challenging and complex task, made even more difficult for surveys that will be translated and deployed in multiple linguistic and cultural settings. Translation errors can be detrimental, with known errors rendering data unusable for its intended purpose and undetected errors leading to incorrect conclusions. A growing number of institutions face this problem as surveys deployed by private and academic organizations globalize, and the success of their current efforts depends heavily on researchers' and translators' expertise and the amount of time each party has to contribute to the task. Thus, multilinguistic and multicultural surveys produced by teams with limited expertise, budgets, or time are at significant risk for translation-based errors in their data. We implement a zero-shot prompt experiment using ChatGPT to explore generative AI's ability to identify features of questions that might be difficult to translate to a linguistic audience other than the source language. We find that ChatGPT can provide meaningful feedback on translation issues, including common source survey language, inconsistent conceptualization, sensitivity and formality issues, and nonexistent concepts. In addition, we provide detailed information on the practicality of the approach, including accessing the necessary software, associated costs, and computational run times. Lastly, based on our findings, we propose avenues for future research that integrate AI into survey translation practices.
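For readers who want to try a similar workflow, the sketch below shows how a zero-shot translatability review might be issued through the OpenAI Python client. The model name, prompt wording, issue categories, and example question are illustrative assumptions for demonstration, not the exact materials or settings used in the study.

```python
# Minimal sketch of a zero-shot translatability review with the OpenAI Python
# client (openai>=1.0). Model name, prompt text, and example question are
# placeholders, not the exact materials used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = (
    "You are assisting with survey translation preparation. Review the "
    "following source-language survey question and flag features that may be "
    "difficult to translate for a {target} audience, such as idiomatic survey "
    "language, inconsistent conceptualization, sensitivity or formality "
    "issues, and concepts that may not exist in the target culture. "
    "Briefly explain each issue.\n\nQuestion: {question}"
)

def review_question(question: str, target: str = "Korean") -> str:
    """Request zero-shot feedback on translation risks for one survey question."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute the model used in your own study
        messages=[
            {"role": "user",
             "content": REVIEW_PROMPT.format(target=target, question=question)}
        ],
        temperature=0,  # keep the review as reproducible as the API allows
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_question(
        "Generally speaking, would you say you are better off than you were a year ago?"
    ))
```

In practice, a researcher would loop this call over the full questionnaire and log the responses alongside run times and token costs, which is the kind of practical accounting the paper reports.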
November 2024
July 2024, Seoul, South Korea
May 2024, Gothenburg, Sweden