Bringing the crowd into the post-editing process

Post date: Jul 13, 2012 3:57:38 PM

For most software companies, paying for fully human translation services for localization is no longer affordable. Most companies have therefore embraced a Machine Translation plus human post-editing approach, but post-editing remains the most expensive and time-consuming phase of the localization process.

We propose the development of a semi-automatic management platform that integrates crowd computing strategies with post-editing practices in order to reduce the cost of the post-editing phase of localization.

In this trustworthy crowdsourcing system, workers are ranked based on the quality of their deliverables: the higher the ranking, the more trustworthy the worker and the higher the rate per word the worker receives. Quality is achieved through an Action-Verification pattern: one post-editor performs a post-editing task (Action), and several "verifiers" provide feedback and vote for the best translation (Verification). The quality-control methods applied during the process vary depending on the ranking of the workers involved in the task. The Action-Verification pattern also guarantees that the localization is performed at the minimum possible cost. Finally, the system can be scaled up thanks to well-established parallelization techniques.
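To make the idea concrete, here is a minimal sketch of the Action-Verification pattern. All names (`Worker`, `action_verification`, the ranking increment, and the verifier-count policy) are illustrative assumptions, not the platform's actual design:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Worker:
    name: str
    ranking: float = 1.0  # higher ranking -> more trust, higher per-word rate


def verifiers_needed(editor: Worker) -> int:
    # Hypothetical policy: highly ranked editors need fewer verification votes.
    return 1 if editor.ranking >= 3.0 else 3


def action_verification(editor: Worker, post_edit: str, votes: dict) -> str:
    """One post-editor produces `post_edit` (Action); `votes` maps each
    verifier's name to the candidate translation they voted for
    (Verification). Returns the winning translation by majority vote."""
    tally = Counter(votes.values())
    winner, _ = tally.most_common(1)[0]
    if winner == post_edit:
        editor.ranking += 0.1  # reward the editor when the post-edit wins
    return winner


editor = Worker("alice", ranking=1.5)
winner = action_verification(
    editor,
    "el gato está sobre la mesa",       # editor's post-edit (Action)
    {
        "bob": "el gato está sobre la mesa",
        "carol": "el gato está sobre la mesa",
        "dave": "el gato es sobre la mesa",  # raw MT output, outvoted
    },
)
```

In this sketch, the majority vote settles the deliverable and the editor's ranking rises only when their post-edit wins, which is one plausible way to tie pay rates to delivered quality.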

We are currently conducting a pilot project in collaboration with UPC to evaluate the quality achieved through the Action-Verification pattern. We encourage you to participate!