Understanding and Implementing Effective Translation Quality Evaluation Techniques
Stephen Doherty & Federico Gaspari
Thursday, 03 October, 2013
Stephen Doherty and Federico Gaspari from the Centre for Next Generation Localisation at Dublin City University present evaluation techniques for translation quality. Despite the growing variety of methods and tools available for translation quality assessment, human evaluation still tends to rely on the basics of accuracy and fluency, especially for machine translation output. Given limited resources and tighter budgets, however, full and thorough human evaluations are not always possible. Automatic evaluation methods can broaden quality assessment workflows and increase productivity, especially when machine translation systems are deployed. This presentation gives a critical overview of commonly used human and automatic translation evaluation metrics; offers advice on their practical implementation in workflows; discusses how advances in translation quality assessment methods and tools, including QTLaunchPad’s Multidimensional Quality Metrics (MQM) framework, can be implemented effectively; and provides accessible training materials for translation and localization professionals to consult afterwards.
Stephen Doherty, BA, HDip, PhD, MBPsS, is a post-doctoral researcher at the Centre for Next Generation Localisation at Dublin City University. He conducts research on language and cognition, human-computer interaction, machine translation, and translation technologies. He is currently working on QTLaunchPad, a collaborative European research initiative dedicated to overcoming barriers in machine translation and language technologies.
Federico Gaspari has a background in translation studies and holds a PhD in machine translation from the University of Manchester. He has more than 10 years’ experience as a university lecturer in specialized translation and translation technology in Italy and the UK. He is a postdoctoral researcher at the Centre for Next Generation Localisation at Dublin City University, specializing in translation quality evaluation as part of the QTLaunchPad project.