Establishing quality review standards for assessing translations with LPET

By: University of Maryland

21 April 2015

For language professionals in the United States Government (USG), translating foreign language materials (both written and spoken) into written English and assessing the resulting language products are critical tasks. How can organizations ensure that language products are of high quality and that methods and standards for translation evaluation are consistent across the workforce?

The Language Product Evaluation Tool (LPET) is an electronic form that offers a standardized framework for evaluating a range of language products, including full translations and various types of summary translations. Researchers at the University of Maryland Center for Advanced Study of Language (CASL) developed the LPET based on methods from educational and psychological measurement with the goal of achieving a balance between practical applicability and theoretical soundness.[1] In addition to ensuring systematic assessment, the LPET also enables documentation of the difficulty of the source material and facilitates the process of providing meaningful feedback to translators.

What are the fundamental principles behind the LPET?

The quality of a language product is generally a composite of three main factors: Language Performance, Analysis, and Presentation.

In other words, quality is a function of more than just the accuracy with which the source language is rendered into English (Language Performance). In the USG context, a language product must often reflect an additional level of analysis—beyond the analysis inherent in translation—to ensure that the text conveys the significance of the source material and provides additional contextual information (Analysis). The use of accepted conventions for format and writing is also important for a language product to communicate clearly to its reader (Presentation).

The three main factors are further divided into six dimensions, each of which provides users with a 5-point rating scale for evaluating that aspect of the language product:

  • Accuracy of Explicit Content
  • Accuracy of Implicit Content
  • Coverage
  • Context
  • Format
  • Writing

Each dimension has a corresponding set of checkboxes that allow users to identify particular problem areas that contribute to the rating (e.g., Accuracy of Explicit Content includes checkboxes for words and expressions and for syntax). These checkboxes are not intended to provide exhaustive coverage of all types of errors that might occur within a particular dimension; rather, they represent common error types and features that are especially important for reviewers to consider when evaluating a language product.
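
To make the structure concrete, the following is a minimal Python sketch of how an LPET-style evaluation record might be modeled: three factors, six dimensions, a 5-point rating scale per dimension, and problem-area checkboxes. The mapping of dimensions to factors is inferred from the description above, and all checkbox labels other than "words and expressions" and "syntax" are invented placeholders, not the LPET's actual definitions.

```python
from dataclasses import dataclass, field

# Hypothetical model of an LPET-style evaluation record. The dimension-to-
# factor mapping is inferred from the article; checkbox labels other than
# "words and expressions" and "syntax" are placeholders.
FACTORS = {
    "Language Performance": ["Accuracy of Explicit Content",
                             "Accuracy of Implicit Content"],
    "Analysis": ["Coverage", "Context"],
    "Presentation": ["Format", "Writing"],
}

CHECKBOXES = {
    "Accuracy of Explicit Content": ["words and expressions", "syntax"],
    # One checkbox set per dimension; the labels below are placeholders.
    "Writing": ["organization", "tone"],
}

@dataclass
class DimensionScore:
    dimension: str
    rating: int                                        # 5-point scale: 1 (low) to 5 (high)
    problems: list[str] = field(default_factory=list)  # checked problem areas

    def __post_init__(self) -> None:
        if not 1 <= self.rating <= 5:
            raise ValueError("rating must be on the 5-point scale (1-5)")
        allowed = CHECKBOXES.get(self.dimension, [])
        for p in self.problems:
            if p not in allowed:
                raise ValueError(f"unknown problem area for {self.dimension}: {p}")

# Example: a product rated 3 on Accuracy of Explicit Content, with the
# "syntax" checkbox marked as a contributing problem area.
score = DimensionScore("Accuracy of Explicit Content", rating=3, problems=["syntax"])
```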

In addition, several language-specific versions of the LPET are available; these versions include an extra section that provides checkboxes for more fine-grained error types that may be especially prevalent when translating to or from a particular language.

The quality of a language product should be understood in the context of the difficulty of the source material.

This principle is implemented via a detailed description of the source material. Users document a variety of source characteristics, including language(s), difficulty level, and topic. Users can then select checkboxes to identify common content factors (e.g., cultural information, highly specific domain knowledge) and mode factors (e.g., non-native accent, poor spelling) that can contribute to the difficulty of the source material.
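
As a rough illustration, the source description might be captured as a simple record like the one sketched below. The field names and factor labels simply echo the examples above; they are assumptions for illustration, not the LPET's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical record for the source-description step. Field names and the
# example factor labels echo the article's examples, not the LPET schema.
CONTENT_FACTORS = ["cultural information", "highly specific domain knowledge"]
MODE_FACTORS = ["non-native accent", "poor spelling"]

@dataclass
class SourceDescription:
    languages: list[str]
    difficulty_level: int      # e.g., a level on an organization's difficulty scale
    topic: str
    content_factors: list[str] = field(default_factory=list)
    mode_factors: list[str] = field(default_factory=list)

# Example: a moderately difficult spoken source with a non-native accent.
source = SourceDescription(
    languages=["Spanish"],
    difficulty_level=3,
    topic="public health",
    mode_factors=["non-native accent"],
)
```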

Documenting the characteristics of the source material serves multiple purposes. For example, organizations can use information about the typical difficulty level of their material to estimate staffing needs, and translation managers can keep detailed records indicating which translators excel with which types of material.

How reliable is the LPET?

To assess the reliability, validity, and sensitivity of the LPET, CASL researchers conducted a set of experiments in which experienced quality reviewers evaluated language products that had been manipulated to vary in quality in systematic ways.[2] Reviewers were highly accurate and reliable in using the LPET checkboxes to categorize errors, and they showed moderate agreement with one another on the rating scales (similar to the levels of reliability observed in other studies involving ratings of job performance). To achieve maximum reliability for LPET use in the workplace, reviewers should receive proper training, and teams should meet regularly to calibrate their use of the rating scales.
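
The article does not specify which agreement statistics the CASL experiments used, but a team calibrating its ratings could track inter-rater agreement with a standard measure such as Cohen's kappa. The sketch below, with toy data, illustrates that general idea only; it is not a reproduction of the CASL analyses.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning the same items to categories."""
    assert len(ratings_a) == len(ratings_b), "raters must score the same items"
    n = len(ratings_a)
    # Observed agreement: proportion of items given identical ratings.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement, from each rater's marginal rating frequencies.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(counts_a) | set(counts_b))
    return (observed - expected) / (1 - expected)

# Toy data: two reviewers' 5-point ratings of the same eight products.
reviewer_1 = [4, 3, 5, 2, 4, 3, 4, 5]
reviewer_2 = [4, 3, 4, 2, 5, 3, 4, 5]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # ~0.65, "moderate"
```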

What are the workforce benefits of using the LPET?

Simple-to-use interface

The LPET checkboxes and rating scales can be viewed on a single screen. Easily accessible pop-ups provide precise definitions, examples, and additional instructions for each item.

Easy integration into existing workflows

Quality reviewers piloting the LPET found that documenting feedback with the LPET did not add much time to the quality review process, but did add value. Users reported that the LPET helped increase their awareness of the relationship between source characteristics and product quality and improved the objectivity, structure, and comprehensiveness of the feedback they provided.

Aggregation of data for a “big picture” view

Language product assessments can be completed and submitted electronically to facilitate data aggregation. Clearly illustrated results from checkboxes and rating scales allow managers to track both individual and group progress over time.
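
As a sketch of what such aggregation might look like, the snippet below computes per-dimension mean ratings and the most frequently checked problem areas across a set of submitted evaluations. The record layout and the problem-area labels are illustrative assumptions, not the LPET's export format.

```python
from collections import Counter
from statistics import mean

# Hypothetical submitted evaluations; "register" and "cohesion" are invented
# problem-area labels used only for illustration.
evaluations = [
    {"translator": "A", "dimension": "Coverage", "rating": 4, "problems": []},
    {"translator": "A", "dimension": "Writing",  "rating": 3, "problems": ["register"]},
    {"translator": "B", "dimension": "Coverage", "rating": 5, "problems": []},
    {"translator": "B", "dimension": "Writing",  "rating": 2, "problems": ["register", "cohesion"]},
]

# Mean rating per dimension across the team.
by_dimension: dict[str, list[int]] = {}
for ev in evaluations:
    by_dimension.setdefault(ev["dimension"], []).append(ev["rating"])
for dim, ratings in sorted(by_dimension.items()):
    print(f"{dim}: mean rating {mean(ratings):.1f} over {len(ratings)} reviews")

# Most frequently checked problem areas (candidates for targeted training).
problem_counts = Counter(p for ev in evaluations for p in ev["problems"])
print(problem_counts.most_common(3))
```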

What are the strategic benefits for an organization?

The LPET was developed to establish translation standards and metrics within a U.S. government context to ensure high quality and provide meaningful feedback for improvement. Although some aspects of the LPET were designed specifically for translation work within USG contexts, the general method of evaluation is adaptable to many other settings. The LPET’s unique combination of checklist and rubric approaches can enable any organization that handles translations to obtain detailed cataloguing of errors as well as more holistic assessments of the critical dimensions of product quality. Standardizing the quality review process will ultimately increase efficiency for both translators and quality reviewers and improve the quality of the resulting translation products.

To learn more about the LPET, visit http://www.casl.umd.edu/lpet.  

ABOUT THE AUTHOR

Erica B. Michael is an associate research scientist at the University of Maryland Center for Advanced Study of Language (CASL). She is also affiliated with the university’s Second Language Acquisition Program and Department of Psychology. Her training is in cognitive psychology and psycholinguistics, and she received her PhD in psychology in 1998 from The Pennsylvania State University. Before joining CASL’s research staff in 2005, Dr. Michael received postdoctoral training at Carnegie Mellon University and served as a visiting assistant professor at Bryn Mawr College. Her research interests include lexical and semantic processing in bilinguals and second language learners, and her CASL work focuses on cognitive processing in language and analysis tasks such as translation and summarization.

ABOUT CASL

The University of Maryland Center for Advanced Study of Language (CASL) conducts innovative, academically rigorous research in language and cognition that supports national security. CASL research is interdisciplinary and collaborative, bringing together people from the government, academia, and the private sector. CASL research is both strategic and tactical, so that it not only advances areas of knowledge, but also directly serves the critical needs of the nation. For more information, visit www.casl.umd.edu.


[1] Michael, E. B., Massaro, D., & Perlman, M. (2009). What’s the bottom line? Development of and potential uses for the Summary Translation Evaluation Tool (STET). The Next Wave, 18, 42-49.

[2] Michael, E. B., Saner, L., Massaro, D., Bailey, B., de Terra, D., Messenger, S., Rhoad, K., Castle, S., & Campbell, S. (2014). Establishing standards and metrics for translation: Experiments to validate the Language Product Evaluation Tool (LPET). In J. Schwieter & A. Ferreira (Eds.), Psycholinguistic and Cognitive Inquiries in Translation Studies (pp. 169-200). Newcastle upon Tyne, UK: Cambridge Scholars Publishing.

[3] Michael, E. B., Massaro, D., Bailey, B., de Terra, D., Messenger, S., Blodgett, A., Saner, L., Rhoad, K., Castle, S., & Gannon-Kurowski, S. (2011, November). The Language Product Evaluation Tool: Establishing standards and developing workforce expertise. Paper presented at the Translating and the Computer Conference, London, UK.

 
Note: The views expressed here are those of the authors and do not necessarily represent or reflect the views of GALA.