Letter from the Guest Editor: MT Post-editing - A Fresh Perspective

By: Olga Beregovaya (Welocalize)


25 November 2014

Olga Beregovaya, Vice President of Language Tools at Welocalize, welcomes readers to GALAxy Q4 2014, which shines a light on current trends, opportunities, and challenges in MT post-editing. Seeking a fresh look at this much-discussed subject, Olga explains her focus on the roles machine translation and post-editing are playing now, and the roles they may play in the future of the translation and localization industry.

When I was offered the role of the Guest Editor for this issue of GALAxy Newsletter, I knew immediately who I’d want to reach out to for their contribution and what aspects of this exciting new field I’d want the issue to cover.

In my position at a major LSP, I work with post-editing daily, so everything I know about the field I know from experience. But then there was a bit of fear: while post-editing is a relatively new field (certainly the newest compared to the other translation optimization tools), it has been covered so extensively in professional publications, blogs, conference presentations, marketing materials, and webinars that I really wanted to make sure we were saying something new. That meant viewing the issues that accompany the adoption of post-editing from several fresh angles, discussing the challenges we have experienced or see coming, and providing input from various perspectives: that of a buyer, a translator, a software developer, and a researcher.

First and foremost, we want global content to make an impact. We measure the success of our work by the real, measurable value the translated content brings to our clients’ business, and this is what dictates the quality expectations and the way the translations are produced. Recent developments in translation automation have brought MT output quality to levels where post-editing can be a viable way for translators to become more productive without introducing any quality degradation. On the contrary, translators nowadays see the benefits of relying on the accuracy of MT output.

These concepts have indeed gained traction, and a lot of very exciting things are happening in the field. Rarely has there been a conversation with our clients where the “promise of post-editing” has not come up. There are many variables to consider, and “Is my content right for MT?” is just one of them.

Our own experience at Welocalize shows that unless we are talking about high-visibility marketing content, there will usually be room for some application of machine translation. However, we would probably all agree that nothing new and exciting has ever come about without bringing along some equally exciting challenges and stirring up some controversy.

So, here’s an issue of GALAxy dedicated to the roles machine translation and post-editing are playing now, and will play in the future, in the translation and localization industry.

A few words about the articles included in this issue and their authors:

An article by Lena Marg, who has herself been on the journey of translator-turned-post-editor and who now manages post-editor training and engine evaluation at Welocalize, is called Post-Editing 2.0. In it, Lena brings up the concept of “dynamic quality” and how it ties in with the various levels of post-editing. While it is fairly easy for the industry to define the levels of post-editing as “full” and “light” (and they are now getting even more granular), at the end of the day it is up to the translator to make all the practical decisions about what to change, to what extent, and what to leave untouched. Intimately familiar with post-editing work from her years “in the field,” Lena looks at different types of translation tasks and possible quality evaluation (QE) scenarios, which in turn help define the necessary level of post-editing.

Lena’s article presents the translators’ perspective; in the next article we look at the enterprise-scale implementation of machine translation and post-editing as seen through the buyer’s eyes. The piece by Wayne Bourland, Director of Globalization at Dell, has a rather provocative title: MT, Future of Everything, or Precursor to Unemployment. Wayne brings up the very important issue of MT adoption: if machine translation is indeed so great, or, rather, since it has already proven to be so great, why are many companies still rather slow to add MT to their toolchain and harvest all the benefits its use brings? The paradigm of “cost x quality x velocity” is most definitely a key part of the future of our industry, and in his article Wayne talks about the role MT plays in reaching the desired balance between these three components, discussing the risks and challenges we face and the perceptions we need to overcome.

These articles, written from the perspective of MT practitioners, are followed by two articles from developers of software products that aid industry adoption of machine translation and post-editing. John Moran has been a professional translator and a software developer, and is now a researcher in the field of translation automation. In From Lab to Market: Can an industry research collaboration fix the post-editing pricing problem?, John shares his views on an aspect of post-editing that is of great interest and utmost importance for buyers and service providers alike: what is the right and, most importantly, the fairest way of pricing the post-editing effort?

Obviously, we have evolved past “one-size-fits-all” post-editing discounts, but where have we arrived, and what comes next? John discusses approaches that, in his view, have the potential to satisfy buyers’ expectations while reflecting the realistic impact the adoption of machine translation has on translators’ productivity; ways of capturing and analyzing these gains; and ways of translating them into pricing models that are fair to everyone involved. John analyzes the concept of “User Activity Data” and presents technology that has already proven to help in making educated, data-driven pricing decisions.

In his article, Automatic Prevention of Translation Disasters, Mirko Plitt, a computational linguist and a founder of Modulo, a company developing solutions that measure translation quality, analyzes the impact human intervention has on otherwise automated translation processes. While humans most definitely improve machine translation output, we can also at times introduce errors, thus having a negative impact on the quality of the final product. In Mirko’s own words, “Poor translation can slip through the net of quality assurance.” The good news is that there are ways to automate the capturing of human errors, and Mirko presents the tool his company created, the Swiss Precision Score. (Mirko is a German living in Neuchâtel, Switzerland, and is most definitely exposed to the highly acclaimed Swiss accuracy and precision; could there be a more appropriate place to work on his application?)

So, here’s the new GALAxy issue. Think with us, argue with us, and enjoy!

Olga Beregovaya is Vice President of Language Tools at Welocalize. She is a recognized industry expert in automation, machine translation, and innovation in translation through language tools.