MT Post Editing: My Perspective Following a Lively Discussion at FIT

By: Michel Lopez (e2f translations) - e2f, inc.

27 August 2011

Michel shares his deep knowledge of MT post-editing from a production perspective and offers advice to linguists ready to take the plunge.

A discussion in San Francisco

I was recently invited to participate in a discussion panel hosted by GALA at the FIT 2011 Conference in San Francisco and moderated by Laura Brandon (GALA), together with Uwe Muegge (CSOFT) and Kåre Lindahl (Venga).

The objective was to explain to an audience composed mostly of freelance translators how to “Create lasting partnerships with LSPs in the Age of Post-Editing.”

As the largest English-to-French single-language vendor, we are in the unique position of embodying both “The Translator” to our clients (multi-language vendors and end clients) and “The Agency” to freelance translators (who make up about 40 percent of our production capacity).

So, while the other panelists concentrated on the relationship part of the equation, seen from an LSP perspective, I decided to focus on MT post-editing, seen from a production perspective. This article extends my presentation.

A brief history

A general understanding of machine translation technology is a good place to start. Simply put, a machine translation engine takes the source files as input, applies a variety of strategies (rule-based, statistical or hybrid) and tools (generic corpora, domain-specific corpora, translation memories, glossaries, etc.), and creates target files, which are then normally reviewed (“post-edited”) by a linguist to produce the final files.

The early generations of engines were rule-based (or “deterministic,” as we used to say). They relied on syntactic rules, semantic rules and dictionaries to try to “understand” each source sentence, convert it into a “language-independent representation”, and from there create a sentence in the target language.

These engines perform relatively well when the domain and terminology are very limited (for example, in the car industry), and the sentence structure is very simple, in other words when the language is “controlled.” However, they are easily thrown off by unknown words, awkward grammar or long sentences.

Early this century, statistical engines started to appear. Instead of trying to understand the meaning of the source text, they rely on statistical methods and huge bilingual corpora. Google Translate is probably the best-known example.

These engines are much more flexible and the quality of their output increases considerably with the size of the corpora they have been fed.

Meanwhile, linguists have for many years used CAT tools such as Trados or Wordfast. These employ modern search technologies to match source segments against a database of previous translations, and can be seen as translation memory (TM) engines.

Translation memory techniques perform extremely well in the case of projects, such as user manuals, where source files are usually new versions of previously translated files, but are almost useless when translating something completely new.

Current MT engines, such as PROMT, SYSTRAN or Language Weaver, combine all these techniques in proprietary ways and attempt to provide an output that is the best possible trade-off. The idea is to provide a match from the TM if there is one, or to use a variety of machine translation techniques if none is available, while relying on the domain glossary to ensure terminological adequacy.

In real life, although the quality of the output is still relatively poor and the engines still exhibit many limitations, we have found that they have at least reached the level where they improve the overall productivity of the linguist, which is one of the main objectives of the industry.

A matter of attitude

At e2f, we have been embracing MT post-editing for the following reasons:

  • Many of us have technical backgrounds, so we’re not afraid of technology. As a matter of fact, the subject of my Master’s thesis in Computer Science, over 25 years ago, was a software interface between a human and a mobile robot: in other words, a machine translation engine from a subset of the French language into an ad hoc “robotic language.”
  • We recognize that it’s almost never beneficial in the long term to wage a war against new technology. You can always reject it for a while and convince yourself that your old ways of doing things are better, but eventually the wave is going to submerge you, so you might as well surf it early on!
  • Whereas most LSPs need to “convince” freelancers to move to MT post-editing, we are a single-language vendor, and most of our linguists are employees. As such, we didn’t need to convince them, but rather to train them. Also, as most of the current MT post-editing projects are large or very large, they are well-suited to SLVs.

With this positive attitude, we have been able to understand, adopt and embrace the MT post-editing frontier, rather than fight against it.

A trend without ambiguity

In 2009, out of the 15M words we translated, virtually all of them were “hand-translated.”
In 2010, MT post-editing represented 200k out of about 20M words.
In 2011, over 2M of 26M words will have been MT post-edited.
In 2012, we expect a total volume of over 30M words, with at least 7M “post-edited.”

It is clear from this trend that machine translation, which started over 50 years ago during the Cold War and showed more promise than success for many years, is no longer confined to universities, research laboratories and pilot projects. It has become mainstream, and it is not going to disappear from the translation industry anytime soon.

It is also clear from these numbers that while our post-editing work has increased, so has our standard TEP volume. At least at e2f translations, MT post-editing is not reducing work; we actually have more work!

A few important concepts

One important characteristic of our industry is that every single project is different from all others. We never do the same thing twice. This makes the work interesting, but when looking at it from a production perspective, we still need to measure and analyze in order to have some control over our processes.

Expected final quality

MT post-editing projects can be divided into two main categories, depending on the expected level of quality of the final output:

  • Perfection: The objective is to get final files indistinguishable from files that would have been handled only by humans through a standard translation process.
  • Readability: The objective is only to get final files that have the same meaning as the source files and are correct from a grammar, spelling and terminology standpoint, but whose style is not necessarily perfect.

For marketing content, “perfection” is clearly a must, but for technical manuals, “readability” can be deemed sufficient.

Edit-distance ratio

The edit-distance ratio measures the difference between the MT engine output and the post-edited files. Every time a word is added, modified, moved or deleted, the distance between the files increases.

  • If the distance is 0 percent, the linguist didn’t change anything.
  • If it’s 100 percent, pretty much everything has been changed!
  • In practice, we have found that for most such projects, the distance lies somewhere between 40 and 70 percent.

Of course, the quality of the MT engine output is an important factor here, but the ratio is also considerably higher when the expected final quality is “perfection,” since even purely stylistic imperfections must be corrected.
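
The article does not pin down a formula, but one common way to compute such a ratio is a word-level Levenshtein distance between the MT output and the post-edited version, normalized by segment length. Here is a minimal sketch in Python; normalizing by the longer segment’s word count is an assumption for illustration, and real QA tools each define the ratio slightly differently:

```python
def word_edit_distance(mt_output, post_edited):
    """Word-level Levenshtein distance: insertions, deletions, substitutions."""
    a, b = mt_output.split(), post_edited.split()
    # dp[i][j] = minimum edits to turn the first i words of a into the first j words of b
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # word deleted
                           dp[i][j - 1] + 1,        # word added
                           dp[i - 1][j - 1] + cost) # word kept or modified
    return dp[len(a)][len(b)]

def edit_distance_ratio(mt_output, post_edited):
    """Edits as a percentage of the longer segment's word count."""
    dist = word_edit_distance(mt_output, post_edited)
    denom = max(len(mt_output.split()), len(post_edited.split())) or 1
    return 100.0 * dist / denom
```

With this definition, an untouched segment scores 0 percent and a fully rewritten one approaches 100 percent, matching the intuition above.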

Productivity Metrics

In the standard translation process (TEP), the industry uses the following metrics:

  • Translation: 250 words per hour
  • Editing: 1000 words per hour
  • Proofing: 4000 words per hour

For MT post-editing, the productivity lies somewhere between 350 and 800 words per hour.

Of course, high productivity is only possible when the edit distance ratio is small, which in turn means that the expected final quality has a large influence on productivity.
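
For intuition, the three TEP rates above can be combined into a single end-to-end throughput: every word passes through each stage, so the hours per word add up. A quick sketch (the assumption that all three stages process the full word count is a simplification of real workflows):

```python
def combined_throughput(rates_wph):
    """Effective words/hour when every word passes through every stage."""
    hours_per_word = sum(1.0 / r for r in rates_wph)
    return 1.0 / hours_per_word

# Standard TEP rates: translation, editing, proofing (words per hour)
tep = combined_throughput([250, 1000, 4000])  # roughly 190 words per hour
```

With those rates, full TEP works out to roughly 190 words per hour end to end, which makes clear why even the low end of the 350–800 post-editing range represents a real productivity gain.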

Not all projects are created equal

Thanks to the large number of projects we have been handling, we have been able to categorize them as “Good” or “Bad” from a production perspective. Unfortunately, we often had to wait until the post-mortem phase to know whether the project was “Good” or “Bad”!

Good projects
The following are some of the characteristics of a “Good” project:

  • Source files have been written or edited for machine translation: either the source text was written in very simple and consistent language, with short sentences, straightforward word order and little redundancy, or the files have been processed through a “content cleaning software” such as Acrolinx in order to achieve the same results.
  • The glossary is comprehensive and well translated, and the engine uses it in a systematic manner.
  • The project is large, it has been divided into batches, and each batch is processed individually, after the final output from the previous batch has been incorporated into the MT engine.
  • Specific linguist feedback is incorporated into the engine (fine-tuning of grammar rules, updates to the glossary, etc.), and the linguist is financially rewarded for this step.

When all of the above is true, the linguist feels involved and the quality of the output increases throughout the project, along with the productivity and happiness of the linguist!

Bad projects
In “Bad” projects, the opposite happens:

  • Source files are poorly written, terminology is inconsistent, sentences are long, grammar is awkward, etc.
  • The glossary is too small or inadequate and/or it’s not being used consistently by the engine.
  • Even though the project is large, the machine translation engine has been run only once at the onset.

In this type of project, the linguist gets increasingly frustrated, as the same mistakes have to be corrected over and over again, while the overall productivity remains unchanged.

Best post-editing practices

In order to increase productivity while editing MT output, we have found that it is best to abide by the following rules:

  • Read the sentence in the target language first:
    • If the sentence is very long, erase it and translate from scratch (the longer the sentence, the more likely it is that the engine will have made a large number of mistakes and that it will be faster to start over).
    • If the sentence is short but does not make sense, erase it and translate from scratch (if you are going to change most of the words, you might as well start over).
    • Otherwise, read the source text and edit the target text as little as possible.
  • Don’t overcorrect for style or synonyms.
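
The rules above amount to a simple per-sentence triage. Sketched in Python below; the 25-word threshold and the makes_sense flag are illustrative assumptions, since in practice both judgments are the linguist’s, made at a glance:

```python
LONG_SENTENCE_WORDS = 25  # assumed cutoff; tune per language pair and engine

def triage(mt_sentence, makes_sense):
    """Decide how to handle one MT output sentence, per the rules above."""
    n_words = len(mt_sentence.split())
    if n_words > LONG_SENTENCE_WORDS:
        return "retranslate"   # long sentence: likely many errors, start over
    if not makes_sense:
        return "retranslate"   # short but garbled: start over
    return "post-edit"         # otherwise, edit as lightly as possible
```

The point of the sketch is the order of the checks: length and basic sense are judged from the target text alone, before the source is even read.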

Advice to freelance translators

To summarize, the best advice we can give to freelance translators willing to take the plunge into MT post-editing is:

  • Clarify expectations at the project onset (so you don’t end up getting paid for “Readable” quality while providing “Perfect” quality).
  • Look for “Good” projects and stay away from “Bad” projects, unless you would rather feel frustrated than involved!
  • Use best post-editing practices to increase your productivity.
  • Finally, calculate your productivity and adapt your rate accordingly!
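
The last point, adapting your rate to your measured productivity, is simple arithmetic: to keep your hourly income constant, scale your per-word rate by the ratio of your old and new throughputs. A sketch with hypothetical figures (the $0.12 rate and the 250 and 500 words-per-hour speeds are made-up examples, not industry recommendations):

```python
def equivalent_per_word_rate(base_rate, base_wph, new_wph):
    """Per-word rate that keeps hourly income constant at a new throughput."""
    return base_rate * base_wph / new_wph

# Hypothetical: $0.12/word hand-translating at 250 wph,
# versus post-editing at 500 wph -> half the per-word rate, same hourly income.
pe_rate = equivalent_per_word_rate(0.12, 250, 500)
```

The same formula also shows the trap mentioned above: accept a “Readability” per-word rate on a project that actually demands “Perfection,” and your real throughput, and therefore hourly income, drops.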

As Kåre and Uwe commented during our discussion, very similar advice applies to standard translation projects, which shows that MT engines are just another tool, not the revolution some linguists are scared of!

Born and raised in France, Michel has lived abroad since 1987, after earning an M.S. degree in Computer Science. In 2004, he founded e2f translations, now a 50-person group with offices in California, France, Madagascar and Mauritius. The leading English-to-French single-language vendor, e2f partners with many of the largest multi-language vendors as well as the localization departments of large companies across all sectors (IT, Marketing, Legal, Financial, Life Sciences, etc.) and geographies. With a 30 to 50 percent growth rate seven years in a row, e2f today translates over 25 million new words per year, exclusively from English to French.