Automatic Prevention of Translation Disasters
By: Mirko Plitt (Modulo Language Automation)
24 November 2014
Until recently, the possibility of automating the assessment of post-editing work received far less attention than the evaluation of MT output itself. Translation automation is now filling an important gap in the translation industry's traditional approach to quality, and protecting against translation accidents.
Not many industries can pride themselves on such dedication to quality as the translation industry. Translation quality is the single most important industry topic and systematically the number one concern raised when discussing changes to generally accepted translation practices. And even the latest generation of crowd-sourced translation vendors promises "quality at scale", "pay less for quality" or simply "high quality" -- at word rates starting from $0.026.
It is not uncommon for large buyers to spend 20% of their budget on their own translation quality control. Enterprises often employ internal staff to manage and sometimes even execute translation quality assurance. Internal as well as external in-country domain experts support the translation teams, where the cost of the former is typically hidden from the translation budget.
The translation vendors’ own quality processes are typically included in the word rate, under the labels of Review and QA, where review applies to the entire translation and QA is carried out on sample checks. Some vendors also have style guides, run their own translator training programs, and often maintain extremely rigorous and demanding recruitment programs.
The overarching aim of all these efforts is traditionally to produce excellence, as defined by our industry's own experts. This inward focus of our definition of quality has received growing criticism in recent years; the concept of "good enough" has gained ground and somewhat lowered the bar. But the objective of making translations meet the standards which we as an industry guarantee remains, and in practice it still relies on some variation of the venerable four-eyes principle combined with sample checks.
But how reliable is this principle really, and overall, how effective is our industry’s relentless commitment to quality? For the sake of approaching the question from a different angle, let’s not look at how close we come to the best-case scenario of excellence but how well we protect our customers against the worst that can happen in translation.
The sad truth is that most of us who have been in this industry for a while have been in situations where poor translations slipped through the net of quality assurance, and ended up being reported back by the client, or worse, the client’s customers. These cases may be rare but they’re not exceptional. Their impact varies; it can be limited to a tense phone call, or damage a brand, cause an LSP to lose a client, and even have severe legal and financial consequences.
One recent example forced a large software maker to re-release a flagship product localized by one of our industry’s most respected LSPs. It was a perfect-storm scenario: a new team of translators demotivated by currency exchange fluctuations affecting their pay, combined with an excessively complex workflow leading to file versioning issues, lax reviewers, and an unfortunate selection of samples by both the vendor’s and the client’s QA teams. The process followed our industry’s stringent rules, yet the disaster could not be avoided.
Another example of a major translation accident that was recently commented on in social media involved a community of translators that failed to self-regulate -- proving that collaborative approaches are as exposed to the risk of producing poor translations as traditional role-based processes.
The existence of such disasters has always been acknowledged but used to be met with a surprising level of fatalism, considering the importance we all attach to quality. The measures that clients and vendors take when accidents happen boil down to replacing individuals identified as the culprits plus sometimes a temporary increase of the sample sizes sent to QA until the dust settles.
But as a side-effect of the rise of machine translation, new solutions are emerging, and every new translation disaster will push another translation buyer to investigate how these solutions can help them prevent disasters from happening in the future.
At first, companies specialized in post-editing started using edit distances and detailed timing to gain insight into the work of the individual post-editor, always looking for outliers with an increased risk of delivering poor translations. Modulo Language Automation, a translation automation start-up based in Switzerland (of which the author of this article is the co-founder), now goes a step further; it has developed a method to automatically assess the work carried out by post-editors.
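The outlier-hunting approach described above can be sketched in a few lines. The data, helper names, and the z-score threshold below are illustrative assumptions, not any vendor's actual implementation: the idea is simply to compare each post-editor's average edit distance against the pool and flag the deviants.

```python
# Sketch: flag post-editors whose average edit rate (edit distance between
# raw MT and their post-edited text, normalised by length) is an outlier.
# All data and thresholds are hypothetical.
from statistics import mean, pstdev

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two token lists."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (x != y)))   # substitution
        prev = curr
    return prev[-1]

def edit_rate(mt, post_edit):
    """Word-level edit distance normalised by MT length (a TER-like ratio)."""
    mt_tokens, pe_tokens = mt.split(), post_edit.split()
    return levenshtein(mt_tokens, pe_tokens) / max(len(mt_tokens), 1)

def flag_outliers(rates_by_editor, z=2.0):
    """Return editors whose mean edit rate deviates from the pool mean
    by more than z standard deviations."""
    means = {e: mean(r) for e, r in rates_by_editor.items()}
    pool = list(means.values())
    mu, sigma = mean(pool), pstdev(pool)
    if sigma == 0:
        return []
    return [e for e, m in means.items() if abs(m - mu) > z * sigma]
```

An unusually low edit rate can be as suspicious as a high one (it may mean the post-editor barely touched the MT output), which is why the sketch flags deviations in both directions.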
The Swiss Post-Editing Score, as Modulo has named its service, combines error injection with statistical process control as used in the manufacturing industry, and applies this combination to the human correction of machine translation. Because this method is fully automatic, it can cover the entire translation production -- in real time.
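The article does not disclose how the Swiss Post-Editing Score works internally, but the two techniques it names can be sketched under stated assumptions: inject a known error into a small fraction of segments, record whether the post-editor corrects it, and track the resulting catch rate on a manufacturing-style control chart (here a p-chart with 3-sigma limits). Every name and number below is an illustrative assumption.

```python
# Sketch of error injection + statistical process control (p-chart) for
# post-editing. Illustrative only; the actual method is not public.
import random

def inject_error(segment, rng):
    """Corrupt a segment by swapping two adjacent words -- one simple kind
    of injectable error; a real system would use many error types."""
    words = segment.split()
    if len(words) < 2:
        return segment
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def catch_rate(corrected_flags):
    """Fraction of injected errors the post-editor actually fixed."""
    return sum(corrected_flags) / len(corrected_flags)

def p_chart_limits(historical_rates, sample_size):
    """Centre line and 3-sigma control limits for a proportion chart,
    computed from historical catch rates."""
    p_bar = sum(historical_rates) / len(historical_rates)
    sigma = (p_bar * (1 - p_bar) / sample_size) ** 0.5
    lcl = max(0.0, p_bar - 3 * sigma)
    ucl = min(1.0, p_bar + 3 * sigma)
    return lcl, p_bar, ucl

def out_of_control(rate, lcl):
    """A batch whose catch rate drops below the lower control limit signals
    an elevated risk of poor post-editing and should trigger review."""
    return rate < lcl
```

The appeal of framing it this way is that no human needs to read the translations: a post-editor who misses deliberately planted errors is statistically likely to be missing real ones too, and the control chart turns that signal into an automatic, real-time alarm.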
And as translation automation is not just about machine translation, Modulo is now developing a second automatic service which retrofits its method to the review of human translation, to ensure that this task is effectively carried out. Only when reviews are guaranteed to happen can sample-based translation QA fully focus on excellence.
With or without MT, closing the vulnerability to translation disasters is the logical complement to our industry’s unabating commitment to quality.
Mirko Plitt is the co-founder of Modulo Language Automation, a Swiss start-up committed to offering novel translation solutions made possible by today's technological advances. He previously was senior language technologies manager at Autodesk, where he led the implementation of an integrated authoring and translation management system and developed one of the industry's broadest portfolios of machine translation and other linguistic technology solutions. Prior to his career in the enterprise, Mirko got to know the full variety of localization job profiles during his time at Bowne Global Solutions. With a background in computational linguistics, he started his career in the early nineties as a developer of a translating fax machine, which worked impressively well -- as long as the wording of the fax wasn't changed.