Localization on the Fly via API: Where to Look and What to Consider

The translation industry is undergoing a significant transformation. The adoption of computer-assisted translation (CAT) systems has accelerated markedly over the past five years, and along with them, the tools for delivering finished materials to customers keep improving.

Since 2018, API connectors have been actively adopted. This technology provides direct access to the functionality of software products, bypassing manual file transfers and other human intervention.
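At its core, such a connector is a thin wrapper around an HTTP call. The sketch below is purely illustrative: the endpoint URL and the request/response schema are invented, since every real provider (Google, Amazon, Microsoft, Yandex) defines its own.

```python
import json
import urllib.request


def translate(text, source_lang, target_lang,
              endpoint="https://mt.example.com/v1/translate",  # hypothetical endpoint
              opener=urllib.request.urlopen):
    """Send one text segment to a machine translation API and return the result.

    The payload shape here is an assumption for illustration; real providers
    each define their own schema and authentication scheme.
    """
    payload = json.dumps({
        "text": text,
        "source": source_lang,
        "target": target_lang,
    }).encode("utf-8")
    request = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with opener(request) as response:
        return json.loads(response.read())["translation"]
```

The `opener` parameter is injected so the connector can be exercised against a stub during development, without hitting a paid MT endpoint.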

The main benefit of this approach is dynamic, on-the-fly localization. But there are also pitfalls to consider when designing such a system.

Engine Bandwidth Limit

Server capacity varies from one machine translation provider to another. If you want to translate all the data on a site or platform on the fly, keep in mind that both the workflow and the pace of the project can slow down because of the bandwidth of the chosen system. The big players such as Amazon, Google, Microsoft, or, locally, Yandex do well: the immense computing power of their servers lets them translate almost any volume. Niche, customizable platforms cannot boast such performance.
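One defensive pattern against an undersized provider is a client-side throttle that spaces out requests instead of flooding the engine. A minimal sketch (the rate itself would come from the provider's documented quota, which is not specified here):

```python
import time


class Throttle:
    """Client-side rate limiter: allow at most `rate` calls per `per` seconds.

    Useful when the chosen MT provider cannot absorb arbitrary volumes and
    starts rejecting or queueing requests under load.
    """

    def __init__(self, rate, per=1.0, clock=time.monotonic, sleep=time.sleep):
        self.interval = per / rate
        self.clock = clock
        self.sleep = sleep
        self._next_slot = clock()

    def wait(self):
        """Block until the next request slot is available."""
        now = self.clock()
        if now < self._next_slot:
            self.sleep(self._next_slot - now)
            now = self._next_slot
        self._next_slot = now + self.interval
```

Calling `throttle.wait()` before each API request keeps the client within the configured budget; the injectable clock and sleep functions exist only to make the behavior testable.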

As an alternative, you can deploy a machine translation system on your own servers. If the budget allows it, an optimal server-and-GPU combination for large volumes and short deadlines (up to 2,500 words per minute) will cost up to 5,000 U.S. dollars.

Format and Length of Query

Machine translation engines support a variety of data transfer formats, but they are still limited to the most popular ones. You will inevitably need to refine your system further so that data is uploaded correctly. Moreover, if the intention is to translate the platform or website into multiple languages, with a different machine translation provider selected for each language pair, the set of supported formats will differ, and you will have to adapt the system's format handling for each language separately.
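In practice this per-provider adaptation often takes the shape of a small adapter layer that converts internal segments into whatever each engine accepts. The two provider names and formats below are invented for illustration:

```python
def to_provider_payload(segments, provider):
    """Convert internal text segments into the request format a given
    MT provider expects. Both providers and their formats are hypothetical.
    """
    if provider == "provider_a":
        # hypothetical: expects a JSON object with a list of texts
        return {"texts": segments}
    if provider == "provider_b":
        # hypothetical: expects newline-delimited plain text
        return "\n".join(segments)
    raise ValueError(f"No format adapter for provider: {provider!r}")
```

Centralizing these conversions in one place keeps the rest of the pipeline provider-agnostic, so swapping the engine for one language pair does not ripple through the whole system.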

The length of the queries also matters. It makes sense to design data bundling in advance so that texts are sent for translation in the largest possible batches. Otherwise, there is a risk that too many short queries will overload the system. For example, Memsource supports up to 2,000 queries per day in its Team edition and offers an unlimited number only in the Ultimate edition. Given that requests of various kinds (not just translation requests) are sent to the server, the project may progress more slowly than planned.
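Such pre-bundling can be sketched as packing segments into batches under a per-request character budget (the 5,000-character limit below is an arbitrary placeholder; real limits are provider-specific):

```python
def bundle(segments, max_chars=5000):
    """Group short text segments into the largest batches that fit a
    per-request character budget, reducing the total number of API calls.
    """
    batches, current, size = [], [], 0
    for seg in segments:
        # flush the current batch if adding this segment would exceed the budget
        if current and size + len(seg) > max_chars:
            batches.append(current)
            current, size = [], 0
        current.append(seg)
        size += len(seg)
    if current:
        batches.append(current)
    return batches
```

With a daily quota like the 2,000-query limit mentioned above, sending ten bundled requests instead of a thousand single-segment ones is the difference between finishing on schedule and stalling mid-project.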

Material Prioritization

A localization project for a large marketplace involves many materials of differing importance, and the translation quality requirements differ for every single product. Prioritization can be based on the number of visits, the frequency of on-site search requests, or the position on the page or within a particular section. Human involvement is required for priority materials to avoid gross errors that would catch the end user's attention.
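The signals listed above can be reduced to a weighted score that decides which materials get human review. The weights, signal names, and threshold below are purely illustrative assumptions:

```python
# Illustrative weights for the prioritization signals; real values would be
# tuned per project.
DEFAULT_WEIGHTS = {"visits": 0.5, "searches": 0.3, "above_fold": 0.2}


def priority_score(signals, weights=DEFAULT_WEIGHTS):
    """Combine traffic signals (each normalized to 0..1) into one score."""
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)


def triage(pages, threshold=0.6):
    """Split pages into those needing human post-editing and those that can
    ship with raw machine translation, based on their priority score.
    """
    human, machine_only = [], []
    for name, signals in pages.items():
        if priority_score(signals) >= threshold:
            human.append(name)
        else:
            machine_only.append(name)
    return human, machine_only
```

A high-traffic home page would clear the threshold and be routed to post-editors, while a rarely visited listing ships with machine output only.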

Translation of Graphic Elements and Video

Although API connectors have proven their worth in localization projects, they are by no means omnipotent. Graphic elements and videos cannot be translated as easily or as quickly as text. To localize such content, you will need to download the materials and transfer them to the vendor. Production time will also often increase, because design specialists, sound directors, subtitling specialists, and many others (as well as the translators themselves) will have to be involved in the project.

Product and Company Names

This is another delicate issue that cannot be resolved quickly. Consider Apple and its well-known products. When translating into Russian, for example, the company name should remain in English rather than become 'Yabloko'. The same applies to any brand name made up of ordinary words. Machine translation providers assure customers that their systems support glossary add-ins and custom dictionaries that the engines will follow. However, such dictionaries are simple source-target lists, and the glossary function usually amounts to replacing a word anywhere in the text with its dictionary counterpart. As a result, one kilogram of apples on the marketplace turns into one kilogram of the Apple company. Nor is it possible to define contextual rules. That is why you cannot do without the involvement of post-editors.
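This failure mode is easy to reproduce with a naive source-to-target substitution, which is roughly how such glossary add-ins behave (the glossary entry and product listing below are invented for illustration):

```python
import re


def naive_glossary_apply(text, glossary):
    """Replace every occurrence of each glossary source term, ignoring case
    and context -- the context-blind behavior criticized above.
    """
    for source, target in glossary.items():
        text = re.sub(re.escape(source), target, text, flags=re.IGNORECASE)
    return text


# The brand "Apple" must stay untranslated, so the glossary pins its form:
glossary = {"apple": "Apple"}

listing = "1 kg of fresh apples"
result = naive_glossary_apply(listing, glossary)
# The fruit is wrongly rewritten as the brand: "1 kg of fresh Apples"
```

Because the replacement sees only the string "apple" and not its role in the sentence, every fruit listing is branded; no source-target list can encode the distinction, which is exactly why post-editors remain necessary.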

General Quality Control

Live content is accessible and trendy. Nevertheless, this ease can hide real problems with the quality of the material, and it is worth weighing the risk of damaging your reputation with poor-quality content on your site. A reputational loss can outweigh the initial economic effect of entering a new market. The best option, therefore, is to involve professional linguists in post-facto quality control of all content. The second option is a partial quality check. It carries a greater risk of missing annoying inconsistencies or passages that users simply do not understand, but the risk is still lower than not checking quality at all and entrusting your business entirely to a newfangled machine.
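A partial check of this kind amounts to sampling a fraction of translated segments for linguist review. A minimal sketch (the 10% sampling rate is an arbitrary illustration):

```python
import random


def sample_for_review(segments, fraction=0.1, seed=42):
    """Pick a reproducible random subset of translated segments to send to
    professional linguists for post-facto quality control.

    A fixed seed makes the sample repeatable across runs, which helps when
    reviewers and developers need to look at the same subset.
    """
    rng = random.Random(seed)
    k = max(1, round(len(segments) * fraction))
    return rng.sample(segments, k)
```

Raising `fraction` toward 1.0 trades review cost for coverage, up to the full post-facto check described above.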

In today's fast-moving times, on-the-fly translation offers quick and easy access to new markets and great opportunities for business expansion. However, it is advisable to entrust the link between your CMS and machine translation systems to professionals, either by bringing in outside subject matter experts or by launching a full-fledged department to run the localization project. There are numerous solutions on the market; which one to choose, and how to deal with the risks, is a question for your company's analysts.