Localizing at Cloud Speed

NetApp Cloud Data Services offers the speed, scale, and agility to help transform businesses.

The Information Engineering team at NetApp uses AsciiDoc and GitHub to rapidly author, manage, and publish the NetApp Cloud Documentation that helps customers use our cloud products and services. The Information Engineering and Globalization teams came together and successfully launched a pilot to meet our global customers’ expectation of seeing our cloud documentation in their local languages.

The traditional localization workflow presents some real business challenges:

• Content update cycles are rapid, so long translation and review cycles render the workflow ineffective.

• The Human Translation (HT) and Machine Translation (MT) with Post-Editing (PE) processes are difficult to sustain from both cost and time perspectives.

To help our global customers access localized content quickly, we came together to discuss content localization challenges and brainstorm potential solutions.

Given the speed at which NetApp cloud content is written and updated in English, the team knew that traditional localization workflows wouldn’t meet cloud business requirements. How could we provide localized content quickly when the source content itself was changing so frequently? And how could we find a solution that reduced costs, drastically improved turnaround time, and minimized touch points in the content localization workflows? We had to find a way to leverage neural machine translation (NMT) technology.

These questions steered our thinking toward adopting raw MT and leveraging translation memory (TM). The team decided that the outcome of this approach would be good enough to produce localized technical content (product documentation). So, we launched an innovation project and identified cost, quality, scalability, and turnaround time as its most important levers. We identified key milestones and objectives, tracked status, and kept our key stakeholders informed. We explored multiple theories about how various prototypes might meet the challenges.

We quickly realized that we needed a tangible, proof-of-concept (POC) deployment to uncover hidden challenges and to give us time to adapt the prototypes as needed. In a few months, we deployed the POC prototype, localizing content for several key products. We laughed! We cried! Everything worked flawlessly! OK, “flawless” is a lie, but we certainly practiced the leadership tenet of trying and failing quickly. More importantly, the challenges we uncovered enabled us to make a fair assessment of a realistic approach going forward.

By the time we wrapped up the project, here’s what we accomplished:

• Designed three prototypes to rapidly localize the content

• Prepared a functional automated translation workflow through third-party translation APIs

• Connected in-house tools to test a low-touch, fully automated translation workflow

• Set up a mechanism to capture the usage of localized content

This project not only helped us discover approaches to rapidly localize content; it also became a business enabler for meeting partner and OEM requirements. So, what did we accomplish? In the short term, we used third-party translation APIs to rapidly translate the content.

In the long run, we developed a technical solution to optimize and scale our NMT capabilities. This customized solution is based on a workflow that includes an NMT engine and a Translation Management System (TMS). As part of this solution, we leveraged a TM database enriched by years of HT effort. The workflow depends on how well the TM database is architected, and the differences between raw MT and HT output have to be understood; this demarcation of TM databases comes with its own set of challenges during the initial stages. In addition, building a small terminology dictionary on the authoring infrastructure optimizes the workflow to a certain extent by reducing the processing overhead on the TMS side.
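To make the workflow concrete, here is a minimal sketch of the TM-first, raw-MT-fallback idea in Python. It is illustrative only: `nmt_translate` is a hypothetical stand-in for a real NMT engine call, and the glossary pre-pass mimics the small terminology dictionary described above; the actual solution runs inside a TMS, not in a script like this.

```python
# Sketch of a TM-first translation workflow with a raw-MT fallback.
# `nmt_translate` is a hypothetical placeholder, not a real NMT API.

def nmt_translate(segment: str, target_lang: str) -> str:
    """Placeholder for a call to an NMT engine (e.g., via a TMS connector)."""
    return f"[{target_lang}-MT] {segment}"

def apply_terminology(segment: str, glossary: dict) -> str:
    """Pre-apply approved terminology so locked terms survive raw MT."""
    for source_term, target_term in glossary.items():
        segment = segment.replace(source_term, target_term)
    return segment

def translate_segment(segment: str, target_lang: str, tm: dict, glossary: dict):
    """Return (translation, origin): an exact TM hit first, raw MT otherwise."""
    # 1. Exact-match lookup in the TM built from years of HT effort.
    if (segment, target_lang) in tm:
        return tm[(segment, target_lang)], "TM"
    # 2. Fall back to raw MT with the terminology dictionary pre-applied.
    return nmt_translate(apply_terminology(segment, glossary), target_lang), "MT"
```

The point of the sketch is the demarcation itself: every output is tagged as either an HT-sourced TM hit or raw MT, which is what makes the quality differences between the two measurable.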

At the end of this second phase, we see the following benefits for customers:

• Expanded localization services, including more languages and projects, on the GitHub platform

• Localized content in sync with the source content

• A community experience for content users, leading to an improved experience with localized content

Alongside these customer benefits, some challenges remain around localizing larger payloads in the pipeline, though such instances are infrequent. As MT engines evolve to improve translation quality and the TMS becomes better at ingesting content and processing larger payloads, we see opportunities to further improve efficiency and up our game.
