The Automation of Globalization UI Testing

By: Peter Jonasson, Senior QE Manager, Product Globalization - VMware, Inc.

18 October 2016

Let’s go back to 2014 (an eternity ago in software releases). Back then, manual testing was the norm in our Product Globalization (G11n) QE team, and that was with more than 60 product releases annually. As a part of that, the teams ran hundreds of thousands of major and minor tests.

As a Senior QE Manager, I had been managing global teams running large-scale functional UI test matrices covering locales, browsers, guest and host operating systems, and databases. With each successive year, the list of test requirements (and the number of tests) steadily grew. While the manual tests did produce high-value results, greater scale and efficiency required investment in automation, especially given that three-month release trains and agile releases were on the horizon.

We in the G11n QE team saw automating our testing as absolutely necessary.

As soon as we chose to move toward automation, we saw a fork in the road. It quickly dawned on us that there were two paths for automated testing: UI (i.e., a record-replay tool) or API. Three experts on the team provided a down-to-earth, lay-of-the-software-land analysis comparing the pros and cons (cost versus time to production) of each.


Automating using APIs would mean doing it in Java. Java had a steep learning curve (remember that we're comparing this to manual testing), but APIs tended to stabilize early in a release cycle, which would allow us to script the automated tests in good time for releases. We were also comfortable with Java due to its maturity in our industry, and most of the team already knew the language. Another plus was that most internal QE teams use Java for their libraries, so we could easily exchange work and code reviews. And finally, hiring additional skilled engineers would be much easier in Java than in just about any other language.
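To make the comparison concrete, here is a minimal sketch of what an API-level check looks like in Java with JUnit. The endpoint, credentials, and the ProductApiClient class are hypothetical stand-ins, not our actual libraries; the point is simply that the test connects to a service, exercises one call, and asserts on the response rather than on anything drawn on a screen.

```java
import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class ApiSmokeTest {

    @Test
    public void serviceReportsAVersion() {
        // ProductApiClient is a placeholder for whatever SDK or REST client the product exposes.
        ProductApiClient client = ProductApiClient.connect(
                "https://product.example.com", "qe-user", "secret");
        try {
            // The assertion is the whole point: no screen to watch,
            // the API response itself is the test result.
            String version = client.getAbout().getVersion();
            assertFalse(version.isEmpty());
        } finally {
            client.close();
        }
    }
}
```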

UI testing would mean using Selenium (or a similar record-replay tool). Selenium is easier to learn and use. However, constant UI changes throughout any given release cycle would put tremendous pressure on us: we would need to validate the tests for accuracy before each run (even when using APIs to do so).

When we looked around to see what other companies were doing, we found that our enterprise peers had also gone the API route. UI automation is usually more focused on personal end-user software scenarios, or used as a supplement to manual regression testing; for complex back-end enterprise software, automated testing through APIs seemed best. And thus, we decided to automate testing using APIs (for the most part).

Wait, what are we doing?!

As we started implementing a new automated testing framework, we came up with two rules:

  1. Capture new and regressed failures in product functionality; old builds with known bugs were used to trigger failures during validation cycles prior to product regression testing
  2. Each automated test run across multiple OS locales must be followed by a trailing manual test cycle on a single OS locale to minimize missed bugs

And there it was...

With harsh doses of caffeine, relevant training, loads of emails, stacks of laughter, gratuitous amounts of beer, and heaps of verbal discussion (not necessarily in that order), test automation started to look like our missing hockey stick.

By 2015, all installation and upgrade tests were automated.

By 2016, close to 85% of general UI tests were automated.

Now, while this was swell, a secondary parallel effort was under way that affected the number of tests that could be automated: a test case review for relevance and priority.

A swimming pool full of tests

Looking at our vast list of functional tests (and their results), it quickly became clear that some tests were no longer relevant as the products changed. I mean, some tests passed all the time… even through multiple releases… across multiple years. Thousands of the tests we ran also seemed to be permutations of each other, each starting the same way and differing only in small details (to exercise a specific feature).

To weed out these unnecessary tests, each feature owner reviewed their tests for priority and relevance. The review lasted almost three months and reduced our test pool by 30%, which meant fewer tests to automate.

Internationalization Testing

Internationalization (i18n) testing was a different beast. For i18n critical-path testing, using the product's APIs for high-/non-ASCII input/output and for non-default system path testing was decidedly the best entry point, as it sped up our test delivery. By using the API calls, we could begin early in the release cycle and verify that each feature worked on a non-English system locale.
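As an illustration only (the InventoryClient class, its methods, and the endpoint below are hypothetical stand-ins for the product APIs we actually call), a high-/non-ASCII round-trip check of this kind can be parameterized over several scripts, so one test body covers many locales:

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class NonAsciiRoundTripTest {

    @Parameters
    public static Collection<Object[]> names() {
        return Arrays.asList(new Object[][] {
            { "日本語フォルダ" },     // Japanese
            { "Übermäßig-groß" },    // German umlauts and sharp s
            { "Папка-тест" },        // Cyrillic
            { "écran-réseau" }       // French accents
        });
    }

    private final String name;

    public NonAsciiRoundTripTest(String name) {
        this.name = name;
    }

    @Test
    public void folderNameSurvivesRoundTrip() {
        // InventoryClient is a placeholder for the product API client used in the tests.
        InventoryClient client = InventoryClient.connect("https://vc.example.com");
        String id = client.createFolder(name);
        try {
            // The value read back through the API must match the non-ASCII value sent in.
            assertEquals(name, client.getFolder(id).getName());
        } finally {
            client.deleteFolder(id);
        }
    }
}
```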

Test delivery with our API-based automated framework was much faster than with UI testing. The teams brought the automated test numbers up quickly with internal training and productive coding. Once the framework was operational, we added automated branch stability tests as well as continuous testing at defined intervals.

Using Protractor and Selenium

Two of the teams decided to move ahead with Selenium for their UI test cases covering end-user-type scenarios, as predicted. One engineer ramped up on Protractor to automate tests. Protractor sits on top of Selenium WebDriver and is tailored to AngularJS applications: test cases are written in JavaScript and run in Node.js against AngularJS applications such as web clients. Protractor has built-in support for AngularJS pages and actions; for example, its Angular-specific locators make querying for elements easier, and it also helps automate the setup of Angular page objects.

Automating Keyboard Layout Testing

Testing keyboard layouts is mission critical, so much so that keyboard functionality should be tested as if it were a feature in its own right. I mean, imagine pressing a key inside a VM and nothing comes out (or something completely different does)?! Scary!

Because of this, we test all of the world's major languages (bidirectional input is currently not supported, but plans are under way).

Keyboard testing came into its own in 2015, but it was a manual, non-scalable (watch out for “fat-finger” input!), time-consuming effort. Luckily, Qiang Wan, Ming Liu, and Zhenjun Zhuo took a very hard look at the keyboard test effort and invented an automated keyboard input/output tool. This tool reduces our keyboard testing from 16 days to 3 days per quarter. The tool and test strategy for international keyboard handling was also selected as session #8 at the International Unicode Conference 40 this year!
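For readers curious what automated keystroke verification can look like at its simplest, here is a rough sketch of the general idea; this is not the team's tool, just a self-contained illustration using the standard java.awt.Robot class to synthesize key events and then read back what actually landed in a text field (it needs a desktop session to run, and the result depends on the active OS keyboard layout, which is exactly what layout testing checks).

```java
import java.awt.Robot;
import java.awt.event.KeyEvent;

import javax.swing.JFrame;
import javax.swing.JTextField;

public class KeyboardEchoCheck {

    public static void main(String[] args) throws Exception {
        // A bare window with a text field acts as the "receiver" of the keystrokes.
        JFrame frame = new JFrame("keyboard check");
        JTextField field = new JTextField(20);
        frame.add(field);
        frame.pack();
        frame.setVisible(true);
        field.requestFocusInWindow();

        Robot robot = new Robot();
        robot.setAutoDelay(50);   // give the UI time to process each synthetic event
        robot.delay(500);         // wait for the window to gain focus

        // Type "abc" via synthesized key events.
        for (int key : new int[] { KeyEvent.VK_A, KeyEvent.VK_B, KeyEvent.VK_C }) {
            robot.keyPress(key);
            robot.keyRelease(key);
        }
        robot.delay(200);

        // Compare what arrived with what was sent.
        String typed = field.getText();
        System.out.println("abc".equals(typed) ? "PASS" : "FAIL: got \"" + typed + "\"");
        frame.dispose();
    }
}
```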

Are we done now?!

Not yet. Automated software solutions for early UI layout scanning have been proposed, and creating default images for tracking truncated UI text is coming along as well. Because automated tests deliver so much value, new use cases are being found every day.

Imagination, coding skills and, well, time are the limits!

 

Peter Jonasson

Product Globalization QE Manager at VMware focusing on i18n and L10n end-to-end UI testing through manual and automated efforts. Leading a host of global QE releases, including vSphere, NSX, vSAN, and vCD.
