Machine Translation Tools and Gender

Be honest: no matter how well you speak another language, isn't it sometimes helpful to paste a sentence into a translation tool like DeepL or Google Translate and get an instant translation?

The problem is that if you don't speak the target language reasonably well, you may not notice when the tool churns out a result that would be nails on a chalkboard to a native speaker. There's one hot-button topic where this happens frequently: gender. No matter how inclusive and open-minded we may be in our writing, gender inflections are simply baked into many languages. And that spells potential pitfalls for machine translation tools.

Take a simple sentence like "The student asked the teacher a question." With machine translation, the German is rendered as: "Der Schüler hat dem Lehrer eine Frage gestellt" - male student, male teacher. Automatically. Did DeepL miss the memo about emancipation and equal opportunity? In languages whose job titles grammatically distinguish between male and female forms, many are automatically assigned the "stereotypical" gender. In other words, gender bias appears to be programmed in.

The team at ACT Translations tested this out with various jobs and languages, and the results were sobering. "Nurse" was automatically translated into German as "Krankenschwester" (the female form). "Doctor Sarah Miller," on the other hand, came out as "Doktor Sarah Miller" (the male form), with "Ärztin Sarah Miller" offered as an alternative ("Ärztin" is technically the female form but not the way you would actually address a woman doctor). In various languages we sampled, the machine translated "mechanic" as "Mechaniker," "mecánico" or "mécanicien" - nary a woman in the bunch. Yet the default option for "physical therapist" is the female "Physiotherapeutin." How about "professor" and "housekeeper"? Take a guess.
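
Curious readers can run a similar probe themselves against a freely available model. The sketch below uses the open-source Helsinki-NLP/opus-mt-en-de checkpoint via the Hugging Face transformers library rather than DeepL or Google Translate, so the exact outputs may differ from the ones described above, but the experiment is the same: feed in gender-neutral English job titles and see which German forms come back.

```python
# A minimal probe of gender defaults in an open-source English-to-German model.
# Requires: pip install transformers sentencepiece torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = [
    "The nurse asked a question.",
    "The doctor asked a question.",
    "The mechanic asked a question.",
    "The physical therapist asked a question.",
]

batch = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
for src, out in zip(sentences, outputs):
    print(f"{src}  ->  {tokenizer.decode(out, skip_special_tokens=True)}")
```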

The AI Industry: White, Male, Not Very Diverse

So, are deep learning, machine learning and artificial intelligence by definition testosterone-driven macho systems? If not, why is it that translation tools don't suggest a woman doctor or mechanic? The answer is not as simple as you might think. For one thing, artificial-intelligence research is dominated by white cis men. According to the MIT Technology Review, women make up just 18% of the authors at leading AI conferences, 20% of AI professors, and a measly 15% and 10% of the researchers employed by Facebook and Google, respectively. But that's just one aspect of a larger issue.

AI Is Based on Massive Amounts of Data

There's no question that machine learning systems can be powerful tools. But they're only as good as the data that goes into them. If there's a systematic flaw in the data being used to train an algorithm, the resulting model will reflect it.

In most cases, it's not purely about preconceptions or stereotypes. Nor is it necessarily the fault of people who were careless in selecting their data or training their models. Rather, machine translations reflect the social biases embedded in their source material, just as historical records reflect the attitudes of their time. These data sets in turn pass their biases on to the machines that learn from them. If more men than women were doctors in the past, a statistical model trained on historical data will learn that doctors are far more likely to be men than women, no matter what the actual percentages in the profession are today.
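
To make that mechanism concrete, here is a deliberately tiny, invented illustration of how raw frequency alone can bake a bias into a model. The word pairs and counts below are made up for the example; real training corpora are vastly larger, but the statistical logic is the same: whichever form dominates the historical data wins.

```python
from collections import Counter

# Invented toy "training corpus" of English-to-German word pairs, skewed the
# way historical text often is: doctors mostly appear in the masculine form.
corpus = [
    ("doctor", "Arzt"), ("doctor", "Arzt"), ("doctor", "Arzt"), ("doctor", "Ärztin"),
    ("nurse", "Krankenschwester"), ("nurse", "Krankenschwester"), ("nurse", "Krankenpfleger"),
]

counts = {}
for en, de in corpus:
    counts.setdefault(en, Counter())[de] += 1

def translate(word):
    # A purely frequency-based system picks whichever form it saw most often,
    # no matter what the profession's gender split looks like today.
    return counts[word].most_common(1)[0][0]

print(translate("doctor"))  # Arzt (masculine) wins on raw counts
print(translate("nurse"))   # Krankenschwester (feminine) wins on raw counts
```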

Translating Based on Patterns Rather Than Context

Machine translation models are trained using vast amounts of text in context. These include previously translated sentence pairs. But the nuances particular to each language complicate the matter and often make it difficult to get the translation right. When translating from English into inflected languages like German, Spanish and French, many gender-neutral nouns have to be translated into gender-specific nouns. A "friend" in English is either an "amiga" (feminine) or an "amigo" (masculine) in Spanish. A human translator can easily tell from the context whether the friend is a man or a woman. A machine translation tool can't.
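
To illustrate the gap, here is a deliberately crude, hypothetical sketch: a context-blind lookup has to commit to one form of "friend," while even a single pronoun cue elsewhere in the sentence lets a context-aware reader pick the right one. The dictionary and the rule are invented for this example; real systems are far more sophisticated, but the underlying problem is the same.

```python
# Hypothetical toy example: context-blind vs. context-aware gender choice.
GENDERED_ES = {"friend": {"f": "amiga", "m": "amigo"}}

def context_blind(word):
    # Without context, the system must commit to one form; here we hard-code
    # the masculine default that many tools fall back on.
    return GENDERED_ES[word]["m"]

def with_context(word, sentence):
    # A crude stand-in for what a human translator does effortlessly:
    # scan the rest of the sentence for a gender cue.
    cue = f" {sentence.lower()} "
    gender = "f" if " she " in cue or " her " in cue else "m"
    return GENDERED_ES[word][gender]

sentence = "My friend said she would arrive later."
print(context_blind("friend"))            # amigo - wrong for this sentence
print(with_context("friend", sentence))   # amiga - the pronoun settles it
```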

The perils of disregarding context in translation are well documented, from diplomatic near misses to epic advertising fails. Take a well-known carmaker's effort to introduce a new model in Belgium: the original tagline "Every car has a high-quality body" was rendered in the local language as "Every car has a high-quality corpse." Luckily, advertisers have since learned the value of context - not to mention focus groups, who quickly point out such errors (once they've stopped laughing).

Neural Networks Replacing Statistical Models

Errors like this can be traced to the methodology that tools such as Google Translate relied on for years. Their statistical translation models worked their way through text word by word and could not recognize context. Fails of the "body/corpse" magnitude have grown scarce because Google Translate now uses neural networks - i.e. artificial intelligence - for many of the 100+ languages it can process. AI can take context into account. As for the gender question, the engineers are currently exploring ways to build that in, too.

Human-Machine Collaboration

While translation tools have grown more accurate and capable in recent years, and many studies have delved into the challenges gender poses for translation tools, there is still plenty of room for improvement in this area - as there is in the larger issues of gender research and equality. Luckily, human translators have all the requisite linguistic skills and instincts to fix any hiccups or gaffes and pick up on subtleties the machines miss. Sometimes, when you put artificial and human intelligence together, you get the best of both worlds.