The Growing Importance of Ethical AI

Artificial intelligence (AI) provides ample opportunities, from automating critical business processes to generating multilingual content almost instantly. AI plays an integral role in our daily lives: it helps us resolve customer service issues, make financial decisions, answer healthcare queries, check facts, and even control the light switches in our own homes.

However, it also raises serious ethical concerns.

What Is Ethical AI?

In short, ethical AI is the set of practices a company follows when implementing AI initiatives, designed to preserve human dignity and ensure no harm comes to users.

These initiatives are typically centered on global laws, misinformation, data protection regulations, and discrimination against marginalized communities, including BIPOC and LGBTQIA+ people, women, and people with disabilities. They also cover the company’s own legal and medical liabilities.

Why Is Ethical AI a Big Deal?

More businesses are using AI to create solutions that elevate their offerings and gain a competitive edge, which can be incredibly profitable. Scale up too quickly, however, and these companies put their reputations on the line. In addition, a lack of ethical standards in AI technologies can introduce bias into ranking algorithms, such as those that screen job applications, and into highly technical, medical, and legal content.

For example, in 2019 Optum was investigated over an algorithm that reportedly led doctors and nurses to pay closer attention to white patients than to sicker Black patients.

In the same year, Goldman Sachs was investigated over an algorithm that reportedly granted men larger credit limits than women on the Apple Card. Amazon’s Alexa, Apple’s Siri, and Google Assistant have also come under fire for reinforcing discrimination against women: all of these tech giants launched their voice assistants with female voices, which some argued reaffirmed the stereotype that women are subservient.

And finally, one of the most notorious ethical AI incidents involved Facebook. In 2018, it emerged that the political consulting firm Cambridge Analytica had harvested the personal data of more than 50 million Facebook users. In the wake of these incidents, some of the world’s leading technology companies, including Microsoft, Twitter, Google, and Instagram, are building new algorithms and review processes that focus on the ethical side of AI.

These efforts aim to catch problems that can arise when thousands of data sources are collected and fed into machine learning models.

How AI Ethics Impacts the Language Space

AI ethics plays a crucial role in handling the ethical implications of AI practices within the language space. Here are some of the ways AI ethics can help companies maintain responsible practices across language understanding, generation, and translation:

1. Identifying Social Bias

In 2016, Microsoft revealed its conversational understanding experiment, an AI-powered Twitter bot called "Tay." Within a day, the program started tweeting offensive and racist messages. The incident set a dangerous precedent for future AI projects in the language space: this is what happens when language bots are developed without the ability to identify and control cultural issues and social biases. AI-driven tools can instead be configured to identify and analyze such issues within source and target texts and alert stakeholders promptly.

For instance, you can use word spotting, which lets humans and trained bots find well-known biases in the linguistic expression of medically, socially, and politically sensitive questions.

The training data can include examples of gender and racial inclusiveness, suggested corrections for dangerously ambiguous language, and signals relevant to improving work and quality evaluation.
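
To make the idea concrete, here is a minimal word-spotting sketch in Python. The term list, categories, and example segment are illustrative assumptions, not an actual production vocabulary; a real deployment would draw on curated, language-specific term bases.

```python
# Minimal word-spotting sketch: scan a segment against a categorized term list
# and flag matches for human review. Terms below are placeholders for illustration.
import re
from dataclasses import dataclass

# Hypothetical term list: category -> terms to flag for review.
BIAS_TERMS = {
    "gendered": ["chairman", "manpower", "mankind"],
    "ableist": ["crazy", "lame"],
}

@dataclass
class Flag:
    term: str
    category: str
    start: int
    end: int

def spot_terms(text: str) -> list[Flag]:
    """Scan a source or target segment and flag listed terms with their positions."""
    flags = []
    for category, terms in BIAS_TERMS.items():
        for term in terms:
            for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
                flags.append(Flag(match.group(0), category, match.start(), match.end()))
    return flags

if __name__ == "__main__":
    segment = "The chairman asked for more manpower on the project."
    for flag in spot_terms(segment):
        print(f"[{flag.category}] '{flag.term}' at {flag.start}-{flag.end}")
```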

Many machine translation engines can now handle gender-inclusive translation issues and errors with reasonable accuracy. Welocalize uses different components of AI and exhaustive language-specific corpora to detect non-inclusive language, offensive terms, and their synonyms.

We can then recommend replacements and provide synonym definitions for more accurate understanding. Generally, systemic bias is unlikely to become a major problem in multilingual AI translation, partly because humans still play a big role in the process.
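
As a rough illustration of how replacement recommendations might look, the sketch below maps flagged terms to suggested alternatives with short definitions. The mapping is a hypothetical example, not Welocalize’s actual terminology data.

```python
# Hypothetical replacement map: flagged term -> (suggested alternative, short definition).
REPLACEMENTS = {
    "chairman": ("chairperson", "a neutral term for the person who leads a meeting or board"),
    "manpower": ("workforce", "the people available to do a piece of work"),
}

def recommend(term: str) -> str:
    """Return a suggested inclusive alternative with a short definition, or a fallback."""
    entry = REPLACEMENTS.get(term.lower())
    if entry is None:
        return f"No recommendation on file for '{term}'; route to a human reviewer."
    suggestion, definition = entry
    return f"Consider '{suggestion}' instead of '{term}' ({definition})."

print(recommend("chairman"))
print(recommend("mankind"))
```

Keeping an explicit fallback path like the one above reflects the point made earlier: humans still review anything the term base cannot resolve on its own.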

2. Monitoring Issues in User-Generated Social Media Content

User-generated content often holds more influence over customer perceptions and purchase decisions than branded content. Some examples are:

• Customer reviews

• Social media comments

• Discussion threads in forums

• Social media posts

If you’re a translation buyer, you can’t always control the quality of the user-generated social media content and online commentary that arrives upstream. This is where AI comes in: language service providers can use it to automatically scan phrase and word signals for fake content and dangerous social bias.
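
One simple way to picture this kind of signal scanning is a handful of heuristics run over each comment. The patterns, signal names, and example review below are assumptions chosen purely for illustration; a production system would combine many more signals with trained classifiers and human review.

```python
# Rough heuristic sketch for surfacing suspicious user-generated content.
# Signals and patterns are illustrative assumptions, not a production filter.
import re

SIGNAL_PATTERNS = {
    "excessive_superlatives": re.compile(r"\b(best|worst|amazing|terrible)\b", re.IGNORECASE),
    "link_spam": re.compile(r"https?://\S+"),
    "shouting": re.compile(r"\b[A-Z]{4,}\b"),
}

def score_signals(comment: str) -> dict[str, int]:
    """Count how often each suspicious signal appears in a comment."""
    return {name: len(pattern.findall(comment)) for name, pattern in SIGNAL_PATTERNS.items()}

review = "BEST product EVER!!! Amazing, amazing, amazing. Buy now at http://example.com"
print(score_signals(review))
```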

You can also use this content to identify bias in your target markets, run sentiment analysis, and adapt future marketing messaging and social media content for better engagement.
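
For the sentiment-analysis step, a minimal sketch using the open-source Hugging Face transformers library might look like the following. The comments, market labels, and aggregation logic are illustrative assumptions, and the default English model would need to be swapped for a multilingual one in other markets.

```python
# Minimal sentiment-analysis sketch over hypothetical user-generated comments,
# aggregated by target market. Data and labels are illustrative assumptions.
from collections import Counter
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

comments = [
    {"market": "US", "text": "The new release is fantastic, support was quick."},
    {"market": "US", "text": "Checkout kept failing and nobody answered my emails."},
    {"market": "UK", "text": "Decent product, but the manual is confusing."},
]

results = classifier([c["text"] for c in comments])
per_market = Counter((c["market"], r["label"]) for c, r in zip(comments, results))

for (market, label), count in sorted(per_market.items()):
    print(f"{market}: {label} x{count}")
```

Aggregating labels per market in this way gives a quick, directional read on engagement; the per-comment results still need human interpretation before they shape future messaging.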
