Artificial Intelligence (AI) has left the pages of science fiction and become a tangible reality. While some are already describing our current era as the "Age of AI," it remains difficult to find a precise definition of artificial intelligence that satisfies all stakeholders.

To help you see more clearly, the GALA Global community has developed a glossary of key terms related to artificial intelligence (AI), intended for those who wish to familiarize themselves with the terminology specific to this discipline.

Algorithm

An algorithm is a finite, well-defined procedure that allows a computer to solve a given problem. In computer science, it translates into a sequence of elementary operations, called instructions, that can be executed by a computer.
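A classic illustration is Euclid's algorithm, which computes the greatest common divisor of two integers through a short sequence of elementary operations. A minimal Python sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder reaches zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Each loop iteration is one elementary step; the sequence of steps, not any single formula, is the algorithm.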

Algorithmic bias

The term "algorithmic bias" refers to a situation in which an algorithm systematically produces results that are skewed against (or toward) certain categories of people, usually based on characteristics such as ethnicity, gender, age, or religion. In marketing, for example, this can occur when algorithms disproportionately target advertisements or offers at certain groups of customers. By being aware of these biases and taking steps to mitigate them, companies can continue to benefit from algorithmic targeting while treating customers fairly.

Artificial intelligence

Artificial Intelligence (AI) is a field of research devoted to designing and programming machines with cognitive capabilities inspired by human behavioral models.

The software developed for this purpose is capable of autonomously executing a defined task and making decisions that are normally entrusted to humans. One current line of development is entrusting machines with complex tasks previously delegated to humans.

Artificial intelligence maturity model

The AI Ethical Maturity Model is a framework to help organizations assess and improve their ethical practices related to the use of AI. The model describes how they can analyze the ethical nature of their current AI practices and then move toward more responsible and trustworthy use. This includes issues related to transparency, fairness, privacy, accountability, and the subjectivity of predictions.

Artificial General Intelligence

Artificial General Intelligence, or AGI, is a type of AI that can understand, learn, and perform complex tasks in a human-like manner.

The goal of an AGI is to come as close as possible to replicating the human mind and cognitive abilities. An AGI must demonstrate cognitive versatility, being capable of learning from different experiences, understanding and adapting to a wide range of situations without the need for task-specific programming.

Artificial Intelligence of Things

Artificial Intelligence of Things (AIoT) is the integration of Artificial Intelligence (AI) into Internet of Things (IoT) solutions. The Internet of Things (IoT) is based on the idea of "smart" everyday objects that are interconnected (thanks to the Internet) and can exchange the information they hold, collect, and/or process.

Through artificial intelligence, this network can process data and exchange information with other objects, improving the management and analysis of huge amounts of data. Examples include autonomous vehicles, remote healthcare, smart office buildings, and predictive maintenance.

Big Data

Big data refers to the massive amounts of data that organizations have created and continue to create daily. This data can be analyzed and transformed into valuable information that enables companies to make better decisions and automate processes.
 
Analyzing a large amount of data allows companies to make more informed decisions, such as describing the current and past situation of their business processes, answering questions about what might happen in the future, or proposing strategic solutions based on the analysis performed.

Bot

A bot, or chatbot, is a piece of software designed to communicate with humans in natural language, with the purpose of automating certain tasks or retrieving information from databases. It is a tool capable of providing 24/7 assistance to customers and employees via text or audio and is suitable for different applications in different sectors.

A bot can live within another application such as Facebook or WhatsApp or be integrated into websites to handle initial contacts in call centers or help desks, or it can automate dialogues via email and SMS to provide support for a company or a specific product.

Computer Vision

Computer vision algorithms process the content of images or videos to recognize objects, people, or animals in them, and reconstruct a context around the image, giving it real meaning.

To work properly, computer vision algorithms need to be trained on many images, which form a dataset from which they can learn. Computer vision algorithms have many applications, from face recognition to industrial and manufacturing uses.

Data Mining

Data mining refers to the (automated) process of extracting information from large amounts of unstructured data (found in databases or files) to allow computers to identify trends and (recurring) patterns that can be used as a basis for decision making in areas such as marketing, business and finance, science, and industry.
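As a minimal sketch of the idea, the hypothetical example below counts which pairs of items recur together across purchase transactions, the kind of recurring pattern a data-mining system looks for (the transaction data is invented for illustration):

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase transactions (market-basket data).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pairs are candidate "recurring patterns".
print(pair_counts.most_common(2))
```

Real data-mining systems apply the same counting idea at much larger scale, with algorithms such as frequent-itemset mining.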

Data Science

Data science aims to understand and analyze real-world phenomena by searching for logical relationships in data. Patterns and models are developed to gain new information that can be used in other areas.

Data scientists, or researchers who apply these methods, transform large amounts of "raw" data or big data into valuable information that companies can use to improve their products or gain competitive advantage.

Dataset

A dataset is a structured collection of data, usually presented in tabular form. In this table, each column represents a specific variable and each row corresponds to an observation. Datasets are essential in the fields of data science, machine learning and artificial intelligence, because they provide the information needed to train models and perform analysis.
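In code, such a table is often represented as a list of rows, where each row holds the same variables. The toy rows below are loosely modeled on the well-known Iris dataset, purely for illustration:

```python
# A tiny dataset: each row is an observation, each key a variable (column).
dataset = [
    {"sepal_length": 5.1, "sepal_width": 3.5, "species": "setosa"},
    {"sepal_length": 7.0, "sepal_width": 3.2, "species": "versicolor"},
    {"sepal_length": 6.3, "sepal_width": 3.3, "species": "virginica"},
]

# Columns (variables) and rows (observations) can be inspected directly.
columns = list(dataset[0].keys())
print(columns)       # ['sepal_length', 'sepal_width', 'species']
print(len(dataset))  # 3
```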

Deepfake

A deepfake is the product of an artificial intelligence model created from a real-world base of images, videos, or audio recordings, that can be perceived as real by altering or reproducing the facial features, expressions, or voice timbre of a person.

Beyond merely playful uses, deepfake material can also be used to create fake news, hoaxes, and scams, or to commit cybercrimes of various kinds.

Deep Learning

Deep learning is an artificial intelligence (AI) technique that teaches computers to process data in a way that mimics the human brain. Deep learning models can recognize complex patterns in images, text, sounds, and other data to produce accurate information and predictions.

You can use deep learning methods to automate tasks that typically require human intelligence, such as describing images or transcribing an audio file into text.

Expert System

An expert system is a computer program designed to reproduce the performance of a person who is an expert in a particular field of knowledge or subject matter. After being properly trained by an expert, it can infer information from a set of data and source information.

Expert systems can be rule-based or tree-based. In the first case, they start from a set of facts to infer new facts by following true-false logic or the cause-and-effect model. In the second case, they start from a sequence of facts or decisions to build a tree of possible alternatives and situations to reach a conclusion.

Facial recognition

Facial recognition is an artificial intelligence technology that can identify a person based on the unique features of their face. The challenge is to find the right balance between hyper-personalization and respect for privacy.

Forecasting Algorithm

A forecasting algorithm is an algorithm used to make probable forecasts or estimates about the future based on historical data.
These algorithms analyze trends and recurring patterns in past data and use them to make predictions about what may happen next.
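The idea can be sketched with a deliberately simple baseline: a moving-average forecaster that predicts the next value as the mean of the most recent observations (the sales figures below are invented):

```python
def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window`
    observations -- a simple baseline forecasting algorithm."""
    recent = history[-window:]
    return sum(recent) / len(recent)

monthly_sales = [100, 110, 120, 130, 140, 150]
print(moving_average_forecast(monthly_sales))  # 140.0
```

Production forecasting methods (exponential smoothing, ARIMA, machine-learning regressors) refine this same pattern-from-history principle.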

Forecasting algorithms are used to anticipate future events or outcomes, make informed decisions, plan resources, and mitigate risks in fields such as weather, business, finance, and manufacturing.

Generator

A generator is AI-based software that generates content from a given query or input. The system learns the patterns and characteristics of the training data provided and then creates new content that mimics them.

Among text generators, the most notable is ChatGPT, released by OpenAI.

Hallucination

A hallucination occurs when a GenAI (generative artificial intelligence) system analyzes a piece of content, but comes to an incorrect conclusion and produces a new piece of content that does not match reality.

For example, if an AI trained on thousands of animal photos is asked to create a new image of an animal, but combines the head of a giraffe with the trunk of an elephant, the result is considered a hallucination. Although they may be interesting, hallucinations are undesirable results and indicate a problem in the responses of the generative model.

Image Processing

Image processing systems can perform certain operations on images, such as producing an enhanced image, recognizing the people, animals, and things present, or more generally extracting some useful information or feature from them.
Applications range from medicine to geological processing to auto damage assessment in insurance claims.

Image Recognition

Image recognition is a subset of computer vision. It is a technology for detecting and identifying places, people, objects, features, and many other types of elements in an image or video through pre-trained neural networks to spot a specific element or to classify an image and assign it to a category.

Intelligent Data Processing

Intelligent Data Processing (IDP) algorithms are used to collect data and extract information to initiate and process specific actions based on the acquired information.

This type of AI is typically applied to structured data in order to extract relevant information, as in financial fraud detection systems or predictive analytics.

Large Language Model

Large Language Models (LLMs) are artificial neural networks that are able to perform natural language processing tasks. LLMs acquire these abilities by learning statistical relationships from vast amounts of text during a computationally intensive training process.
These models are trained on huge text datasets, typically have billions of parameters, and use transformer neural networks to capture language structures, nuances, and word relationships within texts in a human-like manner.

Machine Learning

Machine learning systems can learn from experience, through a mechanism seemingly similar to the way a human being learns from birth.

By analyzing large amounts of data, machine learning algorithms build models to explain the world and make predictions based on their experience. These programs can improve their analyses and predictions as they accumulate experience and additional samples of analyzed data.
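A minimal sketch of "learning from experience": ordinary least squares fits a line to example data, and the fitted parameters are the model. The data points below are invented and follow y = 2x + 1 exactly:

```python
def fit_line(xs, ys):
    """Learn slope and intercept from example data
    (ordinary least squares for one input variable)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Experience": observed examples of inputs and outputs.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(round(slope, 2), round(intercept, 2))  # 2.0 1.0
```

Adding more examples refines the fitted parameters, which is the sense in which the model "improves with accumulated experience".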

Machine learning bias

Machine learning bias refers to systematic errors that can occur when a machine learning algorithm makes predictions based on training data. These biases can occur for a variety of reasons, mainly due to the quality of the training data, the method of data collection, or the way the algorithm is designed. Bias can lead to inaccurate or unfair results, so it is important to understand and manage it.

Narrow Artificial Intelligence

Narrow Artificial Intelligence refers to AI systems that specialize in specific tasks.

Narrow AI is designed to perform limited and well-defined tasks such as speech recognition or machine translation. Narrow AI systems are highly efficient at performing such tasks.

Natural Language Processing

Natural Language Processing (NLP) refers to Artificial Intelligence (AI) algorithms that can analyze and understand natural language, the language people use every day.

NLP enables human-machine communication and deals with written text or sequences of words, but also with spoken language (speech recognition). Purposes can range from simple content understanding, to translation, to autonomous text production from input data or documents.

NLP is used in spell checkers, machine translation systems for written text, chatbots and voice assistants for spoken language.

Neural network

Artificial neural networks (ANN) are mathematical models built around artificial neurons and inspired by the functioning of biological neural networks in humans. Neural networks are now in daily use, solving Artificial Intelligence engineering problems in a wide range of technological fields.
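A single artificial neuron can be sketched in a few lines: it computes a weighted sum of its inputs plus a bias and passes the result through a sigmoid activation. The weights here are arbitrary illustrative values:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation (output between 0 and 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))  # ≈ 0.599
```

A full network connects many such neurons in layers, and training adjusts the weights and biases.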

Pattern Recognition

A pattern is a recurring occurrence of behaviors, actions, or situations.

Pattern recognition involves the analysis and identification of patterns in raw data. This data is classified based on previously acquired knowledge or information extracted from previously stored patterns. Input data can be words or text, images, or audio files.
Pattern recognition is used in image processing, speech and text recognition, and optical character recognition in scanned documents such as contracts and invoices. 

Predictive Analysis

Predictive analysis is the use of data, statistical algorithms, and machine learning techniques to predict future outcomes. It searches for patterns in historical and transactional data in order to assess the likelihood of certain events occurring in the future.

Recommendation System

Recommendation systems are machine learning applications designed to predict and cater to users' preferences, interests, and decisions, based on various factors and on information provided by the user, either directly or indirectly.

Algorithms track users' actions and learn their preferences and interests by comparing them with those of others. In this way, similarities between users and items for recommendation are found, and as the user uses the platform, the algorithms become more precise in their suggestions. 
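One common building block for finding such similarities is a similarity measure between users' rating vectors. The hypothetical sketch below uses cosine similarity (ratings invented for illustration):

```python
import math

# Hypothetical ratings by two users for the same three items.
alice = [5, 3, 0]
bob = [4, 2, 1]

def cosine_similarity(u, v):
    """Similarity between two rating vectors (1.0 = identical taste)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(round(cosine_similarity(alice, bob), 3))  # 0.973
```

In user-based collaborative filtering, items liked by the most similar users become candidate recommendations.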

These systems are now the main pillar of the business model of all social and e-commerce platforms (Amazon, Netflix, Spotify, YouTube...).

Robotic Process Automation

Robotic Process Automation (RPA) encompasses all technologies and applications that mimic human interaction with computer systems. Specifically, it is the automation of work processes using software (bots) that can automatically perform repetitive tasks and mimic human behavior.

Unlike traditional automated tasks that rely on structured data (e.g., APIs), RPA can also handle unstructured data (e.g., images and documents). 

Sentiment Analysis

Sentiment analysis is a Natural Language Processing (NLP) technique used to listen to and analyze the feelings and opinions that users express in social networks, forums, or blogs about a product, company, or service. It gathers data from online content about the emotions users have felt in specific contexts, focusing on polarity (positive, negative, neutral), but also on feelings and emotions (angry, happy, sad, etc.), urgency (urgent, non-urgent), and intentions (interested, not interested). It is often used to monitor customer feedback about a particular product or service, to analyze brand reputation, or to understand customer needs.
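At its simplest, polarity can be approximated by counting words from a sentiment lexicon. The sketch below uses a tiny invented lexicon; real systems use far larger weighted lexicons or trained classifiers:

```python
# Toy sentiment lexicon (illustrative only).
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "sad"}

def polarity(text: str) -> str:
    """Classify text as positive, negative, or neutral by
    counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this excellent product"))  # positive
print(polarity("terrible support, I hate it"))    # negative
```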

Speech Recognition

Speech recognition is the ability of a computer to process human speech into written text or other data formats.

Speech recognition enables applications that would otherwise require manual, repetitive commands, such as voice-activated chatbots, call routing in contact centers, dictation and voice transcription solutions, and user interface controls for PC, mobile, and in-vehicle systems.

Synthetic Data

Synthetic data is data artificially generated by generative machine learning algorithms. Starting from real data sets, a new data set is created that has the same statistical properties as the original but shares none of its actual records.

Synthesis allows data to be anonymized and generated according to user-specified parameters, so that it remains as close as possible to data collected from real-world scenarios.
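A crude version of this idea is to fit a simple distribution to the real values and sample new ones from it. The measurements below are invented, and a Gaussian fit is a deliberate oversimplification of real synthetic-data generators:

```python
import random
import statistics

random.seed(0)  # deterministic for the example

# "Real" measurements (hypothetical).
real = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
mu, sigma = statistics.mean(real), statistics.stdev(real)

# Synthetic values: same statistical profile, no original record reused.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]
print(round(statistics.mean(synthetic), 2))  # close to the real mean
```

Practical generators model the joint distribution of many variables (e.g., with GANs or variational autoencoders) rather than a single Gaussian.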

Turing Test

The Turing test was developed by the British scientist Alan Turing in the 1950s to probe a machine's ability to imitate human behavior and to assess the presence or absence of "human" intelligence in a machine.

This test is also known as the "Imitation Game". It involves a judge sitting at a terminal through which they communicate with two entities: a human and a computer. If the judge cannot distinguish the human from the machine, the computer has passed the test and can be called "intelligent".

Transformer

A transformer is a type of neural network architecture. It was introduced by Google in 2017 in the article "Attention Is All You Need."

Transformers are based on attention, a mechanism that allows the network to learn the relationships between different parts of an input, such as words and sentences. As a result, they are effective at handling relationships between words or linguistic units within a text.
Transformers are particularly well suited for natural language processing (NLP) tasks such as machine translation, text generation, natural language classification, and more.
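The attention mechanism for a single query can be sketched in pure Python: score each key against the query, normalize the scores with a softmax, and return the weighted average of the values. The vectors below are toy numbers, not real embeddings:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector:
    score keys against the query, softmax the scores,
    and return the weighted average of the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Three "words", each represented by a 2-dimensional vector.
keys = values = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention([1.0, 0.0], keys, values))
```

Real transformers compute this in parallel for every position, with learned projections producing the queries, keys, and values, and with many attention heads stacked in layers.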