What is a transformer model?

A transformer model is a type of deep learning model that was introduced in 2017. These models have quickly become fundamental in natural language processing (NLP), and have been applied to a wide range of tasks in machine learning and artificial intelligence.

The model was first described in the 2017 paper “Attention Is All You Need” by Ashish Vaswani, a team at Google Brain and a group from the University of Toronto. The release of this paper is considered a watershed moment in the field, given how widely transformers are now used in applications such as training LLMs.

These models can translate text and speech in near-real time. For example, there are apps that now allow tourists to communicate with locals on the street in their primary language. They help researchers better understand DNA and speed up drug design. They can help detect anomalies and prevent fraud in finance and security. Vision transformers are similarly used for computer vision tasks.

OpenAI’s popular ChatGPT text generation tool makes use of transformer architectures for prediction, summarization, question answering and more, because they allow the model to focus on the most relevant segments of input text. The “GPT” seen in the tool’s various versions (e.g. GPT-2, GPT-3) stands for “generative pre-trained transformer.” Text-based generative AI tools such as ChatGPT benefit from transformer models because they can more readily predict the next word in a sequence of text, based on large, complex data sets.

The BERT model, or Bidirectional Encoder Representations from Transformers, is based on the transformer architecture. As of 2019, BERT was used for nearly all English-language Google search results, and has been rolled out to over 70 other languages.1

How are transformer models different?

The key innovation of the transformer model is that it does not rely on recurrent neural networks (RNNs) or convolutional neural networks (CNNs), neural network approaches that have significant drawbacks. Transformers process input sequences in parallel, making them highly efficient for training and inference, because parallel computation scales readily across additional GPUs, whereas sequential architectures cannot be sped up simply by adding more hardware. Transformer models need less training time than previous recurrent neural network architectures such as long short-term memory (LSTM).

RNNs and LSTMs date back to the 1980s and 1990s, respectively. These techniques compute each component of an input in sequence (e.g. word by word), so computation can take a long time. What’s more, both approaches run into limitations in retaining context when the “distance” between pieces of information in an input is long.

Two big innovations

There are two primary innovations that transformer models bring to the table. Consider these two innovations within the context of predicting text.

  1. Positional encoding: Instead of processing words strictly in the order that they appear in a sentence, the model assigns a unique position-based value to each token (tokens are parts of the input, such as words or subword pieces in NLP) and adds it to the token’s embedding. This gives the model explicit information about where each token sits in the sequence, so it can use word order even while processing all tokens in parallel. (A minimal sketch of both innovations appears after this list.)

  2. Self-attention: Attention is a mechanism that calculates weights for every word in a sentence as it relates to every other word in the sentence, so the model can predict words that are likely to be used in sequence. This understanding is learned over time as the model is trained on large amounts of data. The self-attention mechanism allows each word to attend to every other word in the sequence in parallel, weighing its importance for the current token. In this way, machine learning models can be said to “learn” the rules of grammar, based on statistical probabilities of how words are typically used in language.
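
To make these two innovations concrete, here is a minimal sketch in Python with NumPy. It is illustrative only: the weight matrices are random stand-ins for parameters that a real model would learn during training. It builds sinusoidal positional encodings (the scheme used in the original paper) and runs a single head of scaled dot-product self-attention.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal encodings: each position gets a unique pattern of sine/cosine values.
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model)[None, :]            # (1, d_model)
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def self_attention(x, Wq, Wk, Wv):
    # Scaled dot-product attention for a single head.
    q, k, v = x @ Wq, x @ Wk, x @ Wv           # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])    # similarity of every token to every other token
    weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)  # row-wise softmax
    return weights @ v                         # each output is a weighted mix of value vectors

# Toy run: 4 tokens with 8-dimensional embeddings (random stand-ins for learned values).
rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d)) + positional_encoding(4, d)   # embeddings + position information
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)                # (4, 8): one vector per token
```

The `weights` matrix is the attention pattern itself: row i shows how much token i draws on every other token when building its new representation.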

How do transformer models work?

Transformer models work by processing input data, which can be sequences of tokens or other structured data, through a series of layers that contain self-attention mechanisms and feedforward neural networks. The core idea behind how transformer models work can be broken down into several key steps.

Let’s imagine that you need to convert an English sentence into French. These are the steps you’d need to take to accomplish this task with a transformer model.

  1. Input embeddings: The input sentence is first transformed into numerical representations called embeddings. These capture the semantic meaning of the tokens in the input sequence. For sequences of words, these embeddings can be learned during training or obtained from pre-trained word embeddings.

  2. Positional encoding: Positional encoding is typically introduced as a set of additional values or vectors that are added to the token embeddings before feeding them into the transformer model. These positional encodings have specific patterns that encode the position information.

  3. Multi-head attention: Self-attention operates in multiple "attention heads" to capture different types of relationships between tokens. Softmax functions, a type of activation function, are used to calculate attention weights in the self-attention mechanism.

  4. Layer normalization and residual connections: The model uses layer normalization and residual connections to stabilize and speed up training.

  5. Feedforward neural networks: The output of the self-attention layer is passed through feedforward layers. These networks apply non-linear transformations to the token representations, allowing the model to capture complex patterns and relationships in the data.

  6. Stacked layers: Transformers typically consist of multiple layers stacked on top of each other. Each layer processes the output of the previous layer, gradually refining the representations. Stacking multiple layers enables the model to capture hierarchical and abstract features in the data. (Steps 3 through 6 are sketched in code after this list.)

  7. Output layer: In sequence-to-sequence tasks like neural machine translation, a separate decoder module is paired with the encoder. The decoder attends to the encoder’s output and generates the output sequence one token at a time, with a final linear layer producing probabilities over the output vocabulary.

  8. Training: Transformer models are trained using supervised learning, where they learn to minimize a loss function that quantifies the difference between the model's predictions and the ground truth for the given task. Training typically involves optimization techniques like Adam or stochastic gradient descent (SGD).

  9. Inference: After training, the model can be used for inference on new data. During inference, the input sequence is passed through the pre-trained model, and the model generates predictions or representations for the given task. (A training-and-inference sketch follows the encoder-layer sketch below.)
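
Steps 3 through 6 can be compressed into a single encoder layer. The sketch below, again in Python with NumPy, is illustrative only: weights are random stand-ins for learned parameters, and positional encoding is omitted for brevity. It shows multi-head self-attention followed by a residual connection, layer normalization and a feedforward network, then stacks three such layers.

```python
import numpy as np
rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))   # subtract max for numerical stability
    return e / e.sum(-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    # Normalize each token vector to zero mean and unit variance (step 4).
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def encoder_layer(x, n_heads=2):
    seq_len, d = x.shape
    d_head = d // n_heads
    heads = []
    for _ in range(n_heads):                   # step 3: multiple attention heads
        Wq, Wk, Wv = (rng.normal(size=(d, d_head)) for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        heads.append(softmax(q @ k.T / np.sqrt(d_head)) @ v)
    Wo = rng.normal(size=(d, d))
    attn = np.concatenate(heads, axis=-1) @ Wo # merge heads back to model width
    x = layer_norm(x + attn)                   # step 4: residual connection + layer norm
    W1, W2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))
    ffn = np.maximum(0, x @ W1) @ W2           # step 5: feedforward network with ReLU
    return layer_norm(x + ffn)                 # step 4 again, around the feedforward sub-layer

x = rng.normal(size=(5, 16))                   # stand-in for embedded input tokens (steps 1-2)
for _ in range(3):                             # step 6: stacked layers refine the representations
    x = encoder_layer(x)
print(x.shape)                                 # (5, 16): one refined vector per token
```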
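
For steps 8 and 9, deep learning frameworks provide these pieces ready-made. The following sketch uses PyTorch’s built-in transformer encoder modules to train a tiny next-token predictor with the Adam optimizer and then run inference; the vocabulary size, data and dimensions are arbitrary toy values, not from any real system.

```python
import torch
import torch.nn as nn

vocab_size, d_model, seq_len, batch = 100, 32, 10, 16   # arbitrary toy sizes

class TinyTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)          # step 1: input embeddings
        self.pos = nn.Parameter(torch.zeros(seq_len, d_model))  # step 2: learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # steps 3-6
        self.out = nn.Linear(d_model, vocab_size)               # step 7: output layer

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens) + self.pos)
        return self.out(h)                                      # logits over the vocabulary

model = TinyTransformer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)       # step 8: Adam optimization
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (batch, seq_len))         # random stand-in training data
targets = torch.roll(tokens, -1, dims=1)                        # train to predict the next token
for _ in range(5):                                              # a few supervised training steps
    logits = model(tokens)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():                                           # step 9: inference on new data
    predictions = model(tokens).argmax(dim=-1)
```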

Footnotes

1 Google’s BERT Rolls Out Worldwide (link resides outside ibm.com), Search Engine Journal, Dec 9, 2019
