This repository provides an implementation of the original transformer encoder-decoder model described in the paper *Attention Is All You Need*.
- The encoder is composed of a stack of identical layers (typically 6).
- Each layer contains two main components:
  - Multihead Self-Attention Mechanism
  - Position-Wise Fully Connected Feedforward Network
- The decoder is also composed of a stack of identical layers (typically 6).
- Each layer consists of three key components:
  - Multihead Masked Self-Attention Mechanism
  - Multihead Cross-Attention Mechanism (attends to the encoder output)
  - Position-Wise Fully Connected Feedforward Network
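Both the encoder and decoder layers use the same kind of position-wise feedforward sublayer. Below is a minimal PyTorch sketch, not this repository's code; it assumes the `linear_stretch` constructor argument (described further down) is the expansion factor of the hidden layer, which is an assumption rather than something stated here.

```python
import torch
import torch.nn as nn

class PositionWiseFFN(nn.Module):
    """Applied independently at every position: Linear -> ReLU -> Linear."""
    def __init__(self, d_model: int, linear_stretch: int = 4):
        super().__init__()
        # Assumption: linear_stretch is the expansion factor of the hidden layer.
        self.net = nn.Sequential(
            nn.Linear(d_model, linear_stretch * d_model),
            nn.ReLU(),
            nn.Linear(linear_stretch * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, L, d_model) -> (B, L, d_model)
        return self.net(x)
```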
- In the final decoder layer, the output is projected into a space of dimensionality $|V|$ (the size of the vocabulary).
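As an illustration of that final projection (again a sketch, not the repository's implementation), a single linear layer maps each decoder output vector from the decoder dimension into a $|V|$-dimensional space; a softmax over the last axis then gives per-token probabilities.

```python
import torch
import torch.nn as nn

d_dec, vocab_size = 512, 10_000          # illustrative values
to_vocab = nn.Linear(d_dec, vocab_size)  # projection into a |V|-dimensional space

dec_out = torch.randn(2, 7, d_dec)       # dummy decoder output: (B, L_dec, D_dec)
logits = to_vocab(dec_out)               # (B, L_dec, |V|)
probs = logits.softmax(dim=-1)           # distribution over the vocabulary per position
```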
- Residual connections are used around each sublayer, followed by layer normalization: $Y = \text{LayerNorm}(X + \text{Sublayer}(X))$
- Dropout is applied after each sublayer to avoid overfitting; a minimal sketch of this wrapper is shown below.
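A minimal sketch of that residual wrapper (post-norm, as in the original paper); the class name and arguments are illustrative, not taken from this repository.

```python
import torch
import torch.nn as nn

class SublayerConnection(nn.Module):
    """Y = LayerNorm(X + Dropout(sublayer(X))): residual, dropout, then layer norm."""
    def __init__(self, d_model: int, dropout: float = 0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor, sublayer) -> torch.Tensor:
        # sublayer is a callable such as an attention block or a feedforward block
        return self.norm(x + self.dropout(sublayer(x)))
```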
- Multihead Attention Mechanisms (Self, Masked, and Cross) utilize several Scaled Dot-Product Attention (SDPA) heads (typically 8) in parallel. The outputs of all SDPA heads are concatenated and projected to the desired output dimension.
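A sketch of how the heads can be combined, assuming PyTorch >= 2.0 for `torch.nn.functional.scaled_dot_product_attention`. Following the description above, each head's value (and output) dimension equals the model dimension, and the concatenated head outputs are projected back down; the class and parameter names are illustrative, not this repository's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    """Runs num_heads SDPA heads in parallel, concatenates them and projects back down."""
    def __init__(self, d_model: int, d_kq: int, num_heads: int = 8):
        super().__init__()
        # Per-head projections; the value (and head output) dimension equals d_model,
        # matching the description above.
        self.q_proj = nn.ModuleList([nn.Linear(d_model, d_kq) for _ in range(num_heads)])
        self.k_proj = nn.ModuleList([nn.Linear(d_model, d_kq) for _ in range(num_heads)])
        self.v_proj = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(num_heads)])
        self.out_proj = nn.Linear(num_heads * d_model, d_model)  # projection after concat

    def forward(self, q_in, k_in, v_in, is_causal: bool = False):
        heads = [
            F.scaled_dot_product_attention(q(q_in), k(k_in), v(v_in), is_causal=is_causal)
            for q, k, v in zip(self.q_proj, self.k_proj, self.v_proj)
        ]
        return self.out_proj(torch.cat(heads, dim=-1))
```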
In the transformer architecture, there are three distinct types of attention heads. When used in parallel, these form their respective Multihead Attention Mechanisms.
- Self-Attention: used in the encoder, where all three matrices (Q, K, V) are computed from the same input, either the initial embeddings (in the first layer) or the output of the previous encoder layer (in subsequent layers).
- The embedding dimension and the dimension of the value vector are the same.
- For scaled dot-product attention, the entries of the attention matrix are $A_{ij} = \text{score}(q_i, k_j) = \frac{q_i \cdot k_j}{\sqrt{d_{kq}}}$, and a row-wise softmax turns these scores into attention weights; see the sketch below.
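The sketch below spells out a single self-attention head, with Q, K and V all derived from the same input and the attention matrix computed explicitly (illustrative code, not this repository's).

```python
import torch

def self_attention_head(x, w_q, w_k, w_v):
    """One SDPA head where Q, K and V all come from the same input x of shape (B, L, D)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v      # (B, L, d_kq), (B, L, d_kq), (B, L, D)
    scores = q @ k.transpose(-2, -1)         # A_ij = q_i . k_j, shape (B, L, L)
    scores = scores / k.shape[-1] ** 0.5     # scale by sqrt(d_kq)
    weights = scores.softmax(dim=-1)         # each row sums to 1
    return weights @ v                       # (B, L, D)

# toy usage
B, L, D, d_kq = 2, 5, 16, 8
x = torch.randn(B, L, D)
out = self_attention_head(x, torch.randn(D, d_kq), torch.randn(D, d_kq), torch.randn(D, D))
```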
- Masked Self-Attention: used in the decoder, it operates similarly to self-attention, but with a crucial modification: the attention weights for any query are forced to 0 for key vectors that correspond to future positions in the sequence (in practice, the corresponding scores are set to $-\infty$ before the softmax).
- In other words, $A_{ij} = 0$ for all $j > i$ after the softmax. This ensures that the attention mechanism only considers past and current tokens, avoiding "peeking" at future tokens. As a result, the attention weight matrix is lower triangular.
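A small sketch of the masking step, assuming the usual implementation: scores for positions $j > i$ are set to $-\infty$ before the softmax, so the resulting weights are exactly 0 above the diagonal.

```python
import torch

L = 5
scores = torch.randn(L, L)                               # raw scaled dot-product scores
causal = torch.tril(torch.ones(L, L, dtype=torch.bool))  # True where j <= i
masked = scores.masked_fill(~causal, float("-inf"))      # block future positions
weights = masked.softmax(dim=-1)                         # lower-triangular weight matrix
assert torch.allclose(weights.triu(1), torch.zeros(L, L))
```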
- Cross-attention allows the decoder to focus on relevant parts of the encoder output. Here, the query matrix $Q$ is derived from the decoder input, while the key $K$ and value $V$ matrices are computed from the encoder output.
- The attention mechanism works as usual, attending to the encoder output based on the decoder's current input.
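A short sketch of where the inputs come from in cross-attention (again illustrative, assuming PyTorch >= 2.0): queries are projected from the decoder stream, while keys and values are projected from the encoder output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_enc, d_dec, d_kq = 512, 512, 64
to_q = nn.Linear(d_dec, d_kq)    # Q comes from the decoder input
to_k = nn.Linear(d_enc, d_kq)    # K comes from the encoder output
to_v = nn.Linear(d_enc, d_dec)   # V comes from the encoder output

enc_out = torch.randn(2, 9, d_enc)  # (B, L_enc, D_enc)
dec_in = torch.randn(2, 7, d_dec)   # (B, L_dec, D_dec)

# Each decoder position attends over every encoder position.
cross = F.scaled_dot_product_attention(to_q(dec_in), to_k(enc_out), to_v(enc_out))
print(cross.shape)  # torch.Size([2, 7, 512])
```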
```python
my_enc = ENCODER(encoder_dimension, kq_dimension, vocab_size, max_seq_len,
                 num_heads, linear_stretch, num_layers, padding_index,
                 use_pos_enc, device, dtype)
```
INPUT: Tensor of shape
- $B$ is the batch size and $L_{enc}$ is the sequence length (if the sequence length is variable, use padding)
- $D_{enc}$ is the encoder dimension
```python
my_dec = DECODER(decoder_dimension, encoder_dimension, kq_dimension, vocab_size,
                 max_seq_len, num_heads, linear_stretch, num_layers, padding_index,
                 use_pos_enc, dropout, device, dtype)
```
INPUT: Tensor of shape
- $B$ is the batch size and $L_{dec}$ is the maximum sequence length up to which decoding should take place
- $D_{dec}$ is the decoder dimension
- decoder_dimension: dimension of the embeddings, sublayer outputs and value vectors in the decoder
- encoder_dimension: dimension of the embeddings, sublayer outputs and value vectors in the encoder
- kq_dimension: dimension of the key and query vectors in the attention blocks
- vocab_size: number of unique words in the vocabulary
- num_heads: number of SDPA heads in each multihead attention block
- num_layers: number of layers in the encoder/decoder
- padding_index: index of the pad token, used in case of variable-length sequences
- use_pos_enc (bool): whether to use sinusoidal positional encodings alongside the embeddings
- dropout: fraction (0-1) of nodes to be turned off in each dropout layer
- dtype: datatype of the tensors used
- device: device on which tensors are stored and computed
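A hypothetical end-to-end sketch tying the two constructors together. The hyperparameter values, the import path, the token-index input shapes, and in particular the forward-call signatures (`my_enc(src)`, `my_dec(tgt, enc_out)`) are assumptions made for illustration; check the module code in this repository for the actual interface.

```python
import torch
# from model import ENCODER, DECODER  # adjust to this repository's actual module layout

# Illustrative hyperparameters
B, L_enc, L_dec = 32, 40, 40
enc_dim = dec_dim = 512
kq_dim, vocab, heads, stretch, layers = 64, 10_000, 8, 4, 6
pad_idx, device, dtype = 0, "cpu", torch.float32

my_enc = ENCODER(enc_dim, kq_dim, vocab, L_enc, heads, stretch,
                 layers, pad_idx, True, device, dtype)
my_dec = DECODER(dec_dim, enc_dim, kq_dim, vocab, L_dec, heads, stretch,
                 layers, pad_idx, True, 0.1, device, dtype)

src = torch.randint(1, vocab, (B, L_enc))  # padded source token indices (assumed input format)
tgt = torch.randint(1, vocab, (B, L_dec))  # shifted target token indices (assumed input format)

# Assumed forward signatures -- verify against the module definitions.
enc_out = my_enc(src)          # expected shape: (B, L_enc, enc_dim)
logits = my_dec(tgt, enc_out)  # expected shape: (B, L_dec, vocab)
```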