Positional Encoding
Positional encoding is a layer in the Transformer architecture that injects positional information into the elements of the input sequence and, where applicable, the output sequence. It can be fixed (i.e. defined by specific formulas, such as the sinusoidal functions used in the original Transformer) or learned as part of the training process.
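As an illustration of the fixed variant, the following is a minimal NumPy sketch of the sinusoidal positional encoding introduced in "Attention Is All You Need", where PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). The function name is illustrative, and the sketch assumes an even `d_model`:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of fixed sinusoidal encodings.

    Assumes d_model is even; each position gets d_model/2 sine values
    (even columns) interleaved with d_model/2 cosine values (odd columns).
    """
    positions = np.arange(seq_len)[:, np.newaxis]      # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]     # shape (1, d_model/2)
    # Angle rates decrease geometrically with the dimension index.
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions: cosine
    return pe

# The encoding is typically added elementwise to the token embeddings:
#     x = token_embeddings + sinusoidal_positional_encoding(seq_len, d_model)
```

Because the encoding is a deterministic function of position, it introduces no trainable parameters; a learned positional encoding would instead store a trainable embedding table of shape (max_seq_len, d_model).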