Self-attention Weights in Transformers
April 6, 2023
eGitty

Self-attention is a core component of the Transformer (Vaswani et al., 2017). It relates different positions of a single sequence of token representations to one another in order to compute a new representation of that sequence.
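To make this concrete, below is a minimal NumPy sketch of single-head scaled dot-product self-attention in the spirit of Vaswani et al. (2017). The function and matrix names, dimensions, and the absence of masking and multiple heads are illustrative assumptions, not details taken from this post.

```python
# A minimal sketch of single-head scaled dot-product self-attention.
# Names, shapes, and the lack of masking/multi-head logic are assumptions
# made for illustration only.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """X: (seq_len, d_model) token representations.
    W_q, W_k, W_v: (d_model, d_k) projection matrices."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v        # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise scores between positions
    weights = softmax(scores, axis=-1)         # attention weights: each row sums to 1
    return weights @ V, weights                # weighted sum of values per position

# Tiny usage example with random data.
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, attn = self_attention(X, W_q, W_k, W_v)
print(attn.shape)  # (4, 4): one weight for every pair of positions
```

Each row of the resulting weight matrix shows how strongly one position attends to every other position in the sequence, which is exactly the quantity examined in the rest of this post.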