A family of machine learning models based on the transformer architecture that are pre-trained through self-supervised learning on large datasets of unlabeled text. It is currently the predominant architecture for large language models.
Sources:
NIST AI 100-2e2025, under "generative pre-trained transformer"
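
For illustration, a minimal Python sketch (not part of the source definition; the example text and the toy whitespace tokenizer are invented here) of what "self-supervised" means in this context: the training signal is derived from the unlabeled text itself, with each token serving as the prediction target for the tokens that precede it, so no human annotation is required.

    # Minimal sketch: deriving next-token training pairs from unlabeled text.
    text = "the transformer architecture now dominates large language models"
    tokens = text.split()  # toy whitespace tokenizer, for illustration only

    # Each (context, target) pair comes from the text itself:
    # the model learns to predict token i from tokens 0..i-1.
    pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

    for context, target in pairs:
        print(" ".join(context), "->", target)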