Pre-training

The initial phase of training a foundation model on a large, diverse dataset to learn general-purpose representations. For language models, pre-training typically means next-token prediction over billions of documents. The resulting base model is then fine-tuned for specific tasks.
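Concretely, next-token prediction is trained by minimizing the cross-entropy between the model's predicted distribution at each position and the actual next token. A minimal sketch of that objective in plain Python (the function name and toy inputs are illustrative, not from any particular framework):

```python
import math

def next_token_loss(logits, tokens):
    """Average cross-entropy of predicting each next token.

    logits: per-position scores; logits[i][v] scores vocabulary
            item v as the token following position i.
    tokens: the token-id sequence; position i predicts tokens[i + 1].
    """
    total = 0.0
    for i in range(len(tokens) - 1):
        row = logits[i]
        # Numerically stable log-softmax normalizer over the vocabulary.
        m = max(row)
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        # Negative log-probability assigned to the true next token.
        total += log_z - row[tokens[i + 1]]
    return total / (len(tokens) - 1)
```

With uniform (all-zero) logits over a vocabulary of 4, the loss is log 4 per position, matching the entropy of a uniform guess; pre-training drives this loss down across the corpus.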

Related terms

Foundation Model · Fine-Tuning · Training Data