LoRA (Low-Rank Adaptation)

A parameter-efficient fine-tuning technique that trains small, low-rank matrices attached to existing model layers instead of updating all weights. LoRA dramatically reduces memory and compute for fine-tuning, making it feasible to customize large models on consumer hardware.
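The idea can be sketched in a few lines: the pretrained weight W stays frozen, and the trainable update is the product of two small matrices B and A whose rank is much lower than W's dimensions. The class and parameter names below are illustrative, not from any specific library, and the sketch uses NumPy rather than a deep-learning framework.

```python
import numpy as np

class LoRALinear:
    """Minimal sketch of a LoRA-adapted linear layer (hypothetical names)."""

    def __init__(self, weight, rank=4, alpha=8):
        self.weight = weight              # frozen pretrained weight, shape (out, in)
        out_dim, in_dim = weight.shape
        self.scale = alpha / rank         # common LoRA scaling convention
        # A gets small random values and B starts at zero, so the adapted
        # layer initially behaves exactly like the pretrained one.
        self.A = np.random.randn(rank, in_dim) * 0.01
        self.B = np.zeros((out_dim, rank))

    def __call__(self, x):
        # Effective weight is W + (alpha/rank) * B @ A; only A and B are trained,
        # so the number of trainable parameters is rank * (in + out), not in * out.
        return x @ (self.weight + self.scale * self.B @ self.A).T
```

Because only A and B are updated, a rank-8 adapter on a 4096x4096 layer trains about 65K parameters instead of roughly 16.8M, which is where the memory savings come from.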

Related terms

Fine-Tuning, QLoRA, Adapter