Prompt Injection
A security vulnerability where an attacker crafts input that overrides or manipulates a language model's system prompt. Hidden instructions in user-provided text can make the model ignore safety guidelines or leak confidential prompt content.