Technically, negative prompts work through classifier-free guidance (CFG). At each denoising step, the model computes two noise predictions: one conditioned on the positive prompt and one conditioned on the negative prompt, which takes the place of the empty (unconditional) prompt that standard CFG would otherwise use. The final prediction moves toward the positive conditioning and away from the negative: final = negative + scale × (positive − negative). The guidance scale controls how strongly the result is pushed toward the positive prompt and away from the negative one.
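The combination step can be sketched in a few lines. This is a minimal illustration of the formula above operating on already-computed noise predictions (represented here as toy arrays), not a real denoising loop; the function name is my own.

```python
import numpy as np

def cfg_combine(positive_pred, negative_pred, guidance_scale):
    """Blend two noise predictions per the CFG formula:
    move toward the positive conditioning, away from the negative."""
    return negative_pred + guidance_scale * (positive_pred - negative_pred)

# Toy example: scale 1.0 reproduces the positive prediction exactly;
# larger scales push the result past it, away from the negative.
pos = np.array([1.0, 0.5])
neg = np.array([0.2, 0.5])
print(cfg_combine(pos, neg, 1.0))   # equals pos
print(cfg_combine(pos, neg, 7.5))   # overshoots pos where pos and neg differ
```

Note that at scale 0 the output is just the negative-conditioned prediction, and at scale 1 it is just the positive one; typical guidance scales (5–8 in many Stable Diffusion setups) deliberately overshoot.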
The community has developed standard negative prompts for common issues: "blurry, low quality, jpeg artifacts" (quality), "extra fingers, deformed hands, extra limbs" (anatomy), "text, watermark, signature, logo" (unwanted elements), "ugly, disfigured, bad proportions" (general quality). Many users have a default negative prompt they include with every generation. Custom negative prompts address domain-specific issues.
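A default negative prompt like those above is often just a comma-joined set of presets plus any domain-specific terms. A small sketch, assuming hypothetical preset names of my own choosing (the preset strings themselves come from the list above):

```python
# Community presets, keyed by my own category labels.
NEGATIVE_PRESETS = {
    "quality": "blurry, low quality, jpeg artifacts",
    "anatomy": "extra fingers, deformed hands, extra limbs",
    "unwanted": "text, watermark, signature, logo",
    "general": "ugly, disfigured, bad proportions",
}

def build_negative_prompt(categories, extra=""):
    """Join the selected presets (plus optional custom terms)
    into a single comma-separated negative prompt."""
    parts = [NEGATIVE_PRESETS[c] for c in categories]
    if extra:
        parts.append(extra)
    return ", ".join(parts)

# A default covering quality and unwanted elements, with one custom term:
print(build_negative_prompt(["quality", "unwanted"], extra="cartoon"))
```

The resulting string is what you would pass as the negative prompt alongside each generation.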
Negative prompts work with models that support classifier-free guidance (most Stable Diffusion variants, Flux). DALL-E 3 and Midjourney don't expose negative prompts as a user-facing feature — they handle quality issues through their prompt rewriting and internal quality mechanisms. The trend in newer models is to reduce the need for negative prompts by improving default quality, but they remain valuable for precise control in open models.