Safety

Bias

Systematic patterns in AI outputs that reflect or amplify societal prejudices present in training data. Bias can appear in text generation, image creation, hiring tools, and anywhere models make decisions that affect people differently.

Why it matters

If the training data overwhelmingly depicts nurses as women and engineers as men, the model will reproduce those associations. Bias isn't always obvious — it hides in word associations, default assumptions, and who gets represented.
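One common way to surface hidden word associations is to compare embedding similarities, in the spirit of the WEAT test. The sketch below uses made-up toy vectors (not real embeddings) so the idea is visible in a few lines: a word "leans" toward whichever gendered word it sits closer to in vector space.

```python
# Toy sketch of association bias in word embeddings (WEAT-style idea).
# All vectors are invented 3-d toy values, NOT real model embeddings.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Hypothetical embeddings where "nurse" sits near "she", "engineer" near "he".
emb = {
    "he":       [1.0, 0.1, 0.0],
    "she":      [0.1, 1.0, 0.0],
    "nurse":    [0.2, 0.9, 0.1],
    "engineer": [0.9, 0.2, 0.1],
}

def gender_lean(word):
    """Positive -> closer to 'she'; negative -> closer to 'he'."""
    return cosine(emb[word], emb["she"]) - cosine(emb[word], emb["he"])

print(gender_lean("nurse"))     # positive: associated with "she"
print(gender_lean("engineer"))  # negative: associated with "he"
```

With real embeddings trained on biased text, the same comparison exposes the stereotyped associations the entry describes — which is why auditing tools measure exactly this kind of gap.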
