Zubnet AILearnWiki › AI Security
Safety

AI Security

Also known as: LLM Security, AI Safety Engineering
The practice of protecting AI systems from adversarial attacks, data poisoning, prompt injection, model theft, and misuse — while also defending against AI-enabled threats like deepfakes and automated cyberattacks. AI security sits at the intersection of traditional cybersecurity and the unique vulnerabilities introduced by machine learning systems.

Why it matters

AI systems are simultaneously powerful tools and novel attack surfaces. A prompt injection can make your customer-support bot leak internal data. A poisoned training dataset can insert backdoors. As AI gets deployed in critical infrastructure, healthcare, and finance, security isn't optional — it's existential.

Related Concepts

← All Terms
← AI Privacy Alibaba Cloud →
ESC