Safety

AI Regulation

EU AI Act, AI Policy
Laws and policies governing the development and deployment of AI systems. The EU AI Act (2024) is the most comprehensive: it classifies AI systems by risk level and imposes requirements accordingly. The US takes a more sectoral approach through executive orders and agency guidance. China regulates generative AI, deepfakes, and recommendation algorithms.

Why It Matters

Regulation shapes what AI companies can build, how they must build it, and what they must disclose. The EU AI Act affects any company serving European users. Understanding the regulatory landscape is increasingly necessary for anyone building or deploying AI: non-compliance can mean fines, bans, or liability.

Deep Dive

The EU AI Act uses a risk-based framework with four tiers:

- Unacceptable risk (banned): social scoring, real-time biometric surveillance in public spaces (with exceptions).
- High risk (strict requirements): AI in hiring, education, law enforcement, and critical infrastructure; these systems require conformity assessments, data governance, human oversight, and documentation.
- Limited risk (transparency obligations): chatbots must disclose they are AI, and deepfakes must be labeled.
- Minimal risk (no requirements): spam filters, video game AI.
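The tiered structure above can be sketched as a simple lookup. This is an illustrative toy, not a legal classification tool: the tier names follow the Act, but the use-case mapping and the `obligations` helper are assumptions made for the example.

```python
from enum import Enum


class RiskTier(Enum):
    """The EU AI Act's four risk tiers, with a summary of each tier's burden."""
    UNACCEPTABLE = "banned"
    HIGH = "conformity assessment, data governance, human oversight, documentation"
    LIMITED = "transparency obligations"
    MINIMAL = "no requirements"


# Illustrative mapping of example use cases to tiers (an assumption for
# this sketch; real classification depends on the Act's detailed annexes).
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def obligations(use_case: str) -> str:
    """Return the (illustrative) obligations attached to a use case's tier."""
    tier = USE_CASE_TIERS[use_case]
    return f"{tier.name}: {tier.value}"


print(obligations("hiring_screening"))
# → HIGH: conformity assessment, data governance, human oversight, documentation
```

The point of the structure: obligations attach to the tier, not the use case, so classifying a system is the central compliance question.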

Foundation Model Rules

The EU AI Act specifically addresses foundation models (called "general-purpose AI models"). Providers must publish training data summaries, comply with copyright law, and implement safety evaluations. Models deemed to pose "systemic risk" (roughly: frontier models with significant compute budgets) face additional obligations including adversarial testing, incident reporting, and cybersecurity measures. This directly affects companies like Anthropic, OpenAI, Google, and Meta.
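The Act's systemic-risk presumption is compute-based: general-purpose models trained with more than 10^25 floating-point operations are presumed to pose systemic risk. A minimal sketch of that check, with illustrative figures:

```python
# Presumption threshold from the EU AI Act for general-purpose AI models;
# the example training-run figure below is illustrative, not a real number.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a model's training compute triggers the Act's presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


print(presumed_systemic_risk(3e25))  # frontier-scale run → True
print(presumed_systemic_risk(1e22))  # smaller model → False
```

In practice the presumption is rebuttable and regulators can also designate models by other criteria, so the threshold is a starting point rather than the whole test.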

The Global Patchwork

AI regulation is developing unevenly worldwide. The EU leads with comprehensive legislation. The US relies on executive orders, NIST frameworks, and sector-specific agencies (FDA for medical AI, FTC for consumer protection). China requires algorithmic transparency, content labeling, and government approval for public-facing generative AI. This patchwork creates compliance challenges for global AI companies that must navigate different rules in different markets.

Related Concepts

← All Terms
← AI Privacy · AI Security →