The EU AI Act uses a risk-based framework with four tiers. Unacceptable risk (banned): social scoring and real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions). High risk (strict requirements): AI in hiring, education, law enforcement, and critical infrastructure, which must undergo conformity assessments and meet data governance, human oversight, and documentation requirements. Limited risk (transparency obligations): chatbots must disclose that they are AI, and deepfakes must be labeled. Minimal risk (no additional requirements): spam filters and video-game AI.
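To make the tier structure concrete, here is a minimal sketch of how a compliance team might encode the four tiers as a lookup. The `RiskTier` enum, the use-case names, and the mapping are hypothetical illustrations, not the Act's own taxonomy; real classification turns on the Act's Annex III use-case list and legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment, data governance, human oversight, documentation"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional requirements"

# Hypothetical mapping for illustration only. Real classification turns on
# the Act's Annex III use-case list and legal analysis, not a lookup table.
USE_CASE_TIER = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "exam_proctoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the tier and headline obligations for a known use case."""
    tier = USE_CASE_TIER.get(use_case)
    if tier is None:
        return f"{use_case}: unclassified, needs legal review"
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in USE_CASE_TIER:
    print(obligations(case))
```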
The EU AI Act specifically addresses foundation models, which it calls "general-purpose AI models." All providers must publish summaries of their training content, maintain a policy for complying with EU copyright law, and provide technical documentation. Models presumed to pose "systemic risk" (under the Act, any model trained with more than 10^25 FLOPs of cumulative compute) face additional obligations, including model evaluations with adversarial testing, serious-incident reporting, and cybersecurity measures. This directly affects companies such as Anthropic, OpenAI, Google, and Meta.
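As a rough illustration of where the systemic-risk line falls, the sketch below applies the standard ~6 FLOPs-per-parameter-per-token training-compute heuristic to two made-up model configurations and checks them against the Act's 10^25 FLOP presumption. The model names and sizes are assumptions, not real disclosures.

```python
# Article 51 of the Act presumes systemic risk for general-purpose models
# trained with more than 1e25 FLOPs of cumulative compute (a threshold the
# Commission can revise). Model names and sizes below are made up.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Standard transformer heuristic: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

models = {
    "mid_size_model": (7e9, 2e12),    # 7B parameters, 2T tokens (hypothetical)
    "frontier_model": (1e12, 15e12),  # 1T parameters, 15T tokens (hypothetical)
}

for name, (params, tokens) in models.items():
    flops = estimated_training_flops(params, tokens)
    presumed = flops > SYSTEMIC_RISK_FLOPS
    print(f"{name}: ~{flops:.1e} training FLOPs -> systemic risk presumed: {presumed}")
```

The heuristic is coarse and the threshold itself is adjustable, so output like this is rough triage, not a compliance determination.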
AI regulation is developing unevenly worldwide. The EU leads with comprehensive legislation. The US relies on executive orders, NIST frameworks, and sector-specific agencies (the FDA for medical AI, the FTC for consumer protection). China requires algorithm filings, labeling of AI-generated content, and security assessments before public-facing generative AI services can launch. The resulting patchwork forces global AI companies to navigate a different, sometimes conflicting, set of rules in each market they serve.