Safety

AI Regulation

EU AI Act, AI Policy
The laws and policies that govern the development and deployment of AI systems. The EU AI Act (2024) is the most comprehensive, classifying AI systems by risk level and imposing requirements accordingly. The US has taken a more sectoral approach, using executive orders and agency guidelines. China has regulations targeting generative AI, deepfakes, and recommendation algorithms.

Why it matters

Regulation shapes what AI companies can build, how they must build it, and what they must disclose. The EU AI Act affects every company that serves European users. Understanding the regulatory landscape is increasingly necessary for anyone building or deploying AI: non-compliance can mean fines, bans, or liability.

Deep Dive

The EU AI Act uses a risk-based framework:

- Unacceptable risk (banned): social scoring, real-time biometric surveillance in public (with exceptions).
- High risk (strict requirements): AI in hiring, education, law enforcement, and critical infrastructure; these systems require conformity assessments, data governance, human oversight, and documentation.
- Limited risk (transparency obligations): chatbots must disclose they're AI, and deepfakes must be labeled.
- Minimal risk (no requirements): spam filters, video game AI.
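The tiered structure above can be sketched as a simple lookup table. This is an illustrative sketch only, not legal guidance: the tier assignments and obligation summaries are simplified assumptions drawn from the description above.

```python
# Illustrative sketch of the EU AI Act's four-tier risk taxonomy.
# Tier contents and obligations are simplified assumptions, not legal guidance.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "real-time public biometric surveillance"],
        "obligation": "banned (with narrow exceptions)",
    },
    "high": {
        "examples": ["hiring", "education", "law enforcement", "critical infrastructure"],
        "obligation": "conformity assessment, data governance, human oversight, documentation",
    },
    "limited": {
        "examples": ["chatbots", "deepfakes"],
        "obligation": "transparency (disclose AI use, label synthetic media)",
    },
    "minimal": {
        "examples": ["spam filters", "video game AI"],
        "obligation": "none",
    },
}

def risk_tier(use_case: str) -> str:
    """Return the tier for a known use case; default to 'minimal' otherwise."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return tier
    return "minimal"

print(risk_tier("hiring"))    # high
print(risk_tier("chatbots"))  # limited
```

In practice, tier assignment depends on the specific deployment context, not just the application category, so a real compliance check is far more involved than a dictionary lookup.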

Foundation Model Rules

The EU AI Act specifically addresses foundation models (called "general-purpose AI models"). Providers must publish training data summaries, comply with copyright law, and implement safety evaluations. Models deemed to pose "systemic risk" (roughly: frontier models with significant compute budgets) face additional obligations including adversarial testing, incident reporting, and cybersecurity measures. This directly affects companies like Anthropic, OpenAI, Google, and Meta.

The Global Patchwork

AI regulation is developing unevenly worldwide. The EU leads with comprehensive legislation. The US relies on executive orders, NIST frameworks, and sector-specific agencies (FDA for medical AI, FTC for consumer protection). China requires algorithmic transparency, content labeling, and government approval for public-facing generative AI. This patchwork creates compliance challenges for global AI companies that must navigate different rules in different markets.

Related concepts

AI Privacy, AI Security