Zubnet AI学习Wiki › Dual Use
Safety

Dual Use

Dual-Use Technology
能同时用于有益和有害目的的技术。AI 本质上是双用的:同一个帮医生诊断疾病的模型,也可能帮坏行为者合成危险化合物。同一个加速软件开发的代码生成模型,也可能帮助创建恶意软件。管理双用风险是 AI 治理的核心挑战。

为什么重要

双用是 AI 开发的根本张力。让模型更能干,不可避免地让它更能造成伤害。你不可能构建一个只推理好事情的强大推理引擎。这种张力驱动关于开源发布、API 限制、监管的辩论 — 当同样的能力既启用好又启用坏时,你怎么在最小化伤害的同时最大化益处?

Deep Dive

Dual use isn't unique to AI — nuclear physics, biology, and cryptography all face it. What makes AI different is the speed of proliferation: a dangerous biological technique requires a lab; a dangerous AI technique requires only a computer. This means traditional dual-use governance (export controls, lab safety regulations) translates imperfectly to AI, where the "lab" is a laptop and the "materials" are open-source code.

The Capability Evaluation Approach

Leading AI labs evaluate models for dangerous capabilities before release: Can it provide detailed instructions for bioweapons? Can it help with cyberattacks? Can it generate convincing disinformation at scale? These "dangerous capability evaluations" determine what safety measures are needed. Models that show elevated risk in specific areas receive additional guardrails, and capabilities are sometimes removed or restricted.

The Open-Source Tension

Dual use creates acute tension around open-weight model releases. Open models (Llama, Mistral) can be freely modified to remove safety guardrails, enabling misuse. But they also enable security research, academic study, privacy-preserving applications, and innovation that proprietary models don't allow. The debate has no easy resolution — both sides have legitimate arguments, and the optimal policy likely evolves as capabilities and risks change.

相关概念

← 所有术语
← Dropout Edge AI →