
Dual Use

Dual-Use Technology
Technology that can serve both beneficial and harmful purposes. AI is inherently dual-use: the same model that helps a doctor diagnose disease could help a bad actor synthesize dangerous compounds, and the same code-generation model that accelerates software development could help create malware. Managing dual-use risk is a central challenge of AI governance.

Why It Matters

Dual use is a fundamental tension in AI development. Making a model more capable inevitably makes it more capable of causing harm: you cannot build a powerful reasoning engine that only reasons about good things. This tension drives the debates over open-source releases, API restrictions, and regulation. When the same capability enables both good and bad outcomes, how do you maximize benefit while minimizing harm?

Deep Dive

Dual use isn't unique to AI — nuclear physics, biology, and cryptography all face it. What makes AI different is the speed of proliferation: a dangerous biological technique requires a lab; a dangerous AI technique requires only a computer. This means traditional dual-use governance (export controls, lab safety regulations) translates imperfectly to AI, where the "lab" is a laptop and the "materials" are open-source code.

The Capability Evaluation Approach

Leading AI labs evaluate models for dangerous capabilities before release: Can it provide detailed instructions for bioweapons? Can it help with cyberattacks? Can it generate convincing disinformation at scale? These "dangerous capability evaluations" determine what safety measures are needed. Models that show elevated risk in specific areas receive additional guardrails, and capabilities are sometimes removed or restricted.
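As a toy illustration of this gating logic, the sketch below scores a model on a few risk domains and maps the results to a release decision. It is a minimal sketch, not any lab's actual process: the domains, thresholds, scores, and names (`EvalResult`, `release_gate`) are all invented for this example.

```python
from dataclasses import dataclass

# Hypothetical risk domains drawn from the text: bio, cyber, disinformation.
RISK_DOMAINS = ("bioweapons", "cyberattacks", "disinformation")

@dataclass
class EvalResult:
    domain: str
    score: float      # fraction of probe tasks the model completed, 0.0-1.0
    threshold: float  # score at or above which the domain counts as elevated risk

def release_gate(results: list[EvalResult]) -> dict:
    """Map dangerous-capability eval results to a release decision."""
    elevated = [r.domain for r in results if r.score >= r.threshold]
    if not elevated:
        return {"decision": "release", "mitigations": []}
    # Elevated risk in any domain triggers additional guardrails
    # (e.g. refusal training, output filtering) before release.
    return {
        "decision": "release_with_guardrails",
        "mitigations": [f"restrict-{d}" for d in elevated],
    }

if __name__ == "__main__":
    results = [
        EvalResult("bioweapons", score=0.02, threshold=0.20),
        EvalResult("cyberattacks", score=0.35, threshold=0.20),
        EvalResult("disinformation", score=0.10, threshold=0.20),
    ]
    print(release_gate(results))
    # {'decision': 'release_with_guardrails', 'mitigations': ['restrict-cyberattacks']}
```

In practice the decision is far richer than a single threshold per domain, but the shape is the same: measure capability per risk area, then condition deployment and mitigations on the results.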

The Open-Source Tension

Dual use creates acute tension around open-weight model releases. Open models (Llama, Mistral) can be freely modified to remove safety guardrails, enabling misuse. But they also enable security research, academic study, privacy-preserving applications, and innovation that proprietary models don't allow. The debate has no easy resolution — both sides have legitimate arguments, and the optimal policy likely evolves as capabilities and risks change.
