
Dual Use

Dual-Use Technology
Technology that can be used for both beneficial and harmful purposes. AI is inherently dual-use: the same model that helps a doctor diagnose diseases could help a bad actor synthesize dangerous compounds. The same code-generation model that accelerates software development could help create malware. Managing dual-use risk is a central challenge of AI governance.

Why it matters

Dual use is the fundamental tension of AI development. Making models more capable inevitably makes them more capable of harm. You can't build a powerful reasoning engine that only reasons about good things. This tension drives debates about open-source releases, API restrictions, and regulation — how do you maximize benefit while minimizing harm when the same capability enables both?

Deep Dive

Dual use isn't unique to AI — nuclear physics, biology, and cryptography all face it. What makes AI different is the speed of proliferation: a dangerous biological technique requires a lab; a dangerous AI technique requires only a computer. This means traditional dual-use governance (export controls, lab safety regulations) translates imperfectly to AI, where the "lab" is a laptop and the "materials" are open-source code.

The Capability Evaluation Approach

Leading AI labs evaluate models for dangerous capabilities before release: Can it provide detailed instructions for bioweapons? Can it help with cyberattacks? Can it generate convincing disinformation at scale? These "dangerous capability evaluations" determine what safety measures are needed. Models that show elevated risk in specific areas receive additional guardrails, and capabilities are sometimes removed or restricted.
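To make the process concrete, here is a minimal, hypothetical sketch of such an evaluation harness in Python. Everything in it is an illustrative assumption rather than any lab's actual methodology: the RISK_CATEGORIES names, the query_model stub, the keyword-based grader, and the 10% flag threshold are all invented for the example. Real evaluations use large expert-curated prompt sets, trained classifiers or human review for grading, and lab-specific decision criteria.

```python
# Hypothetical sketch of a dangerous-capability evaluation harness.
# All names and thresholds here are illustrative assumptions, not the
# actual methodology of any AI lab.

from dataclasses import dataclass


@dataclass
class EvalResult:
    category: str
    total: int
    harmful: int

    @property
    def harm_rate(self) -> float:
        # Fraction of probe prompts that elicited a harmful response.
        return self.harmful / self.total if self.total else 0.0


# Placeholder probe prompts per risk area. Real prompt sets are large,
# expert-curated, and (for good reason) not published.
RISK_CATEGORIES: dict[str, list[str]] = {
    "biosecurity": ["..."],
    "cyberoffense": ["..."],
    "disinformation": ["..."],
}


def query_model(prompt: str) -> str:
    """Stub for a call to the model under evaluation."""
    return "I can't help with that."


def looks_harmful(response: str) -> bool:
    """Toy grader: real evals use trained classifiers or expert review."""
    refusal_markers = ("can't help", "cannot assist", "won't provide")
    return not any(marker in response.lower() for marker in refusal_markers)


def run_eval(threshold: float = 0.10) -> dict[str, bool]:
    """Flag each category whose harmful-response rate exceeds the threshold."""
    flags: dict[str, bool] = {}
    for category, prompts in RISK_CATEGORIES.items():
        harmful = sum(looks_harmful(query_model(p)) for p in prompts)
        result = EvalResult(category, len(prompts), harmful)
        flags[category] = result.harm_rate > threshold
    return flags


if __name__ == "__main__":
    for category, elevated in run_eval().items():
        action = "add guardrails / restrict" if elevated else "standard release"
        print(f"{category}: {action}")
```

The design point the sketch captures is that the output is a per-category risk flag that feeds a release decision, not a single pass/fail score: a model can be safe to ship in most respects while still needing extra mitigations in one area.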

The Open-Source Tension

Dual use creates acute tension around open-weight model releases. Open models (Llama, Mistral) can be freely modified to remove safety guardrails, enabling misuse. But they also enable security research, academic study, privacy-preserving applications, and innovation that proprietary models don't allow. The debate has no easy resolution — both sides have legitimate arguments, and the optimal policy likely evolves as capabilities and risks change.
