The core argument for x-risk: (1) AI systems are becoming increasingly capable, (2) sufficiently capable systems could be difficult to control, (3) an uncontrolled system optimizing for the wrong objective could cause irreversible damage. This is the "alignment problem" at scale — the same challenge that causes today's chatbots to occasionally misbehave, but with much higher stakes as capabilities increase.
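To make premise (3) concrete, here is a minimal, hypothetical sketch of objective misspecification (Goodhart's law): an optimizer that can only see a proxy reward. The functions `true_utility`, `proxy_reward`, and `hill_climb`, and all the numbers, are illustrative assumptions, not a model of any real AI system.

```python
import numpy as np

# Toy sketch of a misspecified objective: the designer wants "quality"
# (true_utility, which peaks at x = 1) but can only measure and optimize
# "quantity" (proxy_reward = x). The proxy tracks the goal at first,
# then diverges as optimization pressure increases.

rng = np.random.default_rng(0)

def true_utility(x):
    # What we actually want: best at x = 1, worse the further we stray.
    return -(x - 1.0) ** 2

def proxy_reward(x):
    # What we measured and trained on: unboundedly rewards more x.
    return x

def hill_climb(steps, step_scale=0.1):
    """Greedy local search on the proxy; more steps = more optimization."""
    x = 0.0
    for _ in range(steps):
        candidate = x + rng.normal(scale=step_scale)
        if proxy_reward(candidate) > proxy_reward(x):
            x = candidate
    return x

for steps in (10, 25, 100, 1000):
    x = hill_climb(steps)
    print(f"steps={steps:>5}  x={x:7.2f}  "
          f"proxy={proxy_reward(x):7.2f}  true={true_utility(x):9.2f}")
```

With a few steps, the proxy and the true objective improve together; with many steps, the optimizer keeps driving the proxy up while the true objective collapses. The x-risk claim is that this divergence becomes more dangerous, not less, as the optimizer becomes more capable.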
AI researchers' views on x-risk span a wide spectrum. Some (Yoshua Bengio, Geoffrey Hinton) treat it as a serious near-term concern. Others (Yann LeCun, Andrew Ng) regard current worries as overblown and argue that a focus on x-risk distracts from present-day AI harms. Most researchers fall somewhere in between, acknowledging the concern while working on concrete, tractable safety problems. The underlying difficulty is that x-risk resists empirical study: the scenarios in question have not happened yet.
X-risk concerns have directly influenced AI policy: the Bletchley Declaration (signed by 28 countries), executive orders on AI safety, and proposals for international AI governance all reference catastrophic risks. Critics counter that industry-funded x-risk narratives serve to concentrate AI power among large labs (which can afford compliance) while stifling open-source development. The debate is as much about power and economics as it is about technical risk.