The first AI winter (1974–1980) followed early optimism about symbolic AI and machine translation. Herbert Simon predicted in 1965 that machines would be capable, within twenty years, of doing any work a human can do. When funding agencies realized this was nowhere close to reality, they slashed budgets: DARPA cut AI funding, and the British government's 1973 Lighthill Report effectively killed AI research funding in the UK for a decade.
The second winter (1987–1993) followed the expert systems boom. Companies had invested billions in rule-based systems that proved brittle, expensive to maintain, and unable to handle edge cases. When the specialized Lisp machine market collapsed in 1987 and the industry contracted, even promising neural network research lost funding. Backpropagation (popularized in 1986) and convolutional networks (demonstrated in 1989) arrived just as the field was shrinking, and insufficient compute and data kept them from being developed much further at the time.
The current boom has advantages previous cycles lacked: the technology demonstrably works at scale (hundreds of millions of people use LLMs every week), the economic value is concrete (companies are saving real money and building real products), and compute keeps improving. But risks remain: if AGI timelines prove as overoptimistic as past predictions, if the current scaling paradigm plateaus, or if a major AI incident erodes public trust, funding could contract again. The lesson from history isn't that winters are inevitable; it's that honest expectations are the best prevention.