The core legal question is whether training AI on copyrighted works constitutes fair use (in US law) or falls under similar exceptions in other jurisdictions. The fair use argument: training is "transformative" because the model doesn't store or reproduce the works; it learns statistical patterns from them. The counter-argument: the model can sometimes reproduce near-verbatim passages, and it competes economically with the original works by generating substitutes for them.
Most jurisdictions currently hold that AI-generated content with no human creative input cannot be copyrighted (the US Copyright Office has been explicit on this point). But content where a human provides substantial creative direction — detailed prompts, curation, editing — may qualify. The line between "human-directed" and "AI-generated" is blurry and actively being litigated. In practice, most companies treat AI-assisted output as copyrightable when there is meaningful human involvement.
The industry is splitting into camps. Some companies are licensing training data (OpenAI's deals with publishers, Google's agreement with Reddit). Others argue that training on publicly available data is inherently fair use. Open-source models face a distinct risk: if a court rules that training requires licenses, the cost could be prohibitive for non-commercial projects. The EU AI Act, for its part, requires providers to publish summaries of the copyrighted material used in training, adding a transparency obligation that applies regardless of how the fair use question is resolved.