A prompt isn't just "a question you type." In the API world, a prompt is a structured sequence of messages — typically a system message (setting the model's behavior), followed by alternating user and assistant messages that form a conversation. When you use a chat interface like Claude.ai, you see a simple text box, but underneath, your message is wrapped in this structure before reaching the model.
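That wrapping can be sketched in a few lines. This is a minimal illustration, not any particular provider's SDK: the `role`/`content` field names follow the common chat-API convention, and `build_prompt` is a hypothetical helper, but the shape (one system string plus an ordered list of user/assistant turns) is the structure described above.

```python
def build_prompt(system: str, turns: list[tuple[str, str]]) -> dict:
    """Wrap a system message and alternating (role, text) turns into
    the kind of structured payload a chat API typically expects."""
    return {
        "system": system,
        "messages": [{"role": role, "content": text} for role, text in turns],
    }

# What "typing into the text box" becomes before it reaches the model:
payload = build_prompt(
    "You are a concise technical assistant.",
    [("user", "What does the GIL do?")],
)
```

A multi-turn conversation is just a longer `turns` list, with `"assistant"` entries holding the model's prior replies.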
Effective prompts tend to share a few traits: they state what you want (not just the topic but the format, length, and audience), they provide context the model needs, and they include constraints that prevent drift. "Tell me about Python" gets you a generic overview. "Write a 200-word explanation of Python's GIL for a developer who knows Java but not Python, focusing on practical implications for web servers" gets you something useful. The difference isn't magic — it's specificity.
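The specificity gap can even be made mechanical. The sketch below uses a hypothetical `specific_prompt` helper to show that a useful prompt is mostly filled-in parameters: topic, audience, length, and focus, exactly the slots the vague version leaves empty.

```python
def specific_prompt(topic: str, audience: str, length_words: int, focus: str) -> str:
    """Compose a prompt that pins down length, audience, and focus,
    not just the topic."""
    return (
        f"Write a {length_words}-word explanation of {topic} "
        f"for {audience}, focusing on {focus}."
    )

vague = "Tell me about Python"
specific = specific_prompt(
    topic="Python's GIL",
    audience="a developer who knows Java but not Python",
    length_words=200,
    focus="practical implications for web servers",
)
```

Templating like this is also how prompts get reused in production: the constraints live in the template, and only the variable slots change per request.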
There's a reason "prompt engineering" became a discipline. At the API level, prompts are essentially programs written in natural language. You can include examples (few-shot), ask the model to reason step by step (chain of thought), assign roles ("You are a senior security auditor"), or constrain output format ("Respond only in valid JSON"). These aren't hacks — they're techniques that reliably change model behavior because they shift the probability distribution the model samples from. The model isn't "following instructions" the way a human does; it's generating text that's statistically consistent with the pattern you established.
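Few-shot prompting, for example, is just the message structure from earlier with worked examples inserted as prior turns. The sketch below (a generic illustration, with `few_shot_messages` as a hypothetical helper) combines two of the techniques above: a role-and-format constraint in the system message, and user/assistant example pairs that establish the pattern the model's next completion should be statistically consistent with.

```python
def few_shot_messages(system: str, examples: list[tuple[str, str]], query: str) -> dict:
    """Build a few-shot request: each example becomes a user/assistant
    pair, and the real query goes last as the final user turn."""
    messages = []
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return {"system": system, "messages": messages}

request = few_shot_messages(
    system="You are a sentiment classifier. Respond only in valid JSON.",
    examples=[
        ('Classify: "I loved it"', '{"sentiment": "positive"}'),
        ('Classify: "Total waste of money"', '{"sentiment": "negative"}'),
    ],
    query='Classify: "It was fine, I guess"',
)
```

Because the examples already answer in JSON, the model's most likely continuation for the final turn is another JSON object in the same shape, which is the "shifting the distribution" point in practice.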
A common misconception is that the right prompt can make any model do anything. In reality, prompts interact with the model's training data, its alignment tuning, and its architectural constraints. A prompt can't give a model knowledge it was never trained on, bypass its safety training reliably, or exceed its context window. Understanding what prompts can and can't do saves time and prevents the frustration of expecting miracles from clever wording.