Target updated its terms of service to make customers liable for any mistakes made by its upcoming AI shopping assistant, which runs on Google's Gemini. The new language states that transactions performed by the "Agentic Commerce Agent" are "considered transactions authorized by you," meaning that if the AI buys the wrong item, or a more expensive version than intended, the customer pays. Target explicitly warns it "does not purport to guarantee that an Agentic Commerce Agent will act exactly as you intend in all circumstances."

This reveals the absurd double standard driving retail AI adoption: companies rush to deploy AI agents as competitive advantages while legally distancing themselves from the technology's failures. Target joins Walmart, which similarly updated its terms to disclaim mistakes by its AI assistant "Sparky," stating that generative AI responses "may not be accurate, complete or up-to-date and may be misleading." Both retailers essentially admit their AI tools are unreliable while still pushing customers to use them.

What's particularly telling is the timing — these liability shifts come as AI agents gain actual purchasing power, not just recommendation capabilities. When AI can execute real transactions with real money, suddenly the "move fast and break things" mentality hits legal departments. The fact that major retailers feel compelled to preemptively absolve themselves suggests they know their AI agents will make costly mistakes.

For developers building AI agents, this is a warning shot about liability and user trust. If you're giving AI systems transactional capabilities, you need robust safeguards, clear user controls, and honest communication about limitations. Target and Walmart's approach of deploying first and disclaiming responsibility later might work for retail giants, but it won't build long-term user confidence in AI agents.
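What might those safeguards look like in practice? Here's a minimal sketch of a guard layer that sits between the model and the checkout API, enforcing hard spending caps and requiring explicit user confirmation before any money moves. Everything here, including the TransactionGuard and PurchaseRequest names and the dollar thresholds, is a hypothetical illustration, not drawn from Target's or Walmart's actual systems.

```python
from dataclasses import dataclass

# Hypothetical guard for agent-initiated purchases. Names and limits are
# illustrative assumptions, not any retailer's real implementation.

@dataclass
class PurchaseRequest:
    item_id: str
    description: str
    price_cents: int

class TransactionGuard:
    """Gate agent purchases behind hard limits and explicit user consent."""

    def __init__(self, per_item_limit_cents: int, session_budget_cents: int):
        self.per_item_limit_cents = per_item_limit_cents
        self.session_budget_cents = session_budget_cents
        self.spent_cents = 0  # running total, doubles as an audit anchor

    def review(self, req: PurchaseRequest) -> str:
        # Hard stop: the agent can never exceed the absolute per-item cap,
        # no matter how confidently the model argues for the purchase.
        if req.price_cents > self.per_item_limit_cents:
            return "deny"
        # Hard stop: enforce a running budget across the whole session.
        if self.spent_cents + req.price_cents > self.session_budget_cents:
            return "deny"
        # Anything that moves money still requires explicit consent;
        # auto-approval is reserved for zero-cost actions like cart adds.
        return "confirm" if req.price_cents > 0 else "allow"

    def execute(self, req: PurchaseRequest, user_confirmed: bool) -> bool:
        verdict = self.review(req)
        if verdict == "deny":
            return False
        if verdict == "confirm" and not user_confirmed:
            return False  # the user, not the model, authorizes spending
        self.spent_cents += req.price_cents
        return True

# Example: a $12.99 purchase needs the user's explicit yes before it runs.
guard = TransactionGuard(per_item_limit_cents=5_000, session_budget_cents=10_000)
req = PurchaseRequest("sku-123", "USB-C cable", price_cents=1_299)
assert guard.review(req) == "confirm"
assert guard.execute(req, user_confirmed=True)
```

The point of the design is that the limits live outside the model: no prompt, however persuasive, lets the agent spend past the caps, and every transaction requires an affirmative user signal rather than the blanket "authorized by you" consent Target's terms construct.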