A Wired piece this week distills a pattern that's been accumulating since late 2025: emergency first responders saying Waymo robotaxis are getting worse, not better, at handling emergencies in public space. The incident catalog is concrete. TechCrunch documented at least six cases where first responders had to take control of Waymo vehicles and physically move them out of traffic during emergency response, including one officer responding to a mass shooting. A separate incident, in which a Waymo blocked an ambulance responding to a mass shooting in Austin, Texas, surfaced just before a San Francisco Board of Supervisors hearing on March 2, 2026. Most consequentially, during the December 2025 San Francisco power outage, stuck Waymos at four-plus intersections required police to call the company, call a tow truck, or move the vehicles themselves. The public quote anchoring the issue comes from Mary Ellen Carroll, executive director of SF's Department of Emergency Management: public safety officers are becoming "a default roadside assistance for these vehicles, which we do not think is tenable."

The structural gap matches an architectural pattern showing up across AI deployments this session. Waymo's vehicles have working object-detection and routing; the failures aren't perceptual misses, they're response-action gaps. The vehicle correctly perceives that a power outage has knocked out traffic signals; it doesn't have a deployed playbook for that scenario, so it stops in place. The vehicle correctly perceives an emergency vehicle approaching with sirens; it doesn't reliably yield the way a human driver would, particularly when the geometry is unfamiliar. These failures look like the OpenAI Tumbler Ridge enforcement-gap problem (iter #62 — detection caught the signal, enforcement defaulted to the cheapest action) and the Lovable platform-liability question (iter #63 — detection happens, but enforcement infrastructure isn't built). In each case, the AI system is doing what it's designed to do, but the operational response layer is underbuilt, and the cost of that gap is being absorbed by external parties (public safety officers, victims' families, scam targets) rather than priced into the company's operations budget.
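The detection/response gap can be sketched in a few lines: perception resolves the scenario correctly, but the action table only covers scenarios someone has written a playbook for, and everything else falls through to the cheapest safe default. All names and scenarios below are illustrative, not Waymo's actual architecture.

```python
# Sketch of a response-action gap: classification succeeds, but the
# action policy is a lookup table that was never populated for the
# rare scenario, so the system falls back to the cheapest safe action.

DEFAULT_ACTION = "stop_in_place"  # per-vehicle safe, fleet-level costly

PLAYBOOKS = {
    "clear_road": "proceed",
    "emergency_vehicle_behind": "pull_right_and_stop",
    # "signals_dark" was never added: detection works, response doesn't.
}

def respond(perceived_scenario: str) -> str:
    """Return the planned action for a correctly perceived scenario."""
    return PLAYBOOKS.get(perceived_scenario, DEFAULT_ACTION)

print(respond("emergency_vehicle_behind"))  # pull_right_and_stop
print(respond("signals_dark"))              # stop_in_place: the gap
```

The point of the sketch: fixing perception doesn't help here. The failure is that the playbook table is sparse and the default absorbs everything it doesn't cover.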

Waymo's response has been technical and operational: fleet-wide software updates to better navigate intersections without working traffic signals, revised power-outage incident response procedures, improved staffing during significant incidents. None of those are wrong, but none of them name the underlying issue, which is that "private autonomous vehicle deployment with public-safety externalities" requires a contractual relationship with the cities the fleet operates in. San Francisco can't simply absorb the cost of Waymo's roadside-assistance overhead indefinitely; the political math doesn't allow it. The SF Board of Supervisors hearing earlier this year was an early signal of where this pressure is heading — likely toward formal SLA-style commitments from AV operators with measurable response-time penalties, possibly including required dedicated remote-operations staffing per number of vehicles deployed. The CPUC (California Public Utilities Commission) regulatory framework for AVs was built when fleets numbered in the hundreds; with Waymo's California fleet now in the thousands, the externality math has changed.
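To make "SLA-style commitments with measurable response-time penalties" concrete, here is a hypothetical sketch of the arithmetic such a contract might encode: a per-minute penalty once a stuck vehicle exceeds a contractual response window, plus a remote-ops staffing floor scaled to fleet size. Every number is invented for illustration, not drawn from any real ordinance or CPUC rule.

```python
# Hypothetical SLA arithmetic for an AV operator-city contract.
# All constants are invented assumptions for illustration only.

SLA_MINUTES = 15                 # contractual window to clear a stuck vehicle
PENALTY_PER_MINUTE = 250         # dollars per minute past the window
OPS_STAFF_PER_100_VEHICLES = 2   # required remote-ops staffing floor

def incident_penalty(response_minutes: float) -> float:
    """Penalty owed for one incident, zero if cleared within the window."""
    return PENALTY_PER_MINUTE * max(0.0, response_minutes - SLA_MINUTES)

def required_ops_staff(fleet_size: int) -> int:
    """Staffing floor: 2 remote operators per started block of 100 vehicles."""
    return -(-fleet_size // 100) * OPS_STAFF_PER_100_VEHICLES  # ceil division

print(incident_penalty(42))      # 27 minutes late -> 6750.0
print(required_ops_staff(1500))  # 30
```

The design point is that both terms are measurable from dispatch logs the city already keeps, which is what makes this the likely shape of the forced version.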

For builders, three takeaways.

First, if you're shipping autonomous systems that operate in public space (robotaxis, delivery robots, sidewalk drones, even outdoor AI cameras), the public-sector externality cost you're imposing is now a quantifiable thing and increasingly a regulatory thing. Build the response infrastructure (remote ops, tow service contracts, real-time city-ops liaison) before regulators force the contracts on you, because the forced version will be worse.

Second, the SF blackout incident is a useful design exercise for any builder shipping AI systems that depend on infrastructure: what does your system do when traffic signals are down, when the cell network is congested, when GPS drifts? AVs failing-safe by stopping in place is fine for one car; for a thousand cars in a city that just lost power, it's a coordinated failure mode that's effectively a denial-of-service against emergency response. The same logic applies to any agent system whose graceful-degradation path imposes coordination costs.

Third, "public safety officers as default roadside assistance" is the moment the externality becomes legible to regulators. Once the framing exists, it doesn't go back; expect federal AV legislation drafts in 2026-2027 to explicitly cite this language. If you're operating in this space, your messaging needs to engage the externality directly rather than route around it.
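The blackout design exercise reduces to simple arithmetic: a fail-safe stop is cheap per vehicle, but its cleanup cost scales linearly against a fixed pool of responder time. The numbers below are invented assumptions, just to show the shape of the coordination cost.

```python
# Why per-vehicle fail-safe becomes fleet-level failure: cleanup cost
# scales with stuck vehicles, responder capacity doesn't.
# Both constants are invented assumptions for illustration.

RESPONDER_MINUTES_PER_STUCK_CAR = 20   # assumed time to clear one vehicle
AVAILABLE_RESPONDER_MINUTES = 8 * 60   # e.g., 8 officer-hours during an outage

def capacity_consumed(stuck_vehicles: int) -> float:
    """Fraction of responder capacity consumed by clearing stuck AVs."""
    return (stuck_vehicles * RESPONDER_MINUTES_PER_STUCK_CAR
            / AVAILABLE_RESPONDER_MINUTES)

print(capacity_consumed(1))    # ~0.04: one car is noise
print(capacity_consumed(100))  # ~4.2: 100 cars is over 4x total capacity
```

Under these assumptions, anything past roughly 24 simultaneous stops saturates the pool entirely, which is why the degradation path has to be designed at the fleet level, not the vehicle level.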