On April 29, Futurism reported on Kyle Dausman, a resident of Cherry Hills Village south of Denver, whose truck was flagged by Flock Safety's automated license plate readers (ALPRs) and now triggers alerts to local police every time he drives. Cherry Hills Village Police told 9News that Dausman has done nothing wrong: his license plate is "erroneously connected" to a warrant in the Colorado Crime Information Center database, traced to a data-entry error in a warrant issued out of Gilpin County. Dausman tried to fix it through the Gilpin County court system; officials said he would need to name the suspect from the erroneous warrant, information no law-enforcement agency will share because the case is still active. He is stuck in a catch-22 the warrant database itself created.
This is a different failure mode from the one we documented in our earlier Flock piece, where the Institute for Justice analysis showed 14 cases of police using ALPRs to stalk romantic partners. Stalking-by-cop is intentional misuse. Database-error-by-system is unintentional misuse, and arguably more concerning for AI-enabled surveillance design because it requires no bad actor, just bad data and a system that treats database matches as authoritative. Arapahoe County alone has at least 283 active Flock cameras, per DeFlock, a grassroots, citizen-built tool for tracking ALPR deployments. The DeFlock reference is the builder-relevant detail: when surveillance infrastructure proliferates faster than oversight, civil-society tooling fills the gap. Expect DeFlock-style tools to expand into other AI surveillance categories (facial-recognition deployments, social-media monitoring, employer surveillance) over the next 18 months.
Two patterns matter. First, the catch-22 Dausman is in is not a Flock bug; it is a database-integrity problem that Flock's ALPR network amplifies. The same erroneous warrant flag would have caused occasional police stops in a pre-ALPR world. With 283 cameras in one county alone, it now causes near-constant police contact. AI-enabled surveillance does not introduce new errors; it converts low-frequency errors into high-frequency systemic harms by removing the friction that previously limited their reach. Second, the burden of correction falls on the victim. "Once you're in the Flock system, it's on you to get out," Dausman said, and the process bears that out: removal from the alert list happens only if the victim manages to navigate a process the system itself obstructs. This is the dystopian inversion of due process: you are presumed flagged, and clearing yourself means fighting the institutions that flagged you.
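The amplification claim is easy to make concrete with a rough calculation. In the sketch below, the 283-camera figure comes from DeFlock; every other number is an illustrative assumption, not a reported fact. The point is that the error rate never changes, only the number of times the system checks for it.

```python
# Back-of-envelope sketch of the amplification pattern described above.
# The 283-camera count is from DeFlock; the other rates are assumptions.

manual_plate_checks_per_year = 2    # assumed: pre-ALPR, the flag surfaces only
                                    # when an officer happens to run the plate
camera_passes_per_day = 6           # assumed: a routine commute past fixed ALPRs
                                    # in a county with 283 cameras

pre_alpr_alerts_per_year = manual_plate_checks_per_year
alpr_alerts_per_year = camera_passes_per_day * 365

print(f"pre-ALPR alerts/year: {pre_alpr_alerts_per_year}")
print(f"ALPR alerts/year:     {alpr_alerts_per_year}")
print(f"amplification:        {alpr_alerts_per_year / pre_alpr_alerts_per_year:.0f}x")
```

Under those assumptions, the identical database error goes from a couple of police contacts a year to several a day, which is the whole difference between a nuisance and a systemic harm.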
For builders, three concrete things. First, if you build any system that triggers law-enforcement alerts based on database matching, the burden-of-correction problem is your design responsibility. Build the appeal pathway into the product, with a documented timeline and a way for a flagged user to verify that the underlying record is correct without requiring information they cannot legally access. Otherwise you have built a Flock — a system that can trap innocent people in expensive feedback loops. Second, citizen-tracking tools like DeFlock are now part of the surveillance landscape; if you ship surveillance infrastructure, expect adversarial documentation of where you operate and how. Bake transparency into the product before a third party builds it for you under hostile framing. Third, the "AI converts low-frequency errors into high-frequency harms" pattern generalizes. Any product that automates a previously friction-limited process — content moderation, fraud flagging, identity verification — will surface errors it did not introduce, but at scale. Audit your error rates per ten thousand and per million events, not per hundred. A 0.1% false positive rate is fine until it lands on a million people.
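To make the first takeaway concrete, here is a minimal sketch of what "build the appeal pathway into the product" might look like. Everything in it is hypothetical; the class, field, and status names are not from Flock or any real system. The point is that the flag record itself carries the dispute state, a documented review deadline, and alert suppression that defaults in the flagged person's favor, and that opening a dispute requires nothing the person cannot legally access.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional

# Hypothetical sketch: class, field, and status names are illustrative,
# not any vendor's API.

class DisputeStatus(Enum):
    NONE = "none"        # no dispute filed
    OPEN = "open"        # flagged person has contested the record
    UPHELD = "upheld"    # upstream record verified as correct
    CLEARED = "cleared"  # upstream record confirmed erroneous; flag retired

REVIEW_DEADLINE = timedelta(days=14)  # assumed policy: a documented timeline,
                                      # not an open-ended process

@dataclass
class PlateFlag:
    plate: str
    source_record_id: str  # pointer to the upstream warrant/database entry
    created_at: datetime
    dispute_status: DisputeStatus = DisputeStatus.NONE
    dispute_opened_at: Optional[datetime] = None

    def open_dispute(self, now: datetime) -> datetime:
        """Open a dispute using only information the flagged person already
        has (their plate and the flag itself); return the review deadline."""
        self.dispute_status = DisputeStatus.OPEN
        self.dispute_opened_at = now
        return now + REVIEW_DEADLINE

    def should_alert(self) -> bool:
        """Suppress alerts while a dispute is open or once the flag is cleared.
        Because an open dispute suppresses alerts even past the deadline, the
        cost of institutional delay falls on the system, not the person."""
        return self.dispute_status in (DisputeStatus.NONE, DisputeStatus.UPHELD)
```

The third takeaway is then just arithmetic on top of this: a 0.1% false positive rate across a million events is a thousand wrongly flagged people, each of whom needs a pathway like this to exist.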
