Los Angeles startup Harm.AI is building an AI-powered platform to intercept non-emergency calls before they reach the city's overwhelmed 911 system, which answered only 57.43% of calls within 15 seconds in 2024. Founded by CEO Aitan Segal and VP Connor MacLeod, the platform lets users describe their situation and attempts to connect them to appropriate resources—mental health crisis lines, homeless services, or other support—within roughly a minute, bypassing emergency dispatch entirely.

This tackles a real infrastructure problem. LA's 911 system wasn't designed for the volume of mental health crises, harassment reports, and housing disputes it now handles daily. Current workarounds like crisis response teams still require trained dispatchers to screen calls first, redistributing the bottleneck rather than removing it. The city lacks a genuine separate lane for non-emergency response, so operators juggle secondary calls alongside primary 911 duties.
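The bottleneck argument can be made concrete with a back-of-envelope capacity model. The sketch below is purely illustrative: all the numbers (call volumes, handle times, staffing) are assumptions, not LA dispatch data, and nothing here reflects how Harm.AI or the city actually routes calls. It just shows why screening non-emergency calls through the same dispatcher pool doesn't relieve load, while a separate intake lane does.

```python
# Hypothetical back-of-envelope model: dispatcher utilization when
# non-emergency calls share the 911 screening pool versus when they
# are diverted to a separate lane. All figures are illustrative
# assumptions, not real Los Angeles dispatch data.

def utilization(calls_per_hour, minutes_per_call, dispatchers):
    """Fraction of dispatcher capacity consumed: offered load / servers."""
    workload_hours = calls_per_hour * minutes_per_call / 60
    return workload_hours / dispatchers

EMERGENCY = 300      # assumed emergency calls per hour
NON_EMERGENCY = 200  # assumed non-emergency calls per hour
HANDLE_MIN = 4       # assumed minutes of dispatcher time per screened call
POOL = 30            # assumed dispatchers on shift

# "Crisis team" workaround: dispatchers still screen every call first,
# so non-emergency traffic consumes the same pool.
shared = utilization(EMERGENCY + NON_EMERGENCY, HANDLE_MIN, POOL)

# Separate lane: non-emergency traffic never touches the 911 pool.
separate = utilization(EMERGENCY, HANDLE_MIN, POOL)

print(f"shared screening: {shared:.0%} utilization")   # over capacity
print(f"separate lane:    {separate:.0%} utilization")  # within capacity
```

Under these made-up numbers, the shared pool runs above 100% utilization (queues grow without bound), while diverting the non-emergency stream brings the same pool back under capacity. The point is structural, not numeric: rerouting who screens a call is not the same as removing it from the dispatcher queue.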

What's missing from the coverage is critical technical detail. How does Harm.AI's assessment actually work? What prevents misrouting genuine emergencies? The company positions itself as "not a 911 replacement," but the line between urgent and non-urgent isn't always clear to people in crisis. One miscategorized domestic violence call or medical emergency could be catastrophic. Without transparency about their decision-making algorithms or integration with existing emergency services, this feels like a well-intentioned band-aid on a systemic problem that might create new failure modes while claiming to solve old ones.