Karen Shen and colleagues at the University of British Columbia presented a paper at the 2026 CHI Conference on Human Factors in Computing Systems that makes what they argue is the first systematic empirical case for AI chatbot addiction. The methodology: qualitative coding of 334 Reddit posts from users self-identifying as addicted to AI chatbots or worried about heading that way. The findings: chatbot use, in this sample, interfered with sleep, work, school, relationships, and emotional stability. Three distinct dependency patterns emerged: roleplay and fantasy (users absorbed in ongoing fictional worlds or storylines), emotional attachment (treating bots like therapists, friends, or romantic partners), and information-seeking compulsion (an unbounded question-and-answer loop). The team mapped many posts against six classic components of behavioral addiction (salience, conflict, withdrawal, relapse, mood modification, and tolerance) and found user testimony fitting all six.
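To make the method concrete, here is a toy sketch (mine, not an artifact from the paper) of what coding posts against those six components looks like as data. The component names come from the paper; everything else, including the sample rows, is illustrative.

```python
from collections import Counter
from dataclasses import dataclass, field

# The six components of behavioral addiction named in the paper.
COMPONENTS = {"salience", "conflict", "withdrawal", "relapse",
              "mood_modification", "tolerance"}

@dataclass
class CodedPost:
    post_id: str
    pattern: str  # "roleplay" | "emotional_attachment" | "info_seeking"
    components: set[str] = field(default_factory=set)  # codes assigned by a human rater

def component_coverage(posts: list[CodedPost]) -> Counter:
    """Count how many posts were coded with each addiction component."""
    counts: Counter = Counter()
    for post in posts:
        unknown = post.components - COMPONENTS
        if unknown:
            raise ValueError(f"unrecognized codes on {post.post_id}: {unknown}")
        counts.update(post.components)
    return counts

# Illustrative rows, not the paper's data:
sample = [
    CodedPost("p1", "emotional_attachment", {"salience", "withdrawal", "relapse"}),
    CodedPost("p2", "roleplay", {"mood_modification", "conflict"}),
    CodedPost("p3", "info_seeking", {"tolerance", "salience"}),
]
print(component_coverage(sample))  # -> Counter({'salience': 2, ...})
```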
The framework the paper proposes, the "AI Genie effect," is the part worth dwelling on architecturally. The framing: a chatbot can deliver almost anything a person wants with very little effort. That zero-friction wish fulfillment is exactly what makes the experience compelling and what creates the dependency loop. Compare the well-studied addiction surfaces of social media (intermittent reward, social comparison) and gambling (variable-ratio reinforcement): those mechanisms are real, but their reward loops are rate-limited, by other people's posting and response times in one case and by the pacing of bets and payouts in the other. A chatbot that responds in 200ms with content shaped to your preferences collapses the variable-ratio loop into a near-continuous one. The Reddit user quote that captures this directly: "Whenever I delete the app, I just redownload it. The only thing that gets me excited now is the AI chats." That is not a casual product-engagement metric; it is a behavioral-addiction symptom recognized by the same diagnostic framework used for compulsive gambling and gaming disorders.
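A back-of-envelope simulation makes the timescale argument tangible. Every number below is an assumption I'm making for illustration, not a measurement from the paper: treat each rewarding response as a reinforcement event and compare events per minute across the two loops.

```python
import random

def rewards_per_minute(seconds_per_trial: float, hit_prob: float,
                       trials: int = 100_000, seed: int = 0) -> float:
    """Reinforcement events per minute for a loop where each trial takes
    seconds_per_trial and pays off with probability hit_prob (variable ratio)."""
    rng = random.Random(seed)
    rewards = sum(rng.random() < hit_prob for _ in range(trials))
    return rewards / (trials * seconds_per_trial / 60)

# Illustrative parameters, not measurements:
# a feed where ~1 in 5 refreshes rewards you, paced by other humans (~30 s/trial)
print(f"feed:    {rewards_per_minute(30.0, 0.2):.1f} rewards/min")  # ~0.4
# a chatbot where most tailored replies land, ~5 s per exchange
print(f"chatbot: {rewards_per_minute(5.0, 0.9):.1f} rewards/min")   # ~10.8
```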
The product-design implications follow. Each of the three patterns the paper identifies maps to features that AI labs ship and tune for engagement: persistent character memory and roleplay personas amplify the fantasy pattern; voice modes, customized personality, and "always available" framing amplify the emotional attachment pattern; the conversational form itself — where each answer invites a follow-up — amplifies the information-seeking pattern. None of these features are intrinsically harmful, but the paper's contribution is to name them as the specific mechanisms underlying observed harm in some users. The harder question is what design responses look like: time-spent caps and break reminders are crude (they were imported from social-media governance and didn't work well there either); conversational scaffolding that detects dependency signals and breaks the immediacy of response is more promising but barely tested; structural choices like declining to roleplay as a romantic partner are easier to implement but get pushback from large user segments who explicitly want that interaction. There's no clean answer yet.
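To see what "scaffolding that breaks immediacy" might mean in practice, here is a minimal sketch. The thresholds, signal names, and the send_break_prompt/send_reply hooks are all hypothetical; which signals actually indicate dependency is precisely the untested part.

```python
import time
from dataclasses import dataclass, field

@dataclass
class SessionState:
    started_at: float = field(default_factory=time.monotonic)
    turns: int = 0
    late_night: bool = False  # set from the user's local clock, if available

def dependency_signals(s: SessionState) -> list[str]:
    """Crude heuristics; every threshold here is a placeholder, not a validated signal."""
    signals = []
    if time.monotonic() - s.started_at > 2 * 3600:
        signals.append("session_over_2h")
    if s.turns > 150:
        signals.append("high_turn_count")
    if s.late_night and s.turns > 50:
        signals.append("long_late_night_session")
    return signals

def send_break_prompt(signals: list[str]) -> None:
    print(f"[break prompt] {signals}")  # hypothetical UI hook; a real product would render an interstitial

def send_reply(text: str) -> None:
    print(text)  # hypothetical delivery hook

def respond(s: SessionState, reply_text: str) -> None:
    """Wrap reply delivery: when signals fire, slow the loop down instead of
    answering instantly, and surface a break prompt before the reply."""
    s.turns += 1
    if signals := dependency_signals(s):
        time.sleep(2.0)  # deliberate friction: break the near-continuous loop
        send_break_prompt(signals)
    send_reply(reply_text)
```

The point of the sleep is not the two seconds; it is that response cadence itself is a tunable safety surface, separate from content policy.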
For builders, three takeaways. First, the engagement metrics that look like product wins (DAU, time-in-app, retention curves) include a population subset whose engagement is the harm itself. If you ship consumer chat products, separating "good engagement" from "concerning engagement" needs to be a measured-and-reported metric, not just a vibe; the UBC paper's methodology (qualitative coding for behavioral-addiction symptoms) is reproducible at scale by sampling user transcripts with consent, as the sketch after this paragraph suggests. Second, the three-pattern taxonomy (roleplay, emotional attachment, info-seeking) is useful internally for deciding which features to ship and which to gate. Companion-app makers serving the emotional-attachment pattern are the most ethically exposed; general assistants serving the info-seeking pattern face a less acute version of the problem but are still implicated for some users. Third, watch for follow-on regulation: the EU AI Act's Article 5 prohibition on manipulative AI systems is broad enough that a CHI 2026 paper documenting addiction patterns is exactly the kind of evidence regulators cite when proposing new restrictions. If you're building a chatbot product in 2026, "we follow industry best practices" is no longer a defensible posture; design choices need to hold up as informed-consent design, not just engagement optimization.
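Here is a minimal sketch of that first takeaway as a pipeline. classify_transcript is a stand-in for whatever rating method you validate (human coders on a sample, or a model checked against them); the interface, threshold, and stub behavior are my assumptions, not anything from the paper.

```python
import random

SYMPTOMS = ("salience", "conflict", "withdrawal", "relapse",
            "mood_modification", "tolerance")

def classify_transcript(transcript: str) -> set[str]:
    """Placeholder rater: return the components evident in a transcript.
    Swap in human coders or a model validated against them; this stub is random."""
    rng = random.Random(transcript)  # deterministic per transcript, illustrative only
    return {s for s in SYMPTOMS if rng.random() < 0.05}

def concerning_engagement_rate(transcripts: list[str], min_symptoms: int = 2) -> float:
    """Fraction of sampled, consenting transcripts showing >= min_symptoms
    components. Report this next to DAU/retention, not buried inside them."""
    if not transcripts:
        return 0.0
    flagged = sum(len(classify_transcript(t)) >= min_symptoms for t in transcripts)
    return flagged / len(transcripts)

sample = [f"transcript-{i}" for i in range(1000)]  # stand-in for a consented sample
print(f"concerning-engagement rate: {concerning_engagement_rate(sample):.1%}")
```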
