
Why We Monitor 63 AI Providers And You Should Care

AI providers change their APIs without telling you. Models disappear. Names change. Pricing shifts. We built an automated system that catches all of it — before our users notice.
Sarah Chen · March 2026 · 5 min read

When you integrate one AI provider, you watch one API. When you integrate 63, you need a system. Because at that scale, something breaks every single day — and the providers almost never tell you first.

That’s why we built the ProviderModelScanner: an automated monitor that queries every provider’s /models endpoint daily, diffs the results against our database, and tells us exactly what changed. New models, removed models, renamed models, pricing changes — all of it, caught within 24 hours.

This isn’t a nice-to-have. It’s the reason our platform doesn’t break when providers make silent changes.

The Problem Nobody Talks About

AI providers are moving fast. Multiple releases per week. New model variants, deprecations, endpoint changes, pricing adjustments. The big ones — OpenAI, Anthropic, Google — usually announce major changes. But “usually” isn’t “always,” and the smaller providers? They change things whenever they want.

If you’re building on top of these APIs, every silent change is a potential production incident. Your users click “Generate,” and instead of getting a result, they get a 404 or a cryptic error because the model ID they were using yesterday doesn’t exist today.

We learned this the hard way. Multiple times.

Real Incidents We Caught

Vidu’s Silent Rename

Vidu, the Chinese video generation provider, renamed their models without any announcement. viduq2 became viduq2-pro. No deprecation notice. No migration guide. No email. The old model ID just stopped working.

Our scanner caught it the next morning. We saw the diff: viduq2 removed, viduq2-pro added. We updated our database, tested the new endpoint, and our users never noticed. Without the scanner, we’d have shipped broken video generation until someone filed a bug report.

Kling O1 Vanishes from Together.ai

Together.ai is a meta-provider — they host models from multiple companies on shared infrastructure. One day, our scanner flagged that Kling’s O1 model had disappeared from Together’s model list. No announcement from Together. No deprecation timeline. Just … gone.

This matters because Together is one of the few places you could access Kling models without going through Kling’s own API. If you had built a workflow around together/kling-o1, it silently broke. We caught it and routed traffic to Kling’s direct endpoint instead.

Pricing Drift

Several providers have quietly adjusted their per-token or per-generation pricing. Not dramatic changes — a few cents here, a rounding change there. But when you’re displaying cost estimates to users and billing accordingly, even small drift matters. Our scanner diffs pricing fields and flags any change, so we can update our cost calculations the same day.
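Pricing comparison is the simplest part of the diff. A minimal sketch in Python (the production scanner is a PHP/Symfony command, and these field names are illustrative, not the real schema):

```python
# Illustrative sketch; the production scanner is PHP/Symfony, and
# these pricing field names are assumptions, not the real schema.
def diff_pricing(stored: dict, fetched: dict) -> dict:
    """Return per-field pricing changes between our DB and the API."""
    changes = {}
    for field in ("input_per_1k", "output_per_1k", "per_generation"):
        old, new = stored.get(field), fetched.get(field)
        # Flag any numeric difference at all -- even sub-cent drift
        # matters when cost estimates are shown to users.
        if old is not None and new is not None and old != new:
            changes[field] = {"old": old, "new": new}
    return changes
```

Any non-empty result gets written to the change log and surfaced on the dashboard the same day.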

How It Works

The scanner runs as a Symfony console command: app:monitor:api-changes. A cron job fires it every morning at 6 AM. Here’s the pipeline:
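The cron entry for this is a single line; something like the following, where the project path is an assumption:

```shell
# Run the scanner daily at 06:00; the project path is illustrative.
0 6 * * * cd /var/www/app && php bin/console app:monitor:api-changes
```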

63 Provider APIs
  → ProviderModelScanner (queries /models endpoints)
  → Fuzzy Matching Engine (normalizes model IDs)
  → Diff Engine (compares against ai_model table)
  → api_change_log (stores diffs, grouped by batch)
  → Admin Dashboard (/admin/api-monitor)
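The diff step at the heart of this pipeline is set comparison over normalized model keys. A minimal sketch (illustrative Python; the real implementation is PHP):

```python
# Illustrative sketch of the diff engine; the real implementation
# is a PHP/Symfony service working against the ai_model table.
def diff_models(db_models: set[str], api_models: set[str]) -> dict:
    """Classify changes between our database and a provider's
    /models listing (both sides already normalized)."""
    return {
        "new": sorted(api_models - db_models),      # provider added these
        "removed": sorted(db_models - api_models),  # provider dropped these
        "unchanged": sorted(db_models & api_models),
    }
```

The Vidu incident above is exactly this output: `diff_models({"viduq2"}, {"viduq2-pro"})` reports `viduq2-pro` as new and `viduq2` as removed.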

The key innovation is the fuzzy matching engine. Without it, you’d get hundreds of false positives every day.

The Fuzzy Matching Problem

AI model IDs are a mess. The same model might appear as:

• gpt-4o-2024-11-20
• gpt-4o-latest
• gpt-4o
• openai/gpt-4o

Are these four different models or one model with four aliases? If you do a naive string comparison, every date-suffixed release looks like a “new model” and the previous version looks “removed.” Your change log fills up with noise.

Our fuzzy matcher handles this by normalizing model keys before comparison:

Normalization rules:

• Strip -latest suffixes
• Strip date suffixes (-2024-11-20)
• Strip trailing -0 and -001
• Strip -preview and -standard- qualifiers
• Strip vision v suffixes
• Strip provider prefixes (openai/, meta/)
• Separator-aware prefix matching (- and . boundaries)
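The normalization rules above can be sketched as a small function. This is an illustrative Python version, not the production matcher (which is PHP), and the exact regexes are assumptions; the -standard- and vision-suffix rules are omitted for brevity:

```python
import re

# Illustrative normalizer for the rules listed above; the real
# matcher is part of the PHP scanner and these regexes are assumptions.
def normalize_model_key(model_id: str) -> str:
    key = model_id.lower()
    key = key.split("/", 1)[-1]                    # strip provider prefix (openai/, meta/)
    key = re.sub(r"-\d{4}-\d{2}-\d{2}$", "", key)  # strip date suffix (-2024-11-20)
    key = re.sub(r"-(latest|preview)$", "", key)   # strip -latest / -preview
    key = re.sub(r"-(0|001)$", "", key)            # strip trailing -0 / -001
    return key
```

With this in place, all four gpt-4o spellings above collapse to the single key `gpt-4o`, and a date-suffixed re-release no longer shows up as a removal plus an addition.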

The matcher also filters by model type, so a provider adding a new embedding model doesn’t get confused with an LLM of a similar name. Each provider has a skip_types configuration to ignore model categories we don’t integrate.
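The skip_types configuration might look like this; the keys and type names here are hypothetical, not the production schema:

```python
# Hypothetical shape of the per-provider configuration; provider
# keys and type names are illustrative, not the production schema.
PROVIDER_CONFIG = {
    "openai": {"skip_types": {"embedding", "moderation", "tts"}},
    "together": {"skip_types": {"embedding", "rerank"}},
}

def should_track(provider: str, model_type: str) -> bool:
    """Ignore model categories we don't integrate for this provider."""
    skipped = PROVIDER_CONFIG.get(provider, {}).get("skip_types", set())
    return model_type not in skipped
```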

Together.ai is a special case — they host models from over a dozen companies, so our scanner attributes Together models to their original creators: OpenAI, ByteDance, NVIDIA, Meta, and others. One scan covers fifteen providers’ worth of models.
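Because Together namespaces model IDs by vendor, attribution can be a simple prefix lookup. A hedged sketch, with a hypothetical prefix map:

```python
# Hypothetical prefix-to-creator map; Together namespaces model IDs
# by the original vendor, so a prefix lookup is enough to attribute them.
CREATOR_PREFIXES = {
    "meta-llama/": "Meta",
    "openai/": "OpenAI",
    "nvidia/": "NVIDIA",
    "bytedance/": "ByteDance",
}

def attribute_creator(together_model_id: str) -> str:
    for prefix, creator in CREATOR_PREFIXES.items():
        if together_model_id.lower().startswith(prefix):
            return creator
    return "Together"  # fall back to the hosting provider itself
```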

The Admin Dashboard

Every change gets logged to the api_change_log table, grouped by batch (one batch per daily scan). The admin dashboard at /admin/api-monitor shows:

New models — models that appeared in the provider’s API but aren’t in our database yet
Removed models — models in our database that the provider no longer lists
Changed models — pricing, naming, or configuration changes on existing models
Batch history — scroll back through days of changes to spot trends

The table auto-prunes to 500 entries, so it doesn’t grow forever. Old noise disappears. Active changes stay visible.
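The 500-entry auto-prune is a one-statement job. A minimal sketch using SQLite for portability; the production table lives in the app's main database, and the column names here are assumptions:

```python
import sqlite3

# Minimal sketch of the 500-entry auto-prune, using SQLite for
# portability; column names are assumptions, not the real schema.
def prune_change_log(conn: sqlite3.Connection, keep: int = 500) -> None:
    """Keep only the `keep` newest rows of api_change_log."""
    conn.execute(
        """
        DELETE FROM api_change_log
        WHERE id NOT IN (
            SELECT id FROM api_change_log
            ORDER BY created_at DESC, id DESC
            LIMIT ?
        )
        """,
        (keep,),
    )
    conn.commit()
```

Running this at the end of each daily scan keeps the table bounded without any separate cleanup job.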

Why This Matters Beyond Our Platform

If you’re building anything that depends on AI APIs, you have this problem too — you just might not know it yet. The more providers you integrate, the more surface area you have for silent breakage.

Signs you need an API monitor:

• You integrate more than 3 AI providers
• Users choose models by name (and those names can change)
• You display pricing or cost estimates
• You’ve been surprised by a “model not found” error in production
• You depend on any provider that doesn’t have a changelog

Most AI startups solve this with manual checking — someone skims the provider’s blog once a week, maybe. That works when you have 3 providers. It does not work when you have 63. And it definitely doesn’t work when providers don’t blog about their changes at all.

The Uncomfortable Pattern

After months of running this scanner, a pattern has emerged: the smaller the provider, the less they communicate changes. The major labs — OpenAI, Anthropic, Google — have deprecation timelines, migration guides, and developer relations teams. The mid-tier providers sometimes post on X. The smaller ones just change things and hope nobody notices.

This isn’t malicious. Small teams move fast and don’t have the bandwidth for developer communications. But the result is the same: your integration breaks, and you have no idea why until you manually test every model.

Or you build a scanner that does it for you, every morning at 6 AM, before anyone’s coffee.

The takeaway: Trust but verify. Every AI provider, no matter how reliable, will eventually make a change that affects your integration. The question is whether you find out from your monitoring system or from your users. We chose the monitoring system.

The ProviderModelScanner has been running in production since early 2026, scanning 63 providers daily. It has caught over two dozen silent changes that would have affected our users. The code runs as a Symfony console command with a single cron entry.

Building on AI APIs? Zubnet gives you 361 models across 61 providers — and we watch all of them so you don’t have to.
