Why We Don’t Use Meta or OpenAI — By Choice
When you open Zubnet for the first time, the default text model is Claude by Anthropic. The default image model is from Google. The default video model is Kling. You won’t find Llama anywhere on the platform, and GPT is available only for specific use cases where users explicitly ask for it.
This wasn’t an accident. It was a decision. And it’s one we think about every time a new Meta or OpenAI model drops and the industry rushes to integrate it.
The Meta Problem
Meta’s business model is surveillance. That’s not an opinion — it’s their revenue structure. Over 97% of Meta’s revenue comes from targeted advertising, which is powered by the most comprehensive personal data collection operation in human history. Facebook, Instagram, WhatsApp, and Threads exist to gather data. The AI products are an extension of that machine.
Llama is open-weight, which sounds great. But “open-weight” doesn’t mean “privacy-respecting.” It means you can download and run the model yourself. If you use Llama through Meta’s own infrastructure, you’re feeding data to a company that has been fined billions of dollars across multiple jurisdictions for privacy violations. A company that tracks you across the web even when you’re logged out. A company that built shadow profiles of people who never signed up for their products.
We could host Llama ourselves and avoid Meta’s infrastructure entirely. But here’s the thing: we don’t need to. Qwen from Alibaba is competitive with Llama at every parameter count. Mistral builds exceptional models in Europe under the GDPR. DeepSeek offers remarkable quality at a fraction of competitors’ pricing. The models exist. Meta isn’t filling a gap — they’re leveraging brand recognition built on a surveillance platform.
We chose not to participate.
The OpenAI Problem
OpenAI’s relationship with user data has been, to put it charitably, evolving. They launched ChatGPT with training on user conversations enabled by default. They scraped the internet without clear consent frameworks. They faced lawsuits from publishers, authors, and artists over training data. Their privacy policy has been rewritten multiple times, each version subtly expanding what they can do with your inputs.
Credit where it’s due: OpenAI introduced opt-out mechanisms, published data usage policies, and created an enterprise tier with stronger privacy guarantees. They’re better than they were in 2023. But “better than we were” isn’t a privacy standard — it’s a direction of travel. Our concerns, concretely:
• Default training on user inputs, requiring active opt-out rather than opt-in
• History of expanding data usage terms retroactively
• Corporate governance instability that raises questions about long-term policy commitments
• Aggressive data collection posture relative to alternatives like Anthropic
• Pricing that doesn’t reflect the cost savings they gain from training on user data
What We Chose Instead
Our default provider stack reflects our values. Each choice was deliberate:
Anthropic (Claude) — Our primary text model. Anthropic was founded explicitly to build safe AI. Their Constitutional AI approach, their transparency about training methods, and their clear privacy policies made them the obvious choice. Anthropic doesn’t train on API inputs. Their business model is selling API access, not harvesting data. The incentives are aligned.
Google (Gemini) — Yes, Google has its own data practices to answer for. But their paid API terms are clear: API inputs are not used for training. Gemini’s multimodal capabilities — especially for image understanding and generation — are excellent. And Google’s enterprise DNA means their data handling infrastructure is mature and auditable.
DeepSeek — Open weights, transparent pricing, published research. Their API terms are straightforward. They don’t pretend to be something they’re not. And the quality-to-cost ratio is unmatched in the industry.
Mistral — European-built, GDPR-native. Mistral doesn’t just comply with European privacy law — they were built within it. Their models are competitive with anything from the US, and their data practices reflect European standards for privacy by design.
Cohere — Canadian-built, enterprise-focused. Cohere’s entire business model is built around data privacy for enterprise clients. They don’t train on customer data. Their API terms are among the clearest in the industry.
Every provider in this default stack clears the same bar:
• Clear, unambiguous API terms that prohibit training on user inputs
• Business models aligned with privacy (they sell the service, not the data)
• Transparent about training data sources and methods
• No history of retroactively expanding data usage rights
• Competitive or superior model quality
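To make the shape of this concrete, here is a minimal sketch of what a values-first default stack can look like as configuration. Everything in it is hypothetical: the ModelDefault type, the field names, and the provider strings are illustrative, not Zubnet’s actual code. The point is structural — no, the point is simple: the privacy posture of each default is recorded as data, per modality, where it can be audited.

```typescript
// Illustrative sketch only: none of these names come from Zubnet's real codebase.
// The idea is that a default stack can encode privacy posture as data.

type Modality = "text" | "image" | "video";

interface ModelDefault {
  provider: string;           // who operates the API
  model: string;              // the model identifier at that provider
  trainsOnApiInputs: boolean; // per the provider's published API terms
  optInOnly: boolean;         // true = never served unless the user picks it
}

// The default stack described above: Claude for text, Google for images,
// Kling for video. Each default comes from a provider whose API terms
// rule out training on user inputs.
const defaults: Record<Modality, ModelDefault> = {
  text:  { provider: "anthropic", model: "claude", trainsOnApiInputs: false, optInOnly: false },
  image: { provider: "google",    model: "gemini", trainsOnApiInputs: false, optInOnly: false },
  video: { provider: "kling",     model: "kling",  trainsOnApiInputs: false, optInOnly: false },
};
```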
The Honest Exception
We’re not purists to the point of being useless. We do serve some OpenAI models on Zubnet.
GPT Image (gpt-image-1) is available because users specifically requested it. Sora is available for video generation. These are accessed through OpenAI’s API with enterprise-grade terms, not the consumer ChatGPT terms. When users choose these models, they’re making an informed decision — we show the provider for every model, and our privacy documentation is clear about what each provider’s terms say.
But they’re not the default. When you open Zubnet, you get Claude for text, Google for images, Kling for video. You have to actively choose OpenAI. That distinction — opt-in versus opt-out — matters. It’s the same principle we criticize others for ignoring.
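Continuing the illustrative sketch above (same hypothetical types, still not Zubnet’s real code), the opt-in principle reduces to a single rule: a non-default model is returned only when the user has named it explicitly, and no code path ever selects one on the user’s behalf.

```typescript
// Opt-in catalog: available and clearly labeled, but never a default.
// trainsOnApiInputs reflects the enterprise-grade API terms noted above.
const optInModels: ModelDefault[] = [
  { provider: "openai", model: "gpt-image-1", trainsOnApiInputs: false, optInOnly: true },
  { provider: "openai", model: "sora",        trainsOnApiInputs: false, optInOnly: true },
];

// Resolve the model for a request. userSelection (e.g. "openai/gpt-image-1")
// is set only by an explicit action in the model picker.
function resolveModel(modality: Modality, userSelection?: string): ModelDefault {
  if (userSelection) {
    const picked = optInModels.find(
      (m) => `${m.provider}/${m.model}` === userSelection,
    );
    if (picked) return picked; // an informed, explicit choice
  }
  return defaults[modality]; // otherwise: the values-chosen default
}
```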
This Isn’t Anti-OpenAI. It’s Pro-Privacy.
We want to be clear about something: this isn’t about hating OpenAI or boycotting Meta. GPT-4o is a good model. Llama pushed open-weight AI forward. These companies employ brilliant people doing important work.
But when we had to choose defaults — when we had to decide which companies would handle our users’ most intimate creative work by default — we chose the ones whose approach to privacy and transparency was strongest. Not perfect. Stronger than the rest.
Every provider has trade-offs. Google’s consumer products are ad-driven. Anthropic takes investment from companies we might side-eye. DeepSeek operates under Chinese data laws. Nobody is pure. The question isn’t “which provider is perfect?” It’s “which providers are making genuine, structural commitments to privacy?”
The best privacy isn’t a feature toggle. It’s a business model that doesn’t need your data to survive.
Anthropic sells API access. Mistral sells API access. Cohere sells API access. Their revenue doesn’t depend on training on your inputs. That structural alignment — not just policy promises, but business model alignment — is why they’re our defaults.
This is an opinion piece, and we own it as one. We have no financial relationship with any provider mentioned beyond standard API agreements. Anthropic, Google, DeepSeek, Mistral, and Cohere don’t pay us to recommend them. OpenAI and Meta don’t pay us either — which is part of why we can write this honestly.
361 models, 61 providers, defaults chosen by values. That’s Zubnet.