Opinion

AI Ethics Isn’t a Feature. It’s a Foundation.

Most platforms bolt on ethics after launch — a privacy page here, a trust badge there. We built Zubnet around ethical decisions from day one. Here’s what that actually costs, what it looks like in practice, and why we’d do it all again.
Pierre-Marcel De Mussac & Sarah Chen · March 2026 · 8 min read

There’s a playbook in tech. You ship fast, grow fast, figure out the ethics later. Maybe you hire a “Trust & Safety” team in year three. Maybe you add a privacy page when the EU comes knocking. The ethics are always an afterthought — a checkbox on the way to an IPO.

We did it backwards.

When we started building Zubnet, the first decisions weren’t about features or pricing. They were about values. What data do we collect? Where do we host? Which providers do we trust with our users’ inputs? How do we make money without selling anyone out?

Those decisions shaped everything that followed. And they cost us — in money, in speed, in saying no to easy wins. But they built something we can actually stand behind.

Self-Hosted Analytics: No Google, No Tracking

The first thing most startups do is drop a Google Analytics snippet into their site. Free. Powerful. Industry standard. And it sends every user’s browsing data to one of the largest advertising companies on Earth.

We use Plausible Analytics. Self-hosted. Open source. No cookies. No personal data collection. No cross-site tracking. No advertising profiles built from our users’ behavior.

The cost difference is real. Google Analytics is free because you’re the product. Plausible costs us hosting resources and maintenance time. But our users’ browsing patterns aren’t feeding an ad network, and we don’t have to write a 4,000-word cookie consent banner explaining why Google needs to know which AI models they tried.
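Cookieless analytics of this kind typically work by replacing a persistent cookie with a one-way hash that rotates daily, so no stable cross-day identifier ever exists. Here is a minimal sketch of that pattern — an illustration of the general technique, not Zubnet's or Plausible's actual implementation; the function name and salt format are made up:

```python
import hashlib

def visitor_id(ip: str, user_agent: str, daily_salt: str) -> str:
    """Cookieless visitor fingerprint.

    The salt rotates every day, so the same visitor hashes to a
    different value tomorrow -- there is no stable identifier to
    build an advertising profile on, and nothing is stored on the
    visitor's device.
    """
    return hashlib.sha256(f"{daily_salt}{ip}{user_agent}".encode()).hexdigest()
```

Because the hash is one-way and the salt is discarded at rotation, yesterday's raw inputs cannot be recovered from today's counts.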

If your analytics tool is free, you’re paying with your users’ data. That’s not a trade we’re willing to make.

No Data Selling. No Training on User Inputs.

This one should be obvious. It isn’t.

Many AI platforms — especially the “free tier” ones — use your inputs to train their models. Your conversations, your prompts, your creative work — all fed back into the machine. Read the fine print. It’s usually in paragraph 14 of the terms of service.

Zubnet doesn’t train on user inputs. We don’t sell user data. We don’t aggregate it for insights. We don’t share it with “partners.” When you send a prompt through our platform, it goes to the AI provider you selected, and the response comes back. We log what’s necessary for billing and debugging, and that’s it.

We also don’t store your API keys beyond the encrypted session. If you use BYOK (Bring Your Own Key), your key is encrypted at rest and only decrypted when making the actual API call. We can’t read it. We don’t want to.
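The encrypt-at-rest, decrypt-only-at-call-time pattern can be sketched in a few lines. This is a simplified illustration of the idea, not Zubnet's actual code — the class and method names are invented, and it assumes the widely used `cryptography` package for symmetric encryption:

```python
from cryptography.fernet import Fernet

class KeyVault:
    """Stores provider API keys encrypted with a server-side master key."""

    def __init__(self, master_key: bytes):
        self._cipher = Fernet(master_key)
        self._store: dict[str, bytes] = {}  # user_id -> ciphertext only

    def save_key(self, user_id: str, api_key: str) -> None:
        # Only ciphertext is ever persisted.
        self._store[user_id] = self._cipher.encrypt(api_key.encode())

    def call_provider(self, user_id: str, make_request) -> str:
        # Decrypt only here, inside the scope of the outbound call;
        # the plaintext key never leaves this function.
        plaintext = self._cipher.decrypt(self._store[user_id]).decode()
        return make_request(plaintext)
```

The point of the structure is that there is no read path for the plaintext key: anything outside the call path only ever sees ciphertext.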

Clean Energy Servers

Our infrastructure runs on Canadian and Finnish hosting. That’s a deliberate choice.

Our application server and database are hosted in Canada — powered primarily by Québec’s hydroelectric grid, one of the cleanest energy mixes in the world. Our GPU server runs in Finland, where the grid is powered largely by wind and nuclear. Both countries have strong data protection laws that go beyond the baseline.

Our infrastructure energy sources:

• Application & database servers: Canada (Québec hydroelectric)
• GPU inference server: Finland (wind + nuclear)
• CDN: BunnyCDN (carbon-neutral operations)
• No servers in jurisdictions with weak data protection

Could we get cheaper compute in Virginia or Texas? Absolutely. AWS us-east-1 has the largest spot market in the world. But those data centers run on coal and natural gas, and the data protection landscape in the US is … complicated. We chose to pay more for infrastructure we can be proud of.

Ethical Provider Curation

We integrate 61 providers and offer 361 models. But those numbers could be higher. We say no to providers who don’t meet our standards.

Before integrating any provider, we evaluate:

Privacy practices — What do they do with API inputs? Do they train on user data?
Transparency — Do they publish model cards? Do they disclose training data sources?
Reliability — Do they communicate changes? Do they have a deprecation process?
Safety — Do they have content moderation? Do they respond to safety reports?

This isn’t a perfect science. Some providers are opaque about their practices. Some are great on paper but inconsistent in practice. We make judgment calls, and we revisit them regularly. Our automated API monitor helps — if a provider starts behaving unpredictably, we notice fast.

We’ve declined integrations that would have been commercially attractive because the provider’s data practices didn’t meet our bar. We won’t name them here, but the pattern is common: free or cheap API access in exchange for training rights on your data.

BYOK Transparency

BYOK — Bring Your Own Key — is a feature we built early, and it’s a feature that keeps us honest.

When a user connects their own API key for OpenAI, Anthropic, or any other provider, they can see exactly what they’re being charged by the provider directly. There’s no hidden markup. No mystery fees. The user pays the provider at the provider’s rate, and they pay us for the platform — two separate, transparent transactions.

This is unusual. Most AI aggregators either don’t offer BYOK at all (because they want to control the billing relationship) or they add an opaque surcharge on top. We decided that transparency matters more than margin. If our platform is good enough, people will pay for it regardless of whether they’re using our API credits or their own.

Content Moderation: Built In, Not Bolted On

We built content moderation into the platform from the beginning. Not as a response to an incident. Not as a legal requirement. As a core feature.

Our reporting system is a full DDD bounded context — domain-driven design with proper entities, value objects, and application services. Users can report any piece of content through a consistent UI across all content types. Reports go to an admin review queue with structured workflows for review and dismissal.

This sounds basic. It isn’t. Most platforms launch without any content moderation and then scramble to add it when something goes wrong. By then, the architecture fights you — you’re bolting a reporting system onto entities that were never designed for it, adding nullable foreign keys and optional relations everywhere.

We designed the content entities with reportability in mind from day one. The report system was in the first production release. It has never been an afterthought.
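In DDD terms, that shape — an entity with a lifecycle, a value object identifying the reported content, and invariants enforced in the domain — looks roughly like this. A minimal sketch of the pattern, not Zubnet's actual model; all names here are illustrative:

```python
from dataclasses import dataclass, field
from enum import Enum
import uuid

class ReportStatus(Enum):
    PENDING = "pending"
    UPHELD = "upheld"
    DISMISSED = "dismissed"

@dataclass(frozen=True)
class ContentRef:
    """Value object: identifies any reportable content by type + id,
    so every content type shares one reporting path."""
    content_type: str
    content_id: str

@dataclass
class Report:
    """Entity: a user report moving through the admin review workflow."""
    reporter_id: str
    content: ContentRef
    reason: str
    status: ReportStatus = ReportStatus.PENDING
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def dismiss(self) -> None:
        # Workflow invariant lives in the domain, not the controller.
        if self.status is not ReportStatus.PENDING:
            raise ValueError("only pending reports can be dismissed")
        self.status = ReportStatus.DISMISSED
```

Because `ContentRef` abstracts over content types, new entities become reportable without nullable foreign keys scattered through the schema — which is the retrofit pain the paragraph above describes.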

The Real Cost

Let’s be honest about what ethical infrastructure costs.

The price of doing it right:

Higher hosting bills — Canadian and Finnish servers cost more than US bulk compute
Slower provider onboarding — We evaluate before integrating; competitors just add everyone
Lost revenue — Declining providers with bad data practices means fewer models to advertise
No ad revenue — Self-hosted analytics means no Google ad targeting data
Engineering time — Building moderation, encryption, and audit systems from scratch
Competitive disadvantage — Platforms that cut corners ship faster and look bigger

Every one of these costs is real. Every one of these costs has made us slower or more expensive compared to platforms that don’t make these choices. We paid $114 for a single Facebook ad signup before killing the campaign entirely — partly because our honest messaging doesn’t convert as well as hype.

The question is: slower than what? Slower than a platform that’ll fold your data into a training set? Slower than one that’ll sell your usage patterns to advertisers? Slower than one that’ll have a PR crisis in two years when someone discovers what they do with user inputs?

We’ll take slower.

Ethics as Architecture

Here’s the thing most people get wrong about ethics in technology: they treat it as a policy layer. Write some rules, publish a principles page, hire a compliance team. Box checked.

But ethics aren’t a layer. They’re architecture. They shape the decisions you make at the database level, the API level, the infrastructure level. Once you build a system that profits from user data, you can’t unbuild it. The incentives are locked in. The architecture depends on it. The business model requires it.

That’s why you have to decide early. Not because it’s noble. Because it’s structural. The choices you make in the first thousand lines of code determine what’s possible in the next million.

We chose not to collect what we don’t need. We chose not to sell what we don’t own. We chose infrastructure we can defend. And those choices made everything downstream cleaner — simpler privacy policies, straightforward terms of service, honest marketing.

You can’t retrofit integrity. Either you build on it, or you build around the lack of it.

What We’d Tell Other Builders

If you’re starting an AI platform — or any platform that handles user data — make the hard decisions before you write the first line of code. Not after launch. Not after the Series A. Not after the first privacy incident. Before.

Three questions every AI builder should answer on day one:

1. What data do you collect, and who has access to it?
2. How do you make money — and does that model require exploiting user data?
3. If a journalist wrote about your data practices tomorrow, would you be comfortable with the headline?

If you can’t answer all three clearly and proudly, rethink your architecture before you ship.

We can answer all three. That’s not because we’re better than anyone else. It’s because we decided early, paid the cost, and built accordingly. Ethics isn’t a feature you add. It’s a foundation you build on — or a crack that grows underneath everything.


Zubnet is a two-person operation. We don’t have a legal team, a PR department, or an ethics board. What we have is a codebase that reflects our values and infrastructure we can explain to anyone who asks. That’s enough.

See for yourself at zubnet.com — 361 models, 61 providers, zero data selling.

Pierre-Marcel De Mussac & Sarah Chen
Zubnet · March 2026