If you're building a SaaS product, you'll eventually have to make a decision about how to handle multiple customers — called "tenants" in this world — sharing the same application. That decision will shape your infrastructure bill, your security story, your migration strategy, and how fast you can onboard the next 1,000 customers. Most teams make this call by accident. It's worth making it on purpose.
We've built multi-tenant systems across the spectrum, from scrappy MVPs serving a handful of companies to platforms running thousands of tenants on shared infrastructure. This post walks through the three main architectural approaches in plain language, what each actually feels like to operate, and how to pick one without overthinking it.
What "multi-tenant" actually means
A tenant is a customer who should see only their own data. If you're building a project management tool and Acme Corp and Widgets Inc both use it, neither should ever see the other's projects, users, or files. How you enforce that — in the database, in the application code, or in separate infrastructure — is the architecture question.
Three common approaches:
- Shared database, shared schema — one database, one set of tables, a tenant_id column on every tenant-specific row.
- Shared database, separate schema per tenant — one database, but each tenant gets their own schema (namespace).
- Separate database per tenant — each tenant gets their own full database.
Each has real tradeoffs. Let's walk through them.
Approach 1: Shared schema (the default, for good reason)
In this model, every tenant's data lives in the same tables. A projects table has a tenant_id column. Every query the application makes filters by tenant_id. The database sees all tenants; your application code is responsible for making sure tenants never see each other.
What it looks like in practice:
```sql
CREATE TABLE projects (
  id UUID PRIMARY KEY,
  tenant_id UUID NOT NULL,
  name TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_projects_tenant ON projects(tenant_id);
```
Every query includes WHERE tenant_id = $1. Most ORMs can automate this with query scopes or middleware.
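If you are wiring this up by hand rather than leaning on an ORM, the scoping pattern reduces to a small helper that refuses to run a tenant-owned query without a tenant. A minimal sketch (the function name and shape are ours, not from any particular library):

```python
# Minimal tenant-scoping helper: every query against a tenant-owned table
# gets a tenant_id filter appended, and running without a tenant is an error.
# (Illustrative sketch, not tied to a specific ORM or driver.)

def scoped_query(base_sql: str, tenant_id: str, *params):
    """Append a tenant_id filter; return (sql, params) ready to execute."""
    if tenant_id is None:
        raise ValueError("refusing to run a tenant-scoped query without a tenant")
    joiner = " AND " if " where " in base_sql.lower() else " WHERE "
    return base_sql + joiner + "tenant_id = %s", (*params, tenant_id)

sql, args = scoped_query("SELECT id, name FROM projects", "acme-uuid")
# sql  -> "SELECT id, name FROM projects WHERE tenant_id = %s"
# args -> ("acme-uuid",)
```

The point is less the string manipulation than the failure mode: the unsafe path (no tenant) raises instead of silently returning everyone's rows.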
Why teams pick it:
- Cheapest to run. One database, one set of indexes, one backup pipeline.
- Easiest to evolve the schema — one ALTER TABLE updates everyone at once.
- Simplest reporting across all tenants (your own analytics, not theirs).
- Fastest to build and onboard new tenants (just insert a row in tenants).
Why teams regret it:
- A single bug that forgets WHERE tenant_id = ... is a data breach. This is a real, career-defining risk.
- Noisy-neighbor problems: one tenant's heavy query can slow everyone down.
- Compliance-driven customers (healthcare, finance, government) often require stronger isolation than "trust me, we filter."
- Backup-and-restore for a single tenant is awkward (you're restoring all tenants or hand-extracting).
Our default recommendation: shared schema with Postgres Row-Level Security (RLS). RLS lets you define a policy at the database level that automatically filters rows by tenant_id based on a session variable, so even a buggy query can't leak data across tenants. We've used this pattern on products serving thousands of tenants and it holds up well.
```sql
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON projects
  USING (tenant_id = current_setting('app.current_tenant')::uuid);
```
The application sets SET app.current_tenant = '...' at the start of every request. The database does the rest.
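In application code this is one statement at the top of each request. A sketch using Postgres's set_config function, which is equivalent to SET but lets you pass the tenant id as a bound parameter and scope it to the current transaction (the connection object here stands in for whatever driver you use):

```python
# Set the tenant for the current transaction before any queries run.
# set_config(..., is_local=true) scopes the value to the transaction, so a
# pooled connection can't leak one tenant's setting into the next request.
# (Sketch: `conn` stands in for a psycopg-style DB-API connection.)

def begin_tenant_request(conn, tenant_id: str):
    cur = conn.cursor()
    cur.execute(
        "SELECT set_config('app.current_tenant', %s, true)",
        (tenant_id,),
    )
    return cur
```

The transaction-local scoping matters more than it looks: with a connection pool, a session-level SET would survive the request and apply the wrong tenant to whoever borrows that connection next.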
Approach 2: Schema-per-tenant
In this model, each tenant gets their own Postgres schema (or equivalent in other databases). Acme Corp's projects table lives in tenant_acme.projects. Widgets Inc lives in tenant_widgets.projects. Same database server, same connection pool, but logically isolated namespaces.
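Provisioning a new tenant then means creating a schema and its tables. One sharp edge: the schema name ends up interpolated into DDL (it can't be a bound parameter), so it must be derived from a sanitized slug, never taken from user input verbatim. A sketch (the tenant_ prefix and naming convention are ours):

```python
import re

# Map a tenant slug to a safe schema name. Identifiers can't be bound
# parameters in DDL, so we whitelist characters instead of interpolating
# raw input. (Sketch; the tenant_ prefix is just our convention.)

def schema_name(slug: str) -> str:
    cleaned = re.sub(r"[^a-z0-9_]", "_", slug.lower())
    if not cleaned or cleaned[0].isdigit():
        raise ValueError(f"bad tenant slug: {slug!r}")
    return f"tenant_{cleaned}"

def provisioning_ddl(slug: str) -> list:
    schema = schema_name(slug)
    return [
        f"CREATE SCHEMA {schema}",
        f"CREATE TABLE {schema}.projects ("
        "id UUID PRIMARY KEY, name TEXT NOT NULL)",
    ]
```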
Why teams pick it:
- Stronger isolation than shared schema. No tenant_id column to forget.
- Per-tenant backup and restore becomes tractable.
- Tenant-specific customizations (extra columns, indexes) are possible.
- Still cheaper than separate databases.
Why teams regret it:
- Schema migrations become painful. A migration that takes 10 seconds on one schema takes 10,000 seconds across 1,000 schemas, often one at a time.
- Many tools, ORMs, and connection poolers don't handle large numbers of schemas well. Postgres in particular starts to feel it past a few thousand schemas.
- Cross-tenant queries (your analytics) require UNION ALL across every schema, which doesn't scale.
- Connection pooling gets tricky — you need search_path set correctly per connection.
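The migration pain is easy to underestimate. In practice you end up with a loop like the one below, whose runtime is linear in tenant count, and whose failure mode is a fleet of schemas stuck at mixed versions. A sketch (run_sql stands in for your actual migration runner):

```python
# Apply one migration across every tenant schema, one at a time. With 1,000
# schemas, a 10-second migration is roughly 3 hours of sequential work, and a
# failure partway through leaves schemas at mixed versions -- so we record
# which schemas succeeded and which need a retry. (Illustrative sketch.)

def migrate_all(schemas, migration_sql, run_sql):
    applied, failed = [], []
    for schema in schemas:
        try:
            run_sql(f"SET search_path TO {schema}")
            run_sql(migration_sql)
            applied.append(schema)
        except Exception:
            failed.append(schema)  # retry these later; don't halt the fleet
    return applied, failed
```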
When it makes sense: You have a moderate number of tenants (dozens to low hundreds), each with enough data that shared schema noisy-neighbor problems would hurt, and you need per-tenant backup/restore but don't want to run separate database servers.
We've used this approach for products where each tenant was a mid-sized enterprise with meaningful data volume. It worked, but migrations eventually became the bottleneck.
Approach 3: Database-per-tenant
Each tenant gets their own database — sometimes even their own database server. Complete physical isolation.
Why teams pick it:
- Maximum isolation. A bug in one tenant's instance cannot affect another.
- Per-tenant performance tuning is possible.
- Compliance and data residency become much simpler ("Your data lives in a dedicated database in the region you specified.")
- Noisy neighbors literally can't exist.
Why teams regret it:
- Most expensive to run by a wide margin. Every tenant pays the fixed cost of a database.
- Operational overhead explodes. Backups, migrations, monitoring, and upgrades happen N times.
- Onboarding a new tenant takes minutes instead of milliseconds (you're provisioning infrastructure).
- Cross-tenant analytics become a data pipeline problem.
When it makes sense: Enterprise customers paying six or seven figures who require dedicated infrastructure, or regulated industries where data isolation is a hard compliance requirement. Also makes sense if you have a small number of very large tenants.
The decision framework we actually use
For most SaaS products at most stages, the right answer is shared schema with RLS. We recommend moving off it only when specific pressure forces the move:
- Start with shared schema + RLS. You'll be fine here until you have real scale or real compliance pressure.
- Move to schema-per-tenant if you hit per-tenant backup/restore requirements or if a few tenants are dominating query time.
- Move to database-per-tenant only for specific enterprise customers — usually as a paid tier — not as the default architecture.
A hybrid approach is common in practice: most tenants on shared infrastructure, with a handful of enterprise customers on dedicated databases. The application layer abstracts over the difference so the product team doesn't think about it.
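That abstraction is usually just a routing function: given a tenant, return the right connection target, so the rest of the codebase never knows which tier a tenant is on. A sketch (the DSN strings and registry shape are illustrative):

```python
# Route a tenant to its database. Most tenants resolve to the shared pool;
# the few enterprise tenants with dedicated databases live in an override
# map. (Sketch: DSNs and registry shape are illustrative, not prescriptive.)

SHARED_DSN = "postgres://app@shared-db/main"

DEDICATED = {
    "bigcorp": "postgres://app@bigcorp-db/main",  # dedicated enterprise tier
}

def dsn_for(tenant_slug: str) -> str:
    return DEDICATED.get(tenant_slug, SHARED_DSN)
```

In a real system the override map lives in the tenants table rather than in code, but the shape is the same: one lookup, and everything downstream is tier-agnostic.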
Things that bite teams regardless of approach
A few gotchas that apply no matter which model you pick:
- Your first tenant is always the test one. Don't put a real customer on day one of a new architecture. Use your own staging tenant to shake out migration scripts first.
- tenant_id should be on everything, including logs. Your observability platform needs to let you filter by tenant. When a customer says "it's slow," you need to see their traces specifically.
- Rate limiting is per-tenant, not global. Otherwise one tenant's burst traffic rate-limits everyone else.
- Think about "the shared tables" carefully. Usually there's a tenants table, a users table (if users belong to multiple tenants), and a billing table. These don't fit the per-tenant pattern and need their own rules.
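On the rate-limiting point: per-tenant limiting usually just means keying the limiter on tenant id instead of keeping one global counter. A minimal token-bucket sketch (limits and names are illustrative; in production this state typically lives in Redis, not in-process):

```python
import time
from collections import defaultdict

# One token bucket per tenant: a burst from tenant A drains only A's bucket,
# so it can't rate-limit tenant B. (In-process sketch for illustration.)

class TenantRateLimiter:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.burst = rate_per_sec, burst
        self.tokens = defaultdict(lambda: float(burst))
        self.last = defaultdict(time.monotonic)

    def allow(self, tenant_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[tenant_id]
        self.last[tenant_id] = now
        # Refill this tenant's bucket for the time elapsed, capped at burst.
        self.tokens[tenant_id] = min(
            self.burst, self.tokens[tenant_id] + elapsed * self.rate
        )
        if self.tokens[tenant_id] >= 1:
            self.tokens[tenant_id] -= 1
            return True
        return False
```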
The cost story in round numbers
Rough numbers from the projects we've shipped:
- Shared schema: a single modest Postgres instance can comfortably serve several hundred tenants at low-to-moderate volumes. Monthly infra cost for the database layer is often a few hundred dollars.
- Schema-per-tenant: same instance scales, but tooling overhead grows. You'll spend more engineering hours per migration.
- Database-per-tenant: on a typical managed Postgres provider, each dedicated instance runs $50–200+/month minimum. 500 tenants = $25K–$100K/month just in database fees.
The cheapest approach isn't always the right one, but the cost curve is real and worth planning around.
Where to go from here
If you're architecting a new SaaS product or feeling the pain of a decision you made early, the good news is that multi-tenancy is a solvable problem with well-understood tradeoffs. Pick intentionally, design the escape hatch before you need it, and be willing to have different tiers of customers on different physical isolation models.
Our team has shipped multi-tenant SaaS across the spectrum — if you want a second opinion on your architecture, see our SaaS development services or reach out to talk it through.
