How 31 Products Share One Database
Multi-tenant architecture for the solo founder
People keep asking how I'm going to manage 31 separate databases.
I'm not. That would be insane.
Here's the actual setup—and why it's simpler than you'd think.
The panic moment
When I first mapped out all 31 products, I imagined the infrastructure: 31 PostgreSQL instances. 31 Redis clusters. 31 sets of credentials. 31 things to monitor.
My laptop would catch fire just thinking about it.
There had to be a better way.
One database server, many databases
TimescaleDB (my PostgreSQL of choice) can host multiple databases in one instance. Each product gets its own database, but they share the same server:
timescaledb/
├── recall_development
├── recall_production
├── reflex_development
├── reflex_production
├── pulse_development
└── ... (you get the idea)
Same server. Separate databases. Clean isolation. If Recall has a bug that corrupts data, Reflex is untouched.
In the Rails apps, each product just points to its own database:
```yaml
# recall/config/database.yml
production:
  url: postgres://user:pass@timescaledb:5432/recall_production

# reflex/config/database.yml
production:
  url: postgres://user:pass@timescaledb:5432/reflex_production
```
Nothing fancy. Standard Rails.
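The only extra step is making sure each database actually exists on the shared server. Standard Rails tooling covers that too; the sketch below is illustrative, and the psql variant assumes the brainzlab user that appears in the compose file further down:

```bash
# Either let Rails create the database defined in database.yml...
bin/rails db:create db:migrate

# ...or create it directly on the shared TimescaleDB instance
psql -h timescaledb -U brainzlab -d postgres -c "CREATE DATABASE recall_production;"
```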
Redis: the database number trick
Redis has this neat feature I'd forgotten about: database numbers. Out of the box, you get databases 0-15 in a single Redis instance.
redis://localhost:6379/0 → default
redis://localhost:6379/1 → Recall
redis://localhost:6379/2 → Reflex
redis://localhost:6379/3 → Pulse
redis://localhost:6379/4 → Vault
Same server. Isolated keyspaces. A cache key in Recall will never collide with one in Reflex.
I assign each product a database number and move on. Simple.
One wrinkle: the default 16 databases won't stretch to 31 products. That's a one-line fix, though. The databases directive in redis.conf (or redis-server --databases 64) raises the limit.
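Inside each Rails app, nothing has to know about the trick; the database number travels inside REDIS_URL. A minimal sketch, assuming Redis backs the Rails cache and Sidekiq (the post doesn't pin down exactly what Redis is used for, so treat the details as illustrative):

```ruby
# config/environments/production.rb (Recall)
# REDIS_URL already carries the database number, e.g. redis://redis:6379/1
Rails.application.configure do
  config.cache_store = :redis_cache_store, { url: ENV.fetch("REDIS_URL") }
end

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV.fetch("REDIS_URL") }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV.fetch("REDIS_URL") }
end
```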
Traefik ties it all together
Each product runs on its own port:
| Product | Port |
|---------|------|
| Recall | 3001 |
| Reflex | 3002 |
| Pulse | 3003 |
| Vault | 3006 |
Traefik (my reverse proxy) routes by subdomain:
```yaml
# docker-compose.yml
recall:
  labels:
    - "traefik.http.routers.recall.rule=Host(`recall.localhost`)"
```
Request comes to recall.localhost → Traefik sends it to port 3001 → Recall handles it.
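The label above only covers the routing rule. The Traefik side of the compose file looks roughly like this; it's a sketch that assumes Traefik v2-style labels and that each Rails app reads its PORT from the environment, not the exact production config:

```yaml
services:
  traefik:
    image: traefik:v2.11
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  recall:
    image: brainzllc/recall:latest
    environment:
      PORT: "3001"   # Puma's default config reads PORT
    labels:
      - "traefik.http.routers.recall.rule=Host(`recall.localhost`)"
      - "traefik.http.routers.recall.entrypoints=web"
      # tell Traefik which container port Recall listens on
      - "traefik.http.services.recall.loadbalancer.server.port=3001"
```

Because Traefik watches the Docker socket, product number 32 is just another service with a few labels.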
One command starts everything: docker-compose up. The whole stack comes alive.
The docker-compose.yml
Here's what it actually looks like:
```yaml
services:
  timescaledb:
    image: timescale/timescaledb:latest-pg15
    environment:
      POSTGRES_USER: brainzlab
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

  redis:
    image: redis:7-alpine

  recall:
    image: brainzllc/recall:latest
    environment:
      DATABASE_URL: postgres://brainzlab:${POSTGRES_PASSWORD}@timescaledb:5432/recall_production
      REDIS_URL: redis://redis:6379/1
    depends_on:
      - timescaledb
      - redis

  reflex:
    image: brainzllc/reflex:latest
    environment:
      DATABASE_URL: postgres://brainzlab:${POSTGRES_PASSWORD}@timescaledb:5432/reflex_production
      REDIS_URL: redis://redis:6379/2
    depends_on:
      - timescaledb
      - redis

  # ... and so on for each product
```
One file. The entire infrastructure. I can spin up the whole company on any machine with Docker.
"But that's not microservices!"
I know. I don't care.
The microservices purists would say each product needs its own database cluster. Its own Redis. Complete isolation.
But here's my reality: I'm one person. Managing 31 infrastructure stacks would consume all my time.
Shared infrastructure means:
- One thing to monitor
- One thing to back up (see the sketch below)
- One thing to scale (when the time comes)
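"One thing to back up" means exactly that: a single dump against a single server instead of 31 separate jobs. A rough sketch (host and user as in the compose file; scheduling and storage are up to you):

```bash
# Dump every product database from the one shared server
# (PGPASSWORD or a .pgpass file handles authentication)
pg_dumpall -h timescaledb -U brainzlab --clean > backup_$(date +%F).sql
```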
I can always split later. For now, this works.
Local development is identical
The killer feature: my laptop runs the exact same stack as production.
git clone https://github.com/brainz-lab/stack.git
cd stack
docker-compose up
All 31 products. Running locally. Same architecture. Same configs. No "works on my machine" surprises.
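The one piece you supply yourself is the .env file that docker-compose reads ${POSTGRES_PASSWORD} from. Something like this (illustrative; the real value obviously stays out of the repo):

```
# .env, sitting next to docker-compose.yml and never committed
POSTGRES_PASSWORD=use-a-long-random-string
```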
When I test Recall locally, it's hitting the same TimescaleDB setup it'll use in production. When I debug Vault, it's the same Redis configuration.
This alone has saved me countless hours of debugging environment differences.
What I'd change
Honestly? Not much.
If I were building a single product, I'd probably still do this. One database server, one Redis, simple infrastructure.
The only thing I might add later: read replicas when query load gets heavy. But that's a good problem to have.
For now, boring infrastructure is best infrastructure.
— Andres