Backups Nobody Runs: A Solo Builder's Data Survival Guide

Every solo builder I know has a database running somewhere. A Mac Mini under a desk, a $5 VPS, a Raspberry Pi in a closet. They've spent weeks building the application on top of it. They've spent zero time thinking about what happens when the disk fails.

The disk will fail. The question is whether you'll lose a day or lose everything.

The Uncomfortable Math

Hard drives have a mean time between failures measured in years, but that number is an average across millions of drives. Your specific drive could fail tomorrow. SSDs are better but not immune. Apple Silicon Macs have soldered storage with no replacement path. If your Mac Mini's SSD dies, the machine is finished. Every byte on it is gone.

That's the hardware story. The software story is worse. A bad migration can corrupt your PostgreSQL data directory in seconds. A mistyped DROP TABLE executes before your brain finishes processing the regret. rm -rf doesn't ask twice. I've watched a single accidental UPDATE without a WHERE clause overwrite 13,000 rows of production data. Recovery took four hours, and that was with a backup. Without one, the project would have been dead.

Then there's the SaaS risk. If you're storing critical data in a third-party service, your data exists at the pleasure of their terms of service. Accounts get suspended for false-positive fraud detection. Companies shut down with 30 days' notice. APIs change without warning and your export pipeline breaks silently. If you don't have a local copy of data you can't afford to lose, you don't have that data. You have access to it, temporarily.

The 20-Minute PostgreSQL Backup

The gap between "no backup" and "automated daily backup" is about 20 minutes of setup. Not 20 minutes of learning followed by hours of configuration. Twenty minutes total, from nothing to a working cron job that dumps your database every night.

Here's the exact setup I run.

First, create a directory for your backups and a script to manage them:

mkdir -p ~/backups/postgres
cat > ~/backups/postgres/backup.sh << 'SCRIPT'
#!/bin/bash
BACKUP_DIR="$HOME/backups/postgres"
TIMESTAMP=$(date +%Y-%m-%d_%H%M)
KEEP_DAYS=14

# Nightly dump in custom format (compressed, restorable with pg_restore)
pg_dump -Fc --no-owner your_database > "$BACKUP_DIR/db-$TIMESTAMP.dump"

# Prune dumps older than the retention window
find "$BACKUP_DIR" -name "db-*.dump" -mtime +$KEEP_DAYS -delete
SCRIPT
chmod +x ~/backups/postgres/backup.sh

The -Fc flag produces a custom-format dump. It's compressed, it supports selective restore, and it handles large objects correctly. Plain SQL dumps are human-readable but slower to restore and larger on disk. Use custom format.
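Selective restore means you can pull one table out of the dump without touching anything else. A quick sketch, assuming a dump named db-2025-06-01_0300.dump, a table called users, and a scratch database to restore into:

pg_restore --list ~/backups/postgres/db-2025-06-01_0300.dump
pg_restore -d scratch_db -t users --no-owner ~/backups/postgres/db-2025-06-01_0300.dump

The first command lists what's in the archive; the second restores just the one table.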

The find command at the end deletes dumps older than 14 days. Without this, you'll run out of disk space in a few months and the backup will fail silently because the disk is full. I learned this the specific way you'd expect.
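If you want to see what the retention rule will remove before trusting it with -delete, run the same expression with -print:

find ~/backups/postgres -name "db-*.dump" -mtime +14 -print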

Second, schedule it with cron:

crontab -e
# Add this line:
0 3 * * * /Users/you/backups/postgres/backup.sh >> /Users/you/backups/postgres/backup.log 2>&1

This runs at 3 AM every night. The log redirect means you can check backup.log to verify it's actually running, because a cron job you never check is a backup you don't have.
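Two commands the next morning tell you whether the job ran and produced something real:

tail -5 ~/backups/postgres/backup.log
ls -lht ~/backups/postgres/db-*.dump | head -3   # newest dumps first, with sizes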

On macOS, cron needs Full Disk Access in System Settings > Privacy & Security. Without it, the job will fail silently. This trips up everyone the first time. Add /usr/sbin/cron to the Full Disk Access list and you're set.
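The other macOS gotcha is PATH. Cron runs with a minimal one, so if PostgreSQL came from Homebrew, the job may not find pg_dump at all. One fix is a PATH line at the top of the crontab; the first directory below is Homebrew on Apple Silicon, the second is Homebrew on Intel:

PATH=/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin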

Getting Backups Off the Machine

A backup on the same disk as your database is better than nothing. Barely. If the disk fails, the power supply fries, or someone steals the machine, both copies are gone. A backup needs to exist on a different physical device, preferably in a different location.

Two approaches, in order of effort:

rsync to another machine on your network. If you have a second computer, a NAS, or even an external drive that stays plugged in, rsync handles this with one line:

rsync -avz --delete ~/backups/postgres/ user@other-machine:/backups/postgres/

Add this to your backup script after the pg_dump line. Total cost: zero dollars. Limitation: both machines are probably in the same room. A fire, a flood, a power surge on the same circuit takes both out.
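For the rsync to run unattended from cron, the SSH connection has to work without a password prompt. A key pair is the one-time setup, assuming you have shell access to the other machine:

ssh-keygen -t ed25519          # accept the defaults; leave the passphrase empty for unattended use
ssh-copy-id user@other-machine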

rclone to Backblaze B2 for offsite. This is the real answer. Backblaze B2 charges $0.006 per GB per month for storage. A typical PostgreSQL dump for a solo builder's application is 50-500 MB compressed. At the high end, you're paying $0.003 per month. Three tenths of a cent. The free tier covers the first 10 GB.

brew install rclone
rclone config

The interactive config walks you through connecting to B2. You'll need a Backblaze account and an application key. The setup takes about five minutes.
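Before wiring it into the script, confirm the remote actually works. Assuming you named the remote b2 during config:

rclone lsd b2:                    # lists your B2 buckets
rclone ls b2:your-bucket-name     # lists files in the bucket (empty for now)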

Once configured, add the sync to your backup script:

rclone sync ~/backups/postgres/ b2:your-bucket-name/postgres/ \
  --transfers 4 \
  --b2-hard-delete

The --b2-hard-delete flag makes deletions permanent: when the find command removes an old dump locally, the next sync deletes it from B2 outright instead of hiding it as an old version. Without the flag, those hidden versions accumulate in B2 indefinitely. With the 14-day retention from the find command, B2 always mirrors your local backup window. You keep two weeks of daily snapshots, offsite, for functionally zero cost.

The complete backup script, from dump to offsite sync, is about 15 lines. It runs unattended every night. Total cost: under $0.10 per month for most solo operations.
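Assembled from the pieces above, it looks roughly like this. Treat it as a sketch, not a drop-in file; swap in your own database name, bucket, and machines:

#!/bin/bash
set -euo pipefail   # if the dump fails, stop before pruning or syncing anything

BACKUP_DIR="$HOME/backups/postgres"
TIMESTAMP=$(date +%Y-%m-%d_%H%M)
KEEP_DAYS=14

# Nightly custom-format dump
pg_dump -Fc --no-owner your_database > "$BACKUP_DIR/db-$TIMESTAMP.dump"

# Optional: second copy on another machine on the network
# rsync -avz --delete "$BACKUP_DIR/" user@other-machine:/backups/postgres/

# Prune local dumps older than the retention window
find "$BACKUP_DIR" -name "db-*.dump" -mtime +$KEEP_DAYS -delete

# Mirror the local window offsite to Backblaze B2
rclone sync "$BACKUP_DIR/" b2:your-bucket-name/postgres/ --transfers 4 --b2-hard-delete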

The Backup You Have Never Restored

A backup you've never tested restoring is not a backup. It's a hope. And hope fails at 2 AM on a Sunday when your database won't start and you discover the dump file has been silently corrupted for the last three months because pg_dump was connecting to the wrong cluster.

Test your restore. Do it right now, before you need it.

createdb restore_test
pg_restore -d restore_test --no-owner "$(ls -t ~/backups/postgres/db-*.dump | head -1)"
psql -d restore_test -c "SELECT count(*) FROM your_important_table;"
dropdb restore_test

If the row count matches what you expect, your backup works. If pg_restore throws errors, you get to fix the problem now instead of during a crisis.

I run this check monthly. It takes 30 seconds for a small database, a few minutes for a large one. Every time it feels unnecessary right up until the time it catches something. I've found two problems in eight months of testing: once the dump was using a different PostgreSQL major version than the restore target, and once the -Fc flag had been accidentally removed so the dump was plain SQL but the restore was expecting custom format. Both would have been catastrophic discoveries during an actual failure.

Schedule the test if you don't trust yourself to remember. Add it to the same cron setup, running weekly on a different schedule than the daily backup:

# Weekly restore test, Sundays at 4 AM (one line: crontab entries can't be split with backslashes)
0 4 * * 0 createdb restore_test && pg_restore -d restore_test --no-owner ~/backups/postgres/db-$(date +\%Y-\%m-\%d)_0300.dump 2>> ~/backups/postgres/restore-test.log && psql -d restore_test -c "SELECT 'restore_ok'" >> ~/backups/postgres/restore-test.log 2>&1; dropdb restore_test 2>/dev/null

Check restore-test.log occasionally. If it stops showing restore_ok, something is wrong.
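One line counts the successful runs without opening the file:

grep -c restore_ok ~/backups/postgres/restore-test.log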

Beyond the Database

PostgreSQL is usually the highest-value target, but it's not the only thing worth backing up. Configuration files, environment variables, SSL certificates, cron jobs, and application code all matter. Losing your database is painful. Losing your database and the configuration needed to rebuild the server that runs it is a different category of problem.

For configuration and dotfiles, a Git repository is the right tool. Not a backup script. Version-controlled configuration means you can see what changed and when, roll back specific changes, and rebuild a machine from a single git clone. Keep secrets out of the repo (use .gitignore and reference a secrets manager or encrypted file), but everything else belongs in version control.
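What belongs in the .gitignore depends on your setup, but a minimal one for a config repo looks something like this:

# keep secrets out of the repo
.env
*.key
*.pem
secrets/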

For application code, if it's not already in Git, stop reading and fix that first. Your code should exist in at least two places at all times: your local machine and a remote repository. GitHub, GitLab, a self-hosted Gitea instance, it doesn't matter. What matters is that your laptop getting run over by a bus doesn't mean your codebase ceases to exist.

For SaaS data you can't afford to lose, set up periodic exports. Most services offer an API or a data export feature. A weekly cron job that pulls your critical data into a local JSON or CSV file costs nothing and means a vendor shutdown gives you inconvenience instead of catastrophe.
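The shape of that job is one cron line. This sketch uses a made-up endpoint and token; replace them with whatever export your vendor actually offers:

# Weekly SaaS export, Mondays at 5 AM (endpoint and token are placeholders)
0 5 * * 1 mkdir -p ~/backups/saas && curl -sf -H "Authorization: Bearer YOUR_API_TOKEN" https://api.example.com/v1/export -o ~/backups/saas/export-$(date +\%Y-\%m-\%d).json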

The Catch

This entire setup has a single point of failure: you. Automated backups work until something changes and they don't. A macOS update resets cron permissions. A disk fills up. A password rotates and the B2 connection fails. The backup script runs successfully every night for six months, then silently stops, and you don't notice because you stopped checking the log.

Monitoring is the piece most solo builders skip. Enterprise teams have PagerDuty and on-call rotations. Solo builders have a log file they opened once. The minimum viable monitoring is a script that checks whether today's backup file exists and is larger than zero bytes, and sends you an alert if it isn't:

#!/bin/bash
# Alert if last night's 3 AM dump is missing or zero bytes.
TODAY=$(date +%Y-%m-%d)
BACKUP="$HOME/backups/postgres/db-${TODAY}_0300.dump"

if [ ! -f "$BACKUP" ] || [ ! -s "$BACKUP" ]; then
  echo "Backup missing or empty: $BACKUP" | \
    mail -s "BACKUP FAILED" you@yourdomain.com
fi

Run this at 4 AM, an hour after the backup. If the backup didn't produce a file, you get an email. This is crude, but crude monitoring that runs beats sophisticated monitoring you never set up.
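Assuming you saved that check as ~/backups/postgres/check.sh and made it executable, the cron entry is one more line:

# Daily backup check, an hour after the 3 AM dump
0 4 * * * /Users/you/backups/postgres/check.sh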

The other catch is discipline. Setting up backups takes an afternoon. Maintaining them takes ongoing attention in small doses: checking logs, testing restores, updating scripts when paths or passwords change. The solo builder who sets up a perfect backup system in January and ignores it until August is not meaningfully better off than the one who never set it up at all.

What You Own Versus What You Rent

The solo builder stack is built on ownership. You own your server, your database, your deployment pipeline. That ownership means you're responsible for things that managed services handle invisibly, and backups are the most consequential item on that list.

Managed databases handle backups automatically. That's a real advantage, and it's worth the $50-$200 per month if your application generates enough revenue to justify it. But if you're running on a Mac Mini or a $5 VPS because the economics make sense at your scale, the backup responsibility is yours. Not later. Not when you have more revenue. Now, while the stakes are low enough that learning from a mistake costs you a day instead of a business.

Twenty minutes for the script. Five minutes for B2. Thirty seconds a month to test the restore. The cost of not doing it is everything you've built, sitting on a single disk, with no second copy, waiting for the failure that's coming whether you prepare for it or not.