In November 2025, a self-hosted Gitea instance used by a 50-person development team lost all data when a faulty RAID controller corrupted both drives in a mirror simultaneously. The team had backups: daily mysqldump files saved to the same server. When the server died, the backups died with it. Two years of Git history, issue trackers, CI/CD configurations, and documentation were gone permanently. The cost of rebuilding exceeded $200,000 in developer time.
This story repeats itself thousands of times per year. The problem is never "we did not have backups"; it is "our backups were not offsite," "our backups were not encrypted," "our backups were not tested," or "our backup retention was too short to recover from gradual data corruption." This guide implements the complete 3-2-1 backup strategy with automated scripts, encrypted offsite storage, and, most critically, automated restore testing.
The 3-2-1 Rule Explained
The 3-2-1 backup rule is simple but non-negotiable:
- 3 copies of your data: the production data plus two backups
- 2 different storage media: local disk + object storage (or local disk + external drive)
- 1 offsite location: at least one copy must be in a different physical location (different data center, different cloud provider, or different geographic region)
For self-hosted applications, the practical implementation is: (1) production data on the server, (2) local backup on a separate disk or volume, and (3) encrypted offsite backup on Backblaze B2, Wasabi, or AWS S3 Glacier. The offsite backup is the one that saves you when the server is completely lost.
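A quick sanity check before any scripting: confirm that the local backup copy really lives on a different device than the production data. A minimal sketch, assuming PostgreSQL data under /var/lib/postgresql and local backups under /opt/backups (both paths are assumptions; adjust them to your layout):
# Compare the block devices backing the data directory and the backup directory.
# Paths below are assumptions; substitute your own data and backup locations.
DATA_DEV=$(df --output=source /var/lib/postgresql | tail -1)
BACKUP_DEV=$(df --output=source /opt/backups | tail -1)
if [ "$DATA_DEV" = "$BACKUP_DEV" ]; then
    echo "WARNING: backups and data share the same device ($DATA_DEV)."
    echo "A single disk failure would destroy copy 1 and copy 2 together."
fi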
Automated Database Backups with Rotation
Database backups are the most critical backup type. Your files can often be regenerated, but your database contains user data, orders, configurations, and application state that cannot be recreated:
#!/bin/bash
# /opt/backups/scripts/backup-database.sh
set -euo pipefail
# Configuration:
BACKUP_DIR="/opt/backups/database"
RETENTION_DAYS=30
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/backups/database.log"
# Create directories:
mkdir -p "$BACKUP_DIR" "$(dirname $LOG_FILE)"
log() {
    echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $1" | tee -a "$LOG_FILE"
}
log "Starting database backup..."
# === PostgreSQL Backup ===
if command -v pg_dump &>/dev/null; then
    PG_BACKUP="$BACKUP_DIR/postgres_$TIMESTAMP.sql.gz"
    log "Backing up PostgreSQL..."
    # pg_dumpall captures every database plus roles and tablespaces as plain SQL:
    sudo -u postgres pg_dumpall | gzip -9 > "$PG_BACKUP"
    PG_SIZE=$(du -h "$PG_BACKUP" | cut -f1)
    log "PostgreSQL backup complete: $PG_BACKUP ($PG_SIZE)"
fi
# === MySQL Backup ===
if command -v mysqldump &>/dev/null; then
    MYSQL_BACKUP="$BACKUP_DIR/mysql_$TIMESTAMP.sql.gz"
    log "Backing up MySQL..."
    # MYSQL_BACKUP_PASSWORD must be set in the environment (or switch to a
    # protected ~/.my.cnf option file so the password never appears in `ps`):
    mysqldump --all-databases \
        --single-transaction \
        --routines \
        --triggers \
        --events \
        --set-gtid-purged=OFF \
        --default-character-set=utf8mb4 \
        -u backup_user -p"$MYSQL_BACKUP_PASSWORD" | gzip -9 > "$MYSQL_BACKUP"
    MYSQL_SIZE=$(du -h "$MYSQL_BACKUP" | cut -f1)
    log "MySQL backup complete: $MYSQL_BACKUP ($MYSQL_SIZE)"
fi
# === Redis Backup (RDB snapshot) ===
if command -v redis-cli &>/dev/null; then
    REDIS_BACKUP="$BACKUP_DIR/redis_$TIMESTAMP.rdb"
    log "Backing up Redis..."
    # REDIS_PASSWORD must be set in the environment.
    # BGSAVE is asynchronous: wait until LASTSAVE advances before copying the RDB file.
    LAST_SAVE=$(redis-cli -a "$REDIS_PASSWORD" --no-auth-warning LASTSAVE)
    redis-cli -a "$REDIS_PASSWORD" --no-auth-warning BGSAVE
    while [ "$(redis-cli -a "$REDIS_PASSWORD" --no-auth-warning LASTSAVE)" = "$LAST_SAVE" ]; do
        sleep 1
    done
    cp /var/lib/redis/dump.rdb "$REDIS_BACKUP"
    log "Redis backup complete: $REDIS_BACKUP"
fi
# === Rotation: Delete backups older than retention period ===
DELETED=$(find "$BACKUP_DIR" \( -name "*.gz" -o -name "*.rdb" \) -mtime +"$RETENTION_DAYS" -delete -print | wc -l)
log "Deleted $DELETED backups older than $RETENTION_DAYS days"
log "Database backup completed successfully."
Encrypted Offsite Backups with Restic
Restic is the best open-source backup tool for offsite backups. It provides deduplication (only changed blocks are backed up, not entire files) and encryption (AES-256 with a user-provided password), and it supports multiple backends (S3, B2, SFTP, local disk). After the initial backup, a daily run against a 50 GB application typically transfers less than 500 MB because only changed data is uploaded:
# Install restic:
sudo apt install restic -y # Or download from GitHub releases
# Initialize a backup repository on Backblaze B2:
export B2_ACCOUNT_ID="your-b2-account-id"
export B2_ACCOUNT_KEY="your-b2-account-key"
export RESTIC_PASSWORD="your-encryption-password" # SAVE THIS SECURELY
export RESTIC_REPOSITORY="b2:your-bucket-name:server-backups"
restic init
# First backup (full):
restic backup /opt/backups/database \
    /var/www \
    /etc/nginx \
    /etc/letsencrypt \
    /opt/docker-compose \
    --exclude="*.log" \
    --exclude="node_modules" \
    --exclude=".git" \
    --tag daily
# Subsequent backups are incremental (only changed blocks):
# The same command; restic detects what has changed automatically.
# List snapshots:
restic snapshots
# Verify backup integrity (checks that all data can be decrypted and read):
restic check --read-data
# Restore from backup:
restic restore latest --target /tmp/restore-test
# Restore a specific snapshot:
restic restore abc12345 --target /tmp/restore-test
# Restore specific files:
restic restore latest --target /tmp/restore-test --include "/etc/nginx"
# Retention policy: keep 7 daily, 4 weekly, 6 monthly, 1 yearly:
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --keep-yearly 1 --prune
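One practical note before automating this: the interactive commands above export RESTIC_PASSWORD inline, while the cron-driven script in the next section reads it from /root/.restic-password via RESTIC_PASSWORD_FILE. A minimal sketch of creating that file, assuming you use the exact password the repository was initialized with:
# Store the repository password in a root-only file for unattended runs.
# Use the same password given to `restic init`; a mismatch makes the repo unreadable.
printf '%s' 'your-encryption-password' | sudo tee /root/.restic-password >/dev/null
sudo chmod 600 /root/.restic-password
# Keep an offline copy (password manager, printed copy in a safe); losing it
# means losing access to every encrypted offsite backup.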
Complete Backup Automation Script
#!/bin/bash
# /opt/backups/scripts/full-backup.sh
set -euo pipefail
LOG="/var/log/backups/full-backup.log"
WEBHOOK="https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $1" | tee -a "$LOG"; }
alert() {
    curl -s -X POST -H 'Content-type: application/json' \
        --data "{\"text\":\"Backup Alert: $1\"}" "$WEBHOOK" >/dev/null 2>&1 || true
}
log "=== Full backup started ==="
# Step 1: Database backups
if /opt/backups/scripts/backup-database.sh; then
    log "Database backup: SUCCESS"
else
    log "Database backup: FAILED"
    alert "Database backup FAILED on $(hostname)"
    exit 1
fi
# Step 2: Offsite backup with restic
export B2_ACCOUNT_ID="your-b2-id"
export B2_ACCOUNT_KEY="your-b2-key"
export RESTIC_PASSWORD_FILE="/root/.restic-password"
export RESTIC_REPOSITORY="b2:your-bucket:server-backups"
if restic backup /opt/backups/database /var/www /etc/nginx /etc/letsencrypt \
        --exclude="*.log" --exclude="node_modules" --tag daily --quiet; then
    log "Offsite backup: SUCCESS"
else
    log "Offsite backup: FAILED"
    alert "Offsite backup FAILED on $(hostname)"
    exit 1
fi
# Step 3: Apply retention policy
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --keep-yearly 1 --prune --quiet
# Step 4: Verify backup integrity (weekly, because reading data back is resource intensive)
if [ "$(date +%u)" -eq 7 ]; then # Sunday
    log "Running integrity check (Sunday)..."
    restic check --read-data-subset=10% --quiet
    log "Integrity check: COMPLETE"
fi
SNAPSHOT_COUNT=$(restic snapshots --json | jq length)
log "=== Backup complete. Total snapshots: $SNAPSHOT_COUNT ==="
alert "Backup SUCCESS on $(hostname). Snapshots: $SNAPSHOT_COUNT"
# Crontab entry (run daily at 2 AM):
# 0 2 * * * /opt/backups/scripts/full-backup.sh 2>&1 | tee -a /var/log/backups/cron.log
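If the server runs systemd, a timer is a reasonable alternative to the cron entry above, because Persistent=true runs a missed backup after a reboot. A minimal sketch; the unit names and schedule are assumptions, not part of the scripts above:
# Sketch: systemd timer equivalent of the cron entry (unit names are hypothetical).
sudo tee /etc/systemd/system/full-backup.service >/dev/null <<'EOF'
[Unit]
Description=Daily full backup

[Service]
Type=oneshot
ExecStart=/opt/backups/scripts/full-backup.sh
EOF

sudo tee /etc/systemd/system/full-backup.timer >/dev/null <<'EOF'
[Unit]
Description=Run full-backup.service daily at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now full-backup.timer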
Automated Restore Testing
The most important part of any backup strategy is testing restores. A backup that cannot be restored is not a backup; it is a false sense of security. Automate restore testing so it runs monthly without human intervention:
#!/bin/bash
# /opt/backups/scripts/test-restore.sh
# Run monthly to verify backups can actually be restored
set -euo pipefail
LOG="/var/log/backups/restore-test.log"
RESTORE_DIR="/tmp/restore-test-$(date +%Y%m%d)"
WEBHOOK="https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $1" | tee -a "$LOG"; }
log "=== Restore test started ==="
mkdir -p "$RESTORE_DIR"
# Step 1: Restore the latest restic snapshot:
export RESTIC_PASSWORD_FILE="/root/.restic-password"
export RESTIC_REPOSITORY="b2:your-bucket:server-backups"
if restic restore latest --target "$RESTORE_DIR"; then
    log "Restic restore: SUCCESS"
else
    log "Restic restore: FAILED"
    curl -s -X POST -H 'Content-type: application/json' \
        --data '{"text":"CRITICAL: Backup restore test FAILED!"}' "$WEBHOOK"
    exit 1
fi
# Step 2: Verify database dump can be loaded:
# `|| true` keeps set -e/pipefail from aborting the script when no dump is present yet:
LATEST_PG=$(ls -t "$RESTORE_DIR"/opt/backups/database/postgres_*.sql.gz 2>/dev/null | head -1 || true)
if [ -n "$LATEST_PG" ]; then
    if gunzip -t "$LATEST_PG"; then
        log "PostgreSQL dump integrity: VALID"
    else
        log "PostgreSQL dump integrity: CORRUPTED"
        exit 1
    fi
fi
# Step 3: Verify critical files exist:
CRITICAL_FILES=(
    "$RESTORE_DIR/etc/nginx/nginx.conf"
    "$RESTORE_DIR/etc/letsencrypt/live"
)
for f in "${CRITICAL_FILES[@]}"; do
    if [ -e "$f" ]; then
        log "Critical file check: $f EXISTS"
    else
        log "Critical file check: $f MISSING"
        exit 1
    fi
done
# Step 4: Clean up:
rm -rf "$RESTORE_DIR"
log "=== Restore test PASSED ==="
curl -s -X POST -H 'Content-type: application/json' \
    --data '{"text":"Monthly restore test PASSED."}' "$WEBHOOK"
# Crontab: 0 4 1 * * /opt/backups/scripts/test-restore.sh
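Note that gunzip -t only proves the archive decompresses cleanly; it says nothing about whether the SQL inside actually loads. If Docker is available on the test host, the restored dump can be loaded into a throwaway PostgreSQL container as a deeper check. A sketch under that assumption, meant to slot in before the cleanup step; the image tag and container name are arbitrary:
# Optional deeper check: load the restored pg_dumpall output into a disposable container.
# Assumes Docker is installed and $LATEST_PG was located by the script above.
docker run -d --name restore-check -e POSTGRES_PASSWORD=throwaway postgres:16
sleep 10  # crude wait; a production script should poll pg_isready instead
gunzip -c "$LATEST_PG" | docker exec -i restore-check psql -U postgres >/dev/null 2>&1 || true
# Count application databases that came back (anything beyond the built-in ones):
RESTORED_DBS=$(docker exec restore-check psql -U postgres -Atc \
    "SELECT count(*) FROM pg_database WHERE datname NOT IN ('postgres','template0','template1')")
docker rm -f restore-check
echo "Restored application databases: $RESTORED_DBS"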
Backups are insurance: you pay the cost every day and hope you never need them. But unlike insurance, you can verify that your backups actually work by testing restores regularly. The 3-2-1 strategy with automated rotation, encrypted offsite storage, and monthly restore testing ensures that when disaster strikes (and it will), recovery is a matter of running a script, not a matter of scrambling. ZeonEdge implements automated backup pipelines for self-hosted infrastructure. Learn about our backup and disaster recovery services.
Alex Thompson
CEO & Cloud Architecture Expert at ZeonEdge with 15+ years building enterprise infrastructure.