Backup System Setup Guide

Step-by-step guide to setting up the automated backup system for Lager Guru.

Prerequisites

  • Super-admin access to Lager Guru
  • Access to Supabase project (or AWS S3)
  • Server/CI environment with Node.js 18+
  • PostgreSQL client tools (for pg_dump, optional)

Step 1: Database Migration

Apply the backup system migration:

bash
# Via Supabase CLI
supabase migration up

# Or manually via SQL Editor
# Run: supabase/migrations/20251216000000_create_backup_tables.sql

Verify tables created:

sql
SELECT * FROM backup_settings;
SELECT * FROM backup_history;

Step 2: Create Storage Bucket

  1. Open Supabase Dashboard → Storage
  2. Click "New bucket"
  3. Name: backups
  4. Set to Private (not public)
  5. Click "Create bucket"

RLS Policy (if needed):

sql
-- Allow service role full access
CREATE POLICY "Service role can manage backups"
ON storage.objects FOR ALL
TO service_role
USING (bucket_id = 'backups')
WITH CHECK (bucket_id = 'backups');

AWS S3 (Optional)

  1. Create S3 bucket in AWS Console
  2. Configure bucket policy:
json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID:user/BACKUP_USER"
      },
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-backup-bucket/*",
        "arn:aws:s3:::your-backup-bucket"
      ]
    }
  ]
}
  3. Enable versioning (recommended)
  4. Configure lifecycle policies for archival

Step 3: Configure Environment Variables

Required Variables

bash
# Supabase Configuration
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key

# Backup Encryption
BACKUP_ENCRYPTION_KEY=your-32-byte-encryption-key-here

# Database (for pg_dump, optional)
DATABASE_URL=postgresql://user:password@host:5432/database

Optional S3 Variables

bash
S3_ACCESS_KEY_ID=your-access-key
S3_SECRET_ACCESS_KEY=your-secret-key
S3_REGION=us-east-1
S3_BUCKET=your-bucket-name

Generate Encryption Key

bash
# Generate secure random key (32 bytes = 256 bits)
openssl rand -hex 32

# Or using Node.js
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"

⚠️ Security:

  • Never commit encryption keys to repository
  • Store in secure secret management (AWS Secrets Manager, HashiCorp Vault, etc.)
  • Use different keys for staging/production
  • Rotate keys periodically
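The key generated above is a 64-character hex string (32 bytes). As a sanity check before a backup run, something along these lines can catch truncated or malformed keys early (a sketch only; the function name is illustrative and not part of Lager Guru):

```typescript
// Validate that an encryption key looks like hex output from
// `openssl rand -hex 32`: at least 32 bytes, hex characters only.
function isValidEncryptionKey(key: string | undefined): boolean {
  if (!key) return false;
  // 32 bytes = 64 hex characters; longer keys are also accepted,
  // but an odd-length hex string cannot decode to whole bytes.
  return key.length >= 64 && key.length % 2 === 0 && /^[0-9a-fA-F]+$/.test(key);
}
```

A check like this is best run at process start, so a misconfigured `BACKUP_ENCRYPTION_KEY` fails fast instead of producing an undecryptable backup.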

Step 4: Initial Backup Settings

Via Super-Admin UI

  1. Login as super-admin
  2. Navigate to /admin → Backups
  3. Click "Settings"
  4. Configure:
    • Enable backups: ON
    • Schedule: 0 3 * * 1 (weekly Monday 03:00 UTC)
    • Retention: 365 days
    • Storage provider: supabase or s3
    • Bucket name: backups
    • Encryption: ON
    • Notification webhook/email (optional)
  5. Click "Save Settings"
  6. Click "Test Upload" to verify storage connection

Via API

bash
curl -X POST https://your-project.supabase.co/functions/v1/backup-trigger/settings \
  -H "Authorization: Bearer YOUR_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "tenant_id": null,
    "enabled": true,
    "weekly_cron": "0 3 * * 1",
    "retention_days": 365,
    "storage_provider": "supabase",
    "storage_bucket": "backups",
    "encryption": true
  }'

Via Database (Service Role)

sql
INSERT INTO backup_settings (
  tenant_id,
  enabled,
  weekly_cron,
  retention_days,
  storage_provider,
  storage_bucket,
  encryption
) VALUES (
  NULL, -- Global setting
  true,
  '0 3 * * 1',
  365,
  'supabase',
  'backups',
  true
);

Step 5: Test Manual Backup

Via UI

  1. Navigate to /admin → Backups
  2. Click "Trigger Backup"
  3. Wait for backup to complete (check status in history table)
  4. Verify backup file in storage bucket

Via CLI

bash
# Set environment variables
export SUPABASE_URL=...
export SUPABASE_SERVICE_ROLE_KEY=...
export BACKUP_ENCRYPTION_KEY=...

# Trigger backup
npm run backup:trigger

# Check status
# Via UI or API: GET /functions/v1/backup-trigger/history

Step 6: Schedule Automated Backups

Option A: GitHub Actions

  1. Copy workflow file:

    bash
    cp .github/workflows/backup.example.yml .github/workflows/backup.yml
  2. Add GitHub Secrets:

    • SUPABASE_URL
    • SUPABASE_SERVICE_ROLE_KEY
    • BACKUP_ENCRYPTION_KEY
    • DATABASE_URL (optional)
    • S3 credentials (if using S3)
  3. Uncomment schedule in workflow:

    yaml
    schedule:
      - cron: '0 3 * * 1'  # Weekly Monday 03:00 UTC
  4. Commit and push:

    bash
    git add .github/workflows/backup.yml
    git commit -m "Enable scheduled backups"
    git push

Option B: System Cron (Self-Hosted)

  1. Create cron script:

    bash
    #!/bin/bash
    # /usr/local/bin/lager-guru-backup.sh
    
    cd /path/to/lager-guru
    export SUPABASE_URL=...
    export SUPABASE_SERVICE_ROLE_KEY=...
    export BACKUP_ENCRYPTION_KEY=...
    
    npm run backup:run
    npm run backup:rotate
  2. Make executable:

    bash
    chmod +x /usr/local/bin/lager-guru-backup.sh
  3. Add to crontab:

    bash
    # Weekly Monday 03:00 UTC
    0 3 * * 1 /usr/local/bin/lager-guru-backup.sh >> /var/log/lager-guru-backup.log 2>&1

Option C: Systemd Timer (Linux)

  1. Create service file /etc/systemd/system/lager-guru-backup.service:
    ini
    [Unit]
    Description=Lager Guru Backup Service
    After=network.target
    
    [Service]
    Type=oneshot
    User=your-user
    WorkingDirectory=/path/to/lager-guru
    Environment="SUPABASE_URL=..."
    Environment="SUPABASE_SERVICE_ROLE_KEY=..."
    Environment="BACKUP_ENCRYPTION_KEY=..."
    ExecStart=/usr/bin/npm run backup:run
    ExecStartPost=/usr/bin/npm run backup:rotate
  2. Create timer file /etc/systemd/system/lager-guru-backup.timer:
    ini
    [Unit]
    Description=Weekly Lager Guru Backup Timer
    
    [Timer]
    OnCalendar=Mon *-*-* 03:00:00
    Persistent=true
    
    [Install]
    WantedBy=timers.target
  3. Enable and start:
    bash
    sudo systemctl enable lager-guru-backup.timer
    sudo systemctl start lager-guru-backup.timer

Step 7: Configure Retention Rotation

Retention rotation runs automatically after backups. To test:

bash
# Dry run (preview)
npm run backup:rotate -- --dry-run

# Run cleanup
npm run backup:rotate

Schedule retention rotation separately if needed:

bash
# Daily at 04:00 UTC
0 4 * * * /usr/local/bin/lager-guru-backup-rotate.sh
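The rotation rule itself is simple: any backup older than `retention_days` is due for deletion. A sketch of that decision (illustrative only; the internals of the actual `backup:rotate` script may differ):

```typescript
// Given backup history records and a retention window in days,
// return the IDs of backups older than the cutoff.
interface BackupRecord {
  id: string;
  createdAt: Date;
}

function backupsToDelete(
  history: BackupRecord[],
  retentionDays: number,
  now: Date = new Date()
): string[] {
  const cutoff = now.getTime() - retentionDays * 24 * 60 * 60 * 1000;
  return history
    .filter((b) => b.createdAt.getTime() < cutoff)
    .map((b) => b.id);
}
```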

Step 8: Set Up Monitoring

Health Check Endpoint

Monitor backup health:

bash
curl https://your-project.supabase.co/functions/v1/backup-trigger/health \
  -H "Authorization: Bearer YOUR_TOKEN"

Alerting

Set up alerts for:

  • Backup failures (check backup_history.status = 'failed')
  • Missing backups (no backup in last 8 days)
  • Storage quota warnings
  • Encryption key rotation reminders
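The first two rules can be sketched as a single check over backup history rows. This assumes rows shaped like the `backup_history` table described in this guide; the exact column and status names are assumptions:

```typescript
// Alert when any backup has failed, or when the most recent
// successful backup is more than 8 days old (or none exists).
interface HistoryRow {
  status: "success" | "failed";
  completed_at: string | null; // ISO timestamp
}

function needsAlert(rows: HistoryRow[], now: Date = new Date()): boolean {
  if (rows.some((r) => r.status === "failed")) return true;
  const lastSuccess = rows
    .filter((r) => r.status === "success" && r.completed_at)
    .map((r) => new Date(r.completed_at as string).getTime())
    .sort((a, b) => b - a)[0];
  if (lastSuccess === undefined) return true; // no successful backup at all
  const eightDaysMs = 8 * 24 * 60 * 60 * 1000;
  return now.getTime() - lastSuccess > eightDaysMs;
}
```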

Integration Examples

Prometheus:

yaml
scrape_configs:
  - job_name: 'backup-health'
    metrics_path: '/functions/v1/backup-trigger/health'
    scheme: https
    static_configs:
      - targets: ['your-project.supabase.co']

Uptime Monitoring:

  • Ping health endpoint every hour
  • Alert if status != "ok" or last_backup > 8 days

Step 9: Test Restore Procedure

Monthly Test (Recommended):

  1. Create staging environment
  2. Download latest backup
  3. Run restore:
    bash
    npm run backup:restore -- <backup-id> --confirm-restore
  4. Verify data integrity
  5. Test application functionality
  6. Document results
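One simple way to start the data-integrity check in step 4 is comparing per-table row counts between the source and the restored copy. A sketch, assuming the counts are gathered separately (for example via `SELECT count(*)` per table):

```typescript
// Compare expected vs. restored row counts per table and report
// any mismatches or missing tables.
function integrityReport(
  expected: Record<string, number>,
  restored: Record<string, number>
): string[] {
  const problems: string[] = [];
  for (const [table, count] of Object.entries(expected)) {
    const got = restored[table];
    if (got === undefined) {
      problems.push(`${table}: missing after restore`);
    } else if (got !== count) {
      problems.push(`${table}: expected ${count} rows, got ${got}`);
    }
  }
  return problems;
}
```

Row counts alone do not prove integrity, so treat an empty report as a necessary, not sufficient, condition before deeper checks.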

Step 10: Configure Notifications

Webhook Setup

  1. Create webhook endpoint (receives POST requests)
  2. Add URL to backup settings:
    https://your-app.com/api/backups/webhook
  3. Handle webhook payload:
    typescript
    interface BackupWebhookPayload {
      event: "backup_completed";
      tenant_id: string | null;
      backup_id: string;
      status: "success" | "failed";
      message: string;
      timestamp: string;
    }
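Before acting on an incoming webhook, it is worth validating the payload shape at runtime rather than trusting the sender. A minimal sketch against the fields above (the handler endpoint itself is app-specific and not shown):

```typescript
// Runtime check that an incoming webhook body matches the
// documented backup payload shape.
function isBackupWebhookPayload(body: unknown): boolean {
  if (typeof body !== "object" || body === null) return false;
  const p = body as Record<string, unknown>;
  return (
    typeof p.event === "string" &&
    (p.tenant_id === null || typeof p.tenant_id === "string") &&
    typeof p.backup_id === "string" &&
    (p.status === "success" || p.status === "failed") &&
    typeof p.message === "string" &&
    typeof p.timestamp === "string"
  );
}
```

Rejecting malformed bodies early (e.g. with HTTP 400) keeps alerting logic downstream simple.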

Email Setup

  1. Configure email provider (SendGrid, AWS SES, etc.)
  2. Update sendEmailNotification in lib/backupHelpers.ts
  3. Add email address to backup settings

Troubleshooting Setup

Migration Fails

  • Check Supabase connection
  • Verify RLS policies allow service role access
  • Check migration file syntax

Storage Access Denied

  • Verify bucket exists and is private
  • Check service role key has storage permissions
  • Test upload via UI "Test Upload" button

Encryption Key Issues

  • Verify key is 32+ bytes (64 hex characters)
  • Check key matches between backup and restore
  • Never use production key in development

Cron Not Running

  • Check cron service status: systemctl status cron
  • Verify cron syntax: crontab -l
  • Check logs: /var/log/cron or journalctl

Next Steps

  • [ ] Review Backup Documentation
  • [ ] Set up monitoring and alerting
  • [ ] Schedule monthly test restores
  • [ ] Document tenant-specific backup procedures
  • [ ] Train team on restore procedures

Support

For issues or questions:

Published under a commercial license