# Backup Configuration
Protect your data with Supascale's comprehensive backup system. Configure automated backups, manage retention policies, and ensure business continuity with flexible backup strategies.
## Overview

The Supascale backup system provides:
- **Automated Scheduling**: Cron-based backup scheduling
- **Multiple Storage Options**: Local, S3, Google Cloud Storage, Azure Blob
- **Backup Types**: Full, incremental, and differential backups
- **Compression & Encryption**: Secure and space-efficient backups
- **Retention Management**: Automated cleanup based on policies
- **Monitoring & Alerts**: Backup success/failure notifications
## Backup Types

### Full Backups

A complete database dump including all data and schema.

**Use Cases:**
- Initial backup creation
- Weekly or monthly comprehensive backups
- Before major system changes
- Disaster recovery preparation

**Characteristics:**
- Contains the complete database state
- Largest backup size
- Can be restored independently
- Slowest backup process
### Incremental Backups

Only the changes made since the last backup of any type.

**Use Cases:**
- Daily backup schedules
- High-frequency backup needs
- Bandwidth-conscious environments
- Large databases with frequent changes

**Characteristics:**
- Smallest backup size
- Fastest backup process
- Requires the full backup chain for restore (for example, recovering Thursday's state from a Sunday full backup means applying Monday through Thursday's incrementals in order)
- Most complex restore process
### Differential Backups

All changes made since the last full backup.

**Use Cases:**
- A balance between full and incremental backups
- Weekly backup schedules
- Simplified restore procedures
- Medium-sized databases

**Characteristics:**
- Moderate backup size
- Moderate backup speed
- Requires only the last full backup plus the latest differential for restore
- Simpler restore than incremental (see the schedule sketch below)
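
To see how the types combine in practice, here is a minimal schedule sketch using the `schedules` syntax shown under Advanced Configuration below. Treat the `"differential"` value as an assumption inferred from the backup types above; only `"full"` and `"incremental"` appear in the later examples.

```yaml
schedules:
  # Weekly full backup: the restore baseline
  full_backup:
    cron: "0 1 * * 0"            # Sunday at 1 AM
    backup_type: "full"

  # Daily differential: each file contains everything since Sunday,
  # so any restore needs at most two files (full + latest differential)
  differential_backup:
    cron: "0 2 * * 1-6"          # Monday-Saturday at 2 AM
    backup_type: "differential"  # assumed value; see note above
```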
## Storage Configuration

### Local Storage

Store backups on the same server as the instance:

```yaml
storage:
  type: "local"
  path: "/var/backups/supascale"
  permissions: "0600"
  cleanup:
    enabled: true
    old_backups: 30d  # Remove backups older than 30 days
```
**Advantages:**
- Fast backup and restore operations
- No external dependencies
- No additional storage costs
- Simple configuration

**Disadvantages:**
- Single point of failure
- Limited disaster recovery protection
- Storage space limitations
- Not suitable for compliance requirements
### Amazon S3

Store backups in Amazon S3 buckets:

```yaml
storage:
  type: "s3"
  config:
    bucket: "my-company-backups"
    region: "us-west-2"
    access_key: "AKIA..."
    secret_key: "secret..."
    endpoint: ""  # Optional custom endpoint

    # Path structure within the bucket
    path_template: "supascale/{instance}/{date}/{backup_type}/"

    # Storage class for cost optimization
    storage_class: "STANDARD_IA"  # or GLACIER, DEEP_ARCHIVE

    # Server-side encryption
    encryption:
      enabled: true
      kms_key_id: "arn:aws:kms:us-west-2:123456789:key/12345678-1234"
```
**S3 Storage Classes:**
- **STANDARD**: Frequently accessed data
- **STANDARD_IA**: Infrequently accessed data (cheaper storage)
- **GLACIER**: Archive storage (very cheap, slower retrieval)
- **DEEP_ARCHIVE**: Long-term archive (cheapest, slowest retrieval)
### Google Cloud Storage

Store backups in Google Cloud Storage:

```yaml
storage:
  type: "gcs"
  config:
    bucket: "my-company-backups"
    project_id: "my-project"
    key_file: "/path/to/service-account.json"

    # Or use the service account key content directly
    service_account_key: |
      {
        "type": "service_account",
        "project_id": "my-project",
        ...
      }

    # Storage class
    storage_class: "NEARLINE"  # STANDARD, NEARLINE, COLDLINE, ARCHIVE

    # Path structure
    path_template: "supascale/{instance}/{date}/"
```
### Azure Blob Storage

Store backups in Azure Blob Storage:

```yaml
storage:
  type: "azure"
  config:
    account_name: "mystorageaccount"
    account_key: "base64-encoded-key"
    container: "backups"
    endpoint: ""  # Optional custom endpoint

    # Access tier for cost optimization
    access_tier: "Cool"  # Hot, Cool, Archive

    # Path structure
    path_template: "supascale/{instance}/{date}/"
```
## Creating Backup Configurations

### Basic Configuration

1. **Navigate to Backups**
   - Go to Dashboard → Backups
   - Click "Create Backup Configuration"

2. **Basic Settings**

   ```yaml
   name: "Daily Production Backup"
   description: "Automated daily backups for production database"
   target_type: "instance"
   target_name: "production-api"
   ```

3. **Backup Settings**

   ```yaml
   backup_type: "incremental"
   compression_enabled: true
   encryption_enabled: true
   ```

4. **Storage Configuration**: choose a storage type and provide the necessary credentials (see Storage Configuration above).

5. **Schedule Configuration**

   ```yaml
   is_scheduled: true
   schedule_cron: "0 2 * * *"  # Daily at 2 AM
   schedule_timezone: "UTC"
   ```

A sketch assembling these steps into one document follows.
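
As a hypothetical end-to-end view, the fields from the steps above might assemble into a single configuration like this. The field names come from the steps themselves; the flat top-level layout is an assumption, not a documented file format:

```yaml
# Hypothetical assembled configuration; field names are taken from the
# steps above, the top-level layout is an assumption
name: "Daily Production Backup"
description: "Automated daily backups for production database"
target_type: "instance"
target_name: "production-api"

backup_type: "incremental"
compression_enabled: true
encryption_enabled: true

storage:
  type: "s3"  # any storage type from the Storage Configuration section
  config:
    bucket: "my-company-backups"
    region: "us-west-2"

is_scheduled: true
schedule_cron: "0 2 * * *"  # Daily at 2 AM
schedule_timezone: "UTC"
```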
### Advanced Configuration

**Complex Scheduling:**

```yaml
schedules:
  # Full backup weekly
  full_backup:
    cron: "0 1 * * 0"  # Sunday at 1 AM
    backup_type: "full"
    retention_days: 365

  # Incremental backup daily
  incremental_backup:
    cron: "0 2 * * 1-6"  # Monday-Saturday at 2 AM
    backup_type: "incremental"
    retention_days: 30
```

**Multi-Target Configuration:**

```yaml
targets:
  - type: "instance"
    name: "production-api"
    include_storage: true
  - type: "instance"
    name: "production-auth"
    database_only: true
```
## Retention Policies

### Time-Based Retention

Automatically delete old backups based on age:

```yaml
retention:
  type: "time_based"
  days: 30  # Keep backups for 30 days

  exceptions:
    # Keep the first backup of each month for 1 year
    monthly: 365
    # Keep the first backup of each year forever
    yearly: 0  # 0 means keep forever
```
### Count-Based Retention

Keep a specific number of recent backups:

```yaml
retention:
  type: "count_based"
  count: 10  # Keep the last 10 backups
  strategy: "delete_oldest"  # or "delete_failed"
```
### Hybrid Retention

Combine time- and count-based policies:

```yaml
retention:
  policies:
    - type: "count"
      count: 7
      priority: 1  # Keep at least 7 recent backups
    - type: "time"
      days: 30
      priority: 2  # Delete backups older than 30 days
    - type: "selective"
      keep_monthly: 12  # Keep monthly backups for 1 year
      keep_yearly: 5    # Keep yearly backups for 5 years
```
### Custom Retention Logic

```yaml
retention:
  type: "custom"
  rules:
    # Keep daily backups for 1 week
    - pattern: "daily-*"
      days: 7
    # Keep weekly backups for 1 month
    - pattern: "weekly-*"
      days: 30
    # Keep monthly backups for 1 year
    - pattern: "monthly-*"
      days: 365
    # Keep failed backups for analysis
    - status: "failed"
      days: 7
```
## Monitoring and Notifications

### Backup Monitoring

Track backup performance and health.

**Success Rate Monitoring:**

```yaml
monitoring:
  success_rate:
    threshold: 95  # Alert if success rate < 95%
    window: "7d"   # Calculate over 7 days

  backup_size:
    growth_rate: 20      # Alert if size grows > 20% per week
    size_limit: "100GB"  # Alert if a backup exceeds this limit

  duration:
    max_duration: "2h"    # Alert if a backup takes > 2 hours
    trend_analysis: true  # Detect increasing backup times
```
### Notification Configuration

**Email Notifications:**

```yaml
notifications:
  email:
    enabled: true
    recipients:
      - "admin@company.com"
      - "backup-team@company.com"

    events:
      - "backup_completed"
      - "backup_failed"
      - "retention_cleanup"
      - "storage_quota_warning"

    templates:
      backup_success: |
        Backup completed successfully:
        - Instance: {{instance_name}}
        - Type: {{backup_type}}
        - Size: {{backup_size}}
        - Duration: {{duration}}
      backup_failure: |
        URGENT: Backup failed!
        - Instance: {{instance_name}}
        - Error: {{error_message}}
        - Timestamp: {{timestamp}}
```
**Slack Integration:**

```yaml
notifications:
  slack:
    enabled: true
    webhook_url: "https://hooks.slack.com/services/..."

    channels:
      success: "#backups"
      failure: "#alerts"
      warnings: "#ops"

    message_format: |
      {{status_emoji}} Backup {{status}}
      Instance: `{{instance_name}}`
      Type: {{backup_type}}
      {{#if error}}Error: {{error_message}}{{/if}}
```
## Backup Verification

### Automatic Verification

Verify backup integrity automatically:

```yaml
verification:
  enabled: true
  checks:
    # Verify file integrity
    checksum:
      enabled: true
      algorithm: "sha256"

    # Test backup restoration
    restore_test:
      enabled: true
      frequency: "weekly"
      target: "test-environment"

    # Validate backup content
    content_validation:
      enabled: true
      queries:
        - "SELECT COUNT(*) FROM users"
        - "SELECT MAX(created_at) FROM logs"
```
### Manual Verification

Test backup restoration manually:

1. **Download Backup**

   ```bash
   # Download a specific backup
   supascale backup download \
     --instance "production-api" \
     --backup-id "backup_123" \
     --output "/tmp/backup.sql.gz"
   ```

2. **Test Restoration**

   ```bash
   # Create a test database
   createdb test_restore

   # Restore the backup
   gunzip -c /tmp/backup.sql.gz | psql test_restore

   # Verify the data
   psql test_restore -c "SELECT COUNT(*) FROM users;"
   ```
## Backup Management

### Backup Operations

**Manual Backup Creation:**

```bash
# Create an immediate backup
supascale backup create \
  --instance "production-api" \
  --type "full" \
  --description "Pre-migration backup"
```

**Backup Restoration:**

```bash
# List available backups
supascale backup list --instance "production-api"

# Restore a specific backup
supascale backup restore \
  --backup-id "backup_123" \
  --target-instance "staging-api" \
  --confirm
```

**Backup Information:**

```bash
# View backup details
supascale backup info --backup-id "backup_123"

# Check backup status
supascale backup status --instance "production-api"
```
### Bulk Operations

**Multiple Instance Backup:**

```yaml
bulk_backup:
  instances:
    - "production-api"
    - "production-auth"
    - "production-storage"
  settings:
    backup_type: "full"
    parallel: true
    max_concurrent: 3
```

**Cross-Region Replication:**

```yaml
replication:
  enabled: true
  primary_storage:
    type: "s3"
    region: "us-west-2"
  replica_storage:
    type: "s3"
    region: "us-east-1"
  sync_frequency: "daily"
```
## Disaster Recovery

### Recovery Planning

**Recovery Time Objective (RTO):**
- The maximum time allowed to restore service after a failure
- Plan for different disaster scenarios
- Test recovery procedures regularly

**Recovery Point Objective (RPO):**
- The maximum acceptable window of data loss
- Determines how frequently backups must run (see the sketch after this list)
- Balance cost against data protection
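
For example, a 1-hour RPO requires backups at least every hour. Here is a minimal sketch using the `schedules` syntax from this page; the comments tie each schedule to the objective it serves:

```yaml
schedules:
  # A weekly full keeps the incremental restore chain short,
  # which in turn keeps restore time (RTO) down
  full_backup:
    cron: "0 1 * * 0"  # Sunday at 1 AM
    backup_type: "full"

  # Hourly incrementals bound data loss to at most one hour (RPO <= 1h)
  incremental_backup:
    cron: "0 * * * *"  # every hour, on the hour
    backup_type: "incremental"
```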
### Recovery Procedures

**Full Instance Recovery** (a command-level sketch of steps 3-4 follows this list):

1. Provision new infrastructure
2. Install the Supascale agent
3. Download the latest backup
4. Restore the database
5. Restore the configuration
6. Update DNS/load balancer
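
The backup steps map onto the CLI commands shown under Backup Management above. A minimal sketch, assuming the recovery host can reach the backup storage; `backup_123` and the `production_db` database name are placeholders:

```bash
# Find the most recent backup for the failed instance
supascale backup list --instance "production-api"

# Step 3: download it to the recovery host
supascale backup download \
  --instance "production-api" \
  --backup-id "backup_123" \
  --output "/tmp/latest.sql.gz"

# Step 4: restore into the newly provisioned database,
# mirroring the manual verification example above
gunzip -c /tmp/latest.sql.gz | psql production_db
```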
**Point-in-Time Recovery** (see the sketch after this list):

1. Identify the target recovery time
2. Find the appropriate backup set
3. Apply incremental backups in order
4. Verify data consistency
5. Test application functionality
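
A rough sketch of steps 2-3 using only the commands documented above. It assumes `backup restore` can apply an incremental on top of an already-restored full backup; verify this behavior for your Supascale version. The backup IDs are placeholders:

```bash
# Step 2: list backups, then note the last full backup before the
# target time and every incremental between it and the target
supascale backup list --instance "production-api"

# Step 3: restore the full backup first, then each incremental in
# chronological order (assumption: restore applies incrementals
# on top of the current state)
supascale backup restore --backup-id "backup_full_sun" --target-instance "staging-api" --confirm
supascale backup restore --backup-id "backup_incr_mon" --target-instance "staging-api" --confirm
supascale backup restore --backup-id "backup_incr_tue" --target-instance "staging-api" --confirm
```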
## Troubleshooting

### Common Issues

**Backup Failures:**

```bash
# Check backup logs
supascale logs --instance "production-api" --service "backup"

# Verify storage connectivity
supascale backup test-storage --config-id "config_123"

# Check disk space
df -h /var/backups/supascale
```
**Performance Issues:**

```yaml
optimization:
  # Reduce backup impact
  nice_level: 19   # Low CPU priority
  ionice_class: 3  # Idle I/O priority

  # Parallel compression
  compression_threads: 4

  # Network optimization
  transfer_chunk_size: "100MB"
  retry_attempts: 3
```
**Storage Issues:**

```bash
# Test S3 connectivity
aws s3 ls s3://my-backup-bucket/

# Check Azure connectivity
az storage blob list --container-name backups

# Verify GCS access
gsutil ls gs://my-backup-bucket/
```
**Next:** Explore the API Reference for programmatic access to Supascale features.