Infrastructure Backup & Security Scanning
Table of Contents
- Overview
- System Architecture
- Orchestrator: Daily Backup Pipeline
- R2 Backup Flow
- Threat Scanning Flow
- MySQL Backup Flow
- OneDrive Sync
- Email Notifications
- Recovery
- Configuration
- Key Files
Overview
The infrastructure backup and security scanning system runs as two interconnected services on the same server. Together they protect application data stored in Cloudflare R2 (the app's primary file storage) by:
- Backing up R2 blobs — incremental, age-encrypted backups written to a mounted block volume
- Scanning uploaded files — hourly Microsoft Defender scans of newly uploaded R2 blobs, with automatic quarantine on detection
- Backing up MySQL databases — full encrypted dumps of all databases, run nightly
- Exporting Cloudflare configuration — DNS zones, WAF rules, SSL settings, and worker routes
- Syncing to OneDrive — rclone sync of the entire backup volume to Microsoft OneDrive for off-site redundancy
- Sending email alerts — Postmark notifications for malware detections and daily backup summaries
The two services are:
| Service | Path | Technology |
|---|---|---|
| unified-backup | /root/unified-backup/ | Bash orchestrator |
| azure-blob-backups | /root/azure-blob-backups/ | Node.js / TypeScript |
System Architecture
┌─────────────────────────────────────────────────────────────────────┐
│ INFRASTRUCTURE BACKUP SYSTEM │
└─────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────┐
│ Cloudflare R2 (Object Storage) │
│ bucket: time24, attachments, ... │
└──────────┬───────────────────┬──────────────┘
│ │
│ Daily backup │ Hourly scan
│ (2 AM cron) │ (0 * * * *)
│ │
┌──────────▼─────────┐ ┌──────▼──────────────────────┐
│ backup.ts │ │ scan.ts │
│ │ │ │
│ • List R2 blobs │ │ • Download blobs to /tmp │
│ • Check ETag vs │ │ • Run mdatp on-demand scan │
│ .manifest.json │ │ • Parse threat list │
│ • Stream + age │ │ • Move infected → quarantine│
│ encrypt │ │ • Send Postmark alert │
│ • Write .age file │ │ • Delete /tmp/R2_SCAN │
└──────────┬─────────┘ └─────────────────────────────┘
│
┌──────────▼─────────────────────────────────┐
│ /mnt/HC_Volume_104390340/r2/ │
│ {bucket}/{blob}.age │
│ │
│ (Hetzner block volume — local to server)│
└──────────┬─────────────────────────────────┘
│
┌──────────▼─────────┐
│ rclone sync │
│ → Microsoft │
│ OneDrive │
└────────────────────┘
Orchestrator: Daily Backup Pipeline
The master orchestrator runs at 2 AM every day via cron and executes six steps sequentially. If any step fails, the orchestrator logs the failure and continues with the remaining steps.
Cron schedule: 0 2 * * *
Script: /root/unified-backup/scripts/orchestrator.sh
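The continue-on-failure behavior described above can be sketched as a small runner. This is a hypothetical TypeScript illustration (the real orchestrator is a Bash script); only the step names mirror the actual pipeline:

```typescript
// Step scripts, in pipeline order (from the orchestrator layout).
const steps = [
  "backup-blobs.sh",
  "backup-mysql.sh",
  "backup-cloudflare-dns.sh",
  "backup-cloudflare-config.sh",
  "sync-to-onedrive.sh",
  "notify.sh",
];

// Run every step even if an earlier one fails; collect results for the summary.
function runPipeline(run: (step: string) => void): Record<string, "ok" | "failed"> {
  const results: Record<string, "ok" | "failed"> = {};
  for (const step of steps) {
    try {
      run(step);
      results[step] = "ok";
    } catch (err) {
      console.error(`step ${step} failed:`, err);
      results[step] = "failed"; // log the failure and continue with remaining steps
    }
  }
  return results;
}

// Example with a fake runner in which one step fails:
const results = runPipeline((step) => {
  if (step === "sync-to-onedrive.sh") throw new Error("rclone unreachable");
});
console.log(results); // notify.sh still runs after the sync failure
```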
Step 1: backup-blobs.sh
└─ Incremental encrypted backup of all R2 buckets to block volume
Step 2: backup-mysql.sh
└─ Full dump of all MySQL databases → gzip → age-encrypt → .sql.gz.age
Step 3: backup-cloudflare-dns.sh
└─ Export DNS records for all zones (JSON + BIND zone files)
Step 4: backup-cloudflare-config.sh
└─ Export WAF rules, firewall rules, SSL settings, worker routes (JSON)
Step 5: sync-to-onedrive.sh
└─ rclone sync of entire block volume to Microsoft OneDrive
Step 6: notify.sh
└─ Send daily summary email via Postmark
     (step results, counts, sizes, any errors)
R2 Backup Flow
Incremental Strategy
The backup service tracks each blob's ETag in a .manifest.json file at the root of each bucket's backup directory. On each run:
- List all blobs in the R2 bucket via the S3-compatible API.
- For each blob, compare its current ETag to the stored ETag in .manifest.json.
- If the ETag is unchanged, skip the blob (no re-download, no re-encrypt).
- If the ETag has changed (or the blob is new), stream the blob through age encryption and write the output.
This means only changed or new files are processed on each backup run, keeping the process fast and bandwidth-efficient.
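The ETag comparison reduces to a pure filter over the bucket listing. A minimal sketch (the `Manifest` type and `planBackup` helper are illustrative names, not the service's actual API):

```typescript
type Manifest = Record<string, string>; // blob key → last backed-up ETag

interface RemoteBlob {
  key: string;
  etag: string;
}

// Decide which blobs need re-encrypting: new keys or changed ETags.
function planBackup(remote: RemoteBlob[], manifest: Manifest): RemoteBlob[] {
  return remote.filter((blob) => manifest[blob.key] !== blob.etag);
}

const manifest: Manifest = {
  "uploads/report.pdf": "abc123",
  "uploads/photo.png": "old-etag",
};

const remote: RemoteBlob[] = [
  { key: "uploads/report.pdf", etag: "abc123" }, // unchanged → skip
  { key: "uploads/photo.png", etag: "def456" },  // changed   → process
  { key: "uploads/new.docx", etag: "ghi789" },   // new       → process
];

const toProcess = planBackup(remote, manifest);
console.log(toProcess.map((b) => b.key)); // photo.png and new.docx only
```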
Encryption
Each blob is encrypted individually using age asymmetric encryption:
R2 stream → age -r {AGE_RECIPIENT} → {blob}.age
- Only the public key (AGE_RECIPIENT) is stored on the backup server.
- The private key is never present on the backup server. It is stored exclusively in 1Password.
- Recovery requires the private key to be retrieved from 1Password and provided via AGE_IDENTITY_FILE.
Private Key Storage
The private key (AGE_IDENTITY_FILE) is stored only in 1Password. Without it, encrypted backups cannot be decrypted. Ensure the key is accessible before a disaster recovery scenario arises.
Bucket Selection
Buckets are configured in config/config.yaml:
buckets:
- name: "*" # wildcard matches all R2 buckets
excludeBuckets:
- some-bucket # explicitly excluded buckets
excludePatterns:
  - "*.tmp"         # blob key patterns to skip
The "*" wildcard means the service automatically discovers and backs up all R2 buckets, with the exception of any listed in excludeBuckets.
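The excludePatterns matching can be approximated with a simple glob-to-regex conversion. This is a sketch under the assumption that "*" matches any run of characters; the service's real matcher may use a glob library with different semantics:

```typescript
// Convert a simple glob ("*" matches any run of characters) to a RegExp.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*/g, ".*");                // then expand the glob wildcard
  return new RegExp(`^${escaped}$`);
}

function isExcluded(key: string, patterns: string[]): boolean {
  return patterns.some((p) => globToRegExp(p).test(key));
}

const patterns = ["*.tmp"];
console.log(isExcluded("uploads/scratch.tmp", patterns)); // true  → skipped
console.log(isExcluded("uploads/report.pdf", patterns));  // false → backed up
```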
Output Layout
/mnt/HC_Volume_104390340/
└── r2/
├── {bucket-name}/
│ ├── .manifest.json # ETag tracking per blob
│ ├── uploads/file.pdf.age
│ ├── uploads/image.png.age
│ └── ...
└── {other-bucket}/
    └── ...
Data Flow Detail
┌─────────────────────────────────────────────────────────┐
│ R2 BACKUP DATA FLOW │
└─────────────────────────────────────────────────────────┘
R2 Bucket
├── uploads/report.pdf (ETag: "abc123")
├── uploads/photo.png (ETag: "def456")
└── uploads/old.docx (ETag: "ghi789")
│
│ 1. List blobs (S3 ListObjectsV2)
▼
.manifest.json check
├── uploads/report.pdf → ETag match → SKIP
├── uploads/photo.png → ETag changed → PROCESS
└── uploads/old.docx → ETag match → SKIP
│
│ 2. Stream new/changed blobs
▼
S3 GetObject stream
│
│ 3. Pipe through age encryption
▼
age -r age1{public_key}
│
│ 4. Write encrypted file
▼
/mnt/HC_Volume_104390340/r2/{bucket}/uploads/photo.png.age
│
│ 5. Update manifest
▼
.manifest.json → { "uploads/photo.png": "def456" }
Threat Scanning Flow
Schedule
The scanner runs every hour via cron: 0 * * * *
It targets blobs modified in the last 65 minutes, providing a 5-minute overlap to avoid gaps between scans.
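The window check is simple date arithmetic over each blob's last-modified timestamp (a sketch; `isInScanWindow` is an illustrative helper name):

```typescript
const WINDOW_MINUTES = 65; // 60-minute cron interval + 5-minute overlap

// A blob qualifies for scanning if it was modified within the window.
function isInScanWindow(lastModified: Date, now: Date): boolean {
  const cutoff = new Date(now.getTime() - WINDOW_MINUTES * 60 * 1000);
  return lastModified >= cutoff;
}

const now = new Date("2024-01-01T12:00:00Z");
console.log(isInScanWindow(new Date("2024-01-01T11:10:00Z"), now)); // true  (50 min ago)
console.log(isInScanWindow(new Date("2024-01-01T10:00:00Z"), now)); // false (2 h ago)
```

The 5-minute overlap means a blob uploaded right at a scan boundary is seen by two consecutive runs rather than potentially by neither.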
Process
┌─────────────────────────────────────────────────────────┐
│ THREAT SCANNING FLOW │
└─────────────────────────────────────────────────────────┘
1. Query R2 for blobs modified in last 65 minutes
└─ Skips any blob with key prefix "quarantine/"
2. Download each blob (raw, unencrypted) to:
/tmp/R2_SCAN/{bucket}/{key}
3. Run Microsoft Defender on-demand scan:
mdatp scan custom --path /tmp/R2_SCAN --ignore-exclusions
4. Parse "mdatp threat list" output
└─ Filter results by:
• scan time window
• path within /tmp/R2_SCAN
5. For each detected threat:
├─ Copy blob in R2: {bucket}/{key} → {bucket}/quarantine/{key}
├─ Delete original: {bucket}/{key}
└─ Send Postmark alert email with threat details
6. Delete /tmp/R2_SCAN entirely (clean temporary files)
Why --ignore-exclusions
/tmp/R2_SCAN is registered as an mdatp folder exclusion to prevent real-time protection from interfering with downloaded files while they are being staged. The on-demand scan uses --ignore-exclusions specifically to bypass that exclusion so the custom scan path is fully inspected.
Quarantine Logic
Infected blobs are not deleted from R2 outright. They are moved to a quarantine/ prefix within the same bucket:
Original: {bucket}/uploads/malware.exe
Quarantine: {bucket}/quarantine/uploads/malware.exe
This preserves the file for investigation while preventing the application from serving it. The quarantine prefix is excluded from future scans to avoid repeated alerts.
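The key rewrite is a pure prefix operation, which also gives the scanner its skip rule (a sketch; the helper names are illustrative):

```typescript
const QUARANTINE_PREFIX = "quarantine/";

// Map an infected blob's key to its quarantine destination in the same bucket.
function quarantineKey(key: string): string {
  return QUARANTINE_PREFIX + key;
}

// Keys already under quarantine/ are skipped by future scans.
function isQuarantined(key: string): boolean {
  return key.startsWith(QUARANTINE_PREFIX);
}

console.log(quarantineKey("uploads/malware.exe"));        // "quarantine/uploads/malware.exe"
console.log(isQuarantined("quarantine/uploads/malware.exe")); // true
console.log(isQuarantined("uploads/clean.pdf"));              // false
```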
Alert Email
For each detected threat, a Postmark transactional email is sent containing:
- Bucket and blob key
- Threat name reported by Defender
- Detection timestamp
- Quarantine destination path
MySQL Backup Flow
backup-mysql.sh runs as step 2 of the orchestrator pipeline:
- Dump all databases using mysqldump --all-databases.
- Compress the dump with gzip.
- Encrypt the compressed dump with age -r {AGE_RECIPIENT}.
- Write the final file to the block volume as mysql_{timestamp}.sql.gz.age.
The same public-key-only encryption model applies — recovery requires the private key from 1Password.
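The steps above amount to a single shell pipeline. A sketch that assembles it (the timestamp format and recipient value here are assumptions; the real script may format them differently):

```typescript
// Build the dump → compress → encrypt pipeline for a given run time.
function mysqlBackupCommand(ageRecipient: string, outDir: string, when: Date): string {
  // Replace ":" and "." so the timestamp is filesystem-safe.
  const ts = when.toISOString().replace(/[:.]/g, "-");
  return (
    `mysqldump --all-databases | gzip | ` +
    `age -r ${ageRecipient} > ${outDir}/mysql_${ts}.sql.gz.age`
  );
}

const cmd = mysqlBackupCommand(
  "age1examplepublickey", // hypothetical AGE_RECIPIENT value
  "/mnt/HC_Volume_104390340",
  new Date("2024-01-01T02:00:00Z")
);
console.log(cmd);
```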
OneDrive Sync
sync-to-onedrive.sh runs as step 5 of the orchestrator pipeline:
rclone sync /mnt/HC_Volume_104390340/ onedrive:{remote_path}
- Uses rclone's sync command, making the OneDrive destination an exact mirror of the block volume.
- rclone configuration (remote credentials, OAuth tokens) is stored in the path defined by RCLONE_CONFIG in backup.env.
- All files on the block volume — R2 backups, MySQL dumps, Cloudflare exports — are included in the sync.
Email Notifications
All email is sent via Postmark using its REST API (/email endpoint).
Notification types:
| Event | Trigger | Content |
|---|---|---|
| Threat detected | Each infected file found during scan | Bucket, key, threat name, quarantine path |
| Daily backup summary | Step 6 of orchestrator (2 AM) | Per-step results, file counts, errors |
The Postmark client (src/postmark.ts) sends plain HTTP POST requests to https://api.postmarkapp.com/email using the POSTMARK_API_KEY server token.
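A minimal sketch of such a request. The header and body field names follow Postmark's documented /email API; the subject and message text below are illustrative, not the service's actual templates:

```typescript
// Build the URL, headers, and JSON payload for a Postmark /email POST.
function buildThreatAlert(
  env: Record<string, string>,
  bucket: string,
  key: string,
  threat: string
) {
  return {
    url: "https://api.postmarkapp.com/email",
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json",
      "X-Postmark-Server-Token": env.POSTMARK_API_KEY, // server token header
    },
    body: {
      From: env.POSTMARK_FROM,
      To: env.POSTMARK_TO,
      Subject: `Malware detected: ${bucket}/${key}`,
      TextBody: `Threat ${threat} found in ${bucket}/${key}; moved to ${bucket}/quarantine/${key}.`,
    },
  };
}

const req = buildThreatAlert(
  { POSTMARK_API_KEY: "server-token", POSTMARK_FROM: "ops@example.com", POSTMARK_TO: "alerts@example.com" },
  "time24",
  "uploads/malware.exe",
  "Eicar-Test-Signature"
);
console.log(req.body.Subject);
```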
Recovery
The recover.ts service decrypts backed-up blobs and restores them. It requires the private key, which must be retrieved from 1Password before running.
Full Bucket Recovery
# Using Docker Compose (recommended)
AGE_IDENTITY_FILE=/path/to/private-key.txt docker compose run --rm recover
Filtered Recovery
# Recover a specific bucket only
node dist/index.js recover --bucket time24
# Recover blobs matching a path pattern
node dist/index.js recover --bucket time24 --pattern "uploads/**"
Recovery Flow
1. Read private key from AGE_IDENTITY_FILE
2. List .age files in /mnt/HC_Volume_104390340/r2/{bucket}/
3. For each file:
├─ Decrypt: age --decrypt -i {key_file} {blob}.age
   └─ Upload decrypted stream back to R2
Recovery Checklist
Before starting recovery:
- Retrieve the private key from 1Password.
- Confirm the block volume is mounted at /mnt/HC_Volume_104390340/.
- Confirm R2 credentials are set in environment variables.
- Use --bucket and --pattern flags to limit scope when recovering individual files rather than a full restore.
Configuration
Environment Variables
Both services read from environment variables. For the unified-backup orchestrator, these are loaded from /root/unified-backup/config/backup.env.
| Variable | Description |
|---|---|
| R2_ACCOUNT_ID | Cloudflare account ID |
| R2_ACCESS_KEY_ID | R2 API access key |
| R2_SECRET_ACCESS_KEY | R2 API secret key |
| AGE_RECIPIENT | age public key (age1...) — used for all encryption |
| POSTMARK_API_KEY | Postmark server API token |
| POSTMARK_FROM | Sender email address |
| POSTMARK_TO | Recipient email address for alerts and summaries |
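A sketch of loading and validating these variables at startup. The `loadConfig` helper is illustrative (not the services' actual code); failing fast on missing values avoids a backup run that dies halfway through:

```typescript
const REQUIRED_VARS = [
  "R2_ACCOUNT_ID",
  "R2_ACCESS_KEY_ID",
  "R2_SECRET_ACCESS_KEY",
  "AGE_RECIPIENT",
  "POSTMARK_API_KEY",
  "POSTMARK_FROM",
  "POSTMARK_TO",
] as const;

// Report everything that is missing at once, rather than failing on first use.
function loadConfig(env: Record<string, string | undefined>): Record<string, string> {
  const missing = REQUIRED_VARS.filter((v) => !env[v]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  // age public keys start with "age1"; catch obvious misconfiguration early.
  if (!env.AGE_RECIPIENT!.startsWith("age1")) {
    throw new Error("AGE_RECIPIENT does not look like an age public key");
  }
  return Object.fromEntries(REQUIRED_VARS.map((v) => [v, env[v]!]));
}

// Example with placeholder values (in production, pass process.env):
const config = loadConfig({
  R2_ACCOUNT_ID: "acct",
  R2_ACCESS_KEY_ID: "key",
  R2_SECRET_ACCESS_KEY: "secret",
  AGE_RECIPIENT: "age1examplepublickey",
  POSTMARK_API_KEY: "token",
  POSTMARK_FROM: "ops@example.com",
  POSTMARK_TO: "alerts@example.com",
});
console.log(Object.keys(config).length); // 7
```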
R2 Client Endpoint
The Node.js service uses the AWS SDK v3 S3 client pointed at the Cloudflare R2 S3-compatible endpoint:
https://{R2_ACCOUNT_ID}.r2.cloudflarestorage.com
Bucket Config (config/config.yaml)
buckets:
- name: "*"
excludeBuckets:
- some-bucket
excludePatterns:
  - "*.tmp"
- name: "*" — process all R2 buckets discovered via the API
- excludeBuckets — bucket names to skip entirely
- excludePatterns — glob patterns matched against blob keys; matching blobs are skipped
Block Volume
Backup files are written to a Hetzner block volume mounted at:
/mnt/HC_Volume_104390340/
This path is configured in backup.env and must be mounted before any backup or sync step runs.
Key Files
| File | Purpose |
|---|---|
| /root/azure-blob-backups/src/backup.ts | Incremental encrypted R2 backup logic |
| /root/azure-blob-backups/src/scan.ts | Download, mdatp scan, quarantine, and alert |
| /root/azure-blob-backups/src/r2.ts | R2 S3 client (list, download, copy, delete) |
| /root/azure-blob-backups/src/postmark.ts | Postmark REST email client |
| /root/azure-blob-backups/src/recover.ts | Decrypt and restore blobs from backup |
| /root/azure-blob-backups/config/config.yaml | Bucket selection and exclusion config |
| /root/unified-backup/scripts/orchestrator.sh | Master daily backup runner (all 6 steps) |
| /root/unified-backup/scripts/backup-blobs.sh | Step 1: triggers the Node.js R2 backup |
| /root/unified-backup/scripts/backup-mysql.sh | Step 2: MySQL dump + compress + encrypt |
| /root/unified-backup/scripts/backup-cloudflare-dns.sh | Step 3: DNS zone export |
| /root/unified-backup/scripts/backup-cloudflare-config.sh | Step 4: WAF/SSL/workers export |
| /root/unified-backup/scripts/sync-to-onedrive.sh | Step 5: rclone sync to OneDrive |
| /root/unified-backup/scripts/notify.sh | Step 6: Postmark summary email |
| /root/unified-backup/config/backup.env | Paths, age public key, rclone config path |