Off‑site backups to S3‑compatible object storage are one of the most cost‑effective ways to protect cPanel and VPS workloads. Instead of relying only on local disks or a single data center, you can send encrypted, deduplicated backups to remote object storage automatically every night. In this article we will walk through a practical, battle‑tested setup using three tools that work beautifully together: rclone (sync to S3‑compatible storage), restic (encrypted, deduplicated backups) and Cron (scheduling). We will focus on real Linux servers and cPanel environments like the ones we manage every day at dchost.com, and we will keep everything provider‑neutral so you can use any S3‑compatible platform or your own MinIO cluster on a VPS or dedicated server.
If you already understand why backups are important, your next question is usually: “How do I automate this in a clean, testable way so I can forget about it until I really need it?” That is exactly what we will cover: what to back up on cPanel/VPS, how to structure your object storage buckets, how to configure rclone and restic securely, how to schedule jobs with Cron, and how to test restores so you know the whole chain really works.
Table of Contents
- 1 Why Off‑Site Backups to Object Storage Matter
- 2 Architecture: cPanel/VPS + Object Storage + rclone + restic + Cron
- 3 Preparing Your Object Storage Target
- 4 Installing and Configuring rclone for S3‑Compatible Storage
- 5 Setting Up Restic Repositories for Encrypted, Deduplicated Backups
- 6 Example Backup Flow for a cPanel Server
- 7 Example Backup Flow for a Generic VPS (Without cPanel)
- 8 Scheduling and Monitoring with Cron
- 9 Testing Restores: Don’t Wait for an Incident
- 10 Hardening, Cost Control and Practical Tips
- 11 Bringing It All Together on dchost.com Infrastructure
Why Off‑Site Backups to Object Storage Matter
Modern backup strategy is no longer just “copy files to another folder”. Between ransomware, accidental deletions, hardware failures and region‑wide incidents, you need a design that survives multiple failure modes. The classic pattern for this is the 3‑2‑1 backup strategy: three copies of your data, on two different media, with one copy off‑site. If this sounds new to you, we recommend reading our detailed guide explaining the 3‑2‑1 backup strategy and how to automate it on cPanel and VPS.
S3‑compatible object storage is a perfect fit for the off‑site part of 3‑2‑1:
- Durable and replicated: objects are typically stored redundantly across multiple disks and sometimes across multiple availability zones.
- Pay for what you use: you pay per GB stored and per GB transferred, which works well for growing backup sets.
- API‑driven: tools like rclone and restic speak the S3 API natively, so automation is straightforward.
- Geographically decoupled: you can store backups in a different data center or even a different country for stronger disaster recovery.
If you want a broader comparison of storage types, we also have a detailed article on Object Storage vs Block Storage vs File Storage and which one to choose for web apps and backups.
Architecture: cPanel/VPS + Object Storage + rclone + restic + Cron
Before writing any scripts, it helps to be clear on the overall architecture. For a typical cPanel or generic Linux VPS server, a clean design looks like this:
- Source server: cPanel server or unmanaged VPS where your sites, databases and emails live.
- Local backup staging: a directory on the same server (for example /backup or /var/backups) where we generate daily backups or cPanel account archives.
- restic repository: an encrypted backup repository stored directly in S3‑compatible object storage (or optionally on local disk, then synced with rclone).
- rclone remote: a configured connection to your S3‑compatible storage endpoint.
- Cron jobs: scheduled tasks that run backup scripts (files + databases) and send them to the restic repository via S3.
There are two main patterns you can combine:
- cPanel built‑in backups + rclone: use cPanel/WHM's native backup system to produce compressed account backups under /backup, then use rclone to sync those archives to object storage.
- restic directly to S3‑compatible storage: use restic to back up specific directories (for example /home, /var/lib/mysql-backups, /etc) directly to an S3 backend with encryption and deduplication.
In practice, many teams do both: they preserve native cPanel restore points and also maintain a restic repository for quick granular restores. Our other article on off‑site backups with restic/Borg to S3‑compatible storage goes deeper into the theory; here we will focus more on concrete scripts for cPanel/VPS.
Preparing Your Object Storage Target
First, you need an S3‑compatible object storage endpoint and credentials. This could be:
- A third‑party S3‑compatible object storage service.
- Your own MinIO cluster or single‑node deployment on a VPS or dedicated server from dchost.com (see our guide on running production‑ready MinIO on a VPS).
Create a dedicated bucket for backups, for example:
- Bucket name: company-backups
- Optional folder structure inside the bucket:
company-backups/cpanel/hostname/...
company-backups/vps/app1/...
Then create an access key and secret key for a user that has permissions restricted to that bucket only (least privilege). Keep these credentials safe; we will store them in a root‑only configuration file or in environment variables on the server.
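As a concrete illustration of "least privilege", a bucket‑scoped policy in the AWS/MinIO JSON style might look like the sketch below. The bucket name and file path are just examples, and the mechanism for attaching the policy depends on your provider (MinIO's mc admin tooling, an IAM console, etc.). Note that DeleteObject is included because restic's prune step must remove old objects:

```shell
# Hypothetical least-privilege policy for the backup user (adapt to your provider).
cat > /tmp/backup-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": ["arn:aws:s3:::company-backups"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::company-backups/*"]
    }
  ]
}
EOF
```

Attach this policy to the dedicated backup user only; the same credentials will then work for both rclone and restic without exposing any other bucket.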
Installing and Configuring rclone for S3‑Compatible Storage
rclone is a versatile command‑line tool that can sync files and directories to many cloud backends, including any S3‑compatible service.
Install rclone on a Linux VPS or cPanel server
On most distributions, you can either use the package manager or the installer script. For example on Debian/Ubuntu:
sudo apt update
sudo apt install rclone
On AlmaLinux/Rocky/other RHEL derivatives:
sudo dnf install rclone
If your repository versions are too old, you can use the official install script:
curl -fsSL https://rclone.org/install.sh | sudo bash
Configure an S3 remote in rclone
Run the configuration wizard as root:
sudo rclone config
Follow the interactive prompts:
- Choose n for a new remote.
- Name it, for example backup-s3.
- Storage type: select s3.
- Provider: choose Other (for generic S3‑compatible) unless your provider is listed specifically.
- Enter your S3 endpoint URL (for example https://objects.example.com).
- Region: set according to your provider, or leave blank if not required.
- Enter your access key ID and secret access key.
- For advanced S3 options like ACLs or storage classes, you can accept defaults initially.
When done, test the configuration:
sudo rclone ls backup-s3:company-backups
If the bucket is empty, you may see nothing, but you should not see an error.
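If you prefer scripted provisioning over the interactive wizard, recent rclone versions can create the same remote non‑interactively. This sketch assumes the endpoint and placeholder credentials from earlier; adjust the provider value if yours is listed natively:

```shell
# Non-interactive equivalent of the wizard above.
sudo rclone config create backup-s3 s3 \
  provider=Other \
  endpoint=https://objects.example.com \
  access_key_id="YOUR_ACCESS_KEY_ID" \
  secret_access_key="YOUR_SECRET_KEY"
```

This is handy when you provision many servers from a configuration management tool, since the resulting remote is identical to one created through rclone config.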
Setting Up Restic Repositories for Encrypted, Deduplicated Backups
restic is a modern backup program that offers encrypted, incremental, deduplicated backups and can store data directly in S3‑compatible object storage.
Install restic
On Debian/Ubuntu:
sudo apt update
sudo apt install restic
On RHEL‑based systems, you may install from EPEL or download a binary from the restic releases page:
curl -L -o restic.bz2 https://github.com/restic/restic/releases/download/v0.16.4/restic_0.16.4_linux_amd64.bz2
bunzip2 restic.bz2
chmod +x restic
sudo mv restic /usr/local/bin/
Environment variables for restic + S3
We normally keep restic settings in a root‑only file such as /root/.restic-env and load it in our backup scripts. Example:
export RESTIC_REPOSITORY="s3:https://objects.example.com/company-backups/vps-app1"
export RESTIC_PASSWORD="super-strong-long-random-password"
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
Secure the file:
sudo chown root:root /root/.restic-env
sudo chmod 600 /root/.restic-env
Initialize the restic repository
Once the environment variables are set, initialize the repository:
source /root/.restic-env
restic init
You should see a confirmation that the repository has been initialized in the bucket path. This creates the encrypted structure that restic will use for snapshots, deduplication and metadata.
Example Backup Flow for a cPanel Server
On cPanel/WHM, you usually want two layers:
- cPanel's native backups (account‑level archives for easy full account restores).
- Optionally, a restic layer for deduplicated, long‑retention backups across many days or weeks.
1. Enable and tune native cPanel backups
In WHM, go to Backup Configuration and:
- Enable backups.
- Choose compressed or incremental backups.
- Set the backup directory (often /backup on a separate disk).
- Decide how many daily/weekly/monthly copies to retain locally.
If you want a detailed walkthrough of cPanel backup options, see our Full cPanel Backup and Restore guide.
2. rclone script to sync cPanel backups to object storage
Create a script such as /root/backup-cpanel-rclone.sh:
#!/usr/bin/env bash
set -euo pipefail
BACKUP_SRC="/backup" # cPanel backup directory
REMOTE="backup-s3:company-backups/cpanel/$(hostname -f)"
LOG_FILE="/var/log/backup-cpanel-rclone.log"
{
echo "[$(date -Is)] Starting cPanel backup upload..."
# Sync local backup directory to S3-compatible storage
rclone sync "${BACKUP_SRC}" "${REMOTE}" \
  --fast-list \
  --transfers=4 \
  --checkers=8 \
  --delete-excluded
echo "[$(date -Is)] Upload completed successfully."
} >> "${LOG_FILE}" 2>&1
Make it executable:
sudo chmod +x /root/backup-cpanel-rclone.sh
Now you have a simple, repeatable way to upload all generated cPanel backup archives to the remote object storage bucket.
3. Optional: restic layer for deduplicated history
If you want more efficient long‑term history (for example dozens of daily snapshots) without keeping all old cPanel tarballs, add restic on top. Create /root/.restic-env for this server:
export RESTIC_REPOSITORY="s3:https://objects.example.com/company-backups/restic-cpanel-$(hostname -s)"
export RESTIC_PASSWORD="another-very-strong-random-password"
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
Initialize once:
source /root/.restic-env
restic init
Create /root/backup-cpanel-restic.sh:
#!/usr/bin/env bash
set -euo pipefail
source /root/.restic-env
LOG_FILE="/var/log/backup-cpanel-restic.log"
{
echo "[$(date -Is)] Starting restic backup of critical paths..."
restic backup \
  /home \
  /etc \
  /var/named \
  /var/lib/mysql-backups \
  --exclude-caches \
  --one-file-system
echo "[$(date -Is)] Backup finished. Running retention policy..."
restic forget \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --prune
echo "[$(date -Is)] Retention completed."
} >> "${LOG_FILE}" 2>&1
This assumes you have a separate process (for example a MySQL dump cron job) that keeps recent database dumps in /var/lib/mysql-backups. Adjust paths to your environment.
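For completeness, the companion dump job this paragraph assumes could look roughly like the sketch below. The script name, dump directory and retention window are placeholders, and the generic VPS section later in this article shows a fuller variant:

```shell
#!/usr/bin/env bash
# Hypothetical /root/mysql-dump-cpanel.sh: keeps recent dumps for restic to pick up.
set -euo pipefail

DUMP_DIR="/var/lib/mysql-backups"
mkdir -p "${DUMP_DIR}"

# Consistent dump of all databases, compressed on the fly.
mysqldump --all-databases --single-transaction \
  | gzip > "${DUMP_DIR}/all-databases-$(date +%F).sql.gz"

# Keep only a few days locally; restic retains the longer history off-site.
find "${DUMP_DIR}" -type f -name '*.sql.gz' -mtime +3 -delete
```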
Example Backup Flow for a Generic VPS (Without cPanel)
On an unmanaged VPS (for example hosting a Laravel or Node.js app), you usually control everything via SSH. Here is a simple, flexible pattern that we also use internally.
1. Define what to back up
For a typical Linux VPS running web applications, you want:
- Application code and uploads: for example /var/www or /opt/apps.
- Databases: MySQL/MariaDB, PostgreSQL, etc., usually via logical dumps or snapshot tools.
- Configuration: /etc (web server, PHP‑FPM, system config).
- Optional logs: at least for short retention, if you need to investigate incidents.
We have a separate article on MySQL backup strategies (mysqldump vs XtraBackup vs snapshots) if you want to design your database backup layer in more detail.
2. Example: MySQL dump + restic
Create a dump directory and a simple script, for example /root/mysql-dump.sh:
#!/usr/bin/env bash
set -euo pipefail
BACKUP_DIR="/var/backups/mysql"
mkdir -p "${BACKUP_DIR}"
DATE="$(date +%F_%H-%M-%S)"
mysqldump --all-databases --single-transaction --routines --events \
  | gzip > "${BACKUP_DIR}/mysqldump-${DATE}.sql.gz"
# Optional: delete local dumps older than 7 days
find "${BACKUP_DIR}" -type f -mtime +7 -delete
Make it executable:
sudo chmod +x /root/mysql-dump.sh
Next, define the restic environment file /root/.restic-env (if you haven't already):
export RESTIC_REPOSITORY="s3:https://objects.example.com/company-backups/vps-app1-restic"
export RESTIC_PASSWORD="super-long-random-password"
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
Initialize once:
source /root/.restic-env
restic init
Now create the main VPS backup script /root/backup-vps-restic.sh:
#!/usr/bin/env bash
set -euo pipefail
source /root/.restic-env
LOG_FILE="/var/log/backup-vps-restic.log"
{
echo "[$(date -Is)] Starting VPS backup..."
# 1) Ensure fresh MySQL dump
/root/mysql-dump.sh
# 2) Run restic backup
restic backup \
  /var/www \
  /etc \
  /var/backups/mysql \
  --exclude-caches \
  --one-file-system
echo "[$(date -Is)] Backup completed. Running retention policy..."
restic forget \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --prune
echo "[$(date -Is)] Retention finished."
} >> "${LOG_FILE}" 2>&1
Make it executable:
sudo chmod +x /root/backup-vps-restic.sh
Scheduling and Monitoring with Cron
Once your scripts work when run manually, the next step is automation with Cron. We have a separate guide on automating backups, reports and maintenance with Cron on cPanel and DirectAdmin, but we will highlight the essentials here.
Scheduling on a generic Linux VPS
Edit the root user's crontab:
sudo crontab -e
Example schedule:
# m h dom mon dow command
# Daily VPS backup at 02:30
30 2 * * * /root/backup-vps-restic.sh
# Daily cPanel backup upload at 03:30 (if using cPanel pattern)
30 3 * * * /root/backup-cpanel-rclone.sh
# Daily cPanel restic backup at 04:00
0 4 * * * /root/backup-cpanel-restic.sh
Cron will email any output that is not redirected to a log file to the root account (assuming local email is configured); since the scripts above redirect everything to their own logs, a non-zero exit is your main failure signal. In addition, you can:
- Monitor log files under /var/log/backup-*.log.
- Push log metrics into your existing monitoring stack (for example Prometheus + alerting), similar to what we describe in our article on VPS monitoring and alerts with Prometheus and Grafana.
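If you run node_exporter with the textfile collector, a backup script can expose a "last success" timestamp as a metric. The directory below is the collector's conventional location and is an assumption; it may differ on your setup:

```shell
# Hedged sketch: publish last-success time for Prometheus alerting.
TEXTFILE_DIR="/var/lib/node_exporter/textfile_collector"  # assumption: collector dir
mkdir -p "${TEXTFILE_DIR}"
cat > "${TEXTFILE_DIR}/backup.prom" <<EOF
# HELP backup_last_success_timestamp_seconds Unix time of the last successful backup
# TYPE backup_last_success_timestamp_seconds gauge
backup_last_success_timestamp_seconds $(date +%s)
EOF
```

An alert rule can then fire when time() - backup_last_success_timestamp_seconds exceeds, say, 26 hours, catching silently broken Cron jobs.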
Scheduling from cPanel/WHM
On cPanel servers, you can also schedule scripts using:
- WHM → Cron Jobs for server‑wide tasks.
- cPanel → Cron Jobs for per‑account tasks (less relevant for system‑wide backups).
The commands will be the same (/root/backup-cpanel-rclone.sh etc.), but you manage them through WHM's UI instead of crontab -e.
Testing Restores: Don’t Wait for an Incident
A backup that has never been restored is an assumption, not a guarantee. We strongly recommend including regular restore tests in your maintenance schedule. This is also a key part of any serious disaster recovery plan; if you are designing one, our guide on how to write a disaster recovery (DR) plan with realistic RPO/RTO and backup tests may help.
Testing restic restores
List snapshots:
source /root/.restic-env
restic snapshots
Restore a specific directory to a temporary path (never overwrite live data blindly):
mkdir -p /tmp/restic-restore-test
restic restore latest --target /tmp/restic-restore-test
Check that files, permissions and timestamps look reasonable. For database dumps, run a test import into a temporary database on a non‑production server.
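As a sketch of that database check, on a disposable test VM (never production, since an --all-databases dump recreates the original schemas when replayed), assuming the restore landed under the /tmp path used above:

```shell
# Pick the newest dump from the restored tree and replay it into a scratch MySQL.
LATEST_DUMP="$(ls -t /tmp/restic-restore-test/var/backups/mysql/*.sql.gz | head -n 1)"
gunzip -c "${LATEST_DUMP}" | mysql

# Confirm the expected schemas and tables reappeared before calling the test a pass.
mysql -e "SHOW DATABASES"
```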
Testing cPanel restores
For cPanel native backups:
- Download a backup archive from object storage using rclone if necessary.
- Use WHM's Restore a Full Backup/cpmove File to restore into a test account or test server.
This validates that your backup archives are complete and that all required settings (DNS zones, SSL, databases, email accounts) are properly preserved.
Hardening, Cost Control and Practical Tips
A few real‑world lessons from running this kind of setup across many servers:
1. Protect credentials and limit access
- Store S3 access keys in root‑owned files with chmod 600, never in world‑readable scripts.
- Create a dedicated S3 user with bucket‑scoped permissions (no access to other buckets or admin APIs).
- Rotate access keys periodically and test after rotation.
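After rotating keys, a quick smoke test from the server confirms both tools still reach the bucket; the remote and repository names follow the earlier examples:

```shell
# Verify rclone can still list the bucket with the new credentials...
rclone lsd backup-s3:company-backups

# ...and that restic can still open and read the repository.
source /root/.restic-env
restic snapshots --latest 1
```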
2. Use encryption correctly
- restic encrypts all data by default; keep the RESTIC_PASSWORD safe (ideally in a separate password manager).
- If you use rclone without restic, consider rclone's crypt backend to encrypt data client‑side before sending to object storage.
- Some object storage platforms also support object lock and WORM (Write‑Once Read‑Many) for ransomware protection; we cover this in depth in our guide to ransomware‑proof backups with S3 Object Lock.
3. Exclude junk and temporary data
Backups are not a trash bag for everything on the disk. Excluding some paths improves speed and reduces costs:
- Cache directories (for example storage/framework/cache in Laravel, wp-content/cache in WordPress).
- System caches under /var/cache that can be rebuilt.
Add --exclude rules, or maintain an exclude file and pass it to restic with --exclude-file=/root/backup-excludes.txt, to manage this cleanly.
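A starting point for such an exclude file might look like this; the patterns are examples for the stacks mentioned above and should be tuned to your actual directory layout:

```shell
# Write an exclude list for restic (paths are illustrative).
cat > /root/backup-excludes.txt <<'EOF'
# Rebuildable caches -- not worth backing up
/var/cache
/home/*/tmp
/var/www/*/storage/framework/cache
/var/www/*/wp-content/cache
EOF

# Then add --exclude-file=/root/backup-excludes.txt to the restic backup command.
```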
4. Watch bandwidth and storage costs
- Try to run backups during off‑peak hours to reduce impact on production traffic.
- restic deduplication significantly reduces storage usage, especially when many files change little between days.
- Use a retention policy (for example keep 7 daily, 4 weekly, 6 monthly) that fits your legal and business needs. Our article on how to decide backup retention periods under KVKK/GDPR vs real storage costs is a good companion here.
5. Integrate with monitoring and runbooks
- Add simple sanity checks: for example, after each backup, record the snapshot ID or last backup date in a small status file.
- Use your monitoring system to alert if backups haven't run in N hours or if last backup failed.
- Document a clear restore runbook so that anyone on the team can restore a site, database or whole server under pressure.
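One minimal way to implement the status-file idea is shown below; the path is a placeholder for illustration (in production you would likely use a root-owned location such as /var/lib), and the 26-hour threshold is an example for a daily schedule:

```shell
# At the end of a successful backup run, record the timestamp...
STATUS_FILE="/tmp/backup-last-success"   # placeholder path for illustration
date -Is > "${STATUS_FILE}"

# ...and from a separate cron job, warn if it goes stale (26 h = 1560 min).
if [ -z "$(find "${STATUS_FILE}" -mmin -1560 2>/dev/null)" ]; then
  echo "WARNING: no successful backup recorded in the last 26 hours" >&2
fi
```

Because the timestamp is written only after the script's last step, a crash anywhere in the backup leaves the file stale and triggers the alert.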
Bringing It All Together on dchost.com Infrastructure
When we design backup strategies for customers on dchost.com VPS, dedicated servers or colocation, the pattern above is the one we keep coming back to: local snapshots + off‑site encrypted copies on S3‑compatible storage, automated with rclone, restic and Cron. It is simple enough to understand, but flexible enough to handle everything from a single WordPress site to complex multi‑app stacks. Most importantly, it is easy to test and to restore.
If you are starting from scratch, your practical next steps are:
- Choose or deploy an S3‑compatible object storage backend and create a dedicated backup bucket.
- Install and configure rclone and restic on your cPanel/VPS servers.
- Write minimal backup scripts that you can run manually and verify.
- Add Cron schedules and basic monitoring for success/failure.
- Plan and rehearse at least one restore scenario (single site, full server, or critical database).
As your environment grows, you can refine retention, add cross‑region replication or object lock, and integrate backup metrics into your wider observability stack. The foundation, however, stays the same: clear scope, simple scripts, reliable automation and regular restore tests. If your servers run on dchost.com infrastructure and you want to validate your backup plan or move to an object‑storage‑based design, our team is happy to help you map these concepts to your actual workloads.
