
Offsite Backups Without the Drama: Restic/Borg to S3-Compatible Storage (Versioning, Encryption, Retention)

So there I was, sipping a very necessary cup of coffee, when a client pinged me with the sinking message of the week: “Our VPS died, and I think the backups are on the same server.” You know that pause your brain does, like it’s buffering bad news? That was me. It’s a classic trap: we all mean to set up offsite backups, but it’s surprisingly easy to postpone. And then one quiet Tuesday becomes “why is everything broken?” Tuesday.

Ever had that moment when you realize your backup plan is basically a hope and a prayer? I’ve been there. It’s why I’ve become a fan of simple, boring, dependable systems. In this guide, I want to show you a calm, practical path: using Restic or Borg with S3-compatible storage, and getting versioning, encryption, and retention policies that don’t require a PhD or a full-time ops team. I’ll share what’s worked for me, what tripped me up, and a few small choices that make a huge difference when you actually need to restore. Because that’s what matters, right? Not the backup, the restore.

Why Offsite S3 (and Why It Saves Your Bacon)

I remember my first “oh thank goodness” restore like it was yesterday. The client had a database corruption after a rushed plugin update, and the local snapshots were all toast because they lived on the same disk array that was failing. What saved us wasn’t a complicated enterprise system; it was a quiet little Restic repo tucked away in an S3-compatible bucket, with a retention policy we had set and forgotten about. We restored last night’s clean snapshot, they bought me coffee, and life moved on.

Here’s the thing: offsite storage breaks the blast radius. If your server melts, if your provider has a hiccup, or if ransomware worms its way in, you’ve got a copy somewhere else. And if that “somewhere else” is object storage via an S3-compatible endpoint, you gain a few extra goodies: effortless scalability, robust durability, and an interface most tools already know how to speak. There’s also the cost angle. With a smart retention strategy—keeping dailies, weeklies, and monthlies—you’re not paying for a hoarder’s attic full of stale snapshots.

When I say “S3-compatible,” I mean anything that speaks the S3 API: major cloud providers, specialty object storage vendors, and even self-hosted solutions. The beauty of this is you can pick what fits your budget and geography, and Restic (and, with a small twist, Borg) will hum along without caring who happens to store the bytes. That flexibility is calming—no lock-in, no weird agent running on your servers, just a tool and a bucket.

Restic and Borg: How They Think (and Why That Matters)

Both Restic and Borg came out of the same “I’m tired of backup pain” lineage. They’re deduplicating, they encrypt your data client-side (Restic always; Borg when you pick an encryption mode at init), and they treat backups as snapshots you can manage over time. In my experience, the success of your backups often comes down to how intuitive the snapshot model feels while you’re under pressure. Restic calls them snapshots; Borg calls them archives. The vibe is the same: a point-in-time view of your data that you can restore later.

Restic really shines when it comes to object storage. It talks S3 natively, sets up quickly, and makes it easy to plug in retention rules. Borg is a beast at efficient, secure backups too, but it historically leans on SSH to talk to a remote repository. That’s not a limitation—just a workflow difference. With Borg, I usually back up to a small VPS via SSH and then replicate that repo into an S3 bucket. It’s one more hop, but it gives you full control and strong consistency. Some folks mount S3 via FUSE and write directly with Borg. Personally, I’ve had mixed results there under load, and I prefer the SSH-first, replicate-second pattern for reliability.

Think of it like this: Restic hands you the keys to S3 and says “go for it.” Borg hands you a rock-solid vault and says “place this vault where it’s safe” (an SSH server), then you optionally mirror that vault to S3 for extra resilience. Both are smart choices. Your preference might come down to the ecosystem you like, the debugging experience you prefer, and how many moving parts you’re comfortable managing.

Setting Up Restic with S3-Compatible Storage (The Smooth Ride)

Let me walk you through how I typically set up Restic. The steps are comfortably boring. You pick an S3-compatible provider, create a bucket, and get an access key and secret. Then you choose a strong repository password for Restic (this is your client-side encryption passphrase), and you wire the whole thing together with a few environment variables. I like environment variables because they keep scripts clean and reduce typos when it matters.

Environment variables that keep your setup tidy

export RESTIC_REPOSITORY="s3:https://s3.example.com/my-backups"
export RESTIC_PASSWORD="use-a-long-unique-passphrase-here"
export AWS_ACCESS_KEY_ID="YOURACCESSKEY"
export AWS_SECRET_ACCESS_KEY="YOURSECRETKEY"
export AWS_DEFAULT_REGION="us-east-1"

The repository path format is the little nuance to get right. Restic supports S3 natively, so you can point to an S3 endpoint over HTTPS with the bucket name at the end. Once those variables are set, initialize the repo:

restic init

It will ask for the password (or read it from the environment variable). Restic encrypts data and metadata before anything leaves your server, so your provider doesn’t see filenames or contents—just encrypted chunks.

Your first backup: keep it targeted, tag it for sanity

restic backup \
  --tag server:web01 \
  --tag type:daily \
  /etc /var/www /var/lib/mysql-dumps

I’m a big fan of tagging. When you’re scrolling through snapshots later, tags work like sticky notes: what’s this backup for, which server, and what role. If you’re backing up large datasets, consider excluding transient directories like cache folders. Restic’s deduplication loves stable, not constantly-changing garbage.
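One way to keep excludes consistent across runs is a shared exclude file that every backup command references. This is a small sketch; the paths are illustrative assumptions, not from any specific server, so adjust them to whatever actually churns on your machines.

```sh
#!/usr/bin/env sh
# Write one exclude file and point every restic run at it.
# These paths are examples of transient, regenerable data.
cat > /tmp/restic-excludes.txt <<'EOF'
/var/www/*/cache
/var/cache
/var/tmp
*.tmp
EOF

# Then reference it on each run, e.g.:
# restic backup --exclude-file /tmp/restic-excludes.txt /etc /var/www
```

Restic reads one pattern per line from the file, so adding a new noisy path later is a one-line change instead of editing every script.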

List snapshots and sleep better

restic snapshots

Seeing your snapshots in the repository is a little confidence boost. You can filter by tag, host, or path to get the view you need. Restic is predictable like that, which matters when you’re tired and just need answers.

Retention that doesn’t hoard your budget

Here’s a sane starter policy I use a lot: keep seven dailies, five weeklies, and twelve monthlies. That gives you recent history for fast rollbacks and a year of monthly safety for long-tail problems. The forget command removes snapshot references matching older intervals and optionally prunes unreferenced data:

restic forget \
  --keep-daily 7 \
  --keep-weekly 5 \
  --keep-monthly 12 \
  --prune

Run that after your backup. A quick “backup then forget/prune” pair is a nice daily cadence. You can adjust the numbers later; the policy is yours.
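The backup/forget/check cadence can be tied together in one small wrapper. This is a sketch under assumptions: the paths and tags are illustrative, and the repository and credentials are expected to come from the environment variables set earlier.

```bash
#!/usr/bin/env bash
# Nightly pair as one script (sketch): backup, then retention, then a check.
set -euo pipefail

nightly_backup() {
  restic backup --tag type:daily /etc /var/www /var/lib/mysql-dumps
  restic forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --prune
  restic check
}

# Guarded so the script is safe to review on machines without restic installed.
if command -v restic >/dev/null 2>&1; then
  nightly_backup
fi
```

Keeping all three steps in one unit means a failed backup stops the run before forget touches anything, which is exactly the order of failure you want.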

Quick restore drills that pay off in confidence

# list what's inside the latest snapshot
restic ls latest

# restore a folder to a temp directory
restic restore latest --include /var/www --target /tmp/restore-test

I treat restore drills like fire drills. Not daily, but regular enough that I’m not googling under stress. If you’ve ever restored a whole server in a hurry, you know the tiny things—permissions, owners, SELinux contexts—can nibble at your time. Practice once and your future self sends you a thank-you note.

Borg with an S3 Twist: SSH First, Then Replicate

Now, let’s talk Borg. In my playbook, Borg takes a slightly different route to the same happy place. We create a repository on a remote SSH target (a small hardened VPS works well), then run Borg backups over SSH, and finally replicate that repository into S3-compatible storage. The replication step is where object storage joins the party. I do it this way because Borg’s locking and consistency are excellent over SSH, and replication becomes a clean, one-directional push.

Initialize a Borg repository with encryption

export BORG_REPO="ssh://[email protected]:22/~/borg-repos/web01"
export BORG_PASSPHRASE="use-a-long-unique-passphrase-here"

borg init --encryption=repokey-blake2 "$BORG_REPO"

I like repokey-blake2 for a balance of speed and security. If you’ve got HSM/KMS requirements or more advanced key handling, plan that upfront. Borg will store the key material on the client by default, and you’ll want a secure offsite copy of that key for disaster scenarios.
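Borg can export that key material for escrow, which is the easy way to get the offsite copy mentioned above. A minimal sketch, assuming $BORG_REPO is exported as in the snippet above; the output paths are placeholders, and the exports should live somewhere separate from the passphrase itself.

```sh
#!/usr/bin/env sh
# Key escrow sketch: export the repo key so losing the client machine
# doesn't strand your archives.
KEYFILE="/root/web01-borg.key"   # illustrative destination
if command -v borg >/dev/null 2>&1; then
  borg key export "$BORG_REPO" "$KEYFILE"                 # machine-readable copy
  borg key export --paper "$BORG_REPO" "$KEYFILE.paper"   # printable, for a safe
fi
```

The --paper variant produces a typable text version, which is handy for the sealed-envelope-in-a-safe crowd.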

Create archives with compression and sanity

borg create --stats --compression zstd,10 \
    "$BORG_REPO"::"{hostname}-{now:%Y-%m-%d-%H%M}" \
    /etc /var/www /var/lib/mysql-dumps

Compression level is a trade-off. On beefy CPUs I don’t mind going higher, but most of the time zstd at a moderate level is perfect. Borg deduplicates beautifully, so after the first run, subsequent archives are fast and lean.

Prune old archives (same idea as Restic)

borg prune -v --list "$BORG_REPO" \
  --keep-daily 7 --keep-weekly 5 --keep-monthly 12

Same retention philosophy: keep recent short-term, keep some weeklies, keep a year of monthlies. It’s clean, predictable, and budget-friendly.

Replicate the repository into S3-compatible object storage

After pruning, you can replicate the whole repo to S3-compatible storage. I prefer a dedicated sync step over clever piping during backups because it keeps each responsibility clear. Make sure the repo is quiescent (no backup running) when you mirror it. Many teams use a tool to sync directory trees into S3. Run it from the VPS that stores the Borg repo, after prune and compact.

# Example idea (run on the VPS that holds the repo):
# 1) compact the repo to reclaim space
borg compact "$BORG_REPO"

# 2) then mirror the repo directory into S3-compatible storage
# (replace with your preferred sync tool/command)
# rclone sync /home/backup/borg-repos/web01 s3:my-bucket/web01

In my experience, decoupling backup creation from replication simplifies your logs and makes troubleshooting easier. If replication fails one night, you still have the SSH-side repository intact and can retry the mirror step without panic.

Versioning, Retention, and Encryption: The Real-World Balancing Act

Let’s translate buzzwords into something useful. Versioning, in the context of Restic and Borg, is simply the presence of multiple snapshots/archives over time. You don’t have one backup—you have a story of your data’s changes. That’s gold when you realize something went wrong three days ago, not five minutes ago. The more your changes matter, the tighter your retention around the present should be. Dailies save you from today’s typo, weeklies catch the slow-moving mistakes, and monthlies guard against the long-tail “oops we never noticed.”

But there’s a second kind of versioning: bucket-level versioning. In S3-compatible storage, you can turn on object versioning to protect against accidental or malicious deletion of objects in the bucket. This isn’t the same thing as Restic/Borg snapshots; it’s a safety net under your safety net. It can increase storage usage, so it’s something you do mindfully. If you want to go even further, look at S3 Object Lock for immutability—think of it as “write once, then hands off.” When ransomware is part of your threat model, immutability is a comforting lever.
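Enabling that bucket-level versioning is usually one API call. Here’s a sketch using the AWS CLI syntax; many S3-compatible vendors accept the same call via --endpoint-url, and the bucket name is a placeholder assumption.

```sh
#!/usr/bin/env sh
# Turn on object versioning for the backup bucket (S3 API).
BUCKET="my-backups"   # placeholder name
if command -v aws >/dev/null 2>&1; then
  aws s3api put-bucket-versioning \
    --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled
  # Confirm it took:
  aws s3api get-bucket-versioning --bucket "$BUCKET"
fi
```

Once enabled, a delete creates a delete marker instead of destroying the object, which is exactly the “safety net under your safety net” behavior described above.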

Now, encryption. Restic encrypts client-side with a repository password; the provider never sees plaintext. Borg encrypts too, with key material tied to your client. Both approaches mean you control the crypto. If your compliance team wants belts and suspenders, you can layer in server-side encryption on the bucket as well, but that’s defense-in-depth rather than a replacement for client-side encryption. The crucial part is key hygiene. The password or key material that decrypts your repository is a crown jewel. Treat it like one.

Retention is where cost meets peace of mind. Most of my clients are relieved when I show them a policy that’s easy to say aloud: keep a week of dailies, a month or so of weeklies, and a year of monthlies. If a system is especially volatile, we add a couple of hourly snapshots during business hours for a rolling day. The point isn’t precision; it’s having enough history to make “we need to go back to Tuesday at 11 AM” possible without paying for infinite history.

Small choices that help in big ways

One trick I’ve grown to love is tagging snapshots with the nature of the backup—daily, weekly, monthly—and letting a single forget/prune policy keep the right ones. Another is aligning backup windows with your quiet hours so you don’t compete with deployments or heavy batch jobs. And if you can, stagger backups between servers so your object storage traffic doesn’t spike all at once. Calm graphs make for calm nights.

Automation, Monitoring, Restore Drills, and Practical Guardrails

The best backup is the one that actually runs, every day, without you babysitting it. I’ve come to prefer systemd timers for servers already using systemd because they’re resilient and log-friendly, but cron is just fine if that’s your comfort zone. The pattern is simple: one unit that runs the backup, and one timer that schedules it. Then a second unit/timer pair later in the night that runs the forget/prune and kicks off a repository check. You don’t have to reinvent the automation wheel—simple and obvious beats clever and fragile.
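The unit/timer pair above looks roughly like this. A sketch only: the file paths, schedule, and EnvironmentFile location are assumptions you’d adapt to your own layout.

```ini
# /etc/systemd/system/restic-backup.service  (illustrative paths)
[Unit]
Description=Nightly restic backup

[Service]
Type=oneshot
# Holds RESTIC_REPOSITORY, RESTIC_PASSWORD, AWS keys; readable by root only.
EnvironmentFile=/etc/restic/env
ExecStart=/usr/bin/restic backup --tag type:daily /etc /var/www

# /etc/systemd/system/restic-backup.timer
[Unit]
Description=Run restic backup nightly

[Timer]
OnCalendar=*-*-* 02:30:00
Persistent=true   # catch up if the machine was off at 02:30

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now restic-backup.timer`, and the journal gives you per-run logs for free, which is the log-friendliness mentioned above.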

What to monitor (and what to ignore)

I like backup logs that say a few key things clearly: did a backup run, how much new data was added, did retention succeed, and did a check pass. Alerts should trigger on failure to run, failure to complete, or an unexpected surge in changed data. That last one can save you from quietly backing up a log file that somebody forgot to rotate. If you don’t have a monitoring stack yet, you might enjoy my friendly walkthrough on how to set up simple uptime and alerting with Prometheus, Grafana, and Uptime Kuma. It’s a great companion to backups because you catch problems while they’re small.
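A simple way to get “alert on failure to run” is a dead-man’s-switch: ping a health-check URL only after success, so silence is what raises the alert. This is a sketch; the URL is a placeholder for whatever push endpoint your monitoring exposes (Uptime Kuma’s push monitors work this way), and the backup paths are illustrative.

```bash
#!/usr/bin/env bash
# Heartbeat-on-success sketch: a missed ping means a missed (or failed) backup.
set -uo pipefail
PING_URL="https://status.example.com/api/push/backup-web01"   # placeholder

nightly_with_heartbeat() {
  if restic backup /etc /var/www; then
    curl -fsS -m 10 "$PING_URL" >/dev/null   # heartbeat only on success
  else
    echo "backup failed at $(date)" >&2
    return 1
  fi
}

# Run only where restic actually exists.
if command -v restic >/dev/null 2>&1; then
  nightly_with_heartbeat
fi
```

The nice property is that it also catches the failure mode where the job never starts at all, which a plain error log cannot.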

Restore drills that won’t wreck your day

Pick a small restore scenario and rehearse it. For Restic, restore a single directory to a temporary path and validate a couple of files. For Borg, restore an archive’s subset and make sure permissions and ownership look sane. Time how long it takes. Put a sticky note in your runbook that says, “In a real incident, budget 20% more time than the drill.” The goal isn’t perfection; it’s having muscle memory.

A quick word on performance and bandwidth

Backup throughput is a dance between CPU (for encryption and compression), disk I/O, and network bandwidth. If your server is pegged during working hours, schedule backups late at night. If you have strict egress costs, consider pushing first to a nearby VPS or cache location, then replicating to S3 during off-hours. Restic and Borg both let you exclude noisy paths; cut out caches, temporary files, and anything that regenerates automatically. Less noise equals faster, cheaper, cleaner backups.

Costs: better to be deliberate than surprised

Object storage is usually a great deal, but versioning and immutability can multiply storage if you’re not watching. Keep an eye on unintentional churn (like huge changed log files). For Restic on S3-compatible storage, bucket versioning can be your safety net, but don’t forget it has a price tag. For Borg replication, only mirror after prune/compact so you’re not paying to store garbage you’ve already decided to discard.

Security: least privilege and split secrets

Use credentials that can only reach the one bucket they need and only do the operations required by your tool. For Restic, your bucket user needs to put objects, list them, and delete when pruning. Don’t hand out admin keys if you can avoid it. Keep your Restic password or Borg key material somewhere that survives your worst day—an encrypted password manager, a secure vault, even a sealed envelope in a safe if you’re old-school. Consider enabling bucket versioning, MFA delete, or, where available, Object Lock immutability. Also, if you want a deeper dive into the philosophy behind Restic’s approach, the excellent restic documentation is like a calm friend who answers all your late-night questions.
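In IAM-style policy terms, that least-privilege user looks roughly like this. A sketch, assuming AWS policy syntax and a placeholder bucket name; other S3-compatible vendors have their own policy dialects, but the needed operations are the same.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-backups"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-backups/*"
    }
  ]
}
```

Note the split: list operations apply to the bucket itself, while object operations apply to keys inside it. Delete is only needed because pruning removes unreferenced packs.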

Reality checks and regular maintenance

Run restic check or borg check on a schedule. If a check fails, treat it with the same seriousness as a failed backup. Make prune/compact part of your habit so repositories don’t grow forever on the back of old data. Every few months, take a quick look at your retention policy and see if it still matches your business. Did you add a new directory that needs special handling? Did your database grow tenfold? These tiny reviews keep your plan faster, cheaper, and more trustworthy.
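For Restic, the scheduled check can also re-read a slice of the actual pack data each run, cycling through the whole repository over time. A sketch; the fraction is an illustrative choice, and borg check plays the equivalent role on the Borg side.

```sh
#!/usr/bin/env sh
# Periodic integrity check sketch: structure check plus a rotating
# read-back of one tenth of the stored data per run.
SUBSET="1/10"   # verify a different tenth each run
if command -v restic >/dev/null 2>&1; then
  restic check --read-data-subset="$SUBSET"
fi
```

A plain `restic check` only validates repository structure; the --read-data-subset variant is what actually catches silent corruption in the stored blobs without paying to download everything every night.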

Putting It All Together: A Day in the Life of Calm Backups

Let me paint you the rhythm that’s worked for me across a bunch of servers. Nightly, Restic runs on each server and pushes fresh, encrypted snapshots directly into S3-compatible storage. Immediately after, it forgets and prunes based on our retention rules. Then, a quick check confirms the repository is healthy. For the Borg setups, servers talk over SSH to a backup VPS, create fresh archives, prune, and compact, and then a scheduled job mirrors those repositories into object storage for offsite peace of mind. Monitoring pings me only if something’s off, not for every successful job. That’s important because signal beats noise every time.

The reason this approach feels so safe isn’t because it’s fancy. It’s because it’s simple enough to do every day without shortcuts, and robust enough to survive a bad week. That’s what you want from backups. Not wizardry—just boring reliability, plus a couple of well-chosen belts and suspenders like bucket versioning and immutability when appropriate. And if you ever have to restore under pressure, you’ll be grateful for those tags, those retention rules, and that one practice drill you almost skipped but didn’t.

A Few Final Tips I Wish Someone Had Told Me

First, name your buckets and repositories clearly—future you will thank past you at 3 AM. Second, document the backup paths, exclusions, and retention policy in a short README that lives next to your scripts. It sounds quaint, but it makes handoffs painless. Third, don’t mix credentials across environments. Production should have its own keys and its own buckets so staging can’t rewrite your lifeboat by mistake. And finally, when in doubt, start simple. A single Restic repository and a basic retention policy beats a grand design you never finish.

For deeper background on Borg itself, Borg’s official docs are thorough without being overwhelming. They’ll walk you through edge cases like partial file restores, patterns for excludes, and repo maintenance tasks you’ll probably only need once in a blue moon—but when you need them, you’ll be glad they’re there.

Wrap-Up: Calm, Boring, and Always There When You Need It

If you’ve ever been burned by a missing backup, you know the gift of a system that just works. Restic and Borg, paired with S3-compatible storage, give you that. You get client-side encryption that keeps your data private, versioning that turns “oh no” into “let’s just roll back,” and retention that keeps costs and clutter in line. The setup is a few thoughtful steps, the daily routine is a pair of predictable jobs, and the restoration is a practiced move rather than a frantic scramble.

My advice: pick one path to start—Restic to S3 if you want the cleanest on-ramp, or Borg via SSH plus a replication step if you prefer that ecosystem. Set a sane retention policy, tag your snapshots, and schedule a monthly restore drill that never gets canceled. Add bucket versioning or immutability if tamper-resistance is on your checklist. And hook everything into your monitoring so silence means success, not “we forgot to run it.” Do this, and you’ll sleep better. Hope this was helpful! And if you want more friendly deep dives like this, stick around—we’ll keep the coffee warm.

Frequently Asked Questions

Should I use Restic or Borg with S3-compatible storage?

Great question! If you want the fastest path to object storage, Restic is wonderfully straightforward and speaks S3 natively. If you love Borg’s ecosystem, back up over SSH to a small VPS and then replicate that repo to S3. Both are rock-solid; pick the workflow that feels easier to monitor and troubleshoot for your team.

Is bucket versioning the same thing as Restic/Borg snapshots?

Snapshots and bucket versioning serve different purposes. Snapshots give you historical versions of your data at the tool level. Bucket versioning protects the objects inside the bucket from accidental or malicious deletion. If you want a belt-and-suspenders approach—especially against ransomware—adding bucket versioning or immutability is a smart move.

How often should I practice restore drills?

I like a small drill monthly and a bigger one quarterly. For a quick check: list snapshots, restore a single directory to a temp path, verify files, and note how long it took. For the quarterly version, restore a larger subset that mimics a real incident. The goal is muscle memory so you don’t lose time to surprises when it counts.