{"id":1447,"date":"2025-11-06T21:40:09","date_gmt":"2025-11-06T18:40:09","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/offsite-backups-without-the-drama-restic-borg-to-s3-compatible-storage-versioning-encryption-retention\/"},"modified":"2025-11-06T21:40:09","modified_gmt":"2025-11-06T18:40:09","slug":"offsite-backups-without-the-drama-restic-borg-to-s3-compatible-storage-versioning-encryption-retention","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/offsite-backups-without-the-drama-restic-borg-to-s3-compatible-storage-versioning-encryption-retention\/","title":{"rendered":"Offsite Backups Without the Drama: Restic\/Borg to S3-Compatible Storage (Versioning, Encryption, Retention)"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>So there I was, sipping a very necessary cup of coffee, when a client pinged me with that sinking-message of the week: \u201cOur <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a> died, and I think the backups are on the same server.\u201d You know that pause your brain does, like it\u2019s buffering bad news? That was me. It\u2019s a classic trap: we all mean to set up offsite backups, but it\u2019s surprisingly easy to postpone. And then one quiet Tuesday becomes \u201cwhy is everything broken?\u201d Tuesday.<\/p>\n<p>Ever had that moment when you realize your backup plan is basically a hope and a prayer? I\u2019ve been there. It\u2019s why I\u2019ve become a fan of simple, boring, dependable systems. In this guide, I want to show you a calm, practical path: using Restic or Borg with S3-compatible storage, and getting versioning, encryption, and retention policies that don\u2019t require a PhD or a full-time ops team. I\u2019ll share what\u2019s worked for me, what tripped me up, and a few small choices that make a huge difference when you actually need to restore. Because that\u2019s what matters, right? 
Not the backup, the restore.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Contents<\/p><ul class=\"toc_list\"><li><a href=\"#Why_Offsite_S3_and_Why_It_Saves_Your_Bacon\"><span class=\"toc_number toc_depth_1\">1<\/span> Why Offsite S3 (and Why It Saves Your Bacon)<\/a><\/li><li><a href=\"#Restic_and_Borg_How_They_Think_and_Why_That_Matters\"><span class=\"toc_number toc_depth_1\">2<\/span> Restic and Borg: How They Think (and Why That Matters)<\/a><\/li><li><a href=\"#Setting_Up_Restic_with_S3-Compatible_Storage_The_Smooth_Ride\"><span class=\"toc_number toc_depth_1\">3<\/span> Setting Up Restic with S3-Compatible Storage (The Smooth Ride)<\/a><ul><li><a href=\"#Environment_variables_that_keep_your_setup_tidy\"><span class=\"toc_number toc_depth_2\">3.1<\/span> Environment variables that keep your setup tidy<\/a><\/li><li><a href=\"#Your_first_backup_keep_it_targeted_tag_it_for_sanity\"><span class=\"toc_number toc_depth_2\">3.2<\/span> Your first backup: keep it targeted, tag it for sanity<\/a><\/li><li><a href=\"#List_snapshots_and_sleep_better\"><span class=\"toc_number toc_depth_2\">3.3<\/span> List snapshots and sleep better<\/a><\/li><li><a href=\"#Retention_that_doesnt_hoard_your_budget\"><span class=\"toc_number toc_depth_2\">3.4<\/span> Retention that doesn\u2019t hoard your budget<\/a><\/li><li><a href=\"#Quick_restore_drills_that_pay_off_in_confidence\"><span class=\"toc_number toc_depth_2\">3.5<\/span> Quick restore drills that pay off in confidence<\/a><\/li><\/ul><\/li><li><a href=\"#Borg_with_an_S3_Twist_SSH_First_Then_Replicate\"><span class=\"toc_number toc_depth_1\">4<\/span> Borg with an S3 Twist: SSH First, Then Replicate<\/a><ul><li><a href=\"#Initialize_a_Borg_repository_with_encryption\"><span class=\"toc_number toc_depth_2\">4.1<\/span> Initialize a Borg repository with encryption<\/a><\/li><li><a href=\"#Create_archives_with_compression_and_sanity\"><span class=\"toc_number 
toc_depth_2\">4.2<\/span> Create archives with compression and sanity<\/a><\/li><li><a href=\"#Prune_old_archives_same_idea_as_Restic\"><span class=\"toc_number toc_depth_2\">4.3<\/span> Prune old archives (same idea as Restic)<\/a><\/li><li><a href=\"#Replicate_the_repository_into_S3-compatible_object_storage\"><span class=\"toc_number toc_depth_2\">4.4<\/span> Replicate the repository into S3-compatible object storage<\/a><\/li><\/ul><\/li><li><a href=\"#Versioning_Retention_and_Encryption_The_Real-World_Balancing_Act\"><span class=\"toc_number toc_depth_1\">5<\/span> Versioning, Retention, and Encryption: The Real-World Balancing Act<\/a><ul><li><a href=\"#Small_choices_that_help_in_big_ways\"><span class=\"toc_number toc_depth_2\">5.1<\/span> Small choices that help in big ways<\/a><\/li><\/ul><\/li><li><a href=\"#Automation_Monitoring_Restore_Drills_and_Practical_Guardrails\"><span class=\"toc_number toc_depth_1\">6<\/span> Automation, Monitoring, Restore Drills, and Practical Guardrails<\/a><ul><li><a href=\"#What_to_monitor_and_what_to_ignore\"><span class=\"toc_number toc_depth_2\">6.1<\/span> What to monitor (and what to ignore)<\/a><\/li><li><a href=\"#Restore_drills_that_wont_wreck_your_day\"><span class=\"toc_number toc_depth_2\">6.2<\/span> Restore drills that won\u2019t wreck your day<\/a><\/li><li><a href=\"#A_quick_word_on_performance_and_bandwidth\"><span class=\"toc_number toc_depth_2\">6.3<\/span> A quick word on performance and bandwidth<\/a><\/li><li><a href=\"#Costs_better_to_be_deliberate_than_surprised\"><span class=\"toc_number toc_depth_2\">6.4<\/span> Costs: better to be deliberate than surprised<\/a><\/li><li><a href=\"#Security_least_privilege_and_split_secrets\"><span class=\"toc_number toc_depth_2\">6.5<\/span> Security: least privilege and split secrets<\/a><\/li><li><a href=\"#Reality_checks_and_regular_maintenance\"><span class=\"toc_number toc_depth_2\">6.6<\/span> Reality checks and regular 
maintenance<\/a><\/li><\/ul><\/li><li><a href=\"#Putting_It_All_Together_A_Day_in_the_Life_of_Calm_Backups\"><span class=\"toc_number toc_depth_1\">7<\/span> Putting It All Together: A Day in the Life of Calm Backups<\/a><\/li><li><a href=\"#A_Few_Final_Tips_I_Wish_Someone_Had_Told_Me\"><span class=\"toc_number toc_depth_1\">8<\/span> A Few Final Tips I Wish Someone Had Told Me<\/a><\/li><li><a href=\"#Wrap-Up_Calm_Boring_and_Always_There_When_You_Need_It\"><span class=\"toc_number toc_depth_1\">9<\/span> Wrap-Up: Calm, Boring, and Always There When You Need It<\/a><\/li><\/ul><\/div>\n<h2 id=\"section-1\"><span id=\"Why_Offsite_S3_and_Why_It_Saves_Your_Bacon\">Why Offsite S3 (and Why It Saves Your Bacon)<\/span><\/h2>\n<p>I remember my first \u201coh thank goodness\u201d restore like it was yesterday. The client had a database corruption after a rushed plugin update, and the local snapshots were all toast because they lived on the same disk array that was failing. What saved us wasn\u2019t a complicated enterprise system; it was a quiet little Restic repo tucked away in an S3-compatible bucket, with a retention policy we had set and forgotten about. We restored last night\u2019s clean snapshot, they bought me coffee, and life moved on.<\/p>\n<p>Here\u2019s the thing: offsite storage breaks the blast radius. If your server melts, if your provider has a hiccup, or if ransomware worms its way in, you\u2019ve got a copy somewhere else. And if that \u201csomewhere else\u201d is object storage via an S3-compatible endpoint, you gain a few extra goodies: effortless scalability, robust durability, and an interface most tools already know how to speak. There\u2019s also the cost angle. 
With a smart retention strategy\u2014keeping dailies, weeklies, and monthlies\u2014you\u2019re not paying for a hoarder\u2019s attic full of stale snapshots.<\/p>\n<p>When I say \u201cS3-compatible,\u201d I mean anything that speaks the S3 API: major cloud providers, specialty object storage vendors, and even self-hosted solutions. The beauty of this is you can pick what fits your budget and geography, and Restic (and, with a small twist, Borg) will hum along without caring who happens to store the bytes. That flexibility is calming\u2014no lock-in, no weird agent running on your servers, just a tool and a bucket.<\/p>\n<h2 id=\"section-2\"><span id=\"Restic_and_Borg_How_They_Think_and_Why_That_Matters\">Restic and Borg: How They Think (and Why That Matters)<\/span><\/h2>\n<p>Both Restic and Borg came out of the same \u201cI\u2019m tired of backup pain\u201d lineage. They\u2019re deduplicating, they\u2019re encrypted-by-default (when configured as intended), and they treat backups as snapshots you can manage over time. In my experience, the success of your backups often comes down to how intuitive the snapshot model feels while you\u2019re under pressure. Restic calls them snapshots; Borg calls them archives. The vibe is the same: a point-in-time view of your data that you can restore later.<\/p>\n<p>Restic really shines when it comes to object storage. It talks S3 natively, sets up quickly, and makes it easy to plug in retention rules. Borg is a beast at efficient, secure backups too, but it historically leans on SSH to talk to a remote repository. That\u2019s not a limitation\u2014just a workflow difference. With Borg, I usually back up to a small VPS via SSH and then replicate that repo into an S3 bucket. It\u2019s one more hop, but it gives you full control and strong consistency. Some folks mount S3 via FUSE and write directly with Borg. 
Personally, I\u2019ve had mixed results there under load, and I prefer the SSH-first, replicate-second pattern for reliability.<\/p>\n<p>Think of it like this: Restic hands you the keys to S3 and says \u201cgo for it.\u201d Borg hands you a rock-solid vault and says \u201cplace this vault where it\u2019s safe\u201d (an SSH server), then you optionally mirror that vault to S3 for extra resilience. Both are smart choices. Your preference might come down to the ecosystem you like, the debugging experience you prefer, and how many moving parts you\u2019re comfortable managing.<\/p>\n<h2 id=\"section-3\"><span id=\"Setting_Up_Restic_with_S3-Compatible_Storage_The_Smooth_Ride\">Setting Up Restic with S3-Compatible Storage (The Smooth Ride)<\/span><\/h2>\n<p>Let me walk you through how I typically set up Restic. The steps are comfortably boring. You pick an S3-compatible provider, create a bucket, and get an access key and secret. Then you choose a strong repository password for Restic (this is your client-side encryption passphrase), and you wire the whole thing together with a few environment variables. I like environment variables because they keep scripts clean and reduce typos when it matters.<\/p>\n<h3><span id=\"Environment_variables_that_keep_your_setup_tidy\">Environment variables that keep your setup tidy<\/span><\/h3>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">export RESTIC_REPOSITORY=&quot;s3:https:\/\/s3.example.com\/my-backups&quot;\nexport RESTIC_PASSWORD=&quot;use-a-long-unique-passphrase-here&quot;\nexport AWS_ACCESS_KEY_ID=&quot;YOURACCESSKEY&quot;\nexport AWS_SECRET_ACCESS_KEY=&quot;YOURSECRETKEY&quot;\nexport AWS_DEFAULT_REGION=&quot;us-east-1&quot;<\/code><\/pre>\n<p>The repository path format is the little nuance to get right. Restic supports S3 natively, so you can point to an S3 endpoint over HTTPS with the bucket name at the end. 
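<\/p>\n<p>A couple of illustrative repository strings (the endpoint and bucket names here are placeholders, not recommendations):<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># bucket at the end of the HTTPS endpoint\nexport RESTIC_REPOSITORY=&quot;s3:https:\/\/s3.example.com\/my-backups&quot;\n\n# or a prefix inside the bucket, handy for one repo per server\nexport RESTIC_REPOSITORY=&quot;s3:https:\/\/s3.example.com\/my-backups\/web01&quot;<\/code><\/pre>\n<p>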
Once those variables are set, initialize the repo:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">restic init<\/code><\/pre>\n<p>It will ask for the password (or read it from the environment variable). Restic encrypts data and metadata before anything leaves your server, so your provider doesn\u2019t see filenames or contents\u2014just encrypted chunks.<\/p>\n<h3><span id=\"Your_first_backup_keep_it_targeted_tag_it_for_sanity\">Your first backup: keep it targeted, tag it for sanity<\/span><\/h3>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">restic backup \\\n  --tag server:web01 \\\n  --tag type:daily \\\n  \/etc \/var\/www \/var\/lib\/mysql-dumps<\/code><\/pre>\n<p>I\u2019m a big fan of tagging. When you\u2019re scrolling through snapshots later, tags work like sticky notes: what\u2019s this backup for, which server, and what role. If you\u2019re backing up large datasets, consider excluding transient directories like cache folders. Restic\u2019s deduplication loves stable data, not constantly churning garbage.<\/p>\n<h3><span id=\"List_snapshots_and_sleep_better\">List snapshots and sleep better<\/span><\/h3>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">restic snapshots<\/code><\/pre>\n<p>Seeing your snapshots in the repository is a little confidence boost. You can filter by tag, host, or path to get the view you need. Restic is predictable like that, which matters when you\u2019re tired and just need answers.<\/p>\n<h3><span id=\"Retention_that_doesnt_hoard_your_budget\">Retention that doesn\u2019t hoard your budget<\/span><\/h3>\n<p>Here\u2019s a sane starter policy I use a lot: keep seven dailies, five weeklies, and twelve monthlies. That gives you recent history for fast rollbacks and a year of monthly safety for long-tail problems. 
The <code>forget<\/code> command drops snapshots that fall outside your keep policy and, with <code>--prune<\/code>, removes the data only they referenced:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">restic forget \\\n  --keep-daily 7 \\\n  --keep-weekly 5 \\\n  --keep-monthly 12 \\\n  --prune<\/code><\/pre>\n<p>Run that after your backup. A quick \u201cbackup then forget\/prune\u201d pair is a nice daily cadence. You can adjust the numbers later; the policy is yours.<\/p>\n<h3><span id=\"Quick_restore_drills_that_pay_off_in_confidence\">Quick restore drills that pay off in confidence<\/span><\/h3>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># list what's inside the latest snapshot\nrestic ls latest\n\n# restore a folder to a temp directory\nrestic restore latest --include \/var\/www --target \/tmp\/restore-test<\/code><\/pre>\n<p>I treat restore drills like fire drills. Not daily, but regular enough that I\u2019m not googling under stress. If you\u2019ve ever restored a whole server in a hurry, you know the tiny things\u2014permissions, owners, SELinux contexts\u2014can nibble at your time. Practice once and your future self sends you a thank-you note.<\/p>\n<h2 id=\"section-4\"><span id=\"Borg_with_an_S3_Twist_SSH_First_Then_Replicate\">Borg with an S3 Twist: SSH First, Then Replicate<\/span><\/h2>\n<p>Now, let\u2019s talk Borg. In my playbook, Borg takes a slightly different route to the same happy place. We create a repository on a remote SSH target (a small hardened VPS works well), then run Borg backups over SSH, and finally replicate that repository into S3-compatible storage. The replication step is where object storage joins the party. 
I do it this way because Borg\u2019s locking and consistency are excellent over SSH, and replication becomes a clean, one-directional push.<\/p>\n<h3><span id=\"Initialize_a_Borg_repository_with_encryption\">Initialize a Borg repository with encryption<\/span><\/h3>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">export BORG_REPO=&quot;ssh:\/\/backup@backup-vps.example.com:22\/~\/borg-repos\/web01&quot;\nexport BORG_PASSPHRASE=&quot;use-a-long-unique-passphrase-here&quot;\n\nborg init --encryption=repokey-blake2 &quot;$BORG_REPO&quot;<\/code><\/pre>\n<p>I like <strong>repokey-blake2<\/strong> for a balance of speed and security. If you\u2019ve got HSM\/KMS requirements or more advanced key handling, plan that upfront. With the <code>repokey<\/code> modes, Borg stores the key inside the repository itself, protected by your passphrase, so export a copy with <code>borg key export<\/code> and keep it somewhere safe offsite for disaster scenarios.<\/p>\n<h3><span id=\"Create_archives_with_compression_and_sanity\">Create archives with compression and sanity<\/span><\/h3>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">borg create --stats --compression zstd,10 \\\n    &quot;$BORG_REPO&quot;::&quot;{hostname}-{now:%Y-%m-%d-%H%M}&quot; \\\n    \/etc \/var\/www \/var\/lib\/mysql-dumps<\/code><\/pre>\n<p>Compression level is a trade-off. On beefy CPUs I don\u2019t mind going higher, but most of the time zstd at a moderate level is perfect. Borg deduplicates beautifully, so after the first run, subsequent archives are fast and lean.<\/p>\n<h3><span id=\"Prune_old_archives_same_idea_as_Restic\">Prune old archives (same idea as Restic)<\/span><\/h3>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">borg prune -v --list &quot;$BORG_REPO&quot; \\\n  --keep-daily 7 --keep-weekly 5 --keep-monthly 12<\/code><\/pre>\n<p>Same retention philosophy: keep recent short-term, keep some weeklies, keep a year of monthlies. 
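<\/p>\n<p>Before committing to a prune, a dry run shows exactly which archives the policy would keep or remove, and a quick list confirms the result afterward:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># preview the policy without deleting anything\nborg prune --dry-run --list &quot;$BORG_REPO&quot; \\\n  --keep-daily 7 --keep-weekly 5 --keep-monthly 12\n\n# then see which archives remain\nborg list &quot;$BORG_REPO&quot;<\/code><\/pre>\n<p>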
It\u2019s clean, predictable, and budget-friendly.<\/p>\n<h3><span id=\"Replicate_the_repository_into_S3-compatible_object_storage\">Replicate the repository into S3-compatible object storage<\/span><\/h3>\n<p>After pruning, you can replicate the whole repo to S3-compatible storage. I prefer a dedicated sync step over clever piping during backups because it keeps each responsibility clear. Make sure the repo is quiescent (no backup running) when you mirror it. Many teams use a tool to sync directory trees into S3. Run it from the VPS that stores the Borg repo, <strong>after<\/strong> prune and compact.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Example idea (run on the VPS that holds the repo):\n# 1) compact the repo to reclaim space\nborg compact &quot;$BORG_REPO&quot;\n\n# 2) then mirror the repo directory into S3-compatible storage\n# (replace with your preferred sync tool\/command)\n# rclone sync \/home\/backup\/borg-repos\/web01 s3:my-bucket\/web01<\/code><\/pre>\n<p>In my experience, decoupling backup creation from replication simplifies your logs and makes troubleshooting easier. If replication fails one night, you still have the SSH-side repository intact and can retry the mirror step without panic.<\/p>\n<h2 id=\"section-5\"><span id=\"Versioning_Retention_and_Encryption_The_Real-World_Balancing_Act\">Versioning, Retention, and Encryption: The Real-World Balancing Act<\/span><\/h2>\n<p>Let\u2019s translate buzzwords into something useful. Versioning, in the context of Restic and Borg, is simply the presence of multiple snapshots\/archives over time. You don\u2019t have one backup\u2014you have a story of your data\u2019s changes. That\u2019s gold when you realize something went wrong three days ago, not five minutes ago. The more your changes matter, the tighter your retention around the present should be. 
Dailies save you from today\u2019s typo, weeklies catch the slow-moving mistakes, and monthlies guard against the long-tail \u201coops we never noticed.\u201d<\/p>\n<p>But there\u2019s a second kind of versioning: <strong>bucket-level versioning<\/strong>. In S3-compatible storage, you can turn on object versioning to protect against accidental or malicious deletion of objects in the bucket. This isn\u2019t the same thing as Restic\/Borg snapshots; it\u2019s a safety net under your safety net. It can increase storage usage, so it\u2019s something you do mindfully. If you want to go even further, look at <a href=\"https:\/\/docs.aws.amazon.com\/AmazonS3\/latest\/userguide\/object-lock.html\" rel=\"nofollow noopener\" target=\"_blank\">S3 Object Lock for immutability<\/a>\u2014think of it as \u201cwrite once, then hands off.\u201d When ransomware is part of your threat model, immutability is a comforting lever.<\/p>\n<p>Now, encryption. Restic encrypts client-side with a repository password; the provider never sees plaintext. Borg encrypts too, with key material tied to your client. Both approaches mean you control the crypto. If your compliance team wants belts and suspenders, you can layer in server-side encryption on the bucket as well, but that\u2019s defense-in-depth rather than a replacement for client-side encryption. The crucial part is key hygiene. The password or key material that decrypts your repository is a crown jewel. Treat it like one.<\/p>\n<p>Retention is where cost meets peace of mind. Most of my clients are relieved when I show them a policy that\u2019s easy to say aloud: keep a week of dailies, a month or so of weeklies, and a year of monthlies. If a system is especially volatile, we add a couple of hourly snapshots during business hours for a rolling day. 
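<\/p>\n<p>If you do add those business-hours snapshots, the same policy extends naturally; the numbers below are just a sketch to adapt:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">restic forget \\\n  --keep-hourly 8 \\\n  --keep-daily 7 \\\n  --keep-weekly 5 \\\n  --keep-monthly 12 \\\n  --prune<\/code><\/pre>\n<p>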
The point isn\u2019t precision; it\u2019s having enough history to make \u201cwe need to go back to Tuesday at 11 AM\u201d possible without paying for infinite history.<\/p>\n<h3><span id=\"Small_choices_that_help_in_big_ways\">Small choices that help in big ways<\/span><\/h3>\n<p>One trick I\u2019ve grown to love is tagging snapshots with the nature of the backup\u2014daily, weekly, monthly\u2014and letting a single forget\/prune policy keep the right ones. Another is aligning backup windows with your quiet hours so you don\u2019t compete with deployments or heavy batch jobs. And if you can, stagger backups between servers so your object storage traffic doesn\u2019t spike all at once. Calm graphs make for calm nights.<\/p>\n<h2 id=\"section-6\"><span id=\"Automation_Monitoring_Restore_Drills_and_Practical_Guardrails\">Automation, Monitoring, Restore Drills, and Practical Guardrails<\/span><\/h2>\n<p>The best backup is the one that actually runs, every day, without you babysitting it. I\u2019ve come to prefer systemd timers for servers already using systemd because they\u2019re resilient and log-friendly, but cron is just fine if that\u2019s your comfort zone. The pattern is simple: one unit that runs the backup, and one timer that schedules it. Then a second unit\/timer pair later in the night that runs the forget\/prune and kicks off a repository check. You don\u2019t have to reinvent the automation wheel\u2014simple and obvious beats clever and fragile.<\/p>\n<h3><span id=\"What_to_monitor_and_what_to_ignore\">What to monitor (and what to ignore)<\/span><\/h3>\n<p>I like backup logs that say a few key things clearly: did a backup run, how much new data was added, did retention succeed, and did a check pass. Alerts should trigger on failure to run, failure to complete, or an unexpected surge in changed data. That last one can save you from quietly backing up a log file that somebody forgot to rotate. 
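<\/p>\n<p>To make the schedule concrete, here\u2019s a minimal sketch of the unit-plus-timer pairing described above; the unit names, paths, and times are hypothetical, so adapt them to your layout:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># \/etc\/systemd\/system\/restic-backup.service (hypothetical path)\n[Unit]\nDescription=Nightly restic backup\n\n[Service]\nType=oneshot\nEnvironmentFile=\/root\/.restic-env\nExecStart=\/usr\/bin\/restic backup --tag type:daily \/etc \/var\/www\n\n# \/etc\/systemd\/system\/restic-backup.timer (hypothetical path)\n[Unit]\nDescription=Run the nightly restic backup\n\n[Timer]\nOnCalendar=*-*-* 02:30:00\nPersistent=true\n\n[Install]\nWantedBy=timers.target<\/code><\/pre>\n<p>Enable it with <code>systemctl enable --now restic-backup.timer<\/code>, and mirror the pattern for the later forget\/prune and check job.<\/p>\n<p>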
If you don\u2019t have a monitoring stack yet, you might enjoy my friendly walkthrough on how to <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">set up simple uptime and alerting with Prometheus, Grafana, and Uptime Kuma<\/a>. It\u2019s a great companion to backups because you catch problems while they\u2019re small.<\/p>\n<h3><span id=\"Restore_drills_that_wont_wreck_your_day\">Restore drills that won\u2019t wreck your day<\/span><\/h3>\n<p>Pick a small restore scenario and rehearse it. For Restic, restore a single directory to a temporary path and validate a couple of files. For Borg, restore an archive\u2019s subset and make sure permissions and ownership look sane. Time how long it takes. Put a sticky note in your runbook that says, \u201cIn a real incident, budget 20% more time than the drill.\u201d The goal isn\u2019t perfection; it\u2019s having muscle memory.<\/p>\n<h3><span id=\"A_quick_word_on_performance_and_bandwidth\">A quick word on performance and bandwidth<\/span><\/h3>\n<p>Backup throughput is a dance between CPU (for encryption and compression), disk I\/O, and network bandwidth. If your server is pegged during working hours, schedule backups late at night. If you have strict egress costs, consider pushing first to a nearby VPS or cache location, then replicating to S3 during off-hours. Restic and Borg both let you exclude noisy paths; cut out caches, temporary files, and anything that regenerates automatically. Less noise equals faster, cheaper, cleaner backups.<\/p>\n<h3><span id=\"Costs_better_to_be_deliberate_than_surprised\">Costs: better to be deliberate than surprised<\/span><\/h3>\n<p>Object storage is usually a great deal, but versioning and immutability can multiply storage if you\u2019re not watching. Keep an eye on unintentional churn (like huge changed log files). 
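<\/p>\n<p>One cheap way to tame that churn is an exclude list; the paths below are only examples of the kind of thing worth skipping:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">restic backup \\\n  --exclude &quot;\/var\/www\/**\/cache&quot; \\\n  --exclude &quot;*.tmp&quot; \\\n  --exclude-file \/etc\/restic-excludes.txt \\\n  \/etc \/var\/www<\/code><\/pre>\n<p>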
For Restic on S3-compatible storage, bucket versioning can be your safety net, but don\u2019t forget it has a price tag. For Borg replication, only mirror after prune\/compact so you\u2019re not paying to store garbage you\u2019ve already decided to discard.<\/p>\n<h3><span id=\"Security_least_privilege_and_split_secrets\">Security: least privilege and split secrets<\/span><\/h3>\n<p>Use credentials that can only reach the one bucket they need and only do the operations required by your tool. For Restic, your bucket user needs to put objects, list them, and delete when pruning. Don\u2019t hand out admin keys if you can avoid it. Keep your Restic password or Borg key material somewhere that survives your worst day\u2014an encrypted password manager, a secure vault, even a sealed envelope in a safe if you\u2019re old-school. Consider enabling MFA delete, bucket versioning, or\u2014even better where available\u2014immutability where it makes sense. Also, if you want a deeper dive into the philosophy behind Restic\u2019s approach, the <a href=\"https:\/\/restic.readthedocs.io\/en\/stable\/\" rel=\"nofollow noopener\" target=\"_blank\">excellent restic documentation<\/a> is like a calm friend who answers all your late-night questions.<\/p>\n<h3><span id=\"Reality_checks_and_regular_maintenance\">Reality checks and regular maintenance<\/span><\/h3>\n<p>Run <code>restic check<\/code> or <code>borg check<\/code> on a schedule. If a check fails, treat it with the same seriousness as a failed backup. Make prune\/compact part of your habit so repositories don\u2019t grow forever on the back of old data. Every few months, take a quick look at your retention policy and see if it still matches your business. Did you add a new directory that needs special handling? Did your database grow tenfold? 
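<\/p>\n<p>For the checks themselves, a light-touch routine goes a long way; the 5% data sample below is just one reasonable dial:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># fast structural check of the restic repo\nrestic check\n\n# occasionally verify a random sample of the actual data\nrestic check --read-data-subset=5%\n\n# the Borg equivalent, run against the SSH-side repo\nborg check -v &quot;$BORG_REPO&quot;<\/code><\/pre>\n<p>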
These tiny reviews keep your plan faster, cheaper, and more trustworthy.<\/p>\n<h2 id=\"section-7\"><span id=\"Putting_It_All_Together_A_Day_in_the_Life_of_Calm_Backups\">Putting It All Together: A Day in the Life of Calm Backups<\/span><\/h2>\n<p>Let me paint you the rhythm that\u2019s worked for me across a bunch of servers. Nightly, Restic runs on each server and pushes fresh, encrypted snapshots directly into S3-compatible storage. Immediately after, it forgets and prunes based on our retention rules. Then, a quick check confirms the repository is healthy. For the Borg setups, servers talk over SSH to a backup VPS, create fresh archives, prune, and compact, and then a scheduled job mirrors those repositories into object storage for offsite peace of mind. Monitoring pings me only if something\u2019s off, not for every successful job. That\u2019s important because signal beats noise every time.<\/p>\n<p>The reason this approach feels so safe isn\u2019t because it\u2019s fancy. It\u2019s because it\u2019s simple enough to do every day without shortcuts, and robust enough to survive a bad week. That\u2019s what you want from backups. Not wizardry\u2014just boring reliability, plus a couple of well-chosen belts and suspenders like bucket versioning and immutability when appropriate. And if you ever have to restore under pressure, you\u2019ll be grateful for those tags, those retention rules, and that one practice drill you almost skipped but didn\u2019t.<\/p>\n<h2 id=\"section-8\"><span id=\"A_Few_Final_Tips_I_Wish_Someone_Had_Told_Me\">A Few Final Tips I Wish Someone Had Told Me<\/span><\/h2>\n<p>First, name your buckets and repositories clearly\u2014future you will thank past you at 3 AM. Second, document the backup paths, exclusions, and retention policy in a short README that lives next to your scripts. It sounds quaint, but it makes handoffs painless. Third, don\u2019t mix credentials across environments. 
Production should have its own keys and its own buckets so staging can\u2019t rewrite your lifeboat by mistake. And finally, when in doubt, start simple. A single Restic repository and a basic retention policy beats a grand design you never finish.<\/p>\n<p>For deeper background on Borg itself, <a href=\"https:\/\/borgbackup.readthedocs.io\/en\/stable\/\" rel=\"nofollow noopener\" target=\"_blank\">Borg\u2019s official docs<\/a> are thorough without being overwhelming. They\u2019ll walk you through edge cases like partial file restores, patterns for excludes, and repo maintenance tasks you\u2019ll probably only need once in a blue moon\u2014but when you need them, you\u2019ll be glad they\u2019re there.<\/p>\n<h2 id=\"section-9\"><span id=\"Wrap-Up_Calm_Boring_and_Always_There_When_You_Need_It\">Wrap-Up: Calm, Boring, and Always There When You Need It<\/span><\/h2>\n<p>If you\u2019ve ever been burned by a missing backup, you know the gift of a system that just works. Restic and Borg, paired with S3-compatible storage, give you that. You get client-side encryption that keeps your data private, versioning that turns \u201coh no\u201d into \u201clet\u2019s just roll back,\u201d and retention that keeps costs and clutter in line. The setup is a few thoughtful steps, the daily routine is a pair of predictable jobs, and the restoration is a practiced move rather than a frantic scramble.<\/p>\n<p>My advice: pick one path to start\u2014Restic to S3 if you want the cleanest on-ramp, or Borg via SSH plus a replication step if you prefer that ecosystem. Set a sane retention policy, tag your snapshots, and schedule a monthly restore drill that never gets canceled. Add bucket versioning or immutability if tamper-resistance is on your checklist. And hook everything into your monitoring so silence means success, not \u201cwe forgot to run it.\u201d Do this, and you\u2019ll sleep better. Hope this was helpful! 
And if you want more friendly deep dives like this, stick around\u2014we\u2019ll keep the coffee warm.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>So there I was, sipping a very necessary cup of coffee, when a client pinged me with that sinking-message of the week: \u201cOur VPS died, and I think the backups are on the same server.\u201d You know that pause your brain does, like it\u2019s buffering bad news? That was me. It\u2019s a classic trap: we [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1448,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1447","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1447","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=1447"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1447\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/1448"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=1447"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=1447"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=1447"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}