mysqldump or XtraBackup? The Friendly Guide to MySQL/MariaDB Backups and Point‑in‑Time Recovery

It always starts the same way. A quiet afternoon, a harmless schema change, a quick “this will only take a second” moment… and then someone pings you with that message no one wants to read: “Checkout is broken.” The blood runs cold, coffee suddenly tastes like regret, and you start hearing the clock tick louder. I’ve been there—more than once—and the only reason those stories didn’t end with a very long weekend was simple: good backups and a calm plan for getting back to the exact second before things went wrong.

Today I want to talk to you like a friend who’s done this in the messy real world—no ivory tower theory. We’ll walk through MySQL and MariaDB backup strategies that actually hold up when the pressure’s on. We’ll explore where mysqldump feels like a warm blanket, when XtraBackup (and its MariaDB sibling, Mariabackup) is the right kind of heavy-duty tool, and how Point‑in‑Time Recovery can turn a disaster into a short coffee break. No stiff checklists, no buzzword bingo—just practical stories, simple analogies, and things you can try this week.

The Oh‑No Moment: Why Backups Fail When You Need Them Most

I remember a client who swore their backups were perfect. “We run them every night,” they said. But here’s the twist: the job ran, yes, but the output was an empty file because a permissions change broke access a week earlier. No alerts. No warnings. Just a false sense of safety. It’s not that they didn’t care—it’s that backups can look fine until the day they matter, and then the cracks show.

Here’s the thing about databases: they’re always moving. While you’re backing up, new rows are being written, transactions are mid-flight, and caches are reshuffling. If your method doesn’t capture a consistent picture, you end up with something that looks like a photo but has pieces from two different moments stitched together. Sometimes that’s fine. Other times, that’s the difference between a clean restore and a headache you can’t aspirin away.

In my experience, backup pain shows up in three sneaky ways. First, consistency—grabbing a snapshot that reflects a single point in time. Second, performance—running backups without slowing down production to a crawl. Third, recovery—getting data back quickly and precisely, not “somewhere near” last Tuesday. With MySQL and MariaDB, the usual suspects are mysqldump for logical backups and XtraBackup/Mariabackup for physical, hot backups. Each has charm. Each has quirks. Let’s get friendly with both.

mysqldump: The Trusty Swiss Army Knife That’s Better Than You Think

mysqldump is like that dependable friend who shows up on moving day with a pickup truck. Not fancy, not the fastest, but you can count on it. It creates a logical export—basically, a set of SQL statements that can rebuild your schema and data from scratch. It’s portable, human‑readable, and great for migrations, small to medium databases, or when you need a quick copy to poke at locally.

When people tell me mysqldump is slow, what they usually mean is: they ran it without the right options. If your tables are InnoDB (and these days, they usually are), you want --single-transaction. That quietly opens a consistent snapshot and avoids locking your tables for the duration, so your app keeps breathing. If you have triggers, routines, events, or views, include the flags that capture them too. I’ve been burned by missing an event scheduler once; never again.
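In command form, that advice is short. A minimal sketch, assuming an InnoDB schema, a database named shop (a placeholder), and credentials kept in ~/.my.cnf so they never appear on the command line:

```shell
#!/bin/sh
# Consistent, non-blocking logical backup of an InnoDB database.
# Hypothetical names: database "shop", credentials in ~/.my.cnf.
DB=shop
OUT="${DB}_$(date +%F).sql"

# --single-transaction : consistent InnoDB snapshot, no table locks
# --routines/--triggers/--events : capture stored objects, not just rows
# --hex-blob : dump binary columns safely as hex literals
if command -v mysqldump >/dev/null 2>&1; then
    mysqldump --single-transaction \
              --routines --triggers --events \
              --hex-blob "$DB" > "$OUT"
else
    echo "mysqldump not found here; would have written $OUT"
fi
```

The date-stamped file name makes it obvious at a glance which night a dump came from, which matters more than you'd think at 3 a.m.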

There’s also the question of size. Logical dumps are text. They compress like a dream. Piping mysqldump through gzip or zstd turns gigabytes into something friendlier, and restoring is just as simple—unzip and load. Yes, the restore speed depends on how fast MySQL can execute those INSERTs, and yes, that can be slower than dropping a physical backup into place. But for a lot of teams, the tradeoff is perfectly fine, especially for nightly backups or developer snapshots.
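The compressed pipeline is one line plus its mirror image. Same placeholder database name as before; zstd is my pick here, but gzip slots in identically:

```shell
#!/bin/sh
# Compress the dump in flight so nothing uncompressed ever touches disk.
# Hypothetical database name: "shop".
DB=shop
OUT="${DB}_$(date +%F).sql.zst"

if command -v mysqldump >/dev/null 2>&1 && command -v zstd >/dev/null 2>&1; then
    # Backup: dump and compress in one stream.
    mysqldump --single-transaction "$DB" | zstd -q > "$OUT"
    # Restore is the mirror image (run against an empty target database):
    #   zstd -dc "$OUT" | mysql "$DB"
else
    echo "mysqldump/zstd not found; pipeline shape: mysqldump | zstd > $OUT"
fi
```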

Where mysqldump can trip you up is when you have massive data, very high write rates, or storage engines that don’t play nicely with non‑blocking snapshots. InnoDB is your friend here. MyISAM, less so—it tends to lock in ways that make you unpopular with your colleagues. And if you’re aiming for point‑in‑time recovery, you want the dump to capture binary log coordinates (we’ll talk about that soon) so you can replay changes right up to the moment before the problem.

Bottom line? If your database fits comfortably in a logical dump window and you value portability, mysqldump is still an absolute workhorse. It’s not glamorous, but it’s earned its keep a thousand times over in my world.

XtraBackup (and Mariabackup): Hot, Physical Backups That Don’t Flinch Under Load

Now, when traffic’s heavy and you can’t afford to poke the database too hard, Percona XtraBackup steps in like a pro mover with dollies and straps. It performs physical, non‑blocking backups of InnoDB data files, which means it copies pages from disk while the database keeps serving queries. For MariaDB, there’s Mariabackup, which speaks more fluently to MariaDB’s specific internals but follows the same spirit.

The magic trick is that these tools understand InnoDB’s crash‑recovery mechanics. They capture data and redo logs in sync, then there’s a “prepare” phase that applies the logs to get everything consistent, just like a database would after a power outage. The result? A snapshot you can restore fast by placing files back on disk, fixing permissions, and starting the server. When time‑to‑restore matters—think busy e‑commerce stores or analytics platforms—this approach is worth its weight in uptime.

Physical backups shine as data sizes grow. They also support incremental runs, which is a lifesaver for storage costs and network bandwidth. You take a weekly full, then nightly incrementals that only copy what changed. The restore flow adds a little choreography—apply incrementals in order, prepare the backup, then move it into place—but it’s predictable once you’ve rehearsed it.
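The full-plus-incremental choreography looks roughly like this, sketched with placeholder paths under /backups (rehearse it on a test box before trusting it):

```shell
#!/bin/sh
# Weekly full + nightly incremental with XtraBackup (placeholder paths).
FULL=/backups/full
INC="/backups/inc-$(date +%a)"

if ! command -v xtrabackup >/dev/null 2>&1; then
    echo "xtrabackup not found here; the steps below are the plan"; exit 0
fi

# Sunday: the full base backup.
xtrabackup --backup --target-dir="$FULL"

# Weeknights: copy only pages changed since the base backup.
xtrabackup --backup --target-dir="$INC" --incremental-basedir="$FULL"

# Restore day: prepare the base with --apply-log-only, merge intermediate
# incrementals the same way, and merge the LAST incremental without it.
xtrabackup --prepare --apply-log-only --target-dir="$FULL"
xtrabackup --prepare --target-dir="$FULL" --incremental-dir="$INC"

# Then: xtrabackup --copy-back --target-dir="$FULL", fix ownership, start.
```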

There are caveats. You’ll want to restore onto a server with compatible MySQL or MariaDB versions and matching settings (especially file paths and page sizes). Also, while the backup is non‑blocking, it’s not non‑I/O: you will feel the reads if your disk is already busy. Throttle settings help, and in a pinch, I’ve temporarily boosted I/O capacity for the backup window to keep production happy.

If you’re curious about deeper specifics, the Percona XtraBackup documentation is written by folks who’ve been in this fight a long time. It’s practical, and the streaming option (--stream=xbstream; older 2.x releases could also stream tar) is more useful than it looks at first glance, especially when you’re piping backups across the network.
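The streaming shape, for what it’s worth, is just a pipe. A sketch with a hypothetical receiving host called backuphost and a placeholder file name:

```shell
#!/bin/sh
# Stream a physical backup straight to another machine; nothing is
# written locally on the database server. "backuphost" and the file
# name "db1.xbstream" are placeholders.
if command -v xtrabackup >/dev/null 2>&1; then
    xtrabackup --backup --stream=xbstream --target-dir=/tmp \
        | ssh backuphost 'cat > /backups/db1.xbstream'
else
    echo "xtrabackup not found; this is the streaming shape to adapt"
fi
# On the receiving end, unpack with:
#   xbstream -x < /backups/db1.xbstream -C /restore/dir
```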

Point‑in‑Time Recovery: Rewind to One Second Before the Oops

Point‑in‑Time Recovery (PITR) is the superpower that turns a big mistake into a minor interruption. The idea is simple: first restore a full backup (logical or physical) that captured a known point in time, then replay the binary logs to re‑apply every change right up to the moment before things went sideways. You can even stop at an exact timestamp or position. It feels like cheating the universe, in the best way.

To make PITR boring—in a good way—you need the binary log turned on and configured sensibly. In most setups I manage, I prefer row‑based logging for determinism. I keep retention long enough to cover my restore windows but not so long that disks weep. And I always verify that backups record the log coordinates. mysqldump can do this with --master-data=2 (renamed --source-data on recent MySQL), which writes the binary log file and position into the dump as a comment. XtraBackup and Mariabackup capture coordinates too, in a small metadata file saved alongside the backup, so you know exactly where to start replaying.
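Concretely, the server side of this is a few lines of my.cnf. A sketch with example values to tune—note the retention variable is binlog_expire_logs_seconds on MySQL 8.0 and recent MariaDB, expire_logs_days on older releases:

```ini
[mysqld]
# Turn on the binary log and give it a stable base name.
log_bin       = /var/lib/mysql/binlog
server_id     = 1

# Row-based logging replays deterministically.
binlog_format = ROW

# Keep 7 days of logs -- enough to bridge the oldest full backup
# you'd realistically restore from (example value, tune to your window).
binlog_expire_logs_seconds = 604800
```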

When disaster strikes, the flow is calm: restore your full backup to a safe place (not your production server directly if you can avoid it), confirm the snapshot’s binary log position, then use mysqlbinlog to stream changes back in until your chosen time. On MySQL, the official guide to binary logging and recovery with mysqlbinlog is solid and clear. For MariaDB, the vendor has a helpful walkthrough on point‑in‑time recovery using the binary log.
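The replay step itself is compact. A sketch with made-up log file names, a made-up start position, and a stop time one second before the mistake:

```shell
#!/bin/sh
# Replay binary logs up to just before the "oops". All names here are
# placeholders; the real start position comes from your backup's metadata.
if ! command -v mysqlbinlog >/dev/null 2>&1; then
    echo "mysqlbinlog not found here; this is the replay shape"; exit 0
fi

# Pass all log files to ONE mysqlbinlog invocation so session context
# carries across file boundaries, and pipe the result into the
# recovery instance (not production).
mysqlbinlog --start-position=154 \
            --stop-datetime="2024-05-06 10:42:14" \
            binlog.000042 binlog.000043 \
    | mysql --host=recovery-host
```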

One of my clients had a bad migration script run for about two minutes before someone noticed. We restored the previous night’s XtraBackup snapshot to a temporary server, then replayed binary logs up to exactly five seconds before the script started. We validated that inventory counts and orders looked right, flipped traffic, and nobody outside the room ever knew. I still remember the sigh of relief. PITR does that. It gives you a way to fine‑tune the restore so you don’t lose a whole day because of a two‑minute mistake.

There are a couple of practical footnotes. If you’re replaying logs, connect with an account that has the right privileges but won’t accidentally re‑log the changes. On some versions, you might use SET sql_log_bin=0 in the session to avoid re‑writing during recovery. Also, document your stop conditions clearly. Humans under stress do impulsive things; having a one‑liner that says “stop at timestamp X” or “stop at position Y” removes guesswork.
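That session switch is a single statement, and prepending it to the replay stream keeps everything in one session. A sketch with a placeholder log file name:

```shell
#!/bin/sh
# Keep recovery work out of the binary log: disable logging for this
# session only, then apply the replayed changes in that same session.
# "binlog.000042" is a placeholder; requires a suitably privileged account.
if command -v mysqlbinlog >/dev/null 2>&1 && command -v mysql >/dev/null 2>&1; then
    { echo "SET sql_log_bin=0;"; mysqlbinlog binlog.000042; } | mysql
else
    echo "client tools not found; the shape is: prepend SET sql_log_bin=0;"
fi
```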

How I Choose: Stories from the Field

I never pick a backup tool in a vacuum. Real teams have constraints—budgets, time, people, old hardware, compliance quirks. So here’s how I tend to reason it through in practice, with a few real‑world anecdotes to ground it.

For small to medium projects where the database fits neatly into a nightly window, I start with mysqldump. It’s friendly. It’s simple. When a client asked for portable backups they could open and inspect, a logical dump was perfect. We ran daily dumps with --single-transaction and --routines --events --triggers so nothing was missed, piped them through compression, and stored them offsite. For PITR, we enabled binary logs and captured coordinates in the dump. Restores were slower than physical copies, but the comfort of human‑readable backups and easy cross‑version imports outweighed the downsides.

For busy stores with thousands of writes per minute, I go straight to XtraBackup or Mariabackup. One fashion retailer had nightly product updates and a daytime rush they didn’t want to disturb. Physical hot backups made the whole process a background hum, with weekly fulls and nightly incrementals. The one lesson we had to learn: rehearse the prepare‑and‑restore steps until they were muscle memory, because speed mattered. The day we needed a restore, the team was oddly relaxed. We’d already practiced it five times.

For MariaDB environments that lean into MariaDB‑specific features, Mariabackup has been a better fit than XtraBackup in the last couple of years. Compatibility keeps improving across the board, but I like matching tools to engines when possible. That said, the fundamentals don’t change: test the restore on a clean server, check permissions, and warm the buffer pool if you can so the first page loads after a restore aren’t a sloth crawl.

And for data warehousing or read‑heavy analytics where we can tolerate a bit of lag but want snappy restores, physical snapshots combined with PITR strike a sweet balance. The pattern is always the same: take a trustworthy base backup; keep binary logs rolling and safe; practice replaying them; and maintain a simple runbook so anyone on the team can do it without summoning the database wizard at 3 a.m.

Setting It Up So Future‑You Says Thanks

Great backups aren’t just about the tool you choose. They’re about the routine that wraps around it. Schedule it, monitor it, test it, and keep a copy offsite. If you want a warm, practical walkthrough of the bigger picture, I wrote up how I explain backup hygiene to clients in my guide on the 3‑2‑1 backup strategy and automating backups on common hosting panels and VPS. The vibe is the same as this post: friendly, no fluff, and very doable.

On the server, I prefer simple scheduling with systemd timers or cron, with logging that goes somewhere I actually look—usually a centralized log or a Slack notification. The success path should be obvious, and the failure path should be loud. I also check size deltas. If last night’s backup is suspiciously tiny, something drifted. Sometimes it’s a config change; sometimes it’s a silent error. Either way, I don’t want to discover it during a crisis.
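For the systemd flavor, a timer pair might look like this—unit names, the script path, and the alert unit are all placeholders:

```ini
# /etc/systemd/system/db-backup.service  (placeholder names throughout)
[Unit]
Description=Nightly database backup
# Loud failure path: fire a notifier unit when the backup fails.
OnFailure=backup-alert.service

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/db-backup.sh

# /etc/systemd/system/db-backup.timer
[Unit]
Description=Run the database backup every night

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Persistent=true is the small but important bit: if the server was down at 02:00, the backup runs at next boot instead of silently skipping a night.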

Encrypt backups at rest. It’s one of those things that sounds complicated until you do it once. With mysqldump, I often pipe through gpg or age. With XtraBackup/Mariabackup, the native encryption flags make it straightforward. Keep keys outside the server you’re backing up—preferably in a vault—and rotate them on a schedule. On restore day, the last thing you want is a scavenger hunt for secrets.
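Piped encryption with age looks like this (gpg works the same way); the recipient string is a placeholder public key, and the private half should live in your vault, never on this server:

```shell
#!/bin/sh
# Dump -> compress -> encrypt, all in one stream.
# "shop" and the age recipient key are placeholders.
DB=shop
OUT="${DB}_$(date +%F).sql.zst.age"
RECIPIENT="age1examplepublickeyplaceholder"

if command -v mysqldump >/dev/null 2>&1 \
   && command -v zstd >/dev/null 2>&1 \
   && command -v age >/dev/null 2>&1; then
    mysqldump --single-transaction "$DB" \
        | zstd -q \
        | age -r "$RECIPIENT" > "$OUT"
    # Restore day: age -d -i key.txt "$OUT" | zstd -dc | mysql "$DB"
else
    echo "tools missing; pipeline shape: mysqldump | zstd | age > $OUT"
fi
```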

Compress where it makes sense. Logical dumps compress very well; physical backups benefit too, but the tradeoff is CPU time during the backup window. I’d rather spend a bit of CPU at night and save on storage all month. If the server’s busy, consider offloading compression to a helper host by streaming backups over the network.

Most importantly, test restores regularly. Spin up a temporary instance, restore last night’s backup, and point a staging app at it. Click around. Run a few known queries. If your business has critical data invariants—like “no order without a payment”—codify them as quick checks you can run on a restored copy. It’s amazing how many potential failures you catch just by doing a five‑minute sanity pass every week.
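Those invariant checks can be one small script each. A sketch—the orders/payments schema here is made up, standing in for whatever “no order without a payment” means in your data:

```shell
#!/bin/sh
# Sanity-check a restored copy (hypothetical schema: shop.orders and
# shop.payments; hypothetical host "restore-test"). Run against the
# restored instance, never production.
if ! command -v mysql >/dev/null 2>&1; then
    echo "mysql client not found here; the check below is the idea"; exit 0
fi

# -N suppresses column headers so we get a bare count back.
ORPHANS=$(mysql --host=restore-test -N -e "
    SELECT COUNT(*) FROM shop.orders o
    LEFT JOIN shop.payments p ON p.order_id = o.id
    WHERE p.id IS NULL AND o.status = 'paid';")

if [ "$ORPHANS" = "0" ]; then
    echo "invariant holds: every paid order has a payment"
else
    echo "RESTORE SUSPECT: $ORPHANS paid orders without payments" >&2
    exit 1
fi
```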

Your First PITR Run: A Calm Walkthrough

Let’s make this tangible. Picture a MariaDB or MySQL server with binary logging enabled and a nightly backup in place. An accidental deletion occurs at 10:42:15. Here’s how the recovery feels when you’ve practiced it a couple of times.

First, restore the last good full backup onto a separate instance. If it’s a mysqldump, you create the schema and replay the dump. If it’s a physical backup from XtraBackup or Mariabackup, you apply logs (prepare), copy files into the data directory, fix ownership, and start the server. The key is that you now have a database that reflects, say, 02:00 that morning.

Next, find the binary log position or GTID captured in your backup. With mysqldump and --master-data, it’s written in the dump file. With physical backups, look for the metadata that the tool saved alongside the snapshot. That tells you where to start. From there, you use mysqlbinlog to stream changes forward. If you know the “bad moment” timestamp, you can stop just before it. If you don’t, you can replay to a safe point you’ve identified through logs or application traces.

Before applying changes, I usually set the session to avoid re‑logging the recovery work. And I do it on an instance that isn’t taking production traffic yet. The last step is what I call the “peel‑open test”—you poke around, check the critical tables, run your invariants, make sure the app reads data correctly. Only then do you swap it into production or replicate from it to your main server. That deliberate calm is what lets you sleep the next night.

Gotchas I See All the Time (And How to Dodge Them)

One classic trap is turning on binary logs but letting them purge before your backup window completes. If you keep seven days of full backups but only three days of logs, you lose the ability to do PITR from older snapshots. Match your log retention to your worst‑case restore scenario, not your best intentions.

Another sneaky one: backups that capture data but not the surrounding objects. I’ve seen dumps that exclude triggers or routines by accident. On restore, the app behaves “almost right” but misses that one housekeeping trigger that cleans a queue, and days later you find an ever‑growing pile of stale records. It’s an avoidable mess—just include the flags for everything your schema needs, and validate after restore that they exist.

For physical backups, test across minor version upgrades. Most of the time, it’s smooth, but I’ve hit cases where a new InnoDB format or a changed directory layout made a direct file copy unhappy. The good news is that because you’ve rehearsed, you catch this in staging, not production. Worst case, you restore to the older version and replicate forward.

Storage is another quiet culprit. A backup that takes three hours to copy back across a slow network isn’t really a three‑hour restore; it’s a half‑day outage waiting to happen. If recovery speed matters, keep a local copy or a nearline snapshot that you can promote quickly, and send the archival copies offsite for safety. This is where that 3‑2‑1 mindset saves you, again and again.

What About Security, Compliance, and Those Boring But Important Boxes?

Backups are sensitive by nature. They often contain everything. So yes, encrypt them. Yes, restrict access. And please don’t leave them in a world‑readable bucket named “prod‑backups” because it’s “just temporary.” I’ve never once regretted being a little paranoid here. For compliance, label what’s inside, set retention based on policy, and verify deletion works. There’s nothing worse than being asked to remove data and finding out a three‑month‑old backup still has it because no one propagated the retention rules.

On servers, watch for credentials baked into scripts. Use service users with minimal privileges and keep secrets in environment files or a proper vault. Rotate them on a schedule that matches your risk tolerance. It sounds like overhead, but the day you rotate a key and realize recovery still works is the day you’ll feel truly confident about your strategy.

Bringing It All Together Without Drama

Here’s the lovely conclusion I’ve come to after a decade of doing this for teams big and small: you don’t need a complicated backup strategy; you need a practiced one. mysqldump is your dependable, readable friend for many workloads. XtraBackup and Mariabackup are your heavy lifters when load and size demand it. Point‑in‑Time Recovery stitches the story together so you can rewind to the moment before a mistake and carry on like nothing happened.

Start with what fits your world today. If your database is modest and you love portability, lean on mysqldump with binary logs and good monitoring. If your data is large and uptime is sacred, go physical, rehearse the prepare‑and‑restore flow, and keep your logs tidy and long enough. No matter what, test restores before you need them, keep copies offsite, encrypt, and document the steps in plain language so the newest person on the team can follow them under pressure.

And remember: the goal isn’t a fancy backup. It’s a boring recovery. The kind where you grab coffee, follow the steps, and fifteen minutes later the room is quiet again. Hope this was helpful! If you want me to dig into your setup or share a runbook template I use with clients, just say the word. See you in the next post.

Frequently Asked Questions

Should I reach for mysqldump or XtraBackup/Mariabackup?

Great question! If your database fits comfortably in a nightly window and you value portability and simplicity, mysqldump is often perfect—use --single-transaction so it’s non-blocking and capture routines, triggers, and events too. If writes are heavy or restores must be fast, XtraBackup (or Mariabackup) will give you hot, physical backups and quicker recovery. Both can do PITR as long as binary logs are enabled, so pick the one that matches your size and recovery speed needs.

How do I practice a point-in-time recovery safely?

Spin up a temporary instance. Restore your latest full backup there (mysqldump or a prepared XtraBackup/Mariabackup snapshot), then use mysqlbinlog to replay changes up to your chosen timestamp or position. Validate with a few critical queries or app clicks. Keep the session from re-logging changes, and only promote the instance after you’re happy. The rehearsal takes minutes but pays off massively when a real incident happens.

What backup schedule actually works?

I like weekly full backups and daily incrementals for large databases with XtraBackup/Mariabackup, plus continuous binary logging for PITR. For smaller databases, a nightly mysqldump (compressed and encrypted) is solid, with binlogs retained long enough to bridge any restore window. Whatever you choose, monitor sizes and exit codes, keep at least one offsite copy, and do a quick restore drill every week or two. Consistency beats complexity.