{"id":1325,"date":"2025-11-04T17:25:11","date_gmt":"2025-11-04T14:25:11","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/mysqldump-or-xtrabackup-the-friendly-guide-to-mysql-mariadb-backups-and-point%e2%80%91in%e2%80%91time-recovery\/"},"modified":"2025-11-04T17:25:11","modified_gmt":"2025-11-04T14:25:11","slug":"mysqldump-or-xtrabackup-the-friendly-guide-to-mysql-mariadb-backups-and-point%e2%80%91in%e2%80%91time-recovery","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/mysqldump-or-xtrabackup-the-friendly-guide-to-mysql-mariadb-backups-and-point%e2%80%91in%e2%80%91time-recovery\/","title":{"rendered":"mysqldump or XtraBackup? The Friendly Guide to MySQL\/MariaDB Backups and Point\u2011in\u2011Time Recovery"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>It always starts the same way. A quiet afternoon, a harmless schema change, a quick \u201cthis will only take a second\u201d moment\u2026 and then someone pings you with that message no one wants to read: \u201cCheckout is broken.\u201d The blood runs cold, coffee suddenly tastes like regret, and you start hearing the clock tick louder. I\u2019ve been there\u2014more than once\u2014and the only reason those stories didn\u2019t end with a very long weekend was simple: good backups and a calm plan for getting back to the exact second before things went wrong.<\/p>\n<p>Today I want to talk to you like a friend who\u2019s done this in the messy real world\u2014no ivory tower theory. We\u2019ll walk through MySQL and MariaDB backup strategies that actually hold up when the pressure\u2019s on. We\u2019ll explore where <strong>mysqldump<\/strong> feels like a warm blanket, when <strong>XtraBackup<\/strong> (and its MariaDB sibling, Mariabackup) is the right kind of heavy-duty tool, and how <strong>Point\u2011in\u2011Time Recovery<\/strong> can turn a disaster into a short coffee break. 
No stiff checklists, no buzzword bingo\u2014just practical stories, simple analogies, and things you can try this week.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Contents<\/p><ul class=\"toc_list\"><li><a href=\"#The_OhNo_Moment_Why_Backups_Fail_When_You_Need_Them_Most\"><span class=\"toc_number toc_depth_1\">1<\/span> The Oh\u2011No Moment: Why Backups Fail When You Need Them Most<\/a><\/li><li><a href=\"#mysqldump_The_Trusty_Swiss_Army_Knife_Thats_Better_Than_You_Think\"><span class=\"toc_number toc_depth_1\">2<\/span> mysqldump: The Trusty Swiss Army Knife That\u2019s Better Than You Think<\/a><\/li><li><a href=\"#XtraBackup_and_Mariabackup_Hot_Physical_Backups_That_Dont_Flinch_Under_Load\"><span class=\"toc_number toc_depth_1\">3<\/span> XtraBackup (and Mariabackup): Hot, Physical Backups That Don\u2019t Flinch Under Load<\/a><\/li><li><a href=\"#PointinTime_Recovery_Rewind_to_One_Second_Before_the_Oops\"><span class=\"toc_number toc_depth_1\">4<\/span> Point\u2011in\u2011Time Recovery: Rewind to One Second Before the Oops<\/a><\/li><li><a href=\"#How_I_Choose_Stories_from_the_Field\"><span class=\"toc_number toc_depth_1\">5<\/span> How I Choose: Stories from the Field<\/a><\/li><li><a href=\"#Setting_It_Up_So_FutureYou_Says_Thanks\"><span class=\"toc_number toc_depth_1\">6<\/span> Setting It Up So Future\u2011You Says Thanks<\/a><\/li><li><a href=\"#Your_First_PITR_Run_A_Calm_Walkthrough\"><span class=\"toc_number toc_depth_1\">7<\/span> Your First PITR Run: A Calm Walkthrough<\/a><\/li><li><a href=\"#Gotchas_I_See_All_the_Time_And_How_to_Dodge_Them\"><span class=\"toc_number toc_depth_1\">8<\/span> Gotchas I See All the Time (And How to Dodge Them)<\/a><\/li><li><a href=\"#What_About_Security_Compliance_and_Those_Boring_But_Important_Boxes\"><span class=\"toc_number toc_depth_1\">9<\/span> What About Security, Compliance, and Those Boring But Important Boxes?<\/a><\/li><li><a 
href=\"#Bringing_It_All_Together_Without_Drama\"><span class=\"toc_number toc_depth_1\">10<\/span> Bringing It All Together Without Drama<\/a><\/li><\/ul><\/div>\n<h2 id=\"section-1\"><span id=\"The_OhNo_Moment_Why_Backups_Fail_When_You_Need_Them_Most\">The Oh\u2011No Moment: Why Backups Fail When You Need Them Most<\/span><\/h2>\n<p>I remember a client who swore their backups were perfect. \u201cWe run them every night,\u201d they said. But here\u2019s the twist: the job ran, yes, but the output was an empty file because a permissions change broke access a week earlier. No alerts. No warnings. Just a false sense of safety. It\u2019s not that they didn\u2019t care\u2014it\u2019s that backups can look fine until the day they matter, and then the cracks show.<\/p>\n<p>Here\u2019s the thing about databases: they\u2019re always moving. While you\u2019re backing up, new rows are being written, transactions are mid-flight, and caches are reshuffling. If your method doesn\u2019t capture a consistent picture, you end up with something that looks like a photo but has pieces from two different moments stitched together. Sometimes that\u2019s fine. Other times, that\u2019s the difference between a clean restore and a headache you can\u2019t aspirin away.<\/p>\n<p>In my experience, backup pain shows up in three sneaky ways. First, consistency\u2014grabbing a snapshot that reflects a single point in time. Second, performance\u2014running backups without slowing down production to a crawl. Third, recovery\u2014getting data back quickly and precisely, not \u201csomewhere near\u201d last Tuesday. With MySQL and MariaDB, the usual suspects are <strong>mysqldump<\/strong> for logical backups and <strong>XtraBackup\/Mariabackup<\/strong> for physical, hot backups. Each has charm. Each has quirks. 
Let\u2019s get friendly with both.<\/p>\n<h2 id=\"section-2\"><span id=\"mysqldump_The_Trusty_Swiss_Army_Knife_Thats_Better_Than_You_Think\">mysqldump: The Trusty Swiss Army Knife That\u2019s Better Than You Think<\/span><\/h2>\n<p>mysqldump is like that dependable friend who shows up on moving day with a pickup truck. Not fancy, not the fastest, but you can count on it. It creates a logical export\u2014basically, a set of SQL statements that can rebuild your schema and data from scratch. It\u2019s portable, human\u2011readable, and great for migrations, small to medium databases, or when you need a quick copy to poke at locally.<\/p>\n<p>When people tell me mysqldump is slow, what they usually mean is: they ran it without the right options. If your tables are InnoDB (and these days, they usually are), you want <code>--single-transaction<\/code>. That quietly opens a consistent snapshot and avoids locking your tables for the duration, so your app keeps breathing. If you have triggers, routines, events, or views, include the flags that capture them too. I\u2019ve been burned by missing an event scheduler once; never again.<\/p>\n<p>There\u2019s also the question of size. Logical dumps are text. They compress like a dream. Piping mysqldump through <code>gzip<\/code> or <code>zstd<\/code> turns gigabytes into something friendlier, and restoring is just as simple\u2014unzip and load. Yes, the restore speed depends on how fast MySQL can execute those INSERTs, and yes, that can be slower than dropping a physical backup into place. But for a lot of teams, the tradeoff is perfectly fine, especially for nightly backups or developer snapshots.<\/p>\n<p>Where mysqldump can trip you up is when you have massive data, very high write rates, or storage engines that don\u2019t play nicely with non\u2011blocking snapshots. InnoDB is your friend here. MyISAM, less so\u2014it tends to lock in ways that make you unpopular with your colleagues. 
And if you\u2019re aiming for point\u2011in\u2011time recovery, you want the dump to capture binary log coordinates (we\u2019ll talk about that soon) so you can replay changes right up to the moment before the problem.<\/p>\n<p>Bottom line? If your database fits comfortably in a logical dump window and you value portability, mysqldump is still an absolute workhorse. It\u2019s not glamorous, but it\u2019s earned its keep a thousand times over in my world.<\/p>\n<h2 id=\"section-3\"><span id=\"XtraBackup_and_Mariabackup_Hot_Physical_Backups_That_Dont_Flinch_Under_Load\">XtraBackup (and Mariabackup): Hot, Physical Backups That Don\u2019t Flinch Under Load<\/span><\/h2>\n<p>Now, when traffic\u2019s heavy and you can\u2019t afford to poke the database too hard, <strong>Percona XtraBackup<\/strong> steps in like a pro mover with dollies and straps. It performs <strong>physical, non\u2011blocking<\/strong> backups of InnoDB data files, which means it copies pages from disk while the database keeps serving queries. For MariaDB, there\u2019s <strong>Mariabackup<\/strong>, which speaks more fluently to MariaDB\u2019s specific internals but follows the same spirit.<\/p>\n<p>The magic trick is that these tools understand InnoDB\u2019s crash\u2011recovery mechanics. They capture data and redo logs in sync, then there\u2019s a \u201cprepare\u201d phase that applies the logs to get everything consistent, just like a database would after a power outage. The result? A snapshot you can restore fast by placing files back on disk, fixing permissions, and starting the server. When time\u2011to\u2011restore matters\u2014think busy e\u2011commerce stores or analytics platforms\u2014this approach is worth its weight in uptime.<\/p>\n<p>Physical backups shine as data sizes grow. They also support incremental runs, which is a lifesaver for storage costs and network bandwidth. You take a weekly full, then nightly incrementals that only copy what changed. 
The restore flow adds a little choreography\u2014apply incrementals in order, prepare the backup, then move it into place\u2014but it\u2019s predictable once you\u2019ve rehearsed it.<\/p>\n<p>There are caveats. You\u2019ll want to restore onto a server with compatible MySQL or MariaDB versions and matching settings (especially file paths and page sizes). Also, while the backup is non\u2011blocking, it\u2019s not non\u2011I\/O: you will feel the reads if your disk is already busy. Throttle settings help, and in a pinch, I\u2019ve temporarily boosted I\/O capacity for the backup window to keep production happy.<\/p>\n<p>If you\u2019re curious about deeper specifics, the <a href=\"https:\/\/www.percona.com\/doc\/percona-xtrabackup\/latest\/index.html\" rel=\"nofollow noopener\" target=\"_blank\">Percona XtraBackup documentation<\/a> is written by folks who\u2019ve been in this fight a long time. It\u2019s practical, and the streaming options (<code>--stream<\/code> with <code>xbstream<\/code> or <code>tar<\/code>) are more useful than they look at first glance, especially when you\u2019re piping backups across the network.<\/p>\n<h2 id=\"section-4\"><span id=\"PointinTime_Recovery_Rewind_to_One_Second_Before_the_Oops\">Point\u2011in\u2011Time Recovery: Rewind to One Second Before the Oops<\/span><\/h2>\n<p>Point\u2011in\u2011Time Recovery (PITR) is the superpower that turns a big mistake into a minor interruption. The idea is simple: first restore a full backup (logical or physical) that captured a known point in time, then replay the <strong>binary logs<\/strong> to re\u2011apply every change right up to the moment before things went sideways. You can even stop at an exact timestamp or position. It feels like cheating the universe, in the best way.<\/p>\n<p>To make PITR boring\u2014in a good way\u2014you need the binary log turned on and configured sensibly. In most setups I manage, I prefer row\u2011based logging for determinism. 
I keep retention long enough to cover my restore windows but not so long that disks weep. And I always verify that backups record the log coordinates. mysqldump can do this with <code>--master-data<\/code> (renamed <code>--source-data<\/code> in MySQL 8.0.26 and later), which writes the binary log file and position into the dump as a comment. XtraBackup and Mariabackup capture coordinates too, so you know exactly where to start replaying.<\/p>\n<p>When disaster strikes, the flow is calm: restore your full backup to a safe place (not your production server directly if you can avoid it), confirm the snapshot\u2019s binary log position, then use <code>mysqlbinlog<\/code> to stream changes back in until your chosen time. On MySQL, the official guide to <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/binary-log.html\" rel=\"nofollow noopener\" target=\"_blank\">binary logging and recovery with mysqlbinlog<\/a> is solid and clear. For MariaDB, the vendor has a helpful walkthrough on <a href=\"https:\/\/mariadb.com\/kb\/en\/point-in-time-recovery-using-the-binary-log\/\" rel=\"nofollow noopener\" target=\"_blank\">point\u2011in\u2011time recovery using the binary log<\/a>.<\/p>\n<p>One of my clients had a bad migration script run for about two minutes before someone noticed. We restored the previous night\u2019s XtraBackup snapshot to a temporary server, then replayed binary logs up to exactly five seconds before the script started. We validated that inventory counts and orders looked right, flipped traffic, and nobody outside the room ever knew. I still remember the sigh of relief. PITR does that. It gives you a way to fine\u2011tune the restore so you don\u2019t lose a whole day because of a two\u2011minute mistake.<\/p>\n<p>There are a couple of practical footnotes. If you\u2019re replaying logs, connect with an account that has the right privileges but won\u2019t accidentally re\u2011log the changes. On some versions, you might use <code>SET sql_log_bin=0<\/code> in the session to avoid re\u2011writing during recovery. 
Also, document your stop conditions clearly. Humans under stress do impulsive things; having a one\u2011liner that says \u201cstop at timestamp X\u201d or \u201cstop at position Y\u201d removes guesswork.<\/p>\n<h2 id=\"section-5\"><span id=\"How_I_Choose_Stories_from_the_Field\">How I Choose: Stories from the Field<\/span><\/h2>\n<p>I never pick a backup tool in a vacuum. Real teams have constraints\u2014budgets, time, people, old hardware, compliance quirks. So here\u2019s how I tend to reason it through in practice, with a few real\u2011world anecdotes to ground it.<\/p>\n<p>For small to medium projects where the database fits neatly into a nightly window, I start with mysqldump. It\u2019s friendly. It\u2019s simple. When a client asked for portable backups they could open and inspect, a logical dump was perfect. We ran daily dumps with <code>--single-transaction<\/code> and <code>--routines --events --triggers<\/code> so nothing was missed, piped them through compression, and stored them offsite. For PITR, we enabled binary logs and captured coordinates in the dump. Restores were slower than physical copies, but the comfort of human\u2011readable backups and easy cross\u2011version imports outweighed the downsides.<\/p>\n<p>For busy stores with thousands of writes per minute, I go straight to XtraBackup or Mariabackup. One fashion retailer had nightly product updates and a daytime rush they didn\u2019t want to disturb. Physical hot backups made the whole process a background hum, with weekly fulls and nightly incrementals. The one lesson we had to learn: rehearse the prepare\u2011and\u2011restore steps until they were muscle memory, because speed mattered. The day we needed a restore, the team was oddly relaxed. We\u2019d already practiced it five times.<\/p>\n<p>For MariaDB environments that lean into MariaDB\u2011specific features, Mariabackup has been a better fit than XtraBackup in the last couple of years. 
Compatibility keeps improving across the board, but I like matching tools to engines when possible. That said, the fundamentals don\u2019t change: test the restore on a clean server, check permissions, and warm the buffer pool if you can so the first page loads after a restore aren\u2019t a sloth crawl.<\/p>\n<p>And for data warehousing or read\u2011heavy analytics where we can tolerate a bit of lag but want snappy restores, physical snapshots combined with PITR strike a sweet balance. The pattern is always the same: take a trustworthy base backup; keep binary logs rolling and safe; practice replaying them; and maintain a simple runbook so anyone on the team can do it without summoning the database wizard at 3 a.m.<\/p>\n<h2 id=\"section-6\"><span id=\"Setting_It_Up_So_FutureYou_Says_Thanks\">Setting It Up So Future\u2011You Says Thanks<\/span><\/h2>\n<p>Great backups aren\u2019t just about the tool you choose. They\u2019re about the routine that wraps around it. Schedule it, monitor it, test it, and keep a copy offsite. If you want a warm, practical walkthrough of the bigger picture, I wrote up how I explain backup hygiene to clients in my guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/3-2-1-yedekleme-stratejisi-neden-ise-yariyor-cpanel-plesk-ve-vpste-otomatik-yedekleri-nasil-kurarsin\/\">the 3\u20112\u20111 backup strategy and automating backups on common hosting panels and VPS<\/a>. The vibe is the same as this post: friendly, no fluff, and very doable.<\/p>\n<p>On the server, I prefer simple scheduling with systemd timers or cron, with logging that goes somewhere I actually look\u2014usually a centralized log or a Slack notification. The success path should be obvious, and the failure path should be loud. I also check size deltas. If last night\u2019s backup is suspiciously tiny, something drifted. Sometimes it\u2019s a config change; sometimes it\u2019s a silent error. 
Either way, I don\u2019t want to discover it during a crisis.<\/p>\n<p>Encrypt backups at rest. It\u2019s one of those things that sounds complicated until you do it once. With mysqldump, I often pipe through <code>gpg<\/code> or <code>age<\/code>. With XtraBackup\/Mariabackup, the native encryption flags make it straightforward. Keep keys outside the server you\u2019re backing up\u2014preferably in a vault\u2014and rotate them on a schedule. On restore day, the last thing you want is a scavenger hunt for secrets.<\/p>\n<p>Compress where it makes sense. Logical dumps compress very well; physical backups benefit too, but the tradeoff is CPU time during the backup window. I\u2019d rather spend a bit of CPU at night and save on storage all month. If the server\u2019s busy, consider offloading compression to a helper host by streaming backups over the network.<\/p>\n<p>Most importantly, test restores regularly. Spin up a temporary instance, restore last night\u2019s backup, and point a staging app at it. Click around. Run a few known queries. If your business has critical data invariants\u2014like \u201cno order without a payment\u201d\u2014codify them as quick checks you can run on a restored copy. It\u2019s amazing how many potential failures you catch just by doing a five\u2011minute sanity pass every week.<\/p>\n<h2 id=\"section-7\"><span id=\"Your_First_PITR_Run_A_Calm_Walkthrough\">Your First PITR Run: A Calm Walkthrough<\/span><\/h2>\n<p>Let\u2019s make this tangible. Picture a MariaDB or MySQL server with binary logging enabled and a nightly backup in place. An accidental deletion occurs at 10:42:15. Here\u2019s how the recovery feels when you\u2019ve practiced it a couple of times.<\/p>\n<p>First, restore the last good full backup onto a separate instance. If it\u2019s a mysqldump, you create the schema and replay the dump. 
If it\u2019s a physical backup from XtraBackup or Mariabackup, you apply logs (prepare), copy files into the data directory, fix ownership, and start the server. The key is that you now have a database that reflects, say, 02:00 that morning.<\/p>\n<p>Next, find the binary log position or GTID captured in your backup. With mysqldump and <code>--master-data<\/code>, it\u2019s written in the dump file. With physical backups, look for the metadata that the tool saved alongside the snapshot. That tells you where to start. From there, you use <code>mysqlbinlog<\/code> to stream changes forward. If you know the \u201cbad moment\u201d timestamp, you can stop just before it. If you don\u2019t, you can replay to a safe point you\u2019ve identified through logs or application traces.<\/p>\n<p>Before applying changes, I usually set the session to avoid re\u2011logging the recovery work. And I do it on an instance that isn\u2019t taking production traffic yet. The last step is what I call the \u201cpeel\u2011open test\u201d\u2014you poke around, check the critical tables, run your invariants, make sure the app reads data correctly. Only then do you swap it into production or replicate from it to your main server. That deliberate calm is what lets you sleep the next night.<\/p>\n<h2 id=\"section-8\"><span id=\"Gotchas_I_See_All_the_Time_And_How_to_Dodge_Them\">Gotchas I See All the Time (And How to Dodge Them)<\/span><\/h2>\n<p>One classic trap is turning on binary logs but letting them purge before your backup window completes. If you keep seven days of full backups but only three days of logs, you lose the ability to do PITR from older snapshots. Match your log retention to your worst\u2011case restore scenario, not your best intentions.<\/p>\n<p>Another sneaky one: backups that capture data but not the surrounding objects. I\u2019ve seen dumps that exclude triggers or routines by accident. 
On restore, the app behaves \u201calmost right\u201d but misses that one housekeeping trigger that cleans a queue, and days later you find an ever\u2011growing pile of stale records. It\u2019s an avoidable mess\u2014just include the flags for everything your schema needs, and validate after restore that they exist.<\/p>\n<p>For physical backups, test across minor version upgrades. Most of the time, it\u2019s smooth, but I\u2019ve hit cases where a new InnoDB format or a changed directory layout made a direct file copy unhappy. The good news is that because you\u2019ve rehearsed, you catch this in staging, not production. Worst case, you restore to the older version and replicate forward.<\/p>\n<p>Storage is another quiet culprit. A backup that takes three hours to copy back across a slow network isn\u2019t really a three\u2011hour restore; it\u2019s a half\u2011day outage waiting to happen. If recovery speed matters, keep a local copy or a nearline snapshot that you can promote quickly, and send the archival copies offsite for safety. This is where that 3\u20112\u20111 mindset saves you, again and again.<\/p>\n<h2 id=\"section-9\"><span id=\"What_About_Security_Compliance_and_Those_Boring_But_Important_Boxes\">What About Security, Compliance, and Those Boring But Important Boxes?<\/span><\/h2>\n<p>Backups are sensitive by nature. They often contain everything. So yes, encrypt them. Yes, restrict access. And please don\u2019t leave them in a world\u2011readable bucket named \u201cprod\u2011backups\u201d because it\u2019s \u201cjust temporary.\u201d I\u2019ve never once regretted being a little paranoid here. For compliance, label what\u2019s inside, set retention based on policy, and verify deletion works. There\u2019s nothing worse than being asked to remove data and finding out a three\u2011month\u2011old backup still has it because no one propagated the retention rules.<\/p>\n<p>On servers, watch for credentials baked into scripts. 
Use service users with minimal privileges and keep secrets in environment files or a proper vault. Rotate them on a schedule that matches your risk tolerance. It sounds like overhead, but the day you rotate a key and realize recovery still works is the day you\u2019ll feel truly confident about your strategy.<\/p>\n<h2 id=\"section-10\"><span id=\"Bringing_It_All_Together_Without_Drama\">Bringing It All Together Without Drama<\/span><\/h2>\n<p>Here\u2019s the lovely conclusion I\u2019ve come to after a decade of doing this for teams big and small: you don\u2019t need a complicated backup strategy; you need a practiced one. mysqldump is your dependable, readable friend for many workloads. XtraBackup and Mariabackup are your heavy lifters when load and size demand it. Point\u2011in\u2011Time Recovery stitches the story together so you can rewind to the moment before a mistake and carry on like nothing happened.<\/p>\n<p>Start with what fits your world today. If your database is modest and you love portability, lean on mysqldump with binary logs and good monitoring. If your data is large and uptime is sacred, go physical, rehearse the prepare\u2011and\u2011restore flow, and keep your logs tidy and long enough. No matter what, test restores before you need them, keep copies offsite, encrypt, and document the steps in plain language so the newest person on the team can follow them under pressure.<\/p>\n<p>And remember: the goal isn\u2019t a fancy backup. It\u2019s a <strong>boring recovery<\/strong>. The kind where you grab coffee, follow the steps, and fifteen minutes later the room is quiet again. Hope this was helpful! If you want me to dig into your setup or share a runbook template I use with clients, just say the word. See you in the next post.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>It always starts the same way. 
A quiet afternoon, a harmless schema change, a quick \u201cthis will only take a second\u201d moment\u2026 and then someone pings you with that message no one wants to read: \u201cCheckout is broken.\u201d The blood runs cold, coffee suddenly tastes like regret, and you start hearing the clock tick louder. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1326,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1325","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1325","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=1325"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1325\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/1326"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=1325"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=1325"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=1325"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}