{"id":1665,"date":"2025-11-10T23:28:41","date_gmt":"2025-11-10T20:28:41","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/the-calm-guide-to-postgresql-autovacuum-on-a-vps-tune-it-shrink-bloat-and-repack-without-the-drama\/"},"modified":"2025-11-10T23:28:41","modified_gmt":"2025-11-10T20:28:41","slug":"the-calm-guide-to-postgresql-autovacuum-on-a-vps-tune-it-shrink-bloat-and-repack-without-the-drama","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/the-calm-guide-to-postgresql-autovacuum-on-a-vps-tune-it-shrink-bloat-and-repack-without-the-drama\/","title":{"rendered":"The Calm Guide to PostgreSQL Autovacuum on a VPS: Tune It, Shrink Bloat, and Repack Without the Drama"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>So there I was, staring at a <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a> that looked perfectly fine from the outside\u2014CPU cruising, RAM not pegged, disk I\/O modest\u2014and yet the app felt sluggish. Not a full meltdown, just a kind of weary sigh on every request. If you\u2019ve ever had that \u201csomething\u2019s off but I can\u2019t prove it\u201d gut feeling, you know the vibe. I tailed logs, poked at queries, and then the lightbulb went on: table bloat. Autovacuum was on, sure, but it wasn\u2019t tuned for this little server\u2019s reality. And the worst part? I\u2019d seen this movie before. Autovacuum wasn\u2019t lazy; it was just trying to be polite and wound up being late. On a VPS, that can quietly snowball into bloat, random IO, and a slow dance nobody asked for.<\/p>\n<p>In this post, I want to walk you through how I think about <strong>PostgreSQL autovacuum tuning and bloat control on a VPS<\/strong>, the practical knobs that actually matter, and how to wield <strong>pg_repack<\/strong> when you\u2019ve already got a mess on your hands. We\u2019ll chat about what bloat really is (beyond the scary name), how to measure it without spreadsheets taking over your life, the handful of settings that usually move the needle, and a safe workflow to repack tables without making your app hold its breath. We\u2019ll keep it friendly, but we\u2019ll get into the good stuff.<\/p>\n<h2 id=\"section-1\">Why Bloat Happens and Why Your VPS Feels It First<\/h2>\n<p>Here\u2019s the thing about PostgreSQL: it\u2019s wonderfully honest. When you update or delete a row, the old version doesn\u2019t just vanish\u2014it sticks around until <strong>VACUUM<\/strong> comes by and cleans it up. That\u2019s a feature, not a bug, because Postgres uses multiversion concurrency control (MVCC) to keep readers happy while writers do their thing. But if autovacuum isn\u2019t firing as often as it should, or if it\u2019s politely tiptoeing around your workload, those dead rows pile up like uncollected recycling. That\u2019s <strong>bloat<\/strong>.<\/p>\n<p>On a big iron database server with more IOPS than sense, you can sometimes coast for a while. On a VPS\u2014especially the kind we love for small apps and startups\u2014disks are shared, I\/O is precious, and RAM is the tightest bottleneck. Bloat translates into larger tables and indexes, more pages to scan, extra cache misses, and a general sense of heaviness. Think of it like carrying a backpack stuffed with last year\u2019s receipts; you can still walk, but each step feels a tiny bit harder.<\/p>\n<p>Autovacuum is the background helper that keeps things tidy without you thinking about it. 
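<\/p>\n<p>Before nudging anything, it helps to see exactly what the server is running with. The <strong>pg_settings<\/strong> view lists every autovacuum knob, its current value, and where that value came from; a harmless, read-only peek from psql (trim the filter to taste):<\/p>\n<pre class=\"language-sql line-numbers\"><code class=\"language-sql\">-- Current autovacuum and cost-based delay settings, with their source\nSELECT name, setting, unit, boot_val, source\nFROM pg_settings\nWHERE name LIKE 'autovacuum%' OR name LIKE 'vacuum_cost%'\nORDER BY name;\n<\/code><\/pre>\n<p>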
The defaults are conservative because Postgres doesn\u2019t want to surprise you with sudden I\/O storms. That\u2019s fine for a lot of workloads, but if your app updates the same rows frequently, or if you\u2019ve got large, hot tables, you\u2019ll probably need to nudge those defaults so they fit your reality. A VPS appreciates those nudges even more than a big box does.<\/p>\n<h2 id=\"section-2\">Reading the Room: How to Spot Bloat and Lazy Autovacuum<\/h2>\n<p>I\u2019ve learned to start with a quiet look around. Before touching a single setting, ask: how is autovacuum behaving today? Is it running at all? Is it finishing? Where is the pressure? You don\u2019t need a full observability platform\u2014just a couple of views Postgres gives you and a sensible logging setup.<\/p>\n<p>First, I like to surface basic signals. You can peek at dead tuples per table:<\/p>\n<pre class=\"language-sql line-numbers\"><code class=\"language-sql\">SELECT\n  schemaname,\n  relname,\n  n_live_tup,\n  n_dead_tup,\n  last_vacuum,\n  last_autovacuum,\n  last_analyze,\n  last_autoanalyze\nFROM pg_stat_user_tables\nORDER BY n_dead_tup DESC\nLIMIT 20;\n<\/code><\/pre>\n<p>If you see the same tables floating to the top with high <strong>n_dead_tup<\/strong> and no recent autovacuum, that\u2019s a clue. I also love checking <strong>pg_stat_progress_vacuum<\/strong> in a second session when autovacuum is on the move, just to see who\u2019s being worked on and how far along it is. It gives you a feel for whether autovacuum is actually keeping up or just nibbling at the edges. If you want to get into the weeds, the <a href=\"https:\/\/www.postgresql.org\/docs\/current\/progress-reporting.html\" rel=\"nofollow noopener\" target=\"_blank\">VACUUM progress reporting<\/a> docs are a great compass.<\/p>\n<p>Second, turn on helpful logging. You don\u2019t need to go wild\u2014just enable logs for autovacuum that runs long enough to be interesting:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">log_autovacuum_min_duration = '5s'\n<\/code><\/pre>\n<p>With that, the server will log autovacuum jobs that took longer than five seconds. Pick a value that keeps noise out while still capturing \u201creal work.\u201d I usually ship those logs to a central place so I can search and graph them. If that sounds like your kind of tidy, I wrote about setting up <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-log-yonetimi-nasil-rayina-oturur-grafana-loki-promtail-ile-merkezi-loglama-tutma-sureleri-ve-alarm-kurallari\/\">centralised logging with Grafana Loki + Promtail<\/a>\u2014a perfect companion when you want to slice through Postgres logs without SSH gymnastics.<\/p>\n<p>Third, remember indexes bloat too. You can feel it when index scans get slower for no apparent reason. If your table does heavy updates, the index leaf pages can grow with pointers to dead tuples. Autovacuum helps, but once an index has sprawled, you often need a rebuild or a repack to reclaim space.<\/p>\n<h2 id=\"section-3\">The Practical Autovacuum Settings That Usually Move the Needle<\/h2>\n<p>Let\u2019s talk knobs. There are many, but only a handful usually change the plot on a VPS. We\u2019ll keep each one anchored in how it feels at runtime.<\/p>\n<p>First up: <strong>when<\/strong> autovacuum decides to act. 
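<\/p>\n<p>Under the hood, the trigger is simple arithmetic: a table is queued for vacuum once its dead tuples exceed <strong>autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples<\/strong>, and the analyze trigger works the same way with its own pair of settings. To get a feel for how close your tables are, here is a rough sketch that plugs the current global settings into that formula (per-table overrides are ignored):<\/p>\n<pre class=\"language-sql line-numbers\"><code class=\"language-sql\">-- Approximate per-table vacuum trigger point using the global settings\n-- (per-table reloptions are not taken into account here)\nSELECT s.relname,\n       s.n_dead_tup,\n       round(current_setting('autovacuum_vacuum_threshold')::numeric\n             + current_setting('autovacuum_vacuum_scale_factor')::numeric * c.reltuples::numeric)\n         AS vacuum_triggers_at\nFROM pg_stat_user_tables s\nJOIN pg_class c ON c.oid = s.relid\nORDER BY s.n_dead_tup DESC\nLIMIT 10;\n<\/code><\/pre>\n<p>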
The scale and threshold settings control this:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Fire more predictably on modest tables too\nautovacuum_vacuum_scale_factor = 0.1        # default is 0.2; 10% change triggers sooner\nautovacuum_vacuum_threshold     = 50         # base tuples before scale factor applies\n\n# Analyze more often to keep planner statistics fresh\nautovacuum_analyze_scale_factor = 0.05       # default is 0.1\nautovacuum_analyze_threshold    = 50\n<\/code><\/pre>\n<p>I like smaller scale factors on a VPS, especially for hot tables that aren\u2019t massive. It means autovacuum wakes up more often to do smaller, cheaper rounds of maintenance. You can always tune per-table later, but shifting the baseline helps. On very large tables, pure percentages can be a trap\u2014ten percent of a huge table is still, well, huge. That\u2019s where per-table settings shine.<\/p>\n<p>Second: <strong>how fast<\/strong> autovacuum moves. The cost settings are your pacing tool:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Let autovacuum move with purpose, but not bulldoze\nautovacuum_vacuum_cost_limit = 2000   # default of -1 falls back to vacuum_cost_limit (200)\nautovacuum_vacuum_cost_delay = '2ms'  # small delay smooths IO; increase if IO stalls\n<\/code><\/pre>\n<p>Out of the box, autovacuum can be a bit too polite. On a VPS with SSDs, a higher <strong>autovacuum_vacuum_cost_limit<\/strong> often helps autovacuum finish sooner, which ironically reduces overall pressure. If you see spiky I\/O or latency wobbles while autovacuum runs, lengthen <strong>autovacuum_vacuum_cost_delay<\/strong> a touch. Note that these are the autovacuum-specific knobs; the plain <strong>vacuum_cost_*<\/strong> settings pace manual VACUUM. Picture it like steady breathing during a jog rather than sprinting and stopping.<\/p>\n<p>Third: <strong>worker power<\/strong>. How many jobs can run, and how much memory do they get?<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">autovacuum_max_workers = 3       # or 4 on slightly larger VPS\nmaintenance_work_mem   = '512MB' # for vacuum\/reindex; match VPS size and workload\nautovacuum_naptime     = '15s'   # how often to wake and check for work\n<\/code><\/pre>\n<p>On a small VPS, more workers isn\u2019t always better. Three is a sweet spot for many apps\u2014enough to work in parallel but not enough to cause a thundering herd. I bump <strong>maintenance_work_mem<\/strong> as long as the server can afford it; it helps VACUUM and index maintenance move briskly (autovacuum workers use it too, unless you set <strong>autovacuum_work_mem<\/strong> separately). And a shorter <strong>autovacuum_naptime<\/strong> keeps scheduling responsive without turning the server into a jittery hummingbird.<\/p>\n<p>Fourth: <strong>freezing<\/strong> old tuples. If you\u2019ve ever been sideswiped by long freeze vacuums, you know they can be noisy. These settings help keep them orderly:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">autovacuum_freeze_max_age           = 800000000  # default 200M; fewer, larger anti-wraparound vacuums\nautovacuum_multixact_freeze_max_age = 400000000  # default\nvacuum_freeze_table_age             = 200000000\nvacuum_freeze_min_age               = 50000000   # default\n<\/code><\/pre>\n<p>Think of freezing as the seasonal deep clean. You want it to happen before it\u2019s urgent, but not so often it interrupts daily life. Apart from giving <strong>autovacuum_freeze_max_age<\/strong> more headroom, I keep these near defaults and rely on the earlier triggers to keep tables fresh. If your workload sits idle for long stretches, glance at these numbers so you don\u2019t get a surprise \u201chouse cleaning\u201d during peak traffic.<\/p>\n<p>And finally: <strong>visibility<\/strong>. 
If autovacuum seems sleepy, tell PostgreSQL to say more:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">log_autovacuum_min_duration = '5s'\nlog_checkpoints             = on\nlog_temp_files              = '4MB'\n<\/code><\/pre>\n<p>This isn\u2019t about drowning in logs; it\u2019s about catching patterns. If you see constant checkpoint pressure while autovacuum runs, consider bumping up <strong>max_wal_size<\/strong> a bit. If temp files explode during analyze or queries, you might tweak <strong>work_mem<\/strong> carefully (per-session, ideally).<\/p>\n<p>By the way, for the curious: the upstream details behind the curtain are in the <a href=\"https:\/\/www.postgresql.org\/docs\/current\/runtime-config-autovacuum.html\" rel=\"nofollow noopener\" target=\"_blank\">PostgreSQL autovacuum documentation<\/a>. It\u2019s not bedtime reading, but it\u2019s the map you wish you had when you\u2019re lost.<\/p>\n<h2 id=\"section-4\">Per\u2011Table Tuning: Where the Real Wins Hide<\/h2>\n<p>Global settings get you in the ballpark. But the tables doing the most damage to your I\/O bill often need special care. The most common pattern I see is an \u201cupdates galore\u201d table\u2014think sessions, carts, or anything that gets nudged on almost every request.<\/p>\n<p>For those, the global scale factors are too blunt. I\u2019d rather set much smaller per-table factors with a low base threshold so autovacuum fires early and predictably:<\/p>\n<pre class=\"language-sql line-numbers\"><code class=\"language-sql\">ALTER TABLE public.sessions\n  SET (autovacuum_vacuum_scale_factor = 0.02,\n       autovacuum_vacuum_threshold    = 50,\n       autovacuum_analyze_scale_factor= 0.02,\n       autovacuum_analyze_threshold   = 50);\n<\/code><\/pre>\n<p>If the table is still ballooning, check the <strong>fillfactor<\/strong>. Lowering it on a hot update table leaves room on each page for updated rows to stay put, enabling HOT (heap-only) updates more often. It\u2019s like leaving space in a suitcase so you don\u2019t have to unpack everything to add one shirt.<\/p>\n<pre class=\"language-sql line-numbers\"><code class=\"language-sql\">ALTER TABLE public.sessions SET (fillfactor = 80);\nVACUUM FULL public.sessions;  -- blocking; do this in a quiet window\n<\/code><\/pre>\n<p>Because <strong>VACUUM FULL<\/strong> rewrites the table under an exclusive lock, I only use it when the table isn\u2019t too big or the maintenance window is generous. In most cases where downtime is precious, I go straight to <strong>pg_repack<\/strong> instead (we\u2019ll get there). The point is: per-table fillfactor plus tighter autovacuum triggers often cuts bloat growth at the source.<\/p>\n<p>One more sneaky source of trouble is unused or overly broad indexes. They bloat just as happily as tables, but they don\u2019t help your queries. Before repacking, I\u2019ll sometimes drop an obviously unused index in staging and measure the impact. In production, I\u2019m careful: I check the usage counters in <strong>pg_stat_user_indexes<\/strong> first, keep the index definition handy so it\u2019s easy to recreate, and roll the drop out during a calm moment. You\u2019d be amazed how often a single unnecessary index doubles the maintenance cost on a busy table.<\/p>\n<h2 id=\"section-5\">When the Mess Has Already Happened: pg_repack, Without the Drama<\/h2>\n<p>Alright, let\u2019s say bloat has gotten out of hand. Autovacuum\u2019s doing its best but can\u2019t claw back space because pages are already overstuffed. That\u2019s when I reach for <strong>pg_repack<\/strong>. 
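<\/p>\n<p>Before reaching for it, I like to confirm which relations are actually big enough to be worth the effort. A quick, read-only ranking by total size (table plus indexes and TOAST), with dead tuples alongside, points at the best candidates:<\/p>\n<pre class=\"language-sql line-numbers\"><code class=\"language-sql\">-- Largest relations first; big size plus high n_dead_tup = repack candidates\nSELECT relname,\n       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,\n       n_dead_tup\nFROM pg_stat_user_tables\nORDER BY pg_total_relation_size(relid) DESC\nLIMIT 10;\n<\/code><\/pre>\n<p>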
If you haven\u2019t met pg_repack yet, it\u2019s an external utility that rebuilds tables and indexes online, using triggers to keep a shadow copy in sync and then swapping it in. The result: reclaimed space with minimal locks.<\/p>\n<p>Before you touch anything, step zero: <strong>backups and safety checks<\/strong>. I know it\u2019s boring, but I sleep better after a fresh backup and a quick restore test on another machine. If you want a friendly template for that mindset, here\u2019s how I think through <a href=\"https:\/\/www.dchost.com\/blog\/en\/felaket-kurtarma-plani-nasil-yazilir-rto-rpoyu-kafada-netlestirip-yedek-testleri-ve-runbooklari-gercekten-calisir-hale-getirmek\/\">a no\u2011drama DR plan<\/a>. Even with pg_repack\u2019s smooth approach, there\u2019s always a chance the final swap needs a short lock; be ready.<\/p>\n<p>Installing pg_repack is straightforward on Debian\/Ubuntu or RPM-based distros. Package names vary by PostgreSQL version, and sometimes you\u2019ll compile from source. The official <a href=\"https:\/\/reorg.github.io\/pg_repack\/\" rel=\"nofollow noopener\" target=\"_blank\">pg_repack project page<\/a> has clear steps.<\/p>\n<p>There are two big caveats I always call out to clients: first, your table needs a primary key or at least a unique index on not-null columns that pg_repack can use; without one, it can\u2019t rebuild the table online and will skip it. Second, watch your disk and WAL budget. Repacking a large table creates a parallel copy and churns WAL, which is fine if you\u2019ve planned for it and unpleasant if you haven\u2019t.<\/p>\n<p>My usual flow looks like this. I start in a quiet period, confirm space, and keep eyes on replication lag if there\u2019s a standby. Then I run a spot repack for the worst offender:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">pg_repack \\\n  -h 127.0.0.1 -p 5432 -U postgres \\\n  -d mydb -t public.sessions \\\n  --wait-timeout=600 --no-order\n<\/code><\/pre>\n<p>The <strong>--no-order<\/strong> flag cuts some overhead when you don\u2019t care about ordering. If everything behaves, I\u2019ll consider cleaning up related indexes too. Many times it\u2019s better to repack the whole database during a maintenance window:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">pg_repack -h 127.0.0.1 -U postgres -d mydb --wait-timeout=600\n<\/code><\/pre>\n<p>That works through every table and index it can in that database, skipping ones that aren\u2019t safe. When you\u2019ve got a read replica, watch lag, keep <strong>--jobs<\/strong> modest, and space runs apart. Slower and steady beats fast and scary on a VPS.<\/p>\n<p>One of my clients once had a \u201cmystery\u201d 70 GB database. The tables added up to barely half that. We measured, repacked the hot tables first, then the rest, and came out at 38 GB without changing a single row of data. Queries sped up simply because the filesystem had less to drag around. That\u2019s what I mean by bloat being sneaky: it steals milliseconds in a thousand little ways.<\/p>\n<p>If you\u2019re running Postgres in containers, quick note: I prefer running pg_repack from a dedicated utility container or from the host with network access, rather than baking it into the database image. 
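<\/p>\n<p>As a rough sketch of what that can look like (the image name below is a placeholder, not a real published image; whichever you use, the pg_repack client must match the server extension version, and the extension must already be created in the target database):<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Placeholder image name -- build or pick one matching your PostgreSQL major version.\n# Assumes CREATE EXTENSION pg_repack has already been run in mydb.\ndocker run --rm --network host example\/pg-repack:16 \\\n  pg_repack -h 127.0.0.1 -U postgres -d mydb -t public.sessions --wait-timeout=600\n<\/code><\/pre>\n<p>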
Same logic I use when I talk about <a href=\"https:\/\/www.dchost.com\/blog\/en\/bir-konteyner-gununde-kafama-takilanlar\/\">how I ship safer containers<\/a>: keep the database container lean, grant only what\u2019s needed, and run maintenance from controlled tooling with clear permissions.<\/p>\n<h2 id=\"section-6\">A Day\u2011to\u2011Day Maintenance Rhythm That Actually Sticks<\/h2>\n<p>Autovacuum tuning is not a once-and-done switch flip. It\u2019s a conversation with your workload. Here\u2019s how I keep it manageable without turning it into a full-time job.<\/p>\n<p>Step one, set a monthly audit ritual. I don\u2019t mean a spreadsheet; I mean 20 minutes with coffee. Check your top tables by dead tuples, look at autovacuum logs for long runners, and glance at index sizes for the usual suspects. If anything looks weird, I dig in right away or schedule a repack during the next low traffic window.<\/p>\n<p>Step two, stop bloat at the source. That means a healthy <strong>fillfactor<\/strong> on the hottest tables and the courage to remove an index that isn\u2019t earning its keep. It also means keeping statistics fresh. If you\u2019ve got lopsided distributions (like a handful of \u201chot\u201d tenants or products), raising <strong>default_statistics_target<\/strong> a bit, or raising the statistics target on the specific columns involved, can help the planner understand reality.<\/p>\n<p>Step three, monitor without obsessing. Centralised logs are enough for many teams. I ship autovacuum and error logs to Loki and set a few gentle alerts: \u201cautovacuum jobs over 2 minutes,\u201d \u201crepack finished,\u201d that sort of thing. If you want a quick start, I laid out a friendly path to <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-log-yonetimi-nasil-rayina-oturur-grafana-loki-promtail-ile-merkezi-loglama-tutma-sureleri-ve-alarm-kurallari\/\">VPS log management with Loki + Promtail<\/a>. It makes charting autovacuum runs oddly satisfying.<\/p>\n<p>Step four, plan for the inevitable \u201coh no\u201d moment. Even with perfect tuning, a migration, a feature launch, or a surprise traffic spike can tilt a table into bloat territory. Having a written runbook helps\u2014what you\u2019ll repack first, what queries might be paused, who needs a heads-up. I treat it like a mini version of a DR playbook, just surgical and specific. If you haven\u2019t written one before, I shared my approach to <a href=\"https:\/\/www.dchost.com\/blog\/en\/felaket-kurtarma-plani-nasil-yazilir-rto-rpoyu-kafada-netlestirip-yedek-testleri-ve-runbooklari-gercekten-calisir-hale-getirmek\/\">disaster recovery runbooks that actually work<\/a> and the mindset carries over nicely.<\/p>\n<p>And finally, prefer online operations where possible. In the MySQL world, I love using online migration tools to keep changes flowing with minimal disruption. The same principle applies here with pg_repack. If that idea resonates, I wrote about <a href=\"https:\/\/www.dchost.com\/blog\/en\/mysqlde-sifir-kesinti-sema-degisiklikleri-gh-ost-ve-pt-online-schema-change-ile-blue-green-nasil-kurulur\/\">zero\u2011downtime MySQL migrations<\/a>\u2014different database, same philosophy: do the heavy lifting quietly in the background, then switch over.<\/p>\n<h2 id=\"section-7\">Putting It All Together: A Simple Recipe You Can Trust<\/h2>\n<p>If we were sitting together with your VPS right now, I\u2019d do three things. First, I\u2019d turn on lightweight autovacuum logging and look at the last week of activity. 
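<\/p>\n<p>In practice, step one is a single setting plus a reload; a minimal sketch from psql (requires superuser, and pick whatever duration keeps the noise down for you):<\/p>\n<pre class=\"language-sql line-numbers\"><code class=\"language-sql\">-- Log any autovacuum run that takes longer than 5 seconds, then reload the config\nALTER SYSTEM SET log_autovacuum_min_duration = '5s';\nSELECT pg_reload_conf();\n<\/code><\/pre>\n<p>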
Second, I\u2019d identify the two or three tables that generate the most dead tuples, check their indexes and fillfactor, and set per-table autovacuum thresholds that fire early and often. Third, I\u2019d schedule a repack for the worst offender and watch disk, WAL, and replication lag while it runs. That\u2019s it. You don\u2019t need a 40\u2011point plan to feel the difference.<\/p>\n<p>Two weeks later, I\u2019d check in again. Are autovacuum runs shorter? Did index scans get snappier? Is cache hit ratio steadier? If yes, we keep going. If not, I\u2019ll weave in another pass: maybe increase <strong>maintenance_work_mem<\/strong>, trim an index, or lower a scale factor a smidge. The goal is not perfection; the goal is a boring database. Boring is beautiful.<\/p>\n<p>If you want to nerd out a bit more (and who among us doesn\u2019t), stash a link to the <a href=\"https:\/\/www.postgresql.org\/docs\/current\/runtime-config-autovacuum.html\" rel=\"nofollow noopener\" target=\"_blank\">autovacuum configuration<\/a> page and the <a href=\"https:\/\/reorg.github.io\/pg_repack\/\" rel=\"nofollow noopener\" target=\"_blank\">pg_repack project docs<\/a>. They\u2019re the authoritative sources behind many of the tips here.<\/p>\n<h2 id=\"section-8\">Wrap\u2011up: A Friendly Nudge Toward Happier Queries<\/h2>\n<p>Ever had that moment when your app just feels heavier than it should? Nine times out of ten, it\u2019s not some exotic bug\u2014it\u2019s everyday maintenance lagging behind. On a VPS, a little tuning goes a long way. Let autovacuum wake up a touch earlier. Give it just enough muscle to finish its rounds without stomping on I\/O. Teach your hottest tables to leave breathing room with fillfactor. And when bloat has already moved in, repack with a clear plan and a backup you\u2019ve actually tested, not just assumed.<\/p>\n<p>If this nudged you to tweak a setting, peek at a log, or schedule a quiet repack, great\u2014that\u2019s a win. And if you\u2019re juggling containers along the way, keep your database image clean and your maintenance tools sharp and separate; it\u2019ll save you from a pile of \u201cwhy is this in prod?\u201d questions later. Most of all, aim for a database that\u2019s as boring as possible in the best way: predictable, quiet, and fast enough that you forget it\u2019s there.<\/p>\n<p>Hope this was helpful! If you\u2019ve got a fun bloat story or a setting you swear by, I\u2019m all ears. See you in the next post, and here\u2019s to fewer dead tuples and happier queries.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>So there I was, staring at a VPS that looked perfectly fine from the outside\u2014CPU cruising, RAM not pegged, disk I\/O modest\u2014and yet the app felt sluggish. Not a full meltdown, just a kind of weary sigh on every request. 
If you\u2019ve ever had that \u201csomething\u2019s off but I can\u2019t prove it\u201d gut feeling, you [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1666,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1665","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1665","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=1665"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1665\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/1666"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=1665"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=1665"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=1665"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}