{"id":1537,"date":"2025-11-08T12:28:11","date_gmt":"2025-11-08T09:28:11","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/the-friendly-vps-playbook-for-postgresql-performance-shared_buffers-work_mem-wal-and-pgbouncer-without-the-drama\/"},"modified":"2025-11-08T12:28:11","modified_gmt":"2025-11-08T09:28:11","slug":"the-friendly-vps-playbook-for-postgresql-performance-shared_buffers-work_mem-wal-and-pgbouncer-without-the-drama","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/the-friendly-vps-playbook-for-postgresql-performance-shared_buffers-work_mem-wal-and-pgbouncer-without-the-drama\/","title":{"rendered":"The Friendly VPS Playbook for PostgreSQL Performance: shared_buffers, work_mem, WAL, and PgBouncer Without the Drama"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">\u0130&ccedil;indekiler<\/p><ul class=\"toc_list\"><li><a href=\"#So_there_I_was_staring_at_a_slow_dashboard\"><span class=\"toc_number toc_depth_1\">1<\/span> So there I was, staring at a slow dashboard\u2026<\/a><\/li><li><a href=\"#The_VPS_reality_limited_RAM_noisy_neighbors_and_why_it_matters\"><span class=\"toc_number toc_depth_1\">2<\/span> The VPS reality: limited RAM, noisy neighbors, and why it matters<\/a><\/li><li><a href=\"#shared_buffers_and_effective_cache_size_your_first_two_levers\"><span class=\"toc_number toc_depth_1\">3<\/span> shared_buffers and effective_cache_size: your first two levers<\/a><\/li><li><a href=\"#work_mem_and_friends_where_sorts_hashes_and_temp_spills_hide\"><span class=\"toc_number toc_depth_1\">4<\/span> work_mem and friends: where sorts, hashes, and temp spills hide<\/a><\/li><li><a href=\"#WAL_checkpoints_and_durability_why_your_disk_sometimes_feels_hot\"><span class=\"toc_number toc_depth_1\">5<\/span> WAL, checkpoints, and durability: why your disk sometimes feels hot<\/a><\/li><li><a href=\"#PgBouncer_the_quiet_hero_of_steady_latency\"><span class=\"toc_number toc_depth_1\">6<\/span> PgBouncer: the quiet hero of steady latency<\/a><\/li><li><a href=\"#Planner_nudges_and_filesystem_reality_practical_nudges_that_matter\"><span class=\"toc_number toc_depth_1\">7<\/span> Planner nudges and filesystem reality: practical nudges that matter<\/a><\/li><li><a href=\"#A_practical_tuning_workflow_you_can_reuse\"><span class=\"toc_number toc_depth_1\">8<\/span> A practical tuning workflow you can reuse<\/a><\/li><li><a href=\"#Putting_it_together_a_sample_config_to_spark_a_conversation\"><span class=\"toc_number toc_depth_1\">9<\/span> Putting it together: a sample config to spark a conversation<\/a><\/li><li><a href=\"#Operational_habits_that_keep_PostgreSQL_happy\"><span class=\"toc_number toc_depth_1\">10<\/span> Operational habits that keep PostgreSQL happy<\/a><\/li><li><a href=\"#Common_traps_and_how_to_sidestep_them\"><span class=\"toc_number toc_depth_1\">11<\/span> Common traps (and how to sidestep them)<\/a><\/li><li><a href=\"#Where_to_go_from_here\"><span class=\"toc_number toc_depth_1\">12<\/span> Where to go from here<\/a><\/li><li><a href=\"#Wrap-up_a_calm_repeatable_way_to_tune_PostgreSQL_on_a_VPS\"><span class=\"toc_number toc_depth_1\">13<\/span> Wrap-up: a calm, repeatable way to tune PostgreSQL on a VPS<\/a><\/li><\/ul><\/div>\n<h2 id=\"section-1\"><span id=\"So_there_I_was_staring_at_a_slow_dashboard\">So there I was, staring at a slow dashboard\u2026<\/span><\/h2>\n<p>It was one of those late afternoons when everything looks fine on 
paper but feels sluggish in real life. The app was up, CPU wasn\u2019t pegged, network looked calm, yet the reports page took ages to load. If you\u2019ve ever run PostgreSQL on a <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a>, you probably know that sinking feeling. The logs weren\u2019t screaming, the graphs were polite, and still\u2014something in the database layer was moving like it had mud on its shoes.<\/p>\n<p>Ever had that moment when you say, \u201cIt worked yesterday, what changed?\u201d And the answer is: your workload did. Or your connection patterns. Or that one migration that nudged a setting from good to \u201cmeh.\u201d In that moment, the magic words are simple: <strong>know your knobs<\/strong>. For PostgreSQL on a small-to-mid VPS, a handful of settings make a massive difference\u2014<strong>shared_buffers<\/strong>, <strong>work_mem<\/strong>, <strong>WAL and checkpoints<\/strong>\u2014plus one very practical sidekick called <strong>PgBouncer<\/strong> for connection pooling.<\/p>\n<p>In this guide, I\u2019ll walk you through the way I approach tuning on real VPSes, not lab-perfect servers. We\u2019ll talk about the trade-offs, the gotchas, and the \u201cdon\u2019t touch that unless you know why\u201d bits. I\u2019ll share the approaches I keep reusing\u2014what I set first, what I watch, and how I noodle my way to a snappier, more reliable PostgreSQL without buying more hardware. We\u2019ll also talk about the workflow of tuning, not just the settings, because a good habit beats a good guess every time.<\/p>\n<p>Before we dive in, one warm reminder: always protect your data first. If you\u2019re testing changes, have proper backups and a rollback plan. If you need a friendly way to think through durable, remote copies, I\u2019ve walked through <a href=\"https:\/\/www.dchost.com\/blog\/en\/restic-ve-borg-ile-s3-uyumlu-uzak-yedekleme-surumleme-sifreleme-ve-saklama-ne-zaman-nasil\/\">offsite backups with Restic or Borg to S3\u2011compatible storage<\/a>\u2014that piece pairs nicely with this guide.<\/p>\n<h2 id=\"section-2\"><span id=\"The_VPS_reality_limited_RAM_noisy_neighbors_and_why_it_matters\">The VPS reality: limited RAM, noisy neighbors, and why it matters<\/span><\/h2>\n<p>Let\u2019s set the scene. A VPS isn\u2019t a big iron database server with heaps of RAM just waiting to cache your indexes. It\u2019s usually a modest slice of CPU, a few gigabytes of memory, and an SSD that\u2019s fast when no one else is slamming it. There\u2019s also the little matter of noisy neighbors and the occasional IO hiccup. None of that is a deal-breaker for PostgreSQL\u2014it just means your tuning should respect the budget.<\/p>\n<p>In my experience, two things shape your tuning decisions right away: your <strong>memory constraints<\/strong> and your <strong>connection behavior<\/strong>. Memory constraints push you to set sane limits on PostgreSQL\u2019s buffers and per-query memory. Connection behavior decides whether you need PgBouncer yesterday. If your app opens lots of short-lived connections (looking at you, some ORMs and serverless-ish patterns), that can cause surprising overhead on a small VPS. If you tame connections and spend memory wisely, you\u2019ve already won half the battle.<\/p>\n<p>Here\u2019s the thing: tuning is rarely one-and-done. It\u2019s a conversation between your workload and your resources. 
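<\/p>\n<p>A quick way to start that conversation is to ask PostgreSQL what it\u2019s doing right now. Here\u2019s a minimal sketch using the standard pg_stat_activity and pg_stat_database views; nothing in it is specific to my setup, and the 0.99 figure is just a common rule of thumb:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">-- how many connections are there, and what are they doing?\nSELECT state, count(*)\nFROM pg_stat_activity\nGROUP BY state\nORDER BY count(*) DESC;\n\n-- rough cache hit ratio; on a healthy VPS this usually sits above 0.99\nSELECT sum(blks_hit)::float \/ nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio\nFROM pg_stat_database;\n<\/code><\/pre>\n<p>If most of those connections turn out to be idle, that\u2019s your PgBouncer hint. 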
So we\u2019ll set thoughtful defaults, watch how the system responds, and then nudge settings in small, safe steps\u2014like a good barista adjusting grind size as the beans change.<\/p>\n<h2 id=\"section-3\"><span id=\"shared_buffers_and_effective_cache_size_your_first_two_levers\">shared_buffers and effective_cache_size: your first two levers<\/span><\/h2>\n<p>I still remember the first time I bumped <strong>shared_buffers<\/strong> and watched latency graph lines chill out like I\u2019d just poured them a cup of tea. On a VPS, this setting matters a lot, but it\u2019s also one you shouldn\u2019t overdo. Think of <strong>shared_buffers<\/strong> as PostgreSQL\u2019s private cache for table and index pages. The OS has its own cache too. If you hog all memory with shared buffers, you leave the OS cache gasping\u2014and both will perform worse.<\/p>\n<p>So how much is \u201cjust enough\u201d? On a small VPS, my starting principle is modest: a slice, not the whole cake. On a 4 GB instance, for example, I\u2019ll often start around 25% for <strong>shared_buffers<\/strong>, then adjust based on observed cache hit ratios and memory pressure. For 2 GB, I\u2019m even more conservative. Large values don\u2019t magically make the database faster if the OS cache can\u2019t breathe.<\/p>\n<p>Then there\u2019s <strong>effective_cache_size<\/strong>. This one isn\u2019t memory you\u2019re reserving\u2014it\u2019s a hint to the planner about how much cache is effectively available across the system (PostgreSQL\u2019s buffers plus OS filesystem cache). If you\u2019re too pessimistic here, the planner may shy away from index scans it would have loved. Too optimistic, and it may gamble on plans that thrash. I typically set <strong>effective_cache_size<\/strong> to somewhere around half to three-quarters of total RAM on a VPS, subtracting what I know apps and the OS will need. It\u2019s an art informed by observation.<\/p>\n<p>Here\u2019s a representative starting point for a small-to-mid VPS. Don\u2019t copy-paste without thinking; use it as a conversation starter with your workload:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># memory and cache\nshared_buffers = 1GB                 # start modestly; adjust after observing\neffective_cache_size = 3GB          # planner hint, not a reservation\nmax_connections = 100               # keep this realistic if you plan to use PgBouncer\n<\/code><\/pre>\n<p>For deeper context on these and other memory knobs, the <a href=\"https:\/\/www.postgresql.org\/docs\/current\/runtime-config-resource.html\" rel=\"nofollow noopener\" target=\"_blank\">official PostgreSQL documentation for resource settings<\/a> is a great touchstone when you want the low-level details.<\/p>\n<h2 id=\"section-4\"><span id=\"work_mem_and_friends_where_sorts_hashes_and_temp_spills_hide\">work_mem and friends: where sorts, hashes, and temp spills hide<\/span><\/h2>\n<p>If shared_buffers is the living room, <strong>work_mem<\/strong> is the little bench space each person gets when they\u2019re chopping veggies. It\u2019s per operation, not per connection, which is where many folks get surprised. A single complex query can have multiple sort or hash steps, and each of those can take up to <strong>work_mem<\/strong>. Multiply that by concurrent queries, and you can blow past your RAM faster than you can say \u201cOOM.\u201d<\/p>\n<p>So the goal is to keep it reasonable globally and then tune per-session or per-query for the expensive workloads. 
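<\/p>\n<p>In practice that targeting looks like a transaction-scoped bump with SET LOCAL, or a per-role override for batch users. A small sketch; the reporting_job role name is made up for illustration:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">-- per-transaction: only this query gets the bigger bench space,\n-- and the setting reverts automatically at COMMIT or ROLLBACK\nBEGIN;\nSET LOCAL work_mem = '256MB';\n-- ... run the one expensive sort- or hash-heavy query here ...\nCOMMIT;\n\n-- per-role: every session of a (hypothetical) batch role gets more room\nALTER ROLE reporting_job SET work_mem = '128MB';\n<\/code><\/pre>\n<p>Globally, though, stay stingy. 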
On a 4 GB VPS, I might start with a global <strong>work_mem<\/strong> of 8\u201332 MB. Then I watch for temp file activity. If I see specific queries spilling to disk, I\u2019ll temporarily raise <strong>work_mem<\/strong> just for those sessions or use a connection class in PgBouncer to give background jobs more elbow room than web requests.<\/p>\n<p>Here\u2019s a starter set of memory-tuning values that treats RAM like the precious thing it is on a VPS:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># per-operation memory\nwork_mem = 16MB                     # keep conservative globally; tune per session when needed\nmaintenance_work_mem = 256MB        # for VACUUM, CREATE INDEX, etc. raise only during maintenance windows\n<\/code><\/pre>\n<p>I once tuned an analytics workload where giving a single nightly job more <strong>work_mem<\/strong> cut runtime from 40 minutes to 8. The trick wasn\u2019t raising the global value\u2014it was <em>targeting<\/em> the heavy job while keeping day-to-day traffic sane. If you\u2019re curious which queries hurt the most, the <a href=\"https:\/\/www.postgresql.org\/docs\/current\/pgstatstatements.html\" rel=\"nofollow noopener\" target=\"_blank\">pg_stat_statements extension reference<\/a> shows you a goldmine of query-level metrics. Use it to find the real hogs.<\/p>\n<p>Two quick tips from the trenches: first, watch <strong>temp_files<\/strong> and <strong>temp_bytes<\/strong> in your monitoring. If it\u2019s spiking, you probably need to revisit query plans, indexes, or targeted work_mem increases. Second, if your app occasionally runs large batch jobs, consider isolating them by role and tuning per-role settings during the job\u2019s lifetime. It\u2019s like giving your movers wider doors for one day.<\/p>\n<h2 id=\"section-5\"><span id=\"WAL_checkpoints_and_durability_why_your_disk_sometimes_feels_hot\">WAL, checkpoints, and durability: why your disk sometimes feels hot<\/span><\/h2>\n<p>Write-Ahead Logging (WAL) is where PostgreSQL writes changes before they hit the main data files. On a VPS, WAL behavior can be the difference between smooth sailing and sudden disk pressure. There are a few settings here that I always approach with respect: <strong>checkpoint_timeout<\/strong>, <strong>max_wal_size<\/strong>, <strong>wal_compression<\/strong>, and <strong>synchronous_commit<\/strong>. Each has trade-offs around performance, durability, and IO patterns.<\/p>\n<p>Let\u2019s talk checkpoints first. If checkpoints happen too often, you\u2019ll see bursts of IO and latency spikes as pages get flushed. If they\u2019re too far apart with too small a WAL size, you risk big flushes later or running out of WAL headroom during surges. On a VPS with decent SSDs, I like to aim for fewer, calmer checkpoints by allowing a larger WAL budget than the stock default and a sensible timeout. 
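<\/p>\n<p>To see whether your checkpoints arrive on schedule or get forced by WAL pressure, peek at the checkpointer counters. A minimal sketch; note that PostgreSQL 17 moved these into pg_stat_checkpointer, while older releases keep them in pg_stat_bgwriter:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">-- PostgreSQL 17 and newer: timed vs requested checkpoints\nSELECT num_timed, num_requested FROM pg_stat_checkpointer;\n\n-- older releases: same idea, different view\nSELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;\n<\/code><\/pre>\n<p>If the requested count dominates, your <strong>max_wal_size<\/strong> is too small for the write volume. 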
I also turn on <strong>wal_compression<\/strong> for most write-heavy workloads\u2014it often pays off on VPS storage.<\/p>\n<p>Here\u2019s a baseline I keep coming back to when I\u2019m starting out on small-to-mid instances:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># WAL and checkpoints\nwal_level = replica                 # enough for PITR\/replication if needed\nwal_compression = on                # often a win on VPS SSDs\ncheckpoint_timeout = 15min          # fewer, smoother checkpoints\nmax_wal_size = 2GB                  # adjust to your write volume and disk\nmin_wal_size = 256MB\nsynchronous_commit = on             # keep durability unless you accept losing a few ms of transactions on crash\n<\/code><\/pre>\n<p>Now, <strong>synchronous_commit<\/strong> is often misunderstood. Turning it off can make writes feel faster because transactions return before the WAL is safely on disk. That\u2019s a durability trade-off. If you can accept losing a trickle of the latest transactions in a crash, you might consider relaxing it for specific non-critical workloads. I\u2019ve seen teams use it selectively for queue-like tables whose entries could be recreated. But don\u2019t casually flip it off for the entire cluster unless you truly understand the risks.<\/p>\n<p>And then there\u2019s the humble <strong>autovacuum<\/strong>, which deserves a fan club. When it\u2019s neglected, bloat and dead tuples pile up, queries slow down, and people start blaming everything else. Keep autovacuum healthy by ensuring it has enough <strong>maintenance_work_mem<\/strong> during off-hours, and consider nudging <strong>autovacuum_vacuum_cost_limit<\/strong> and thresholds for hot tables. A little care here prevents many \u201cmysterious slowdowns.\u201d<\/p>\n<p>By the way, if you\u2019re doing this inside a broader production playbook, observability helps you catch checkpoint spikes before users do. Setting up dashboards and alerts early pays off. If you haven\u2019t already, consider how to <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">set up useful monitoring with Prometheus, Grafana, and Uptime Kuma<\/a> so you can see WAL volume, checkpoint intervals, and temp file usage in one glance.<\/p>\n<h2 id=\"section-6\"><span id=\"PgBouncer_the_quiet_hero_of_steady_latency\">PgBouncer: the quiet hero of steady latency<\/span><\/h2>\n<p>One of my clients had a web app that was beautifully built but had a habit of opening short-lived connections like it was collecting trading cards. On a modest VPS, that pattern becomes expensive\u2014each connection isn\u2019t free, and context switching plus per-connection memory adds up quickly. Enter <strong>PgBouncer<\/strong>, the lightweight connection pooler that sits in front of PostgreSQL and keeps a small, steady pool of connections to the database while your app churns through many incoming requests.<\/p>\n<p>There are two modes you\u2019ll hear about a lot: <strong>session<\/strong> pooling and <strong>transaction<\/strong> pooling. Session pooling keeps a database connection for the lifetime of your client session. Transaction pooling, on the other hand, assigns a server connection only during a transaction and returns it to the pool immediately after. Transaction pooling is usually the sweet spot for high-concurrency web traffic on a VPS because it lets you serve many logical clients with a smaller set of real database connections. 
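<\/p>\n<p>How small can that set be? A rule of thumb that floats around the PostgreSQL community sizes the pool near your core count, not your client count; treat the arithmetic below as a sketch, not a law:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># a common community heuristic for the server-side pool\n# pool_size ~= (vCPU count * 2) + number of disks\n# e.g. a 2 vCPU VPS with one SSD: (2 * 2) + 1 = 5 server connections\n# starting a bit above that and watching for queue waits is reasonable;\n# the sample config later in this post uses 20, and both are sane openers\ndefault_pool_size = 10\n<\/code><\/pre>\n<p>With transaction pooling, those few server connections get handed back the moment each transaction commits. 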
Just be aware it doesn\u2019t play nicely with session-level features that assume a persistent backend.<\/p>\n<p>Here\u2019s a skeletal PgBouncer configuration I reach for as a starting point. Adjust the pool sizes to your workload and instance size:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">[databases]\nappdb = host=127.0.0.1 port=5432 dbname=appdb\n\n[pgbouncer]\nlisten_addr = 0.0.0.0\nlisten_port = 6432\nauth_type = md5\nauth_file = \/etc\/pgbouncer\/userlist.txt\npool_mode = transaction\nmax_client_conn = 1000\ndefault_pool_size = 20\nmin_pool_size = 5\nreserve_pool_size = 5\nreserve_pool_timeout = 5\nserver_idle_timeout = 60\nserver_reset_query = DISCARD ALL\nignore_startup_parameters = extra_float_digits\n<\/code><\/pre>\n<p>And here\u2019s the trick I wish I\u2019d learned earlier: align <strong>max_connections<\/strong> in PostgreSQL with the pool you actually need, not the number of app processes. With PgBouncer in front, PostgreSQL might only need a few dozen connections even if your app sees hundreds of concurrent requests. The result is a calmer database, less memory pressure, and fewer spikes. It\u2019s the difference between a quiet restaurant with a steady kitchen and a line of customers banging on the chef\u2019s door.<\/p>\n<p>If you want to browse more options and edge cases, the <a href=\"https:\/\/www.pgbouncer.org\/\" rel=\"nofollow noopener\" target=\"_blank\">PgBouncer documentation<\/a> is short and to the point. It\u2019s one of those rare tools that does one job well and doesn\u2019t try to be a circus.<\/p>\n<h2 id=\"section-7\"><span id=\"Planner_nudges_and_filesystem_reality_practical_nudges_that_matter\">Planner nudges and filesystem reality: practical nudges that matter<\/span><\/h2>\n<p>There\u2019s a whole world of planner preferences that can give you small but meaningful wins when used carefully. Two that come up often in VPS land are <strong>random_page_cost<\/strong> and <strong>seq_page_cost<\/strong>. With SSDs, random access is less painful than it used to be, so lowering <strong>random_page_cost<\/strong> from the old HDD defaults can encourage index usage where it\u2019s beneficial. I\u2019m cautious with global changes here; I\u2019ll usually start by nudging random_page_cost down a bit and observing plans for known expensive queries. If your IO is healthy, the planner can afford to be a little more optimistic.<\/p>\n<p>On the filesystem side, your VPS provider\u2019s storage stack (and whether it has write-back caching) influences how much you get out of WAL tweaks and checkpoint tuning. One habit I stick to is leaving a margin in RAM for the OS cache\u2014it does heavy lifting for <strong>effective_cache_size<\/strong> and helps reads stay snappy. Another is avoiding swap thrash by keeping my memory knobs realistic. If you see swap usage creeping up under load, it\u2019s time to rein in work_mem or reduce concurrent connections (which PgBouncer helps with).<\/p>\n<p>Sometimes the best tuning move isn\u2019t in PostgreSQL at all. I had a case where the single biggest win was switching a cron-driven batch job from running at the same time as peak traffic to an off-peak window. Same query, same indexes, same settings\u2014just less contention. It\u2019s a reminder that architecture is part of tuning. 
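<\/p>\n<p>In crontab terms, that fix was as small as they come; the script path here is hypothetical:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># before: heavy report ran right on top of morning peak traffic\n# 0 9 * * * \/usr\/local\/bin\/heavy_report.sh\n\n# after: same job, same query, same indexes, quiet hours\n30 3 * * * \/usr\/local\/bin\/heavy_report.sh\n<\/code><\/pre>\n<p>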
Let your database breathe when it needs to sprint.<\/p>\n<h2 id=\"section-8\"><span id=\"A_practical_tuning_workflow_you_can_reuse\">A practical tuning workflow you can reuse<\/span><\/h2>\n<p>Tuning without a workflow is like tilting a mirror in the dark. You want consistent steps, small changes, and quick feedback. Here\u2019s the flow I keep coming back to for VPS-sized PostgreSQL instances.<\/p>\n<p>First, baseline what \u201cnormal\u201d looks like. Collect query stats, connection counts, temp file usage, WAL volume, checkpoint frequency, cache hit ratios, and disk IO latency. If you don\u2019t have dashboards yet, this is a great moment to set them up. Here\u2019s a friendly walkthrough on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">building monitoring with Prometheus, Grafana, and Uptime Kuma<\/a>. Even a few well-chosen graphs make you dramatically more confident.<\/p>\n<p>Second, sketch a low-risk plan. I usually start by setting <strong>shared_buffers<\/strong>, <strong>effective_cache_size<\/strong>, and <strong>work_mem<\/strong> conservatively. Then I tune <strong>WAL<\/strong> and <strong>checkpoints<\/strong> to avoid spiky flushes. If the app is chatty with connections, I drop PgBouncer in front and relax <strong>max_connections<\/strong> on the database.<\/p>\n<p>Third, make one change at a time and let it bake. Watch your graphs for a few hours or days depending on your traffic pattern. If you have scheduled jobs, try to observe at least one full cycle. Keep notes\u2014future you will thank you.<\/p>\n<p>Fourth, focus on the top offenders. Use <strong>pg_stat_statements<\/strong> to find the queries that account for the majority of time. Then improve plans with targeted indexes, rewrite a heavy CTE if needed, or bump <strong>work_mem<\/strong> just for that job. The gains here tend to be outsized. You can use the <a href=\"https:\/\/www.postgresql.org\/docs\/current\/pgstatstatements.html\" rel=\"nofollow noopener\" target=\"_blank\">pg_stat_statements extension reference<\/a> as a quick lookup when you want to remember exactly which views matter.<\/p>\n<p>Fifth, make your changes easy to roll out and roll back. If you\u2019re shipping app builds alongside DB changes, smooth deploys help you gather clean measurements. I\u2019ve shared a repeatable approach to <a href=\"https:\/\/www.dchost.com\/blog\/en\/vpse-sifir-kesinti-ci-cd-nasil-kurulur-rsync-sembolik-surumler-ve-systemd-ile-sicacik-bir-yolculuk\/\">zero\u2011downtime deploys with rsync, symlinks, and systemd<\/a> that keeps your hands calm when you\u2019re also watching dashboards.<\/p>\n<p>Finally, follow the \u201cthree reads\u201d habit after every tweak. Read your app logs, read your database logs, and read your metrics. If you want to give yourself more visibility without drowning in noise, I wrote about <a href=\"https:\/\/www.dchost.com\/blog\/en\/merkezi-loglama-ve-gozlemlenebilirlik-vpste-loki-promtail-grafana-ile-sakin-kalan-bir-zihin\/\">centralized logging with Loki, Promtail, and Grafana<\/a>. The point is simple: when something drifts, you\u2019ll notice early.<\/p>\n<h2 id=\"section-9\"><span id=\"Putting_it_together_a_sample_config_to_spark_a_conversation\">Putting it together: a sample config to spark a conversation<\/span><\/h2>\n<p>Here\u2019s a minimal, opinionated snippet I often adapt for a 4 GB-ish VPS with an app that has mixed read\/write traffic and uses PgBouncer for transaction pooling. 
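<\/p>\n<p>Before adapting anything, I check where the time actually goes. A minimal <strong>pg_stat_statements<\/strong> sketch, assuming the extension is installed and preloaded; the column names are the PostgreSQL 13+ ones (older releases call them total_time and mean_time):<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">-- top 10 statements by total execution time, plus a temp-spill hint\nSELECT left(query, 60)                    AS query_start,\n       calls,\n       round(total_exec_time::numeric, 1) AS total_ms,\n       round(mean_exec_time::numeric, 2)  AS mean_ms,\n       temp_blks_written\nFROM pg_stat_statements\nORDER BY total_exec_time DESC\nLIMIT 10;\n<\/code><\/pre>\n<p>With that flashlight in hand, here\u2019s the config itself. 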
Please treat this as a starting point, not an endpoint.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># postgresql.conf (excerpt)\n\n# connections\nmax_connections = 100                   # actual active backends are kept low by PgBouncer\n\n# memory\nshared_buffers = 1GB\nwork_mem = 16MB\nmaintenance_work_mem = 256MB\neffective_cache_size = 3GB\n\n# WAL and checkpoints\nwal_level = replica\nwal_compression = on\ncheckpoint_timeout = 15min\nmax_wal_size = 2GB\nmin_wal_size = 256MB\nsynchronous_commit = on\n\n# autovacuum\nautovacuum = on\nautovacuum_vacuum_cost_limit = 200     # nudge for busier tables if needed\nautovacuum_naptime = 10s               # small interval; tune carefully for load\n\n# planner hints (SSD-friendly; adjust after observing)\nrandom_page_cost = 1.1\nseq_page_cost = 1.0\n\n# logging (helpful for visibility)\nlog_checkpoints = on\nlog_autovacuum_min_duration = 0\nlog_temp_files = 0                      # log any temp files to catch spills\nlog_min_duration_statement = 500        # ms; adjust to your tolerance\n<\/code><\/pre>\n<p>And a matching PgBouncer sketch:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># pgbouncer.ini (excerpt)\n\n[databases]\nappdb = host=127.0.0.1 port=5432 dbname=appdb\n\n[pgbouncer]\nlisten_addr = 127.0.0.1\nlisten_port = 6432\npool_mode = transaction\nmax_client_conn = 1000\ndefault_pool_size = 20\nmin_pool_size = 5\nreserve_pool_size = 5\nreserve_pool_timeout = 5\nauth_type = md5\nauth_file = \/etc\/pgbouncer\/userlist.txt\nserver_reset_query = DISCARD ALL\nignore_startup_parameters = extra_float_digits\n<\/code><\/pre>\n<p>If you\u2019re hungry for a parameter-by-parameter deep dive, the <a href=\"https:\/\/www.postgresql.org\/docs\/current\/runtime-config-resource.html\" rel=\"nofollow noopener\" target=\"_blank\">official PostgreSQL documentation for resource settings<\/a> complements this nicely. When in doubt, measure and iterate.<\/p>\n<h2 id=\"section-10\"><span id=\"Operational_habits_that_keep_PostgreSQL_happy\">Operational habits that keep PostgreSQL happy<\/span><\/h2>\n<p>Tools and settings are half the story. Habits are the other half. A few that have paid off for me over and over:<\/p>\n<p>First, schedule maintenance on purpose. If you need to rebuild indexes or run heavier VACUUMs, do it during predictable off-peak windows. Bump <strong>maintenance_work_mem<\/strong> during that window and dial it back after. The more intentional you are, the fewer \u201crandom\u201d performance blips you\u2019ll see.<\/p>\n<p>Second, set expectations inside your team. Not every query is meant to be real-time. For analytics-style joins over millions of rows, consider materialized views or pre-aggregations that run on a schedule. Keep the hot path for your users simple and well-indexed, then let background jobs do the heavy lifting.<\/p>\n<p>Third, make logs and metrics \u201cboringly obvious.\u201d I like to log checkpoints and temp files because they reveal stress patterns early. If you notice checkpoints lining up with user complaints, you know exactly where to look. Again, a simple, visible stack is often the difference between swift action and slow guessing.<\/p>\n<p>Fourth, keep deployments smooth. Nothing distorts a performance experiment like a botched rollout. 
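<\/p>\n<p>That goes for config changes too. For most of the knobs in this post, ALTER SYSTEM plus a reload gives you a clean apply-and-revert loop; a short sketch (remember that <strong>shared_buffers<\/strong> still needs a full restart):<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">-- apply a candidate change; it lands in postgresql.auto.conf\nALTER SYSTEM SET work_mem = '24MB';\nSELECT pg_reload_conf();\n\n-- did it take?\nSHOW work_mem;\n\n-- roll back just as cleanly\nALTER SYSTEM RESET work_mem;\nSELECT pg_reload_conf();\n<\/code><\/pre>\n<p>On the application side, the same calm applies. 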
If you want a steady cadence and easy rollbacks, this piece on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vpse-sifir-kesinti-ci-cd-nasil-kurulur-rsync-sembolik-surumler-ve-systemd-ile-sicacik-bir-yolculuk\/\">zero\u2011downtime deploys with rsync, symlinks, and systemd<\/a> fits neatly into the tuning workflow\u2014flip a symlink, watch metrics, and breathe.<\/p>\n<p>Finally, keep a human-readable change log. I\u2019ve lost count of how many times someone asked, \u201cWhen did this start?\u201d and a one-paragraph note about \u201craised work_mem for nightly job; reverted after test\u201d saved the day. It\u2019s not fancy, but it\u2019s as practical as it gets.<\/p>\n<h2 id=\"section-11\"><span id=\"Common_traps_and_how_to_sidestep_them\">Common traps (and how to sidestep them)<\/span><\/h2>\n<p>I\u2019ve walked into a few recurring traps on VPS projects\u2014maybe you\u2019ve seen these too. The first is chasing global fixes for local problems. A single misbehaving query can tempt you to raise <strong>work_mem<\/strong> cluster-wide, which just moves the pain around. Better to fix the query or raise work_mem surgically for that role or session.<\/p>\n<p>The second is \u201cmore connections = more throughput.\u201d With PostgreSQL, that\u2019s not how the story goes, especially on a small VPS. At some point, more connections just means more context switching and cache churn. PgBouncer solves this elegantly; fewer backend connections with transaction pooling often give you exactly the throughput you wanted with lower jitter.<\/p>\n<p>The third is treating WAL as an afterthought. If you\u2019re seeing disk spikes every few minutes, checkpoints are a prime suspect. Give your WAL a more generous runway, switch on <strong>wal_compression<\/strong>, and verify you\u2019re not starving the OS cache. You\u2019ll feel the difference in latency\u2014and your disk will thank you.<\/p>\n<p>The fourth is ignoring autovacuum until bloat shouts. Keep it running, give it reasonable memory during off-hours, and pay attention to hot tables that need gentler thresholds. It\u2019s routine garden care, not emergency landscaping.<\/p>\n<h2 id=\"section-12\"><span id=\"Where_to_go_from_here\">Where to go from here<\/span><\/h2>\n<p>If any single section here sparked an \u201caha,\u201d that\u2019s your next experiment. Nudge shared_buffers and effective_cache_size into a healthier balance for your RAM budget. Give PgBouncer a seat at the table and let PostgreSQL relax. Tame WAL spikes so your disk stops doing sprints. And for your heavy queries, use <strong>pg_stat_statements<\/strong> as your flashlight, not your hammer.<\/p>\n<p>When you need more context, keep these two bookmarks handy: the <a href=\"https:\/\/www.postgresql.org\/docs\/current\/runtime-config-resource.html\" rel=\"nofollow noopener\" target=\"_blank\">resource configuration docs<\/a> and the <a href=\"https:\/\/www.pgbouncer.org\/\" rel=\"nofollow noopener\" target=\"_blank\">PgBouncer documentation<\/a>. They\u2019re short reads with high ROI. And if you want to round out your ops picture, I\u2019ve also shared <a href=\"https:\/\/www.dchost.com\/blog\/en\/merkezi-loglama-ve-gozlemlenebilirlik-vpste-loki-promtail-grafana-ile-sakin-kalan-bir-zihin\/\">a friendly path to centralized logs<\/a> and a practical starter for <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">VPS monitoring<\/a>. 
They make every tuning step easier.<\/p>\n<h2 id=\"section-13\"><span id=\"Wrap-up_a_calm_repeatable_way_to_tune_PostgreSQL_on_a_VPS\">Wrap-up: a calm, repeatable way to tune PostgreSQL on a VPS<\/span><\/h2>\n<p>Tuning PostgreSQL on a VPS is less about secret incantations and more about teamwork between memory, connections, and IO. Start with thoughtful defaults: keep <strong>shared_buffers<\/strong> roomy but not greedy, set <strong>effective_cache_size<\/strong> to a believable planner hint, and treat <strong>work_mem<\/strong> like a scalpel, not a sledgehammer. Smooth out your <strong>WAL<\/strong> and checkpoints so the disk never panics. Put <strong>PgBouncer<\/strong> in front to calm the connection storm. Then measure, iterate, and keep notes.<\/p>\n<p>What I love about this approach is that it scales with your confidence. The first pass makes things obviously better. The second pass makes them feel snappy. And the third tweak\u2014the one you do after watching real traffic for a week\u2014often unlocks a kind of quiet stability that your users feel even if they can\u2019t name it.<\/p>\n<p>If this helped you, I\u2019m glad. Make a small change, watch what happens, and keep going. And hey\u2014before you touch anything risky, take five minutes to ensure you have <a href=\"https:\/\/www.dchost.com\/blog\/en\/restic-ve-borg-ile-s3-uyumlu-uzak-yedekleme-surumleme-sifreleme-ve-saklama-ne-zaman-nasil\/\">reliable offsite backups<\/a>. I want you to sleep well tonight. See you in the next post, and may your queries always find the right index.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>\u0130&ccedil;indekiler1 So there I was, staring at a slow dashboard\u20262 The VPS reality: limited RAM, noisy neighbors, and why it matters3 shared_buffers and effective_cache_size: your first two levers4 work_mem and friends: where sorts, hashes, and temp spills hide5 WAL, checkpoints, and durability: why your disk sometimes feels hot6 PgBouncer: the quiet hero of steady latency7 [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1538,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1537","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1537","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=1537"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1537\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/1538"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=1537"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=1537"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=1537"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}