On a VPS, RAM problems rarely start with a dramatic crash. They usually begin as a line on a resource report, a quick check in htop, or a capacity planning discussion for a new feature. One graph trends upward a bit too steadily, swap usage quietly appears, and at some point the Linux Out-of-Memory (OOM) killer decides which process must die. If you manage websites, APIs, e‑commerce stores or SaaS workloads, understanding how RAM, swap and the OOM killer actually work on a VPS is the difference between stable uptime and random failures that are hard to reproduce.
In this guide, we will walk through how Linux memory management behaves on real VPS servers, how to configure swap correctly, what the OOM killer really does, and—most importantly—how to prevent out‑of‑memory errors before they hit production. We will look at PHP, databases, background workers and containers, plus show how to use simple tools and monitoring to turn memory from a mystery into a predictable resource you can plan around.
Table of Contents
- 1 How Linux Memory Really Works on a VPS
- 2 Understanding Swap on a VPS
- 3 The Linux OOM Killer: What It Does and Why
- 4 Preventing Out‑of‑Memory Errors in Real Workloads
- 5 Monitoring and Alerting Before Memory Becomes a Problem
- 6 When to Upgrade RAM or Rethink Your Architecture
- 7 Putting It All Together: A Practical Checklist
- 8 Conclusion: Calm, Predictable Memory Management on Your VPS
How Linux Memory Really Works on a VPS
Linux memory usage on a VPS often looks scary at first glance: RAM is “almost full” even when the server seems idle. That is usually good news. The kernel aggressively uses free memory as cache to speed up disk access. To manage RAM safely, you need to distinguish between real pressure and healthy caching.
Key concepts: RAM, virtual memory, cache and buffers
- Physical RAM: The actual memory available to your VPS. This is the hard limit; exceed it and the kernel may start swapping or invoke the OOM killer.
- Virtual memory: The address space applications see. It includes RAM plus swap. Virtual memory can be larger than physical RAM.
- Page cache: File data cached in RAM. Caching speeds up repeated file and database access and is dropped automatically when memory is needed.
- Buffers: Metadata for block devices (file system structures, etc.). A smaller fraction of memory, but part of the “cached” concept.
Reading free -m without panicking
Run:
free -m
You will see something like:
total used free shared buff/cache available
Mem: 3953 2100 150 90 1700 1500
Swap: 2047 150 1897
Many admins look only at the used and free columns and think “I’m using 2100 MB, only 150 MB free, I’m about to crash”. The line that matters most on modern kernels is available. This is the RAM the kernel can still give to processes without heavy swapping, counting cache that can be reclaimed. In the example above, 1500 MB is still realistically usable, so there is no immediate memory crisis.
Essential tools to inspect memory on a VPS
- htop: Interactive process viewer. Shows per-process memory, swap, and CPU. Press F6 to adjust sort order by RES (resident memory).
- top: Installed almost everywhere. Press M to sort by memory usage.
- vmstat 1: Shows system-level statistics each second. The si and so columns (swap in/out) tell you if the system is actively swapping.
- ps aux --sort=-%mem | head: Quick way to list top memory-consuming processes.
If you want a deeper monitoring setup with charts and alerts, check our guide on monitoring VPS resource usage with htop, iotop, Netdata and Prometheus. It shows how to turn these snapshots into continuous visibility.
Understanding Swap on a VPS
Swap is disk space used as an extension of RAM. It is much slower than real memory, but it gives the kernel more breathing room before it has to kill processes. On modern NVMe‑based storage, swap is far less painful than on HDDs, but it is still orders of magnitude slower than RAM.
What swap is (and what it is not)
- Swap is not performance magic. If your workloads need 8 GB consistently and you only have 4 GB RAM, adding 8 GB swap will not make the server feel like it has 12 GB RAM. It will simply be slow.
- Swap is a safety net. It helps absorb short spikes and background memory usage, and gives you some time to react to leaks or misconfigurations before the OOM killer steps in.
- Swap can reduce crashes. A small swap area can prevent the kernel from killing MySQL or your web server due to a momentary spike.
Checking existing swap
On most Linux VPS distributions, you can inspect swap with:
swapon --show
free -m
If you see no swap configured, it is usually worth adding at least a small swap file unless your provider forbids it at the hypervisor level.
Recommended swap sizes on VPS servers
There is no one‑size‑fits‑all rule, but for general web and app hosting, these ballpark values work well:
| RAM | Suggested swap | Notes |
|---|---|---|
| ≤ 2 GB | 1–2 GB | Small VPS; swap mainly as a buffer, not for sustained use. |
| 2–4 GB | 2–4 GB | Typical single‑site or small multi‑site VPS. |
| 4–8 GB | 2–4 GB | Use swap to absorb peaks; watch for high swap‑in/out. |
| > 8 GB | ~25–50% of RAM | Depends heavily on workload; many databases like some swap but not too much. |
For memory‑sensitive workloads (databases, in‑memory caches), keep swap modest and focus on correct sizing and configuration instead.
Creating a swap file
Typical steps for a 2 GB swap file on a VPS with root access:
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
To make it persistent across reboots, add this line to /etc/fstab:
/swapfile none swap sw 0 0
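Note that on some filesystems (for example btrfs, or older XFS kernels), a swap file created with fallocate may be rejected by swapon with a "holes" error. If that happens, dd is a slower but universally reliable fallback for the same 2 GB file:
dd if=/dev/zero of=/swapfile bs=1M count=2048 status=progress
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
Either way, verify the result with swapon --show and free -m before relying on it.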
Tuning swappiness for better behavior
Swappiness controls how readily the kernel swaps out application (anonymous) memory versus reclaiming page cache. Values range from 0 to 100:
- 10–20: Good for most VPS web workloads. Use RAM more, swap later.
- 60 (default on many distros): Balanced for general desktops but often too aggressive on servers.
- 0–5: Only for very specific cases; can make the kernel reluctant to swap even when it would help performance.
Check current value:
cat /proc/sys/vm/swappiness
Temporarily set it to 10:
sysctl vm.swappiness=10
To make it persistent, add to /etc/sysctl.conf:
vm.swappiness = 10
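Then reload the settings so the change takes effect without a reboot:
sysctl -p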
The Linux OOM Killer: What It Does and Why
The OOM (Out‑Of‑Memory) killer is a safety mechanism in the Linux kernel. When the system runs out of allocatable memory (RAM + usable swap) and cannot satisfy allocations, it must free memory quickly. That is when the OOM killer chooses one or more processes to terminate.
When does the OOM killer trigger?
Typical situations on a VPS:
- A memory leak in an application or background worker slowly consumes RAM until nothing is left.
- Too many PHP‑FPM or Node.js workers are started, each using more memory than expected.
- A database (MySQL/PostgreSQL) is configured for a much larger server than the actual VPS.
- A runaway cron job or script loads a huge dataset into memory.
When the kernel cannot reclaim enough cache or swap to satisfy allocations, it logs an OOM event and starts killing processes.
How the OOM killer decides what to kill
The kernel uses several factors (OOM score, memory usage, privileges) to find the “best” victim. In practice, it often kills:
- The biggest memory consumer (e.g. mysqld or php-fpm), or
- Processes with a raised oom_score_adj, if you have customized it (higher values make a process a more likely victim).
This is why a single misconfigured app can take down a critical service: the OOM killer does not know which process matters most to your business; it just wants memory back.
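You can influence, though not fully control, this choice through oom_score_adj, which ranges from -1000 (never kill) to +1000 (kill first). A minimal sketch using a systemd drop-in to protect a database; the drop-in path and -500 value are illustrative:
# /etc/systemd/system/mysql.service.d/oom.conf
[Service]
OOMScoreAdjust=-500
Conversely, raising the value for a disposable batch job makes it the preferred victim instead of your database.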
Detecting past OOM events
If you suspect the OOM killer ran, check:
- dmesg | grep -iE "out of memory|killed process"
- journalctl -k | grep -i oom (on systemd-based distros)
- /var/log/kern.log or /var/log/messages, depending on distribution.
Typical log snippets look like:
Out of memory: Killed process 1234 (php-fpm) total-vm:1024000kB, anon-rss:512000kB, ...
These logs are your starting point: they tell you which process was killed and how much memory it was using, which is crucial for tuning.
Preventing Out‑of‑Memory Errors in Real Workloads
Once you understand RAM, swap and the OOM killer, the real work is to configure your stack so that normal load never reaches that point. On a VPS, this often comes down to three pillars: right‑sizing processes, using system‑level limits, and good capacity planning.
Right‑sizing PHP‑FPM, Node.js and other application servers
Dynamic applications usually have multiple worker processes. Each worker consumes memory; too many workers cause pressure even if CPU is fine.
PHP‑FPM workers
For PHP sites (WordPress, Laravel, WooCommerce, etc.), PHP‑FPM settings are critical. A common pattern is:
- Estimate average memory per PHP process under load (e.g. 80–150 MB).
- Allocate at most ~70–80% of RAM for PHP workers.
- Set pm.max_children so total PHP memory stays within that budget.
We explain these calculations in detail in our article on choosing the right PHP memory_limit and related PHP settings; pairing those PHP limits with sane pm.max_children is one of the most effective ways to prevent OOMs on a PHP‑heavy VPS.
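As a quick sketch, you can measure your actual per-worker footprint and derive the budget from it (the 4 GB and 120 MB figures below are illustrative, not recommendations):
# Average resident memory per php-fpm worker, in MB
ps --no-headers -o rss -C php-fpm | awk '{sum+=$1; n++} END {if (n) printf "%.0f MB avg over %d workers\n", sum/n/1024, n}'
# Budget example: 4096 MB RAM x 0.75 = ~3072 MB for PHP
# At ~120 MB per worker: pm.max_children = 3072 / 120 = ~25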
Node.js and similar runtimes
- Prefer a smaller number of well‑utilised Node workers rather than many idle ones.
- For heavy workloads, consider setting --max-old-space-size to prevent a single process from consuming the whole server (see the example below).
- Use a process manager (PM2, systemd) with restart policies, but fix root causes instead of auto-restarting infinite loops.
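For example, to cap a Node.js process at roughly a 512 MB V8 heap (the value is in megabytes, and server.js stands in for your actual entry point):
node --max-old-space-size=512 server.js
Keep in mind this limits only the V8 old-generation heap; actual RSS will be somewhat higher due to buffers, sockets and native memory.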
Databases: MySQL/PostgreSQL memory budgets
A frequent cause of VPS OOM events is copying configuration from a blog or from a much larger dedicated server. On a 2–4 GB RAM VPS, settings like innodb_buffer_pool_size=4G simply cannot fit: the buffer pool alone would exceed total RAM and push the server straight into swapping and OOM kills.
- Reserve a clear portion of RAM for the database (e.g. 30–50% on a single‑VPS stack).
- Size innodb_buffer_pool_size (MySQL/MariaDB) or shared_buffers (PostgreSQL) accordingly.
- Avoid very large per-connection buffers; they multiply quickly with concurrent clients.
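As a hedged illustration for a 4 GB VPS running both the application and MySQL/MariaDB on one box, a memory budget might look like this (the values are starting points to measure against, not universal truths):
# /etc/mysql/conf.d/memory.cnf (path varies by distribution)
[mysqld]
innodb_buffer_pool_size = 1G    # ~25-30% of RAM when app and DB share the box
max_connections = 100           # per-connection buffers multiply by this number
tmp_table_size = 32M
max_heap_table_size = 32M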
For e‑commerce and larger catalog sites, see our deeper dives such as MySQL indexing and query optimization for WooCommerce and our friendly VPS playbook for PostgreSQL performance. Tuning queries and indexes often reduces memory pressure more than simply adding RAM.
Using systemd and cgroups to fence off memory hogs
On modern Linux distributions with systemd, you can apply cgroup‑based memory limits per service. This is extremely helpful on a VPS where one component (e.g. a batch worker) should never be allowed to consume the entire server.
Example for a background worker unit /etc/systemd/system/worker.service:
[Service]
ExecStart=/usr/bin/php /var/www/app/artisan queue:work
MemoryHigh=384M
MemoryMax=512M
OOMPolicy=kill
Restart=on-failure
- MemoryHigh: Soft limit; the kernel will try to reclaim memory, slowing the service before things get critical.
- MemoryMax: Hard limit; the process cannot use more than this. If it tries, it will be killed inside the cgroup rather than taking the whole VPS down.
- OOMPolicy: Decide how systemd reacts when a process in the service is OOM-killed (continue, stop or kill); combine it with Restart=on-failure if you want the worker to come back automatically.
This approach isolates risky components and gives you predictable behavior during spikes or bugs.
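After adding or editing the unit, reload systemd and confirm the limits are live:
systemctl daemon-reload
systemctl restart worker.service
systemctl show worker.service -p MemoryHigh -p MemoryMax -p MemoryCurrent
MemoryCurrent shows the cgroup's live usage, which is handy when deciding whether your MemoryHigh threshold is realistic.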
Application‑level limits: PHP, workers and queues
Even with system‑level protections, it is smart to put limits inside the application stack:
- Use an appropriate memory_limit in php.ini so a single PHP script cannot allocate gigabytes.
- In queue workers (Laravel, Symfony, custom code), periodically restart workers after N jobs to mitigate leaks.
- For image manipulation or reporting jobs, explicitly cap the size of input files and the complexity of operations.
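A brief sketch of both layers; the 256M and 500 values are illustrative, and the Laravel --max-jobs flag exists only in recent versions, so check yours:
; php.ini - cap per-script allocations
memory_limit = 256M
# Recycle each Laravel queue worker after 500 jobs to contain slow leaks
php artisan queue:work --max-jobs=500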
Combining language‑level limits with systemd cgroups gives you several layers of defenses against accidental OOMs.
Security and abuse scenarios
Untrusted input can also cause memory problems: huge XML or JSON payloads, unbounded file uploads, or intentionally expensive queries. Pairing good RAM practices with hardening (firewalls, rate limiting, WAF) reduces the chance that an attacker can push your VPS into OOM territory. Our VPS security hardening checklist covers many of these defensive layers.
Monitoring and Alerting Before Memory Becomes a Problem
The most reliable way to avoid the OOM killer is to see memory trends early and react before hitting the wall. That means going beyond occasional htop checks and setting up at least basic monitoring and alerts.
Simple baseline checks
Start with a regular habit:
- Check free -m and htop during peak traffic.
- Track which processes consistently use the most RAM.
- Note swap usage and whether vmstat shows frequent swap in/out activity.
If you see swap usage slowly grow and never return to near‑zero, or if available memory on free -m steadily declines across days, that is a sign to investigate before an OOM event.
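A quick spot check takes seconds:
vmstat 1 5
Watch the si and so columns: zeros or occasional small blips are normal, while sustained non-zero values across many samples mean the server is actively thrashing and an OOM event may not be far away.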
Setting up real monitoring with alerts
For production VPS workloads, metrics + alerts are essential. With tools like Prometheus, Grafana and Uptime Kuma, you can:
- Graph RAM usage (total, used, cache, available) over time.
- Track per‑process memory for key services (PHP‑FPM, MySQL, Redis, etc.).
- Set thresholds—for example “alert if RAM usage > 80% for 15 minutes” or “alert if swap usage > 512 MB”.
If you want a practical walk‑through, see our guide on VPS monitoring and alerts with Prometheus, Grafana and Uptime Kuma. Once configured, you will receive early warnings long before the OOM killer needs to step in.
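As a sketch, the "RAM above 80% for 15 minutes" threshold expressed as a Prometheus alerting rule over node_exporter metrics might look like this (the rule name and labels are illustrative):
# alert-rules.yml
groups:
  - name: memory-alerts
    rules:
      - alert: HighMemoryUsage
        expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.8
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "RAM usage above 80% for 15 minutes on {{ $labels.instance }}"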
What to monitor specifically for OOM prevention
- Memory utilisation: Use the “available” metric, not just “free”.
- Swap usage and swap‑in/out rate: Small, occasional swap use is fine; constant high swap I/O is a red flag.
- Per-service memory: Especially mysqld, php-fpm, redis-server, node processes and background workers.
- OOM events: Scrape kernel logs for “Out of memory” and alert when they appear.
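Even a simple scheduled check helps for the last point; this one-liner counts kernel OOM kills since the last boot (wire its output into cron or your monitoring agent as you prefer):
journalctl -k -b | grep -ci "out of memory"
A non-zero count means the OOM killer has already fired and the logs deserve a closer look.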
When to Upgrade RAM or Rethink Your Architecture
Even with perfect tuning, there is a point where your workload simply needs more RAM than a given VPS can offer. The key is to recognise this early, using data rather than guessing.
Signals that it is time to add RAM
- Memory usage regularly stays above 75–80% even after optimisations.
- Swap usage grows during peak and does not return to near zero afterwards.
- You have already tuned PHP‑FPM, databases and workers, but OOM events still occur under legitimate load.
- Performance testing (e.g. with k6, JMeter or Locust) shows memory saturation before CPU or I/O become bottlenecks.
For realistic load simulations, our article on load testing hosting with k6, JMeter and Locust explains how to find such limits in a controlled way instead of discovering them in production.
Rethinking architecture, not just RAM size
Sometimes the right move is to change how you deploy rather than endlessly scaling RAM on one VPS:
- Separate database and application servers when both are competing heavily for memory.
- Introduce dedicated cache layers (Redis, Memcached) to offload repeated queries and session storage.
- Move heavy analytics, reporting or image processing jobs to a separate background VPS.
At dchost.com we see many customers start with a single VPS and gradually evolve towards more specialised machines or even dedicated servers and colocation as their workloads grow. The same memory management principles still apply; you simply have more RAM to distribute.
Putting It All Together: A Practical Checklist
To make this concrete, here is a pragmatic checklist you can apply to any Linux VPS:
- Get visibility
  - Install htop, vmstat, and a basic monitoring stack if you do not have one.
  - Measure typical memory usage during quiet and peak times.
- Enable and size swap safely
  - Create a swap file (1–4 GB for most small/mid VPS), unless your provider already manages it.
  - Set vm.swappiness to 10–20 to prefer RAM while keeping swap as a safety net.
- Tune key services
  - Set realistic PHP-FPM pm.max_children and memory_limit.
  - Adjust MySQL/PostgreSQL buffer sizes to fit your RAM budget.
  - Cap Node.js and other workers so their total memory fits within 60–70% of RAM.
- Fence off risky workloads
  - Use systemd MemoryMax and MemoryHigh for batch jobs and workers.
  - Restart long-running workers periodically to avoid leaks.
- Set alerts for early warning
  - Alert if RAM usage stays above 80% for more than 10–15 minutes.
  - Alert if swap usage exceeds a threshold (e.g. 25–30% of total swap).
  - Alert on any kernel OOM messages.
- Review regularly
  - Re-evaluate settings after major code deployments or traffic increases.
  - Periodically sanity-check resource allocations as part of your regular maintenance, alongside backups and security updates. Our guide on what to do in the first 24 hours on a new VPS is a good baseline.
Conclusion: Calm, Predictable Memory Management on Your VPS
Reliable VPS hosting is not just about CPU cores and NVMe disks; it is about understanding how your workloads use memory and giving the Linux kernel the right conditions to do its job. Once you know what “used”, “available”, cache and swap really mean, the memory graph on your monitoring dashboard stops being a source of anxiety and becomes a planning tool. The OOM killer is no longer a mysterious villain—it is a last‑resort mechanism you rarely see because your stack is sized and tuned with intent.
If you take away one idea, let it be this: combine realistic RAM sizing, a modest swap safety net, sensible service limits and simple alerts. That combination prevents almost all surprise out‑of‑memory crashes we encounter in practice. As your projects grow, our team at dchost.com can help you move from a single VPS to larger VPS plans, dedicated servers or colocation while preserving the same calm, predictable memory behavior. With a bit of upfront tuning and the right platform underneath, your applications can run for months without a single OOM event—and that is exactly the kind of boring, stable infrastructure you want.
