On a busy PHP application server, using a single PHP‑FPM pool for everything – web requests, admin panel, API calls, queue workers and cron jobs – quickly turns into a bottleneck. A long‑running report or a stuck queue worker can block precious PHP‑FPM children, increase response times and make the whole stack feel unstable. The good news: you can avoid most of this pain by cleanly isolating session traffic and background workers with the tools you already have: PHP‑FPM, Supervisor and systemd.
In this article we’ll walk through a practical pool architecture we use at dchost.com for PHP sites and frameworks like Laravel, Symfony, WordPress and custom apps. We’ll look at why isolation matters, how to design separate PHP‑FPM pools for web vs queue vs CLI, where Supervisor and systemd fit in, and how to choose resource limits so the store checkout stays fast even when heavy jobs are running in the background.
Table of Contents
- 1 Why You Should Isolate PHP Sessions and Queue Workers
- 2 The Building Blocks: PHP‑FPM Pools, Supervisor and systemd
- 3 Session and Worker Isolation: Architecture Patterns
- 4 Designing Web vs Worker Resources and Limits
- 5 Putting It Together: Example Implementation
- 6 Relating This to Per‑Site PHP‑FPM Pool Architecture
- 7 Operational Tips and Common Pitfalls
- 8 Conclusion: Calm PHP Servers Through Thoughtful Isolation
Why You Should Isolate PHP Sessions and Queue Workers
The classic single‑pool problem
Most PHP applications start with the default setup: one PHP‑FPM pool, one user, one set of limits. Nginx or Apache sends all traffic to the same unix socket or TCP port. At first this is fine, but as soon as you add:
- Queued jobs (emails, imports, webhooks, video processing)
- Long‑running reports or exports from the admin panel
- API consumers that can spike traffic at any time
- High‑value sessions (logged‑in customers, admin users)
you hit a frustrating pattern: someone runs a big export or a queue worker misbehaves, PHP‑FPM children get stuck, and suddenly normal page loads wait behind background work. Checkout pages, login forms and simple product pages suffer because they share the same pool with CPU‑hungry jobs.
What isolation gives you in practice
By isolating PHP sessions and queue workers into separate PHP‑FPM pools and managed processes, you gain:
- Predictable response times for web sessions: session‑bearing requests use a dedicated, tightly sized pool that is never starved by queue workers.
- Safe capacity for background jobs: queues get their own pool and processes; if they spike, web users don’t feel it as much.
- Cleaner failure domains: if a queue worker leaks memory, crashes or deadlocks, it doesn’t take the entire site down.
- Security separation: you can run web pools and worker pools under different Unix users, with different filesystem and network access.
- More accurate tuning: each pool has its own pm.max_children, pm.max_requests, timeouts and INI settings tuned for its workload.
We’ve written before about why background jobs matter so much on a VPS; the missing piece for many teams is giving those jobs their own resource “lane” instead of mixing them with interactive traffic.
The Building Blocks: PHP‑FPM Pools, Supervisor and systemd
PHP‑FPM pools as isolation units
At the heart of this architecture are multiple PHP‑FPM pools. Each pool is essentially a mini‑PHP runtime with its own:
- User and group (file permissions, OS‑level isolation)
- Process manager settings (pm, pm.max_children, pm.max_requests)
- PHP INI overrides (memory_limit, max_execution_time, opcache, etc.)
- Listen socket (e.g. /run/php-fpm-web.sock, /run/php-fpm-queue.sock)
In practice, we usually define at least three pools for a serious PHP application:
- web: for normal HTTP requests (frontend, API, maybe admin)
- session‑critical: sometimes a smaller dedicated pool for cart/checkout or login flows
- queue: for worker processes that run CLI entrypoints (Laravel queue workers, custom daemons)
For a deeper dive into tuning pool parameters like pm.max_children and pm.max_requests specifically for high‑traffic PHP apps, you can check our guide on PHP‑FPM settings for WordPress and WooCommerce. The logic applies almost one‑to‑one to Laravel, Symfony and custom frameworks as well.
Supervisor: keeping CLI workers alive
Supervisor is a simple, battle‑tested process manager for long‑running CLI programs. It’s ideal for things like:
- php artisan queue:work or php artisan horizon in Laravel
- Symfony Messenger workers
- Custom PHP daemons (e.g. websocket bridges, importers)
Supervisor handles:
- Automatic restart when a worker exits
- Process count (how many workers per queue)
- Logging stdout/stderr to dedicated log files
Even if you later move to pure systemd units, Supervisor is often the easiest way to start isolating queue workers without changing how your app is built.
systemd: units, slices and timers
On modern Linux distributions, systemd is the init system responsible for services. It gives you:
- Service units: to run PHP workers or scheduler commands as managed daemons
- Resource control: CPU, memory and IO limits using cgroups and slices
- Timers: cron‑like scheduling with better health checks and logging
You can choose either Supervisor or systemd for managing queue workers; both are valid. We tend to use:
- PHP‑FPM for short‑lived HTTP requests
- Supervisor or systemd for long‑running CLI workers
If you’re curious about how systemd timers compare to classic cron for scheduled jobs, we have a separate guide on Cron vs systemd timers and reliable scheduling.
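To make the timer idea concrete, here is a minimal sketch of a service/timer pair for the Laravel scheduler. The unit names and the /var/www/app path are assumptions; adjust them to your own layout:

```ini
# /etc/systemd/system/laravel-scheduler.service (hypothetical name)
[Unit]
Description=Run the Laravel scheduler once

[Service]
Type=oneshot
User=www-data
WorkingDirectory=/var/www/app
ExecStart=/usr/bin/php artisan schedule:run

# /etc/systemd/system/laravel-scheduler.timer (hypothetical name)
[Unit]
Description=Laravel scheduler timer

[Timer]
OnCalendar=minutely
Persistent=true

[Install]
WantedBy=timers.target
```

Running systemctl enable --now laravel-scheduler.timer replaces the classic every-minute cron line, and journalctl -u laravel-scheduler gives you per-run logs for free.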
Session and Worker Isolation: Architecture Patterns
Pattern 1: Separate PHP‑FPM pools for web vs queue
The most impactful, low‑risk step is splitting your PHP‑FPM configuration into at least two pools:
- [web] pool – handles Nginx/Apache PHP requests.
- [queue] pool – used only by CLI workers (if you choose to run workers via FPM, which is rare) or kept as a separate context if you want distinct INI settings.
More commonly, we keep CLI workers as plain CLI PHP (not via FPM), but we still use the concept of “web pool vs worker context” by:
- Running PHP‑FPM for HTTP traffic only
- Running CLI workers via php -d ... artisan queue:work with a dedicated php.ini or environment
Either way, the key is that web sessions and background workers do not share the same pool of FPM children anymore.
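That dedicated worker INI can be a tiny drop-in file. The path below is an assumption (adjust to your PHP version and distro), and the values are starting points rather than recommendations:

```ini
; /etc/php/8.2/cli/conf.d/99-queue-worker.ini (hypothetical path)
; Restart leaky workers before they hurt the whole box
memory_limit = 256M
; CLI workers manage their own lifecycle; no request timeout
max_execution_time = 0
```

Alternatively, pass the same overrides inline per process, e.g. php -d memory_limit=256M /var/www/app/artisan queue:work, if you prefer not to touch global CLI settings.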
Pattern 2: Isolating session‑heavy vs stateless traffic
In many real projects, not all traffic is equal:
- Logged‑out product/category pages can be cached heavily
- Logged‑in dashboard, cart and checkout depend on PHP sessions and per‑user data
- Public API endpoints may be stateless but high‑volume
You can map these to different FPM pools, for example:
- [web_public] – for mostly cached, stateless pages
- [web_session] – for cart/checkout and any route that touches sessions
- [api] – for API requests with their own rate limits and timeouts
Nginx can send different URI patterns to different sockets:
location /checkout {
include fastcgi_params;
fastcgi_pass unix:/run/php-fpm-web_session.sock;
}
location /api/ {
include fastcgi_params;
fastcgi_pass unix:/run/php-fpm-api.sock;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/run/php-fpm-web_public.sock;
}
This prevents a burst of API traffic from eating the same pool that keeps user sessions alive.
Pattern 3: Sessions and cache storage isolation
Pool isolation works even better when combined with smart session and cache storage. For example, you might store sessions in:
- Files (with strict directory permissions per pool)
- Redis (with separate databases or key prefixes)
- Memcached (with namespace separation)
We covered the trade‑offs in detail in our guide on choosing PHP session and cache storage (files vs Redis vs Memcached). In an isolated architecture, it becomes natural to give each pool its own session storage strategy and TTLs.
Designing Web vs Worker Resources and Limits
Different latency expectations
A core principle we try to stick to at dchost.com is: different latency expectations mean different resource pools.
- Web requests: users feel anything above a few hundred milliseconds. For checkout and login, even small spikes are noticeable.
- Queue workers: can often take seconds or even minutes per job, as long as the queue drains within your business SLA.
From this, the tuning approach becomes clearer:
- Give the web pool stricter timeouts, a smaller max_execution_time, and often a higher process count for concurrency.
- Give queue workers more generous execution time, but limit how many can run in parallel to avoid stealing CPU from the web pool.
Example: sizing pools on a 4 vCPU / 8 GB RAM VPS
Suppose you run a Laravel or WooCommerce store on a 4 vCPU / 8 GB RAM VPS from dchost.com. A reasonable starting point might be:
- web_session pool
pm = dynamic
pm.max_children = 12
pm.start_servers = 4
pm.min_spare_servers = 4
pm.max_spare_servers = 8
pm.max_requests = 1000
- web_public pool
pm.max_children = 8 (public pages mostly cached)
- queue workers
- 4–6 workers via Supervisor or systemd (not FPM children)
We cap queue workers so they can’t saturate all 4 vCPUs. On NVMe‑backed plans, like our NVMe VPS options, IO contention is usually low, but we still want margin for database and webserver processes. For a deeper view on right‑sizing CPU/RAM and IO, see our article on choosing VPS specs for WooCommerce, Laravel and Node.js.
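The arithmetic behind numbers like pm.max_children = 12 can be sketched as a quick back-of-the-envelope calculation. Every value below is an assumption you should replace with measurements from your own server (ps --sort=-rss -C php-fpm is a good way to see real child sizes):

```shell
# Rough pm.max_children estimate: RAM left over for PHP,
# divided by the average resident size of one PHP-FPM child.
total_mb=8192       # 8 GB VPS (assumption)
reserved_mb=3072    # headroom for MySQL/Redis/Nginx and the OS (assumption)
avg_child_mb=80     # average RSS per PHP-FPM child for this app (measure it!)
echo $(( (total_mb - reserved_mb) / avg_child_mb ))   # prints 64
```

The result is a ceiling across all pools combined, so split it between web_session, web_public and friends rather than giving the whole budget to one pool.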
Using systemd slices and priorities
If you run queue workers under systemd, you can use slices and cgroups to enforce priorities. For example:
- Keep PHP‑FPM in the default slice (normal priority)
- Place workers in a lower‑priority slice with CPUQuota and IOWeight limits
This ensures that even under heavy queue load, web requests remain responsive. It’s a powerful complement to pool‑level settings.
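A minimal sketch of such a slice, using the 4 vCPU / 8 GB example from above; the slice name and limit values are assumptions to adapt:

```ini
# /etc/systemd/system/php-workers.slice (hypothetical name)
[Slice]
# Workers may use at most 2 of the 4 vCPUs combined
CPUQuota=200%
# Hard memory ceiling for everything in the slice
MemoryMax=2G
# Half the default IO weight of 100: deprioritised under IO pressure
IOWeight=50
```

Worker services then opt in with Slice=php-workers.slice in their [Service] section, and systemd enforces the limits across all of them together instead of per process.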
Putting It Together: Example Implementation
1. Define multiple PHP‑FPM pools
Example /etc/php-fpm.d/web_public.conf:
[web_public]
user = www-data
group = www-data
listen = /run/php-fpm-web_public.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 8
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 1000
php_admin_value[session.save_handler] = redis
php_admin_value[session.save_path] = "tcp://127.0.0.1:6379?database=0"
Example /etc/php-fpm.d/web_session.conf:
[web_session]
user = www-data
group = www-data
listen = /run/php-fpm-web_session.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 12
pm.start_servers = 4
pm.min_spare_servers = 4
pm.max_spare_servers = 8
pm.max_requests = 800
php_admin_value[max_execution_time] = 30
php_admin_value[memory_limit] = 512M
php_admin_value[session.save_handler] = redis
php_admin_value[session.save_path] = "tcp://127.0.0.1:6379?database=1"
Note how we use different Redis databases to isolate sessions between pools. You could also change cookie settings or GC probabilities per pool if needed.
2. Route requests to the right pool via Nginx
Example Nginx configuration snippet:
location /checkout {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/index.php;
fastcgi_pass unix:/run/php-fpm-web_session.sock;
}
location /wp-login.php {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/index.php;
fastcgi_pass unix:/run/php-fpm-web_session.sock;
}
location /api/ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/index.php;
fastcgi_pass unix:/run/php-fpm-web_public.sock;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/run/php-fpm-web_public.sock;
}
In a Laravel application, you would typically route everything to public/index.php but still decide per‑prefix which pool to use.
3. Configure queue workers with Supervisor
Example /etc/supervisor/conf.d/laravel-queue.conf:
[program:laravel-queue]
command=/usr/bin/php /var/www/app/artisan queue:work --sleep=3 --tries=3
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
numprocs=4
user=queueuser
redirect_stderr=true
stdout_logfile=/var/log/laravel-queue.log
stopwaitsecs=600
Key points:
- We run workers as a separate queueuser account (fewer permissions than the web user).
- numprocs is set based on CPU; 4 workers on a 4‑vCPU VPS is usually safe if they’re not all CPU‑bound.
- stopwaitsecs is long enough for jobs to complete during a graceful restart.
4. Or run workers via systemd units
If you prefer systemd, a unit could look like this:
[Unit]
Description=Laravel Queue Worker
After=network.target
[Service]
User=queueuser
Group=queueuser
WorkingDirectory=/var/www/app
ExecStart=/usr/bin/php artisan queue:work --sleep=3 --tries=3
Restart=always
RestartSec=5
# Resource limits
Nice=5
IOSchedulingClass=idle
CPUQuota=60%
[Install]
WantedBy=multi-user.target
Here we use Nice and CPUQuota to gently deprioritise queue workers compared to PHP‑FPM, which keeps interactive traffic snappy.
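The unit above runs a single worker. To mirror Supervisor's numprocs=4 you can turn it into a template unit (note the @ in the filename and the %i instance specifier); this is a sketch under the same path and user assumptions as above:

```ini
# /etc/systemd/system/laravel-queue@.service (template version of the unit above)
[Unit]
Description=Laravel Queue Worker %i
After=network.target

[Service]
User=queueuser
Group=queueuser
WorkingDirectory=/var/www/app
ExecStart=/usr/bin/php artisan queue:work --sleep=3 --tries=3
Restart=always
RestartSec=5
Nice=5
CPUQuota=60%

[Install]
WantedBy=multi-user.target
```

systemctl enable --now laravel-queue@{1..4} then starts four independent workers, each with its own journal and its own restart counter.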
5. Observability: separate logs and metrics
Isolation is only truly useful if you can see what’s happening per pool and per worker. We strongly recommend:
- Separate access/error logs for different Nginx locations (public vs checkout vs API)
- Separate PHP‑FPM slow log files per pool
- Dedicated log files for queue workers (Supervisor or systemd)
- Basic server metrics (CPU, RAM, IO, network) with alerting
If you are not yet monitoring your VPS in a structured way, our guide on VPS monitoring and alerts with Prometheus, Grafana and Uptime Kuma is a good starting point.
Relating This to Per‑Site PHP‑FPM Pool Architecture
Per‑site and per‑role pools together
In many agency or multi‑project setups, you already run one FPM pool per site (or per customer). In that model, you can still apply the same idea, just one level deeper:
- Each site has its own example1_web, example1_session, maybe example1_api pools
- Another site has example2_web, example2_queue, etc.
We’ve written a whole story about this per‑site approach in how we run per‑site Nginx + PHP‑FPM pools without drama. Combining per‑site and per‑role isolation gives you a very clean boundary between customers and between traffic types.
When to move beyond a single server
At some point, even a well‑designed pool architecture on one VPS will hit its limits: queue workloads grow, databases need more IOPS, or uptime requirements push you to redundancy. That’s when it starts to make sense to:
- Move queues, Redis or the database to a separate VPS or dedicated server
- Use GeoDNS / multi‑region setups for latency and redundancy
- Introduce a load balancer and multiple application servers
We’ve covered bigger‑picture hosting decisions for PHP apps in articles like choosing hosting for Laravel, Symfony and custom PHP apps and our guide to GeoDNS and multi‑region hosting architecture.
Operational Tips and Common Pitfalls
Avoiding deadlocks and session contention
Isolating pools does not magically fix all session issues. Watch out for:
- Long‑running requests that hold session_start() locks
- AJAX calls that repeatedly open and close sessions
- Misconfigured session.save_path pointing multiple pools to the same directory without proper separation
If you see requests stuck in php-fpm status pages waiting for session locks, consider:
- Shortening the time spent between session_start() and session_write_close()
- Storing truly large data outside sessions (e.g. Redis cache)
- Using Redis or Memcached for sessions with correct lock and TTL behaviour
Graceful deploys with queue and pool isolation
Pool and worker isolation also makes deploys safer:
- You can drain queue workers (Supervisor/systemd stops) separately from web traffic.
- You can reload PHP‑FPM pools gradually (e.g. php-fpm reload) while keeping workers running.
- You can perform zero‑downtime deployments to a VPS using symlink releases, while separate pools pick up the new code without killing live sessions abruptly.
For more complex Laravel setups, our detailed guide on deploying Laravel on a VPS with Nginx, PHP‑FPM, Horizon and zero‑downtime releases shows exactly how these pieces fit together.
Testing before enabling in production
When you introduce new pools and worker processes, always test on staging first:
- Simulate load on both web and queue side (e.g. using k6, JMeter or Locust)
- Watch PHP‑FPM status pages (per pool) for queue length and slow requests
- Verify sessions behave as expected across login, cart and checkout flows
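To watch those status pages per pool, enable pm.status_path in each pool file (for example pm.status_path = /fpm-status-web_session; the path name is an assumption) and expose it to localhost only. A minimal Nginx sketch:

```nginx
location = /fpm-status-web_session {
    # Status pages leak internals; never expose them publicly
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_pass unix:/run/php-fpm-web_session.sock;
}
```

During a load test, curl 127.0.0.1/fpm-status-web_session from the server shows active processes, listen queue depth and slow request counts for that one pool, which is exactly the signal you need to validate the sizing.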
We’ve written a practical guide on load testing your hosting before traffic spikes that pairs nicely with this kind of architecture change.
Conclusion: Calm PHP Servers Through Thoughtful Isolation
Isolating PHP sessions and queue workers is not about over‑engineering; it’s about making your existing server feel calmer and more predictable. By splitting PHP‑FPM into dedicated pools for session‑heavy routes, stateless traffic and background workers – and by managing workers with Supervisor or systemd – you protect the user experience from noisy neighbours on the same machine.
Once pools are tuned and Nginx routing is in place, everyday operations become easier: logs are cleaner, slow‑log analysis makes more sense, and deploys are less risky. When it’s time to scale beyond a single VPS, the same boundaries you drew between pools and workers become natural boundaries between servers.
At dchost.com, we design our VPS, dedicated and colocation setups with exactly this kind of separation in mind, so your PHP applications can grow without constant firefighting. If you’re planning a new Laravel, WooCommerce or custom PHP deployment, or you want to refactor an existing “one big pool” server into something calmer, you can start with a VPS sized for your workload and apply the patterns in this guide step by step. The result is a stack that feels faster, breaks less, and is much nicer to operate.
