{
“title”: “Background Jobs and Queue Management on a VPS: Laravel Queues, Supervisor, systemd and PM2 Explained”,
“content”: “
When you move from a simple website to a real web application on a VPS, background jobs quickly stop being a nice-to-have and become essential. Order confirmation emails, invoice generation, video processing, search indexing, webhooks, data imports, push notifications – all of these are much more reliable and faster when handled in the background instead of inside the main HTTP request. On a virtual private server, you control the whole environment, which means you’re also responsible for running and supervising those background workers correctly.
In this article we’ll walk through how to run background jobs and manage queues on a VPS in a production-friendly way. We’ll focus on Laravel queues (but the ideas apply to any PHP framework), and look at how to keep workers running using Supervisor, systemd and PM2. We’ll also talk about where each tool shines, how to avoid common pitfalls like stuck queues or zombie workers, and how we approach these setups on dchost.com VPS servers for our own and our customers’ projects.
On shared hosting, you’re often limited to basic cron jobs and small scripts. A VPS opens the door to proper architecture: queues, schedulers, separate workers, and multiple services. But with that freedom comes responsibility: if a worker dies at 03:00, there’s no hosting control panel quietly restarting it for you – you need a process manager.
Background jobs give you several concrete benefits:
- Faster responses for users: Your API or web app can return a success message immediately while the heavy work continues in the background.
- Higher reliability: If a job fails, it can be retried without losing data or blocking the user.
- Better resource usage: You can control how many workers you run, how much CPU/RAM they consume, and schedule heavy jobs for off-peak hours.
- Scalability: As your load grows, you scale workers almost independently of your web layer.
If you’re still deciding between shared hosting and a VPS for your Laravel or PHP app, it’s worth reading our detailed comparison on when Laravel really needs a VPS for queues, schedulers and cache. Once you’re on a VPS, the rest of this article will help you put those queues on solid rails.
Table of Contents
- 1 Laravel Queues on a VPS: Core Concepts
- 2 Option 1: Running Laravel Queues with Supervisor
- 3 Option 2: Using systemd Services for Queue Workers
- 4 Option 3: PM2 for Node.js and Mixed Stacks
- 5 Horizon and Advanced Laravel Queue Management
- 6 Monitoring, Scaling and Troubleshooting Queues on a VPS
- 7 Choosing Between Supervisor, systemd and PM2 on a dchost.com VPS
- 8 Putting It All Together on Your dchost.com VPS
Laravel Queues on a VPS: Core Concepts
Before we dive into Supervisor, systemd or PM2, it’s important to be clear on what Laravel itself is doing. Laravel provides three main pieces in the queue story:
- Queue backends: Database, Redis, Beanstalkd, SQS, RabbitMQ (via packages), etc.
- Queue workers: Long-running PHP processes that continuously pull jobs from the backend and execute them.
- Horizon (optional): A nice dashboard and supervisor for Redis queues, but it still needs a process manager underneath.
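To make this concrete before we get to process managers, here is a minimal sketch of how a job usually reaches a queue from application code. The SendInvoiceEmail class and its orderId property are hypothetical examples, not taken from any specific project:
<?php
// app/Jobs/SendInvoiceEmail.php – hypothetical example job
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class SendInvoiceEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public int $orderId) {}

    public function handle(): void
    {
        // Render and send the invoice email here; this runs inside a worker
        // process, not inside the HTTP request that dispatched the job.
    }
}

// Somewhere in a controller or service:
SendInvoiceEmail::dispatch($order->id)->onQueue('default');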
The usual command to run a worker looks like this:
php artisan queue:work --queue=default --tries=3 --sleep=1 --max-time=3600
Or for Horizon:
php artisan horizon
These commands are long-running. If you just start them in an SSH session and close the terminal, they’ll exit. If PHP crashes, they stop. That’s why we need a process manager that will:
- Start the workers on boot
- Restart them on failure
- Optionally, limit memory/CPU or number of restarts
- Provide logs or integrate with your logging system
For a complete Laravel production stack beyond just queues, we’ve documented our typical approach in our guide on deploying Laravel on a VPS with Nginx, PHP-FPM, Horizon and zero-downtime releases. Here we’ll zoom in specifically on the background job side.
Option 1: Running Laravel Queues with Supervisor
Supervisor is a classic process control system for UNIX-like operating systems. It’s widely used in PHP/Laravel communities because it’s simple, battle-tested, and available in most distribution repositories.
When Supervisor Makes Sense
Supervisor is a great fit when:
- You’re on a typical Linux VPS (Ubuntu, Debian, AlmaLinux, Rocky Linux, etc.).
- You mostly run PHP workers (Laravel queue workers, Horizon, scheduled consumers).
- You want a simple, readable config file per queue without learning the full depth of systemd.
On many dchost.com VPS deployments, we still use Supervisor for single-app Laravel servers because it’s straightforward for developers and ops teams alike.
Basic Supervisor Setup for Laravel
Assuming a typical Ubuntu or Debian VPS:
- Install Supervisor:
apt update && apt install -y supervisor
- Create a program config for your queue workers, for example:
/etc/supervisor/conf.d/laravel-queue.conf
A common configuration looks like this:
[program:laravel-queue]
process_name=%(program_name)s_%(process_num)02d
command=/usr/bin/php /var/www/app/artisan queue:work redis --queue=default --sleep=1 --tries=3 --max-time=3600
autostart=true
autorestart=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/log/laravel-queue.log
stopwaitsecs=3600
Then reload and start:
supervisorctl reread
supervisorctl update
supervisorctl start laravel-queue:*
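A quick way to confirm the workers actually came up (the program name and log path match the example config above):
supervisorctl status laravel-queue:*
tail -n 50 /var/log/laravel-queue.log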
Key Settings You Should Think About
- numprocs: How many worker processes to run. On a small 2 vCPU VPS, 2–4 workers per queue is often plenty. Start conservative and increase after monitoring.
- stopwaitsecs: Laravel workers may be in the middle of a job. Giving them enough time to finish during deploys or restarts prevents job duplication or partial runs.
- stdout_logfile: Logs can grow quickly. Combine this with logrotate (we discuss this in detail in our guide on avoiding “No space left on device” on a VPS using logrotate); a minimal rotation sketch follows this list.
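As mentioned in the list above, here is a minimal logrotate sketch for that worker log. The path mirrors the example config, the weekly/8-rotation policy is just one reasonable assumption, and copytruncate is used because Supervisor keeps the log file handle open:
/etc/logrotate.d/laravel-queue
/var/log/laravel-queue.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    copytruncate
}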
Pros and Cons of Supervisor
- Pros:
- Very easy to read and edit configs
- Great for teams coming from shared hosting or panel-based environments
- Good control over multiple processes and groups
- Cons:
- One more daemon to manage on the VPS
- Less integrated with the OS than systemd (for resource limits, dependencies, etc.)
- Another layer of logging to keep an eye on
Option 2: Using systemd Services for Queue Workers
On modern Linux distributions, systemd is the native init system and process supervisor. Instead of adding a separate tool, you can let systemd manage your Laravel workers directly. This approach is increasingly common in newer projects and is often our default on dchost.com VPS builds where teams are comfortable with systemd semantics.
Why systemd Is Attractive on a VPS
Systemd brings several advantages:
- No extra dependency: It’s already PID 1 on most distributions.
- Strong restart policies: Built-in support for restart throttling, delays, and failure tracking.
- Resource controls: You can limit memory, CPU, number of processes, etc. per service.
- Unified logs via journal: All logs can go through journald, which simplifies centralized logging.
If you’re curious about scheduling with systemd (as an alternative to cron) for periodic jobs, we’ve covered it in our article on Cron vs systemd timers and when to choose each.
Creating a systemd Unit for Laravel Queues
Let’s create a simple .service file for a Laravel queue worker:
/etc/systemd/system/laravel-queue.service
[Unit]
Description=Laravel Queue Worker
After=network.target

[Service]
User=www-data
Group=www-data
WorkingDirectory=/var/www/app
ExecStart=/usr/bin/php artisan queue:work redis --queue=default --sleep=1 --tries=3 --max-time=3600
Restart=always
RestartSec=5
StartLimitBurst=10
StartLimitIntervalSec=60

# Resource controls (tune for your VPS size)
MemoryMax=512M
CPUQuota=150%

# Ensure environment variables are loaded (if using .env only)
Environment=APP_ENV=production
Environment=APP_DEBUG=false

[Install]
WantedBy=multi-user.target
Then reload and enable:
systemctl daemon-reload
systemctl enable --now laravel-queue.service
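To confirm the service is healthy and follow its output through the journal (unit name as in the example above):
systemctl status laravel-queue.service
journalctl -u laravel-queue.service -f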
Scaling with systemd Templates
With systemd you can also create template units, where one file controls multiple instances. For example:
/etc/systemd/system/laravel-queue@.service
[Unit]
Description=Laravel Queue Worker %i
After=network.target

[Service]
User=www-data
Group=www-data
WorkingDirectory=/var/www/app
ExecStart=/usr/bin/php artisan queue:work redis --queue=%i --sleep=1 --tries=3 --max-time=3600
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
You can then start one worker per queue like:
systemctl enable --now laravel-queue@default.service
systemctl enable --now laravel-queue@high.service
systemctl enable --now laravel-queue@low.service
This keeps configuration DRY and readable, especially on multi-queue setups.
Pros and Cons of systemd
- Pros:
- No extra process manager to install
- Powerful restart and resource control features
- Good integration with OS boot and dependencies
- Works equally well for PHP, Node.js, workers, timers, etc.
- Cons:
- Unit file syntax and semantics can feel complex at first
- Developers less familiar with Linux internals may find troubleshooting harder
- Log access via journalctl needs a little orientation
Option 3: PM2 for Node.js and Mixed Stacks
On many modern stacks we see a mix of technologies: Laravel for the backend API and panel, Node.js for real-time features or background workers, or a complete Node.js-based queue consumer reading from Redis/RabbitMQ and talking to a PHP API.
In such cases, PM2 is often the preferred process manager for the Node.js side. PM2 is a production-grade process manager with clustering support, zero-downtime restarts, and ecosystem config files for describing your processes.
PM2 Basics
A typical PM2 setup on a VPS looks like:
- Install PM2 globally:
npm install -g pm2
- Start your worker or app:
pm2 start worker.js --name node-queue-worker
- Generate a startup script so PM2 restarts on boot:
pm2 startup systemd
Follow the printed instructions, then:
pm2 save
From that point, PM2 remembers your processes and restores them after reboots.
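If you prefer to keep those settings in version control instead of ad-hoc commands, PM2 can read an ecosystem file. A minimal sketch, where the file name, script path and memory limit are illustrative assumptions:
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'node-queue-worker',
      script: './worker.js',
      exec_mode: 'fork',          // switch to 'cluster' with instances > 1 for multi-core scaling
      max_memory_restart: '300M', // restart the worker if it grows past this limit
      env: {
        NODE_ENV: 'production'
      }
    }
  ]
};
You would then run pm2 start ecosystem.config.js followed by pm2 save, and the file documents your worker setup for the whole team.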
PM2 with Laravel + Node.js
If your background infrastructure looks like this:
- Laravel producing jobs into Redis or another broker
- Laravel workers consuming some queues
- Node.js workers consuming other queues (e.g. WebSocket notifications, video pipelines)
You can comfortably mix approaches:
- Use Supervisor or systemd for php artisan queue:work and php artisan horizon.
- Use PM2 for Node.js-based workers, WebSocket servers, or API gateways.
We go deeper specifically into Node.js production setups (including PM2 vs systemd and Nginx fronting) in our article on how we host Node.js in production with PM2, systemd, Nginx and zero-downtime deploys.
Pros and Cons of PM2
- Pros:
- Designed for Node.js from day one
- Cluster mode for multi-core usage without much effort
- Pretty CLI and dashboard, ecosystem configs, log management
- Cons:
- Another layer on top of systemd (you often still use systemd to keep PM2 alive)
- Less natural if you only run PHP/Laravel and nothing Node-based
- Some orgs prefer to stay 100% on systemd for consistency
Horizon and Advanced Laravel Queue Management
If you use Redis as your queue backend, Laravel Horizon is a great way to manage multiple queues, prioritize workloads, and visualize what’s going on. But it’s important to understand that Horizon is not a replacement for Supervisor or systemd – it still needs a process manager underneath.
How Horizon Fits into the Picture
Horizon adds several things:
- A dashboard showing queue throughput, failed jobs, processing time, etc.
- Named queues and supervisors with different worker counts.
- Tags and metrics for jobs.
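Those named supervisors and worker counts live in config/horizon.php. A trimmed sketch of the relevant block, where the queue names and process limits are illustrative values rather than recommendations:
'environments' => [
    'production' => [
        'supervisor-default' => [
            'connection' => 'redis',
            'queue' => ['high', 'default', 'low'],
            'balance' => 'auto',
            'minProcesses' => 1,
            'maxProcesses' => 4,
            'tries' => 3,
        ],
    ],
],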
Operationally, you run it as a long-living process, for example via systemd:
[Unit]
Description=Laravel Horizon
After=network.target

[Service]
User=www-data
Group=www-data
WorkingDirectory=/var/www/app
ExecStart=/usr/bin/php artisan horizon
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
On a small or medium dchost.com VPS, a common pattern is:
- One horizon.service for Redis-based queues
- A few classic queue workers (Supervisor or systemd) for special queues that must not mix with others or need a different PHP version
For more Laravel-specific tuning (FPM pools, OPcache, Octane, Redis, Horizon, etc.), we collected our usual production checklist in our Laravel production optimization guide for VPS servers.
Monitoring, Scaling and Troubleshooting Queues on a VPS
Setting up workers is only half the story. The other half is making sure they’re actually running, not stuck, and sized correctly for your VPS resources.
Monitoring Worker Health
On a VPS, you usually don’t have a fully managed monitoring stack out of the box, so it’s worth investing a little time here. At minimum, you want to know:
- Are the worker processes running?
- Is the queue length growing uncontrollably?
- Are jobs failing more than usual?
- Is CPU, RAM or disk IO maxed out?
There are three layers you can combine:
- OS-level checks: Simple systemctl status, supervisorctl status, or pm2 list in scripts or external monitors (a tiny health-check sketch follows this list).
- Application-level checks: Laravel Horizon metrics, or custom health endpoints where you return queue lengths and last job times.
- Full monitoring stack: Tools like Prometheus + Grafana + Uptime Kuma as described in our guide on setting up VPS monitoring and alerts.
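As a tiny example of the OS-level layer mentioned above, a cron-friendly liveness check for a systemd-managed worker could look like this; the unit name and log path are assumptions, and the echo line is where you would hook real alerting:
#!/usr/bin/env bash
# check-queue-worker.sh – minimal liveness check for the queue worker service
if ! systemctl is-active --quiet laravel-queue.service; then
    echo "$(date '+%F %T') laravel-queue.service is not active" >> /var/log/queue-healthcheck.log
    # Replace this with a mail, Slack webhook or monitoring API call
fi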
Capacity Planning for Workers
Workers share the same CPU and RAM as your web stack, database (if on the same server), and cache. On a small VPS, it’s easy to overshoot with too many workers and starve everything else.
Some practical rules of thumb:
- On a 2 vCPU / 4 GB RAM VPS hosting Laravel + MySQL + Redis:
- Start with 2–4 workers for general jobs.
- Run heavy exports or reports on a dedicated queue with 1–2 workers.
- Monitor CPU load; if average stays under 60–70% during peak, you’re fine.
- Use priority queues instead of more and more workers. For example: high, default, low (see the worker command sketch after this list).
- For CPU-bound jobs (image processing, PDF generation), consider an additional VPS just for workers once they start impacting the main site.
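The worker command sketch referenced above: a single worker can drain several queues in priority order, only taking jobs from default when high is empty, and from low when both are empty:
php artisan queue:work redis --queue=high,default,low --sleep=1 --tries=3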
Common Queue Problems and How to Fix Them
- Jobs stuck in “reserved” state: Often caused by workers dying mid-job. Ensure your process manager restarts them and consider --max-time and --max-jobs to recycle workers periodically.
- Database queue locking issues: If you use the database driver for high-volume queues, row locks can become a bottleneck. Moving to Redis is usually a better option on a VPS.
- “Out of memory” errors: Memory-hungry jobs (PDF, image, big ORM queries) can bloat long-lived workers. Recycling with --max-jobs or --max-time and optimizing the job code itself helps.
- Deploys causing duplicate runs: If deploy scripts brutally kill workers, jobs can re-run on restart. Use graceful stops (Supervisor’s stopwaitsecs, systemctl stop and signals) and design idempotent jobs; see the restart sketch after this list.
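The restart sketch referenced above: rather than killing processes during a deploy, signal them to finish their current job and exit, and let Supervisor or systemd bring them back up with the new code:
php artisan queue:restart      # classic queue workers finish the current job, then exit and get restarted
php artisan horizon:terminate  # same idea for Horizon: graceful shutdown, then restart by the process manager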
Choosing Between Supervisor, systemd and PM2 on a dchost.com VPS
So which approach should you pick for your own VPS at dchost.com? In practice, we usually follow a few simple patterns based on project type and team experience.
Scenario 1: Pure Laravel App, Small to Medium Scale
For a classic Laravel application (API or web) on a single VPS:
- Queue backend: Redis if possible; database driver only for very low volume or if Redis is not yet in the picture.
- Process manager:
- If your team is new to Linux internals: use Supervisor for queue workers and optional Horizon.
- If your team is comfortable with systemd: use systemd services (and optionally templates) for workers and Horizon.
- Monitoring: At least systemctl status or supervisorctl status checks in your health playbook, and basic alerts from an external monitor.
Scenario 2: Laravel + Node.js Mixed Stack
For stacks with both Laravel and Node.js:
- Run Laravel workers via Supervisor or systemd.
- Run Node.js workers and WebSocket servers via PM2, kept alive by a systemd unit for PM2 itself.
- Use Redis (or another broker) as a common queueing layer, but respect language boundaries for the process managers.
Scenario 3: Multiple Apps on One VPS
If you host multiple Laravel apps on a single dchost.com VPS, organization becomes more important:
- Use clear naming in your process configs: project1-queue, project2-horizon, etc.
- Separate logs per project, and use log rotation aggressively.
- Consider separate systemd units or Supervisor configs per project instead of one big, shared worker file.
- Keep an eye on aggregated CPU/RAM usage; at some point it’s cleaner to move busy projects to their own VPS.
Security and Stability Considerations
Background jobs are powerful – they often talk to payment gateways, third-party APIs, external storage and internal systems. Make sure they run on a hardened VPS with:
- Strong SSH and firewall setup
- Regular updates and kernel patches
- Off-site and versioned backups in case of errors or ransomware
We’ve summarized a practical, non-dramatic hardening checklist for new servers in our article on how to secure a VPS server the calm way. Combine that with a disciplined queue setup and you’ll have a resilient backend.
Putting It All Together on Your dchost.com VPS
Background jobs and queue workers are where a simple VPS becomes a capable application platform. Instead of letting users wait for PDFs to generate, emails to send or webhooks to call, you push all that into queues and let dedicated workers handle the heavy lifting. On a dchost.com VPS, you control the OS, the process manager, and the stack – which means you can shape exactly how reliable and scalable your background processing pipeline will be.
For most Laravel teams, the path is clear: start with Redis queues and a handful of workers managed by Supervisor or systemd. As your requirements grow, add Horizon for visibility, PM2 for any Node.js sidecars, and proper monitoring with tools like Prometheus and Grafana following our VPS monitoring and alerts guide. When the load eventually outgrows a single server, it’s straightforward to move heavy queues or Node.js workers to additional dchost.com VPS or dedicated servers.
If you’re planning a new application or considering migrating an existing one, our team at dchost.com can help you choose the right VPS size, storage and architecture for your queue-heavy workloads, whether that’s a single Laravel project with Horizon or a multi-service stack mixing PHP, Node.js, and separate cache/databases. Start with a clean, secure VPS foundation, put your background jobs under a solid process manager, and you’ll have the calm, predictable backend you need to build on confidently.
”,
“focus_keyword”: “background jobs and queue management on a VPS”,
“meta_description”: “Learn how to run background jobs and queues on a VPS using Laravel queues, Supervisor, systemd and PM2, with practical setup tips, tuning and monitoring.”,
“faqs”: [
{
“question”: “Should I use Supervisor or systemd for Laravel queues on my VPS?”,
“answer”: “Both Supervisor and systemd work well for managing Laravel queues on a VPS, and the choice mostly comes down to your team’s familiarity. Supervisor is very popular in the Laravel world because its configuration files are simple and easy to understand, making it a great fit if you’re transitioning from shared hosting or panel-based environments. Systemd, on the other hand, is built into modern Linux distributions and offers powerful restart policies and resource controls without adding another daemon. On many dchost.com VPS deployments we use systemd by default for new projects and Supervisor where teams are already comfortable with it.”
},
{
“question”: “How many queue workers should I run on a small VPS?”,
“answer”: “The right number of workers depends on your VPS resources and how heavy your jobs are, but it’s better to start small and scale up. On a typical 2 vCPU / 4 GB RAM VPS running Laravel, MySQL and Redis, 2–4 general-purpose workers are usually enough at the beginning. If you have very heavy jobs like PDF or image processing, consider putting them on a dedicated queue with 1–2 workers so they don’t block everything else. Monitor CPU and RAM usage: if average CPU stays under 60–70% during peak and queues don’t grow uncontrollably, your worker count is in a good range.”
},
{
“question”: “Do I still need Horizon if I already use Supervisor or systemd?”,
“answer”: “Horizon and Supervisor/systemd solve different problems. Horizon gives you a dashboard, metrics, and high-level management for Redis-based Laravel queues, but it still needs a process manager to keep it and its workers running. Supervisor or systemd start, stop and restart the underlying processes at the OS level. On a production VPS at dchost.com, a common pattern is to run Horizon under a systemd or Supervisor service and use Horizon to manage worker counts and priorities, while relying on the process manager to ensure Horizon itself is always alive, restarts on failure, and starts automatically on boot.”
},
{
“question”: “When should I introduce PM2 for queue management?”,
“answer”: “You should consider PM2 when your background processing includes Node.js components, such as WebSocket servers, real-time notification services, or Node-based workers consuming the same queues as Laravel. PM2 is designed for Node.js, with handy features like clustering, graceful reloads and integrated logging. You typically keep PHP workers under Supervisor or systemd and use PM2 just for Node.js processes, often supervised in turn by a small systemd unit for PM2 itself. If your stack is pure PHP/Laravel with no Node.js, there’s usually no need to introduce PM2 – stick with Supervisor or systemd for simplicity.”
},
{
“question”: “How can I monitor my queue workers and avoid silent failures?”,
“answer”: “On a VPS you should combine several layers of monitoring to avoid silent queue failures. At minimum, check process status using tools like systemctl, supervisorctl or pm2 in regular health checks. Add application-level metrics such as queue length, failed job count and average processing time, either via Laravel Horizon or custom health endpoints. For serious projects, we recommend setting up a lightweight monitoring stack like Prometheus + Grafana + Uptime Kuma as described in our VPS monitoring guide, and raising alerts when queue length exceeds a threshold, workers are down, or CPU/RAM usage stays high for too long.”
}
]
}
