{"id":3505,"date":"2025-12-27T18:09:18","date_gmt":"2025-12-27T15:09:18","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/why-background-jobs-matter-so-much-on-a-vps\/"},"modified":"2025-12-27T18:09:18","modified_gmt":"2025-12-27T15:09:18","slug":"why-background-jobs-matter-so-much-on-a-vps","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/why-background-jobs-matter-so-much-on-a-vps\/","title":{"rendered":"Why Background Jobs Matter So Much on a VPS"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>{<br \/>\n  &#8220;title&#8221;: &#8220;Background Jobs and Queue Management on a <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a>: Laravel Queues, Supervisor, systemd and PM2 Explained&#8221;,<br \/>\n  &#8220;content&#8221;: &#8220;<\/p>\n<p>When you move from a simple website to a real web application on a VPS, background jobs quickly stop being a nice-to-have and become essential. Order confirmation emails, invoice generation, video processing, search indexing, webhooks, data imports, push notifications \u2013 all of these are much more reliable and faster when handled in the background instead of inside the main HTTP request. On a virtual private server, you control the whole environment, which means you\u2019re also responsible for running and supervising those background workers correctly.<\/p>\n<p>n<\/p>\n<p>In this article we\u2019ll walk through how to run background jobs and manage queues on a VPS in a production-friendly way. We\u2019ll focus on Laravel queues (but the ideas apply to any PHP framework), and look at how to keep workers running using <strong>Supervisor<\/strong>, <strong>systemd<\/strong> and <strong>PM2<\/strong>. 
We\u2019ll also talk about where each tool shines, how to avoid common pitfalls like stuck queues or zombie workers, and how we approach these setups on dchost.com VPS servers for our own and our customers\u2019 projects.<\/p>\n<p>On shared hosting, you\u2019re often limited to basic cron jobs and small scripts. A VPS opens the door to proper architecture: queues, schedulers, separate workers, and multiple services. But with that freedom comes responsibility: if a worker dies at 03:00, there\u2019s no hosting control panel quietly restarting it for you \u2013 you need a process manager.<\/p>\n<p>Background jobs give you several concrete benefits:<\/p>\n<ul>\n<li><strong>Faster responses for users:<\/strong> Your API or web app can return a success message immediately while the heavy work continues in the background.<\/li>\n<li><strong>Higher reliability:<\/strong> If a job fails, it can be retried without losing data or blocking the user.<\/li>\n<li><strong>Better resource usage:<\/strong> You can control how many workers you run, how much CPU\/RAM they consume, and schedule heavy jobs for off-peak hours.<\/li>\n<li><strong>Scalability:<\/strong> As your load grows, you scale workers almost independently of your web layer.<\/li>\n<\/ul>\n<p>If you\u2019re still deciding between shared hosting and a VPS for your Laravel or PHP app, it\u2019s worth reading our detailed comparison on <a href=\"https:\/\/www.dchost.com\/blog\/en\/laravel-ve-diger-php-frameworkler-icin-paylasimli-hosting-mi-vps-mi\/\">when Laravel really needs a VPS for queues, schedulers and cache<\/a>. 
Once you\u2019re on a VPS, the rest of this article will help you put those queues on solid rails.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Contents<\/p><ul class=\"toc_list\"><li><a href=\"#Laravel_Queues_on_a_VPS_Core_Concepts\"><span class=\"toc_number toc_depth_1\">1<\/span> Laravel Queues on a VPS: Core Concepts<\/a><\/li><li><a href=\"#Option_1_Running_Laravel_Queues_with_Supervisor\"><span class=\"toc_number toc_depth_1\">2<\/span> Option 1: Running Laravel Queues with Supervisor<\/a><ul><li><a href=\"#When_Supervisor_Makes_Sense\"><span class=\"toc_number toc_depth_2\">2.1<\/span> When Supervisor Makes Sense<\/a><\/li><li><a href=\"#Basic_Supervisor_Setup_for_Laravel\"><span class=\"toc_number toc_depth_2\">2.2<\/span> Basic Supervisor Setup for Laravel<\/a><\/li><li><a href=\"#Key_Settings_You_Should_Think_About\"><span class=\"toc_number toc_depth_2\">2.3<\/span> Key Settings You Should Think About<\/a><\/li><li><a href=\"#Pros_and_Cons_of_Supervisor\"><span class=\"toc_number toc_depth_2\">2.4<\/span> Pros and Cons of Supervisor<\/a><\/li><\/ul><\/li><li><a href=\"#Option_2_Using_systemd_Services_for_Queue_Workers\"><span class=\"toc_number toc_depth_1\">3<\/span> Option 2: Using systemd Services for Queue Workers<\/a><ul><li><a href=\"#Why_systemd_Is_Attractive_on_a_VPS\"><span class=\"toc_number toc_depth_2\">3.1<\/span> Why systemd Is Attractive on a VPS<\/a><\/li><li><a href=\"#Creating_a_systemd_Unit_for_Laravel_Queues\"><span class=\"toc_number toc_depth_2\">3.2<\/span> Creating a systemd Unit for Laravel Queues<\/a><\/li><li><a href=\"#Scaling_with_systemd_Templates\"><span class=\"toc_number toc_depth_2\">3.3<\/span> Scaling with systemd Templates<\/a><\/li><li><a href=\"#Pros_and_Cons_of_systemd\"><span class=\"toc_number toc_depth_2\">3.4<\/span> Pros and Cons of systemd<\/a><\/li><\/ul><\/li><li><a href=\"#Option_3_PM2_for_Nodejs_and_Mixed_Stacks\"><span 
class=\"toc_number toc_depth_1\">4<\/span> Option 3: PM2 for Node.js and Mixed Stacks<\/a><ul><li><a href=\"#PM2_Basics\"><span class=\"toc_number toc_depth_2\">4.1<\/span> PM2 Basics<\/a><\/li><li><a href=\"#PM2_with_Laravel_Nodejs\"><span class=\"toc_number toc_depth_2\">4.2<\/span> PM2 with Laravel + Node.js<\/a><\/li><li><a href=\"#Pros_and_Cons_of_PM2\"><span class=\"toc_number toc_depth_2\">4.3<\/span> Pros and Cons of PM2<\/a><\/li><\/ul><\/li><li><a href=\"#Horizon_and_Advanced_Laravel_Queue_Management\"><span class=\"toc_number toc_depth_1\">5<\/span> Horizon and Advanced Laravel Queue Management<\/a><ul><li><a href=\"#How_Horizon_Fits_into_the_Picture\"><span class=\"toc_number toc_depth_2\">5.1<\/span> How Horizon Fits into the Picture<\/a><\/li><\/ul><\/li><li><a href=\"#Monitoring_Scaling_and_Troubleshooting_Queues_on_a_VPS\"><span class=\"toc_number toc_depth_1\">6<\/span> Monitoring, Scaling and Troubleshooting Queues on a VPS<\/a><ul><li><a href=\"#Monitoring_Worker_Health\"><span class=\"toc_number toc_depth_2\">6.1<\/span> Monitoring Worker Health<\/a><\/li><li><a href=\"#Capacity_Planning_for_Workers\"><span class=\"toc_number toc_depth_2\">6.2<\/span> Capacity Planning for Workers<\/a><\/li><li><a href=\"#Common_Queue_Problems_and_How_to_Fix_Them\"><span class=\"toc_number toc_depth_2\">6.3<\/span> Common Queue Problems and How to Fix Them<\/a><\/li><\/ul><\/li><li><a href=\"#Choosing_Between_Supervisor_systemd_and_PM2_on_a_dchostcom_VPS\"><span class=\"toc_number toc_depth_1\">7<\/span> Choosing Between Supervisor, systemd and PM2 on a dchost.com VPS<\/a><ul><li><a href=\"#Scenario_1_Pure_Laravel_App_Small_to_Medium_Scale\"><span class=\"toc_number toc_depth_2\">7.1<\/span> Scenario 1: Pure Laravel App, Small to Medium Scale<\/a><\/li><li><a href=\"#Scenario_2_Laravel_Nodejs_Mixed_Stack\"><span class=\"toc_number toc_depth_2\">7.2<\/span> Scenario 2: Laravel + Node.js Mixed Stack<\/a><\/li><li><a 
href=\"#Scenario_3_Multiple_Apps_on_One_VPS\"><span class=\"toc_number toc_depth_2\">7.3<\/span> Scenario 3: Multiple Apps on One VPS<\/a><\/li><li><a href=\"#Security_and_Stability_Considerations\"><span class=\"toc_number toc_depth_2\">7.4<\/span> Security and Stability Considerations<\/a><\/li><\/ul><\/li><li><a href=\"#Putting_It_All_Together_on_Your_dchostcom_VPS\"><span class=\"toc_number toc_depth_1\">8<\/span> Putting It All Together on Your dchost.com VPS<\/a><\/li><\/ul><\/div>\n<h2><span id=\"Laravel_Queues_on_a_VPS_Core_Concepts\">Laravel Queues on a VPS: Core Concepts<\/span><\/h2>\n<p>n<\/p>\n<p>Before we dive into Supervisor, systemd or PM2, it\u2019s important to be clear on what Laravel itself is doing. Laravel provides three main pieces in the queue story:<\/p>\n<p>n<\/p>\n<ul>n  <\/p>\n<li><strong>Queue backends:<\/strong> Database, Redis, Beanstalkd, SQS, RabbitMQ (via packages), etc.<\/li>\n<p>n  <\/p>\n<li><strong>Queue workers:<\/strong> Long-running PHP processes that continuously pull jobs from the backend and execute them.<\/li>\n<p>n  <\/p>\n<li><strong>Horizon (optional):<\/strong> A nice dashboard and supervisor for Redis queues, but it still needs a process manager underneath.<\/li>\n<p>n<\/ul>\n<p>n<\/p>\n<p>The usual command to run a worker looks like this:<\/p>\n<p>n<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">php artisan queue:work --queue=default --tries=3 --sleep=1 --max-time=3600<\/code><\/pre>\n<p>n<\/p>\n<p>Or for Horizon:<\/p>\n<p>n<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">php artisan horizon<\/code><\/pre>\n<p>n<\/p>\n<p>These commands are <strong>long-running<\/strong>. If you just start them in an SSH session and close the terminal, they\u2019ll exit. If PHP crashes, they stop. 
That\u2019s why we need a <strong>process manager<\/strong> that will:<\/p>\n<ul>\n<li>Start the workers on boot<\/li>\n<li>Restart them on failure<\/li>\n<li>Optionally, limit memory\/CPU or the number of restarts<\/li>\n<li>Provide logs or integrate with your logging system<\/li>\n<\/ul>\n<p>For a complete Laravel production stack beyond just queues, we\u2019ve documented our typical approach in our guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/laravel-uygulamalarini-vpste-nasil-yayinlarim-nginx-php%e2%80%91fpm-horizon-ve-sifir-kesinti-dagitimun-sicacik-yol-haritasi\/\">deploying Laravel on a VPS with Nginx, PHP-FPM, Horizon and zero-downtime releases<\/a>. Here we\u2019ll zoom in specifically on the background job side.<\/p>\n<h2><span id=\"Option_1_Running_Laravel_Queues_with_Supervisor\">Option 1: Running Laravel Queues with Supervisor<\/span><\/h2>\n<p><strong>Supervisor<\/strong> is a classic process control system for UNIX-like operating systems. 
It\u2019s widely used in PHP\/Laravel communities because it\u2019s simple, battle-tested, and available in most distribution repositories.<\/p>\n<h3><span id=\"When_Supervisor_Makes_Sense\">When Supervisor Makes Sense<\/span><\/h3>\n<p>Supervisor is a great fit when:<\/p>\n<ul>\n<li>You\u2019re on a typical Linux VPS (Ubuntu, Debian, AlmaLinux, Rocky Linux, etc.).<\/li>\n<li>You mostly run PHP workers (Laravel queue workers, Horizon, scheduled consumers).<\/li>\n<li>You want a simple, readable config file per queue without learning the full depth of systemd.<\/li>\n<\/ul>\n<p>On many dchost.com VPS deployments, we still use Supervisor for single-app Laravel servers because it\u2019s straightforward for developers and ops teams alike.<\/p>\n<h3><span id=\"Basic_Supervisor_Setup_for_Laravel\">Basic Supervisor Setup for Laravel<\/span><\/h3>\n<p>Assuming a typical Ubuntu or Debian VPS:<\/p>\n<ol>\n<li>Install Supervisor:<br \/><code>apt update &amp;&amp; apt install -y supervisor<\/code><\/li>\n<li>Create a program config for your queue workers, for example:<br \/><code>\/etc\/supervisor\/conf.d\/laravel-queue.conf<\/code><\/li>\n<\/ol>\n<p>A common configuration looks like this:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">[program:laravel-queue]\nprocess_name=%(program_name)s_%(process_num)02d\ncommand=\/usr\/bin\/php \/var\/www\/app\/artisan queue:work redis --queue=default --sleep=1 --tries=3 --max-time=3600\nautostart=true\nautorestart=true\nuser=www-data\nnumprocs=4\nredirect_stderr=true\nstdout_logfile=\/var\/log\/laravel-queue.log\nstopwaitsecs=3600\n<\/code><\/pre>\n<p>Then reload and start:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">supervisorctl reread\nsupervisorctl update\nsupervisorctl start laravel-queue:*<\/code><\/pre>\n<h3><span id=\"Key_Settings_You_Should_Think_About\">Key Settings You Should Think About<\/span><\/h3>\n<ul>\n<li><strong>numprocs:<\/strong> How many worker processes to run. On a small 2 vCPU VPS, 2\u20134 workers per queue is often plenty. Start conservative and increase after monitoring.<\/li>\n<li><strong>stopwaitsecs:<\/strong> Laravel workers may be in the middle of a job. Giving them enough time to finish during deploys or restarts prevents job duplication or partial runs.<\/li>\n<li><strong>stdout_logfile:<\/strong> Logs can grow quickly. Combine this with <code>logrotate<\/code> (we discuss this in detail in our guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-disk-kullanimi-ve-logrotate-ayarlariyla-no-space-left-on-device-hatasini-onlemek\/\">avoiding &#8220;No space left on device&#8221; on a VPS using logrotate<\/a>).<\/li>\n<\/ul>\n<h3><span id=\"Pros_and_Cons_of_Supervisor\">Pros and Cons of Supervisor<\/span><\/h3>\n<ul>\n<li><strong>Pros:<\/strong>\n<ul>\n<li>Very easy to read and edit configs<\/li>\n<li>Great for teams coming from shared hosting or panel-based environments<\/li>\n<li>Good control over multiple processes and groups<\/li>\n<\/ul>\n<\/li>\n<li><strong>Cons:<\/strong>\n<ul>\n<li>One more daemon to manage on the VPS<\/li>\n<li>Less integrated with the OS than systemd (for resource limits, dependencies, etc.)<\/li>\n<li>Another layer of logging to keep an eye on<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2><span id=\"Option_2_Using_systemd_Services_for_Queue_Workers\">Option 2: Using systemd Services for Queue Workers<\/span><\/h2>\n<p>On modern Linux distributions, <strong>systemd<\/strong> is the native init system and process supervisor. 
Instead of adding a separate tool, you can let systemd manage your Laravel workers directly. This approach is increasingly common in newer projects and is often our default on dchost.com VPS builds where teams are comfortable with systemd semantics.<\/p>\n<h3><span id=\"Why_systemd_Is_Attractive_on_a_VPS\">Why systemd Is Attractive on a VPS<\/span><\/h3>\n<p>Systemd brings several advantages:<\/p>\n<ul>\n<li><strong>No extra dependency:<\/strong> It\u2019s already PID 1 on most distributions.<\/li>\n<li><strong>Strong restart policies:<\/strong> Built-in support for restart throttling, delays, and failure tracking.<\/li>\n<li><strong>Resource controls:<\/strong> You can limit memory, CPU, number of processes, etc. per service.<\/li>\n<li><strong>Unified logs via the journal:<\/strong> All logs can go through <code>journald<\/code>, which simplifies centralized logging.<\/li>\n<\/ul>\n<p>If you\u2019re curious about scheduling with systemd (as an alternative to cron) for periodic jobs, we\u2019ve covered it in our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/cron-mu-systemd-timer-mi-neden-nasil-ve-ne-zaman-hangisini-secmeli\/\">Cron vs systemd timers and when to choose each<\/a>.<\/p>\n<h3><span id=\"Creating_a_systemd_Unit_for_Laravel_Queues\">Creating a systemd Unit for Laravel Queues<\/span><\/h3>\n<p>Let\u2019s create a simple <code>.service<\/code> file for a Laravel queue worker:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">\/etc\/systemd\/system\/laravel-queue.service<\/code><\/pre>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">[Unit]\nDescription=Laravel Queue Worker\nAfter=network.target\n# Restart throttling belongs in [Unit] on current systemd versions\nStartLimitBurst=10\nStartLimitIntervalSec=60\n\n[Service]\nUser=www-data\nGroup=www-data\nWorkingDirectory=\/var\/www\/app\nExecStart=\/usr\/bin\/php \/var\/www\/app\/artisan queue:work redis --queue=default --sleep=1 --tries=3 --max-time=3600\nRestart=always\nRestartSec=5\n\n# Resource controls (tune for your VPS size)\nMemoryMax=512M\nCPUQuota=150%\n\n# Explicit environment overrides (Laravel still reads .env unless config is cached)\nEnvironment=APP_ENV=production\nEnvironment=APP_DEBUG=false\n\n[Install]\nWantedBy=multi-user.target\n<\/code><\/pre>\n<p>Then reload and enable:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">systemctl daemon-reload\nsystemctl enable --now laravel-queue.service<\/code><\/pre>\n<h3><span id=\"Scaling_with_systemd_Templates\">Scaling with systemd Templates<\/span><\/h3>\n<p>With systemd you can also create <strong>template units<\/strong>, where one file controls multiple instances. For example:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">\/etc\/systemd\/system\/laravel-queue@.service<\/code><\/pre>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">[Unit]\nDescription=Laravel Queue Worker %i\nAfter=network.target\n\n[Service]\nUser=www-data\nGroup=www-data\nWorkingDirectory=\/var\/www\/app\nExecStart=\/usr\/bin\/php \/var\/www\/app\/artisan queue:work redis --queue=%i --sleep=1 --tries=3 --max-time=3600\nRestart=always\nRestartSec=5\n\n[Install]\nWantedBy=multi-user.target\n<\/code><\/pre>\n<p>You can then start one worker per queue like:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">systemctl enable --now laravel-queue@default.service\nsystemctl enable --now laravel-queue@emails.service\nsystemctl enable --now laravel-queue@reports.service<\/code><\/pre>\n<p>This keeps configuration DRY and readable, especially on multi-queue setups.<\/p>\n<h3><span id=\"Pros_and_Cons_of_systemd\">Pros and Cons of systemd<\/span><\/h3>\n<ul>\n<li><strong>Pros:<\/strong>\n<ul>\n<li>No extra process manager to install<\/li>\n<p>n      
<\/p>\n<li>Powerful restart and resource control features<\/li>\n<li>Good integration with OS boot and dependencies<\/li>\n<li>Works equally well for PHP, Node.js, workers, timers, etc.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Cons:<\/strong>\n<ul>\n<li>Unit file syntax and semantics can feel complex at first<\/li>\n<li>Developers less familiar with Linux internals may find troubleshooting harder<\/li>\n<li>Log access via <code>journalctl<\/code> needs a little orientation<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2><span id=\"Option_3_PM2_for_Nodejs_and_Mixed_Stacks\">Option 3: PM2 for Node.js and Mixed Stacks<\/span><\/h2>\n<p>On many modern stacks we see a mix of technologies: Laravel for the backend API and panel, Node.js for real-time features or background workers, or a complete Node.js-based queue consumer reading from Redis\/RabbitMQ and talking to a PHP API.<\/p>\n<p>In such cases, <strong>PM2<\/strong> is often the preferred process manager for the Node.js side. 
PM2 is a production-grade process manager with clustering support, zero-downtime restarts, and an ecosystem config file for declaring your apps.<\/p>\n<h3><span id=\"PM2_Basics\">PM2 Basics<\/span><\/h3>\n<p>A typical PM2 setup on a VPS looks like:<\/p>\n<ol>\n<li>Install PM2 globally:<br \/><code>npm install -g pm2<\/code><\/li>\n<li>Start your worker or app:<br \/><code>pm2 start worker.js --name node-queue-worker<\/code><\/li>\n<li>Generate a startup script so PM2 restarts on boot:<br \/><code>pm2 startup systemd<\/code><br \/>Follow the printed instructions, then:<br \/><code>pm2 save<\/code><\/li>\n<\/ol>\n<p>From that point, PM2 remembers your processes and restores them after reboots.<\/p>\n<h3><span id=\"PM2_with_Laravel_Nodejs\">PM2 with Laravel + Node.js<\/span><\/h3>\n<p>If your background infrastructure looks like this:<\/p>\n<ul>\n<li>Laravel producing jobs into Redis or another broker<\/li>\n<li>Laravel workers consuming some queues<\/li>\n<li>Node.js workers consuming other queues (e.g. 
WebSocket notifications, video pipelines)<\/li>\n<\/ul>\n<p>You can comfortably mix approaches:<\/p>\n<ul>\n<li>Use Supervisor or systemd for <code>php artisan queue:work<\/code> and <code>php artisan horizon<\/code>.<\/li>\n<li>Use PM2 for Node.js-based workers, WebSocket servers, or API gateways.<\/li>\n<\/ul>\n<p>We go deeper specifically into Node.js production setups (including PM2 vs systemd and Nginx fronting) in our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/node-jsi-canliya-alirken-panik-yapma-pm2-systemd-nginx-ssl-ve-sifir-kesinti-deploy-nasil-kurulur\/\">how we host Node.js in production with PM2, systemd, Nginx and zero-downtime deploys<\/a>.<\/p>\n<h3><span id=\"Pros_and_Cons_of_PM2\">Pros and Cons of PM2<\/span><\/h3>\n<ul>\n<li><strong>Pros:<\/strong>\n<ul>\n<li>Designed for Node.js from day one<\/li>\n<li>Cluster mode for multi-core usage without much effort<\/li>\n<li>Pretty CLI and dashboard, ecosystem configs, log management<\/li>\n<\/ul>\n<\/li>\n<li><strong>Cons:<\/strong>\n<ul>\n<li>Another layer on top of systemd (you often still use systemd to keep PM2 alive)<\/li>\n<li>Less natural if you only run PHP\/Laravel and nothing Node-based<\/li>\n<li>Some orgs prefer to stay 100% on systemd for consistency<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2><span id=\"Horizon_and_Advanced_Laravel_Queue_Management\">Horizon and Advanced Laravel Queue Management<\/span><\/h2>\n<p>If you use Redis as your queue backend, <strong>Laravel Horizon<\/strong> is a great way to manage multiple queues, prioritize workloads, and visualize what\u2019s going on. 
But it\u2019s important to understand that Horizon is not a replacement for Supervisor or systemd \u2013 it still needs a process manager underneath.<\/p>\n<h3><span id=\"How_Horizon_Fits_into_the_Picture\">How Horizon Fits into the Picture<\/span><\/h3>\n<p>Horizon adds several things:<\/p>\n<ul>\n<li>A dashboard showing queue throughput, failed jobs, processing time, etc.<\/li>\n<li>Named queues and supervisors with different worker counts.<\/li>\n<li>Tags and metrics for jobs.<\/li>\n<\/ul>\n<p>Operationally, you run it as a long-lived process, for example via systemd:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">[Unit]\nDescription=Laravel Horizon\nAfter=network.target\n\n[Service]\nUser=www-data\nGroup=www-data\nWorkingDirectory=\/var\/www\/app\nExecStart=\/usr\/bin\/php \/var\/www\/app\/artisan horizon\nRestart=always\nRestartSec=5\n\n[Install]\nWantedBy=multi-user.target\n<\/code><\/pre>\n<p>On a small or medium dchost.com VPS, a common pattern is:<\/p>\n<ul>\n<li>One <code>horizon.service<\/code> for Redis-based queues<\/li>\n<li>A few classic queue workers (Supervisor or systemd) for special queues that must not mix with others or need a different PHP version<\/li>\n<\/ul>\n<p>For more Laravel-specific tuning (FPM pools, OPcache, Octane, Redis, Horizon, etc.), we collected our usual production checklist in <a href=\"https:\/\/www.dchost.com\/blog\/en\/laravel-prod-ortam-optimizasyonu-nasil-yapilir-php%e2%80%91fpm-opcache-octane-queue-horizon-ve-redisi-el-ele-calistirmak\/\">our Laravel production optimization guide for VPS servers<\/a>.<\/p>\n<h2><span id=\"Monitoring_Scaling_and_Troubleshooting_Queues_on_a_VPS\">Monitoring, Scaling and Troubleshooting Queues on a VPS<\/span><\/h2>\n<p>Setting up workers is only half the story. 
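<\/p>
<p>A simple early-warning signal for that second half is queue depth. As a sketch \u2013 assuming the Redis backend, where Laravel keeps the default queue under a key like <code>queues:default<\/code> (the exact key depends on your connection config) \u2013 a threshold check could look like this:<\/p>

```shell
# Queue-depth alert sketch. In production you would feed it live data, e.g.:
#   queue_depth_status "$(redis-cli llen queues:default)" 1000
# The length is passed as an argument here so the logic stays testable.
queue_depth_status() {
  local len="$1" threshold="$2"
  if [ "$len" -gt "$threshold" ]; then
    echo "ALERT: queue depth $len exceeds $threshold"
  else
    echo "OK: queue depth $len"
  fi
}
queue_depth_status 5 1000
queue_depth_status 4200 1000
```

<p>Run something like this from cron or a systemd timer, and wire the ALERT branch to whatever notifies you (mail, a webhook, an Uptime Kuma push monitor).<\/p>
<p>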
The other half is making sure they\u2019re actually running, not stuck, and sized correctly for your VPS resources.<\/p>\n<h3><span id=\"Monitoring_Worker_Health\">Monitoring Worker Health<\/span><\/h3>\n<p>On a VPS, you usually don\u2019t have a fully managed monitoring stack out of the box, so it\u2019s worth investing a little time here. At minimum, you want to know:<\/p>\n<ul>\n<li>Are the worker processes running?<\/li>\n<li>Is the queue length growing uncontrollably?<\/li>\n<li>Are jobs failing more than usual?<\/li>\n<li>Is CPU, RAM or disk IO maxed out?<\/li>\n<\/ul>\n<p>There are three layers you can combine:<\/p>\n<ol>\n<li><strong>OS-level checks:<\/strong> Simple <code>systemctl status<\/code>, <code>supervisorctl status<\/code>, or <code>pm2 list<\/code> in scripts or external monitors.<\/li>\n<li><strong>Application-level checks:<\/strong> Laravel Horizon metrics, or custom health endpoints where you return queue lengths and last job times.<\/li>\n<li><strong>Full monitoring stack:<\/strong> Tools like Prometheus + Grafana + Uptime Kuma as described in our guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">setting up VPS monitoring and alerts<\/a>.<\/li>\n<\/ol>\n<h3><span id=\"Capacity_Planning_for_Workers\">Capacity Planning for Workers<\/span><\/h3>\n<p>Workers share the same CPU and RAM as your web stack, database (if on the same server), and cache. 
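<\/p>
<p>Before picking numbers, a back-of-the-envelope calculation helps: estimate how much RAM is left after the web server, database and cache, and divide by a typical per-worker footprint. The figures below are assumptions for illustration \u2013 a Laravel worker often sits somewhere in the 80\u2013150 MB range, but measure your own processes with <code>ps<\/code>:<\/p>

```shell
# Back-of-the-envelope worker sizing (illustrative numbers, not a rule).
# worker_budget SPARE_MB PER_WORKER_MB -> how many workers fit in that RAM
worker_budget() {
  echo $(( $1 / $2 ))
}
# e.g. ~1536 MB spare after web + MySQL + Redis, ~128 MB per worker:
worker_budget 1536 128
```

<p>Treat the result strictly as an upper bound \u2013 on small VPS plans CPU is usually the real limit, so you will often run far fewer workers than RAM alone would allow.<\/p>
<p>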
On a small VPS, it\u2019s easy to overshoot with too many workers and starve everything else.<\/p>\n<p>Some practical rules of thumb:<\/p>\n<ul>\n<li>On a 2 vCPU \/ 4 GB RAM VPS hosting Laravel + MySQL + Redis:\n<ul>\n<li>Start with 2\u20134 workers for general jobs.<\/li>\n<li>Run heavy exports or reports on a dedicated queue with 1\u20132 workers.<\/li>\n<li>Monitor CPU load; if the average stays under 60\u201370% during peak, you\u2019re fine.<\/li>\n<\/ul>\n<\/li>\n<li>Use priority queues instead of more and more workers. For example: <code>high, default, low<\/code>, consumed with <code>--queue=high,default,low<\/code> so urgent jobs are always picked up first.<\/li>\n<li>For CPU-bound jobs (image processing, PDF generation), consider an additional VPS just for workers once they start impacting the main site.<\/li>\n<\/ul>\n<h3><span id=\"Common_Queue_Problems_and_How_to_Fix_Them\">Common Queue Problems and How to Fix Them<\/span><\/h3>\n<ul>\n<li><strong>Jobs stuck in &#8220;reserved&#8221; state:<\/strong> Often caused by workers dying mid-job. Ensure your process manager restarts them, and consider <code>--max-time<\/code> and <code>--max-jobs<\/code> to recycle workers periodically.<\/li>\n<li><strong>Database queue locking issues:<\/strong> If you use the database driver for high-volume queues, row locks can become a bottleneck. Moving to Redis is usually a better option on a VPS.<\/li>\n<li><strong>&#8220;Out of memory&#8221; errors:<\/strong> Memory-hungry jobs (PDF, image, big ORM queries) can bloat long-lived workers. Recycling with <code>--max-jobs<\/code> or <code>--max-time<\/code> and optimizing the job code itself helps.<\/li>\n<li><strong>Deploys causing duplicate runs:<\/strong> If deploy scripts brutally kill workers, jobs can re-run on restart. Use graceful stops (Supervisor\u2019s <code>stopwaitsecs<\/code>, <code>systemctl stop<\/code> and signals) and design idempotent jobs.<\/li>\n<\/ul>\n<h2><span id=\"Choosing_Between_Supervisor_systemd_and_PM2_on_a_dchostcom_VPS\">Choosing Between Supervisor, systemd and PM2 on a dchost.com VPS<\/span><\/h2>\n<p>So which approach should you pick for your own VPS at dchost.com? In practice, we usually follow a few simple patterns based on project type and team experience.<\/p>\n<h3><span id=\"Scenario_1_Pure_Laravel_App_Small_to_Medium_Scale\">Scenario 1: Pure Laravel App, Small to Medium Scale<\/span><\/h3>\n<p>For a classic Laravel application (API or web) on a single VPS:<\/p>\n<ul>\n<li><strong>Queue backend:<\/strong> Redis if possible; the database driver only for very low volume or if Redis is not yet in the picture.<\/li>\n<li><strong>Process manager:<\/strong>\n<ul>\n<li>If your team is new to Linux internals: use <strong>Supervisor<\/strong> for queue workers and optional Horizon.<\/li>\n<li>If your team is comfortable with systemd: use <strong>systemd services<\/strong> (and optionally templates) for workers and Horizon.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Monitoring:<\/strong> At least <code>systemctl status<\/code> or <code>supervisorctl status<\/code> checks in your health playbook, and basic alerts from an external monitor.<\/li>\n<\/ul>\n<h3><span id=\"Scenario_2_Laravel_Nodejs_Mixed_Stack\">Scenario 2: Laravel + Node.js Mixed Stack<\/span><\/h3>\n<p>For stacks with both Laravel and Node.js:<\/p>\n<ul>\n<li>Run <strong>Laravel workers<\/strong> via Supervisor or systemd.<\/li>\n<li>Run <strong>Node.js workers<\/strong> and WebSocket servers via PM2, kept alive by a systemd unit for PM2 itself.<\/li>\n<li>Use Redis (or another broker) as a common 
queueing layer, but respect language boundaries for the process managers.<\/li>\n<\/ul>\n<h3><span id=\"Scenario_3_Multiple_Apps_on_One_VPS\">Scenario 3: Multiple Apps on One VPS<\/span><\/h3>\n<p>If you host multiple Laravel apps on a single dchost.com VPS, organization becomes more important:<\/p>\n<ul>\n<li>Use clear naming in your process configs: <code>project1-queue<\/code>, <code>project2-horizon<\/code>, etc.<\/li>\n<li>Separate logs per project, and use log rotation aggressively.<\/li>\n<li>Consider <strong>separate systemd units or Supervisor configs per project<\/strong> instead of one big, shared worker file.<\/li>\n<li>Keep an eye on aggregated CPU\/RAM usage; at some point it\u2019s cleaner to move busy projects to their own VPS.<\/li>\n<\/ul>\n<h3><span id=\"Security_and_Stability_Considerations\">Security and Stability Considerations<\/span><\/h3>\n<p>Background jobs are powerful \u2013 they often talk to payment gateways, third-party APIs, external storage and internal systems. Make sure they run on a hardened VPS with:<\/p>\n<ul>\n<li>Strong SSH and firewall setup<\/li>\n<li>Regular updates and kernel patches<\/li>\n<li>Off-site and versioned backups in case of errors or ransomware<\/li>\n<\/ul>\n<p>We\u2019ve summarized a practical, non-dramatic hardening checklist for new servers in our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-sunucu-guvenligi-nasil-saglanir-kapiyi-acik-birakmadan-yasamanin-sirri\/\">how to secure a VPS server the calm way<\/a>. 
Combine that with a disciplined queue setup and you\u2019ll have a resilient backend.<\/p>\n<h2><span id=\"Putting_It_All_Together_on_Your_dchostcom_VPS\">Putting It All Together on Your dchost.com VPS<\/span><\/h2>\n<p>Background jobs and queue workers are where a simple VPS becomes a capable application platform. Instead of letting users wait for PDFs to generate, emails to send or webhooks to fire, you push all that into queues and let dedicated workers handle the heavy lifting. On a dchost.com VPS, you control the OS, the process manager, and the stack \u2013 which means you can shape exactly how reliable and scalable your background processing pipeline will be.<\/p>\n<p>For most Laravel teams, the path is clear: start with Redis queues and a handful of workers managed by Supervisor or systemd. As your requirements grow, add Horizon for visibility, PM2 for any Node.js sidecars, and proper monitoring with tools like Prometheus and Grafana following our <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">VPS monitoring and alerts guide<\/a>. When the load eventually outgrows a single server, it\u2019s straightforward to move heavy queues or Node.js workers to additional dchost.com VPS instances or <a href=\"https:\/\/www.dchost.com\/dedicated-server\">dedicated servers<\/a>.<\/p>\n<p>If you\u2019re planning a new application or considering migrating an existing one, our team at dchost.com can help you choose the right VPS size, storage and architecture for your queue-heavy workloads, whether that\u2019s a single Laravel project with Horizon or a multi-service stack mixing PHP, Node.js, and separate cache\/databases. 
Start with a clean, secure VPS foundation, put your background jobs under a solid process manager, and you\u2019ll have the calm, predictable backend you need to build on confidently.<\/p>\n<p>&#8221;,<br \/>\n  &#8220;focus_keyword&#8221;: &#8220;background jobs and queue management on a VPS&#8221;,<br \/>\n  &#8220;meta_description&#8221;: &#8220;Learn how to run background jobs and queues on a VPS using Laravel queues, Supervisor, systemd and PM2, with practical setup tips, tuning and monitoring.&#8221;,<br \/>\n  &#8220;faqs&#8221;: [<br \/>\n    {<br \/>\n      &#8220;question&#8221;: &#8220;Should I use Supervisor or systemd for Laravel queues on my VPS?&#8221;,<br \/>\n      &#8220;answer&#8221;: &#8220;Both Supervisor and systemd work well for managing Laravel queues on a VPS, and the choice mostly comes down to your team\u2019s familiarity. Supervisor is very popular in the Laravel world because its configuration files are simple and easy to understand, making it a great fit if you\u2019re transitioning from shared hosting or panel-based environments. systemd, by contrast, is built into modern Linux distributions and offers powerful restart policies and resource controls without adding another daemon. On many dchost.com VPS deployments we use systemd by default for new projects and Supervisor where teams are already comfortable with it.&#8221;<br \/>\n    },<br \/>\n    {<br \/>\n      &#8220;question&#8221;: &#8220;How many queue workers should I run on a small VPS?&#8221;,<br \/>\n      &#8220;answer&#8221;: &#8220;The right number of workers depends on your VPS resources and how heavy your jobs are, but it\u2019s better to start small and scale up. On a typical 2 vCPU \/ 4 GB RAM VPS running Laravel, MySQL and Redis, 2\u20134 general-purpose workers are usually enough at the beginning. 
If you have very heavy jobs like PDF or image processing, consider putting them on a dedicated queue with 1\u20132 workers so they don\u2019t block everything else. Monitor CPU and RAM usage: if average CPU stays under 60\u201370% during peak and queues don\u2019t grow uncontrollably, your worker count is in a good range.&#8221;<br \/>\n    },<br \/>\n    {<br \/>\n      &#8220;question&#8221;: &#8220;Do I still need Horizon if I already use Supervisor or systemd?&#8221;,<br \/>\n      &#8220;answer&#8221;: &#8220;Horizon and Supervisor\/systemd solve different problems. Horizon gives you a dashboard, metrics, and high-level management for Redis-based Laravel queues, but it still needs a process manager to keep it and its workers running. Supervisor or systemd start, stop and restart the underlying processes at the OS level. On a production VPS at dchost.com, a common pattern is to run Horizon under a systemd or Supervisor service and use Horizon to manage worker counts and priorities, while relying on the process manager to ensure Horizon itself is always alive, restarts on failure, and starts automatically on boot.&#8221;<br \/>\n    },<br \/>\n    {<br \/>\n      &#8220;question&#8221;: &#8220;When should I introduce PM2 for queue management?&#8221;,<br \/>\n      &#8220;answer&#8221;: &#8220;You should consider PM2 when your background processing includes Node.js components, such as WebSocket servers, real-time notification services, or Node-based workers consuming the same queues as Laravel. PM2 is designed for Node.js, with handy features like clustering, graceful reloads and integrated logging. You typically keep PHP workers under Supervisor or systemd and use PM2 just for Node.js processes, often supervised in turn by a small systemd unit for PM2 itself. 
If your stack is pure PHP\/Laravel with no Node.js, there\u2019s usually no need to introduce PM2 \u2013 stick with Supervisor or systemd for simplicity.&#8221;<br \/>\n    },<br \/>\n    {<br \/>\n      &#8220;question&#8221;: &#8220;How can I monitor my queue workers and avoid silent failures?&#8221;,<br \/>\n      &#8220;answer&#8221;: &#8220;On a VPS you should combine several layers of monitoring to avoid silent queue failures. At minimum, check process status using tools like systemctl, supervisorctl or pm2 in regular health checks. Add application-level metrics such as queue length, failed job count and average processing time, either via Laravel Horizon or custom health endpoints. For serious projects, we recommend setting up a lightweight monitoring stack like Prometheus + Grafana + Uptime Kuma as described in our VPS monitoring guide, and raising alerts when queue length exceeds a threshold, workers are down, or CPU\/RAM usage stays high for too long.&#8221;<br \/>\n    }<br \/>\n  ]<br \/>\n}<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>{ &#8220;title&#8221;: &#8220;Background Jobs and Queue Management on a VPS: Laravel Queues, Supervisor, systemd and PM2 Explained&#8221;, &#8220;content&#8221;: &#8220; When you move from a simple website to a real web application on a VPS, background jobs quickly stop being a nice-to-have and become essential. 
Order confirmation emails, invoice generation, video processing, search indexing, webhooks, data [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3506,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-3505","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/3505","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=3505"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/3505\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/3506"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=3505"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=3505"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=3505"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}