{"id":4106,"date":"2026-01-03T21:15:00","date_gmt":"2026-01-03T18:15:00","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/sizing-a-vps-for-laravel-horizon-and-queues-cpu-ram-redis-and-worker-counts\/"},"modified":"2026-01-03T21:15:00","modified_gmt":"2026-01-03T18:15:00","slug":"sizing-a-vps-for-laravel-horizon-and-queues-cpu-ram-redis-and-worker-counts","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/sizing-a-vps-for-laravel-horizon-and-queues-cpu-ram-redis-and-worker-counts\/","title":{"rendered":"Sizing a VPS for Laravel Horizon and Queues: CPU, RAM, Redis and Worker Counts"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>When you move a Laravel application from shared hosting to a <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a>, queues and Laravel Horizon are often what change the most. On shared hosting, cron usually runs a single queue worker in the background. On a VPS, you suddenly have to decide how many workers to run, how much CPU and RAM they need, how big Redis should be, and what happens during traffic peaks. If you guess, you either overpay for an oversized server or end up with stuck jobs, slow checkouts and frustrated users. In this article, we will walk through a practical way to size a VPS specifically for Laravel Horizon and queues. We will translate business requirements like \u201csend all emails within 2 minutes\u201d or \u201cgenerate reports within 5 minutes\u201d into concrete numbers: vCPU, RAM, Redis memory and worker counts. 
The goal is to give you a repeatable method that works for small projects, growing SaaS products and busy e\u2011commerce sites running on dchost.com infrastructure.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#How_Laravel_Horizon_Queues_and_Redis_Use_Your_VPS_Resources\"><span class=\"toc_number toc_depth_1\">1<\/span> How Laravel Horizon, Queues and Redis Use Your VPS Resources<\/a><ul><li><a href=\"#What_Horizon_Actually_Does\"><span class=\"toc_number toc_depth_2\">1.1<\/span> What Horizon Actually Does<\/a><\/li><li><a href=\"#How_Queue_Workers_Use_CPU_and_RAM\"><span class=\"toc_number toc_depth_2\">1.2<\/span> How Queue Workers Use CPU and RAM<\/a><\/li><li><a href=\"#What_Redis_Stores_for_Laravel_Queues\"><span class=\"toc_number toc_depth_2\">1.3<\/span> What Redis Stores for Laravel Queues<\/a><\/li><\/ul><\/li><li><a href=\"#Step_1_Describe_Your_Queue_Workload_in_Numbers\"><span class=\"toc_number toc_depth_1\">2<\/span> Step 1: Describe Your Queue Workload in Numbers<\/a><ul><li><a href=\"#Identify_Job_Types_and_SLAs\"><span class=\"toc_number toc_depth_2\">2.1<\/span> Identify Job Types and SLAs<\/a><\/li><li><a href=\"#Measure_or_Estimate_Job_Runtime\"><span class=\"toc_number toc_depth_2\">2.2<\/span> Measure or Estimate Job Runtime<\/a><\/li><li><a href=\"#Compute_Required_Throughput\"><span class=\"toc_number toc_depth_2\">2.3<\/span> Compute Required Throughput<\/a><\/li><\/ul><\/li><li><a href=\"#Step_2_Translate_Workload_Into_Worker_Counts_and_vCPU\"><span class=\"toc_number toc_depth_1\">3<\/span> Step 2: Translate Workload Into Worker Counts and vCPU<\/a><ul><li><a href=\"#Throughput_Per_Worker\"><span class=\"toc_number toc_depth_2\">3.1<\/span> Throughput Per Worker<\/a><\/li><li><a href=\"#Workers_Required_to_Meet_SLA\"><span class=\"toc_number toc_depth_2\">3.2<\/span> Workers Required to Meet SLA<\/a><\/li><li><a 
href=\"#Mapping_Workers_to_vCPU\"><span class=\"toc_number toc_depth_2\">3.3<\/span> Mapping Workers to vCPU<\/a><\/li><\/ul><\/li><li><a href=\"#Step_3_Estimating_RAM_for_Workers_Redis_and_the_OS\"><span class=\"toc_number toc_depth_1\">4<\/span> Step 3: Estimating RAM for Workers, Redis and the OS<\/a><ul><li><a href=\"#Baseline_RAM_for_OS_and_Web_Stack\"><span class=\"toc_number toc_depth_2\">4.1<\/span> Baseline RAM for OS and Web Stack<\/a><\/li><li><a href=\"#RAM_Per_Queue_Worker\"><span class=\"toc_number toc_depth_2\">4.2<\/span> RAM Per Queue Worker<\/a><\/li><li><a href=\"#Redis_Memory_Sizing\"><span class=\"toc_number toc_depth_2\">4.3<\/span> Redis Memory Sizing<\/a><\/li><li><a href=\"#Putting_RAM_Sizing_Together\"><span class=\"toc_number toc_depth_2\">4.4<\/span> Putting RAM Sizing Together<\/a><\/li><\/ul><\/li><li><a href=\"#Step_4_Designing_Horizon_Worker_Configurations\"><span class=\"toc_number toc_depth_1\">5<\/span> Step 4: Designing Horizon Worker Configurations<\/a><ul><li><a href=\"#Separate_Queues_by_Latency_Sensitivity\"><span class=\"toc_number toc_depth_2\">5.1<\/span> Separate Queues by Latency Sensitivity<\/a><\/li><li><a href=\"#Setting_Horizon_Balancing_and_Max_Jobs\"><span class=\"toc_number toc_depth_2\">5.2<\/span> Setting Horizon Balancing and Max Jobs<\/a><\/li><li><a href=\"#PHPFPM_vs_Queue_Workers\"><span class=\"toc_number toc_depth_2\">5.3<\/span> PHP\u2011FPM vs Queue Workers<\/a><\/li><\/ul><\/li><li><a href=\"#Step_5_Example_VPS_Sizing_Scenarios_for_Laravel_Horizon\"><span class=\"toc_number toc_depth_1\">6<\/span> Step 5: Example VPS Sizing Scenarios for Laravel Horizon<\/a><ul><li><a href=\"#Scenario_A_Small_SaaS_or_Internal_Tool\"><span class=\"toc_number toc_depth_2\">6.1<\/span> Scenario A: Small SaaS or Internal Tool<\/a><\/li><li><a href=\"#Scenario_B_Growing_ECommerce_with_Campaign_Peaks\"><span class=\"toc_number toc_depth_2\">6.2<\/span> Scenario B: Growing E\u2011Commerce with Campaign Peaks<\/a><\/li><li><a 
href=\"#Scenario_C_Heavy_Background_Processing_Analytics\"><span class=\"toc_number toc_depth_2\">6.3<\/span> Scenario C: Heavy Background Processing \/ Analytics<\/a><\/li><\/ul><\/li><li><a href=\"#Step_6_Monitor_Iterate_and_Know_When_to_Resize\"><span class=\"toc_number toc_depth_1\">7<\/span> Step 6: Monitor, Iterate and Know When to Resize<\/a><ul><li><a href=\"#Metrics_to_Watch_for_Horizon_and_Queues\"><span class=\"toc_number toc_depth_2\">7.1<\/span> Metrics to Watch for Horizon and Queues<\/a><\/li><li><a href=\"#When_to_Add_More_Workers_vs_When_to_Upgrade_the_VPS\"><span class=\"toc_number toc_depth_2\">7.2<\/span> When to Add More Workers vs When to Upgrade the VPS<\/a><\/li><li><a href=\"#Keep_Background_Jobs_FirstClass_in_Your_Architecture\"><span class=\"toc_number toc_depth_2\">7.3<\/span> Keep Background Jobs First\u2011Class in Your Architecture<\/a><\/li><\/ul><\/li><li><a href=\"#Bringing_It_All_Together_and_How_dchostcom_Fits_In\"><span class=\"toc_number toc_depth_1\">8<\/span> Bringing It All Together (and How dchost.com Fits In)<\/a><\/li><\/ul><\/div>\n<h2><span id=\"How_Laravel_Horizon_Queues_and_Redis_Use_Your_VPS_Resources\">How Laravel Horizon, Queues and Redis Use Your VPS Resources<\/span><\/h2>\n<p>Before we talk numbers, it helps to understand how each component consumes resources on a VPS.<\/p>\n<h3><span id=\"What_Horizon_Actually_Does\">What Horizon Actually Does<\/span><\/h3>\n<p><strong>Laravel Horizon<\/strong> is a dashboard and supervisor for queue workers. 
It does three main things:<\/p>\n<ul>\n<li>Starts and monitors your queue workers (how many processes, which queues, priorities).<\/li>\n<li>Keeps metrics about jobs (throughput, failures, runtime percentiles) in Redis.<\/li>\n<li>Provides a web UI to see job status and manage your workers.<\/li>\n<\/ul>\n<p>Horizon itself is light; it is your queue workers that really consume CPU and RAM.<\/p>\n<h3><span id=\"How_Queue_Workers_Use_CPU_and_RAM\">How Queue Workers Use CPU and RAM<\/span><\/h3>\n<p>Each worker is basically a long\u2011running PHP process. It:<\/p>\n<ul>\n<li>Bootstraps Laravel once, then repeatedly pulls jobs from the queue.<\/li>\n<li>Spends CPU time executing your job logic (sending email, resizing images, hitting APIs).<\/li>\n<li>Uses RAM to hold the Laravel framework, your code, loaded services and any in\u2011memory data it processes.<\/li>\n<\/ul>\n<p>Roughly speaking:<\/p>\n<ul>\n<li><strong>I\/O\u2011bound jobs<\/strong> (calling external APIs, sending email) spend a lot of time waiting; they do not fully saturate the CPU.<\/li>\n<li><strong>CPU\u2011bound jobs<\/strong> (PDF generation, image processing, encryption heavy tasks) keep the CPU cores busy.<\/li>\n<\/ul>\n<p>This distinction is important because it determines how many workers you can safely run per vCPU.<\/p>\n<h3><span id=\"What_Redis_Stores_for_Laravel_Queues\">What Redis Stores for Laravel Queues<\/span><\/h3>\n<p>When you use Redis as your queue driver, Redis stores:<\/p>\n<ul>\n<li>Pending jobs on each queue.<\/li>\n<li>Reserved and delayed jobs.<\/li>\n<li>Horizon monitoring data (throughput counters, failed job metadata, tags).<\/li>\n<li>Anything else you use Redis for: cache, sessions, rate limiting, etc.<\/li>\n<\/ul>\n<p>Because Redis keeps everything in memory, the amount of RAM you assign to Redis directly limits how many jobs and how much Horizon history you can keep. 
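<\/p>\n<p>Horizon\u2019s history size is itself configurable: the <code>trim<\/code> block in <code>config\/horizon.php<\/code> controls how many minutes of job data Redis retains. The values below are the defaults shipped with Horizon at the time of writing, so treat them as a starting point rather than a recommendation:<\/p>\n<pre><code>'trim' =&gt; [\n    'recent' =&gt; 60,\n    'pending' =&gt; 60,\n    'completed' =&gt; 60,\n    'recent_failed' =&gt; 10080,  \/\/ one week, in minutes\n    'failed' =&gt; 10080,\n    'monitored' =&gt; 10080,\n],<\/code><\/pre>\n<p>Shorter trim windows mean less Redis memory spent on history, at the cost of less data in the Horizon dashboard. 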
In our <a href=\"https:\/\/www.dchost.com\/blog\/en\/laravel-prod-ortam-optimizasyonu-nasil-yapilir-php%e2%80%91fpm-opcache-octane-queue-horizon-ve-redisi-el-ele-calistirmak\/\">Laravel production tune\u2011up guide for PHP\u2011FPM, OPcache, Octane and Redis<\/a>, we explain why tuning Redis memory limits is critical; the same logic applies here.<\/p>\n<h2><span id=\"Step_1_Describe_Your_Queue_Workload_in_Numbers\">Step 1: Describe Your Queue Workload in Numbers<\/span><\/h2>\n<p>Most sizing mistakes happen because we start from hardware (\u201cmaybe 4 vCPU and 8 GB RAM?\u201d) instead of workload. Start by quantifying what your queues must handle.<\/p>\n<h3><span id=\"Identify_Job_Types_and_SLAs\">Identify Job Types and SLAs<\/span><\/h3>\n<p>Make a simple table of your key job types:<\/p>\n<ul>\n<li><strong>Transactional emails<\/strong>: order confirmation, password reset, notifications.<\/li>\n<li><strong>Media processing<\/strong>: image resize, video thumbnail generation.<\/li>\n<li><strong>Reporting \/ exports<\/strong>: daily reports, CSV exports, invoices.<\/li>\n<li><strong>Billing tasks<\/strong>: subscription renewals, invoice generation.<\/li>\n<li><strong>Real\u2011time UX jobs<\/strong>: broadcasting events, WebSocket pushes, quick cache updates.<\/li>\n<\/ul>\n<p>For each job type, specify:<\/p>\n<ul>\n<li><strong>Volume:<\/strong> jobs per minute \/ hour during normal and peak times.<\/li>\n<li><strong>Latency target (SLA):<\/strong> how quickly it must finish from the time it is queued (e.g. 
30 seconds, 2 minutes, 5 minutes).<\/li>\n<li><strong>Priority:<\/strong> can it wait when the system is under load?<\/li>\n<\/ul>\n<p>Example for an e\u2011commerce site:<\/p>\n<ul>\n<li>Transactional emails: 20\/minute normal, 80\/minute during campaigns; SLA 1\u20132 minutes.<\/li>\n<li>Image resize: 2\/minute normal, 10\/minute peak; SLA 5\u201310 minutes.<\/li>\n<li>Reports: 10\/hour; SLA 15\u201330 minutes.<\/li>\n<\/ul>\n<h3><span id=\"Measure_or_Estimate_Job_Runtime\">Measure or Estimate Job Runtime<\/span><\/h3>\n<p>Next, estimate how long a single job takes to run when executed by one worker. You can:<\/p>\n<ul>\n<li>Add simple timing logs around <code>handle()<\/code> using <code>microtime(true)<\/code>.<\/li>\n<li>Use Horizon\u2019s job runtime metrics after a short test run.<\/li>\n<\/ul>\n<p>For a starting point, you might see something like:<\/p>\n<ul>\n<li>Email job: 150\u2013250 ms (depends heavily on SMTP or API performance).<\/li>\n<li>Image resize: 1\u20133 seconds (CPU intensive, depends on resolution and library).<\/li>\n<li>Report generation: 3\u201310 seconds (database + CPU work).<\/li>\n<\/ul>\n<p>This per\u2011job runtime is the key to computing throughput and workers later.<\/p>\n<h3><span id=\"Compute_Required_Throughput\">Compute Required Throughput<\/span><\/h3>\n<p>For each job type, compute how many jobs per second you must complete to satisfy your SLA:<\/p>\n<p><code>required_throughput = peak_jobs_in_SLA_window \/ SLA_seconds<\/code><\/p>\n<p>Example for emails:<\/p>\n<ul>\n<li>Peak: 80 emails per minute.<\/li>\n<li>SLA: 120 seconds (2 minutes).<\/li>\n<li>In 120 seconds you may accumulate up to 160 jobs (80\/min * 2 min).<\/li>\n<li>So you need to process 160 jobs in 120 seconds \u2192 <strong>1.33 jobs\/second<\/strong>.<\/li>\n<\/ul>\n<p>Keep these numbers; we will match them against worker capacity in the next step.<\/p>\n<h2><span id=\"Step_2_Translate_Workload_Into_Worker_Counts_and_vCPU\">Step 2: Translate Workload Into 
Worker Counts and vCPU<\/span><\/h2>\n<p>Now we use those job runtimes to estimate how many workers and how much CPU you need.<\/p>\n<h3><span id=\"Throughput_Per_Worker\">Throughput Per Worker<\/span><\/h3>\n<p>If a job takes <code>t<\/code> seconds on average, then a single worker can execute approximately:<\/p>\n<p><code>worker_throughput \u2248 1 \/ t jobs per second<\/code><\/p>\n<p>Examples:<\/p>\n<ul>\n<li>Email job: 0.2 s \u2192 5 jobs\/second per worker.<\/li>\n<li>Image resize: 2 s \u2192 0.5 jobs\/second per worker.<\/li>\n<li>Report: 5 s \u2192 0.2 jobs\/second per worker.<\/li>\n<\/ul>\n<p>This assumes the worker is constantly busy, which is reasonable during peaks.<\/p>\n<h3><span id=\"Workers_Required_to_Meet_SLA\">Workers Required to Meet SLA<\/span><\/h3>\n<p>To meet your SLA, you need:<\/p>\n<p><code>workers_needed = required_throughput \/ worker_throughput<\/code><\/p>\n<p>Using our email example:<\/p>\n<ul>\n<li>Required throughput: 1.33 jobs\/second.<\/li>\n<li>Worker throughput: 5 jobs\/second.<\/li>\n<li>Workers needed: 1.33 \/ 5 = 0.266 \u2192 round up to <strong>1 worker<\/strong>.<\/li>\n<\/ul>\n<p>For image resize jobs at peak 10\/min (0.167 jobs\/second) and 2 s per job:<\/p>\n<ul>\n<li>Worker throughput: 0.5 jobs\/second.<\/li>\n<li>Workers needed: 0.167 \/ 0.5 = 0.334 \u2192 <strong>1 worker<\/strong> is still enough.<\/li>\n<\/ul>\n<p>The point is not to get an exact number, but a ballpark that you can refine with real monitoring.<\/p>\n<h3><span id=\"Mapping_Workers_to_vCPU\">Mapping Workers to vCPU<\/span><\/h3>\n<p>Workers translate to CPU usage differently for I\/O\u2011bound and CPU\u2011bound jobs:<\/p>\n<ul>\n<li><strong>I\/O\u2011bound jobs:<\/strong> Because they wait on external services, you can often run <strong>2\u20134 workers per vCPU<\/strong> without saturating the CPU.<\/li>\n<li><strong>CPU\u2011bound jobs:<\/strong> They really use CPU cycles, so plan for about <strong>1\u20131.5 workers per vCPU<\/strong> to keep 
load stable.<\/li>\n<\/ul>\n<p>As a conservative starting point for a mixed workload:<\/p>\n<ul>\n<li><strong>1.5\u20132 workers per vCPU<\/strong> is usually safe on a modern VPS with PHP 8.x and OPcache enabled.<\/li>\n<\/ul>\n<p>If your calculations say you need 8 workers for various queues, and you want to stay at 2 workers\/vCPU, then:<\/p>\n<p><code>vCPU_needed \u2248 8 \/ 2 = 4 vCPU<\/code><\/p>\n<p>Remember the rest of the stack (PHP\u2011FPM for web traffic, MySQL\/PostgreSQL, Nginx\/Apache) also uses CPU. If your Laravel app handles both web and queue traffic on the same VPS, add at least one extra vCPU as a buffer.<\/p>\n<p>We dive deeper into overall VPS sizing (CPU vs RAM vs NVMe) for PHP apps in our article <a href=\"https:\/\/www.dchost.com\/blog\/en\/woocommerce-laravel-ve-node-jsde-dogru-vps-kaynaklarini-nasil-secersin-cpu-ram-nvme-ve-bant-genisligi-rehberi\/\">how to choose VPS specs for WooCommerce, Laravel and Node.js<\/a>; the same principles apply when Horizon is part of the picture.<\/p>\n<h2><span id=\"Step_3_Estimating_RAM_for_Workers_Redis_and_the_OS\">Step 3: Estimating RAM for Workers, Redis and the OS<\/span><\/h2>\n<p>Once CPU and worker counts are roughly clear, you can size RAM. RAM is usually consumed by:<\/p>\n<ul>\n<li>The operating system and background services.<\/li>\n<li>Web stack (Nginx\/Apache, PHP\u2011FPM, database server).<\/li>\n<li>Queue workers managed by Horizon.<\/li>\n<li>Redis and any other caches.<\/li>\n<\/ul>\n<h3><span id=\"Baseline_RAM_for_OS_and_Web_Stack\">Baseline RAM for OS and Web Stack<\/span><\/h3>\n<p>On a lean Linux VPS (e.g. 
Ubuntu or AlmaLinux) running Nginx, PHP\u2011FPM and a small database, you should budget:<\/p>\n<ul>\n<li><strong>0.8\u20131.0 GB<\/strong> RAM for the OS and base services.<\/li>\n<li><strong>0.5\u20131.0 GB<\/strong> RAM for PHP\u2011FPM and the web layer on a low\u2011traffic app.<\/li>\n<li><strong>0.5\u20131.0 GB<\/strong> RAM for a small MySQL\/PostgreSQL instance.<\/li>\n<\/ul>\n<p>So even before Horizon, a modest Laravel app can easily use 2\u20133 GB of RAM under load. Our post on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vpste-ram-swap-ve-oom-killer-yonetimi\/\">managing RAM, swap and the OOM killer on VPS servers<\/a> explains why you should avoid running right at the edge of total RAM.<\/p>\n<h3><span id=\"RAM_Per_Queue_Worker\">RAM Per Queue Worker<\/span><\/h3>\n<p>A single Laravel queue worker (PHP 8.x, OPcache, typical business code) tends to hover around:<\/p>\n<ul>\n<li><strong>80\u2013200 MB<\/strong> of RAM per worker, depending on:<ul>\n<li>The number of loaded service providers.<\/li>\n<li>How much data you process in memory (images, big arrays, etc.).<\/li>\n<li>Memory leaks in long\u2011running jobs.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>To stay safe, assume:<\/p>\n<p><code>ram_per_worker \u2248 150\u2013200 MB<\/code><\/p>\n<p>Then:<\/p>\n<p><code>worker_ram_total = worker_count * ram_per_worker<\/code><\/p>\n<p>If you plan for 8 workers at 200 MB each:<\/p>\n<ul>\n<li>8 * 200 MB = <strong>1.6 GB<\/strong> RAM for workers.<\/li>\n<\/ul>\n<h3><span id=\"Redis_Memory_Sizing\">Redis Memory Sizing<\/span><\/h3>\n<p>Redis memory has two parts:<\/p>\n<ul>\n<li>Memory for queue data (jobs, delays, reserved jobs).<\/li>\n<li>Memory for Horizon metrics, cache, sessions, etc.<\/li>\n<\/ul>\n<p>For many small and medium projects, Redis for queues and Horizon happily lives within <strong>256\u2013512 MB<\/strong>. 
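<\/p>\n<p>As one concrete illustration, a small Redis instance serving queues plus Horizon might be capped like this in <code>redis.conf<\/code> (the values are example assumptions; size them to your own budget):<\/p>\n<pre><code># redis.conf (fragment)\n# Hard cap so Redis cannot crowd out PHP workers and the database\nmaxmemory 512mb\n# For an instance that holds queue jobs, rejecting writes beats evicting jobs\nmaxmemory-policy noeviction<\/code><\/pre>\n<p>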
If you also use Redis for cache and sessions, consider <strong>512\u20131024 MB<\/strong> to leave room for growth.<\/p>\n<p>A simple starting estimate:<\/p>\n<ul>\n<li><strong>Queue + Horizon only:<\/strong> 256 MB.<\/li>\n<li><strong>Queue + Horizon + cache\/sessions:<\/strong> 512\u20131024 MB, depending on traffic and cache TTL.<\/li>\n<\/ul>\n<p>Always set <code>maxmemory<\/code> in <code>redis.conf<\/code> so Redis cannot grow unchecked until the kernel\u2019s OOM killer steps in. Then pick a <code>maxmemory-policy<\/code> that fits your use: <code>allkeys-lru<\/code> for cache\u2011heavy workloads, <code>volatile-lru<\/code> when non\u2011expiring keys (such as pending jobs) must never be evicted, or <code>noeviction<\/code> for a dedicated queue instance, where rejecting writes is safer than silently losing jobs.<\/p>\n<h3><span id=\"Putting_RAM_Sizing_Together\">Putting RAM Sizing Together<\/span><\/h3>\n<p>Combine everything into a rough formula:<\/p>\n<p><code>total_ram_needed \u2248 OS_base + web_stack + database + workers + Redis + 20\u201330% headroom<\/code><\/p>\n<p>Example for a mid\u2011size app:<\/p>\n<ul>\n<li>OS_base: 1 GB.<\/li>\n<li>Web_stack (Nginx + PHP\u2011FPM): 1 GB.<\/li>\n<li>Database: 1 GB.<\/li>\n<li>Workers: 1.6 GB (8 workers * 200 MB).<\/li>\n<li>Redis: 512 MB.<\/li>\n<li>Subtotal: 5.1 GB.<\/li>\n<li>Headroom 30%: \u2248 1.5 GB.<\/li>\n<li>Total: \u2248 <strong>6.5 GB \u2192 choose an 8 GB RAM VPS<\/strong>.<\/li>\n<\/ul>\n<p>This type of back\u2011of\u2011envelope math prevents you from under\u2011sizing and fighting the OOM killer.<\/p>\n<h2><span id=\"Step_4_Designing_Horizon_Worker_Configurations\">Step 4: Designing Horizon Worker Configurations<\/span><\/h2>\n<p>Now that CPU, RAM and worker counts roughly make sense, you can map them into an actual Horizon configuration.<\/p>\n<h3><span id=\"Separate_Queues_by_Latency_Sensitivity\">Separate Queues by Latency Sensitivity<\/span><\/h3>\n<p>Do not dump every job into a single <code>default<\/code> queue. 
Instead, create queues with different SLAs:<\/p>\n<ul>\n<li><code>high<\/code>: user\u2011facing tasks that must be fast (emails, broadcasts, quick cache refresh).<\/li>\n<li><code>medium<\/code>: normal background tasks.<\/li>\n<li><code>low<\/code>: heavy, slow jobs like reports and imports.<\/li>\n<\/ul>\n<p>Configure Horizon to run different worker counts for each queue based on the throughput calculations in Step 2. For example, on a 4 vCPU server:<\/p>\n<ul>\n<li><strong>high priority:<\/strong> 4 workers.<\/li>\n<li><strong>medium:<\/strong> 4 workers.<\/li>\n<li><strong>low:<\/strong> 2 workers.<\/li>\n<li>Total: 10 workers \u2192 ~2.5 workers\/vCPU (fine for mostly I\/O\u2011bound jobs).<\/li>\n<\/ul>\n<h3><span id=\"Setting_Horizon_Balancing_and_Max_Jobs\">Setting Horizon Balancing and Max Jobs<\/span><\/h3>\n<p>Horizon supports different balancing strategies:<\/p>\n<ul>\n<li><strong>simple:<\/strong> splits incoming jobs evenly between worker processes, without weighting queues by load.<\/li>\n<li><strong>auto:<\/strong> dynamically adjusts worker counts between queues based on load.<\/li>\n<\/ul>\n<p>For most applications, start with <code>auto<\/code> balancing so Horizon can move capacity from low\u2011priority to high\u2011priority queues under load.<\/p>\n<p>Also set:<\/p>\n<ul>\n<li><code>maxProcesses<\/code> or equivalent settings so Horizon does not spawn more processes than your RAM and vCPU budget allows.<\/li>\n<li><code>maxJobs<\/code> per worker before restart, to clean up any slow memory leaks in long\u2011running workers.<\/li>\n<\/ul>\n<h3><span id=\"PHPFPM_vs_Queue_Workers\">PHP\u2011FPM vs Queue Workers<\/span><\/h3>\n<p>Remember that both PHP\u2011FPM (for web requests) and queue workers are PHP processes. 
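<\/p>\n<p>Pulled together, the queue split, balancing and process caps above might look like this in <code>config\/horizon.php<\/code> (a sketch with illustrative values for a 4 vCPU server, not a drop\u2011in file):<\/p>\n<pre><code>'environments' =&gt; [\n    'production' =&gt; [\n        'supervisor-high' =&gt; [\n            'connection' =&gt; 'redis',\n            'queue' =&gt; ['high'],\n            'balance' =&gt; 'auto',\n            'minProcesses' =&gt; 1,\n            'maxProcesses' =&gt; 4,   \/\/ from the worker math above\n            'maxJobs' =&gt; 1000,     \/\/ restart workers periodically to curb leaks\n            'memory' =&gt; 192,       \/\/ MB per worker before restart\n            'tries' =&gt; 3,\n        ],\n        'supervisor-batch' =&gt; [\n            'connection' =&gt; 'redis',\n            'queue' =&gt; ['medium', 'low'],\n            'balance' =&gt; 'auto',\n            'minProcesses' =&gt; 1,\n            'maxProcesses' =&gt; 6,\n            'maxJobs' =&gt; 500,\n            'tries' =&gt; 1,\n        ],\n    ],\n],<\/code><\/pre>\n<p>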
If you configure both independently, you might accidentally allow too many PHP processes overall.<\/p>\n<p>For example:<\/p>\n<ul>\n<li>PHP\u2011FPM pool: <code>pm.max_children = 20<\/code><\/li>\n<li>Horizon workers: 10<\/li>\n<li>Total potential PHP processes: 30<\/li>\n<\/ul>\n<p>On a 4 vCPU, 8 GB VPS, 30 concurrent PHP processes is usually fine <em>if<\/em> they do not all run CPU\u2011heavy tasks at once. But if your app is heavy, consider lowering PHP\u2011FPM\u2019s <code>max_children<\/code> or Horizon workers slightly. Our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/php-fpm-ayarlari-pm-pm-max_children-ve-pm-max_requests-hesaplama-rehberi\/\">PHP\u2011FPM settings for WordPress and WooCommerce<\/a> shows how to think about process counts; the same principles can be applied to Laravel.<\/p>\n<h2><span id=\"Step_5_Example_VPS_Sizing_Scenarios_for_Laravel_Horizon\">Step 5: Example VPS Sizing Scenarios for Laravel Horizon<\/span><\/h2>\n<p>Let\u2019s turn all of this into concrete example plans you can adapt on dchost.com VPS servers.<\/p>\n<h3><span id=\"Scenario_A_Small_SaaS_or_Internal_Tool\">Scenario A: Small SaaS or Internal Tool<\/span><\/h3>\n<p><strong>Profile:<\/strong> 10\u201330 concurrent users, a few hundred jobs per hour, mostly emails and light notifications.<\/p>\n<ul>\n<li><strong>VPS suggestion:<\/strong> 2 vCPU, 4 GB RAM, fast SSD\/NVMe.<\/li>\n<li><strong>Typical jobs:<\/strong> email notifications, webhooks, simple data syncs.<\/li>\n<\/ul>\n<p><strong>Queue &amp; worker plan:<\/strong><\/p>\n<ul>\n<li>Queues: <code>high<\/code> (emails, webhooks), <code>low<\/code> (reports).<\/li>\n<li>Workers: 3\u20134 total (2 for high, 1\u20132 for low).<\/li>\n<li>Redis memory: 256\u2013384 MB.<\/li>\n<\/ul>\n<p><strong>Why it works:<\/strong> With mostly I\/O\u2011bound jobs, 3\u20134 workers on 2 vCPU gives enough parallelism without overloading CPU. 
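<\/p>\n<p>You can sanity\u2011check any such plan with the Step 2 formula in a few lines of PHP (a sketch; the inputs are this article\u2019s example numbers):<\/p>\n<pre><code>&lt;?php\n\/\/ How many always-busy workers does a queue need to meet its SLA?\nfunction workersNeeded(float $peakPerMinute, float $slaSeconds, float $jobSeconds): int\n{\n    $jobsInWindow = ($peakPerMinute \/ 60) * $slaSeconds; \/\/ jobs that pile up in one SLA window\n    $requiredThroughput = $jobsInWindow \/ $slaSeconds;   \/\/ jobs\/second needed to clear them in time\n    $workerThroughput = 1 \/ $jobSeconds;                 \/\/ jobs\/second one busy worker can do\n    return (int) ceil($requiredThroughput \/ $workerThroughput);\n}\n\necho workersNeeded(80, 120, 0.2);  \/\/ campaign emails: prints 1\necho workersNeeded(10, 600, 2.0);  \/\/ image resizes: prints 1<\/code><\/pre>\n<p>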
4 GB RAM is adequate for OS, Nginx, PHP\u2011FPM, a small database and a modest Redis instance, as long as you keep PHP\u2011FPM pool sizes reasonable.<\/p>\n<h3><span id=\"Scenario_B_Growing_ECommerce_with_Campaign_Peaks\">Scenario B: Growing E\u2011Commerce with Campaign Peaks<\/span><\/h3>\n<p><strong>Profile:<\/strong> 50\u2013200 concurrent users during campaigns, thousands of orders per day, heavy bursts of email and some on\u2011the\u2011fly image processing.<\/p>\n<ul>\n<li><strong>VPS suggestion:<\/strong> 4\u20136 vCPU, 8\u201312 GB RAM.<\/li>\n<li><strong>Typical jobs:<\/strong> order emails, newsletter queue push, image thumbnails, invoice generation.<\/li>\n<\/ul>\n<p><strong>Queue &amp; worker plan:<\/strong><\/p>\n<ul>\n<li>Queues: <code>high<\/code> (order emails, payment notifications), <code>medium<\/code> (image processing), <code>low<\/code> (reports, bulk newsletters).<\/li>\n<li>Workers on 4 vCPU: 10\u201312 total (4 high, 4 medium, 2\u20134 low).<\/li>\n<li>Redis memory: 512\u20131024 MB (because Horizon metrics and cache will grow with traffic).<\/li>\n<\/ul>\n<p><strong>Why it works:<\/strong> 10\u201312 workers on 4 vCPU equals ~2.5\u20133 workers\/vCPU, which is fine because most jobs are I\/O\u2011bound except image processing. Put CPU\u2011heavy jobs on the <code>medium<\/code> queue and limit workers there so they do not starve the system. Use Horizon\u2019s <code>auto<\/code> balancing to move extra capacity to <code>high<\/code> during peaks.<\/p>\n<h3><span id=\"Scenario_C_Heavy_Background_Processing_Analytics\">Scenario C: Heavy Background Processing \/ Analytics<\/span><\/h3>\n<p><strong>Profile:<\/strong> A Laravel app that runs heavy reports, data imports, or complex billing calculations. 
Web traffic is modest, but queue load is high and CPU\u2011bound.<\/p>\n<ul>\n<li><strong>VPS suggestion:<\/strong> 8 vCPU, 16 GB RAM (or a <a href=\"https:\/\/www.dchost.com\/dedicated-server\">dedicated server<\/a> if queues dominate your workload).<\/li>\n<li><strong>Typical jobs:<\/strong> large CSV imports\/exports, complex invoice runs, advanced reporting, integrations with many APIs.<\/li>\n<\/ul>\n<p><strong>Queue &amp; worker plan:<\/strong><\/p>\n<ul>\n<li>Queues: <code>high<\/code> (user\u2011visible tasks), <code>batch<\/code> (heavy processing), <code>low<\/code> (maintenance, slow reports).<\/li>\n<li>Workers: around 12\u201314 total, but with only 6\u20138 assigned to the CPU\u2011bound <code>batch<\/code> queue.<\/li>\n<li>Redis memory: 1\u20132 GB if Horizon keeps long history and you cache a lot of data.<\/li>\n<\/ul>\n<p><strong>Why it works:<\/strong> You intentionally keep workers per vCPU close to 1.5\u20132 when jobs are CPU\u2011bound, to avoid constant 100% CPU usage. With 16 GB RAM you can allocate more to the database and Redis while still leaving room for Horizon workers and PHP\u2011FPM. On dchost.com this is a typical size we see for analytics\u2011heavy Laravel applications before they move to dedicated servers or multi\u2011VPS architectures.<\/p>\n<h2><span id=\"Step_6_Monitor_Iterate_and_Know_When_to_Resize\">Step 6: Monitor, Iterate and Know When to Resize<\/span><\/h2>\n<p>No sizing guide is complete without monitoring. 
Your first configuration will always need tuning once real traffic hits.<\/p>\n<h3><span id=\"Metrics_to_Watch_for_Horizon_and_Queues\">Metrics to Watch for Horizon and Queues<\/span><\/h3>\n<p>Monitor at least these metrics:<\/p>\n<ul>\n<li><strong>Queue length over time:<\/strong> Are queues consistently growing (under\u2011provisioned) or empty (maybe over\u2011provisioned)?<\/li>\n<li><strong>Job wait time and runtime:<\/strong> Horizon shows how long jobs wait before being processed and how long they take to run.<\/li>\n<li><strong>CPU usage:<\/strong> If average CPU is near 80\u201390% during peaks, reduce workers or upgrade the VPS.<\/li>\n<li><strong>RAM usage and swap:<\/strong> If your VPS starts swapping, reduce PHP\u2011FPM \/ worker counts or move to a larger RAM plan.<\/li>\n<li><strong>Redis memory and eviction:<\/strong> Check for evicted keys if your <code>maxmemory<\/code> is too low.<\/li>\n<\/ul>\n<p>Our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-kaynak-kullanimi-izleme-rehberi-htop-iotop-netdata-ve-prometheus\/\">monitoring VPS resource usage with htop, iotop, Netdata and Prometheus<\/a> gives practical commands and dashboards you can reuse for Laravel Horizon setups.<\/p>\n<h3><span id=\"When_to_Add_More_Workers_vs_When_to_Upgrade_the_VPS\">When to Add More Workers vs When to Upgrade the VPS<\/span><\/h3>\n<p>You have two main levers:<\/p>\n<ul>\n<li><strong>Increase worker counts<\/strong> if CPU and RAM are still comfortable (e.g. CPU &lt; 60%, plenty of free RAM) but queues are growing during peaks.<\/li>\n<li><strong>Upgrade the VPS<\/strong> (more vCPU\/RAM) if CPU is consistently high or RAM is close to full even with conservative worker and PHP\u2011FPM settings.<\/li>\n<\/ul>\n<p>As you scale, you might eventually split roles: one VPS for web + Horizon dashboard, another for queue workers, another for the database. 
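<\/p>\n<p>Whichever path you take, automate the queue\u2011length check rather than watching dashboards by hand. Laravel ships a <code>queue:monitor<\/code> command you can schedule; a sketch (the queue names are this article\u2019s examples and the threshold is an assumption):<\/p>\n<pre><code>\/\/ app\/Console\/Kernel.php\nuse Illuminate\\Console\\Scheduling\\Schedule;\n\nprotected function schedule(Schedule $schedule): void\n{\n    \/\/ Fires a QueueBusy event (which you can hook to mail or Slack alerts)\n    \/\/ whenever a monitored queue exceeds 100 pending jobs.\n    $schedule-&gt;command('queue:monitor redis:high,redis:low --max=100')\n             -&gt;everyMinute();\n}<\/code><\/pre>\n<p>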
Our guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/laravel-uygulamalarini-vpste-nasil-yayinlarim-nginx-php%e2%80%91fpm-horizon-ve-sifir-kesinti-dagitimin-sicacik-yol-haritasi\/\">deploying Laravel on a VPS with Nginx, PHP\u2011FPM, Horizon and zero\u2011downtime releases<\/a> shows how to structure those deployments cleanly.<\/p>\n<h3><span id=\"Keep_Background_Jobs_FirstClass_in_Your_Architecture\">Keep Background Jobs First\u2011Class in Your Architecture<\/span><\/h3>\n<p>Queues are not an afterthought; they are core to how your application feels to users. We explored this in detail in our article <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-uzerinde-arka-plan-isleri-ve-kuyruk-yonetimi-laravel-queue-supervisor-systemd-ve-pm2\/\">why background jobs matter so much on a VPS<\/a>. The same lesson applies here: design your VPS sizing, Horizon configuration and Redis memory limits around your queue workload, not the other way around.<\/p>\n<h2><span id=\"Bringing_It_All_Together_and_How_dchostcom_Fits_In\">Bringing It All Together (and How dchost.com Fits In)<\/span><\/h2>\n<p>You do not need to guess when sizing a VPS for Laravel Horizon and queues. Start from your workload: job types, peak volume and latency targets. Measure or estimate per\u2011job runtimes to compute throughput per worker. From there, decide how many workers you actually need, and map that to a realistic number of vCPUs. Then allocate RAM for the OS, web stack, database, Horizon workers and Redis with at least 20\u201330% headroom to stay clear of the OOM killer. Finally, design Horizon\u2019s queue groups and worker distributions to prioritise user\u2011facing jobs while giving heavy batch tasks their own space.<\/p>\n<p>On dchost.com, you can start with a modest VPS plan (for example 2\u20134 vCPU, 4\u20138 GB RAM) and adjust as Horizon metrics and system monitoring show real behaviour. 
Because we also offer dedicated servers and colocation, you have a smooth path forward when your queues grow beyond a single VPS. If you are planning a new Laravel project or want to stabilise an existing Horizon setup, our team can help you translate your job workloads and SLAs into a concrete VPS or server plan that fits both your performance needs and your budget.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>When you move a Laravel application from shared hosting to a VPS, queues and Laravel Horizon are often what change the most. On shared hosting, cron usually runs a single queue worker in the background. On a VPS, you suddenly have to decide how many workers to run, how much CPU and RAM they need, [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":4107,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-4106","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/4106","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=4106"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/4106\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/4107"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=4106"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/
blog\/en\/wp-json\/wp\/v2\/categories?post=4106"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=4106"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}