{"id":3131,"date":"2025-12-07T17:41:45","date_gmt":"2025-12-07T14:41:45","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/big-media-file-upload-strategy-php-limits-web-server-timeouts-and-chunked-uploads-with-a-cdn\/"},"modified":"2025-12-07T17:41:45","modified_gmt":"2025-12-07T14:41:45","slug":"big-media-file-upload-strategy-php-limits-web-server-timeouts-and-chunked-uploads-with-a-cdn","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/big-media-file-upload-strategy-php-limits-web-server-timeouts-and-chunked-uploads-with-a-cdn\/","title":{"rendered":"Big Media File Upload Strategy: PHP Limits, Web Server Timeouts and Chunked Uploads with a CDN"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>Big media uploads look simple on paper: a user selects a 2 GB video, clicks \u201cUpload\u201d, and expects it to work. In reality, that single action passes through browser constraints, HTTP limits, PHP configuration, Nginx\/Apache timeouts, storage performance and finally a CDN in front of everything. If any layer is misconfigured, you end up with half\u2011uploaded files, mysterious 413\/504 errors, or users who give up after the third failed attempt. In this article we\u2019ll walk through how we design big upload flows for customers at dchost.com: from PHP limits to web server timeouts, from classic single\u2011POST uploads to resilient chunked\/resumable strategies that work nicely with a CDN and modern object storage. 
The goal is simple: give you a practical blueprint so your large media uploads \u201cjust work\u201d instead of becoming a recurring support ticket.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#Why_Big_Media_Uploads_Keep_Failing\"><span class=\"toc_number toc_depth_1\">1<\/span> Why Big Media Uploads Keep Failing<\/a><\/li><li><a href=\"#Step_1_Get_Your_PHP_Upload_Limits_Under_Control\"><span class=\"toc_number toc_depth_1\">2<\/span> Step 1: Get Your PHP Upload Limits Under Control<\/a><ul><li><a href=\"#Core_PHP_directives_that_gate_big_uploads\"><span class=\"toc_number toc_depth_2\">2.1<\/span> Core PHP directives that gate big uploads<\/a><\/li><li><a href=\"#Practical_sizing_examples_for_large_files\"><span class=\"toc_number toc_depth_2\">2.2<\/span> Practical sizing examples for large files<\/a><\/li><li><a href=\"#FPM_and_FastCGI_timeouts_you_must_align\"><span class=\"toc_number toc_depth_2\">2.3<\/span> FPM and FastCGI timeouts you must align<\/a><\/li><\/ul><\/li><li><a href=\"#Step_2_Align_Nginx_or_Apache_with_PHP_for_Large_Bodies\"><span class=\"toc_number toc_depth_1\">3<\/span> Step 2: Align Nginx or Apache with PHP for Large Bodies<\/a><ul><li><a href=\"#Nginx_body_size_and_timeout_settings\"><span class=\"toc_number toc_depth_2\">3.1<\/span> Nginx body size and timeout settings<\/a><\/li><li><a href=\"#Apache_equivalents_for_big_uploads\"><span class=\"toc_number toc_depth_2\">3.2<\/span> Apache equivalents for big uploads<\/a><\/li><li><a href=\"#Detecting_which_layer_is_failing\"><span class=\"toc_number toc_depth_2\">3.3<\/span> Detecting which layer is failing<\/a><\/li><\/ul><\/li><li><a href=\"#Step_3_Why_Chunked_Uploads_Beat_One_Huge_POST\"><span class=\"toc_number toc_depth_1\">4<\/span> Step 3: Why Chunked Uploads Beat One Huge POST<\/a><ul><li><a href=\"#How_chunked_uploads_work_conceptually\"><span class=\"toc_number 
toc_depth_2\">4.1<\/span> How chunked uploads work conceptually<\/a><\/li><li><a href=\"#Benefits_of_chunked_uploads_for_PHP_and_your_web_server\"><span class=\"toc_number toc_depth_2\">4.2<\/span> Benefits of chunked uploads for PHP and your web server<\/a><\/li><li><a href=\"#Implementing_a_chunked_backend_in_PHP\"><span class=\"toc_number toc_depth_2\">4.3<\/span> Implementing a chunked backend in PHP<\/a><\/li><\/ul><\/li><li><a href=\"#Step_4_Putting_a_CDN_in_Front_of_Big_Uploads\"><span class=\"toc_number toc_depth_1\">5<\/span> Step 4: Putting a CDN in Front of Big Uploads<\/a><ul><li><a href=\"#Pattern_1_Uploads_bypass_the_CDN_downloads_go_through_it\"><span class=\"toc_number toc_depth_2\">5.1<\/span> Pattern 1: Uploads bypass the CDN, downloads go through it<\/a><\/li><li><a href=\"#Pattern_2_Directtoobjectstorage_uploads_with_signed_URLs\"><span class=\"toc_number toc_depth_2\">5.2<\/span> Pattern 2: Direct\u2011to\u2011object\u2011storage uploads with signed URLs<\/a><\/li><li><a href=\"#Pattern_3_Chunked_uploads_via_the_CDN_to_your_origin\"><span class=\"toc_number toc_depth_2\">5.3<\/span> Pattern 3: Chunked uploads via the CDN to your origin<\/a><\/li><li><a href=\"#CDN_strategy_for_media_downloads\"><span class=\"toc_number toc_depth_2\">5.4<\/span> CDN strategy for media downloads<\/a><\/li><\/ul><\/li><li><a href=\"#Step_5_Storage_Choices_and_File_Lifecycle\"><span class=\"toc_number toc_depth_1\">6<\/span> Step 5: Storage Choices and File Lifecycle<\/a><ul><li><a href=\"#Object_vs_block_vs_file_storage_for_media_uploads\"><span class=\"toc_number toc_depth_2\">6.1<\/span> Object vs block vs file storage for media uploads<\/a><\/li><li><a href=\"#Lifecycle_management_and_backups\"><span class=\"toc_number toc_depth_2\">6.2<\/span> Lifecycle management and backups<\/a><\/li><\/ul><\/li><li><a href=\"#Security_Validation_and_Operational_Tips\"><span class=\"toc_number toc_depth_1\">7<\/span> Security, Validation and Operational Tips<\/a><ul><li><a 
href=\"#File_type_validation_and_antivirus_scanning\"><span class=\"toc_number toc_depth_2\">7.1<\/span> File type validation and antivirus scanning<\/a><\/li><li><a href=\"#Authentication_quotas_and_rate_limiting\"><span class=\"toc_number toc_depth_2\">7.2<\/span> Authentication, quotas and rate limiting<\/a><\/li><li><a href=\"#Logging_monitoring_and_alerting\"><span class=\"toc_number toc_depth_2\">7.3<\/span> Logging, monitoring and alerting<\/a><\/li><\/ul><\/li><li><a href=\"#Putting_It_All_Together_Example_Architectures\"><span class=\"toc_number toc_depth_1\">8<\/span> Putting It All Together: Example Architectures<\/a><ul><li><a href=\"#Scenario_1_WordPress_site_with_heavy_media_library\"><span class=\"toc_number toc_depth_2\">8.1<\/span> Scenario 1: WordPress site with heavy media library<\/a><\/li><li><a href=\"#Scenario_2_SPA_frontend_PHP_API_for_video_uploads\"><span class=\"toc_number toc_depth_2\">8.2<\/span> Scenario 2: SPA frontend + PHP API for video uploads<\/a><\/li><\/ul><\/li><li><a href=\"#Summary_and_Next_Steps\"><span class=\"toc_number toc_depth_1\">9<\/span> Summary and Next Steps<\/a><\/li><\/ul><\/div>\n<h2><span id=\"Why_Big_Media_Uploads_Keep_Failing\">Why Big Media Uploads Keep Failing<\/span><\/h2>\n<p>Before tuning anything, it helps to understand where big uploads typically break. 
The path from the user\u2019s browser to your storage involves at least four layers:<\/p>\n<ul>\n<li>The browser and network (unstable connections, mobile devices, corporate proxies)<\/li>\n<li>The web server (Nginx or Apache) receiving the HTTP request<\/li>\n<li>PHP (and PHP\u2011FPM\/FastCGI) that parses the body and runs your application<\/li>\n<li>Your storage backend and potentially a CDN in front of it<\/li>\n<\/ul>\n<p>Common symptoms map directly to one of these layers:<\/p>\n<ul>\n<li><strong>HTTP 413 Request Entity Too Large<\/strong>: Web server or PHP body size limit<\/li>\n<li><strong>HTTP 408 Request Timeout or 504 Gateway Timeout<\/strong>: Upload or backend processing took too long<\/li>\n<li><strong>PHP file upload errors<\/strong> (UPLOAD_ERR_INI_SIZE \/ UPLOAD_ERR_FORM_SIZE): PHP ini or form limits<\/li>\n<li><strong>Truncated or corrupt files<\/strong>: Storage write failures or app\u2011level bugs during assembly<\/li>\n<\/ul>\n<p>Your upload strategy has to make these layers align: no single layer may cap requests below the upload size you promise users (in practice, <code>client_max_body_size<\/code> \u2265 <code>post_max_size<\/code> \u2265 <code>upload_max_filesize<\/code>), timeouts must be realistic for your users\u2019 bandwidth, and the application must be designed so a temporary network hiccup does not destroy a 4 GB upload. Let\u2019s start where most people get stuck: PHP.<\/p>\n<h2><span id=\"Step_1_Get_Your_PHP_Upload_Limits_Under_Control\">Step 1: Get Your PHP Upload Limits Under Control<\/span><\/h2>\n<h3><span id=\"Core_PHP_directives_that_gate_big_uploads\">Core PHP directives that gate big uploads<\/span><\/h3>\n<p>PHP has several configuration directives that directly control how large and how long an upload is allowed to run. 
You configure them in <code>php.ini<\/code>, per\u2011site PHP settings in your control panel, or custom <code>.user.ini<\/code> files (depending on your hosting plan at dchost.com).<\/p>\n<ul>\n<li><strong>upload_max_filesize<\/strong>: Maximum size of a single uploaded file<\/li>\n<li><strong>post_max_size<\/strong>: Maximum size of the entire POST body (all files + form fields)<\/li>\n<li><strong>memory_limit<\/strong>: Max RAM a PHP script may use<\/li>\n<li><strong>max_execution_time<\/strong>: Max time (in seconds) a PHP script is allowed to run<\/li>\n<li><strong>max_input_time<\/strong>: Max time PHP waits for input (upload\/POST data)<\/li>\n<\/ul>\n<p>Two basic rules keep you out of trouble:<\/p>\n<ol>\n<li><strong>post_max_size must be \u2265 upload_max_filesize<\/strong>. Otherwise PHP rejects the request before your app sees it.<\/li>\n<li><strong>memory_limit must be comfortably larger than anything your code loads into memory<\/strong> (for example, image processing libraries often need several times the file size).<\/li>\n<\/ol>\n<p>We covered how to choose realistic values for <code>memory_limit<\/code>, <code>max_execution_time<\/code> and <code>upload_max_filesize<\/code> in detail in our guide <a href=\"https:\/\/www.dchost.com\/blog\/en\/php-ayarlarini-dogru-yapmak-memory_limit-max_execution_time-ve-upload_max_filesize-kac-olmali\/\">on choosing the right PHP memory_limit, max_execution_time and upload_max_filesize for your website<\/a>. 
For truly large media files, you\u2019ll usually go above the typical 64\u2013128 MB defaults.<\/p>\n<h3><span id=\"Practical_sizing_examples_for_large_files\">Practical sizing examples for large files<\/span><\/h3>\n<p>Let\u2019s say you want to support up to <strong>2 GB<\/strong> uploads (for long videos or raw footage):<\/p>\n<ul>\n<li><code>upload_max_filesize = 2048M<\/code><\/li>\n<li><code>post_max_size = 2050M<\/code> (a bit higher to account for form fields and overhead)<\/li>\n<li><code>memory_limit<\/code>: depends on what you do with the file. If you just move it to storage, 256M\u2013512M might be enough. If you transcode it in PHP (usually not ideal), you\u2019ll need much more or an external worker.<\/li>\n<li><code>max_input_time<\/code>: if a user with 10 Mbps upload speed sends 2 GB, the transfer alone takes roughly 27 minutes in theory (2048 MB \u00d7 8 \u2248 16,384 megabits at 10 Mbps), and real connections are usually slower. For classic single\u2011POST uploads, a value like 3600 seconds (1 hour) is safer, but this is exactly why we prefer chunked uploads (we\u2019ll get there).<\/li>\n<li><code>max_execution_time<\/code>: however long your PHP code needs after the upload finishes (file validation, DB writes, moving to storage). Often 300\u2013600 seconds is fine.<\/li>\n<\/ul>\n<p>On a busy shared environment, it is risky to set every limit to \u201chuge\u201d values, because a couple of stuck uploads can eat resources. 
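<\/p>\n<p>Pulled together, the sizing above might look like this as a <code>php.ini<\/code> \/ <code>.user.ini<\/code> fragment (illustrative values; in particular, <code>memory_limit<\/code> depends on what your code actually does with the file):<\/p>\n<pre class=\"language-ini line-numbers\"><code class=\"language-ini\">; Illustrative limits for classic single-POST uploads up to 2 GB\nupload_max_filesize = 2048M\npost_max_size = 2050M\nmemory_limit = 512M\nmax_input_time = 3600\nmax_execution_time = 600\n<\/code><\/pre>\n<p>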
That\u2019s why many of our customers move high\u2011volume, large\u2011file projects to a <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a> or <a href=\"https:\/\/www.dchost.com\/dedicated-server\">dedicated server<\/a> at dchost.com, where they fully control PHP\u2011FPM pools and per\u2011site limits.<\/p>\n<h3><span id=\"FPM_and_FastCGI_timeouts_you_must_align\">FPM and FastCGI timeouts you must align<\/span><\/h3>\n<p>If you use PHP\u2011FPM behind Nginx or Apache, you also have timeouts on that layer:<\/p>\n<ul>\n<li><strong>PHP\u2011FPM<\/strong>: <code>request_terminate_timeout<\/code> (per pool)<\/li>\n<li><strong>Nginx<\/strong>: <code>fastcgi_read_timeout<\/code>, <code>fastcgi_send_timeout<\/code><\/li>\n<li><strong>Apache with PHP\u2011FPM<\/strong> (via proxy_fcgi or mod_fcgid): similar proxy timeouts<\/li>\n<\/ul>\n<p>These must be equal to or larger than your <code>max_execution_time<\/code> and <code>max_input_time<\/code>. Otherwise, the web server may give up on PHP and return 504 errors while PHP is still happily processing the upload. If you are tuning PHP\u2011FPM pools for a CMS or e\u2011commerce store, our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/wordpress-ve-woocommerce-icin-php-fpm-ayarlari-pm-pm-max_children-ve-pm-max_requests-hesaplama-rehberi\/\">PHP\u2011FPM settings for WordPress and WooCommerce<\/a> gives you a solid foundation for balancing concurrency and memory usage.<\/p>\n<h2><span id=\"Step_2_Align_Nginx_or_Apache_with_PHP_for_Large_Bodies\">Step 2: Align Nginx or Apache with PHP for Large Bodies<\/span><\/h2>\n<h3><span id=\"Nginx_body_size_and_timeout_settings\">Nginx body size and timeout settings<\/span><\/h3>\n<p>Nginx is often the first line that rejects a large upload. At minimum, you should review:<\/p>\n<ul>\n<li><strong><code>client_max_body_size<\/code><\/strong>: Maximum size of the request body. 
This must be \u2265 <code>post_max_size<\/code>.<\/li>\n<li><strong><code>client_body_timeout<\/code><\/strong>: How long Nginx waits between two successive reads of the request body (a per\u2011read timeout, not a cap on the total transfer time). This should tolerate slow or unstable connections.<\/li>\n<li><strong><code>proxy_read_timeout<\/code> \/ <code>fastcgi_read_timeout<\/code><\/strong>: How long Nginx waits for a response from the backend (PHP\u2011FPM, upstream API, etc.).<\/li>\n<li><strong><code>send_timeout<\/code><\/strong>: How long Nginx is willing to wait while sending data to the client.<\/li>\n<\/ul>\n<p>Example Nginx snippet for 2 GB uploads to a PHP\u2011FPM backend:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">server {\n    client_max_body_size 2050M;\n    client_body_timeout 60m;\n\n    location ~ \\.php$ {\n        include fastcgi_params;\n        fastcgi_pass unix:\/run\/php-fpm.sock;\n        fastcgi_read_timeout 3600s;\n        fastcgi_send_timeout 3600s;\n    }\n}\n<\/code><\/pre>\n<p>Note that extremely long timeouts increase the risk of hanging workers occupying resources. 
This is another reason we advocate for chunked uploads: each HTTP request becomes smaller and short\u2011lived, so timeouts can stay reasonable while still supporting \u201chuge\u201d overall uploads.<\/p>\n<h3><span id=\"Apache_equivalents_for_big_uploads\">Apache equivalents for big uploads<\/span><\/h3>\n<p>On Apache, similar concepts exist but with different names and modules:<\/p>\n<ul>\n<li><strong><code>LimitRequestBody<\/code><\/strong> (in Apache or per\u2011VirtualHost): caps the size of the request body.<\/li>\n<li><strong><code>Timeout<\/code><\/strong>: controls many operations, including how long Apache waits for data from the client or backend.<\/li>\n<li><strong><code>ProxyTimeout<\/code><\/strong>: if you proxy to PHP\u2011FPM or another backend, this controls backend wait time.<\/li>\n<li><strong>mod_fcgid \/ proxy_fcgi parameters<\/strong>: define per\u2011request limits and timeouts when using FastCGI.<\/li>\n<\/ul>\n<p>In PHP\u2011as\u2011Apache\u2011module setups, <code>LimitRequestBody<\/code> plus PHP <code>upload_max_filesize<\/code> and <code>post_max_size<\/code> are the main limiters. On more modern Apache + PHP\u2011FPM environments (which we run on many dchost.com VPS and dedicated servers), the Apache proxy timeouts must be aligned with PHP\u2011FPM\u2019s own <code>request_terminate_timeout<\/code>.<\/p>\n<h3><span id=\"Detecting_which_layer_is_failing\">Detecting which layer is failing<\/span><\/h3>\n<p>When an upload fails, you need to quickly answer \u201cwho dropped the ball?\u201d. Two tools help here:<\/p>\n<ul>\n<li><strong>HTTP status codes<\/strong>: 413 suggests body size, 408\/504 suggests timeouts. 
We\u2019ve covered how to read these in depth in our guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/hosting-sunucu-loglarini-okumayi-ogrenin-apache-ve-nginx-ile-4xx-5xx-hatalarini-teshis-rehberi\/\">reading web server logs to diagnose 4xx\u20135xx errors on Apache and Nginx<\/a>.<\/li>\n<li><strong>Server logs<\/strong>: Nginx error logs, Apache error logs, and PHP\u2011FPM logs usually print a clear reason (e.g. \u201cclient intended to send too large body\u201d or \u201cupstream timed out\u201d).<\/li>\n<\/ul>\n<p>Once you know whether the rejection came from the web server or from PHP, adjusting the corresponding limit becomes straightforward.<\/p>\n<h2><span id=\"Step_3_Why_Chunked_Uploads_Beat_One_Huge_POST\">Step 3: Why Chunked Uploads Beat One Huge POST<\/span><\/h2>\n<p>Scaling uploads purely by inflating limits works up to a point, but it cannot solve all real\u2011world issues. Mobile connections drop, laptops sleep, corporate proxies reset long\u2011lived connections, and users sometimes need to upload multiple gigabytes. That\u2019s where <strong>chunked<\/strong> (resumable) uploads come in.<\/p>\n<h3><span id=\"How_chunked_uploads_work_conceptually\">How chunked uploads work conceptually<\/span><\/h3>\n<p>Instead of sending a 4 GB file in a single HTTP request, the browser splits it into many smaller parts (e.g. 5\u201320 MB chunks). The flow looks like this:<\/p>\n<ol>\n<li>The client asks the server to <strong>start an upload session<\/strong> and receives an <strong>upload ID<\/strong>.<\/li>\n<li>The browser reads the file in slices and uploads each slice as a separate request, including the upload ID and chunk index.<\/li>\n<li>The server stores each chunk (disk, object storage, or temp directory) and records progress (e.g. 
in a database or cache).<\/li>\n<li>When all chunks are uploaded, the client sends a <strong>finalize<\/strong> request; the server assembles chunks into the final file and moves it to permanent storage.<\/li>\n<li>If the connection is lost mid\u2011way, the client asks the server which chunks exist already and resumes from where it left off.<\/li>\n<\/ol>\n<p>This can be implemented in various ways (Tus protocol, custom REST endpoints, S3 multipart uploads, etc.), but the principle is the same: each HTTP request is small and quick, but the total result can be many gigabytes.<\/p>\n<h3><span id=\"Benefits_of_chunked_uploads_for_PHP_and_your_web_server\">Benefits of chunked uploads for PHP and your web server<\/span><\/h3>\n<ul>\n<li><strong>Shorter request lifetimes<\/strong>: Each chunk completes in seconds, not tens of minutes. Nginx\/Apache timeouts can stay conservative.<\/li>\n<li><strong>Lower per\u2011request memory usage<\/strong>: PHP only processes one chunk at a time, not the whole file.<\/li>\n<li><strong>Automatic resume<\/strong>: Temporary network errors don\u2019t kill the whole upload; the client retries only missing chunks.<\/li>\n<li><strong>Parallelism<\/strong>: Advanced clients can upload multiple chunks in parallel if your backend and bandwidth allow it.<\/li>\n<\/ul>\n<p>In practice, this means you no longer need to set <code>client_max_body_size<\/code>, <code>post_max_size<\/code> and <code>upload_max_filesize<\/code> equal to the total file size. They only need to fit a single chunk, e.g. 20\u201350 MB. The total upload size is then enforced at the application level (e.g. 
\u201cthis upload session must not exceed 10 GB\u201d).<\/p>\n<h3><span id=\"Implementing_a_chunked_backend_in_PHP\">Implementing a chunked backend in PHP<\/span><\/h3>\n<p>A typical PHP backend for chunked uploads uses:<\/p>\n<ul>\n<li>A table or key\u2011value store to track upload sessions (upload ID, user ID, total size, number of chunks, status)<\/li>\n<li>A temporary directory or object storage bucket for unassembled chunks<\/li>\n<li>Endpoints such as <code>POST \/upload\/init<\/code>, <code>PUT \/upload\/{id}\/chunk\/{index}<\/code>, <code>POST \/upload\/{id}\/complete<\/code><\/li>\n<\/ul>\n<p>When finalizing, your PHP code will:<\/p>\n<ol>\n<li>Verify that all expected chunks exist and sizes match what was declared at init time.<\/li>\n<li>Concatenate chunks in the correct order, preferably via streaming (e.g. <code>fopen<\/code>\/<code>fwrite<\/code> on the server) rather than loading everything into memory.<\/li>\n<li>Move the final file into its long\u2011term storage location (local disk, NFS, or S3\u2011compatible object storage).<\/li>\n<li>Clean up temporary chunks and mark the upload as completed.<\/li>\n<\/ol>\n<p>On dchost.com VPS or dedicated servers you can tune PHP\u2011FPM, file system and storage stack (NVMe, RAID, etc.) so this assembly step is fast and doesn\u2019t block other workloads.<\/p>\n<h2><span id=\"Step_4_Putting_a_CDN_in_Front_of_Big_Uploads\">Step 4: Putting a CDN in Front of Big Uploads<\/span><\/h2>\n<p>CDNs are traditionally thought of for <strong>downloads<\/strong>: serving media quickly to users worldwide. 
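<\/p>\n<p>Before we bring CDNs into the upload path, here is a minimal PHP sketch of the streaming finalize step described above (the chunk directory layout, file naming and session lookup are illustrative assumptions, not a complete implementation):<\/p>\n<pre class=\"language-php line-numbers\"><code class=\"language-php\">&lt;?php\n\/\/ Minimal sketch: concatenate uploaded chunks by streaming, not in memory.\n\/\/ $chunkDir, $chunkCount and $finalPath would come from your upload-session store.\nfunction assemble_chunks(string $chunkDir, int $chunkCount, string $finalPath): bool\n{\n    $out = fopen($finalPath, 'wb');\n    if ($out === false) {\n        return false;\n    }\n    for ($i = 0; $i &lt; $chunkCount; $i++) {\n        $chunkFile = $chunkDir . '\/' . $i . '.part';\n        if (!is_file($chunkFile)) {       \/\/ a missing chunk aborts the finalize\n            fclose($out);\n            return false;\n        }\n        $in = fopen($chunkFile, 'rb');\n        stream_copy_to_stream($in, $out); \/\/ copies in small buffers, low memory\n        fclose($in);\n    }\n    fclose($out);\n    \/\/ Caller should verify the final size against the size declared at init,\n    \/\/ then delete the chunk directory and mark the session complete.\n    return true;\n}\n<\/code><\/pre>\n<p>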
When you introduce <strong>uploads<\/strong> into the picture, you have three main patterns to choose from:<\/p>\n<h3><span id=\"Pattern_1_Uploads_bypass_the_CDN_downloads_go_through_it\">Pattern 1: Uploads bypass the CDN, downloads go through it<\/span><\/h3>\n<p>The simplest approach:<\/p>\n<ul>\n<li><strong>Uploads<\/strong> go directly to your origin (Nginx\/Apache + PHP on your hosting at dchost.com).<\/li>\n<li><strong>Public access<\/strong> to media (images, videos, documents) is via the CDN, which pulls from your origin or from an object storage endpoint.<\/li>\n<\/ul>\n<p>This keeps cache logic simple: all upload endpoints are set to <code>Cache-Control: no-store<\/code>, so the CDN does not cache any POST\/PUT responses. For many small to medium projects, this is perfectly sufficient. Our article <a href=\"https:\/\/www.dchost.com\/blog\/en\/cdn-nedir-ne-zaman-gerekir-trafik-ve-lokasyona-gore-karar-rehberi\/\">\u201cWhat Is a CDN and When Do You Really Need One?\u201d<\/a> walks you through when this pattern makes sense.<\/p>\n<h3><span id=\"Pattern_2_Directtoobjectstorage_uploads_with_signed_URLs\">Pattern 2: Direct\u2011to\u2011object\u2011storage uploads with signed URLs<\/span><\/h3>\n<p>Once uploads or media sizes become serious (think: video platforms, photography archives, LMS systems with huge lecture videos), pushing all that traffic through PHP and your primary web servers becomes wasteful. 
A common pattern is:<\/p>\n<ol>\n<li>The user authenticates with your app.<\/li>\n<li>Your backend issues a short\u2011lived <strong>signed upload URL<\/strong> for object storage (S3\u2011compatible endpoint, possibly behind a CDN).<\/li>\n<li>The browser uses that URL to perform a multipart\/chunked upload <strong>directly<\/strong> to storage, without PHP proxying the file bytes.<\/li>\n<li>Your app receives a completion callback or checks the object\u2019s existence to finalize metadata.<\/li>\n<\/ol>\n<p>This offloads heavy I\/O from your application servers and scales more smoothly. We explored this pattern in depth for WordPress in our guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/wordpress-medyani-s3e-tasiyalim-mi-cdn-imzali-url-ve-onbellek-gecersizlestirme-adim-adim\/\">offloading WordPress media to S3\u2011compatible storage with CDN, signed URLs and cache invalidation<\/a>. The same ideas apply to custom PHP\/Laravel applications.<\/p>\n<h3><span id=\"Pattern_3_Chunked_uploads_via_the_CDN_to_your_origin\">Pattern 3: Chunked uploads via the CDN to your origin<\/span><\/h3>\n<p>Some CDNs fully proxy your entire API, including upload endpoints. In that case:<\/p>\n<ul>\n<li>Ensure that upload paths are <strong>never cached<\/strong> (e.g. <code>Cache-Control: no-store<\/code>, CDN rules to bypass cache).<\/li>\n<li>Increase CDN request body and timeout limits if they exist for large uploads.<\/li>\n<li>Keep chunk sizes moderate (e.g. 5\u201320 MB) to minimize the chance of any single chunk hitting those limits.<\/li>\n<\/ul>\n<p>This makes engine\u2011level timeouts much less of a problem because each request is small and short\u2011lived. But you still pay CDN bandwidth for the upload traffic, so be sure to measure. 
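<\/p>\n<p>On the origin side, the cache\u2011bypass and per\u2011chunk sizing rules above can be expressed as a small Nginx location block (the <code>\/upload\/<\/code> path is an illustrative assumption):<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\"># Upload endpoints: never cached, sized for a single chunk rather than the whole file\nlocation \/upload\/ {\n    client_max_body_size 50M;   # fits one 20-50 MB chunk\n    add_header Cache-Control \"no-store\" always;\n    # ... plus your usual fastcgi_pass \/ proxy_pass directives for the PHP backend\n}\n<\/code><\/pre>\n<p>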
Our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/cdn-trafik-maliyetlerini-kontrol-altina-almak-origin-pull-cache-hit-ratio-ve-bolgesel-fiyatlandirma\/\">controlling CDN bandwidth costs with origin pull, cache hit ratio and regional pricing<\/a> explains how to keep those bills predictable.<\/p>\n<h3><span id=\"CDN_strategy_for_media_downloads\">CDN strategy for media downloads<\/span><\/h3>\n<p>Uploads are only half the story. Once big media files are stored, you want them to be delivered quickly and cheaply:<\/p>\n<ul>\n<li>Use aggressive <code>Cache-Control<\/code> headers and versioned URLs for media that rarely change.<\/li>\n<li>Leverage <strong>origin shield<\/strong> or a single \u201cshield\u201d region so your origin sees fewer cache misses.<\/li>\n<li>Transcode and optimize images (WebP\/AVIF) and videos to reduce size without sacrificing quality.<\/li>\n<\/ul>\n<p>We shared a real\u2011world pipeline in our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/goruntu-optimizasyonu-boru-hatti-nasil-kurulur-avif-webp-origin-shield-ve-akilli-cache-key-ile-cdn-faturaniza-nefes-aldirin\/\">building an image optimization pipeline with AVIF\/WebP, origin shield and smarter cache keys to cut CDN costs<\/a>. Combine that with a robust upload strategy and you get a system where users can upload large source files, but the CDN serves lean, optimized renditions.<\/p>\n<h2><span id=\"Step_5_Storage_Choices_and_File_Lifecycle\">Step 5: Storage Choices and File Lifecycle<\/span><\/h2>\n<p>Where you store big uploads is just as important as how you upload them. Local disk on a single server might be fine for a small project, but quickly becomes a bottleneck for large archives and multi\u2011region delivery.<\/p>\n<h3><span id=\"Object_vs_block_vs_file_storage_for_media_uploads\">Object vs block vs file storage for media uploads<\/span><\/h3>\n<p>Broadly, you have three kinds of storage to consider:<\/p>\n<ul>\n<li><strong>Block storage<\/strong> (e.g. 
local SSD\/NVMe, SAN volumes): great for databases and low\u2011latency workloads.<\/li>\n<li><strong>File storage<\/strong> (NFS, SMB): shared file systems; good when multiple servers need to see the same files.<\/li>\n<li><strong>Object storage<\/strong> (S3\u2011compatible): ideal for large, immutable objects like images and videos, with built\u2011in versioning and lifecycle policies.<\/li>\n<\/ul>\n<p>For most big\u2011media projects we host, we recommend a hybrid: application runs on VPS or dedicated servers at dchost.com, while the heavy media goes to S3\u2011compatible object storage, typically fronted by a CDN. If you are deciding which storage type fits your workloads, our guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/object-storage-vs-block-storage-vs-file-storage-web-uygulamalari-ve-yedekler-icin-dogru-secim\/\">object storage vs block storage vs file storage<\/a> walks through pros and cons with web apps and backups in mind.<\/p>\n<h3><span id=\"Lifecycle_management_and_backups\">Lifecycle management and backups<\/span><\/h3>\n<p>Big uploads create big responsibilities:<\/p>\n<ul>\n<li><strong>Lifecycle policies<\/strong>: automatically move old, rarely accessed media to cheaper tiers.<\/li>\n<li><strong>Replication<\/strong>: copy critical files to another region or provider for disaster recovery.<\/li>\n<li><strong>Backups<\/strong>: even object storage benefits from versioning and periodic offsite snapshots.<\/li>\n<\/ul>\n<p>On dchost.com infrastructure we often pair object storage with a separate backup flow (e.g. S3 versioning + periodic sync to another region). That way, a buggy deployment or a script error cannot easily wipe out an entire media library.<\/p>\n<h2><span id=\"Security_Validation_and_Operational_Tips\">Security, Validation and Operational Tips<\/span><\/h2>\n<p>Handling large media uploads safely is not only about avoiding 413\/504 errors. 
A few additional practices save you from messy incidents later.<\/p>\n<h3><span id=\"File_type_validation_and_antivirus_scanning\">File type validation and antivirus scanning<\/span><\/h3>\n<ul>\n<li><strong>Validate by both extension and MIME type<\/strong>, and ideally inspect headers\/content where feasible.<\/li>\n<li><strong>Whitelist<\/strong> allowed types (e.g. MP4, MOV, JPEG, PNG) rather than trying to blacklist bad ones.<\/li>\n<li>Run suspicious or high\u2011risk uploads through <strong>antivirus scanning<\/strong> (e.g. ClamAV in a separate worker) before making them publicly accessible.<\/li>\n<\/ul>\n<h3><span id=\"Authentication_quotas_and_rate_limiting\">Authentication, quotas and rate limiting<\/span><\/h3>\n<ul>\n<li>Lock upload endpoints behind proper <strong>authentication and authorization<\/strong>.<\/li>\n<li>Enforce <strong>per\u2011user and per\u2011project quotas<\/strong> on total storage and number of files.<\/li>\n<li>Add <strong>rate limiting<\/strong> to APIs handling uploads to mitigate abuse or DoS\u2011like behaviour.<\/li>\n<\/ul>\n<h3><span id=\"Logging_monitoring_and_alerting\">Logging, monitoring and alerting<\/span><\/h3>\n<p>Big uploads are easy to break accidentally when changing timeouts, reverse proxy rules or CDN configuration. We recommend:<\/p>\n<ul>\n<li>Structured logs for upload endpoints (upload ID, user ID, size, duration, status).<\/li>\n<li>Dashboards tracking success\/error rates and average upload duration.<\/li>\n<li>Alerts when error rates spike, or when storage usage crosses thresholds.<\/li>\n<\/ul>\n<p>If you are already instrumenting your stack with Prometheus + Grafana or other tools, add specific metrics for upload success and latency. 
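<\/p>\n<p>Even without a full metrics stack, one structured log line per upload event goes a long way (field names below are illustrative; the variables would come from your upload-session tracking, with <code>microtime(true)<\/code> timestamps):<\/p>\n<pre class=\"language-php line-numbers\"><code class=\"language-php\">&lt;?php\n\/\/ One JSON log line per completed (or failed) upload session.\n$event = [\n    'event'       =&gt; 'upload_complete',\n    'upload_id'   =&gt; $uploadId,\n    'user_id'     =&gt; $userId,\n    'bytes'       =&gt; $totalBytes,\n    'duration_ms' =&gt; (int) round(($finishedAt - $startedAt) * 1000),\n    'status'      =&gt; 'ok', \/\/ or 'failed', 'aborted'\n];\nerror_log(json_encode($event) . PHP_EOL, 3, '\/var\/log\/app\/uploads.log');\n<\/code><\/pre>\n<p>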
This lets you catch regressions after deployments or configuration changes on your dchost.com servers.<\/p>\n<h2><span id=\"Putting_It_All_Together_Example_Architectures\">Putting It All Together: Example Architectures<\/span><\/h2>\n<h3><span id=\"Scenario_1_WordPress_site_with_heavy_media_library\">Scenario 1: WordPress site with heavy media library<\/span><\/h3>\n<p>For a photography or online course site running on WordPress, a proven architecture for big media is:<\/p>\n<ul>\n<li>WordPress runs on a VPS with PHP\u2011FPM, tuned PHP limits and Nginx\/Apache body\/timeouts configured as described above.<\/li>\n<li>Uploads go directly from users to S3\u2011compatible storage using a plugin that handles multipart uploads and signed URLs.<\/li>\n<li>A CDN fronts the storage endpoint for fast global delivery, with proper cache rules for media URLs.<\/li>\n<li>Thumbnails and derivatives are generated automatically and optimized to WebP\/AVIF to reduce CDN bandwidth.<\/li>\n<\/ul>\n<p>If you are planning such a setup, combine this article with our guides on <a href=\"https:\/\/www.dchost.com\/blog\/en\/wordpress-yedekleme-stratejileri-paylasimli-hosting-ve-vpste-otomatik-yedek-ve-geri-yukleme\/\">WordPress backup strategies<\/a> and <a href=\"https:\/\/www.dchost.com\/blog\/en\/gorsel-agirlikli-siteler-icin-hosting-disk-cdn-ve-webp-avif-stratejisi\/\">hosting and CDN strategy for image\u2011heavy websites<\/a> to make sure performance and data safety grow together.<\/p>\n<h3><span id=\"Scenario_2_SPA_frontend_PHP_API_for_video_uploads\">Scenario 2: SPA frontend + PHP API for video uploads<\/span><\/h3>\n<p>For a modern Single Page Application (React\/Vue\/Angular) talking to a PHP backend API, a clean pattern is:<\/p>\n<ul>\n<li>SPA and API hosted on the same domain for simpler cookies, CORS and SEO, as we described in our article on <a 
href=\"https:\/\/www.dchost.com\/blog\/en\/react-vue-ve-angular-single-page-applicationlari-ayni-alan-adinda-api-ile-host-etmek-nginx-yonlendirme-ve-ssl-mimarisi\/\">why put the SPA and API on one domain<\/a>.<\/li>\n<li>Upload UI uses a chunked\/resumable library that talks to PHP endpoints or directly to an S3\u2011compatible service via signed URLs.<\/li>\n<li>API itself is fully proxied through a CDN, with cache bypass for upload endpoints and longer timeouts only on those specific routes.<\/li>\n<li>All heavy transcoding is offloaded to worker queues or external media pipelines, not done in the API request itself.<\/li>\n<\/ul>\n<p>This keeps API servers at dchost.com responsive even under heavy upload load, because each request is short and involves minimal processing.<\/p>\n<h2><span id=\"Summary_and_Next_Steps\">Summary and Next Steps<\/span><\/h2>\n<p>Reliable big media uploads are not about a single magic directive; they are about aligning PHP limits, Nginx\/Apache body sizes, timeouts and your overall application design. Classic \u201cone huge POST\u201d uploads quickly hit the ceiling on slow or unstable networks, even if you crank up all limits to gigabytes and timeouts to hours. Chunked, resumable uploads solve this by making each HTTP request small and quick, allowing you to keep conservative server timeouts while still supporting multi\u2011gigabyte files. Pair that with object storage, signed URLs and a CDN that is correctly configured not to cache upload endpoints, and you get a stack where users can confidently upload large videos, images and archives without drama.<\/p>\n<p>If you\u2019re planning such an architecture, start by reviewing your PHP and web server limits, decide whether chunked uploads make sense for your use case, and choose a storage\/CDN combination that fits your growth plan. 
At dchost.com we provide the full spectrum\u2014from shared hosting to NVMe VPS, dedicated servers and colocation\u2014so you can start small and scale your media workload without re\u2011architecting everything. If you\u2019d like help translating this strategy into concrete settings on your current plan or a new server, our team is happy to review your requirements and propose a clean, future\u2011proof setup.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Big media uploads look simple on paper: a user selects a 2 GB video, clicks \u201cUpload\u201d, and expects it to work. In reality, that single action passes through browser constraints, HTTP limits, PHP configuration, Nginx\/Apache timeouts, storage performance and finally a CDN in front of everything. If any layer is misconfigured, you end up with [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3132,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-3131","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/3131","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=3131"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/3131\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/3132"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?p
arent=3131"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=3131"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=3131"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}