Big Media File Upload Strategy: PHP Limits, Web Server Timeouts and Chunked Uploads with a CDN

Big media uploads look simple on paper: a user selects a 2 GB video, clicks “Upload”, and expects it to work. In reality, that single action passes through browser constraints, HTTP limits, PHP configuration, Nginx/Apache timeouts, storage performance and finally a CDN in front of everything. If any layer is misconfigured, you end up with half‑uploaded files, mysterious 413/504 errors, or users who give up after the third failed attempt. In this article we’ll walk through how we design big upload flows for customers at dchost.com: from PHP limits to web server timeouts, from classic single‑POST uploads to resilient chunked/resumable strategies that work nicely with a CDN and modern object storage. The goal is simple: give you a practical blueprint so your large media uploads “just work” instead of becoming a recurring support ticket.

Why Big Media Uploads Keep Failing

Before tuning anything, it helps to understand where big uploads typically break. The path from the user’s browser to your storage involves at least four layers:

  • The browser and network (unstable connections, mobile devices, corporate proxies)
  • The web server (Nginx or Apache) receiving the HTTP request
  • PHP (and PHP‑FPM/FastCGI) that parses the body and runs your application
  • Your storage backend and potentially a CDN in front of it

Common symptoms map directly to one of these layers:

  • HTTP 413 Request Entity Too Large: Web server or PHP body size limit
  • HTTP 408 or 504 Gateway Timeout: Upload or backend processing took too long
  • PHP file upload errors (UPLOAD_ERR_INI_SIZE / UPLOAD_ERR_FORM_SIZE): PHP ini or form limits
  • Truncated or corrupt files: Storage write failures or app‑level bugs during assembly

Your upload strategy has to keep these layers aligned: the web server’s body size limit cannot be lower than PHP’s, timeouts must be realistic for your users’ bandwidth, and the application must be designed so a temporary network hiccup does not destroy a 4 GB upload. Let’s start where most people get stuck: PHP.

Step 1: Get Your PHP Upload Limits Under Control

Core PHP directives that gate big uploads

PHP has several configuration directives that directly control how large and how long an upload is allowed to run. You configure them in php.ini, per‑site PHP settings in your control panel, or custom .user.ini files (depending on your hosting plan at dchost.com).

  • upload_max_filesize: Maximum size of a single uploaded file
  • post_max_size: Maximum size of the entire POST body (all files + form fields)
  • memory_limit: Max RAM a PHP script may use
  • max_execution_time: Max time (in seconds) a PHP script is allowed to run
  • max_input_time: Max time PHP waits for input (upload/POST data)

Two basic rules keep you out of trouble:

  1. post_max_size must be ≥ upload_max_filesize. Otherwise PHP rejects the request before your app sees it.
  2. memory_limit must be comfortably larger than anything your code loads into memory (for example, image processing libraries often need several times the file size).

We covered how to choose realistic values for memory_limit, max_execution_time and upload_max_filesize in detail in our guide on choosing the right PHP memory_limit, max_execution_time and upload_max_filesize for your website. For truly large media files, you’ll usually go above the typical 64–128 MB defaults.

Practical sizing examples for large files

Let’s say you want to support up to 2 GB uploads (for long videos or raw footage); the php.ini sketch after this list pulls the values together:

  • upload_max_filesize = 2048M
  • post_max_size = 2050M (a bit higher to account for form fields and overhead)
  • memory_limit: depends on what you do with the file. If you just move it to storage, 256M–512M might be enough. If you transcode it in PHP (usually not ideal), you’ll need much more or an external worker.
  • max_input_time: if a user with 10 Mbps upload speed sends 2 GB, it takes roughly half an hour even under ideal conditions, and usually longer in practice. For classic single‑POST uploads, a value like 3600 seconds (1 hour) is safer, but this is exactly why we prefer chunked uploads (we’ll get there).
  • max_execution_time: however long your PHP code needs after the upload finishes (file validation, DB writes, moving to storage). Often 300–600 seconds is fine.
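
Pulled together, a minimal php.ini sketch for this 2 GB single‑POST scenario might look like the following; treat the numbers as starting points to adapt to your own workload, not fixed recommendations:

; php.ini (or a per-site override) – single-POST uploads up to ~2 GB
upload_max_filesize = 2048M
post_max_size = 2050M
; enough for moving/validating files; transcoding needs far more, or an external worker
memory_limit = 512M
; time allowed to receive the request body on slow links
max_input_time = 3600
; post-upload work: validation, DB writes, moving to storage
max_execution_time = 600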

On a busy shared environment, it is risky to set every limit to “huge” values, because a couple of stuck uploads can eat resources. That’s why many of our customers move high‑volume, large‑file projects to a VPS or dedicated server at dchost.com, where they fully control PHP‑FPM pools and per‑site limits.

FPM and FastCGI timeouts you must align

If you use PHP‑FPM behind Nginx or Apache, you also have timeouts on that layer:

  • PHP‑FPM: request_terminate_timeout (per pool)
  • Nginx: fastcgi_read_timeout, fastcgi_send_timeout
  • Apache with PHP‑FPM (via proxy_fcgi or mod_fcgid): similar proxy timeouts

These must be equal to or larger than your max_execution_time and max_input_time. Otherwise, the web server may give up on PHP and return 504 errors while PHP is still happily processing the upload. If you are tuning PHP‑FPM pools for a CMS or e‑commerce store, our article on PHP‑FPM settings for WordPress and WooCommerce gives you a solid foundation for balancing concurrency and memory usage.
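
As a rough illustration of how these values line up for a classic single‑POST flow (the exact numbers depend on your workload and are only placeholders):

; php.ini
max_input_time = 3600
max_execution_time = 600

; PHP-FPM pool (e.g. www.conf) – at least as long as the PHP limits above
request_terminate_timeout = 3600s

# Nginx vhost talking to that pool
fastcgi_read_timeout 3600s;
fastcgi_send_timeout 3600s;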

Step 2: Align Nginx or Apache with PHP for Large Bodies

Nginx body size and timeout settings

Nginx is often the first line that rejects a large upload. At minimum, you should review:

  • client_max_body_size: Maximum size of the request body. This must be ≥ post_max_size.
  • client_body_timeout: How long Nginx waits for the body to be sent. This should tolerate slow or unstable connections.
  • proxy_read_timeout / fastcgi_read_timeout: How long Nginx waits for a response from the backend (PHP‑FPM, upstream API, etc.).
  • send_timeout: How long Nginx is willing to wait while sending data to the client.

Example Nginx snippet for 2 GB uploads to a PHP‑FPM backend:

server {
    client_max_body_size 2050M;
    client_body_timeout 60m;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm.sock;
        fastcgi_read_timeout 3600s;
        fastcgi_send_timeout 3600s;
    }
}

Note that extremely long timeouts increase the risk of hanging workers occupying resources. This is another reason we advocate for chunked uploads: each HTTP request becomes smaller and short‑lived, so timeouts can stay reasonable while still supporting “huge” overall uploads.

Apache equivalents for big uploads

On Apache, similar concepts exist but with different names and modules:

  • LimitRequestBody (server‑wide, per VirtualHost or per Directory): caps the size of the request body, in bytes.
  • Timeout: controls many operations, including how long Apache waits for data from the client or backend.
  • ProxyTimeout: if you proxy to PHP‑FPM or another backend, this controls backend wait time.
  • mod_fcgid / proxy_fcgi parameters: define per‑request limits and timeouts when using FastCGI.

In PHP‑as‑Apache‑module setups, LimitRequestBody plus PHP upload_max_filesize and post_max_size are the main limiters. On more modern Apache + PHP‑FPM environments (which we run on many dchost.com VPS and dedicated servers), the Apache proxy timeouts must be aligned with PHP‑FPM’s own request_terminate_timeout.
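
As a hedged sketch, an Apache 2.4 virtual host for large uploads to PHP‑FPM over proxy_fcgi might look like this; the values are illustrative and must be kept in line with your PHP settings:

<VirtualHost *:443>
    # LimitRequestBody takes bytes; 2147483647 is the directive's documented maximum (~2 GB).
    # Anything larger than this is another argument for chunked or direct-to-storage uploads.
    LimitRequestBody 2147483647
    Timeout 3600
    ProxyTimeout 3600

    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php-fpm.sock|fcgi://localhost"
    </FilesMatch>
</VirtualHost>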

Detecting which layer is failing

When an upload fails, you need to quickly answer “who dropped the ball?”. Two tools help here:

  • HTTP status codes: 413 points to a body size limit, while 408/504 point to timeouts. We’ve covered how to read these in depth in our guide on reading web server logs to diagnose 4xx–5xx errors on Apache and Nginx.
  • Server logs: Nginx error logs, Apache error logs, and PHP‑FPM logs usually print a clear reason (e.g. “client intended to send too large body” or “upstream timed out”).

Once you know whether the rejection came from the web server or from PHP, adjusting the corresponding limit becomes straightforward.

Step 3: Why Chunked Uploads Beat One Huge POST

Scaling uploads purely by inflating limits works up to a point, but it cannot solve all real‑world issues. Mobile connections drop, laptops sleep, corporate proxies reset long‑lived connections, and users sometimes need to upload multiple gigabytes. That’s where chunked (resumable) uploads come in.

How chunked uploads work conceptually

Instead of sending a 4 GB file in a single HTTP request, the browser splits it into many smaller parts (e.g. 5–20 MB chunks). The flow looks like this:

  1. The client asks the server to start an upload session and receives an upload ID.
  2. The browser reads the file in slices and uploads each slice as a separate request, including the upload ID and chunk index.
  3. The server stores each chunk (disk, object storage, or temp directory) and records progress (e.g. in a database or cache).
  4. When all chunks are uploaded, the client sends a finalize request; the server assembles chunks into the final file and moves it to permanent storage.
  5. If the connection is lost mid‑way, the client asks the server which chunks exist already and resumes from where it left off.

This can be implemented in various ways (Tus protocol, custom REST endpoints, S3 multipart uploads, etc.), but the principle is the same: each HTTP request is small and quick, but the total result can be many gigabytes.

Benefits of chunked uploads for PHP and your web server

  • Shorter request lifetimes: Each chunk completes in seconds, not tens of minutes. Nginx/Apache timeouts can stay conservative.
  • Lower per‑request memory usage: PHP only processes one chunk at a time, not the whole file.
  • Automatic resume: Temporary network errors don’t kill the whole upload; the client retries only missing chunks.
  • Parallelism: Advanced clients can upload multiple chunks in parallel if your backend and bandwidth allow it.

In practice, this means you no longer need to set client_max_body_size, post_max_size and upload_max_filesize equal to the total file size. They only need to fit a single chunk, e.g. 20–50 MB. The total upload size is then enforced at the application level (e.g. “this upload session must not exceed 10 GB”).
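
For example, with 20 MB chunks the per‑request limits can stay small even though the overall upload may run into gigabytes (the values below are illustrative placeholders):

# Nginx: only a single chunk has to fit into one request
client_max_body_size 64M;
client_body_timeout 120s;
fastcgi_read_timeout 120s;

; PHP: per-chunk limits instead of per-file limits
upload_max_filesize = 64M
post_max_size = 64M
max_input_time = 120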

Implementing a chunked backend in PHP

A typical PHP backend for chunked uploads uses the following building blocks (a minimal sketch of the chunk endpoint follows the list):

  • A table or key‑value store to track upload sessions (upload ID, user ID, total size, number of chunks, status)
  • A temporary directory or object storage bucket for unassembled chunks
  • Endpoints such as POST /upload/init, PUT /upload/{id}/chunk/{index}, POST /upload/{id}/complete
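
Here is a minimal sketch of the chunk endpoint, assuming a plain‑PHP front controller that has already authenticated the user; the paths, parameter names and error handling are illustrative, not a drop‑in implementation:

<?php
// PUT /upload/{id}/chunk/{index} – store one chunk on disk (illustrative sketch).

$uploadId   = preg_replace('/[^A-Za-z0-9_-]/', '', $_GET['id'] ?? '');
$chunkIndex = (int) ($_GET['index'] ?? -1);
$chunkDir   = sys_get_temp_dir() . '/uploads/' . $uploadId;

if ($uploadId === '' || $chunkIndex < 0) {
    http_response_code(400);
    exit('invalid upload id or chunk index');
}
if (!is_dir($chunkDir) && !mkdir($chunkDir, 0700, true)) {
    http_response_code(500);
    exit('cannot create chunk directory');
}

// Stream the request body straight to disk, so memory usage stays at the
// stream buffer size rather than the chunk size.
$in  = fopen('php://input', 'rb');
$out = fopen($chunkDir . '/' . $chunkIndex . '.part', 'wb');
$written = stream_copy_to_stream($in, $out);
fclose($in);
fclose($out);

// A real implementation would also verify a per-chunk checksum and update
// the upload session record (DB or cache) keyed by $uploadId.
http_response_code(200);
header('Content-Type: application/json');
echo json_encode(['chunk' => $chunkIndex, 'bytes' => $written]);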

When finalizing, your PHP code will (a streaming assembly sketch follows the list):

  1. Verify that all expected chunks exist and sizes match what was declared at init time.
  2. Concatenate chunks in the correct order, preferably via streaming (e.g. fopen/fwrite on the server) rather than loading everything into memory.
  3. Move the final file into its long‑term storage location (local disk, NFS, or S3‑compatible object storage).
  4. Clean up temporary chunks and mark the upload as completed.
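
A sketch of steps 2–4, assuming the chunk layout from the previous example; $chunkDir, $totalChunks and $finalPath would come from your upload session record:

<?php
// Assemble chunks into the final file by streaming, never loading a whole file into memory.

function assembleChunks(string $chunkDir, int $totalChunks, string $finalPath): void
{
    $out = fopen($finalPath, 'wb');
    if ($out === false) {
        throw new RuntimeException("cannot open $finalPath for writing");
    }

    for ($i = 0; $i < $totalChunks; $i++) {
        $chunkPath = $chunkDir . '/' . $i . '.part';
        if (!is_file($chunkPath)) {
            fclose($out);
            throw new RuntimeException("missing chunk $i");
        }
        $in = fopen($chunkPath, 'rb');
        stream_copy_to_stream($in, $out); // copies in small buffers
        fclose($in);
    }
    fclose($out);

    // Clean up temporary chunks once the final file is safely in place.
    for ($i = 0; $i < $totalChunks; $i++) {
        @unlink($chunkDir . '/' . $i . '.part');
    }
    @rmdir($chunkDir);
}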

On dchost.com VPS or dedicated servers you can tune PHP‑FPM, file system and storage stack (NVMe, RAID, etc.) so this assembly step is fast and doesn’t block other workloads.

Step 4: Putting a CDN in Front of Big Uploads

CDNs are traditionally thought of for downloads: serving media quickly to users worldwide. When you introduce uploads into the picture, you have three main patterns to choose from:

Pattern 1: Uploads bypass the CDN, downloads go through it

The simplest approach:

  • Uploads go directly to your origin (Nginx/Apache + PHP on your hosting at dchost.com).
  • Public access to media (images, videos, documents) is via the CDN, which pulls from your origin or from an object storage endpoint.

This keeps cache logic simple: all upload endpoints are set to Cache-Control: no-store, so the CDN does not cache any POST/PUT responses. For many small to medium projects, this is perfectly sufficient. Our article “What Is a CDN and When Do You Really Need One?” walks you through when this pattern makes sense.
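
At the origin, this can be as simple as marking the upload routes explicitly; a hedged Nginx sketch, assuming the upload API lives under /api/upload/ (the path is illustrative):

location /api/upload/ {
    # Never cache upload responses on the CDN or in browsers
    add_header Cache-Control "no-store" always;
    # ... fastcgi_pass / proxy_pass to the application as usual ...
}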

Pattern 2: Direct‑to‑object‑storage uploads with signed URLs

Once uploads or media sizes become serious (think: video platforms, photography archives, LMS systems with huge lecture videos), pushing all that traffic through PHP and your primary web servers becomes wasteful. A common pattern is:

  1. The user authenticates with your app.
  2. Your backend issues a short‑lived signed upload URL for object storage (S3‑compatible endpoint, possibly behind a CDN).
  3. The browser uses that URL to perform a multipart/chunked upload directly to storage, without PHP proxying the file bytes.
  4. Your app receives a completion callback or checks the object’s existence to finalize metadata.

This offloads heavy I/O from your application servers and scales more smoothly. We explored this pattern in depth for WordPress in our guide on offloading WordPress media to S3‑compatible storage with CDN, signed URLs and cache invalidation. The same ideas apply to custom PHP/Laravel applications.
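
As an illustration of step 2, here is a sketch using the AWS SDK for PHP against an S3‑compatible endpoint; the bucket, region, endpoint and expiry are placeholders, and credentials would normally come from your secrets management:

<?php
// Sketch: issue a short-lived signed PUT URL for direct-to-storage uploads.
// Assumes the AWS SDK for PHP (composer require aws/aws-sdk-php).

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version'                 => 'latest',
    'region'                  => 'us-east-1',
    'endpoint'                => 'https://s3.example-storage.com', // placeholder endpoint
    'use_path_style_endpoint' => true,
    'credentials'             => [
        'key'    => getenv('S3_KEY'),
        'secret' => getenv('S3_SECRET'),
    ],
]);

$cmd = $s3->getCommand('PutObject', [
    'Bucket'      => 'media-uploads',
    'Key'         => 'videos/' . bin2hex(random_bytes(8)) . '.mp4',
    'ContentType' => 'video/mp4',
]);

// The browser can PUT the file (or a part of it) to this URL for the next 20 minutes.
$signedUrl = (string) $s3->createPresignedRequest($cmd, '+20 minutes')->getUri();

header('Content-Type: application/json');
echo json_encode(['uploadUrl' => $signedUrl]);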

Pattern 3: Chunked uploads via the CDN to your origin

Some CDNs fully proxy your entire API, including upload endpoints. In that case:

  • Ensure that upload paths are never cached (e.g. Cache-Control: no-store, CDN rules to bypass cache).
  • Increase CDN request body and timeout limits if they exist for large uploads.
  • Keep chunk sizes moderate (e.g. 5–20 MB) to minimize the chance of any single chunk hitting those limits.

This makes server‑side timeouts much less of a problem because each request is small and short‑lived. But you still pay CDN bandwidth for the upload traffic, so be sure to measure. Our article on controlling CDN bandwidth costs with origin pull, cache hit ratio and regional pricing explains how to keep those bills predictable.

CDN strategy for media downloads

Uploads are only half the story. Once big media files are stored, you want them to be delivered quickly and cheaply:

  • Use aggressive Cache-Control headers and versioned URLs for media that rarely change.
  • Leverage origin shield or a single “shield” region so your origin sees fewer cache misses.
  • Transcode and optimize images (WebP/AVIF) and videos to reduce size without sacrificing quality.

We shared a real‑world pipeline in our article on building an image optimization pipeline with AVIF/WebP, origin shield and smarter cache keys to cut CDN costs. Combine that with a robust upload strategy and you get a system where users can upload large source files, but the CDN serves lean, optimized renditions.
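
At the origin, versioned media paths can then carry long‑lived cache headers; a small Nginx sketch (path and lifetime are illustrative):

location /media/ {
    # Safe to cache "forever" because the URL changes whenever the content changes
    add_header Cache-Control "public, max-age=31536000, immutable" always;
}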

Step 5: Storage Choices and File Lifecycle

Where you store big uploads is just as important as how you upload them. Local disk on a single server might be fine for a small project, but quickly becomes a bottleneck for large archives and multi‑region delivery.

Object vs block vs file storage for media uploads

Broadly, you have three kinds of storage to consider:

  • Block storage (e.g. local SSD/NVMe, SAN volumes): great for databases and low‑latency workloads.
  • File storage (NFS, SMB): shared file systems; good when multiple servers need to see the same files.
  • Object storage (S3‑compatible): ideal for large, immutable objects like images and videos, with built‑in versioning and lifecycle policies.

For most big‑media projects we host, we recommend a hybrid: application runs on VPS or dedicated servers at dchost.com, while the heavy media goes to S3‑compatible object storage, typically fronted by a CDN. If you are deciding which storage type fits your workloads, our guide on object storage vs block storage vs file storage walks through pros and cons with web apps and backups in mind.

Lifecycle management and backups

Big uploads create big responsibilities:

  • Lifecycle policies: automatically move old, rarely accessed media to cheaper tiers.
  • Replication: copy critical files to another region or provider for disaster recovery.
  • Backups: even object storage benefits from versioning and periodic offsite snapshots.

On dchost.com infrastructure we often pair object storage with a separate backup flow (e.g. S3 versioning + periodic sync to another region). That way, a buggy deployment or a script error cannot easily wipe out an entire media library.

Security, Validation and Operational Tips

Handling large media uploads safely is not only about avoiding 413/504 errors. A few additional practices save you from messy incidents later.

File type validation and antivirus scanning

  • Validate by both extension and MIME type, and ideally inspect headers/content where feasible (a PHP sketch follows this list).
  • Whitelist allowed types (e.g. MP4, MOV, JPEG, PNG) rather than trying to blacklist bad ones.
  • Run suspicious or high‑risk uploads through antivirus scanning (e.g. ClamAV in a separate worker) before making them publicly accessible.
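
A small sketch of server‑side type checking with PHP’s fileinfo extension; the allowlist and variable names are illustrative:

<?php
// Validate an assembled upload against an allowlist of MIME types.
// Extension checks alone are not enough, because file names are user-controlled.

$allowed = [
    'video/mp4'       => 'mp4',
    'video/quicktime' => 'mov',
    'image/jpeg'      => 'jpg',
    'image/png'       => 'png',
];

$finfo = new finfo(FILEINFO_MIME_TYPE);
$mime  = $finfo->file($assembledFilePath); // $assembledFilePath comes from the finalize step

if (!isset($allowed[$mime])) {
    throw new RuntimeException("rejected upload: unexpected MIME type $mime");
}

// Derive the stored extension from the detected type, not from the client-supplied name.
$extension = $allowed[$mime];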

Authentication, quotas and rate limiting

  • Lock upload endpoints behind proper authentication and authorization.
  • Enforce per‑user and per‑project quotas on total storage and number of files.
  • Add rate limiting to APIs handling uploads to mitigate abuse or DoS‑like behaviour (an Nginx sketch follows this list).
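
A simple per‑IP rate limit on upload routes in Nginx might look like this; the zone size, rate and path are illustrative:

# In the http block: track clients by IP, allow on average 5 upload requests per second
limit_req_zone $binary_remote_addr zone=uploads:10m rate=5r/s;

# In the server block, applied only to upload endpoints
location /api/upload/ {
    limit_req zone=uploads burst=20 nodelay;
    # ... fastcgi_pass / proxy_pass as usual ...
}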

Logging, monitoring and alerting

Big uploads are easy to break accidentally when changing timeouts, reverse proxy rules or CDN configuration. We recommend:

  • Structured logs for upload endpoints (upload ID, user ID, size, duration, status).
  • Dashboards tracking success/error rates and average upload duration.
  • Alerts when error rates spike, or when storage usage crosses thresholds.

If you are already instrumenting your stack with Prometheus + Grafana or other tools, add specific metrics for upload success and latency. This lets you catch regressions after deployments or configuration changes on your dchost.com servers.

Putting It All Together: Example Architectures

Scenario 1: WordPress site with heavy media library

For a photography or online course site running on WordPress, a proven architecture for big media is:

  • WordPress runs on a VPS with PHP‑FPM, tuned PHP limits and Nginx/Apache body/timeouts configured as described above.
  • Uploads go directly from users to S3‑compatible storage using a plugin that handles multipart uploads and signed URLs.
  • A CDN fronts the storage endpoint for fast global delivery, with proper cache rules for media URLs.
  • Thumbnails and derivatives are generated automatically and optimized to WebP/AVIF to reduce CDN bandwidth.

If you are planning such a setup, combine this article with our guides on WordPress backup strategies and hosting and CDN strategy for image‑heavy websites to make sure performance and data safety grow together.

Scenario 2: SPA frontend + PHP API for video uploads

For a modern Single Page Application (React/Vue/Angular) talking to a PHP backend API, a clean pattern is:

  • SPA and API hosted on the same domain for simpler cookies, CORS and SEO, as we described in our article on why put the SPA and API on one domain.
  • Upload UI uses a chunked/resumable library that talks to PHP endpoints or directly to an S3‑compatible service via signed URLs.
  • API itself is fully proxied through a CDN, with cache bypass for upload endpoints and longer timeouts only on those specific routes.
  • All heavy transcoding is offloaded to worker queues or external media pipelines, not done in the API request itself.

This keeps API servers at dchost.com responsive even under heavy upload load, because each request is short and involves minimal processing.

Summary and Next Steps

Reliable big media uploads are not about a single magic directive; they are about aligning PHP limits, Nginx/Apache body sizes, timeouts and your overall application design. Classic “one huge POST” uploads quickly hit the ceiling on slow or unstable networks, even if you crank up all limits to gigabytes and timeouts to hours. Chunked, resumable uploads solve this by making each HTTP request small and quick, allowing you to keep conservative server timeouts while still supporting multi‑gigabyte files. Pair that with object storage, signed URLs and a CDN that is correctly configured not to cache upload endpoints, and you get a stack where users can confidently upload large videos, images and archives without drama.

If you’re planning such an architecture, start by reviewing your PHP and web server limits, decide whether chunked uploads make sense for your use case, and choose a storage/CDN combination that fits your growth plan. At dchost.com we provide the full spectrum—from shared hosting to NVMe VPS, dedicated servers and colocation—so you can start small and scale your media workload without re‑architecting everything. If you’d like help translating this strategy into concrete settings on your current plan or a new server, our team is happy to review your requirements and propose a clean, future‑proof setup.

Frequently Asked Questions

Which PHP settings do I need to change to allow large file uploads?

For large file uploads you must adjust several PHP directives together. Set upload_max_filesize to your desired per-file limit and ensure post_max_size is at least as large (preferably slightly higher). Increase max_input_time so PHP has enough time to receive the body, and set max_execution_time to cover any validation or post-processing you do after the upload finishes. memory_limit must be high enough for what your code does with the file, especially if you use image or video libraries. Finally, align PHP-FPM’s request_terminate_timeout with these values so the PHP worker is not killed prematurely.

Can I just raise PHP and web server limits instead of implementing chunked uploads?

You can often get away with simply increasing limits for moderate sizes (for example 100–500 MB files on stable desktop connections). However, as soon as you deal with multi-gigabyte files, mobile users, or global audiences on slower networks, classic single-POST uploads become fragile. Connections drop, proxies reset long requests, and you start hitting timeouts or 413 errors despite generous limits. Chunked uploads turn one huge request into many small ones that are easier on PHP, Nginx/Apache and the CDN, and they allow resume after temporary failures. For serious media workloads, chunked or multipart uploads are the more robust long-term choice.

What role does a CDN play in large media uploads and downloads?

A CDN shines mainly on the download side: it caches images, videos and other media close to users, reducing latency and load on your origin servers. For uploads, you usually configure the CDN to bypass cache on POST/PUT endpoints and just act as a smart reverse proxy with good connectivity. In more advanced setups, your app issues signed URLs so browsers can upload directly to S3-compatible storage that sits behind the CDN, offloading heavy I/O from PHP. Either way, once files are stored, you can use aggressive caching, origin shield and optimized formats (WebP/AVIF) to serve large media quickly and keep bandwidth costs under control.

Should files be uploaded through my PHP application or directly to object storage?

Both patterns work, but they have different trade-offs. Uploading via PHP is simpler to implement initially and keeps all traffic and access control in one place, but it also means your web servers handle every byte of large uploads, which can become expensive and limit scalability. Direct-to-object-storage uploads with signed URLs shift the heavy lifting to the storage service and CDN, while your PHP app only handles metadata and authorization. This typically scales better for big media workloads. Many teams start with PHP-mediated uploads, then move to signed direct uploads once file sizes or traffic grow beyond what their app servers can comfortably handle.

What chunk size should I use for chunked or multipart uploads?

Chunk size is a balance between overhead and reliability. Very small chunks (for example 1 MB) have low failure impact but high per-request overhead and more metadata to track. Very large chunks (100–200 MB) reduce overhead but are more likely to hit timeouts or fail on unstable connections. In practice, 5–20 MB per chunk works well for most web applications: it keeps each HTTP request short while still making good use of bandwidth. You can tune chunk size based on your users’ average upload speeds, your CDN or reverse proxy limits, and how fast your backend can persist each chunk to storage.