Sharing Files Between Multiple Web Servers: NFS vs SSHFS vs rsync

Once you move beyond a single web server, file management becomes one of the first real architecture questions you must solve. Two or more application nodes need access to the same code, media uploads, logs, or even configuration snippets. If you do not design this layer carefully, you end up with weird bugs: missing images on some nodes, plugins updated on only one server, or race conditions when two machines overwrite the same file. In this article, we will look at three classic approaches to sharing files between multiple web servers: NFS (Network File System), SSHFS and rsync. Each one solves a slightly different problem and has very different performance, reliability and operational characteristics. At dchost.com, we see all three in real production environments on VPS, dedicated and colocation servers. Let’s walk through how they work, where they shine, where they hurt, and how to combine them with other tools like object storage, Redis and CI/CD so your multi-server setup stays predictable and easy to operate.

What Problem Are We Really Solving?

Before choosing NFS, SSHFS or rsync, it helps to be very clear about what kind of data you are trying to share between web servers. In practice, you almost always deal with one or more of these categories:

  • Application code (PHP, Node.js, Laravel, WordPress core, theme and plugin files).
  • User uploads (images, PDFs, videos, avatars, reports, export files).
  • Generated assets (build artifacts, compiled JS/CSS, resized thumbnails).
  • Sessions and cache data (PHP sessions, object cache, queues, temporary files).
  • Logs (access logs, error logs, application logs).

Not all of these should be handled with the same mechanism. For example, file-based sessions shared over NFS can work, but using Redis or Memcached is usually both faster and more robust. We have a separate deep dive on this in our article choosing PHP session and cache storage: files vs Redis vs Memcached.

Likewise, user uploads can live on a central file server mounted via NFS, or you can push them to object storage and serve via CDN, as explained in our guide on offloading WordPress and WooCommerce media to S3/MinIO-compatible object storage. When you understand which data is hot (read/write constantly), which is append-only, and which can be immutable, you can pick the right combination of NFS, SSHFS, rsync and possibly object storage.

Quick Overview: NFS, SSHFS and rsync

NFS (Network File System)

NFS exposes a directory on one server as a network filesystem that other servers can mount as if it were a local disk. The kernel handles read/write operations over TCP/UDP. All web nodes see the same path (for example /var/www/shared) with standard POSIX semantics (permissions, ownership, file locking, symlinks, etc.).

This makes NFS a natural choice for scenarios where multiple web servers must concurrently read and write the same files: user uploads, shared configuration, even home directories.

SSHFS

SSHFS is a FUSE-based filesystem that tunnels file operations over SSH. You run an SSH server on the “storage” machine, then mount a directory on other servers using the SSH protocol. No extra daemon, no special ports, authentication piggybacks on your SSH keys.

It feels like “NFS over SSH”, but internally it is very different: it lives in userspace, has more overhead and slightly different semantics.

rsync

rsync is not a filesystem at all; it is a file synchronization tool. You run it periodically, or trigger it from events like deployments, to push or pull changes between directories. rsync sends only the differences (deltas), can preserve permissions, hardlinks and ownership, and works over SSH.

This is perfect for one-way replication: publishing new code releases to multiple web servers, syncing uploads from a primary node to secondaries, or creating backups.

NFS: Central Shared Storage for Multiple Web Servers

How NFS Works in a Web Hosting Setup

With NFS, you typically designate one server (or HA cluster) as your file server. You export a directory, for example:

/srv/web-uploads  10.0.0.0/24(rw,sync,no_root_squash)

Note that no_root_squash is the permissive choice here; we come back to safer export options in the security section below.

On each web server, you mount that export to a local path:

mount -t nfs 10.0.0.10:/srv/web-uploads /var/www/uploads

From your application’s point of view, it is just a regular directory; the application does not care that the files actually live on network storage.
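
To make the mount persistent across reboots, you would typically add an entry to /etc/fstab. Here is a minimal sketch, reusing the example IP and paths above:

10.0.0.10:/srv/web-uploads  /var/www/uploads  nfs  rw,hard,noatime,nfsvers=4.2  0  0

The hard option makes the client retry indefinitely instead of returning I/O errors to the application; pair it with monitoring so a failed NFS server does not silently hang your web processes.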

Pros of NFS for Web Clusters

  • Single source of truth: All nodes see the same files at the same path in real time. Upload an image on server A, and it is instantly available on servers B and C.
  • POSIX semantics: Permissions, symlinks, file locking and directory operations behave like a local filesystem (with some caveats around locking and caching).
  • Mature and widely supported: NFS is decades old, available on virtually all Linux distributions and well-understood by sysadmins.
  • Good performance on LAN: With a properly tuned 1 Gbps or 10 Gbps network, NFS can deliver very solid throughput and low latency, especially when backed by NVMe or SSD storage. For a deeper look at disk characteristics, see our article on NVMe vs SATA SSD vs HDD for hosting and backups.
  • Centralized backup: Back up one NFS export and you have effectively backed up all shared files for the whole cluster.

Cons and Pitfalls of NFS

  • Single point of failure (unless you build HA): If the NFS server goes down, all web nodes lose access to uploads or shared data. You can mitigate this with redundant servers and failover IPs, but that adds complexity.
  • Performance bottleneck: All file I/O goes through the NFS server and its network link. For very busy sites with many small writes (for example, heavy comments or uploads), this can become a bottleneck.
  • Locking and caching quirks: Although NFS supports locking, it’s more fragile than local filesystems. Some applications that expect perfect local semantics may behave oddly under heavy load.
  • Security configuration required: You must carefully restrict which IPs can mount exports, use firewall rules, and configure options like root_squash properly.
  • Not ideal across WAN: NFS is designed for low-latency LANs. Over high-latency or lossy links it can feel very slow and unreliable.

When NFS Makes Sense

In our experience at dchost.com, NFS works well in these scenarios:

  • Small to medium WordPress or PHP clusters where you need shared wp-content/uploads or similar directories across 2–5 web servers, all in the same data center.
  • Shared home directories for multiple development or admin users on a fleet of servers.
  • Legacy applications that expect a single shared filesystem and would be difficult to refactor towards object storage.

You should be more cautious using NFS for:

  • Very high-traffic e‑commerce with huge bursts of concurrent writes.
  • Session storage, if you can instead use Redis/Memcached or database-backed sessions.
  • Cross-region setups where latency between servers is significant.

SSHFS: Convenient, But Rarely Right for Production Traffic

How SSHFS Works

SSHFS lets you mount a remote directory over SSH using FUSE. For example:

sshfs user@10.0.0.10:/srv/uploads /var/www/uploads \
  -o IdentityFile=/home/user/.ssh/id_rsa

From the application’s point of view, it looks like a regular folder, just like NFS. Under the hood, though, every file operation is translated into SSH messages handled in user space. That has important implications.
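
If you keep an SSHFS mount around anyway, a few mount options make it less fragile when the connection blips. A minimal sketch, reusing the example host above (reconnect is an sshfs option; the ServerAlive settings are passed through to ssh):

sshfs user@10.0.0.10:/srv/uploads /var/www/uploads \
  -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
  -o IdentityFile=/home/user/.ssh/id_rsa

With these options, a dropped connection is retried instead of hanging the mount forever, although in-flight operations can still fail with I/O errors.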

Advantages of SSHFS

  • Very easy to set up: If you have SSH access, you basically have everything you need. No NFS server, no exports, no special kernel modules.
  • Encrypted by default: All traffic is wrapped in SSH. You don’t need to add extra encryption layers.
  • Fine-grained access using SSH keys: You can quickly give a developer or a tool access to a subset of files with a dedicated SSH user.
  • Great for ad‑hoc mounts: For one-off maintenance tasks, manual file inspection or quick migrations, SSHFS is very handy.

Limitations and Risks of SSHFS in Web Hosting

  • Performance overhead: SSH encryption + user-space FUSE + chattier protocol means higher CPU and latency compared to NFS on the same LAN.
  • Less robust under load: Under high concurrency and I/O, SSHFS is more likely to misbehave, hang or produce strange errors than a kernel-level filesystem like NFS.
  • Connection sensitivity: If SSH drops, your mount can hang or block processes until timeouts occur. That’s not something you want in the hot path of your web requests.
  • FUSE semantics: Some features like locking and metadata handling are not identical to a local filesystem; depending on the app, this can cause edge-case bugs.

When SSHFS Is (and Is Not) a Good Idea

Where SSHFS can shine:

  • Temporary admin access to a remote file tree from a management server or engineer’s workstation.
  • Low-traffic internal tools where convenience is more important than peak performance.
  • One-off migrations when you want to browse or copy files interactively before switching to rsync or another tool.

Where we generally do not recommend SSHFS at dchost.com:

  • As the primary shared filesystem for a production web cluster.
  • For session or cache storage in front of a busy PHP/Laravel application.
  • For databases or any kind of transactional storage.

If SSHFS is currently in your production data path, a good medium-term goal is to migrate that workload to NFS, rsync-based distribution or object storage, and keep SSHFS purely for admin tasks.

rsync: Efficient File Synchronization, Not Shared Storage

How rsync Fits into Multi-Server Architectures

rsync synchronizes files between two directories. The most common patterns in multi-web-server setups are:

  • From CI/CD to servers: Push new releases from a build server to each application node.
  • From a primary to secondaries: Sync user uploads from a main node (where uploads are written) to all others (which mostly read).
  • From production to backup: Copy files off-site for backup and disaster recovery.

For example, a simple code deployment command might look like this:

rsync -az --delete ./release/ web1:/var/www/app \
  && rsync -az --delete ./release/ web2:/var/www/app

Because rsync transfers only differences, subsequent deployments are much faster than copying everything from scratch.

Pros of rsync

  • Bandwidth-efficient: Differential transfers (based on checksums and file sizes) mean you move only the changed blocks.
  • Flexible topologies: One-to-one, one-to-many (fan-out), or even many-to-one (aggregating logs).
  • Works over SSH: Reuses your SSH keys and access controls; traffic can be encrypted.
  • Good for immutable or append-only data: Code deployments, static assets, nightly exports, etc.
  • Great for backups: With --link-dest you can create efficient snapshot-like backups using hardlinks.
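
For example, a snapshot-style nightly backup with --link-dest might look like this sketch (hypothetical paths; files unchanged since the previous snapshot are hardlinked instead of copied again, so each snapshot looks complete but costs only the delta in disk space):

rsync -az --delete \
  --link-dest=/backups/uploads/2025-12-30 \
  /var/www/uploads/ /backups/uploads/2025-12-31/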

Cons and Design Gotchas with rsync

  • Not real-time shared storage: There is always some delay between changes on the source and when they appear on targets (depending on how often rsync runs).
  • Conflict-prone if you write on multiple nodes: If two servers modify the same file before the next sync, the last rsync run will win, overwriting the other version.
  • Operational complexity: You have to schedule jobs, handle failures, monitor runtimes and think about what happens if a sync runs longer than expected.
  • Temporary inconsistencies: During a big sync, some servers may have the new version while others still serve the old version, unless you design around this with blue-green techniques.

rsync for Code Deployments and Blue-Green Releases

rsync becomes extremely powerful when used together with release directories and symlinks. You sync each new release into a separate directory (for example /var/www/releases/2025-12-31_1200) and then atomically switch a current symlink to point to the new release.

We describe this pattern step-by-step in our article zero‑downtime CI/CD to a VPS with rsync + symlinks + systemd. The big advantage is that you can deploy to many servers consistently and roll back instantly if something goes wrong.
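
A minimal sketch of the switch step with hypothetical paths (ln -sfn prepares the new symlink and the mv -Tf rename makes the cutover atomic, so no request ever sees a half-switched document root):

RELEASE=/var/www/releases/2025-12-31_1200
ln -sfn "$RELEASE" /var/www/current.new
mv -Tf /var/www/current.new /var/www/current

Run this on each web server after its rsync completes; your vhost or PHP-FPM pool keeps pointing at /var/www/current the whole time.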

rsync for Upload Replication

For user uploads, a common pattern is:

  1. Designate one web server as the primary uploader. All forms or APIs that accept uploads hit this node (often via load balancer routing or internal API calls).
  2. Run a periodic rsync job from the primary to all secondary servers (or to a central storage server) for the uploads directory.
  3. Serve uploads from any node; in the worst case, a newly uploaded file may be briefly available only on the primary until the next sync completes.

This can be a good compromise when you want to avoid NFS but still keep everything on local disks. However, if you need global availability or higher durability, object storage plus CDN is usually a better long-term strategy.
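
As a sketch, the periodic sync from the primary can be as small as one cron entry (hypothetical host, paths and schedule; flock -n prevents overlapping runs, and --delete mirrors deletions from the primary, so make sure that is what you want):

*/5 * * * * flock -n /var/lock/uploads-sync.lock rsync -az --delete /var/www/uploads/ web2:/var/www/uploads/ >> /var/log/uploads-sync.log 2>&1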

Designing a Shared Files Strategy for Your Web Cluster

Instead of asking “Which is better, NFS, SSHFS or rsync?”, the more useful question is: Which combination fits the data types and traffic patterns of my application?

Data-Type-Driven Choices

  • Application code:
    • Best option: rsync-based deployments (possibly plus Git) to each node, using release directories and symlinks for zero downtime.
    • Avoid: Storing live code on NFS, if possible. Local disks are faster and more resilient; NFS for code adds network latency to every PHP include.
  • User uploads:
    • Option A: NFS shared directory across nodes (simple, real-time, needs reliable NFS server).
    • Option B: Single-writer rsync replication from a primary node to others (slight delay, but less central bottleneck).
    • Option C: Object storage + CDN, where web servers upload once and serve via CDN; see our comparison of object storage vs block vs file storage for deeper guidance.
  • Sessions and cache data:
    • Recommended: Redis, Memcached or a database, not a shared filesystem (see the php.ini sketch after this list).
    • If you must use files, NFS works better than SSHFS, but understand the locking overhead and failure modes. Our article on PHP session and cache storage choices goes into detail.
  • Logs:
    • Often best stored locally and then shipped to a central log system (ELK, Loki, etc.). We have a practical guide on centralizing logs for multiple servers.
    • You can use rsync (or log shippers) to aggregate logs; NFS is rarely necessary here.
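
As a rough illustration of the session recommendation above, moving PHP sessions to Redis can be as small as two php.ini directives; this sketch assumes the phpredis extension is installed and Redis listens on a private IP such as 10.0.0.20:

session.save_handler = redis
session.save_path = "tcp://10.0.0.20:6379"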

Example Architecture: Small WordPress Cluster

Imagine you are running 3 WordPress nodes behind a load balancer on dchost.com VPS instances or dedicated servers.

  • Code: Deployed via CI/CD using rsync + release directories to each node. No shared filesystem for code.
  • Uploads: Stored on an NFS server mounted at /var/www/html/wp-content/uploads on all nodes.
  • Sessions and object cache: Stored in Redis on a small dedicated instance, as described in our guide on WordPress object cache with Redis or Memcached.
  • Backups: NFS exports and databases backed up to offsite object storage with tools like restic or rclone.

This keeps web nodes essentially stateless; if one fails, you can spin up another, mount NFS, deploy code with rsync and rejoin the cluster quickly.

Example Architecture: Static Asset Build + rsync Fan-Out

For a Laravel or SPA frontend project, you might build all assets in CI, then:

  • Produce a versioned build directory (for example, build-2025-12-31_1200).
  • Use rsync to push that directory to a static assets path on all web servers.
  • Serve assets from local disk (very fast), reference them by version in HTML to avoid cache conflicts.

No NFS here; rsync is enough because assets are immutable after each build.

Operational Considerations: Performance, Security, Backups

Performance Tuning for NFS and rsync

Whichever method you pick, you must watch resource usage. We strongly recommend monitoring your servers with tools like htop, iotop, and a metrics stack. Our article on monitoring VPS resource usage with htop, iotop, Netdata and Prometheus walks through a practical setup.

For NFS performance:

  • Use fast underlying storage (NVMe/SSD) and enough RAM for caching.
  • Ensure at least 1 Gbps networking, preferably 10 Gbps for busy clusters.
  • Consider mount options like noatime and tune read/write sizes (rsize, wsize).
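
As a sketch, a tuned mount on a web node might look like this (the values are starting points to benchmark, not universal answers; modern NFSv4 clients often negotiate large rsize/wsize automatically, so measure before and after):

mount -t nfs -o rw,noatime,hard,rsize=1048576,wsize=1048576 \
  10.0.0.10:/srv/web-uploads /var/www/uploads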

For rsync performance:

  • Schedule heavy syncs during lower-traffic periods to avoid I/O contention with live traffic (see the throttling sketch after this list).
  • Use --delete carefully to keep targets clean, but always test on a staging environment first.
  • Leverage --partial and --inplace wisely; they can speed up large-file updates but also impact rollback strategies.
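
For the scheduling point above, here is a sketch of a throttled sync that stays gentle on a busy server (hypothetical paths; --bwlimit caps throughput at roughly 20 MB/s, while ionice and nice lower the job’s I/O and CPU priority):

ionice -c2 -n7 nice -n 19 \
  rsync -az --partial --bwlimit=20000 /var/www/uploads/ web2:/var/www/uploads/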

Security Best Practices

  • NFS:
    • Restrict exports to specific subnets or IP addresses.
    • Use root_squash (unless you have a very good reason not to) to avoid remote root writing as root on the NFS server.
    • Protect the NFS server with a firewall; do not expose it directly to the public internet.
  • SSHFS and rsync over SSH:
    • Use key-based auth with restricted accounts.
    • Combine with a hardened SSH configuration (no password login, no root login), as we describe in our VPS security guides.
    • Consider command= restrictions or chroot for rsync-only users.
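
For rsync-only users, the rrsync helper script that ships with rsync can pin an SSH key to a single directory. A sketch of an authorized_keys entry, assuming rrsync is installed at /usr/bin/rrsync and the key is used only to pull uploads read-only (AAAA... stands in for the real public key):

command="/usr/bin/rrsync -ro /var/www/uploads",no-agent-forwarding,no-port-forwarding,no-pty,no-X11-forwarding ssh-ed25519 AAAA... sync@web2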

Backups and Ransomware Resilience

Regardless of whether you use NFS or rsync for live file distribution, backups must be separate from your primary storage. Do not assume that “we have two web servers” equals having backups.

A robust strategy often follows the 3‑2‑1 rule (3 copies of data, 2 different media, 1 off-site). Our article on designing a backup strategy with realistic RPO/RTO and our guide to ransomware‑resistant hosting backups provide concrete checklists you can apply on top of your file-sharing design.
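
As a sketch, an off-site backup of an NFS export with restic to S3-compatible object storage could look like this (hypothetical endpoint and paths; S3 credentials are assumed to be set via AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and restic encrypts and deduplicates on the client side):

export RESTIC_REPOSITORY=s3:https://objects.example.com/web-backups
export RESTIC_PASSWORD_FILE=/root/.restic-password
restic backup /srv/web-uploads --tag nfs-export
restic forget --keep-daily 7 --keep-weekly 4 --prune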

How to Choose: NFS vs SSHFS vs rsync

To wrap the comparison, here is a high-level decision table for multi-web-server setups:

Scenario | Recommended Tool | Notes
Shared uploads directory with concurrent reads/writes on LAN | NFS | Simple and real-time; size the NFS server and network properly. Consider object storage as you grow.
Code deployments to multiple web servers | rsync | Use rsync + release directories + symlinks for zero downtime and easy rollbacks.
Occasional admin access to remote files | SSHFS | Great for temporary mounts from a management server or local workstation.
One-way replication of uploads from a single writer | rsync or NFS | rsync if you accept small delays; NFS if you need immediate consistency.
Sessions and cache across web nodes | Redis / Memcached | Avoid file-based sessions where possible; use NFS only as a last resort.

From dchost.com’s perspective, a good default for most multi-web-server projects is:

  • Local disks for code + rsync-based deployments.
  • NFS or object storage for uploads (depending on scale and budget).
  • Redis/Memcached for sessions and cache.
  • rsync or backup tools for off-site backups.

Conclusion: Build the Right Mix for Your Multi-Server Stack

NFS, SSHFS and rsync all have their place in sharing files between multiple web servers, but they solve different problems. NFS gives you a live, POSIX-like shared filesystem and is often the easiest way to centralize uploads or shared configuration on a LAN. rsync is excellent at pushing immutable or mostly-append-only data—code releases, built assets, periodic upload replication and backups—while keeping bandwidth usage efficient. SSHFS is a convenient, secure tool for ad‑hoc or low-traffic mounts, but rarely the right choice for production traffic paths.

The key is to map each type of data in your application to the right storage and distribution pattern: local disk for code, NFS or object storage for uploads, Redis for sessions, rsync for synchronization and backups. On top of that, you need proper monitoring, security hardening and a tested backup strategy. If you are planning or refactoring a multi-server architecture, our team at dchost.com can help you design a sane mix of VPS, dedicated servers or colocation with private networking, shared storage and backup plans that match your real workload. Reach out to us with your current setup and growth plans, and we can sketch a practical, step-by-step migration path without drama.

Frequently Asked Questions

Should I use NFS or rsync to share user uploads between multiple web servers?

They solve different problems. NFS provides real-time, shared storage: all servers see the same uploads directory instantly and can read or write concurrently. This is ideal when you need immediate consistency and a simple mental model, but it requires a reliable NFS server and network. rsync is one-way synchronization; it works best when one node is the primary writer and others mostly read. You accept a small delay between upload and replication, but you avoid a central NFS bottleneck. For very large or globally distributed sites, offloading uploads to object storage and a CDN is often a better long‑term solution than either NFS or rsync alone.

Why is SSHFS not recommended for production web traffic?

SSHFS runs in user space over SSH and FUSE, which adds significant latency and CPU overhead compared to a kernel-level filesystem like NFS. Under high concurrency, it is more prone to stalls, hangs or strange error conditions, especially if the SSH connection drops. Locking semantics and metadata handling can also differ from a local filesystem, which can confuse some applications. SSHFS is fantastic for temporary admin work, quick inspections or low‑traffic internal tools, but putting it in the hot path of a busy WordPress or Laravel site is asking for performance issues and hard-to-debug failures. For production, prefer NFS, rsync, or object storage depending on your needs.

How do I deploy code to multiple web servers with rsync without downtime?

The safest pattern is to combine rsync with versioned release directories and a symlink. For each deploy, you rsync the new code into a new directory like /var/www/releases/2025-12-31_1200 on every server. Once all copies are complete and checked, you atomically update a symbolic link (for example /var/www/current) to point to the new release. Your web server’s document root or PHP-FPM pool always points to /var/www/current, so the switch happens instantly. If something goes wrong, you simply repoint the symlink to the previous release. We cover this in detail in our guide on zero‑downtime CI/CD to a VPS with rsync, symlinks and systemd.

Can I store PHP sessions on an NFS share?

It can work, but it is not ideal. Storing session files on NFS allows multiple web servers to share login state, but introduces extra latency and more complex failure modes when the NFS server or network has issues. File locking over NFS also adds overhead and can cause stalls under load. A more robust pattern is to move sessions into Redis, Memcached or your database, which gives you faster access and clearer replication options. Only consider NFS-based session storage if you cannot change the application, and make sure to test behavior under load and during NFS outages before trusting it in production.

How should I back up files shared via NFS or rsync?

Treat shared storage just like any other critical data: follow a 3‑2‑1 strategy (3 copies, 2 media, 1 off‑site). For NFS, back up from the NFS server itself using snapshot-capable filesystems or tools like rsync/restic to off‑site object storage. For rsync-managed trees, you can add a second rsync (or restic/borg) job that pushes to a backup target, ideally in a different data center. Avoid storing backups on the same physical disks as your live data. Regularly test restores on a staging server to verify that permissions, ownership, symlinks and application behavior are all correct; this is the only way to be confident your backup plan really works.