If you are deploying multiple apps on a single VPS, Docker is one of the simplest ways to keep everything neat, isolated and easy to manage. Instead of manually installing PHP, Node.js, databases and tools side by side on the same operating system, you bundle each application into its own container and run them in parallel without conflicts. On a properly configured VPS from dchost.com, this gives you a clean separation between projects, faster rollbacks, simpler updates and a much more predictable environment.
In this guide, we walk through running isolated Docker containers on a VPS from scratch, step by step. We will cover how to prepare your VPS, install Docker, run your first containers, separate projects with networks and volumes, and harden everything for real‑world use. The goal is a practical, copy‑paste‑friendly guide you can follow even if you are new to Linux and Docker. By the end, you will have a clear mental model of how containers work on a VPS and a repeatable workflow you can reuse for every new project.
Table of Contents
- 1 Why Run Docker on a VPS Instead of Installing Everything Directly?
- 2 Preparing Your VPS for Docker
- 3 Installing Docker on Your VPS (Step by Step)
- 4 Running Your First Isolated Container
- 5 Structuring Multiple Isolated Containers on One VPS
- 6 Security and Isolation Best Practices for Docker on a VPS
- 7 Monitoring, Logs and Backups for Docker on a VPS
- 8 Putting It All Together: A Simple Multi‑Container Example
- 9 Conclusion: Your Next Step with Isolated Docker on a VPS
Why Run Docker on a VPS Instead of Installing Everything Directly?
Before jumping into commands, it is worth understanding what you actually gain by putting Docker on a VPS instead of installing applications directly on the host.
Containers vs virtual machines in simple terms
A VPS is already a virtual machine: it has its own virtual CPU, RAM, storage and network, isolated from other customers on the same physical server. Docker works one layer above this. Instead of virtualizing hardware, containers share the host kernel but isolate processes, file systems, users and network namespaces.
In practice, this means:
- Lighter than VMs: containers start in milliseconds and use less RAM and disk space.
- Reproducible: you define everything in a Dockerfile and can recreate the same environment on another VPS easily.
- Isolated: projects do not see each other’s processes, ports or dependencies unless you explicitly connect them.
- Disposable: you can destroy and recreate containers without losing data stored in volumes.
We covered how this trend is reshaping hosting in our article about containerization trends in VPS technology. Here we focus on the hands‑on part.
Real‑world problems Docker solves on a VPS
- Conflicting dependencies: one app needs PHP 7.4, another PHP 8.2. With Docker, each runs in its own image.
- Port conflicts: you want two different Nginx or Node.js apps, both listening on port 80. Docker lets you map host ports independently.
- Messy upgrades: instead of upgrading libraries on the host and risking breakage, you update the image and roll back if needed.
- Simple migration: move your containers and volumes from one dchost.com VPS to another with very predictable behavior.
For small and medium projects, this gives you a nice middle ground between traditional shared hosting and full Kubernetes clusters: you keep control, but without unnecessary complexity.
Preparing Your VPS for Docker
You can run Docker on most modern Linux distributions. On dchost.com we usually recommend Ubuntu, Debian or one of the RHEL‑compatible distros like AlmaLinux or Rocky Linux. If you are still deciding, our detailed comparison of distributions in choosing a Linux distro for your VPS can help.
Minimum VPS requirements
Docker itself is lightweight, but your containers still need CPU, RAM and storage. As a baseline:
- 1 vCPU / 1 GB RAM: enough for testing, small personal projects and a few low‑traffic containers.
- 2–4 vCPU / 4–8 GB RAM: better for real applications (e.g., one or two websites, a database and background workers).
- Fast SSD / NVMe: containers do many small I/O operations; NVMe VPS plans at dchost.com will noticeably improve responsiveness.
Disk space depends on the base images (each image can be hundreds of MB) and your data. Plan at least 20–40 GB for small stacks; more if you store media, logs or databases.
Update and secure the base system
Before installing Docker, make sure your VPS is updated and not exposed with default settings. We go much deeper in our security guides like how to secure a VPS server without leaving doors open and the more checklist‑oriented VPS security hardening checklist, but here is a minimum:
- Connect via SSH with the credentials provided by dchost.com.
- Update packages (Ubuntu/Debian example):
sudo apt update && sudo apt upgrade -y
- Create a non‑root user and give it sudo access if your image did not already create one.
- Harden SSH: use key‑based authentication, disable password login if you can, and restrict or disable direct root login.
- Enable a firewall: allow only SSH (port 22) and HTTP/HTTPS (80/443) at first.
For example with ufw on Ubuntu:
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
If you want to go further with SSH protection, the article VPS SSH hardening without the drama shows modern options like FIDO2 keys and SSH certificate authorities.
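If you prefer to manage the SSH settings from the checklist above as configuration, here is a minimal sketch. The drop-in path works on current Ubuntu/Debian OpenSSH; always test a new key-based login from a second session before closing your current one.
# /etc/ssh/sshd_config.d/99-hardening.conf -- minimal example, adapt before use
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
Reload SSH afterwards with sudo systemctl reload ssh (the service may be named sshd on some distributions).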
Installing Docker on Your VPS (Step by Step)
We will use Ubuntu LTS as the main example, but the logic is similar on other distributions. Always prefer packages from Docker’s official repository over random install scripts from the internet.
1. Install prerequisites
On Ubuntu/Debian:
sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release
2. Add Docker’s official GPG key and repository
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
3. Install Docker Engine and the Compose plugin
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
This installs:
- docker-ce: main Docker engine (daemon and CLI).
- containerd: the runtime that actually runs containers.
- docker buildx / compose plugins: extended build features and the modern docker compose subcommand.
4. Enable and test Docker
sudo systemctl enable docker
sudo systemctl start docker
sudo docker run hello-world
If everything is correct, Docker will pull the hello-world image and print a confirmation message.
5. Run Docker as a non‑root user (optional but recommended)
By default you need sudo to run Docker commands. For development and low‑risk scenarios you can add your non‑root user to the docker group:
sudo usermod -aG docker $USER
# Log out and back in, then test:
docker ps
For stricter security, consider rootless Docker or user namespace remapping. We will touch on those in the hardening section and you can dive deeper in our dedicated article how we ship safer containers with rootless Docker and Podman.
Running Your First Isolated Container
Let us move beyond hello-world and run a simple web server in a container. We will use Nginx as an example, but the same model applies to any application image.
Basic Nginx container on a VPS
Start with a single command:
docker run -d \
  --name demo-nginx \
  -p 80:80 \
  nginx:alpine
What this does:
- -d runs the container in the background (detached mode).
- --name demo-nginx gives the container a predictable name.
- -p 80:80 maps port 80 on your VPS to port 80 inside the container.
- nginx:alpine is a lightweight Nginx image based on Alpine Linux.
Open your VPS IP in a browser. You should see the Nginx welcome page, served from inside the container.
Persisting content with volumes
By default, changes inside containers disappear when you recreate them. For web content, logs and databases, you should use volumes (bind mounts or Docker volumes):
mkdir -p ~/docker/demo-nginx/html
cat > ~/docker/demo-nginx/html/index.html << 'EOF'
<h1>Hello from Docker on my VPS</h1>
EOF
docker stop demo-nginx && docker rm demo-nginx
docker run -d \
  --name demo-nginx \
  -p 80:80 \
  -v ~/docker/demo-nginx/html:/usr/share/nginx/html:ro \
  nginx:alpine
Now Nginx serves your custom index.html. The important part:
- -v host:container:ro mounts your host directory into the container read‑only, keeping your content outside the container image.
- You can destroy and recreate the container without losing data in ~/docker/demo-nginx/html.
Understanding isolation: what is actually separated?
When we say “isolated Docker containers”, we are mainly talking about:
- Process isolation: processes inside a container see only their own PID namespace, not host processes.
- File system isolation: each container has its own file system view based on the image plus mounted volumes.
- Network isolation: containers live on a virtual network; only exposed ports are reachable from the VPS network interface.
- Resource limits: using cgroups, you can limit CPU and memory per container.
This is why you can safely run multiple versions of the same service (e.g., Nginx, MySQL) on one VPS without conflicts, as long as they use different ports or networks.
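For example, here is a hedged sketch of the resource-limit point above; the values are illustrative, not recommendations:
# Cap the container at 512 MB of RAM and half a CPU core (illustrative values)
docker run -d \
  --name demo-nginx-limited \
  --memory 512m \
  --cpus 0.5 \
  -p 8081:80 \
  nginx:alpine
# Verify the limits and current usage
docker stats --no-stream demo-nginx-limited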
Structuring Multiple Isolated Containers on One VPS
Once you are comfortable with a single container, the next step is running multiple applications side by side in a way that stays maintainable over time. Here are key building blocks.
Create a separate Docker network per project
Docker networks let containers talk to each other by name while being isolated from other networks. For each project or client, create a dedicated network:
docker network create project1-net
Then attach containers to this network:
docker run -d \
  --name project1-web \
  --network project1-net \
  -p 8080:80 \
  nginx:alpine
If you add a database container on the same network, the web container can reach it via its container name (e.g., project1-db) without you exposing the DB port to the whole internet.
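As a small sketch of that idea (the image and credentials are placeholders), note that the database container is started without any -p mapping, so it is reachable only from other containers on project1-net:
# No -p flag: the database is not published on the VPS network interface
docker run -d \
  --name project1-db \
  --network project1-net \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -e MYSQL_DATABASE=project1 \
  -v /srv/project1/db:/var/lib/mysql \
  mariadb:10.11
From the web or app container on the same network, the hostname project1-db resolves to this container.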
One container per responsibility
A common beginner mistake is trying to put everything in a single container: web server, application runtime, database, cron jobs, etc. This makes updates and scaling harder. A better pattern is:
- web: Nginx or Caddy for static files and reverse proxy.
- app: PHP‑FPM, Node.js, Python, etc., serving the application.
- db: MySQL, MariaDB, PostgreSQL, Redis, etc.
- worker: background job runners, queues, schedulers.
You then connect these containers on the same project network and expose only the web container’s HTTP/HTTPS ports to the outside world.
Using docker compose for bigger stacks
For more than a couple of containers, typing long docker run commands quickly becomes painful. The typical next step is declaring your stack in a docker-compose.yml file and running everything with a single command:
docker compose up -d
Compose is ideal for common setups like WordPress, Laravel or custom microservices. If you are interested in a production‑ready example, see our practical guide WordPress on Docker Compose with Nginx, MariaDB, Redis and automatic backups. Even if you are not running WordPress, the patterns apply to most PHP and Node.js apps.
Restart policies to keep containers alive
On a VPS you want your containers to start automatically after a reboot or a crash. Add a restart policy to your docker run or compose definitions:
- --restart unless-stopped: restart on failure and on boot, unless you manually stop the container.
- --restart on-failure: restart only when the container exits with an error.
Example:
docker run -d \
  --name demo-nginx \
  --restart unless-stopped \
  -p 80:80 \
  nginx:alpine
Security and Isolation Best Practices for Docker on a VPS
Docker’s defaults are reasonable for development, but a VPS that is reachable from the internet requires a bit more care. The good news: with a few habits you dramatically reduce risk while keeping things simple.
1. Limit the Docker API and daemon exposure
- Do not expose Docker’s TCP API to the public internet unless you really know what you are doing.
- By default Docker listens on a Unix socket (/var/run/docker.sock). Keep it that way and only allow trusted users to access it.
2. Prefer non‑root containers
Many official images still run their processes as root inside the container by default. The container boundary still applies, but if an attacker ever escapes a container running as root, the consequences are far more serious than escaping one running as an unprivileged user.
Practical steps:
- Choose images that document non‑root usage (e.g., specifying USER in the Dockerfile or via environment variables).
- Override the user when starting the container if the image supports it: docker run -u 1000:1000 ...
- Investigate rootless Docker or user namespace remapping for stricter isolation, as discussed in our container security deep dive.
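A minimal sketch of both approaches, assuming the image and application tolerate running as an unprivileged user (file permissions on mounted volumes often need adjusting):
# Option 1: run an existing image as UID/GID 1000, if the image supports it
docker run -d --name project1-app -u 1000:1000 myregistry/project1-app:latest
# Option 2: bake a non-root user into your own image (hypothetical Dockerfile shown as comments)
#   FROM node:20-alpine
#   RUN addgroup -S app && adduser -S app -G app
#   USER app
#   CMD ["node", "server.js"]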
3. Run containers with the minimum privileges they need
Avoid --privileged unless you have a very specific reason. Some safer flags and patterns:
- --read-only to make the container’s file system read‑only, mounting only necessary paths as writable volumes.
- --cap-drop ALL --cap-add ... to remove all Linux capabilities and add back only the ones an app truly needs.
- Use --tmpfs for temporary directories that should not persist on disk.
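Putting a few of these flags together, here is a hedged sketch. Which capabilities and writable paths an image actually needs varies; the set below is commonly enough for the official Nginx image, but treat it as a starting point to test rather than a recipe:
docker run -d \
  --name hardened-nginx \
  --read-only \
  --tmpfs /tmp \
  --tmpfs /var/run \
  --tmpfs /var/cache/nginx \
  --cap-drop ALL \
  --cap-add CHOWN \
  --cap-add SETUID \
  --cap-add SETGID \
  --cap-add NET_BIND_SERVICE \
  -p 8082:80 \
  nginx:alpine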
4. Keep images small and up to date
Every extra package in your image is an extra potential vulnerability. Prefer minimal images (e.g., alpine or distro‑provided slim variants) and update them regularly:
# Pull newer versions
docker pull nginx:alpine
# Recreate containers with new images
docker compose pull
docker compose up -d
Combine this with a regular VPS patching routine as described in our guide how to secure a VPS server.
5. Use the VPS firewall as a second boundary
Even though Docker exposes only the ports you map, it is a good idea to keep the VPS firewall as your source of truth:
- Open 80/443 only if the container actually serves HTTP/HTTPS.
- Keep database ports (3306, 5432, 6379, etc.) closed externally and reachable only through Docker networks.
- Consider rate limiting and connection limits on exposed ports for extra safety.
For more advanced packet‑level rules with IPv4 and IPv6, our cookbook on using nftables as a firewall on a VPS can help you build a layered defence.
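As one small example of the rate-limiting idea, ufw ships a built-in limit rule that temporarily blocks an IP opening too many new connections in a short window:
# Rate limit new SSH connections per source IP
sudo ufw limit 22/tcp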
Monitoring, Logs and Backups for Docker on a VPS
Once your containers are running, you must be able to see what is happening and protect your data. Containers are ephemeral; your data should not be.
Basic monitoring and log inspection
- docker ps: list running containers.
- docker ps -a: list all containers (including stopped).
- docker logs <name>: view container logs.
- docker stats: view CPU and RAM usage per container.
At the VPS level, tools like htop, iotop, Netdata, Prometheus and Grafana are great additions. We have a dedicated article that walks through this stack step by step in monitoring VPS resource usage with htop, iotop, Netdata and Prometheus, which combines nicely with containerized workloads.
Where to store persistent data
There are two main patterns:
- Bind mounts: host paths like /srv/project1/db are mounted into containers. Easy to understand and back up, but tied to the host layout.
- Named volumes: managed by Docker and stored under Docker’s data directory. A clean abstraction, but you need to use docker volume commands to interact with them.
For most small projects we recommend bind mounts under a clear directory structure, for example:
/srv/project1/
  app/
  db/
  logs/
/srv/project2/
  app/
  db/
  logs/
Then mount them into containers, e.g. -v /srv/project1/db:/var/lib/mysql.
Backing up Docker data on a VPS
Backups should focus on data and configuration, not container internals. Typically you back up:
- Application code (if not already version‑controlled in Git).
- Database data directories or dumps (MySQL/MariaDB, PostgreSQL, etc.).
- Uploaded files and user content (images, documents, etc.).
- Configuration files such as docker-compose.yml, env files and Nginx configs.
Common strategies:
- Use docker exec to run database dumps on a schedule and store them in a host directory.
- Use rsync or rclone from the host to sync /srv to object storage.
- Use backup tools like restic or Borg directly on the VPS.
We show practical, encrypted and versioned setups with these tools in our guide to offsite backups with Restic/Borg to S3‑compatible storage. The same principles apply whether your workloads run in Docker or on bare metal.
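Here is a minimal sketch of the first strategy, assuming a MariaDB container named project1-db with the root password available in the container environment (as in the compose example later in this guide); the script name and paths are illustrative:
#!/usr/bin/env bash
# /usr/local/bin/backup-project1-db.sh -- hypothetical helper, adapt before use
set -euo pipefail
BACKUP_DIR=/srv/project1/backups
mkdir -p "$BACKUP_DIR"
# Dump the database from inside the container into a dated file on the host
docker exec project1-db \
  sh -c 'exec mariadb-dump -uroot -p"$MYSQL_ROOT_PASSWORD" project1' \
  > "$BACKUP_DIR/project1-$(date +%F).sql"
# Keep roughly two weeks of dumps
find "$BACKUP_DIR" -name 'project1-*.sql' -mtime +14 -delete
A cron entry such as 15 3 * * * /usr/local/bin/backup-project1-db.sh would run it nightly; older images may ship mysqldump instead of mariadb-dump.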
Automating lifecycle tasks
As you grow more comfortable, you can automate:
- Image updates: scheduled pulls and rolling restarts.
- Backup jobs: cron jobs or systemd timers calling scripts that dump databases and push to remote storage.
- Health checks: scripts or external services that hit containerized endpoints and alert you if things go down.
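As a tiny illustration of the health-check idea (the URL, log path and container name are placeholders), a cron-friendly script could look like this:
#!/usr/bin/env bash
# /usr/local/bin/check-project1.sh -- hypothetical example, adapt before use
set -euo pipefail
URL="https://example.com/health"
# Alert (here: append to a log; swap in mail, Slack, etc.) if the endpoint stops answering
if ! curl -fsS --max-time 10 -o /dev/null "$URL"; then
  echo "$(date -Is) health check failed for $URL" >> /var/log/project1-healthcheck.log
  # Optionally attempt an automatic recovery:
  # docker restart project1-web
fi
Run it every few minutes from cron, e.g. */5 * * * * /usr/local/bin/check-project1.sh.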
If you already use tools like Terraform and Ansible to automate VPS setup, you can reuse those patterns for Docker stacks as well. Our article on automating VPS setup with Terraform and Ansible shows how to consistently bring new servers to the exact same state, including Docker installation and configuration.
Putting It All Together: A Simple Multi‑Container Example
To consolidate everything, let us sketch a small, realistic setup you might run on a dchost.com VPS using only core Docker features (no orchestration system required).
Scenario: a small web app with a database
Requirements:
- Public website at https://example.com.
- Application written in PHP or Node.js.
- MySQL or PostgreSQL database.
- Everything isolated from another project on the same VPS.
High‑level design
- Create a dedicated Docker network: project1-net.
- Run three containers: project1-web, project1-app, project1-db.
- Expose only project1-web (ports 80/443) to the internet.
- Mount host directories under /srv/project1 for persistent data and configuration.
Example using docker compose (YAML sketch)
This is not a full production file, but it shows how isolation, networking and volumes come together:
version: '3.9'
services:
  db:
    image: mariadb:10.11
    container_name: project1-db
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=supersecret
      - MYSQL_DATABASE=project1
      - MYSQL_USER=project1
      - MYSQL_PASSWORD=project1pass
    volumes:
      - /srv/project1/db:/var/lib/mysql
    networks:
      - project1-net
  app:
    image: myregistry/project1-app:latest
    container_name: project1-app
    restart: unless-stopped
    environment:
      - DB_HOST=project1-db
      - DB_NAME=project1
      - DB_USER=project1
      - DB_PASS=project1pass
    depends_on:
      - db
    networks:
      - project1-net
  web:
    image: nginx:alpine
    container_name: project1-web
    restart: unless-stopped
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /srv/project1/nginx/conf.d:/etc/nginx/conf.d:ro
      - /srv/project1/app/public:/var/www/html:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
    depends_on:
      - app
    networks:
      - project1-net
networks:
  project1-net:
    driver: bridge
With this pattern, adding a second project on the same VPS is simply a matter of creating a new network, a new directory under /srv/project2 and another compose file. The projects stay nicely isolated even though they share the same VPS resources.
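For example, a hedged sketch of bringing up that second project (directory names and the compose project name are illustrative):
# Prepare directories and an isolated compose project for project2
sudo mkdir -p /srv/project2/{app,db,logs,nginx/conf.d}
cd /srv/project2
# With its own docker-compose.yml and network, project2 stays isolated from project1
docker compose -p project2 up -d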
Conclusion: Your Next Step with Isolated Docker on a VPS
Running isolated Docker containers on a VPS gives you a powerful balance of control, simplicity and safety. Instead of fighting with conflicting packages and ad‑hoc scripts, you describe each service in a Dockerfile or compose file, keep data in well‑defined volumes and let Docker handle process lifecycles and networking. On a well‑sized VPS from dchost.com, this pattern scales gracefully from a single personal site to dozens of small client projects, all without needing a full Kubernetes stack.
If you are just starting, focus on a single application: install Docker, run a basic container, mount a volume and experiment with networks. Then gradually introduce compose files, non‑root containers, resource limits and automated backups. Combine the practices in this guide with the security and monitoring patterns we share across our blog, and you will have a clean, repeatable blueprint for almost any new project. When you are ready to host more apps or need extra CPU, RAM or NVMe storage, you can simply upgrade your dchost.com VPS plan or add another VPS and reuse the exact same Docker workflows.
