If you find yourself repeating the same VPS setup steps again and again—creating users, hardening SSH, installing Docker or a web stack, opening the same ports—it is time to treat your servers like code. With Terraform and Ansible, you can turn the entire VPS lifecycle into a predictable pipeline: provision the machine, configure the OS and services, and be ready for production at the push of a button. In this article, we will walk through how we approach reproducible VPS builds as the dchost.com team, from designing the folder layout to wiring Terraform outputs into Ansible inventories. The goal is simple: every new VPS, whether it lives on a dchost VPS plan, a dedicated server or a colocation box you host with us, should follow the same automated recipe. That means faster launches, fewer mistakes, and an infrastructure you can rebuild on demand if you ever need to scale out or recover from a disaster.
Table of Contents
- 1 Why Reproducible VPS Builds Matter
- 2 Terraform and Ansible: Who Does What?
- 3 Designing Your Automated VPS Stack on dchost
- 4 Step 1 – Defining VPS Infrastructure with Terraform
- 5 Step 2 – Provisioning the Server with Ansible
- 6 Step 3 – Orchestrating Terraform and Ansible Together
- 7 Adding DNS, SSL, Monitoring and Backups
- 8 Common Pitfalls and How to Avoid Them
- 9 Where This Fits in Your Hosting Strategy with dchost
- 10 Bringing It All Together
Why Reproducible VPS Builds Matter
Before diving into tools, it is worth clarifying why this level of automation is worth the effort. A manually built VPS typically follows a checklist in someone’s notebook or wiki page. Over time, people skip steps, tweak configurations directly on the server, or forget to apply improvements everywhere. This leads to configuration drift: two servers that are supposed to be identical behave differently, and debugging becomes guesswork.
Reproducible VPS builds change that. You declare in code exactly how a machine should look—CPU/RAM, disk, OS image, installed packages, users, SSH keys, firewall rules, monitoring agents, backup scripts. Then you let tools like Terraform and Ansible apply that declaration in a repeatable way. If you ever lose a VPS or need a second one, you simply re-run the pipeline. Combined with a solid backup strategy, this is also a massive boost for disaster recovery; if you want to go deeper on that side, we recommend our guide on writing a realistic disaster recovery plan with tested runbooks.
We also see this approach pay off during security work. Instead of hardening each VPS by hand, you encode your best practices once and roll them out everywhere. Our detailed checklist for VPS security hardening (sshd_config, Fail2ban, no-root SSH) fits perfectly into Ansible roles that run on every new server automatically.
Terraform and Ansible: Who Does What?
Terraform and Ansible overlap conceptually, but in practice they shine at different layers of the stack. Understanding that division of responsibilities will keep your automation simple and maintainable.
Terraform: Infrastructure as Code
Terraform is an infrastructure-as-code (IaC) tool. You describe the infrastructure you want—VPS instances, disks, IP addresses, networks, DNS records—in declarative configuration files. Terraform compares your desired state with the actual state and then creates, updates or destroys resources to match.
For VPS automation, you typically let Terraform handle:
- Creating the VPS itself (vCPU/RAM/disk plan, OS image)
- Assigning public IPs and networking options
- Managing DNS records for hostnames
- Outputting connection details (IP, SSH port, usernames) for later steps
If you want a deeper Terraform-focused perspective, we have a dedicated article on automating VPS and DNS with Terraform and achieving zero-downtime deploys.
Ansible: Configuration Management and Orchestration
Ansible focuses on what happens inside the server. You describe the desired state of packages, services, configuration files, users and permissions. Ansible connects via SSH and makes idempotent changes—if the system already matches the desired state, it does nothing; if not, it fixes it.
For VPS automation, Ansible is ideal for:
- Creating non-root users and authorized SSH keys
- Locking down SSH, installing Fail2ban, configuring firewalls
- Installing web stacks (Nginx, PHP-FPM, Node.js, Docker, databases)
- Deploying application code and setting up systemd services
- Configuring backups, monitoring agents and log shippers
We use Ansible heavily together with cloud-init; if you want another angle on that, see our practical story on turning a blank VPS into a ready-to-serve machine with cloud-init and Ansible.
The Simple Rule of Thumb
A helpful way to remember the split:
- Terraform: “Give me three VPS machines with these specs and hostnames.”
- Ansible: “On each VPS, create these users, install this stack and configure these services.”
Once that is clear, wiring them together becomes much easier.
Designing Your Automated VPS Stack on dchost
At dchost.com we like to start with a simple repository layout that you can grow over time. You can host this in Git and connect it to CI/CD later.
infra-project/
  terraform/
    main.tf
    variables.tf
    outputs.tf
    provider.tf
    environments/
      staging/
        terraform.tfvars
      production/
        terraform.tfvars
  ansible/
    inventory/
      hosts.ini
      group_vars/
        all.yml
    roles/
      base/
      webserver/
      monitoring/
      backup/
    playbooks/
      site.yml
      hardening.yml
This separation keeps responsibilities clear: Terraform defines and creates VPS instances under terraform/, while Ansible configures them under ansible/.
Prerequisites
To follow a similar setup on dchost infrastructure, you will want:
- A dchost VPS, dedicated server or colocation server where you can reach the hypervisor or API (depending on your architecture).
- Terraform installed on your local machine or CI server.
- Ansible installed locally or in a build container.
- At least one SSH key pair ready (we usually store public keys in Ansible vars and upload them automatically).
- A Git repository to version-control your Terraform and Ansible code.
If you are new to VPS administration itself, it is worth reading our guide on what to do in the first 24 hours on a new VPS. Much of that checklist is exactly what we will automate with Ansible in this article.
Step 1 – Defining VPS Infrastructure with Terraform
We will keep the Terraform examples provider-agnostic, because the exact resource names depend on which API or integration you use to control your VPS at dchost. Conceptually, however, every provider-specific module will have the same ingredients: plan, disk, image, network and SSH key.
Basic Terraform Configuration
Start by defining variables for things that will change per environment: hostname prefix, number of instances, plan size, region and SSH key.
// terraform/variables.tf

variable "project" {
  type        = string
  description = "Project name prefix for resources"
}

variable "environment" {
  type        = string
  description = "Environment name (staging, production, etc.)"
}

variable "vps_count" {
  type        = number
  description = "How many VPS instances to create"
  default     = 1
}

variable "ssh_public_key" {
  type        = string
  description = "SSH public key to install on VPS instances"
}
Then define a simple VPS resource. Here we use a fictional myvps_server resource to keep things generic; you would replace this with the real resource type matching how you orchestrate dchost infrastructure.
// terraform/main.tf

resource "myvps_server" "app" {
  count = var.vps_count

  name   = "${var.project}-${var.environment}-${count.index + 1}"
  plan   = "nvme-2vcpu-4gb" // example plan name
  region = "eu-central"     // or your preferred data center
  image  = "ubuntu-22-04"   // or Debian/AlmaLinux etc.

  ssh_keys = [var.ssh_public_key]

  // Optional: cloud-init to bootstrap before Ansible
  user_data = file("cloud-init.yml")
}
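The cloud-init.yml referenced in user_data can stay minimal, because Ansible does the real configuration work afterwards. A sketch of what it might contain (the ubuntu user name and the key are placeholders — match them to your base image):

```yaml
#cloud-config
# Minimal bootstrap so Ansible can connect later; real hardening happens in Ansible.
users:
  - name: ubuntu
    groups: sudo
    shell: /bin/bash
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... your-key-here

package_update: true
packages:
  - python3   # most Ansible modules need a Python interpreter on the target
```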
Finally, expose the outputs that Ansible will use later: public IPs and hostnames.
// terraform/outputs.tf

output "app_ips" {
  description = "Public IPs of the app servers"
  value       = [for s in myvps_server.app : s.ipv4_address]
}

output "app_hostnames" {
  description = "Hostnames of the app servers"
  value       = [for s in myvps_server.app : s.name]
}
Terraform Workflow
With the configuration in place, your workflow looks like this:
- Initialize Terraform once:

cd terraform
terraform init

- Set environment-specific variables in environments/staging/terraform.tfvars etc.:

// terraform/environments/staging/terraform.tfvars
project        = "shop"
environment    = "staging"
vps_count      = 2
ssh_public_key = "ssh-ed25519 AAAA... your-key-here"

- Plan the changes:

terraform plan -var-file="environments/staging/terraform.tfvars"

- Apply to actually create the VPS instances:

terraform apply -var-file="environments/staging/terraform.tfvars"
Terraform will output the IPs and hostnames you defined in outputs.tf. We will consume those from Ansible in the next step.
Step 2 – Provisioning the Server with Ansible
Once Terraform has created the VPS, Ansible takes over to turn a plain OS into a production-ready server. You can run Ansible from your laptop, a CI job, or a management VM inside your dchost environment.
Building the Inventory from Terraform Outputs
Ansible needs to know which hosts to connect to. The simplest approach is to generate a static inventory file after Terraform runs. For small setups, you can paste IPs manually; for larger ones, you can use a script or Terraform’s local_file resource to render hosts.ini automatically.
# ansible/inventory/hosts.ini
[app]
203.0.113.10
203.0.113.11
[app:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/id_ed25519
Now Ansible knows it has an app group with two hosts and how to connect to them.
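As a sketch of the scripted option, a small helper can turn `terraform output -json` into the same hosts.ini. The output name app_ips and the ubuntu user match the examples above; adjust them to your setup:

```python
#!/usr/bin/env python3
"""Render ansible/inventory/hosts.ini from `terraform output -json` (sketch)."""
import json
import subprocess


def render_inventory(outputs: dict, user: str = "ubuntu") -> str:
    """Build an INI inventory from Terraform's JSON output structure."""
    ips = outputs["app_ips"]["value"]
    lines = [
        "[app]",
        *ips,
        "",
        "[app:vars]",
        f"ansible_user={user}",
        "ansible_ssh_private_key_file=~/.ssh/id_ed25519",
    ]
    return "\n".join(lines) + "\n"


def main() -> None:
    # Run from the repo root: reads Terraform state, writes the Ansible inventory.
    raw = subprocess.check_output(["terraform", "output", "-json"], cwd="terraform")
    with open("ansible/inventory/hosts.ini", "w") as fh:
        fh.write(render_inventory(json.loads(raw)))
```

Calling main() after every terraform apply keeps the inventory in lockstep with the real infrastructure, which is exactly the property we want.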
Writing a Base Hardening Role
We strongly recommend putting security basics into a reusable role so every new VPS starts hardened by default. Here is a simplified example that touches on some of the points from our VPS security hardening checklist.
# ansible/roles/base/tasks/main.yml
---
- name: Ensure apt cache is up to date
  apt:
    update_cache: yes
    cache_valid_time: 3600

- name: Upgrade all packages (safe)
  apt:
    upgrade: safe
  when: ansible_os_family == "Debian"

- name: Create non-root deploy user
  user:
    name: deploy
    shell: /bin/bash
    groups: sudo
    append: yes

- name: Authorize SSH key for deploy user
  authorized_key:
    user: deploy
    key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"

- name: Disable root SSH login and password auth
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^{{ item.key }}'
    line: '{{ item.key }} {{ item.value }}'
    state: present
  loop:
    - { key: 'PermitRootLogin', value: 'no' }
    - { key: 'PasswordAuthentication', value: 'no' }
  notify: Restart sshd

- name: Ensure uncomplicated firewall (ufw) is installed
  apt:
    name: ufw
    state: present

- name: Allow SSH and HTTP/HTTPS through ufw
  ufw:
    rule: allow
    name: "{{ item }}"
  loop:
    - OpenSSH
    - 'Nginx Full'

- name: Enable ufw
  ufw:
    state: enabled
    policy: deny
Handlers restart services when configuration files change:
# ansible/roles/base/handlers/main.yml
---
- name: Restart sshd
  service:
    name: ssh
    state: restarted
Installing a Web Stack Role
Next, create a simple webserver role to install Nginx and PHP-FPM (or another stack that matches your application). This is also where you can apply PHP-FPM tuning similar to what we explain in our guide on PHP-FPM settings for high-performance WordPress and WooCommerce.
# ansible/roles/webserver/tasks/main.yml
---
- name: Install Nginx and PHP-FPM
  apt:
    name:
      - nginx
      - php-fpm   # meta-package on Debian/Ubuntu; pulls in the versioned phpX.Y-fpm
    state: present

- name: Ensure Nginx is enabled and running
  service:
    name: nginx
    state: started
    enabled: yes

- name: Ensure PHP-FPM is enabled and running
  service:
    name: php-fpm   # on Debian/Ubuntu the unit is versioned, e.g. php8.2-fpm
    state: started
    enabled: yes

- name: Deploy Nginx vhost for app
  template:
    src: vhost.conf.j2
    dest: /etc/nginx/sites-available/app.conf
  notify: Reload nginx

- name: Enable app vhost
  file:
    src: /etc/nginx/sites-available/app.conf
    dest: /etc/nginx/sites-enabled/app.conf
    state: link
  notify: Reload nginx
# ansible/roles/webserver/handlers/main.yml
---
- name: Reload nginx
  service:
    name: nginx
    state: reloaded
Bringing Roles Together in a Playbook
Now create a main playbook that applies both roles to the app group:
# ansible/playbooks/site.yml
---
- hosts: app
  become: yes
  roles:
    - base
    - webserver
Run it like this:
cd ansible
ansible-playbook -i inventory/hosts.ini playbooks/site.yml
After a few minutes, your Terraform-created VPS should be locked down, updated and serving a basic web stack.
Step 3 – Orchestrating Terraform and Ansible Together
So far we have run Terraform and Ansible as separate commands. To feel like “one button”, we need a tiny bit of orchestration. You do not need a full-blown platform for this; a Makefile or a shell script is often enough.
Using a Makefile as a Simple Orchestrator
# Makefile
ENV ?= staging
TF_DIR = terraform
ANSIBLE_DIR = ansible
# Relative to TF_DIR, because each recipe cd's into it first
TF_VARS = environments/$(ENV)/terraform.tfvars

.PHONY: plan apply destroy provision all

plan:
	cd $(TF_DIR) && terraform plan -var-file=$(TF_VARS)

apply:
	cd $(TF_DIR) && terraform apply -auto-approve -var-file=$(TF_VARS)

destroy:
	cd $(TF_DIR) && terraform destroy -var-file=$(TF_VARS)

provision:
	cd $(ANSIBLE_DIR) && ansible-playbook -i inventory/hosts.ini playbooks/site.yml

all: apply provision
Now the entire pipeline becomes:
make ENV=staging all
Behind the scenes, Terraform creates or updates the VPS, and Ansible configures it. When your team gets used to this workflow, spinning up extra capacity or a staging environment becomes a routine command instead of a mini-project.
Integrating with CI/CD
Later, you can move this into CI/CD. For example:
- On pushes to main, your pipeline runs terraform plan for review.
- On approved merges or tagged releases, it runs terraform apply followed by ansible-playbook.
- Environment selection (staging/production) is driven by branch or tag naming conventions.
This keeps the entire lifecycle auditable in Git: who changed what, when and why.
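If your CI happens to be GitHub Actions, that pipeline might be sketched like this (workflow name, branch and tfvars path are illustrative; other CI systems follow the same plan/approve/apply shape):

```yaml
# .github/workflows/infra.yml (sketch -- adapt to your CI system)
name: infra
on:
  push:
    branches: [main]

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform -chdir=terraform init
      - run: terraform -chdir=terraform plan -var-file=environments/staging/terraform.tfvars

  apply:
    needs: plan
    if: github.ref == 'refs/heads/main'
    environment: staging   # a protected environment adds a manual approval gate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform -chdir=terraform apply -auto-approve -var-file=environments/staging/terraform.tfvars
      - run: pip install ansible && ansible-playbook -i ansible/inventory/hosts.ini ansible/playbooks/site.yml
```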
Adding DNS, SSL, Monitoring and Backups
A VPS that serves traffic in production needs more than just Nginx. With Terraform and Ansible in place, it is straightforward to extend your automation to DNS, TLS, monitoring and backups.
Managing DNS with Terraform
Most DNS providers have Terraform support, and you can also automate DNS for domains you host through dchost.com by wiring Terraform to the relevant APIs or templates. The idea is to let Terraform create A/AAAA records for each VPS, so hostnames always match the infrastructure state.
resource "mydns_record" "app" {
  count = length(myvps_server.app)

  zone  = "example.com"
  name  = "app-${count.index + 1}"
  type  = "A"
  value = myvps_server.app[count.index].ipv4_address
  ttl   = 300
}
We go deeper into this style of automation in our article on Terraform-based VPS and DNS automation for zero-downtime deployments.
Automating SSL certificates
Once DNS is in place, you want HTTPS. A common pattern is:
- Ansible installs a web server and ACME client (such as certbot or acme.sh).
- Playbooks request and renew certificates automatically.
- Nginx/Apache templates reference the certificate paths.
This can be either HTTP-01 or DNS-01 based, depending on your DNS setup and whether you need wildcards. For a practical deep dive, including multi-domain and wildcard strategies, see our guide on Let’s Encrypt wildcard SSL automation.
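As a sketch of the first two points, an Ansible task file driving certbot's Nginx plugin over HTTP-01 might look like this (the email and domain are placeholders; the creates guard keeps the task idempotent across runs):

```yaml
# ansible/roles/webserver/tasks/ssl.yml (sketch -- HTTP-01 with certbot)
---
- name: Install certbot with the Nginx plugin
  apt:
    name:
      - certbot
      - python3-certbot-nginx
    state: present

- name: Obtain certificate for the app hostname
  command: >
    certbot --nginx --non-interactive --agree-tos
    -m admin@example.com -d app-1.example.com
  args:
    creates: /etc/letsencrypt/live/app-1.example.com/fullchain.pem
```

The certbot package ships a renewal timer, so once the first issuance succeeds, renewals happen without further playbook runs.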
Monitoring and Alerts
Automation makes it easy to add monitoring agents to every new VPS. A common pattern on our side is:
- Terraform labels instances with environment and role.
- Ansible installs and configures exporters or agents (Node Exporter, Promtail, etc.).
- A central Prometheus + Grafana stack scrapes or receives metrics and logs.
If you want to set up a basic monitoring stack quickly, we have a beginner-friendly walkthrough on VPS monitoring and alerts with Prometheus, Grafana and Uptime Kuma. The same Ansible roles you use there can be attached to any new VPS created by Terraform.
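A minimal monitoring role along these lines might be sketched as follows (for Debian/Ubuntu; prometheus_server_ip is a hypothetical variable you would define in group_vars):

```yaml
# ansible/roles/monitoring/tasks/main.yml (sketch)
---
- name: Install the Prometheus node exporter from distro repos
  apt:
    name: prometheus-node-exporter
    state: present

- name: Ensure the node exporter is enabled and running
  service:
    name: prometheus-node-exporter
    state: started
    enabled: yes

- name: Allow only the Prometheus server to scrape port 9100
  ufw:
    rule: allow
    port: '9100'
    proto: tcp
    from_ip: "{{ prometheus_server_ip }}"   # hypothetical var from group_vars/all.yml
```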
Backups and Off-Site Storage
Backups are the final pillar of a production-ready VPS. The nice thing about infrastructure as code is that you can fully encode your backup strategy as well:
- Ansible installs tools like restic or Borg.
- Playbooks configure backup targets (S3-compatible object storage, NFS, etc.), credentials and schedules.
- Systemd timers or cron jobs run backups on a regular cadence.
We have a dedicated guide on offsite backups with restic/Borg and S3-compatible storage (versioning, encryption and retention) that slots neatly into Ansible roles.
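A backup role following that pattern could be sketched like this (the restic-backup.sh.j2 template, which would export RESTIC_REPOSITORY and credentials from vaulted variables, is an assumed file in the role):

```yaml
# ansible/roles/backup/tasks/main.yml (sketch -- restic to S3-compatible storage)
---
- name: Install restic
  apt:
    name: restic
    state: present

- name: Deploy backup script with repository and credentials from vault
  template:
    src: restic-backup.sh.j2
    dest: /usr/local/bin/restic-backup
    mode: '0700'

- name: Run the backup nightly via cron
  cron:
    name: "restic backup"
    minute: "30"
    hour: "2"
    job: /usr/local/bin/restic-backup
```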
Common Pitfalls and How to Avoid Them
Terraform + Ansible workflows are powerful, but there are a few gotchas that we see repeatedly in real projects. The good news: most of them are easy to avoid once you know what to look for.
Forgetting About SSH Connectivity
Terraform might successfully create a VPS that Ansible cannot reach. Common reasons:
- Firewall or security group does not allow SSH from your Ansible runner.
- The default username (e.g. ubuntu, debian, root) is different from what you assumed.
- Your SSH key was not injected correctly (cloud-init misconfiguration).
To avoid this, test SSH manually as soon as Terraform finishes, and standardize your base image or cloud-init so it always has the same initial user and SSH configuration.
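You can automate that first check too. A small helper (a sketch; any language works) that blocks until port 22 answers, so a pipeline can safely launch ansible-playbook right after terraform apply:

```python
"""Wait for SSH to come up on freshly provisioned hosts (sketch)."""
import socket
import time


def wait_for_port(host: str, port: int = 22,
                  timeout: float = 120.0, delay: float = 5.0) -> bool:
    """Poll until host:port accepts TCP connections; give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(delay)
    return False
```

Call wait_for_port(ip) for every address Terraform printed, then run a quick `ansible all -i inventory/hosts.ini -m ping` to verify that login and privilege escalation actually work.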
Non-Idempotent Ansible Tasks
Ansible’s power lies in idempotence: running the same playbook multiple times should not break anything. Pitfalls include:
- Using shell commands that append to files on every run.
- Downloading archives into the same directory without cleanup.
- Manipulating configuration files with fragile sed or lineinfile rules.
Favour Ansible modules (like apt, user, template, ufw) over raw shell commands, and test your playbooks by running them twice on a fresh VPS to see if the second run reports “ok” instead of “changed”.
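One convenient way to run that double-pass check from the shell (paths match the layout used in this article):

```shell
# Run the playbook twice against a fresh VPS; the second pass should report changed=0.
ansible-playbook -i inventory/hosts.ini playbooks/site.yml
ansible-playbook -i inventory/hosts.ini playbooks/site.yml | tee second-run.log
grep -E 'changed=[1-9]' second-run.log && echo "NOT idempotent" || echo "idempotent"
```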
Mixing Manual Changes with Automation
It is tempting to “just tweak one thing” directly on a live server. Over time these manual edits diverge from what your Ansible roles expect, and future runs might undo your fixes or fail in surprising ways.
A healthier approach is to treat Terraform and Ansible as the single source of truth. When you need a change, commit it to Git, run the pipeline and let the automation apply it. This is especially important for security-related changes and firewall rules.
Secrets Management
Never hard-code passwords, API keys or database credentials in plain-text Ansible vars or Terraform files. Use at least:
- Ansible Vault to encrypt sensitive variables.
- Environment variables or secret storage in your CI system.
- Dedicated secret management tools if your stack grows (e.g. HashiCorp Vault, SOPS + age, etc.).
We follow similar patterns in our own infrastructure when automating VPS deployments and see a huge reduction in “leaked secrets in Git” incidents.
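The Ansible Vault workflow, for instance, only takes two commands (the vault file path follows the layout used in this article):

```shell
# Create and edit an encrypted variables file; Ansible decrypts it at runtime.
ansible-vault create ansible/group_vars/all/vault.yml

# Reference vaulted variables like any others, then supply the vault password at run time.
ansible-playbook -i inventory/hosts.ini playbooks/site.yml --ask-vault-pass
```

In CI, replace --ask-vault-pass with --vault-password-file pointing at a file injected from your secret store.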
Where This Fits in Your Hosting Strategy with dchost
Not every project needs fully automated VPS builds from day one. For a single small website, a manually configured VPS or a managed hosting solution is often enough. Automation starts to shine when:
- You manage multiple environments (development, staging, production).
- You host several projects or many clients on dchost VPS or dedicated servers.
- You need to scale horizontally during campaigns or seasonal peaks.
- You must comply with strict security/audit requirements and prove how servers are configured.
The beauty of Terraform + Ansible is that the same patterns apply whether the underlying compute is a single dchost VPS, a farm of VPS instances, a dedicated server or your own hardware in our colocation facilities. Once you invest in infrastructure as code, migrating between these options, or scaling up over time, becomes far less painful.
If you are unsure which base platform—VPS versus dedicated—is the best starting point for your Terraform + Ansible stack, our comparison on choosing between dedicated servers and VPS for your business can help clarify the trade-offs.
Bringing It All Together
Automating VPS setup with Terraform and Ansible is not about fancy tooling for its own sake; it is about turning fragile, one-off server builds into a reliable, repeatable process. Terraform declares and provisions your VPS instances and networking; Ansible turns them into hardened, monitored, backed-up application servers. Together, they give you a push-button way to create or recreate your infrastructure—whether you are bringing up a new staging environment, adding capacity for a campaign, or recovering after a hardware failure.
As the dchost.com team, we see customers gain a lot of confidence once their hosting stack becomes code they can read, review and version-control. If you want to adopt this approach on your own dchost VPS, dedicated server or colocation environment, you can start small: one Terraform module, one Ansible role, and a simple Makefile. From there, you can grow into DNS automation, SSL, monitoring and backups using the resources we linked throughout this article.
If you would like help choosing the right VPS or server configuration for your Terraform + Ansible setup, or you want to discuss how to align this with your backup, monitoring and security requirements, our team at dchost.com is ready to walk through real scenarios with you and design an infrastructure that is both powerful and maintainable.
