Disk usage graphs on hosting servers rarely grow in a straight line. Traffic increases, marketing launches, more WordPress or Laravel sites get added, and in the background one thing grows steadily every single day: logs. On both cPanel hosting environments and unmanaged VPS servers, web, email, database and system logs can quietly consume gigabytes of space, slow down backups and even trigger “no space left on device” errors if you do not keep them under control. A clear log archiving strategy – built around gzip compression, S3‑compatible object storage offload and realistic retention policies – turns that risk into an asset. You keep the logs you actually need for debugging, security and compliance, while freeing your primary disks for what really matters: serving your applications. In this article we will walk through how we at dchost.com think about log archiving on cPanel and VPS, which tools we rely on in real projects, and how you can implement a strategy that is cheap, predictable and compliant.
Table of Contents
- 1 Why Log Archiving Matters on cPanel and VPS
- 2 Understanding Log Types and Where They Live
- 3 gzip Compression and logrotate: The Foundation of Log Archiving
- 4 Offloading Logs to S3‑Compatible Object Storage
- 5 Designing Realistic Log Retention Policies
- 6 Putting It All Together: Reference Architectures for cPanel and VPS
- 7 A Maintainable Log Archiving Strategy for the Long Term
Why Log Archiving Matters on cPanel and VPS
Server logs are not just noise; they are a detailed history of how your sites, APIs and email infrastructure behave. The problem is that this history grows forever unless you make intentional decisions.
Key reasons to archive logs properly
- Disk capacity and stability: Uncontrolled logs fill disks, cause failed backups and can even crash services when they cannot write temporary files. On a VPS with limited NVMe or SSD space, this can happen faster than you think.
- Performance of tools and backups: Backup jobs, malware scanners and search tools slow down when they must process months of uncompressed logs sitting in `/home` or `/var/log`.
- Security and incident response: When something goes wrong, you want to be sure that the right logs still exist – not just yesterday’s, but maybe 30, 90 or 365 days back, depending on your risk profile.
- Legal and regulatory requirements: Regulations like KVKK and GDPR often require that you keep certain logs for a minimum period and delete or anonymize them after a maximum period. We covered this perspective in detail in our guide on log retention on hosting and email infrastructure.
- Cost optimisation: Fast NVMe disks are excellent for active databases and applications, but expensive as an archive. Compressing and offloading logs to object storage gives you the same data at a fraction of the cost.
For customers using cPanel on shared or reseller hosting, some rotation exists out of the box, but it is rarely aligned with your specific needs. On VPS and dedicated servers, you are fully responsible for log rotation and archiving. In both cases, a bit of planning around gzip, object storage and retention will save you many headaches later.
Understanding Log Types and Where They Live
Before designing an archiving strategy, you need to know what you are archiving and where it is written.
Common log types on cPanel servers
On a typical cPanel/WHM server managed by dchost.com or by your own team, you will encounter:
- Web server logs (Apache / LiteSpeed):
  - Global logs such as `/usr/local/apache/logs/access_log` and `error_log`
  - Per‑domain logs under `/usr/local/apache/domlogs/`
- cPanel and WHM logs:
  - `/usr/local/cpanel/logs/access_log` (panel access)
  - `/usr/local/cpanel/logs/error_log`
  - `/usr/local/cpanel/logs/login_log` (authentication)
- Mail server logs (Exim, Dovecot):
  - `/var/log/exim_mainlog`, `/var/log/exim_rejectlog`, `/var/log/exim_paniclog`
  - `/var/log/maillog` for POP/IMAP connections
- FTP, SSH and system logs:
  - `/var/log/messages`, `/var/log/secure` (or `/var/log/auth.log` on some distros)
  - `/var/log/pureftpd.log` or similar, depending on the FTP daemon
Many of these logs are already rotated by cPanel’s built‑in mechanisms and by logrotate, but default periods may not match your security or compliance needs. We will come back to that.
Common log types on plain VPS servers
On a custom VPS without a control panel, you typically manage services yourself:
- Web server logs: `/var/log/nginx/access.log`, `/var/log/nginx/error.log`, or `/var/log/httpd/` / `/var/log/apache2/`.
- Application logs: Laravel log files under `storage/logs/`, Node.js app logs handled by pm2 or systemd’s `journalctl`, custom scripts writing to `/var/log/yourapp/`, etc.
- Database logs: MySQL/MariaDB slow query log, error log, general log; PostgreSQL `postgresql.log` files.
- System and authentication logs: `/var/log/syslog`, `/var/log/messages`, `/var/log/auth.log` or `/var/log/secure`.
On these servers, nothing stops your logs from growing indefinitely except what logrotate (or systemd‑journald) is configured to do. If you have ever hit a nasty “no space left on device” message, you might want to review our dedicated article on VPS disk usage and logrotate tuning.
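Before tightening anything, it helps to see where the space is actually going. A quick survey with standard tools usually identifies the worst offenders in seconds (the 100 MB threshold below is just an illustrative cut‑off):

```shell
# Show the largest entries under /var/log, biggest last
du -sh /var/log/* 2>/dev/null | sort -h | tail -n 15

# List individual files over 100 MB anywhere under /var/log
find /var/log -type f -size +100M -exec ls -lh {} \; 2>/dev/null
```

Run the same `du` survey against `/home` on cPanel servers, where per‑domain logs accumulate.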
gzip Compression and logrotate: The Foundation of Log Archiving
The simplest and most effective optimisation for logs is to compress them. Plain text compresses extremely well; it is common to see 70–90% size reduction using gzip. That means a 10 GB log history can shrink to 1–3 GB immediately.
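You can verify this ratio on your own data in seconds. A minimal check, assuming an Nginx access log as the sample (substitute any large text log you have):

```shell
# Compress a copy of a log and compare sizes; the original stays untouched.
# /var/log/nginx/access.log is an example path – substitute one of yours.
LOG=/var/log/nginx/access.log
gzip -c "$LOG" > /tmp/access.log.gz
ls -lh "$LOG" /tmp/access.log.gz   # compare the two sizes side by side
gzip -l /tmp/access.log.gz         # prints compressed/uncompressed sizes and ratio
```

Repetitive text such as access logs typically lands at the high end of that 70–90% range.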
How log rotation works
Both cPanel and most Linux distributions rely on a utility called logrotate to manage log files. The basic idea:
- The active log file (for example `access.log`) is periodically “rotated”.
- The current file is renamed to something like `access.log.1`, and a new empty `access.log` is created for the service to keep writing into.
- Older files (`.2`, `.3`…) are removed when they exceed your configured retention count or age.
- If compression is enabled, rotated files are compressed with gzip and end up as `access.log.1.gz`, `access.log.2.gz`, etc.
Rotation can be based on time (daily/weekly/monthly) or size (for example, when a log exceeds 100 MB). Most hosting setups use daily or weekly rotation plus gzip.
Configuring gzip and rotation on cPanel servers
On a cPanel/WHM server, many rotations are preconfigured, but you can adjust them at two levels:
- WHM interface: Under Service Configuration → Apache Configuration → Log Rotation you can adjust whether to archive logs and how long to keep them for web logs specifically.
- logrotate configuration files: Located under `/etc/logrotate.conf` and `/etc/logrotate.d/`. These are standard Linux configuration files; cPanel also installs its own snippets there for Exim, MySQL, cPanel logs, etc.
A typical logrotate rule for an Apache log might look like this:
/usr/local/apache/logs/access_log {
daily
rotate 14
compress
delaycompress
missingok
notifempty
sharedscripts
postrotate
/usr/local/apache/bin/apachectl graceful > /dev/null 2>&1 || true
endscript
}
Important directives:
- `daily`: Rotate the log every day.
- `rotate 14`: Keep 14 compressed archives before deleting the oldest.
- `compress` / `delaycompress`: Compress rotated logs with gzip (starting from the second newest file).
- `missingok` / `notifempty`: Do not complain if the file is missing or empty.
For most cPanel environments, tightening these rules (for example, from 90 days to 14 or 30 days for large, low‑value logs) immediately frees disk space without losing useful data. Security and audit logs can be retained longer but perhaps moved off the main server, which we will cover in the S3 section.
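Whenever you change a rule, it is worth testing it before the nightly cron run. A quick sketch, assuming your rule lives in a snippet called `/etc/logrotate.d/apache_access` (the name is hypothetical; both commands generally need root):

```shell
# Dry run: parse the rule and print what logrotate would do, changing nothing
logrotate -d /etc/logrotate.d/apache_access

# Force one real rotation to confirm that .gz archives actually appear
logrotate -f /etc/logrotate.d/apache_access
ls -lh /usr/local/apache/logs/access_log*
```

The dry run is also the fastest way to catch syntax errors before they silently disable rotation for a service.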
Configuring gzip and rotation on unmanaged VPS servers
On a plain VPS, you have full control. Typical steps:
- Inspect existing rules: `cat /etc/logrotate.conf` and `ls /etc/logrotate.d/`.
- Add or adjust rules for key services such as Nginx, Apache, MySQL, PostgreSQL, and your application logs.
Example for Nginx logs, using `maxsize` so the log rotates daily, or earlier if it grows past 100 MB (a plain `size` directive would make logrotate ignore the daily schedule and rotate on size alone):
/var/log/nginx/*.log {
daily
rotate 7
maxsize 100M
compress
missingok
notifempty
sharedscripts
postrotate
[ -f /run/nginx.pid ] && kill -USR1 $(cat /run/nginx.pid)
endscript
}
Example for a Laravel application log under /var/www/example.com/storage/logs/laravel.log:
/var/www/example.com/storage/logs/laravel.log {
daily
rotate 30
compress
missingok
notifempty
copytruncate
}
The `copytruncate` directive tells logrotate to copy the existing file and then truncate the original in place, so that PHP can keep writing to the same file handle. Any lines written between the copy and the truncate can be lost, which makes it unsuitable for ultra‑high traffic systems, but for most small to medium sites it works well.
Once gzip and logrotate are correctly set, you have a rolling window of compressed history. The next step is to decide which of those compressed logs stay on the server, and which should be offloaded to cheaper, more scalable storage.
Offloading Logs to S3‑Compatible Object Storage
Even compressed, long log histories eat storage and clutter backups if they remain on your main server forever. A better pattern is:
- Keep a short, recent window of logs locally (for example, 7–30 days).
- Archive older logs to an external, S3‑compatible object storage bucket.
- Apply separate, cheaper retention rules on that bucket (months or years).
Object storage is ideal here: it is inexpensive, highly durable and designed for exactly this kind of “write once, read rarely” data. At dchost.com we often pair VPS or dedicated servers with S3‑compatible object storage for backups and log archives, so that customers can scale retention without constantly resizing disks.
Designing your bucket structure
For logs, a clear naming structure makes later searches much easier. A common pattern is:
logs/<environment>/<hostname>/<service>/YYYY/MM/DD/filename.log.gz
For example:
- `logs/production/cpanel01/web/2026/02/07/access_log-20260207.gz`
- `logs/staging/vps-api01/nginx/2026/02/07/access.log-20260207.gz`
This structure lets you filter by environment, server and service when you later ingest logs into a central system or respond to an incident.
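The dated prefix itself is easy to generate inside a sync script. A minimal sketch, with `ENVIRONMENT` and `SERVICE` as illustrative variables we introduce here:

```shell
# Build a dated object-storage prefix: logs/<environment>/<hostname>/<service>/YYYY/MM/DD
ENVIRONMENT=production
SERVICE=nginx
PREFIX="logs/${ENVIRONMENT}/$(hostname -s)/${SERVICE}/$(date -u +%Y/%m/%d)"
echo "$PREFIX"
```

Using UTC (`date -u`) keeps prefixes consistent across servers in different time zones.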
Tools to sync logs to object storage
On Linux servers we usually rely on command‑line tools that speak the S3 API:
- rclone: Very flexible, supports many storage providers, sync and copy modes, encryption and bandwidth limits.
- s3cmd or other vendor tools: Simple and widely used; good for basic scripts.
- restic / borg: Backup tools that can also handle log archives as part of a wider backup strategy; we covered this pattern in our article on automating off‑site backups to object storage with rclone, restic and cron.
Example: rclone‑based log archive sync
1. Configure an S3‑compatible remote with rclone:
rclone config
# Create a new remote, choose S3, enter endpoint, access key and secret key
Assume the remote is called logarchive and the bucket my-logs.
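If you prefer scripted provisioning over the interactive wizard, rclone can also create the remote non‑interactively. A sketch with placeholder endpoint and keys (check `rclone config create` help for your version’s exact options):

```shell
# Non-interactive remote creation; endpoint and keys are placeholders
rclone config create logarchive s3 \
    provider Other \
    endpoint https://s3.example-provider.com \
    access_key_id YOUR_ACCESS_KEY \
    secret_access_key YOUR_SECRET_KEY

# Verify the remote works by listing the bucket
rclone lsd logarchive:my-logs
```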
2. Create a local directory where rotated, compressed logs live, for example `/var/log/archive/`. Configure your logrotate rules to move older `.gz` files there using the `olddir` directive:
/var/log/nginx/*.log {
daily
rotate 7
compress
olddir /var/log/archive/nginx
missingok
notifempty
}
3. Add a cron job to sync archives to object storage, for example once per night:
0 2 * * * /usr/bin/rclone sync /var/log/archive/nginx logarchive:my-logs/production/vps-api01/nginx --transfers=4 --checkers=4 --log-file=/var/log/rclone-logarchive.log --log-level=INFO
4. After confirming files are safely uploaded and versioning is enabled on the bucket, you can add another logrotate or cron‑driven cleanup on the /var/log/archive/ directory itself, keeping only 7–30 days locally while the bucket holds months or years.
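That cleanup step can be sketched as two commands: verify the bucket matches the local archive, then delete only sufficiently old local copies. Paths and the 30‑day threshold mirror the examples above:

```shell
# Verify local archives match what is already in the bucket
rclone check /var/log/archive/nginx logarchive:my-logs/production/vps-api01/nginx

# Then remove local .gz archives older than 30 days
find /var/log/archive/nginx -name '*.gz' -type f -mtime +30 -delete
```

Running the `rclone check` first means a failed upload never silently turns into data loss.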
Security considerations for log archives
- Encryption: Enable server‑side encryption on your object storage bucket where possible. For sensitive logs, consider client‑side encryption (for example rclone’s `crypt` backend on top of S3).
- Access control: Use dedicated access keys with minimal permissions (only the specific bucket and folder, write‑only if feasible from the server side).
- Public access: Log buckets should never be public. Make sure block‑public‑access style settings are enabled.
We also strongly recommend reviewing what personal data appears in logs. If you operate in KVKK/GDPR‑regulated environments, our article on log anonymisation and IP masking for KVKK/GDPR‑compliant logs is a good companion to this archiving guide.
Designing Realistic Log Retention Policies
Compression and S3 offload answer the how of log archiving. Retention policies answer the how long. Without clear policies, teams either keep everything forever (wasting storage and increasing privacy risk) or delete too aggressively (hurting incident response and compliance).
Step 1: Classify your logs
A practical way to start is to classify logs into three broad categories:
- Operational logs: Web access logs, application logs at `INFO` level, general system logs. These help you troubleshoot performance issues or errors.
- Security and audit logs: Authentication logs, firewall logs, panel access logs, admin actions. These are crucial for investigating suspicious activity.
- Debug logs: Verbose development logs, SQL debug output, framework debug modes. These are usually only needed in dev/staging or for short periods in production.
Step 2: Map categories to retention periods
The exact numbers depend on your industry and any contractual or legal obligations, but a typical baseline for many small and medium businesses might look like this:
| Log type | Local retention (on server) | Archive retention (object storage) |
|---|---|---|
| Web access logs | 7–30 days | 3–12 months |
| Application error logs | 14–60 days | 6–12 months |
| Database logs (slow query, error) | 7–30 days | 3–6 months |
| Security/audit logs (SSH, panel logins) | 30–90 days | 12–24 months (depending on policy) |
| Debug logs | 0–7 days (only during an incident) | Usually no archive |
When you design these periods, align them with whatever you have defined for backups and database snapshots. Our article on how long you should keep backups under KVKK/GDPR vs storage costs follows the same logic, and it is helpful to treat logs as just another dataset within your overall retention policy.
Step 3: Reflect policies in logrotate and object storage
Once you have your desired numbers:
- Set `rotate N` and `daily`/`weekly` in logrotate to achieve your local retention window (for example, `daily` + `rotate 14` ≈ 14 days).
- Configure an S3 lifecycle policy on the log bucket to transition old objects to cheaper “cold” tiers or delete them after your archive window. For example, delete access logs after 365 days, security logs after 730 days.
The combination of logrotate + S3 lifecycle policies gives you automated enforcement: old logs disappear on schedule without manual cleanup.
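On providers that expose the standard S3 API, such a lifecycle rule can be applied with the AWS CLI. A sketch with placeholder endpoint and bucket; supported storage classes and transition options vary by provider, so confirm against your provider’s documentation:

```shell
# Sketch: expire web access logs under one prefix after 365 days.
# Endpoint, bucket and prefix are placeholders for your own setup.
aws s3api put-bucket-lifecycle-configuration \
    --endpoint-url https://s3.example-provider.com \
    --bucket my-logs \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "expire-web-access-logs",
        "Status": "Enabled",
        "Filter": { "Prefix": "logs/production/cpanel01/web/" },
        "Expiration": { "Days": 365 }
      }]
    }'
```

Separate rules per prefix let you keep security logs for 730 days while access logs expire at 365.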
Step 4: Think about privacy and minimisation
Under KVKK/GDPR, you must not keep personal data longer than necessary for your stated purposes. Logs often contain IP addresses, user IDs, email addresses and sometimes request bodies. A few practical tips:
- Shorten retention for logs with high personal data exposure (for example, application request logs that include full URLs with query parameters).
- Prefer anonymised or pseudonymised fields where possible, especially for long‑term archives. Again, our log anonymisation guide dives deeper into concrete techniques.
- Document your retention decisions in a policy document that your technical and legal teams both understand.
Putting It All Together: Reference Architectures for cPanel and VPS
Let us combine the pieces into two concrete setups you can implement today: one for WHM/cPanel servers and one for custom VPS environments.
Scenario 1: WHM/cPanel server hosting many sites
This is common for agencies and resellers using dchost.com’s cPanel hosting or their own dedicated/cPanel servers.
- Review existing rotation:
  - Check WHM’s log configuration for Apache, cPanel and Exim.
  - Inspect `/etc/logrotate.d/` for service‑specific rules.
- Enable and tighten gzip:
  - Ensure `compress` is set for all high‑volume logs (web, mail, panel).
  - Reduce `rotate` counts where safe. For example, from 90 to 30 days for general access logs if you also archive to object storage.
- Define an archive directory and object storage sync:
  - Configure logrotate to move older `.gz` files into something like `/var/log/archive/{apache,exim,cpanel}`.
  - Use rclone or s3cmd with a nightly cron job to push those archives into a bucket such as `logs/production/cpanel01/<service>/`.
- Apply lifecycle policies on the bucket:
  - For example: keep Apache logs for 12 months, Exim logs for 24 months, cPanel access logs for 18 months.
  - Optionally transition older objects to a colder, cheaper storage class after 90 days.
- Document and test:
  - Write down which logs live where, and for how long.
  - Run a test by restoring a specific log file from the object storage archive and verifying you can read it.
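That restore test can itself be two commands: fetch one archived object and confirm it is intact gzip. The object path below is an example following the naming scheme from earlier:

```shell
# Fetch one archived log from the bucket and verify it decompresses cleanly
rclone copy logarchive:my-logs/production/cpanel01/web/2026/02/07/access_log-20260207.gz /tmp/restore-test/
gzip -t /tmp/restore-test/access_log-20260207.gz && echo "archive OK"
zcat /tmp/restore-test/access_log-20260207.gz | head -n 5
```

Scheduling this check quarterly catches broken uploads long before an incident forces you to find out.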
This approach keeps the cPanel server lean – only a few weeks of logs on the main SSD – while still giving you a long, searchable history off the box. It also plays nicely with centralised log stacks if you decide to add them later.
Scenario 2: Custom VPS with Nginx/Apache, databases and apps
For VPS customers running their own stacks (for example, Nginx + PHP‑FPM + MariaDB + Redis + Node.js), you have even more flexibility.
- Standardise log locations:
  - Ensure each app writes to a dedicated directory such as `/var/log/myapp/`, not random paths under `/home`.
  - Configure pm2 or systemd units to log to files or to journald consistently.
- Create logrotate rules per service:
  - Web server: daily rotation, 7–14 files, gzip, `olddir /var/log/archive/nginx`.
  - Database: smaller retention locally (7–30 days) because these logs can be heavy.
  - App logs: tune per project; for chatty Laravel logs you might want `size 50M` plus daily rotation.
- Sync archives to object storage:
  - Use rclone scripts similar to the cPanel scenario, but with separate prefixes for each VPS (for example `logs/production/vps-app01/`).
  - Encrypt logs in transit (HTTPS to the object storage endpoint) and at rest.
- Optional: centralise logs for search and alerts:
  - Instead of only cold archives, you can stream current logs into a central stack such as ELK/OpenSearch or Grafana Loki, and keep S3 as long‑term storage.
  - We explained this pattern in more depth in our guide to centralising logs for multiple servers with ELK and Loki.
- Monitor disk usage and adjust:
  - Watch actual growth over a few weeks. If `/var/log` still grows fast, tighten local retention or reduce log verbosity.
  - Review object storage bills and tune lifecycle policies; moving old logs to colder storage or reducing retention can have a big impact.
With this model, your VPS stays clean and predictable, while your log history lives in a durable, inexpensive archive. If you ever scale out to multiple VPSs, adopting a centralised stack as described above becomes a natural next step instead of a painful migration.
A Maintainable Log Archiving Strategy for the Long Term
Log archiving on cPanel and VPS does not need to be complicated or expensive. A solid strategy rests on a few practical decisions:
- Use gzip and logrotate everywhere to keep local disks lean.
- Separate hot vs cold logs: recent logs stay on fast SSD, older logs move to S3‑compatible object storage.
- Define clear retention windows per log category, aligned with your security, troubleshooting and regulatory needs.
- Automate enforcement with logrotate rules and object storage lifecycle policies, so no one has to remember manual cleanup tasks.
- Review privacy aspects and apply anonymisation or masking where long‑term retention is required.
From our perspective at dchost.com, the teams that sleep best are the ones who know exactly how far back they can look in their logs, how quickly they can fetch an archive and when old data will be deleted. If you are already hosting with us on cPanel, VPS, dedicated servers or colocation, our support team can help you review current log growth, propose reasonable retention targets and connect your servers to S3‑compatible object storage for archiving. Starting small – for example with web access logs only – is perfectly fine. The important part is to move from “logs grow forever until the disk is full” to a conscious, documented and automated log archiving strategy.
