{"id":3301,"date":"2025-12-14T20:17:23","date_gmt":"2025-12-14T17:17:23","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/centralizing-logs-for-multiple-servers-with-elk-and-loki-in-hosting-environments\/"},"modified":"2025-12-14T20:17:23","modified_gmt":"2025-12-14T17:17:23","slug":"centralizing-logs-for-multiple-servers-with-elk-and-loki-in-hosting-environments","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/centralizing-logs-for-multiple-servers-with-elk-and-loki-in-hosting-environments\/","title":{"rendered":"Centralizing Logs for Multiple Servers with ELK and Loki in Hosting Environments"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>When you operate more than a couple of servers, logging stops being a simple text-file problem and becomes an observability problem. Apache errors on one <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a>, PHP warnings on another, MySQL slow queries on a dedicated database node, firewall events on an edge server\u2026 if each machine keeps its own logs, finding the root cause of an incident or a performance regression can take hours. Centralizing logs changes this completely: you can search across all servers, correlate events in seconds, and build alerts that react to real behaviour instead of guesses.<\/p>\n<p>In this article we will walk through how to build centralized logging for multiple servers using both the <strong>ELK stack<\/strong> (Elasticsearch, Logstash, Kibana) and the <strong>Loki stack<\/strong> (Grafana Loki + Promtail + Grafana). We will focus on practical hosting scenarios: fleets of VPSs, a mix of <a href=\"https:\/\/www.dchost.com\/dedicated-server\">dedicated server<\/a>s and colocation, or multi-tenant environments where you host many client sites. 
As the dchost.com team, we will share patterns that work well on real infrastructure, and how to choose between ELK and Loki for your own environment.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#Why_Centralized_Logging_Matters_in_Hosting_Environments\"><span class=\"toc_number toc_depth_1\">1<\/span> Why Centralized Logging Matters in Hosting Environments<\/a><\/li><li><a href=\"#The_Core_Building_Blocks_of_a_Centralized_Logging_Stack\"><span class=\"toc_number toc_depth_1\">2<\/span> The Core Building Blocks of a Centralized Logging Stack<\/a><ul><li><a href=\"#1_Log_Collection_Agents\"><span class=\"toc_number toc_depth_2\">2.1<\/span> 1. Log Collection Agents<\/a><\/li><li><a href=\"#2_Transport_and_Buffering\"><span class=\"toc_number toc_depth_2\">2.2<\/span> 2. Transport and Buffering<\/a><\/li><li><a href=\"#3_Storage_Indexing_and_Search\"><span class=\"toc_number toc_depth_2\">2.3<\/span> 3. Storage, Indexing and Search<\/a><\/li><li><a href=\"#4_Visualization_and_Alerting\"><span class=\"toc_number toc_depth_2\">2.4<\/span> 4. Visualization and Alerting<\/a><\/li><li><a href=\"#5_Security_Multi-Tenancy_and_Retention\"><span class=\"toc_number toc_depth_2\">2.5<\/span> 5. 
Security, Multi-Tenancy and Retention<\/a><\/li><\/ul><\/li><li><a href=\"#ELK_Stack_for_Multi-Server_Logging\"><span class=\"toc_number toc_depth_1\">3<\/span> ELK Stack for Multi-Server Logging<\/a><ul><li><a href=\"#Key_Components_and_Their_Roles\"><span class=\"toc_number toc_depth_2\">3.1<\/span> Key Components and Their Roles<\/a><\/li><li><a href=\"#Reference_Architecture_for_1030_Servers\"><span class=\"toc_number toc_depth_2\">3.2<\/span> Reference Architecture for 10\u201330 Servers<\/a><\/li><li><a href=\"#Advantages_of_ELK_in_Hosting_Environments\"><span class=\"toc_number toc_depth_2\">3.3<\/span> Advantages of ELK in Hosting Environments<\/a><\/li><li><a href=\"#Challenges_and_How_to_Mitigate_Them\"><span class=\"toc_number toc_depth_2\">3.4<\/span> Challenges and How to Mitigate Them<\/a><\/li><\/ul><\/li><li><a href=\"#Loki_Stack_for_Hosting_Environments\"><span class=\"toc_number toc_depth_1\">4<\/span> Loki Stack for Hosting Environments<\/a><ul><li><a href=\"#Loki_Stack_Components\"><span class=\"toc_number toc_depth_2\">4.1<\/span> Loki Stack Components<\/a><\/li><li><a href=\"#How_Lokis_Label-Centric_Model_Fits_Hosting_Workloads\"><span class=\"toc_number toc_depth_2\">4.2<\/span> How Loki\u2019s Label-Centric Model Fits Hosting Workloads<\/a><\/li><li><a href=\"#Reference_Architecture_for_1050_Servers\"><span class=\"toc_number toc_depth_2\">4.3<\/span> Reference Architecture for 10\u201350 Servers<\/a><\/li><li><a href=\"#Benefits_of_Loki_for_Multi-Server_Hosting\"><span class=\"toc_number toc_depth_2\">4.4<\/span> Benefits of Loki for Multi-Server Hosting<\/a><\/li><li><a href=\"#Where_Loki_Is_Less_Ideal\"><span class=\"toc_number toc_depth_2\">4.5<\/span> Where Loki Is Less Ideal<\/a><\/li><\/ul><\/li><li><a href=\"#ELK_vs_Loki_Choosing_the_Right_Stack_for_Your_Hosting_Scenario\"><span class=\"toc_number toc_depth_1\">5<\/span> ELK vs Loki: Choosing the Right Stack for Your Hosting Scenario<\/a><ul><li><a 
href=\"#When_ELK_Is_a_Better_Fit\"><span class=\"toc_number toc_depth_2\">5.1<\/span> When ELK Is a Better Fit<\/a><\/li><li><a href=\"#When_Loki_Is_a_Better_Fit\"><span class=\"toc_number toc_depth_2\">5.2<\/span> When Loki Is a Better Fit<\/a><\/li><li><a href=\"#Hybrid_Approaches\"><span class=\"toc_number toc_depth_2\">5.3<\/span> Hybrid Approaches<\/a><\/li><\/ul><\/li><li><a href=\"#Designing_Centralized_Logging_for_a_1020_Server_Hosting_Environment\"><span class=\"toc_number toc_depth_1\">6<\/span> Designing Centralized Logging for a 10\u201320 Server Hosting Environment<\/a><ul><li><a href=\"#Step_1_Define_Goals_and_Scope\"><span class=\"toc_number toc_depth_2\">6.1<\/span> Step 1: Define Goals and Scope<\/a><\/li><li><a href=\"#Step_2_Choose_the_Stack_and_Hardware\"><span class=\"toc_number toc_depth_2\">6.2<\/span> Step 2: Choose the Stack and Hardware<\/a><\/li><li><a href=\"#Step_3_Standardize_Log_Formats_on_Each_Server\"><span class=\"toc_number toc_depth_2\">6.3<\/span> Step 3: Standardize Log Formats on Each Server<\/a><\/li><li><a href=\"#Step_4_Configure_Promtail_on_All_Nodes\"><span class=\"toc_number toc_depth_2\">6.4<\/span> Step 4: Configure Promtail on All Nodes<\/a><\/li><li><a href=\"#Step_5_Build_Dashboards_and_Basic_Alerts\"><span class=\"toc_number toc_depth_2\">6.5<\/span> Step 5: Build Dashboards and Basic Alerts<\/a><\/li><li><a href=\"#Step_6_Add_ELK_Where_It_Really_Helps\"><span class=\"toc_number toc_depth_2\">6.6<\/span> Step 6: Add ELK Where It Really Helps<\/a><\/li><\/ul><\/li><li><a href=\"#Operational_Best_Practices_Retention_Backups_and_Cost_Control\"><span class=\"toc_number toc_depth_1\">7<\/span> Operational Best Practices: Retention, Backups and Cost Control<\/a><ul><li><a href=\"#Retention_Policies_by_Log_Type\"><span class=\"toc_number toc_depth_2\">7.1<\/span> Retention Policies by Log Type<\/a><\/li><li><a href=\"#Backups_and_Disaster_Recovery_for_Logging\"><span class=\"toc_number toc_depth_2\">7.2<\/span> Backups 
and Disaster Recovery for Logging<\/a><\/li><li><a href=\"#Cost_and_Performance_Tuning\"><span class=\"toc_number toc_depth_2\">7.3<\/span> Cost and Performance Tuning<\/a><\/li><\/ul><\/li><li><a href=\"#Summary_and_Next_Steps_with_dchostcom\"><span class=\"toc_number toc_depth_1\">8<\/span> Summary and Next Steps with dchost.com<\/a><\/li><\/ul><\/div>\n<h2><span id=\"Why_Centralized_Logging_Matters_in_Hosting_Environments\">Why Centralized Logging Matters in Hosting Environments<\/span><\/h2>\n<p>On a single server, SSH-ing in and running <code>tail -f \/var\/log\/nginx\/error.log<\/code> may be enough. But as soon as you have multiple web nodes, database servers, caching tiers or background workers, this approach breaks down. You need a way to see what happened across the entire stack, in order, around the same time.<\/p>\n<p>In typical hosting setups we see at dchost.com \u2013 such as several VPSs serving a WooCommerce store, or an agency running 20+ client WordPress instances \u2013 the lack of centralized logging leads to a few recurring problems:<\/p>\n<ul>\n<li><strong>Slow troubleshooting:<\/strong> You jump between servers, grep different files, and try to mentally align timestamps.<\/li>\n<li><strong>Missed security signals:<\/strong> Brute-force attempts, suspicious 5xx bursts or WAF blocks on one server may be invisible when you only look locally.<\/li>\n<li><strong>No historical context:<\/strong> Log rotation on each host removes older data, exactly when you need it to understand a long-running issue.<\/li>\n<li><strong>Inconsistent formats:<\/strong> Each application logs differently, making manual analysis painful.<\/li>\n<\/ul>\n<p>Centralized logging solves these by shipping logs from every server to a <strong>single, searchable platform<\/strong> with dashboards and alerts. 
Once you have that in place, techniques like diagnosing <a href=\"https:\/\/www.dchost.com\/blog\/en\/hosting-sunucu-loglarini-okumayi-ogrenin-apache-ve-nginx-ile-4xx-5xx-hatalarini-teshis-rehberi\/\">4xx\u20135xx errors in Apache and Nginx logs<\/a> become faster and more systematic, because you can see patterns across all nodes, not just one.<\/p>\n<h2><span id=\"The_Core_Building_Blocks_of_a_Centralized_Logging_Stack\">The Core Building Blocks of a Centralized Logging Stack<\/span><\/h2>\n<p>Whether you choose ELK or Loki, the architecture of a logging stack in a hosting environment usually has the same building blocks:<\/p>\n<h3><span id=\"1_Log_Collection_Agents\">1. Log Collection Agents<\/span><\/h3>\n<p>Each server needs a small agent or forwarder that reads local log files or streams and sends them to the central system. Common options include:<\/p>\n<ul>\n<li><strong>Filebeat \/ other Beats:<\/strong> Lightweight shippers commonly used with ELK, good for tailing files and tagging logs.<\/li>\n<li><strong>Logstash Forwarder or Logstash itself:<\/strong> More heavyweight but powerful, often used to parse and transform logs before sending them on.<\/li>\n<li><strong>Promtail:<\/strong> Companion agent for Loki, designed to tail log files or journal entries and attach structured labels.<\/li>\n<li><strong>Syslog:<\/strong> Classic approach where services send logs via UDP\/TCP to a central syslog server.<\/li>\n<\/ul>\n<p>For hosting workloads, agents like Filebeat and Promtail are usually the most convenient: they are easy to deploy via Ansible or scripts across VPS and dedicated servers, support high throughput, and handle log rotation automatically.<\/p>\n<h3><span id=\"2_Transport_and_Buffering\">2. Transport and Buffering<\/span><\/h3>\n<p>Logs need to travel reliably from servers to your central platform, even if network hiccups or bursts occur. 
Options include:<\/p>\n<ul>\n<li><strong>Direct shipping:<\/strong> Agents send logs straight to Elasticsearch or Loki over HTTP or gRPC.<\/li>\n<li><strong>Message queues:<\/strong> Kafka, Redis or other queues sit in the middle, absorbing bursts and decoupling producers from consumers.<\/li>\n<li><strong>Local buffering:<\/strong> Modern agents keep a small on-disk buffer, so short outages in the central cluster do not cause log loss.<\/li>\n<\/ul>\n<p>For small to medium multi-server setups (for example 10\u201330 VPSs or a few dedicated servers), direct shipping with on-disk buffering is usually enough. Larger environments or very high traffic sites may introduce Kafka or another queue.<\/p>\n<h3><span id=\"3_Storage_Indexing_and_Search\">3. Storage, Indexing and Search<\/span><\/h3>\n<p>This is the heart of the logging stack: where logs are stored and how quickly you can query them.<\/p>\n<ul>\n<li><strong>Elasticsearch:<\/strong> Stores logs in <strong>indices<\/strong> with a flexible schema and inverted indexes for full-text search and aggregations.<\/li>\n<li><strong>Loki:<\/strong> Stores logs as compressed streams, indexed mainly by <strong>labels<\/strong> (metadata such as host, app, environment), and offloads most of the raw text to object storage.<\/li>\n<\/ul>\n<p>Elasticsearch shines when you need powerful analytics on structured fields (response time histograms, per-country aggregations, etc.). Loki shines when you want to store huge volumes of logs cheaply and search by labels and text, especially for infrastructure and application logs.<\/p>\n<h3><span id=\"4_Visualization_and_Alerting\">4. 
Visualization and Alerting<\/span><\/h3>\n<p>Once logs are centralized, you need dashboards and alerts:<\/p>\n<ul>\n<li><strong>Kibana:<\/strong> The traditional UI for Elasticsearch, with rich dashboards, visualizations and saved searches.<\/li>\n<li><strong>Grafana:<\/strong> A versatile dashboard tool that can query Loki, Elasticsearch and also metrics sources like Prometheus.<\/li>\n<\/ul>\n<p>Grafana is an especially good fit in hosting environments if you also follow our guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">setting up monitoring and alerts with Prometheus and Grafana<\/a>. Using one tool to visualize both metrics and logs makes correlation much easier.<\/p>\n<h3><span id=\"5_Security_Multi-Tenancy_and_Retention\">5. Security, Multi-Tenancy and Retention<\/span><\/h3>\n<p>Central logs often contain sensitive data. If you host multiple projects or clients on the same logging stack, you must think about:<\/p>\n<ul>\n<li><strong>Access control:<\/strong> Who can see which logs? Can a given team only see its own apps or namespaces?<\/li>\n<li><strong>Network security:<\/strong> TLS encryption between agents and the central cluster, and firewall rules limiting access.<\/li>\n<li><strong>Retention and compliance:<\/strong> How long you keep logs for operational needs vs legal requirements (e.g. 
KVKK\/GDPR).<\/li>\n<\/ul>\n<p>We have a dedicated article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/kvkk-ve-gdpr-uyumlu-hosting-nasil-kurulur-veri-yerellestirme-loglama-ve-silme-uzerine-sicacik-bir-yol-haritasi\/\">KVKK and GDPR-compliant hosting, log retention and deletion practices<\/a> that goes into more detail on the compliance side.<\/p>\n<h2><span id=\"ELK_Stack_for_Multi-Server_Logging\">ELK Stack for Multi-Server Logging<\/span><\/h2>\n<p>The <strong>ELK stack<\/strong> \u2013 Elasticsearch, Logstash and Kibana \u2013 is one of the most widely used logging platforms. For hosting environments, it offers mature tooling and a rich ecosystem, but comes with higher resource usage than Loki.<\/p>\n<h3><span id=\"Key_Components_and_Their_Roles\">Key Components and Their Roles<\/span><\/h3>\n<ul>\n<li><strong>Elasticsearch:<\/strong> Distributed search and analytics engine that stores log events in indices.<\/li>\n<li><strong>Logstash:<\/strong> Data processing pipeline that can parse, enrich and route logs.<\/li>\n<li><strong>Beats (e.g. Filebeat, Metricbeat):<\/strong> Lightweight agents on each server that ship logs and metrics.<\/li>\n<li><strong>Kibana:<\/strong> Web UI for searching, visualizing and alerting on logs.<\/li>\n<\/ul>\n<p>You do not always need Logstash. 
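Skipping Logstash simply means pointing Filebeat at Elasticsearch itself. As a hedged sketch of such a filebeat.yml (the host, credentials, paths and extra fields below are placeholders, not a recommended production setup):

```yaml
# Minimal Filebeat sketch: tail Nginx logs on this server and ship
# them straight to Elasticsearch, with extra fields for filtering.
filebeat.inputs:
  - type: filestream
    id: nginx-logs
    paths:
      - /var/log/nginx/access.log
      - /var/log/nginx/error.log
    fields:
      project: shop            # illustrative value
      environment: production  # illustrative value
    fields_under_root: true

output.elasticsearch:
  hosts: ["https://logs.example.internal:9200"]  # placeholder host
  username: "filebeat_writer"                    # placeholder credentials
  password: "${ES_PASSWORD}"
```

For common services it is often simpler to run `filebeat modules enable nginx` and let the prebuilt module handle parsing instead of configuring raw inputs.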
For many setups, Filebeat can send logs directly to Elasticsearch, using built\u2011in modules to parse Nginx, Apache, MySQL and system logs.<\/p>\n<h3><span id=\"Reference_Architecture_for_1030_Servers\">Reference Architecture for 10\u201330 Servers<\/span><\/h3>\n<p>In a typical medium-sized hosting environment (for example a mix of 10\u201330 VPS and dedicated servers), you can start with:<\/p>\n<ul>\n<li><strong>1\u20133 Elasticsearch nodes<\/strong> on a dedicated logging VPS or server cluster.<\/li>\n<li><strong>Optional Logstash<\/strong> nodes if you need heavy parsing or enrichment.<\/li>\n<li><strong>Filebeat<\/strong> installed on each application, database and edge server.<\/li>\n<li><strong>Kibana<\/strong> hosted on the same logging server or in front of the cluster, protected with authentication and HTTPS.<\/li>\n<\/ul>\n<p>Each Filebeat instance tails key logs: web server access\/error logs, PHP-FPM logs, MySQL slow query logs, application logs (Laravel, Node.js, etc.), as well as system logs (journal or \/var\/log\/messages). It enriches events with fields like <code>host.name<\/code>, <code>environment<\/code>, <code>project<\/code> and sends them to Elasticsearch.<\/p>\n<h3><span id=\"Advantages_of_ELK_in_Hosting_Environments\">Advantages of ELK in Hosting Environments<\/span><\/h3>\n<ul>\n<li><strong>Powerful structured queries:<\/strong> You can aggregate on fields like HTTP status, upstream response time, URI, or customer ID.<\/li>\n<li><strong>Rich dashboards:<\/strong> Kibana makes it easy to build error-rate views, API latency histograms, and per\u2011server comparisons.<\/li>\n<li><strong>Mature ecosystem:<\/strong> Many prebuilt dashboards and parsers for common hosting components (Nginx, Apache, MySQL, systemd).<\/li>\n<li><strong>Alerting:<\/strong> Kibana and Elasticsearch can trigger alerts on log patterns (e.g. 
too many 500s, too many login failures).<\/li>\n<\/ul>\n<h3><span id=\"Challenges_and_How_to_Mitigate_Them\">Challenges and How to Mitigate Them<\/span><\/h3>\n<p>ELK\u2019s main downside is its <strong>resource usage and operational complexity<\/strong> as data volumes grow:<\/p>\n<ul>\n<li><strong>Disk and RAM hungry:<\/strong> Inverted indices and replicas consume a lot of storage and memory.<\/li>\n<li><strong>Index management:<\/strong> You must plan index rotation, templates and shard counts to avoid performance issues.<\/li>\n<li><strong>Scaling overhead:<\/strong> Adding more nodes requires careful balancing and monitoring.<\/li>\n<\/ul>\n<p>To keep ELK manageable for hosting use cases:<\/p>\n<ul>\n<li>Use <strong>index lifecycle management (ILM)<\/strong> to automatically roll over and delete old indices.<\/li>\n<li>Separate indices by <strong>log type<\/strong> (e.g. <code>nginx-*<\/code>, <code>php-*<\/code>, <code>mysql-*<\/code>) so you can tune retention per type.<\/li>\n<li>Consider storing only <strong>structured and high-value logs<\/strong> in ELK (e.g. errors, slow queries, security events) and pushing bulk info logs to Loki or cheaper storage.<\/li>\n<\/ul>\n<h2><span id=\"Loki_Stack_for_Hosting_Environments\">Loki Stack for Hosting Environments<\/span><\/h2>\n<p><strong>Grafana Loki<\/strong> takes a different approach. Instead of indexing every word of every log line, Loki indexes only labels (metadata) and stores the raw log text in compressed chunks, often on object storage. 
This design is ideal for infrastructure logs where you want to:<\/p>\n<ul>\n<li>Keep <strong>large volumes of logs<\/strong> for a long time, at lower cost.<\/li>\n<li>Query by <strong>host, app, environment or container<\/strong> first, then filter log text.<\/li>\n<li>Reuse <strong>Grafana<\/strong> for both metrics and logs, with a consistent UI.<\/li>\n<\/ul>\n<h3><span id=\"Loki_Stack_Components\">Loki Stack Components<\/span><\/h3>\n<ul>\n<li><strong>Loki:<\/strong> The log aggregation system that accepts log streams, indexes labels and stores chunks.<\/li>\n<li><strong>Promtail:<\/strong> Log collector\/shipper that runs on each server, tails log files or journal entries, and adds labels.<\/li>\n<li><strong>Grafana:<\/strong> Dashboard and exploration interface, querying Loki with LogQL.<\/li>\n<\/ul>\n<p>We have written detailed practical guides about Loki in VPS environments, such as our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-log-yonetimi-nasil-rayina-oturur-grafana-loki-promtail-ile-merkezi-loglama-tutma-sureleri-ve-alarm-kurallari\/\">VPS log management with Loki and Promtail, including retention and alert rules<\/a>, and an in\u2011depth <a href=\"https:\/\/www.dchost.com\/blog\/en\/merkezi-loglama-ve-gozlemlenebilirlik-vpste-loki-promtail-grafana-ile-sakin-kalan-bir-zihin\/\">Loki + Promtail + Grafana centralized logging playbook<\/a>. Here we will focus on how this stack behaves across multiple servers.<\/p>\n<h3><span id=\"How_Lokis_Label-Centric_Model_Fits_Hosting_Workloads\">How Loki\u2019s Label-Centric Model Fits Hosting Workloads<\/span><\/h3>\n<p>With Loki, you describe your log streams using <strong>labels<\/strong>. 
For example, Promtail might attach the following labels:<\/p>\n<ul>\n<li><code>{job=\"nginx\", host=\"web-01\", environment=\"production\", project=\"shop\"}<\/code><\/li>\n<li><code>{job=\"php-fpm\", host=\"web-02\", environment=\"staging\", project=\"client-x\"}<\/code><\/li>\n<\/ul>\n<p>Labels are cheap to query and perfect for multi-server hosting:<\/p>\n<ul>\n<li>You can instantly filter all logs for <code>project=\"shop\"<\/code> across every server.<\/li>\n<li>You can compare errors on <code>host=\"web-01\"<\/code> vs <code>host=\"web-02\"<\/code> without SSH-ing anywhere.<\/li>\n<li>For agencies or resellers, you can label by <code>customer<\/code> or <code>panel_account<\/code> to separate clients logically.<\/li>\n<\/ul>\n<p>The text of each log line is stored in compressed form. When you run a query in Grafana, Loki finds the relevant label-matched streams, then decompresses only the relevant chunks and filters within them. This is much more storage-efficient than full-text indexing all content, while still giving you powerful filtering via <strong>LogQL<\/strong>.<\/p>\n<h3><span id=\"Reference_Architecture_for_1050_Servers\">Reference Architecture for 10\u201350 Servers<\/span><\/h3>\n<p>A common Loki deployment for hosting environments could look like this:<\/p>\n<ul>\n<li><strong>1 Loki instance<\/strong> for small setups, or <strong>2\u20133 instances<\/strong> in microservices mode for redundancy.<\/li>\n<li><strong>Object storage<\/strong> (or a dedicated filesystem) for chunks and indexes.<\/li>\n<li><strong>Promtail<\/strong> installed on every VPS, dedicated server and node in your colocation racks.<\/li>\n<li><strong>Grafana<\/strong> on a management VPS or the same server as Loki, secured via HTTPS and login.<\/li>\n<\/ul>\n<p>Promtail configurations on each server typically include scrape jobs for:<\/p>\n<ul>\n<li>Nginx \/ Apache access and error logs.<\/li>\n<li>Application logs (Laravel, Symfony, Node.js, etc.).<\/li>\n<li>systemd-journald 
for OS-level events (ssh, sudo, kernel messages).<\/li>\n<li>MySQL or PostgreSQL logs and slow queries.<\/li>\n<\/ul>\n<p>Each log path gets a distinct <code>job<\/code> label, and you enrich with labels like <code>environment<\/code>, <code>project<\/code> and <code>customer<\/code>. This yields a clean, queryable label space even for dozens of servers.<\/p>\n<h3><span id=\"Benefits_of_Loki_for_Multi-Server_Hosting\">Benefits of Loki for Multi-Server Hosting<\/span><\/h3>\n<ul>\n<li><strong>Lower storage and RAM requirements:<\/strong> Ideal if you run many VPSs or high-traffic sites where log volume is huge.<\/li>\n<li><strong>Tight integration with metrics:<\/strong> If you already use Prometheus and Grafana, Loki feels natural.<\/li>\n<li><strong>Good fit for infrastructure logs:<\/strong> Nginx, systemd, containers and Kubernetes logs map nicely to label-based queries.<\/li>\n<li><strong>Simpler scaling model:<\/strong> You can scale storage separately from compute.<\/li>\n<\/ul>\n<h3><span id=\"Where_Loki_Is_Less_Ideal\">Where Loki Is Less Ideal<\/span><\/h3>\n<p>Loki is not a full-text analytics engine like Elasticsearch. If your primary use case is:<\/p>\n<ul>\n<li>Heavy aggregations on arbitrary JSON fields (e.g. ad-tech logs, complex BI analytics).<\/li>\n<li>Frequent joins between logs and other data sources.<\/li>\n<\/ul>\n<p>\u2026then ELK may still be the better fit. Many teams actually run both: Loki for infrastructure and high\u2011volume app logs, ELK for specialized analytical use cases.<\/p>\n<h2><span id=\"ELK_vs_Loki_Choosing_the_Right_Stack_for_Your_Hosting_Scenario\">ELK vs Loki: Choosing the Right Stack for Your Hosting Scenario<\/span><\/h2>\n<p>Both stacks solve centralized logging, but they excel in slightly different areas. 
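To keep the Loki side of that comparison concrete, here is a hedged Promtail sketch using the label scheme from the previous section (the Loki URL, host name and label values are placeholders):

```yaml
# Promtail sketch: tail Nginx logs on one web node and attach the
# labels discussed above, so queries can filter by host/project/env.
server:
  http_listen_port: 9080

positions:
  filename: /var/lib/promtail/positions.yaml  # remembers read offsets

clients:
  - url: https://loki.example.internal:3100/loki/api/v1/push  # placeholder

scrape_configs:
  - job_name: nginx
    static_configs:
      - targets: [localhost]
        labels:
          job: nginx
          host: web-01
          environment: production
          project: shop
          __path__: /var/log/nginx/*.log
```

With this in place on every node, a Grafana query such as `{job="nginx", project="shop"}` selects the matching streams across all servers at once.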
Here is how we typically reason about them with dchost.com customers.<\/p>\n<h3><span id=\"When_ELK_Is_a_Better_Fit\">When ELK Is a Better Fit<\/span><\/h3>\n<ul>\n<li><strong>Advanced analytics are key:<\/strong> You need detailed Kibana dashboards, scripted fields, and ad\u2011hoc aggregations across large structured datasets.<\/li>\n<li><strong>Business analytics on logs:<\/strong> You treat logs as semi-structured event data for reporting, not only troubleshooting.<\/li>\n<li><strong>Team is already invested in ELK:<\/strong> Your developers know Elasticsearch and Kibana well.<\/li>\n<\/ul>\n<p>Example: a SaaS company runs separate application, API and database servers and wants to analyze tenant behaviour, API usage per feature and complex filters on JSON fields inside logs. ELK gives them strong analytical tools on top of operational visibility.<\/p>\n<h3><span id=\"When_Loki_Is_a_Better_Fit\">When Loki Is a Better Fit<\/span><\/h3>\n<ul>\n<li><strong>Cost-effective retention matters:<\/strong> You want to keep weeks or months of infrastructure and app logs without a large cluster.<\/li>\n<li><strong>Focus is debugging and correlation:<\/strong> Your primary goal is to quickly jump from a metric spike to the relevant logs.<\/li>\n<li><strong>You already use Grafana\/Prometheus:<\/strong> Adding Loki gives you logs, metrics and (if used) traces in one place.<\/li>\n<\/ul>\n<p>Example: an agency hosts 30 WordPress sites across several VPSs and a couple of dedicated servers. They want centralized error logs, Nginx access logs, and PHP-FPM logs to debug performance issues and plugin conflicts. Loki with Promtail on each server is lightweight and integrates cleanly with their existing Grafana dashboards.<\/p>\n<h3><span id=\"Hybrid_Approaches\">Hybrid Approaches<\/span><\/h3>\n<p>You do not have to choose exclusively. 
Many real-world setups look like this:<\/p>\n<ul>\n<li><strong>Loki<\/strong> for noisy infrastructure logs (web servers, systemd, containers, firewalls).<\/li>\n<li><strong>ELK<\/strong> for high-value structured logs (billing events, audit logs, compliance-related logs).<\/li>\n<\/ul>\n<p>This hybrid model lets you use each tool where it shines while keeping operating complexity under control. From a hosting perspective, this is often the sweet spot for medium to large deployments.<\/p>\n<h2><span id=\"Designing_Centralized_Logging_for_a_1020_Server_Hosting_Environment\">Designing Centralized Logging for a 10\u201320 Server Hosting Environment<\/span><\/h2>\n<p>To make this concrete, let us sketch a central logging design for a realistic environment: 12 VPSs hosting various WordPress and Laravel apps, 2 dedicated database servers, and 1 bastion\/management server \u2013 all running on infrastructure provided by dchost.com.<\/p>\n<h3><span id=\"Step_1_Define_Goals_and_Scope\">Step 1: Define Goals and Scope<\/span><\/h3>\n<p>First, decide what you expect from logs:<\/p>\n<ul>\n<li>See all <strong>5xx errors<\/strong> across all sites in one place.<\/li>\n<li>Correlate <strong>slow pages<\/strong> with PHP errors and database slow queries.<\/li>\n<li>Monitor <strong>security events<\/strong> like repeated failed logins or WAF blocks.<\/li>\n<li>Keep logs for <strong>90 days<\/strong> for troubleshooting and basic compliance.<\/li>\n<\/ul>\n<p>These goals already suggest that a Loki-centric design is attractive for cost-effective retention, possibly with a small Elasticsearch instance for a subset of structured logs if needed.<\/p>\n<h3><span id=\"Step_2_Choose_the_Stack_and_Hardware\">Step 2: Choose the Stack and Hardware<\/span><\/h3>\n<p>For this size, a simple and robust choice is:<\/p>\n<ul>\n<li><strong>1\u20132 logging VPSs<\/strong> dedicated to Loki + Grafana (and optionally Elasticsearch + Kibana if you want ELK for specific 
logs).<\/li>\n<li><strong>Promtail<\/strong> on every server for infrastructure and app logs.<\/li>\n<li>Optionally, <strong>Filebeat<\/strong> on selected servers to send a subset of structured logs to Elasticsearch.<\/li>\n<\/ul>\n<p>Using separate logging VPSs rather than co-locating Loki\/ELK on application servers simplifies scaling and avoids a log ingestion spike affecting your live sites.<\/p>\n<h3><span id=\"Step_3_Standardize_Log_Formats_on_Each_Server\">Step 3: Standardize Log Formats on Each Server<\/span><\/h3>\n<p>Centralized logging works best when logs are somewhat consistent. For web servers, configure JSON or at least structured access logs. This makes it much easier to later analyze request time, upstream time, status codes and cache hits.<\/p>\n<p>Our previous articles on <a href=\"https:\/\/www.dchost.com\/blog\/en\/e-ticaret-sepet-ve-odeme-adimlarini-izlemek-sunucu-loglari-ve-alarm-kurallari\/\">monitoring cart and checkout steps with server logs and alerts<\/a> show how structured logging unlocks powerful e\u2011commerce insights, such as funnel drop-offs and payment gateway issues. The same principle applies here: the better your log format, the more value you can get from centralized logging.<\/p>\n<h3><span id=\"Step_4_Configure_Promtail_on_All_Nodes\">Step 4: Configure Promtail on All Nodes<\/span><\/h3>\n<p>On each VPS and dedicated server:<\/p>\n<ol>\n<li>Install Promtail via your package manager or binary.<\/li>\n<li>Define <strong>scrape_configs<\/strong> for:<\/li>\n<\/ol>\n<ul>\n<li><code>\/var\/log\/nginx\/access.log<\/code> and <code>error.log<\/code> with labels <code>job=\"nginx\"<\/code>, <code>project<\/code>, <code>environment<\/code>.<\/li>\n<li>PHP-FPM logs with <code>job=\"php-fpm\"<\/code>, plus <code>pool<\/code> or <code>site<\/code> labels if you run multiple pools.<\/li>\n<li>Application logs (e.g. 
Laravel <code>storage\/logs\/*.log<\/code>, Node.js app logs).<\/li>\n<li>Systemd journal for sshd, sudo, kernel, etc., with <code>job=\"systemd\"<\/code>.<\/li>\n<\/ul>\n<p>Make sure each Promtail instance knows how to reach Loki over HTTPS, and use TLS certificates (self-signed or CA\u2011issued) plus basic auth or an auth proxy to secure the endpoint.<\/p>\n<h3><span id=\"Step_5_Build_Dashboards_and_Basic_Alerts\">Step 5: Build Dashboards and Basic Alerts<\/span><\/h3>\n<p>In Grafana, connect Loki as a data source and start with a few high-value dashboards:<\/p>\n<ul>\n<li><strong>Error overview:<\/strong> Panel for <code>{job=\"nginx\", status=~\"5..\"}<\/code> grouped by <code>project<\/code> and <code>host<\/code>.<\/li>\n<li><strong>PHP error tracker:<\/strong> Count of lines matching <code>\"PHP Fatal error\"<\/code> per site.<\/li>\n<li><strong>Security signals:<\/strong> Queries on sshd failures or WordPress login failures, plus relevant IPs.<\/li>\n<\/ul>\n<p>You can then define alerts from these queries. Many teams pair this with the metrics setup from our guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-uyari-nasil-kurulur-prometheus-grafana-ve-node-exporter-ile-sessiz-alarmlari-konusturmak\/\">Prometheus + Grafana monitoring and alerting on a VPS<\/a>, so they receive a single alert that includes both CPU graphs and the most recent error logs.<\/p>\n<h3><span id=\"Step_6_Add_ELK_Where_It_Really_Helps\">Step 6: Add ELK Where It Really Helps<\/span><\/h3>\n<p>If you later decide that you need richer analytics on specific log streams, you can add a small ELK sidecar:<\/p>\n<ul>\n<li>Run a compact Elasticsearch + Kibana instance on the logging VPS.<\/li>\n<li>Configure Filebeat on the servers generating those special logs (e.g. 
billing or audit events) to send them to Elasticsearch.<\/li>\n<li>Use index lifecycle policies to keep only the required retention window.<\/li>\n<\/ul>\n<p>This lets you preserve your simple, scalable Loki setup for most logs, while giving analysts a powerful ELK environment for narrow, high\u2011value use cases.<\/p>\n<h2><span id=\"Operational_Best_Practices_Retention_Backups_and_Cost_Control\">Operational Best Practices: Retention, Backups and Cost Control<\/span><\/h2>\n<p>Centralized logging is a long-term investment. To keep it healthy and affordable, pay attention to operational aspects from day one.<\/p>\n<h3><span id=\"Retention_Policies_by_Log_Type\">Retention Policies by Log Type<\/span><\/h3>\n<p>Not all logs need the same retention. A good starting point for hosting environments is:<\/p>\n<ul>\n<li><strong>Infrastructure logs<\/strong> (Nginx, PHP-FPM, systemd): 30\u201390 days, depending on your troubleshooting needs.<\/li>\n<li><strong>Database slow query logs:<\/strong> 30\u201360 days, enough to see performance trends.<\/li>\n<li><strong>Security and audit logs:<\/strong> 6\u201312 months, or as mandated by regulation or internal policy.<\/li>\n<\/ul>\n<p>Implement these policies directly in Loki (via retention settings per tenant) and in Elasticsearch (via ILM). Combine this with your overall <a href=\"https:\/\/www.dchost.com\/blog\/en\/yedekleme-stratejisi-nasil-planlanir-blog-e-ticaret-ve-saas-siteleri-icin-rpo-rto-rehberi\/\">backup and retention strategy for RPO\/RTO<\/a> so logs support your disaster recovery plans, not just day-to-day debugging.<\/p>\n<h3><span id=\"Backups_and_Disaster_Recovery_for_Logging\">Backups and Disaster Recovery for Logging<\/span><\/h3>\n<p>For Loki, if you store chunks in durable object storage and keep configuration under version control, traditional backups may be minimal \u2013 you mostly protect configuration and any small local index components. 
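That version-controlled configuration is typically a single YAML file; as an illustration, the retention-related part of a Loki config might look like this (the 90-day window matches the goal discussed earlier, and the path is a placeholder):

```yaml
# Loki retention sketch: the compactor deletes chunks older than
# the configured retention period (values are illustrative).
limits_config:
  retention_period: 2160h  # 90 days

compactor:
  retention_enabled: true
  delete_request_store: filesystem  # required by recent Loki releases when retention is on
  working_directory: /var/lib/loki/compactor
```
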
For Elasticsearch, regular snapshots to object storage are essential. Treat your logging cluster as production-critical infrastructure: if you lost it today, could you still investigate incidents from last week?<\/p>\n<h3><span id=\"Cost_and_Performance_Tuning\">Cost and Performance Tuning<\/span><\/h3>\n<p>To avoid surprises:<\/p>\n<ul>\n<li>Monitor <strong>ingestion rate<\/strong> and <strong>storage growth<\/strong> from day one.<\/li>\n<li>Normalize log levels in your apps to reduce noisy debug\/info logs in production.<\/li>\n<li>Use sampling for very high-volume, low-value logs (e.g. debug traces), or disable them entirely in production.<\/li>\n<li>On Elasticsearch, keep shard counts reasonable and avoid too many small indices.<\/li>\n<\/ul>\n<p>Choosing appropriate VPS or dedicated server resources for your logging cluster is similar to sizing for databases: CPU, RAM and fast disk (NVMe where possible) matter. Our guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/nvme-vps-hosting-rehberi-hizin-nereden-geldigini-nasil-olculdugunu-ve-gercek-sonuclari-beraber-gorelim\/\">NVMe VPS hosting and IOPS<\/a> can help you estimate the impact of disk latency on indexing performance.<\/p>\n<h2><span id=\"Summary_and_Next_Steps_with_dchostcom\">Summary and Next Steps with dchost.com<\/span><\/h2>\n<p>Centralizing logs for multiple servers is one of those upgrades that permanently changes how you operate your infrastructure. Instead of chasing issues server by server, you gain a single pane of glass for all web, application, database and system logs. 
The ELK stack gives you powerful analytics and dashboards for structured data, while the Loki stack offers cost\u2011effective, label-based log storage that fits hosting workloads perfectly.<\/p>\n<p>In practice, many teams start with a <strong>Loki + Promtail + Grafana<\/strong> setup for all infrastructure and application logs, and optionally add a compact <strong>ELK<\/strong> deployment for specific high-value streams. With the right retention policies, secure access controls and a few carefully designed dashboards, you can reduce troubleshooting time, strengthen security visibility and make better capacity decisions across your dchost.com VPS, dedicated and colocation servers.<\/p>\n<p>If you are planning a new logging stack or want to consolidate an existing one, our team at dchost.com can help you choose suitable VPS or dedicated server configurations, and design an architecture that scales with your projects. Start by mapping the logs you already have, decide which stack (or hybrid) fits your goals, and build from there \u2013 your future self, staring at a clear, searchable log timeline instead of 15 SSH sessions, will thank you.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>When you operate more than a couple of servers, logging stops being a simple text-file problem and becomes an observability problem. 
Apache errors on one VPS, PHP warnings on another, MySQL slow queries on a dedicated database node, firewall events on an edge server\u2026 if each machine keeps its own logs, finding the root cause [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3302,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-3301","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/3301","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=3301"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/3301\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/3302"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=3301"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=3301"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=3301"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}