{"id":1477,"date":"2025-11-07T13:06:42","date_gmt":"2025-11-07T10:06:42","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/mariadb-high-availability-for-woocommerce-the-real%e2%80%91world-r-w-architecture-story-behind-galera-and-primary%e2%80%91replica\/"},"modified":"2025-11-07T13:06:42","modified_gmt":"2025-11-07T10:06:42","slug":"mariadb-high-availability-for-woocommerce-the-real%e2%80%91world-r-w-architecture-story-behind-galera-and-primary%e2%80%91replica","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/mariadb-high-availability-for-woocommerce-the-real%e2%80%91world-r-w-architecture-story-behind-galera-and-primary%e2%80%91replica\/","title":{"rendered":"MariaDB High Availability for WooCommerce: The Real\u2011World R\/W Architecture Story Behind Galera and Primary\u2011Replica"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>So there I was, staring at a graph that looked like a ski slope flipped on its head. A client\u2019s WooCommerce store had just been featured in a popular newsletter, and traffic went through the roof. Pages were still fast (we\u2019d tuned caching), but the checkout queue started to crawl. The culprit wasn\u2019t CPU. It wasn\u2019t PHP. It wasn\u2019t even the CDN. It was the database \u2014 that one unassuming box doing the heavy lifting while everyone else had a good time. If you\u2019ve ever watched orders pile up and felt that knot in your stomach, you know the feeling.<\/p>\n<p>That day nudged me into a deeper conversation with the team: do we double down on a simple primary\u2011replica setup and be honest about replication lag, or go all\u2011in on MariaDB Galera Cluster and ride the multi\u2011primary promise (with all its personality)? There isn\u2019t a single \u201cright\u201d answer for every store. But there is a right answer for your store, if you understand how WooCommerce behaves, where consistency matters, and how read\/write traffic really flows.<\/p>\n<p>In this guide, I\u2019ll walk you through how I think about MariaDB high availability for WooCommerce. We\u2019ll talk about Galera versus primary\u2011replica in plain English, sketch the read\/write architecture that actually works on busy stores, and cover the gritty bits: proxies, failover, conflicts, backups, and the maintenance dance. I\u2019ll share the mistakes I\u2019ve made and the patterns that keep me sleeping at night. 
Grab a coffee — this is one of those topics that pays you back the first time your homepage goes viral.</p>
href=\"#Extra_Reading_and_Tools_I_Keep_Handy\"><span class=\"toc_number toc_depth_1\">15<\/span> Extra Reading and Tools I Keep Handy<\/a><\/li><li><a href=\"#WrapUp_Your_Store_Your_Rules_But_Make_Them_Explicit\"><span class=\"toc_number toc_depth_1\">16<\/span> Wrap\u2011Up: Your Store, Your Rules \u2014 But Make Them Explicit<\/a><\/li><li><a href=\"#References_and_Useful_Links\"><span class=\"toc_number toc_depth_1\">17<\/span> References and Useful Links<\/a><\/li><li><a href=\"#Related_Reading\"><span class=\"toc_number toc_depth_1\">18<\/span> Related Reading<\/a><\/li><\/ul><\/div>\n<h2 id=\"section-1\"><span id=\"The_WooCommerce_Reality_Check_Where_the_Database_Hurts_First\">The WooCommerce Reality Check: Where the Database Hurts First<\/span><\/h2>\n<p>Let\u2019s set the scene. WooCommerce isn\u2019t just a collection of product pages \u2014 it\u2019s a living system with carts, sessions, transients, stock updates, and orders that are precious. The moment you add customer accounts and payment gateways, your database stops being a passive catalog and becomes the source of truth. That means two things: reads matter for scale, and writes matter for money.<\/p>\n<p>When a store is quiet, everything feels great. The catalog pages glide off the cache, the database sees the occasional query, and the checkout flow is leisurely. But when traffic spikes, the character of your workload changes. Suddenly, you have three very different streams hitting your database: the read\u2011heavy catalog and search pages, the slightly chatty cart and account views, and the write\u2011critical checkout and order management. Not all of these deserve the same path through your cluster.<\/p>\n<p>Here\u2019s the thing that catches people off guard: speeding up reads is easy to get addicted to. You throw replicas at the problem, you route SELECTs around, and life is good\u2026 until someone\u2019s cart shows the wrong stock quantity or a recently placed order doesn\u2019t show up for a minute on their account page. That\u2019s the tension we\u2019re going to solve \u2014 keeping reads fast without letting consistency slip where it counts.<\/p>\n<h2 id=\"section-2\"><span id=\"Two_Roads_to_High_Availability_Galera_and_PrimaryReplica_in_Plain_English\">Two Roads to High Availability: Galera and Primary\u2011Replica, in Plain English<\/span><\/h2>\n<p>When I\u2019m chatting with store owners, I describe these two patterns not as opponents, but as two personalities. One is the steady partner who prefers one person making the final call and everyone else following along. The other is the collaborative type who wants everyone participating equally \u2014 with some ground rules to avoid chaos. Both can work beautifully; they just ask you to play by different rules.<\/p>\n<p>Primary\u2011Replica is the steady partner. You write to one primary. It replicates changes to replicas. Reads can come from replicas, which lightens the load on the primary and keeps catalog pages snappy. The catch is replication lag. It\u2019s usually small, sometimes invisible, and occasionally dramatic under sustained write pressure or heavy schema changes. If you accept that, and you keep important reads on the primary or \u201csticky,\u201d this setup can be rock\u2011solid and simple to reason about.<\/p>\n<p>MariaDB Galera Cluster is the collaborative type. Every node can accept writes (multi\u2011primary), and the cluster keeps them in sync using group communication and certification. 
If there’s a conflict, the cluster rolls back one of the transactions. Reads are local, and you don’t worry about replicas falling behind — but you do worry about write conflicts, flow control, and quorum. Many stores end up running Galera in practice as a single-writer for certain flows, even though multi-primary is there when you need it. It’s a different set of trade-offs, but very attractive when you want tighter consistency across nodes.</p>
<p>Which one fits? In my experience, catalog-heavy sites that can isolate critical writes to the primary often feel happiest on primary-replica. Stores with strict consistency needs across multiple availability zones, or teams that want maintenance flexibility without a single primary becoming a headache, lean toward Galera. The trick is designing your read/write paths so that WooCommerce’s quirks are respected either way.</p>
<h2>Designing Read/Write Flows That Don’t Surprise Your Customers</h2>
<p>Let me share the pattern I’ve come back to on WooCommerce over and over again. Imagine your traffic as three lanes merging onto a highway: catalog browsing, cart/account interactions, and checkout/admin. Each lane needs a slightly different route through your database topology to avoid pileups and odd behavior.</p>
<h3>Catalog and search</h3>
<p>This is your bulk read traffic, and it scales beautifully with replicas or multiple Galera nodes. When I architect for speed, I push these reads away from the writer. On primary-replica, they go to replicas, ideally via a proxy that can close the spigot if lag crosses a threshold. On Galera, they land on any node — but if I expect frequent updates (like price changes or stock volatility), I still keep an eye on read consistency settings so nothing “looks” stale.</p>
<h3>Cart and account views</h3>
<p>Now we’re in the “read your writes” zone. If a user adds an item to cart or updates their address, they expect to see it immediately. On primary-replica, that means routing these reads to the primary or making sure the user sticks to a consistent backend where their writes live. On Galera, you can lean on causal reads. A practical trick is to enable causal consistency for the session, which I’ll talk about in a second, or just keep these users on the same node for the life of the session. Either way, consistency beats maximal fan-out here.</p>
<h3>Checkout and admin</h3>
<p>This is the money lane. I treat it like a VIP convoy with a police escort. Every write during checkout goes to a designated writer. On primary-replica, that’s naturally your primary. On Galera, I still route checkout and wp-admin to a single writer node by default to minimize conflicts, then let multi-primary be my “escape hatch” during maintenance or node events. It’s not that Galera can’t handle multi-writer; it’s that WooCommerce updates the same set of rows frequently enough (orders, stock, transients) that I prefer not to invite unnecessary certification conflicts.</p>
<h3>Getting sticky without getting stuck</h3>
<p>How you implement this in the real world comes down to your proxy. If you use a SQL-aware proxy, you can route writes and reads based on rules, and pin a connection to a writer during a transaction. If you’re on a TCP proxy, you enforce stickiness at the HTTP layer and keep certain routes pinned to a node pool. Both approaches can work. I’ve had great results using a SQL-aware proxy for R/W splitting and then falling back to a simple VIP for failover. The key is designing for “read your writes” in user sessions and for deterministic routing on sensitive paths like checkout.</p>
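<p>To make the three-lane idea concrete, here’s a minimal sketch of the routing decision in Python. The pool names, route prefixes, and the “wrote recently” flag are placeholders I made up for illustration (your proxy or application layer will have its own equivalents), but the shape of the decision is the point: writes and money lanes pin to the writer, everything else fans out.</p>
<pre><code># A minimal sketch of the three-lane routing decision.
# Pool names, prefixes and flags are illustrative placeholders,
# not WooCommerce or proxy APIs.

WRITER_POOL = "db-writer"    # the primary, or your designated Galera writer
READER_POOL = "db-readers"   # replicas, or the other Galera nodes

# The money lane and admin always see the writer.
PINNED_PREFIXES = ("/checkout", "/cart", "/my-account", "/wp-admin")

def choose_pool(path: str, is_write: bool, wrote_recently: bool) -> str:
    """Pick a connection pool for one query, given its request context."""
    if is_write:
        return WRITER_POOL   # every write goes to a single writer
    if path.startswith(PINNED_PREFIXES):
        return WRITER_POOL   # read-your-writes lanes stay pinned
    if wrote_recently:
        return WRITER_POOL   # short stickiness window after a session writes
    return READER_POOL       # catalog and search fan out freely

# Example: a product page read scales out, a checkout read does not.
assert choose_pool("/product/blue-shirt", False, False) == READER_POOL
assert choose_pool("/checkout", False, False) == WRITER_POOL
</code></pre>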
<h2>Galera: What It Feels Like Day-to-Day (And How to Avoid Surprises)</h2>
<p>The first time I turned on a MariaDB Galera Cluster, I remember feeling slightly invincible. Writes could go anywhere! Scaling reads was trivial! Failover was smooth! And then, on a busy sale, I watched a spike of certification rollbacks because two nodes tried to update similar rows at the same time. Nothing broke, orders still completed, but latency during checkout ticked up. That’s when I learned to treat Galera as a strong ally — not a magic trick.</p>
<p>Here’s the rhythm that’s worked for WooCommerce stores on Galera. I keep a minimum of three data nodes for quorum. I use a single writer policy for checkout and admin to reduce conflicts. I monitor flow control like a hawk; if a node is pausing the cluster because it can’t keep up with applying writes, I want to know before customers do. And I keep a plan for state transfers: how a node rejoins matters, because full state transfers can be heavy if you aren’t careful.</p>
<p>Galera gives you tools for consistency on reads. You can enable causal reads so that when a client performs a write, subsequent reads will wait until that write is visible locally. It’s a small latency cost that pays back in sanity for cart and account views. For extra safety, I treat catalog reads as free to fan out, but session-linked reads as causal or sticky to one node.</p>
<p>If you’re new to Galera, start with conservative settings and observability. Keep the workload honest by watching cluster health and understanding how it reacts when you push it. And do yourself a favor: choose a state transfer method that doesn’t lock the world. Using a physical backup tool for SST keeps the site responsive while a new node catches up.</p>
<p>For background reading on Galera fundamentals, the official documentation is a solid primer: <a href="https://mariadb.com/kb/en/what-is-mariadb-galera-cluster/" rel="nofollow noopener" target="_blank">what MariaDB Galera Cluster is and how it synchronizes writes</a>.</p>
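<p>If you want to see what that looks like on the wire, here’s a small Python sketch (using the PyMySQL client; host and credentials are placeholders) that turns on causal reads for a session and peeks at the two Galera status values I watch most. The <code>wsrep_*</code> names are standard Galera variables, but double-check them against your MariaDB version.</p>
<pre><code>import pymysql

# Sketch: per-session causal reads plus a quick Galera health peek.
# Host and credentials are placeholders.
conn = pymysql.connect(host="10.0.0.11", user="shop", password="secret",
                       database="wordpress", autocommit=True)

with conn.cursor() as cur:
    # Causal reads: SELECTs now wait until this node has applied the
    # write sets it knew about when the read arrived.
    cur.execute("SET SESSION wsrep_sync_wait = 1")

    # Is this node a fully synced member of the cluster?
    cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'")
    state = cur.fetchone()[1]            # expect 'Synced'

    # Fraction of time replication was paused by flow control.
    cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused'")
    paused = float(cur.fetchone()[1])    # alert before this creeps upward

conn.close()
print(f"state={state}, flow_control_paused={paused:.3f}")
</code></pre>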
<h2>Primary-Replica: Calm, Predictable, and Still Needs a Game Plan</h2>
<p>I’ve also had long, drama-free runs with primary-replica. The joy of this setup is how predictable it is. There’s one place where writes happen. Replicas are there to offload reads. If the primary fails, you promote a replica. It’s simple enough to explain at 2 a.m. when a pager goes off. The friction point is replication lag. Short bursts are fine; prolonged bursts during heavy writes or maintenance windows can be stressful if you’ve routed too many sensitive reads to replicas.</p>
<p>So the art becomes drawing a clean line: replicas handle catalog and other non-critical reads, while the primary handles anything a user might immediately read after writing. This is also where your proxy does some heavy lifting. A SQL-aware proxy can detect writes and pin subsequent reads to the primary for a short time. Or you mark certain WordPress routes to bypass replicas entirely. Either way, design for lag to happen. When it doesn’t, you’re delighted. When it does, your users never notice.</p>
<p>If you want to push the envelope a little, you can explore semi-synchronous replication to reduce the risk of losing a committed transaction during a failover. It’s not a silver bullet — and it adds latency — but it can be worth it for critical flows if your infrastructure can afford the extra roundtrip. Just remember that the human-readable rule is still the same: keep money-sensitive reads on the writer, and let catalog roam free.</p>
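<p>Designing for lag starts with measuring it. Below is a sketch of the kind of check I’d wire into cron or a proxy health hook, again with PyMySQL, placeholder hosts, and a threshold you’d tune to your own tolerance. The <code>SHOW SLAVE STATUS</code> field names here are the classic MariaDB ones; newer releases also accept the REPLICA spelling.</p>
<pre><code>import pymysql

# Sketch: drain a replica from the read pool when it falls behind.
# Hosts, credentials and the threshold are placeholders.
LAG_LIMIT = 5   # seconds

def replica_lag(host: str):
    """Return replication lag in seconds, or None if unknown."""
    conn = pymysql.connect(host=host, user="monitor", password="secret")
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("SHOW SLAVE STATUS")
            row = cur.fetchone()
            if not row:
                return None                      # not configured as a replica
            return row["Seconds_Behind_Master"]  # None while the SQL thread is stopped
    finally:
        conn.close()

for host in ("10.0.0.12", "10.0.0.13"):
    lag = replica_lag(host)
    healthy = lag is not None and lag &lt;= LAG_LIMIT
    print(f"{host}: lag={lag} healthy={healthy}")
</code></pre>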
<h2>The Proxy Layer: Where Your Architecture Becomes Real</h2>
<p>Proxies are the traffic cops of your database world. I’ve had great success using a SQL-aware proxy to split reads and writes cleanly and to keep sessions sticky when needed. It’s also your best friend for graceful failover — failing over a VIP is easy, but failing over a transaction in flight is not. A good proxy makes the difference between a blip and a bad morning.</p>
<p>When I need granular read/write rules, I reach for an engine designed for MySQL-compatible backends. It understands autocommit semantics and transactions, and it can hold a connection to the writer through a series of queries. If you’re exploring options, take a look at <a href="https://www.proxysql.com/" rel="nofollow noopener" target="_blank">ProxySQL</a>. It’s flexible, scriptable, and battle-tested for read/write splitting in front of MariaDB. For TCP-level failover and a floating virtual IP, I’ve had very reliable results with <a href="https://www.keepalived.org/" rel="nofollow noopener" target="_blank">Keepalived’s VRRP</a>. SQL-aware for routing decisions, TCP-level for service continuity — they complement each other nicely.</p>
<p>If you’re in the MariaDB ecosystem and want a vendor-blessed option with topology awareness, you can also look at MaxScale. The point isn’t which proxy brand you choose; it’s that you choose one and design your rules with WooCommerce’s patterns in mind. Checkout and admin stick to a writer. Catalog spreads its wings. Cart and account are sticky or causal. The proxy turns your philosophy into behavior.</p>
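<p>To ground this, here’s a sketch of seeding a classic read/write split into ProxySQL through its SQL admin interface, which speaks the MySQL protocol (port 6032 by default). The hostgroup numbers and admin credentials are assumptions for the example.</p>
<pre><code>import pymysql

# Sketch: push two classic R/W-split rules into ProxySQL's admin
# interface. Hostgroups 10/20 and the admin credentials are assumptions.
admin = pymysql.connect(host="127.0.0.1", port=6032,
                        user="admin", password="admin", autocommit=True)

WRITER_HG, READER_HG = 10, 20

rules = [
    (1, r"^SELECT .* FOR UPDATE", WRITER_HG),  # locking reads -> writer
    (2, r"^SELECT",               READER_HG),  # plain reads   -> read pool
]

with admin.cursor() as cur:
    for rule_id, pattern, hostgroup in rules:
        cur.execute(
            "INSERT INTO mysql_query_rules "
            "(rule_id, active, match_digest, destination_hostgroup, apply) "
            "VALUES (%s, 1, %s, %s, 1)",
            (rule_id, pattern, hostgroup),
        )
    cur.execute("LOAD MYSQL QUERY RULES TO RUNTIME")  # apply without restart
    cur.execute("SAVE MYSQL QUERY RULES TO DISK")     # survive a restart
</code></pre>
<p>Anything that matches neither rule falls through to the user’s default hostgroup, which is how writes end up on the writer; pair this with transaction persistence on the user so an in-flight transaction never hops nodes.</p>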
<h2>Schema, Transactions, and the Small Things That Keep You Sane</h2>
<p>I once spent a weekend chasing a performance gremlin that turned out to be a missing composite index on a WooCommerce meta table. Not my proudest moment, but very educational. Before you scale horizontally, squeeze the obvious wins out of your schema and queries. It’s astonishing how often a tidy index and a tuned buffer pool buy you headroom that makes HA a calmer topic. If you’ve never gone deep on this, my long checklist on tuning is a friendly place to start: <a href="https://www.dchost.com/blog/en/woocommerce-icin-mysql-innodb-tuning-kontrol-listesi-buffer-pool-indeksleme-ve-slow-query-analizi-nasil-akillica-yapilir/">the WooCommerce MySQL/InnoDB tuning checklist I wish I had years ago</a>.</p>
<p>On Galera, remember that it uses row-based replication and certifies transactions across the cluster. Contention on hot rows — like stock counters — is where you’ll feel it first. Minimizing the surface area of writes during checkout helps. Keep transactions short. Avoid touching the same row multiple times during a single request. Use idempotent patterns where you can so retries aren’t catastrophic if a certification conflict happens.</p>
<p>On primary-replica, you’ll meet replication lag whenever your write workload spikes. Batching non-critical writes (like background metadata updates) can smooth the graph. And watch out for big schema changes — they’re not just heavy; they can be asymmetrically heavy on replicas and skew lag for longer than you expect. Plan those changes with patience and proper maintenance windows.</p>
<p>Globally, I keep the basics in order: durable transaction settings for the writer, sane InnoDB flush behavior, and a realistic connection pool. If PHP processes spike and your database accepts them all with a smile, that smile will fade as context switching and disk I/O multiply. Set a ceiling you can survive.</p>
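<p>Here’s what “short and idempotent” can look like for the classic hot row, a stock counter. This is a sketch, not WooCommerce’s actual code path; the table and column names assume WooCommerce’s product meta lookup table under the standard wp_ prefix, and the single-statement guard means a retry after a certification conflict can’t oversell.</p>
<pre><code>import pymysql

# Sketch: a short, idempotent-friendly stock reservation.
# Table/column names assume WooCommerce's lookup table and a wp_ prefix;
# this is illustrative, not WooCommerce's real checkout code.

def reserve_stock(conn, product_id: int, qty: int, retries: int = 1) -> bool:
    """Atomically decrement stock; returns False rather than overselling."""
    try:
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE wp_wc_product_meta_lookup "
                "SET stock_quantity = stock_quantity - %s "
                "WHERE product_id = %s AND stock_quantity >= %s",
                (qty, product_id, qty),
            )
            changed = cur.rowcount        # 0 rows means not enough stock
        conn.commit()                     # one statement, one row, short lock
        return changed == 1
    except pymysql.err.OperationalError:
        conn.rollback()                   # e.g. a Galera certification conflict
        if retries > 0:                   # the WHERE guard makes a retry safe
            return reserve_stock(conn, product_id, qty, retries - 1)
        raise
</code></pre>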
<h2>WooCommerce-Friendly Caching and R/W Routing, Hand in Hand</h2>
<p>Whenever someone asks me to make WooCommerce “as fast as a static site,” I smile and reach for two levers: caching and sensible database paths. You’ll get far by caching the catalog safely and reserving the database for what truly requires it. The trick is not to cache the wrong things — like carts and personalized account data — and then blame the database for inconsistencies that were born in the cache.</p>
<p>If you haven’t seen it, I wrote a field guide on this exact dance: avoiding broken carts while still getting the huge wins from full-page caching. It pairs beautifully with the read/write architectures we’re talking about here, because it reduces how often the database needs to work hard for anonymous traffic while keeping dynamic flows fresh. You can dig into it here when you’re ready: it’s my playbook on <em>full-page caching for WordPress that won’t break WooCommerce</em>.</p>
<p>And don’t sleep on the object cache. WooCommerce loves a fast metadata lookup. A persistent object cache like Redis reduces chatter to the database and gives your primary more breathing room, especially during checkout spikes. If you’re still choosing between cache backends or wrestling with TTLs, I shared my takeaways from years of tuning expiration and eviction in this companion piece on Redis and Memcached. It’s funny how a few changes there can take real pressure off your HA design.</p>
<h2>Backups, State Transfers, and the Art of Not Freezing the Store</h2>
<p>I’ll never forget the first time a node tried to join a Galera cluster during peak traffic using a full, locking state transfer. We didn’t take the site down, but we probably aged a few years that night. The lesson: plan your state transfer strategy like you plan your backups — carefully, and with empathy for the production workload.</p>
<p>On Galera, prefer non-blocking state transfer methods that are designed for InnoDB. They let a node catch up without holding the rest of the cluster hostage. If you can pre-seed a node from a recent physical backup, even better. And keep an eye on whether a node can perform an incremental state transfer (IST) instead of a full snapshot; that one detail turns a tense hour into a relaxed few minutes.</p>
<p>For primary-replica, backups are more straightforward, but you still want consistency and speed. Test restoration as seriously as you test backup creation. I like to keep a habit of periodic, automated restores into a staging environment. It’s the only way to be certain your backups are not just pretty files. If you want a practical, vendor-neutral path to offsite safety, I wrote a hands-on guide for pushing backups to S3-compatible storage with encryption and retention that doesn’t become a second job.</p>
<p>One more note: backups and HA are siblings. An HA setup reduces downtime, but it doesn’t protect you from bad data being replicated beautifully everywhere. A recent, verified backup is your life raft when a rogue plugin or a fat-fingered SQL statement decides to be the main character.</p>
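<p>For the physical backup piece, here’s a small sketch of the routine I’d automate. mariabackup ships with MariaDB and copies InnoDB data without blocking writes, which also makes its output handy for pre-seeding a joining node; the paths and credentials are placeholders.</p>
<pre><code>import subprocess
from pathlib import Path

# Sketch: a non-blocking physical backup with mariabackup.
# The target directory and credentials are placeholders.
TARGET = Path("/var/backups/mariadb/latest")

def take_backup() -> None:
    TARGET.mkdir(parents=True, exist_ok=True)
    # Copy the data files while the server keeps serving writes.
    subprocess.run(
        ["mariabackup", "--backup", f"--target-dir={TARGET}",
         "--user=backup", "--password=secret"],
        check=True,
    )
    # Apply the redo log so the copy is consistent and restorable.
    subprocess.run(
        ["mariabackup", "--prepare", f"--target-dir={TARGET}"],
        check=True,
    )

take_backup()   # then ship it offsite and, crucially, test-restore it
</code></pre>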
<h2>Failover and Maintenance Without Drama</h2>
<p>Let’s talk about changing the tires while the car is moving. On primary-replica, a clean failover plan usually looks like this: your proxy detects the primary is unhealthy, promotes a replica, and updates routing. You prevent writes to the old primary so split brain doesn’t become a story you tell at conferences. The hard part isn’t the mechanics; it’s the practice. I beg teams to run game-day drills in staging, because muscle memory beats documentation when you’re under pressure.</p>
<p>On Galera, failover is more about keeping quorum and write availability. With three or more nodes, you can lose one and keep going. I like to pair the cluster with a simple VIP managed by something like Keepalived. If the current writer wobbles, the VIP moves, and checkout keeps flowing to a healthy node. You’ll still have to think through session stickiness at the application layer, but the result is a graceful wobble instead of a tumble.</p>
<p>Maintenance windows are where Galera sometimes shines. You can rotate nodes for upgrades or configuration changes without taking the store offline, as long as you respect the cluster’s needs. Keep an eye on flow control during heavy changes. And don’t forget that primary-replica can do rolling maintenance too with a replica promotion plan. Neither approach is allergic to change; they just ask for a little choreography.</p>
<h2>Observability: The Dashboard That Saves You at 3 a.m.</h2>
<p>We only get to be as calm as our dashboards allow. HA without observability is just a hope and a prayer. I want to see query latency, replication lag, connection counts, buffer pool health, and — in Galera — flow control and certification failures. I want per-node visibility and an at-a-glance health score for the cluster. If a replica is drifting or a node is backpressuring the cluster, I want a notification before customers feel it.</p>
<p>If you’re just getting started with metrics and alerting on a <a href="https://www.dchost.com/vps">VPS</a>, I’ve shared my low-drama way to stand up Prometheus and Grafana alongside uptime checks. It’s amazing how quickly a couple of smart graphs turn panic into “oh, I know what that is.” The bonus is that once you have that in place, capacity planning stops being guesswork and starts being arithmetic.</p>
<p>While you’re building this out, give your proxy its own health checks. If you’re using SQL-aware routing, log which queries are being pinned to the writer and why. Spotting an unexpected read going to the writer is a gift; it’s a breadcrumb to an optimization you didn’t know you had.</p>
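<p>If you want a starting point before a full Prometheus setup, here’s a sketch of the per-node snapshot I like to glance at: plain PyMySQL pulls of standard status counters. Hosts and credentials are placeholders, and the <code>wsrep_*</code> entries simply come back empty on non-Galera nodes.</p>
<pre><code>import pymysql

# Sketch: one glanceable snapshot per node. Hosts are placeholders;
# the status variable names are standard MariaDB/Galera ones.
CHECKS = {
    "threads_connected":  "Threads_connected",
    "buffer_pool_free":   "Innodb_buffer_pool_pages_free",
    "wsrep_cluster_size": "wsrep_cluster_size",
    "wsrep_flow_control": "wsrep_flow_control_paused",
}

def node_snapshot(host: str) -> dict:
    conn = pymysql.connect(host=host, user="monitor", password="secret")
    snapshot = {}
    try:
        with conn.cursor() as cur:
            for name, variable in CHECKS.items():
                cur.execute("SHOW GLOBAL STATUS LIKE %s", (variable,))
                row = cur.fetchone()
                snapshot[name] = row[1] if row else None  # None on non-Galera nodes
    finally:
        conn.close()
    return snapshot

print(node_snapshot("10.0.0.11"))
</code></pre>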
<h2>Practical Build: A Minimal, Realistic R/W Architecture</h2>
<p>Let me sketch a build that I’ve rolled out a dozen times in slightly different flavors. Start with three database nodes. If you choose primary-replica: one primary, two replicas. If you choose Galera: three data nodes, treating one as the default writer for checkout and admin. In front of them, place a SQL-aware proxy that understands transactions. Wrap it all with a simple TCP failover using a VIP so application configuration stays boring.</p>
<p>Now, teach the proxy your rules. Route writes and anything under checkout or wp-admin to the writer. Let catalog and search fan out to the read pool. For logged-in sessions doing account or cart actions, either pin them to the writer for a short time or use causal reads on Galera to guarantee they see their own changes. Observe, adjust, repeat. The beauty of this approach is that it scales gracefully: add read capacity when browsing increases, and strengthen the writer tier when order volume grows.</p>
<p>Over time, complement this with a persistent object cache so reads become lighter by default, and continue to invest in your schema and query tuning. It’s unglamorous compared to spinning up a cluster, but it’s the quiet work that keeps you from needing heroics later.</p>
<h2>A Word on Plugins, Migrations, and the “Who Touched the Database?” Moment</h2>
<p>Nothing throws a wet towel on a smooth HA setup faster than a plugin that decides to do heavy schema work at noon on a Monday. I’ve been bitten by background migrations that didn’t look dangerous until they were. The defense is simple: audit new plugins in staging, watch their database footprint, and schedule heavy jobs when your users are asleep. WooCommerce is friendly, but it’s not immune to surprise background tasks.</p>
<p>If you must run a data migration, think about how it interacts with your topology. On primary-replica, expect lag and route sensitive reads accordingly. On Galera, expect flow control to kick in if a node can’t apply changes quickly — and maybe give the cluster a lighter meal if you’re doing something chunky.</p>
<h2>When to Choose Which: A Gut-Check</h2>
<p>Here’s how I frame the choice when a client asks me straight up, “Which one should we use?” If your store’s main pain is read scale, and you can keep sensitive reads on the primary without contorting your app, primary-replica often feels like coming home. It’s easy to explain, reliable, and plenty fast with a smart proxy. If you’re in a situation where replicas falling even a little behind causes unacceptable confusion, or you want to distribute writes across zones while maintaining tighter consistency, Galera takes the lead — with the caveat that you’ll design carefully around write conflicts in WooCommerce.</p>
<p>And don’t forget the human factor. Your team’s comfort matters. If your crew knows how to run and observe Galera, you’ll be happier there. If they’re strong with classic replication and promotion, lean into that. The best architecture is the one your team can operate calmly on a bad day.</p>
<h2>Extra Reading and Tools I Keep Handy</h2>
<p>If you’re curious to explore beyond this walkthrough, I’d keep a few bookmarks nearby. The Galera overview I mentioned earlier is a great starting point for understanding write-set replication. For read/write splitting, I like following what the ProxySQL community is doing; it’s practical and oriented around real traffic. And for VIP failover and health checks at the network layer, Keepalived’s VRRP remains one of those tools that does one job and does it well.</p>
<p>For the WordPress/WooCommerce side, if you want to make your catalog fly without checkout weirdness, you might enjoy the deep dive I wrote on tuning full-page caching so it doesn’t step on carts. It has saved me from a thousand tiny incidents. And if you’ve ever wondered why your object cache doesn’t feel as helpful as it should, the Redis vs Memcached post will give you concrete knobs to turn.</p>
<h2>Wrap-Up: Your Store, Your Rules — But Make Them Explicit</h2>
<p>Let’s bring it home. Whether you choose MariaDB Galera Cluster or a primary-replica setup, WooCommerce will be happiest when you make your read/write rules explicit. Treat checkout and wp-admin like VIPs and route them to a dependable writer. Let the catalog spread out and run free. Give cart and account views the gift of “read your writes,” with stickiness or causal reads. Back it all with a proxy that understands your intent, and a dashboard that tells you when reality drifts from the plan.</p>
<p>If you’re leaning toward Galera, design around consistency and conflicts, and keep your state transfers gentle. If primary-replica speaks to you, be honest about replication lag, and protect user-visible reads. In both worlds, tune your schema, keep your object cache sharp, and resist the urge to skip game-day drills. The first time traffic explodes and everything stays calm, you’ll know it was worth it.</p>
<p>Hope this was helpful. If you want a second pair of eyes on your setup or you’re staring at a graph that’s starting to frown, you’re not alone — and you’ve got options. See you in the next post.</p>
<h2>References and Useful Links</h2>
<p>• Galera fundamentals: <a href="https://mariadb.com/kb/en/what-is-mariadb-galera-cluster/" rel="nofollow noopener" target="_blank">what MariaDB Galera Cluster is</a><br />• SQL-aware routing: <a href="https://www.proxysql.com/" rel="nofollow noopener" target="_blank">ProxySQL</a><br />• VIP failover: <a href="https://www.keepalived.org/" rel="nofollow noopener" target="_blank">Keepalived VRRP</a></p>
<h2>Related Reading</h2>
<p>• <a href="https://www.dchost.com/blog/en/woocommerce-icin-mysql-innodb-tuning-kontrol-listesi-buffer-pool-indeksleme-ve-slow-query-analizi-nasil-akillica-yapilir/">The WooCommerce MySQL/InnoDB Tuning Checklist I Wish I Had Years Ago</a></p>