{"id":1355,"date":"2025-11-05T12:38:07","date_gmt":"2025-11-05T09:38:07","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/redis-vs-memcached-for-wordpress-woocommerce-the-ttl-and-eviction-tuning-playbook-i-wish-i-had\/"},"modified":"2025-11-05T12:38:07","modified_gmt":"2025-11-05T09:38:07","slug":"redis-vs-memcached-for-wordpress-woocommerce-the-ttl-and-eviction-tuning-playbook-i-wish-i-had","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/redis-vs-memcached-for-wordpress-woocommerce-the-ttl-and-eviction-tuning-playbook-i-wish-i-had\/","title":{"rendered":"Redis vs Memcached for WordPress\/WooCommerce: The TTL and Eviction Tuning Playbook I Wish I Had"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>So there I was, staring at a WooCommerce store that felt like it was wading through syrup, and it hit me: this wasn\u2019t a slow\u2011PHP problem or a broken CDN rule. It was that quiet little layer we forget until it screams\u2014object caching. Ever had that moment when a cart page drags after a product import, or when logged\u2011in customers feel like they\u2019re on dial\u2011up while guests zip around? That\u2019s often your object cache either doing you favors or tripping over its own shoelaces. In this piece, I want to walk you through the way I think about Redis vs Memcached for WordPress and WooCommerce, how I set TTLs without shooting myself in the foot, and the eviction policies that keep your store snappy when memory pressure hits. No tables, no stiff comparisons\u2014just the real story of what tends to work and how to keep it working when traffic spikes at the worst possible time.<\/p>\n<p>Here\u2019s the thing: WordPress is chatty with the database, and WooCommerce adds even more chatter. A persistent object cache is like keeping your best friend on speed dial instead of calling directory assistance every time. 
Whether you pick Redis or Memcached, the mindset is the same\u2014cache what\u2019s useful, expire what\u2019s dangerous when stale, and choose an eviction approach that won\u2019t toss your most precious keys out the window. I\u2019ll share the trade\u2011offs I see, the defaults I tweak, and the settings that have saved me during midnight sales. Grab a coffee; we\u2019re going in.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Contents<\/p><ul class=\"toc_list\"><li><a href=\"#What_a_Persistent_Object_Cache_Really_Is_And_Why_It_Changes_Everything\"><span class=\"toc_number toc_depth_1\">1<\/span> What a Persistent Object Cache Really Is (And Why It Changes Everything)<\/a><\/li><li><a href=\"#Redis_and_Memcached_Two_Personalities_One_Job\"><span class=\"toc_number toc_depth_1\">2<\/span> Redis and Memcached: Two Personalities, One Job<\/a><\/li><li><a href=\"#TTL_Strategy_That_Doesnt_Bite_You_Later\"><span class=\"toc_number toc_depth_1\">3<\/span> TTL Strategy That Doesn\u2019t Bite You Later<\/a><ul><li><a href=\"#Start_with_how_dangerous_is_stale\"><span class=\"toc_number toc_depth_2\">3.1<\/span> Start with \u201chow dangerous is stale?\u201d<\/a><\/li><li><a href=\"#Transients_and_the_object_cache_handshake\"><span class=\"toc_number toc_depth_2\">3.2<\/span> Transients and the object cache handshake<\/a><\/li><li><a href=\"#Group_behaviors_you_should_understand\"><span class=\"toc_number toc_depth_2\">3.3<\/span> Group behaviors you should understand<\/a><\/li><li><a href=\"#Keep_invalidation_predictable\"><span class=\"toc_number toc_depth_2\">3.4<\/span> Keep invalidation predictable<\/a><\/li><li><a href=\"#My_default_TTL_mindset\"><span class=\"toc_number toc_depth_2\">3.5<\/span> My default TTL mindset<\/a><\/li><\/ul><\/li><li><a href=\"#Eviction_Tuning_How_to_Keep_Your_Hottest_Keys_from_Getting_Tossed\"><span class=\"toc_number toc_depth_1\">4<\/span> Eviction Tuning: How to Keep Your 
Hottest Keys from Getting Tossed<\/a><ul><li><a href=\"#Why_eviction_matters_more_during_the_storm\"><span class=\"toc_number toc_depth_2\">4.1<\/span> Why eviction matters more during the storm<\/a><\/li><li><a href=\"#Redis_policies_I_actually_use\"><span class=\"toc_number toc_depth_2\">4.2<\/span> Redis policies I actually use<\/a><\/li><li><a href=\"#Memcacheds_quiet_superpower\"><span class=\"toc_number toc_depth_2\">4.3<\/span> Memcached\u2019s quiet superpower<\/a><\/li><li><a href=\"#Dont_forget_the_warmup_plan\"><span class=\"toc_number toc_depth_2\">4.4<\/span> Don\u2019t forget the warm\u2011up plan<\/a><\/li><\/ul><\/li><li><a href=\"#Persistence_Restarts_and_High_Availability_Without_Overkill\"><span class=\"toc_number toc_depth_1\">5<\/span> Persistence, Restarts, and High Availability Without Overkill<\/a><ul><li><a href=\"#That_persistent_word_again\"><span class=\"toc_number toc_depth_2\">5.1<\/span> That \u201cpersistent\u201d word again<\/a><\/li><li><a href=\"#What_high_availability_looks_like_in_the_real_world\"><span class=\"toc_number toc_depth_2\">5.2<\/span> What high availability looks like in the real world<\/a><\/li><li><a href=\"#Dont_mix_everything_in_one_hotspot\"><span class=\"toc_number toc_depth_2\">5.3<\/span> Don\u2019t mix everything in one hotspot<\/a><\/li><\/ul><\/li><li><a href=\"#A_Practical_Setup_You_Can_Actually_Run\"><span class=\"toc_number toc_depth_1\">6<\/span> A Practical Setup You Can Actually Run<\/a><ul><li><a href=\"#Redis_the_settings_I_reach_for\"><span class=\"toc_number toc_depth_2\">6.1<\/span> Redis: the settings I reach for<\/a><\/li><li><a href=\"#Memcached_the_flags_that_avoid_landmines\"><span class=\"toc_number toc_depth_2\">6.2<\/span> Memcached: the flags that avoid landmines<\/a><\/li><li><a href=\"#WordPress_glue_what_to_actually_change\"><span class=\"toc_number toc_depth_2\">6.3<\/span> WordPress glue: what to actually change<\/a><\/li><li><a 
href=\"#Observability_without_drowning_in_metrics\"><span class=\"toc_number toc_depth_2\">6.4<\/span> Observability without drowning in metrics<\/a><\/li><li><a href=\"#Deployment_tricks_that_prevent_drama\"><span class=\"toc_number toc_depth_2\">6.5<\/span> Deployment tricks that prevent drama<\/a><\/li><\/ul><\/li><li><a href=\"#When_to_Choose_Redis_When_to_Choose_Memcached\"><span class=\"toc_number toc_depth_1\">7<\/span> When to Choose Redis, When to Choose Memcached<\/a><\/li><li><a href=\"#Common_Gotchas_And_How_I_Learned_to_Avoid_Them\"><span class=\"toc_number toc_depth_1\">8<\/span> Common Gotchas (And How I Learned to Avoid Them)<\/a><ul><li><a href=\"#Toolong_TTLs_on_userspecific_data\"><span class=\"toc_number toc_depth_2\">8.1<\/span> Too\u2011long TTLs on user\u2011specific data<\/a><\/li><li><a href=\"#Volatileonly_eviction_with_nonTTL_keys\"><span class=\"toc_number toc_depth_2\">8.2<\/span> Volatile\u2011only eviction with non\u2011TTL keys<\/a><\/li><li><a href=\"#Memcached_item_size_too_small\"><span class=\"toc_number toc_depth_2\">8.3<\/span> Memcached item size too small<\/a><\/li><li><a href=\"#Mixing_workloads_in_one_Redis\"><span class=\"toc_number toc_depth_2\">8.4<\/span> Mixing workloads in one Redis<\/a><\/li><li><a href=\"#Ignoring_the_basics_during_tuning\"><span class=\"toc_number toc_depth_2\">8.5<\/span> Ignoring the basics during tuning<\/a><\/li><\/ul><\/li><li><a href=\"#Further_Reading_That_Wont_Waste_Your_Time\"><span class=\"toc_number toc_depth_1\">9<\/span> Further Reading That Won\u2019t Waste Your Time<\/a><\/li><li><a href=\"#WrapUp_A_Calm_Way_to_Pick_Tune_and_Sleep_at_Night\"><span class=\"toc_number toc_depth_1\">10<\/span> Wrap\u2011Up: A Calm Way to Pick, Tune, and Sleep at Night<\/a><\/li><\/ul><\/div>\n<h2 id=\"section-1\"><span id=\"What_a_Persistent_Object_Cache_Really_Is_And_Why_It_Changes_Everything\">What a Persistent Object Cache Really Is (And Why It Changes Everything)<\/span><\/h2>\n<p>I remember the 
first time I enabled a persistent object cache on a busy WooCommerce site. The homepage went from sluggish to sprightly, and the \u201cwhy didn\u2019t we do this sooner?\u201d texts started rolling in. But let\u2019s clear up a little confusion up front. In WordPress land, <strong>persistent object cache<\/strong> doesn\u2019t mean \u201csaved to disk forever.\u201d It means your cached objects live <em>beyond a single PHP request<\/em>, so the next request can reuse them without hitting MySQL again. Whether Redis or Memcached is behind it, the goal is the same: cut down the database chatter and return answers fast.<\/p>\n<p>Think of your database as the kitchen in a busy restaurant and your object cache as the heated pass. When a request comes in, you\u2019d prefer to grab a dish that\u2019s still hot and ready (cache) rather than cook the whole meal from scratch (database). WordPress asks for the same exact things over and over\u2014options, user meta, term data, product data\u2014and WooCommerce piles on with everything from stock checks to taxes and shipping. If those answers don\u2019t change every second, they\u2019re perfect candidates for caching.<\/p>\n<p>One more gentle reminder: OPcache is for PHP bytecode, not data. It\u2019s awesome and you should definitely use it, but it doesn\u2019t replace an object cache. If you want the broader big\u2011picture setup\u2014PHP\u2011FPM, OPcache, Redis and friends\u2014I wrote a friendly deep dive you might like: <a href=\"https:\/\/www.dchost.com\/blog\/en\/wordpress-icin-sunucu-tarafi-optimizasyon-php-fpm-opcache-redis-ve-mysql-ile-neyi-ne-zaman-nasil-ayarlamalisin\/\">The Server\u2011Side Secrets That Make WordPress Fly<\/a>. 
For now, let\u2019s stay laser\u2011focused on Redis vs Memcached and how to make either sing.<\/p>\n<h2 id=\"section-2\"><span id=\"Redis_and_Memcached_Two_Personalities_One_Job\">Redis and Memcached: Two Personalities, One Job<\/span><\/h2>\n<p>Over the years, I\u2019ve learned to think about Redis and Memcached less like rivals and more like two personalities at the same party. Memcached is the minimalist who shows up on time, keeps things tidy, and leaves without drama. Redis is the talented friend who plays multiple instruments and tells great stories, but you\u2019ll want to set a few house rules if you don\u2019t want a jam session at 3 a.m.<\/p>\n<p>Memcached keeps it simple: strings in, strings out, blazing fast, multithreaded, designed to be ephemeral. For WordPress, most values are serialized arrays anyway, so simplicity works just fine. It\u2019s excellent when you want a dependable in\u2011RAM cache that won\u2019t surprise you with extra features you didn\u2019t ask for. I often reach for Memcached when I want minimal moving parts and I\u2019m fine with the cache vanishing on restart\u2014because remember, it\u2019s a cache. If it\u2019s gone, WordPress simply rebuilds it.<\/p>\n<p>Redis brings more to the table: richer data structures, optional persistence, replication, and a buffet of commands that can be fine\u2011tuned for all sorts of workloads. The main event loop is single\u2011threaded, which keeps things predictably fast for most workloads, and you can get fancy with eviction policies and memory accounting. For a WooCommerce site that needs more nuanced tuning under memory pressure, the extra control in Redis can be a lifesaver. I\u2019ve leveraged LFU eviction when product catalogs balloon, and it\u2019s remarkable how gracefully the hot keys keep winning when the going gets tough.<\/p>\n<p>Here\u2019s my honest take. If you\u2019re allergic to surprises and want a straight line to faster pages, Memcached is lovely. 
If you enjoy having dials to turn when traffic gets weird\u2014Black Friday weird\u2014Redis gives you the controller with more buttons. Both are great. The better pick is the one that matches your team\u2019s comfort and your store\u2019s volatility.<\/p>\n<h2 id=\"section-3\"><span id=\"TTL_Strategy_That_Doesnt_Bite_You_Later\">TTL Strategy That Doesn\u2019t Bite You Later<\/span><\/h2>\n<h3><span id=\"Start_with_how_dangerous_is_stale\">Start with \u201chow dangerous is stale?\u201d<\/span><\/h3>\n<p>I learned the hard way that not everything deserves a long TTL. A client once had a nightly import that tweaked prices and stock in small, predictable ways. We thought, cool\u2014let\u2019s set generous TTLs and enjoy the hit ratio. The next morning, support tickets piled up because a handful of bestsellers were showing yesterday\u2019s price for longer than we liked. That\u2019s when I started categorizing cache TTLs by how dangerous staleness feels, not by how expensive the query is.<\/p>\n<p>For WordPress and WooCommerce, I usually think in four buckets. First, highly dynamic user context (carts, sessions, personalized fragments) should be short\u2011lived, sometimes just a few minutes, because stale here can break trust. Second, medium\u2011dynamic content like product queries or category pages can sit in cache for a handful of minutes when traffic surges. Third, slow but safe data such as complex option lookups or rarely changing site settings can live longer, even hours. 
Fourth, things that don\u2019t really change at all (feature flags or sitewide toggles) can go long so long as you\u2019re ready to invalidate them when needed.<\/p>\n<h3><span id=\"Transients_and_the_object_cache_handshake\">Transients and the object cache handshake<\/span><\/h3>\n<p>WordPress transients are fascinating because they\u2019re a built\u2011in way to say \u201cthis piece of data has an expiration.\u201d When you enable a persistent object cache, transients become much cheaper to use. They\u2019ll live in Redis or Memcached instead of clogging your database tables. I like using transients for computed values\u2014say, a heavy product filter result that is safe to reuse for a few minutes. The trick is to choose expirations that reflect the business. If prices dance by the hour, don\u2019t give those computed pricing artifacts a day to lounge in memory. If stock changes constantly, keep anything derived from stock short and sweet.<\/p>\n<h3><span id=\"Group_behaviors_you_should_understand\">Group behaviors you should understand<\/span><\/h3>\n<p>Different WordPress core data behaves differently under cache. Options are frequent flyers; caching them reduces DB roundtrips across the whole site. User meta and term data can also feel heavier than they look, especially on sites with many roles or rich metadata. WooCommerce adds layers like tax rates, shipping zones, and catalog visibility logic. None of this is scary if you keep the \u201chow dangerous is stale?\u201d question on loop in your head. In my experience, many sites do fine with modest TTLs\u2014think minutes for dynamic content and hours for stable options\u2014so long as invalidations happen on writes.<\/p>\n<h3><span id=\"Keep_invalidation_predictable\">Keep invalidation predictable<\/span><\/h3>\n<p>Here\u2019s a sanity saver: rely on the hooks and invalidation that WooCommerce and your object cache plugin already provide. 
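If you do roll your own transients, WP-CLI is a handy way to sanity-check a TTL or force an invalidation by hand. A small sketch, assuming WP-CLI is installed and run from the site root; the key name and value here are hypothetical:

```shell
# Store a computed result with a 5-minute TTL, read it back, then expire
# it manually -- the same effect your product-update hook should have.
wp transient set my_filter_result '{"ids":[12,34]}' 300
wp transient get my_filter_result
wp transient delete my_filter_result
```

With a persistent object cache enabled, these land in Redis or Memcached rather than the options table, so deleting one is cheap and instant.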
When products change, most reputable caching plugins know to invalidate related keys. Don\u2019t fight that; complement it. If you introduce your own layer\u2014custom transients, for example\u2014tie expiration or deletion to the same events that fire on product updates. It\u2019s not about fancy patterns; it\u2019s about making sure your cache clears itself when the world changes underneath it.<\/p>\n<h3><span id=\"My_default_TTL_mindset\">My default TTL mindset<\/span><\/h3>\n<p>When I don\u2019t have strong data yet, I start conservatively. Five to fifteen minutes for dynamic list pages. A minute or two for personalized fragments that appear on every page when logged in. One to twenty\u2011four hours for boring options that never change mid\u2011day. Then, I watch. If the store rarely changes prices midday, I stretch TTLs for that set. If the store has a habit of flash sales, I keep TTLs tighter and lean on targeted invalidation to keep pages fresh. You\u2019ll be amazed what a simple dashboard of \u201chit ratio vs. complaints\u201d will teach you in a week.<\/p>\n<h2 id=\"section-4\"><span id=\"Eviction_Tuning_How_to_Keep_Your_Hottest_Keys_from_Getting_Tossed\">Eviction Tuning: How to Keep Your Hottest Keys from Getting Tossed<\/span><\/h2>\n<h3><span id=\"Why_eviction_matters_more_during_the_storm\">Why eviction matters more during the storm<\/span><\/h3>\n<p>Eviction isn\u2019t a problem until suddenly it\u2019s the only problem. During traffic spikes, caches fill up fast. The question is: when memory is tight, which keys survive? That\u2019s where your store\u2019s personality meets your cache\u2019s personality. If you think of memory like a tiny apartment, eviction is deciding what to keep when you\u2019ve got more clothes than closet. Keep the items you wear daily. Donate the rest. 
Sounds obvious, but you\u2019d be surprised how many sites leave this to questionable defaults.<\/p>\n<h3><span id=\"Redis_policies_I_actually_use\">Redis policies I actually use<\/span><\/h3>\n<p>Redis lets you choose how it evicts. I\u2019ve had great results with <strong>allkeys-lfu<\/strong> on stores where a small set of keys get hammered (popular categories, featured products, option lookups), because LFU (least frequently used) tends to cherish long\u2011term hot keys better than LRU. If you\u2019re allergic to LFU, <strong>allkeys-lru<\/strong> is the old faithful that behaves very predictably. What I almost never choose for WooCommerce is <em>volatile\u2011only<\/em> policies, because they skip non\u2011TTL keys and can back you into out\u2011of\u2011memory traps. If you rely on volatile policies, be absolutely sure almost everything has TTLs set\u2014otherwise the cache refuses to evict the wrong stuff at the worst time.<\/p>\n<p>A small but important tip: set <strong>maxmemory<\/strong> with a little headroom under your server\u2019s RAM, especially on shared or containerized hosts. When Redis hits maxmemory, you want it evicting keys, not wrestling the kernel. If you\u2019re curious about the knobs, the official docs on eviction strategies are a nice, focused reference: <a href=\"https:\/\/redis.io\/docs\/latest\/operate\/oss_and_stack\/management\/configuration\/eviction\/\" rel=\"nofollow noopener\" target=\"_blank\">how Redis eviction policies work and how to pick one<\/a>.<\/p>\n<h3><span id=\"Memcacheds_quiet_superpower\">Memcached\u2019s quiet superpower<\/span><\/h3>\n<p>Memcached doesn\u2019t give you a menu of eviction policies, but it nails the one it has. Its LRU (least recently used) eviction, combined with slab classes, is simple and stable. The part that bites people is item size and slab fragmentation. If your cache values are frequently larger than Memcached\u2019s item size limit, they get rejected or chunked poorly. 
That\u2019s why setting the item size with <code>-I<\/code> (and appropriate memory with <code>-m<\/code>) is often a day\u2011one tweak. Modern Memcached also has background threads that keep LRU and slab management healthy; I keep the LRU maintainer and crawler on because they really do help during real traffic. If you want to explore those runtime switches and what they do, I like pointing folks to the concise notes here: <a href=\"https:\/\/github.com\/memcached\/memcached\/wiki\/ConfiguringServer\" rel=\"nofollow noopener\" target=\"_blank\">Memcached server configuration notes<\/a>.<\/p>\n<h3><span id=\"Dont_forget_the_warmup_plan\">Don\u2019t forget the warm\u2011up plan<\/span><\/h3>\n<p>Whether it\u2019s Redis or Memcached, a cold cache on a warm morning is a great way to make coffee and watch your CPU climb. If you restart your cache layer, consider a warm\u2011up strategy. For some stores, it\u2019s as simple as having a crontab hit your top landing pages or running a tiny script that pre\u2011queries your most expensive catalog pages. This isn\u2019t mandatory, but it makes launches feel less like crossing your fingers. In a pinch, you can also pre\u2011seed known\u2011hot options and term lookups, but for most teams, a simple homepage and category sweep is more than enough.<\/p>\n<h2 id=\"section-5\"><span id=\"Persistence_Restarts_and_High_Availability_Without_Overkill\">Persistence, Restarts, and High Availability Without Overkill<\/span><\/h2>\n<h3><span id=\"That_persistent_word_again\">That \u201cpersistent\u201d word again<\/span><\/h3>\n<p>Quick recap: in WordPress, a \u201cpersistent object cache\u201d is about living beyond the request, not about surviving restarts. Memcached never persists to disk. Redis can, but for pure object cache duties, I typically <strong>disable persistence<\/strong> and let it be purely in\u2011memory. Why? On restarts, Redis doesn\u2019t waste time replaying an AOF or loading an RDB snapshot. 
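Before trusting an instance as cache-only, it is worth confirming persistence really is off; a quick check, assuming redis-cli can reach the instance on its default port:

```shell
# Both should report persistence disabled for a pure cache instance.
redis-cli CONFIG GET appendonly   # expect: appendonly / no
redis-cli CONFIG GET save         # expect: save / (empty string)
```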
It just starts fresh, and WordPress fills it up again. If you\u2019re using Redis for other things\u2014sessions, queues\u2014then sure, keep persistence on for those use cases, or use separate instances: one persistent, one ephemeral.<\/p>\n<h3><span id=\"What_high_availability_looks_like_in_the_real_world\">What high availability looks like in the real world<\/span><\/h3>\n<p>This topic sounds scary until you strip it down to business needs. If your app can tolerate the cache being empty for a minute or two\u2014because it\u2019s a cache\u2014then you don\u2019t <em>need<\/em> a clustered cache to survive node failures. For stores running serious campaigns, though, I\u2019ve used Redis with replicas and Sentinel to fail over. WordPress object caching doesn\u2019t care about strong consistency; it cares about \u201cis there a cache available?\u201d during the next request. If your budget and stack allow, managed Redis with multi\u2011AZ is painless. For Memcached, a pool of nodes with client\u2011side hashing is common; losing one node degrades performance but doesn\u2019t bring you down.<\/p>\n<h3><span id=\"Dont_mix_everything_in_one_hotspot\">Don\u2019t mix everything in one hotspot<\/span><\/h3>\n<p>This is a small hill I\u2019ll gladly die on: avoid cramming everything into one Redis that\u2019s doing sessions, queues, full\u2011page caching, and object cache at the same time. It\u2019s tempting, I know. But each of those workloads spikes differently and needs different eviction behaviors. Separate them\u2014even if that\u2019s just logical DBs with strict memory limits per instance\u2014so a queue surge doesn\u2019t evict the object cache that keeps your catalog fast. 
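If you have the RAM, the cleanest separation is two small instances rather than logical DBs inside one process; a sketch with hypothetical ports and sizes, not a prescription:

```shell
# Ephemeral object-cache instance: evicts freely, never touches disk.
redis-server --port 6379 --maxmemory 1gb --maxmemory-policy allkeys-lfu \
    --appendonly no --save ""
# Durable sessions/queues instance: persists, and refuses writes when full
# rather than silently evicting data you actually need.
redis-server --port 6380 --maxmemory 512mb --maxmemory-policy noeviction \
    --appendonly yes
```

Losing the first just means a cold cache; losing the second loses real data, which is exactly why the two deserve different policies.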
Clarity beats cleverness here.<\/p>\n<h2 id=\"section-6\"><span id=\"A_Practical_Setup_You_Can_Actually_Run\">A Practical Setup You Can Actually Run<\/span><\/h2>\n<h3><span id=\"Redis_the_settings_I_reach_for\">Redis: the settings I reach for<\/span><\/h3>\n<p>When I\u2019m using Redis purely for WordPress\/WooCommerce object caching on a dedicated instance, I usually start with something like this:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># redis.conf (excerpt)\nmaxmemory 2gb\nmaxmemory-policy allkeys-lfu\nappendonly no  # disable AOF for pure cache\nsave &quot;&quot;       # disable RDB snapshotting for pure cache\nlatency-monitor-threshold 100\n<\/code><\/pre>\n<p>If the server only has 4 GB of RAM, I\u2019ll use about half for Redis, then watch and adjust. If you\u2019re running inside containers, leave headroom\u2014don\u2019t let the OOM killer get interested. I keep an eye on <code>used_memory<\/code>, <code>evicted_keys<\/code>, and <code>latency<\/code> via <code>INFO<\/code>. If <code>evicted_keys<\/code> climbs too fast, either increase memory or shorten TTLs for bulky groups. If latency spikes, I look for big values or long\u2011running commands. And yes, big values can hide in serialized arrays\u2014sometimes the answer is to cache fewer fields, not more bytes.<\/p>\n<h3><span id=\"Memcached_the_flags_that_avoid_landmines\">Memcached: the flags that avoid landmines<\/span><\/h3>\n<p>For Memcached, my baseline looks like this on a modest host:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">memcached -m 2048 -I 2m -o modern -o slab_automove=1 -o lru_crawler -o lru_maintainer -t 4 -v\n<\/code><\/pre>\n<p>That bumps the memory, allows 2 MB items, and enables the features that keep LRU healthy under changing workloads. The <code>-t<\/code> flag uses threads; with Memcached\u2019s multithreaded nature, that helps on multi\u2011core boxes. 
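Memcached's plain-text protocol makes those counters easy to grab; live data comes from something like printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211. A sketch with invented sample numbers standing in so the parsing is reproducible:

```shell
#!/bin/sh
# Extract the counters worth watching from memcached "stats" output.
# sample_stats() stands in for: printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211
sample_stats() {
  printf 'STAT get_hits 8200\r\nSTAT get_misses 1800\r\nSTAT evictions 37\r\nEND\r\n'
}
sample_stats | tr -d '\r' | awk '
  $2 == "get_hits"   { hits = $3 }
  $2 == "get_misses" { misses = $3 }
  $2 == "evictions"  { ev = $3 }
  END { printf "hit_ratio=%.0f%% evictions=%d\n", 100 * hits / (hits + misses), ev }
'
# prints: hit_ratio=82% evictions=37
```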
I keep an eye on <code>evictions<\/code> and <code>get_misses<\/code> via <code>stats<\/code>. If evictions are constant and your hit ratio falls off a cliff during campaigns, either add RAM or revisit TTLs for bulky keys. One sneaky culprit is very large, rarely reused items hogging slabs. Bigger item size limits can help, but don\u2019t let that turn your cache into a swap space for whales.<\/p>\n<h3><span id=\"WordPress_glue_what_to_actually_change\">WordPress glue: what to actually change<\/span><\/h3>\n<p>Most folks use a proven object cache drop\u2011in like the Redis Object Cache plugin or a Memcached drop\u2011in that talks to the right PHP extension. In <code>wp-config.php<\/code>, it\u2019s nice to define a few things explicitly so you know what\u2019s going on:<\/p>\n<pre class=\"language-php line-numbers\"><code class=\"language-php\">\/\/ wp-config.php (conceptual snippets)\n\/\/ Choose one backend and its host\/port\/socket.\n\/\/ For Redis:\ndefine('WP_REDIS_HOST', '127.0.0.1');\ndefine('WP_REDIS_PORT', 6379);\n\/\/ Optional: define per-group TTLs if your drop-in supports it\n\/\/ e.g., options group longer, user-specific shorter.\n\n\/\/ For Memcached (using Memcached extension):\n$memcached_servers = [\n  ['127.0.0.1', 11211]\n];\n<\/code><\/pre>\n<p>Different plugins expose different settings for TTL and groups. If yours allows per\u2011group TTLs, I usually give <em>options<\/em> a long leash, keep <em>users<\/em> and <em>sessions<\/em> short, and hold <em>transients<\/em> to whatever lifespan matches the business logic that produced them. The nuance beats a one\u2011size\u2011fits\u2011all number every time.<\/p>\n<h3><span id=\"Observability_without_drowning_in_metrics\">Observability without drowning in metrics<\/span><\/h3>\n<p>What I actually watch day\u2011to\u2011day is simple: hit ratio, evictions, and 95th percentile response for cached vs. uncached requests (if my APM exposes it). 
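On the Redis side, the same trick works against INFO stats; live numbers come from redis-cli INFO stats, and an invented sample stands in below so the arithmetic is visible:

```shell
#!/bin/sh
# Hit ratio and evictions from Redis INFO stats.
# info_sample() stands in for: redis-cli INFO stats
info_sample() {
  printf 'keyspace_hits:9500\r\nkeyspace_misses:500\r\nevicted_keys:12\r\n'
}
info_sample | tr -d '\r' | awk -F: '
  /^keyspace_hits:/   { hits = $2 }
  /^keyspace_misses:/ { misses = $2 }
  /^evicted_keys:/    { ev = $2 }
  END { printf "hit_ratio=%.1f%% evicted_keys=%d\n", 100 * hits / (hits + misses), ev }
'
# prints: hit_ratio=95.0% evicted_keys=12
```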
If you don\u2019t have fancy tooling, a quick script that flicks through <code>redis-cli INFO<\/code> or <code>memcached stats<\/code> every minute and logs a few fields is plenty. After a week, patterns will pop. You\u2019ll know if the cache is too small or if TTLs are either too stingy or too generous.<\/p>\n<h3><span id=\"Deployment_tricks_that_prevent_drama\">Deployment tricks that prevent drama<\/span><\/h3>\n<p>If you deploy code that changes cache structure\u2014say, new serialization formats or new key naming\u2014consider a gentle cache flush right after deploy. It forces a rebuild but avoids the weirdness of old and new keys colliding. If you keep your cache warm with a crawler, point it at key templates you know matter: homepage, top categories, a couple of representative product pages, and the checkout start. It\u2019s like stretching before a run; your store appreciates it.<\/p>\n<h2 id=\"section-7\"><span id=\"When_to_Choose_Redis_When_to_Choose_Memcached\">When to Choose Redis, When to Choose Memcached<\/span><\/h2>\n<p>Let me tell you about two stores I still think about. One was a minimalist fashion brand with intermittent bursts of traffic from influencer drops. They needed reliability without a fuss. We put Memcached in, sized it generously, turned on the LRU maintainer, and let it cruise. It never called attention to itself, which is the highest compliment I can give a cache. The other was a sprawling catalog with a complex tax setup and a content team constantly rearranging collections. We used Redis with LFU eviction and slightly more generous memory. When traffic surged, the hot keys stayed hot and we didn\u2019t babysit it.<\/p>\n<p>In my experience, Memcached is a great fit when you want an uncomplicated, rock\u2011solid cache that does one thing fast. Redis shines when you want control\u2014tunable eviction, better visibility, optional replication, and room to separate workloads if you ever do more than object cache. 
You can\u2019t really go wrong with either choice for WordPress, but you can set yourself up for fewer 3 a.m. surprises by matching the tool to the personality of your store.<\/p>\n<h2 id=\"section-8\"><span id=\"Common_Gotchas_And_How_I_Learned_to_Avoid_Them\">Common Gotchas (And How I Learned to Avoid Them)<\/span><\/h2>\n<h3><span id=\"Toolong_TTLs_on_userspecific_data\">Too\u2011long TTLs on user\u2011specific data<\/span><\/h3>\n<p>Once, I thought I was clever by caching user fragments longer to boost hit ratios. It backfired when returning customers saw stale free\u2011shipping banners that no longer applied. Keep user\u2011specific TTLs short unless you absolutely control when they invalidate.<\/p>\n<h3><span id=\"Volatileonly_eviction_with_nonTTL_keys\">Volatile\u2011only eviction with non\u2011TTL keys<\/span><\/h3>\n<p>Redis can be set to evict only keys with TTLs. Sounds safe until a few non\u2011TTL keys show up and pin memory to the ceiling. If you use volatile policies, verify your drop\u2011in really does set TTLs on everything it writes. Otherwise, go with allkeys policies and choose the flavor\u2014LRU or LFU\u2014that matches your pattern.<\/p>\n<h3><span id=\"Memcached_item_size_too_small\">Memcached item size too small<\/span><\/h3>\n<p>Default item sizes can be tight. If you store serialized arrays that sometimes balloon, bump the item size limit. Nothing\u2019s more confusing than a page that\u2019s \u201ccached\u201d except for five heavy items that silently fail to store.<\/p>\n<h3><span id=\"Mixing_workloads_in_one_Redis\">Mixing workloads in one Redis<\/span><\/h3>\n<p>I know I mentioned it before, but it\u2019s a heartburn classic. Keep object cache separate from queues and sessions if you can. 
Eviction policies that are perfect for one are often terrible for another.<\/p>\n<h3><span id=\"Ignoring_the_basics_during_tuning\">Ignoring the basics during tuning<\/span><\/h3>\n<p>Sometimes the cache looks guilty when the culprit is elsewhere. If PHP workers are starved or your database is running on rusty defaults, your gains will be capped. If you want a friendly checklist that looks beyond caching, the article I linked earlier on server\u2011side tuning covers how I piece these layers together with minimal drama.<\/p>\n<h2 id=\"section-9\"><span id=\"Further_Reading_That_Wont_Waste_Your_Time\">Further Reading That Won\u2019t Waste Your Time<\/span><\/h2>\n<p>If you like reading docs that get straight to the point, two resources I nudge people toward are the <a href=\"https:\/\/developer.wordpress.org\/plugins\/performance\/persistent-object-cache\/\" rel=\"nofollow noopener\" target=\"_blank\">WordPress guide to persistent object cache<\/a> and the <a href=\"https:\/\/redis.io\/docs\/latest\/operate\/oss_and_stack\/management\/configuration\/eviction\/\" rel=\"nofollow noopener\" target=\"_blank\">Redis page on eviction policies<\/a>. If Memcached is your pick, this compact reference on <a href=\"https:\/\/github.com\/memcached\/memcached\/wiki\/ConfiguringServer\" rel=\"nofollow noopener\" target=\"_blank\">Memcached configuration and runtime flags<\/a> is great when you\u2019re deciding which switches to flip. Use them as touchstones, not commandments\u2014your store\u2019s behavior is the real teacher.<\/p>\n<h2 id=\"section-10\"><span id=\"WrapUp_A_Calm_Way_to_Pick_Tune_and_Sleep_at_Night\">Wrap\u2011Up: A Calm Way to Pick, Tune, and Sleep at Night<\/span><\/h2>\n<p>If we boil this whole discussion down to something you can carry into your next deploy, it\u2019s this: a persistent object cache is your shortcut to fewer database calls and faster pages, but it works best when you\u2019re intentional about TTLs and eviction. 
Redis gives you dials; Memcached gives you calm. Neither is wrong. Choose the one that matches your appetite for tuning and your store\u2019s traffic pattern. Start with safe TTLs, watch how the cache behaves under real load, and adjust once you\u2019ve learned something.<\/p>\n<p>My go\u2011to defaults are conservative: short TTLs for anything personalized, modest TTLs for dynamic lists, long TTLs for boring options. On Redis, pick an allkeys policy\u2014LRU or LFU\u2014set maxmemory with headroom, and disable persistence if it\u2019s cache\u2011only. On Memcached, raise memory, increase the item size limit if needed, and enable the maintenance threads that keep LRU healthy. Then, give yourself a small warm\u2011up routine after restarts and watch the simple metrics that matter. The rest is just reps and routine.<\/p>\n<p>Hope this was helpful! If you\u2019ve got a war story about Redis or Memcached during a sale rush, I\u2019d love to hear what saved your evening. Until next time, keep it fast and keep it friendly.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>So there I was, staring at a WooCommerce store that felt like it was wading through syrup, and it hit me: this wasn\u2019t a slow\u2011PHP problem or a broken CDN rule. It was that quiet little layer we forget until it screams\u2014object caching. 
Ever had that moment when a cart page drags after a product [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1356,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1355","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1355","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=1355"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1355\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/1356"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=1355"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=1355"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=1355"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}