
Redis vs Memcached for WordPress/WooCommerce: The TTL and Eviction Tuning Playbook I Wish I Had

So there I was, staring at a WooCommerce store that felt like it was wading through syrup, and it hit me: this wasn’t a slow‑PHP problem or a broken CDN rule. It was that quiet little layer we forget until it screams—object caching. Ever had that moment when a cart page drags after a product import, or when logged‑in customers feel like they’re on dial‑up while guests zip around? That’s often your object cache either doing you favors or tripping over its own shoelaces. In this piece, I want to walk you through the way I think about Redis vs Memcached for WordPress and WooCommerce, how I set TTLs without shooting myself in the foot, and the eviction policies that keep your store snappy when memory pressure hits. No tables, no stiff comparisons—just the real story of what tends to work and how to keep it working when traffic spikes at the worst possible time.

Here’s the thing: WordPress is chatty with the database, and WooCommerce adds even more chatter. A persistent object cache is like keeping your best friend on speed dial instead of calling directory assistance every time. Whether you pick Redis or Memcached, the mindset is the same—cache what’s useful, expire what’s dangerous when stale, and choose an eviction approach that won’t toss your most precious keys out the window. I’ll share the trade‑offs I see, the defaults I tweak, and the settings that have saved me during midnight sales. Grab a coffee; we’re going in.

What a Persistent Object Cache Really Is (And Why It Changes Everything)

I remember the first time I enabled a persistent object cache on a busy WooCommerce site. The homepage went from sluggish to sprightly, and the “why didn’t we do this sooner?” texts started rolling in. But let’s clear up a little confusion up front. In WordPress land, persistent object cache doesn’t mean “saved to disk forever.” It means your cached objects live beyond a single PHP request, so the next request can reuse them without hitting MySQL again. Whether Redis or Memcached is behind it, the goal is the same: cut down the database chatter and return answers fast.

Think of your database as the kitchen in a busy restaurant and your object cache as the heated pass. When a request comes in, you’d prefer to grab a dish that’s still hot and ready (cache) rather than cook the whole meal from scratch (database). WordPress asks for the same exact things over and over—options, user meta, term data, product data—and WooCommerce piles on with everything from stock checks to taxes and shipping. If those answers don’t change every second, they’re perfect candidates for caching.

One more gentle reminder: OPcache is for PHP bytecode, not data. It’s awesome and you should definitely use it, but it doesn’t replace an object cache. If you want the broader big‑picture setup—PHP‑FPM, OPcache, Redis and friends—I wrote a friendly deep dive you might like: The Server‑Side Secrets That Make WordPress Fly. For now, let’s stay laser‑focused on Redis vs Memcached and how to make either sing.

Redis and Memcached: Two Personalities, One Job

Over the years, I’ve learned to think about Redis and Memcached less like rivals and more like two personalities at the same party. Memcached is the minimalist who shows up on time, keeps things tidy, and leaves without drama. Redis is the talented friend who plays multiple instruments and tells great stories, but you’ll want to set a few house rules if you don’t want a jam session at 3 a.m.

Memcached keeps it simple: strings in, strings out, blazing fast, multithreaded, designed to be ephemeral. For WordPress, most values are serialized arrays anyway, so simplicity works just fine. It’s excellent when you want a dependable in‑RAM cache that won’t surprise you with extra features you didn’t ask for. I often reach for Memcached when I want minimal moving parts and I’m fine with the cache vanishing on restart—because remember, it’s a cache. If it’s gone, WordPress simply rebuilds it.

Redis brings more to the table: richer data structures, optional persistence, replication, and a buffet of commands that can be fine‑tuned for all sorts of workloads. The main event loop is single‑threaded, which keeps things predictably fast for most workloads, and you can get fancy with eviction policies and memory accounting. For a WooCommerce site that needs more nuanced tuning under memory pressure, the extra control in Redis can be a lifesaver. I’ve leveraged LFU eviction when product catalogs balloon, and it’s remarkable how gracefully the hot keys keep winning when the going gets tough.

Here’s my honest take. If you’re allergic to surprises and want a straight line to faster pages, Memcached is lovely. If you enjoy having dials to turn when traffic gets weird—Black Friday weird—Redis gives you the controller with more buttons. Both are great. The better pick is the one that matches your team’s comfort and your store’s volatility.

TTL Strategy That Doesn’t Bite You Later

Start with “how dangerous is stale?”

I learned the hard way that not everything deserves a long TTL. A client once had a nightly import that tweaked prices and stock in small, predictable ways. We thought, cool—let’s set generous TTLs and enjoy the hit ratio. The next morning, support tickets piled up because a handful of bestsellers were showing yesterday’s price for longer than we liked. That’s when I started categorizing cache TTLs by how dangerous staleness feels, not by how expensive the query is.

For WordPress and WooCommerce, I usually think in four buckets. First, highly dynamic user context (carts, sessions, personalized fragments) should be short‑lived, sometimes just a few minutes, because stale here can break trust. Second, medium‑dynamic content like product queries or category pages can sit in cache for a handful of minutes when traffic surges. Third, slow but safe data such as complex option lookups or rarely changing site settings can live longer, even hours. Fourth, things that don’t really change at all (feature flags or sitewide toggles) can go long so long as you’re ready to invalidate them when needed.
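If it helps to see the four buckets in one place, here's a tiny shell sketch of that mapping. The bucket names and numbers are illustrative defaults, not real WordPress cache group names—tune them to your store:

```shell
#!/bin/sh
# Sketch: decide TTLs by "how dangerous is stale?", not query cost.
# Bucket names are illustrative, not actual WordPress cache groups.
ttl_for_bucket() {
  case "$1" in
    user_context)   echo 120   ;;  # carts, personalized fragments: ~2 min
    dynamic_lists)  echo 600   ;;  # category pages, product queries: ~10 min
    stable_options) echo 21600 ;;  # slow-but-safe lookups: ~6 hours
    static_flags)   echo 86400 ;;  # sitewide toggles: ~24 h, invalidate on write
    *)              echo 300   ;;  # unknown data: fail safe with a short TTL
  esac
}

ttl_for_bucket dynamic_lists   # prints 600
```

The catch-all default matters: when you don't know how dangerous stale is, err short.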

Transients and the object cache handshake

WordPress transients are fascinating because they’re a built‑in way to say “this piece of data has an expiration.” When you enable a persistent object cache, transients become much cheaper to use. They’ll live in Redis or Memcached instead of clogging your database tables. I like using transients for computed values—say, a heavy product filter result that is safe to reuse for a few minutes. The trick is to choose expirations that reflect the business. If prices dance by the hour, don’t give those computed pricing artifacts a day to lounge in memory. If stock changes constantly, keep anything derived from stock short and sweet.

Group behaviors you should understand

Different WordPress core data behaves differently under cache. Options are frequent flyers; caching them reduces DB roundtrips across the whole site. User meta and term data can also feel heavier than they look, especially on sites with many roles or rich metadata. WooCommerce adds layers like tax rates, shipping zones, and catalog visibility logic. None of this is scary if you keep the “how dangerous is stale?” question on loop in your head. In my experience, many sites do fine with modest TTLs—think minutes for dynamic content and hours for stable options—so long as invalidations happen on writes.

Keep invalidation predictable

Here’s a sanity saver: rely on the hooks and invalidation that WooCommerce and your object cache plugin already provide. When products change, most reputable caching plugins know to invalidate related keys. Don’t fight that; complement it. If you introduce your own layer—custom transients, for example—tie expiration or deletion to the same events that fire on product updates. It’s not about fancy patterns; it’s about making sure your cache clears itself when the world changes underneath it.

My default TTL mindset

When I don’t have strong data yet, I start conservatively. Five to fifteen minutes for dynamic list pages. A minute or two for personalized fragments that appear on every page when logged in. One to twenty‑four hours for boring options that never change mid‑day. Then, I watch. If the store rarely changes prices midday, I stretch TTLs for that set. If the store has a habit of flash sales, I keep TTLs tighter and lean on targeted invalidation to keep pages fresh. You’ll be amazed what a simple dashboard of “hit ratio vs. complaints” will teach you in a week.

Eviction Tuning: How to Keep Your Hottest Keys from Getting Tossed

Why eviction matters more during the storm

Eviction isn’t a problem until suddenly it’s the only problem. During traffic spikes, caches fill up fast. The question is: when memory is tight, which keys survive? That’s where your store’s personality meets your cache’s personality. If you think of memory like a tiny apartment, eviction is deciding what to keep when you’ve got more clothes than closet. Keep the items you wear daily. Donate the rest. Sounds obvious, but you’d be surprised how many sites leave this to questionable defaults.

Redis policies I actually use

Redis lets you choose how it evicts. I’ve had great results with allkeys-lfu on stores where a small set of keys gets hammered (popular categories, featured products, option lookups), because LFU (least frequently used) tends to cherish long‑term hot keys better than LRU. If you’d rather skip LFU’s tuning knobs, allkeys-lru is the old faithful that behaves very predictably. What I almost never choose for WooCommerce is a volatile‑only policy, because those skip keys without TTLs and can back you into out‑of‑memory traps. If you rely on volatile policies, be absolutely sure almost everything has a TTL set—otherwise Redis has nothing it’s allowed to evict, and writes start failing at the worst time.
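If you do go the LFU route, the relevant knobs live in redis.conf. A sketch with illustrative values—the two lfu-* lines show what I understand to be Redis's defaults, which are usually fine as-is:

```conf
maxmemory 1gb                  # illustrative; size for your host
maxmemory-policy allkeys-lfu   # evict across all keys, least-frequently-used first
lfu-log-factor 10              # higher = hot-key counters saturate more slowly
lfu-decay-time 1               # minutes of idleness before a key's counter decays
```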

A small but important tip: set maxmemory with a little headroom under your server’s RAM, especially on shared or containerized hosts. When Redis hits maxmemory, you want it evicting keys, not wrestling the kernel. If you’re curious about the knobs, the official docs on eviction strategies are a nice, focused reference: how Redis eviction policies work and how to pick one.
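When I'm sizing that headroom, I start from a simple ratio and adjust after watching real traffic. A minimal sketch—the 50% split is my starting assumption, not a rule:

```shell
#!/bin/sh
# Sketch: start maxmemory at roughly half the host's RAM, leaving
# headroom for the kernel, PHP-FPM, and friends. Adjust the ratio
# once you've watched used_memory under real load.
suggest_maxmemory() {
  total_mb=$1                    # total system RAM in MB
  echo "$(( total_mb / 2 ))mb"
}

suggest_maxmemory 4096   # 4 GB host -> prints 2048mb
```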

Memcached’s quiet superpower

Memcached doesn’t give you a menu of eviction policies, but it nails the one it has. Its LRU (least recently used) eviction, combined with slab classes, is simple and stable. The part that bites people is item size and slab fragmentation. If your cache values are frequently larger than Memcached’s item size limit, they get rejected or chunked poorly. That’s why setting the item size with -I (and appropriate memory with -m) is often a day‑one tweak. Modern Memcached also has background threads that keep LRU and slab management healthy; I keep the LRU maintainer and crawler on because they really do help during real traffic. If you want to explore those runtime switches and what they do, I like pointing folks to the concise notes here: Memcached server configuration notes.

Don’t forget the warm‑up plan

Whether it’s Redis or Memcached, a cold cache on a warm morning is a great way to make coffee and watch your CPU climb. If you restart your cache layer, consider a warm‑up strategy. For some stores, it’s as simple as having a cron job hit your top landing pages or running a tiny script that pre‑queries your most expensive catalog pages. This isn’t mandatory, but it makes launches feel less like crossing your fingers. In a pinch, you can also pre‑seed known‑hot options and term lookups, but for most teams, a simple homepage and category sweep is more than enough.
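A warm-up script can be embarrassingly simple. Here's a sketch—the URLs are placeholders for your store's hot pages, and it defaults to a dry run so you can see what it would hit before setting WARM=1:

```shell
#!/bin/sh
# Sketch: warm the cache after a restart by fetching known-hot pages.
# URLs are placeholders; run with WARM=1 to actually send requests.
warm_cache() {
  for u in \
    "https://example.com/" \
    "https://example.com/shop/" \
    "https://example.com/product-category/bestsellers/"
  do
    if [ "${WARM:-0}" = "1" ]; then
      # -s quiet, -o discard body, print the status code per page
      curl -s -o /dev/null -w "%{http_code} $u\n" "$u"
    else
      echo "would fetch: $u"
    fi
  done
}

warm_cache
```

Drop something like this into a post-restart hook or a deploy script and the first real visitor never pays the cold-cache tax.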

Persistence, Restarts, and High Availability Without Overkill

That “persistent” word again

Quick recap: in WordPress, a “persistent object cache” is about living beyond the request, not about surviving restarts. Memcached never persists to disk. Redis can, but for pure object cache duties, I typically disable persistence and let it be purely in‑memory. Why? On restarts, Redis doesn’t waste time replaying an AOF or loading an RDB snapshot. It just starts fresh, and WordPress fills it up again. If you’re using Redis for other things—sessions, queues—then sure, keep persistence on for those use cases, or use separate instances: one persistent, one ephemeral.

What high availability looks like in the real world

This topic sounds scary until you strip it down to business needs. If your app can tolerate the cache being empty for a minute or two—because it’s a cache—then you don’t need a clustered cache to survive node failures. For stores running serious campaigns, though, I’ve used Redis with replicas and Sentinel to fail over. WordPress object caching doesn’t care about strong consistency; it cares about “is there a cache available?” during the next request. If your budget and stack allow, managed Redis with multi‑AZ is painless. For Memcached, a pool of nodes with client‑side hashing is common; losing one node degrades performance but doesn’t bring you down.

Don’t mix everything in one hotspot

This is a small hill I’ll gladly die on: avoid cramming everything into one Redis that’s doing sessions, queues, full‑page caching, and object cache at the same time. It’s tempting, I know. But each of those workloads spikes differently and needs different eviction behaviors. Separate them—even if that’s just logical DBs with strict memory limits per instance—so a queue surge doesn’t evict the object cache that keeps your catalog fast. Clarity beats cleverness here.
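To make that separation concrete, here's a sketch of two dedicated instances side by side—file paths, ports, and sizes are placeholders, but notice how the eviction and persistence choices diverge:

```conf
# /etc/redis/object-cache.conf — ephemeral, free to evict
port 6379
maxmemory 2gb
maxmemory-policy allkeys-lfu
appendonly no
save ""

# /etc/redis/queues.conf — persistent, must never silently drop jobs
port 6380
maxmemory 512mb
maxmemory-policy noeviction
appendonly yes
```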

A Practical Setup You Can Actually Run

Redis: the settings I reach for

When I’m using Redis purely for WordPress/WooCommerce object caching on a dedicated instance, I usually start with something like this:

# redis.conf (excerpt)
maxmemory 2gb
maxmemory-policy allkeys-lfu
appendonly no  # disable AOF for pure cache
save ""       # disable RDB snapshotting for pure cache
latency-monitor-threshold 100

If the server only has 4 GB of RAM, I’ll use about half for Redis, then watch and adjust. If you’re running inside containers, leave headroom—don’t let the OOM killer get interested. I keep an eye on used_memory, evicted_keys, and latency via INFO. If evicted_keys climbs too fast, either increase memory or shorten TTLs for bulky groups. If latency spikes, I look for big values or long‑running commands. And yes, big values can hide in serialized arrays—sometimes the answer is to cache fewer fields, not more bytes.
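The fields I mentioned are all one INFO call away. Here's a sketch that parses a captured sample so it runs anywhere; in production you'd pipe in `redis-cli INFO stats` instead (and strip carriage returns with `tr -d '\r'`):

```shell
#!/bin/sh
# Sketch: pull the fields worth watching out of `redis-cli INFO`.
# Sample values below are made up for illustration.
info='used_memory:1073741824
evicted_keys:42
keyspace_hits:9000
keyspace_misses:1000'

hits=$(printf '%s\n' "$info"    | awk -F: '$1=="keyspace_hits"   {print $2}')
misses=$(printf '%s\n' "$info"  | awk -F: '$1=="keyspace_misses" {print $2}')
evicted=$(printf '%s\n' "$info" | awk -F: '$1=="evicted_keys"    {print $2}')

echo "evicted_keys: $evicted"
echo "hit ratio: $(( hits * 100 / (hits + misses) ))%"
```

Log those two lines once a minute and you have a poor man's cache dashboard.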

Memcached: the flags that avoid landmines

For Memcached, my baseline looks like this on a modest host:

# Note: on recent builds (1.5+), "modern" behavior is the default and
# already enables the LRU maintainer and crawler; listing them is harmless.
memcached -m 2048 -I 2m -o modern -o slab_automove=1 -o lru_crawler -o lru_maintainer -t 4 -v

That bumps the memory, allows 2 MB items, and enables the features that keep LRU healthy under changing workloads. The -t flag sets the worker thread count; since Memcached is multithreaded, that helps on multi‑core boxes. I keep an eye on evictions and get_misses via stats. If evictions are constant and your hit ratio falls off a cliff during campaigns, either add RAM or revisit TTLs for bulky keys. One sneaky culprit is very large, rarely reused items hogging slabs. Bigger item size limits can help, but don’t let that turn your cache into a swap space for whales.
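The same poor man's dashboard works for Memcached. A sketch parsing a captured `stats` sample—live, you'd capture it with something like `printf 'stats\nquit\n' | nc 127.0.0.1 11211`:

```shell
#!/bin/sh
# Sketch: hit ratio and evictions from Memcached `stats` output.
# Sample values below are made up for illustration.
stats='STAT get_hits 8500
STAT get_misses 1500
STAT evictions 120'

hits=$(printf '%s\n' "$stats"   | awk '$2=="get_hits"   {print $3}')
misses=$(printf '%s\n' "$stats" | awk '$2=="get_misses" {print $3}')
evict=$(printf '%s\n' "$stats"  | awk '$2=="evictions"  {print $3}')

echo "hit ratio: $(( hits * 100 / (hits + misses) ))%, evictions: $evict"
```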

WordPress glue: what to actually change

Most folks use a proven object cache drop‑in like the Redis Object Cache plugin or a Memcached drop‑in that talks to the right PHP extension. In wp-config.php, it’s nice to define a few things explicitly so you know what’s going on:

// wp-config.php (conceptual snippets)
// Choose one backend and its host/port/socket.
// For Redis:
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
// Optional: define per-group TTLs if your drop-in supports it
// e.g., options group longer, user-specific shorter.

// For Memcached (using Memcached extension):
$memcached_servers = [
  ['127.0.0.1', 11211]
];

Different plugins expose different settings for TTL and groups. If yours allows per‑group TTLs, I usually give options a long leash, keep users and sessions short, and hold transients to whatever lifespan matches the business logic that produced them. The nuance beats a one‑size‑fits‑all number every time.

Observability without drowning in metrics

What I actually watch day‑to‑day is simple: hit ratio, evictions, and 95th percentile response for cached vs. uncached requests (if my APM exposes it). If you don’t have fancy tooling, a quick script that flicks through redis-cli INFO or memcached stats every minute and logs a few fields is plenty. After a week, patterns will pop. You’ll know if the cache is too small or if TTLs are either too stingy or too generous.

Deployment tricks that prevent drama

If you deploy code that changes cache structure—say, new serialization formats or new key naming—consider a gentle cache flush right after deploy. It forces a rebuild but avoids the weirdness of old and new keys colliding. If you keep your cache warm with a crawler, point it at the URLs you know matter: homepage, top categories, a couple of representative product pages, and the checkout start. It’s like stretching before a run; your store appreciates it.

When to Choose Redis, When to Choose Memcached

Let me tell you about two stores I still think about. One was a minimalist fashion brand with intermittent bursts of traffic from influencer drops. They needed reliability without a fuss. We put Memcached in, sized it generously, turned on the LRU maintainer, and let it cruise. It never called attention to itself, which is the highest compliment I can give a cache. The other was a sprawling catalog with a complex tax setup and a content team constantly rearranging collections. We used Redis with LFU eviction and slightly more generous memory. When traffic surged, the hot keys stayed hot and we didn’t babysit it.

In my experience, Memcached is a great fit when you want an uncomplicated, rock‑solid cache that does one thing fast. Redis shines when you want control—tunable eviction, better visibility, optional replication, and room to separate workloads if you ever do more than object cache. You can’t really go wrong with either choice for WordPress, but you can set yourself up for fewer 3 a.m. surprises by matching the tool to the personality of your store.

Common Gotchas (And How I Learned to Avoid Them)

Too‑long TTLs on user‑specific data

Once, I thought I was clever by caching user fragments longer to boost hit ratios. It backfired when returning customers saw stale free‑shipping banners that no longer applied. Keep user‑specific TTLs short unless you absolutely control when they invalidate.

Volatile‑only eviction with non‑TTL keys

Redis can be set to evict only keys with TTLs. Sounds safe until a few non‑TTL keys show up and pin memory to the ceiling. If you use volatile policies, verify your drop‑in really does set TTLs on everything it writes. Otherwise, go with allkeys policies and choose the flavor—LRU or LFU—that matches your pattern.

Memcached item size too small

Default item sizes can be tight. If you store serialized arrays that sometimes balloon, bump the item size limit. Nothing’s more confusing than a page that’s “cached” except for five heavy items that silently fail to store.

Mixing workloads in one Redis

I know I mentioned it before, but it’s a heartburn classic. Keep object cache separate from queues and sessions if you can. Eviction policies that are perfect for one are often terrible for another.

Ignoring the basics during tuning

Sometimes the cache looks guilty when the culprit is elsewhere. If PHP workers are starved or your database is running on rusty defaults, your gains will be capped. If you want a friendly checklist that looks beyond caching, the article I linked earlier on server‑side tuning covers how I piece these layers together with minimal drama.

Further Reading That Won’t Waste Your Time

If you like reading docs that get straight to the point, two resources I nudge people toward are the WordPress guide to persistent object cache and the Redis page on eviction policies. If Memcached is your pick, this compact reference on Memcached configuration and runtime flags is great when you’re deciding which switches to flip. Use them as touchstones, not commandments—your store’s behavior is the real teacher.

Wrap‑Up: A Calm Way to Pick, Tune, and Sleep at Night

If we boil this whole discussion down to something you can carry into your next deploy, it’s this: a persistent object cache is your shortcut to fewer database calls and faster pages, but it works best when you’re intentional about TTLs and eviction. Redis gives you dials; Memcached gives you calm. Neither is wrong. Choose the one that matches your appetite for tuning and your store’s traffic pattern. Start with safe TTLs, watch how the cache behaves under real load, and adjust once you’ve learned something.

My go‑to defaults are conservative: short TTLs for anything personalized, modest TTLs for dynamic lists, long TTLs for boring options. On Redis, pick an allkeys policy—LRU or LFU—set maxmemory with headroom, and disable persistence if it’s cache‑only. On Memcached, raise memory, increase the item size limit if needed, and enable the maintenance threads that keep LRU healthy. Then, give yourself a small warm‑up routine after restarts and watch the simple metrics that matter. The rest is just reps and routine.

Hope this was helpful! If you’ve got a war story about Redis or Memcached during a sale rush, I’d love to hear what saved your evening. Until next time, keep it fast and keep it friendly.

Frequently Asked Questions

Should I pick Redis or Memcached for my WooCommerce store?

Great question! Here’s the deal: Memcached is simple, multithreaded, and rock‑solid when you want an in‑RAM cache that “just works.” Redis gives you more dials—tunable eviction (LRU/LFU), optional replication, and better observability. If you want minimal moving parts and you’re fine with the cache resetting on restarts, Memcached is lovely. If you expect traffic spikes and want more control under memory pressure, Redis usually pays off.

How do I choose TTLs for WooCommerce data?

I start with the danger of staleness. Personalized bits (carts, user fragments) get short TTLs—think a minute or two. Dynamic lists (category pages, some queries) get a few minutes, especially during campaigns. Stable options and boring data can stretch to hours. Use transients for computed values and keep expirations close to how fast the business changes. Then watch hit ratio and complaints, and adjust.

Which Redis eviction policy works best for WooCommerce?

For WooCommerce, I’ve had great results with allkeys‑lfu because long‑term hot keys keep winning under pressure. allkeys‑lru is also predictable and safe. I avoid volatile‑only policies unless I’m absolutely certain every key has a TTL; otherwise you can get out‑of‑memory surprises. Whatever you choose, set a realistic maxmemory with headroom and monitor evictions so you know when to tweak.