ChainedFastBackend in Drupal 7

Difficulty: Let's Rock

ChainedFastBackend is a new cache backend in Drupal 8 that allows you to chain two cache backends.

In order to mitigate a network roundtrip for each cache get operation, this cache allows a fast backend to be put in front of a slow(er) backend. Typically the fast backend will be something like APCu, bound to a single web node, and will not require a network round trip to fetch a cache item. The fast backend will also typically be inconsistent (it only sees changes from one web node). The slower backend will be something like MySQL, Memcached or Redis, and will be used by all web nodes, making it consistent, but also requiring a network round trip for each cache get.
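Conceptually, a get against the chained backend works roughly like this (a pseudocode-style PHP sketch; the method and property names here are illustrative, not the actual class API):

```php
// Illustrative sketch only -- names are made up, not the real API.
function chained_get($cid) {
  // The consistent backend tracks when this bin was last written to.
  $last_write = $this->consistentBackend->getLastBinWrite();

  // Try the fast (per-node, in-memory) backend first.
  $item = $this->fastBackend->get($cid);
  if ($item && $item->created >= $last_write) {
    // No network round trip needed: the fast copy is still valid.
    return $item;
  }

  // Fall back to the consistent (shared) backend and repopulate
  // the fast backend for the next request on this web node.
  $item = $this->consistentBackend->get($cid);
  if ($item) {
    $this->fastBackend->set($cid, $item->data);
  }
  return $item;
}
```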

This is awesome news:

  • CLI (drush/console) performance is heavily boosted. CLI processes do not share in-memory caches with your web application, so they always used to feel like starting on cold caches. Not anymore.
  • Some deployment procedures cause in-memory caches to be lost, and unless you warm up the caches right after deployment, your application starts from a very cold state. If you are doing an emergency update under heavy load, starting from a cold state with high traffic can have very bad side effects.

On small sites we usually cache nearly everything in-memory (volatile!) so that they run as fast as possible, which makes the two previous situations a real issue.

You can read the initial development story here: Add a cache backend that checks an inconsistent cache, then falls back to a consistent cache backend.

Backporting the ChainedFastBackend to Drupal 7 is quite straightforward. If you are using the Wincache module, ChainedFastBackend is already available for use!

As always, I only recommend enterprise-grade and robust storage backends, so you should be giving Couchbase a shot as an alternative to Redis or even Mongo. This is what the big guys are using (PayPal, etc.), and it gives you persistent, memcached-compatible storage.

For those who don't know what Couchbase is all about: if you are already using Memcache, you can hot-swap to Couchbase with zero issues because it has a Memcache compatibility layer, with the added benefit that you can choose to make your storage persistent.

To set up your ChainedFastBackend, follow these simple steps.

Register your cache backends in settings.php:

$conf['cache_backends'][] = 'sites/all/modules/contrib/wincachedrupal/drupal_win_cache.inc';
$conf['cache_backends'][] = 'sites/all/modules/contrib/wincachedrupal/ChainedFastBackend.inc';
$conf['cache_backends'][] = 'sites/all/modules/contrib/memcache/memcache.inc';

Tell the chained backend what it should use as fast and persistent backends:

$conf['fastBackend'] = 'DrupalWinCache';
$conf['consistentBackend'] = 'MemCacheDrupal';

Now distribute the cache bins as you wish, and make sure that frequently written bins are not handled by the chained backend:

$conf['cache_default_class'] = 'ChainedFastBackend';
$conf['cache_class_fastcache'] = 'DrupalWinCache';

$conf['cache_class_cache_views'] = 'MemCacheDrupal';
$conf['cache_class_cache_form'] = 'MemCacheDrupal';
$conf['cache_class_cache_update'] = 'MemCacheDrupal';
$conf['cache_class_cache_menu'] = 'MemCacheDrupal';

Because this backend marks all the cache entries in a bin as outdated on each write to that bin, it is best suited to bins with few changes.
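In other words, every write effectively bumps a per-bin timestamp on the consistent backend, which invalidates the fast copies everywhere (again a pseudocode-style sketch with made-up names):

```php
// Illustrative sketch only -- names are made up, not the real API.
function chained_set($cid, $data) {
  $this->consistentBackend->set($cid, $data);
  // Recording the write time invalidates ALL fast-backend entries
  // for this bin on every web node, not just this $cid.
  $this->consistentBackend->markBinLastWrite(time());
  $this->fastBackend->set($cid, $data);
}
```

This is why write-heavy bins such as cache_form belong directly on the consistent backend: chaining them would wipe the fast cache for the whole bin on every write.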

What kind of improvements can you expect?

Our deployment script triggers a custom crawler that warms up the application for anonymous and logged-in users - after a full system wipe that includes rebuilding the menu and the registry. On the first site we tested this on, the first hit of the crawler after deployment went down from 13 seconds to a mere 1.3 seconds. Nice, but not surprising, as we are moving from a situation where in-memory caches were completely lost to one where everything is still in Couchbase.

You will only see performance benefits if you are already using in-memory caching (APC/Wincache) together with a persistent caching backend such as Couchbase.

Because this cache backend has to ensure that consistent data is served on sites with multiple webheads, it carries a potentially dangerous overhead: it completely invalidates a bin whenever an item in that bin is written. After having this running in production for a while, we found that if the wrong bins are managed by this backend, you will indeed see performance degradation.


By: david_garcia Sunday, June 28, 2015 - 00:00