Varnishing Retinas

Like everyone else involved in both photography and web sites (and interested in getting the most out of both), I’ve been looking at ways to deal with big retina displays.

This site has already been serving double-resolution thumbnails for over a year now (as visitors will readily attest). The file size difference is pretty much negligible at small resolutions, but once you start ramping up screen and image sizes, things get a bit hairy.

Since I’ve been hacking away at a minimalist image gallery for a few months, I looked briefly at JavaScript-based solutions like foresight.js, but found most of them needlessly fiddly, largely because I don’t want to pre-render several different resolutions and don’t care about (or rather, don’t believe in) bandwidth constraints1.

I eventually came across Adaptive Images (thanks to Pedro) and found it a sensible enough compromise:

  • The UA sets a cookie with its screen resolution and pixel density upon initial page load (via a small inline script along the lines of the snippet below).
  • The server reads that cookie on subsequent HTTP requests and resizes images accordingly, with a few fail-safes.
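
For reference, the client-side piece amounts to a one-liner in the page head. The snippet below is a minimal sketch of that approach (the exact code shipped with Adaptive Images may differ slightly): it stores the largest screen dimension and, where available, the device pixel ratio in a resolution cookie before any images are requested:

<script>
    // Minimal sketch: record the largest screen dimension (plus the pixel
    // ratio, when the browser exposes it) in a "resolution" cookie
    document.cookie = 'resolution=' + Math.max(screen.width, screen.height) +
        ('devicePixelRatio' in window ? ',' + window.devicePixelRatio : ',1') +
        '; path=/';
</script>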

The base implementation isn’t perfect for all purposes (since it’s designed for downsampling large images, it will merrily upscale small ones), so I tweaked it a bit2 while porting it to Yaki.

Does it break URIs? Yes, of course it does - two different devices accessing the same image URL will get different binary representations of the same content depending on the cookie setting.

Do I care?

Seriously? No. Not having to pre-render multiple image resolutions at publishing time, or to deal with fiddly scripts poking around at my server and issuing multiple requests per session, is more than worth it for me. Plus, the overall user experience is seamless - no flashes, no double image loads, nothing.

The real trick, however, is getting Varnish to selectively play along with cookies. To do so, you need to define a vcl_hash subroutine that tweaks the cache hash key, so that Varnish will cache (and serve) the varying representations of the same URI accordingly:

sub vcl_hash {
    # Default hashing: URL plus Host header (or server IP if none was sent)
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    # Extract the resolution cookie value using a PCRE non-greedy regex,
    # and append a "Mobile" marker when the User-Agent reports one
    if (req.http.Cookie ~ "resolution") {
        set req.http.X-Varnish-Hashed-On =
            regsub(req.http.Cookie, "^.*?resolution=([^;]*);*.*$", "\1");
        if (req.http.User-Agent ~ "Mobile") {
            set req.http.X-Varnish-Hashed-On =
                req.http.X-Varnish-Hashed-On + "Mobile";
        }
    }
    # Only vary the cache key like this for image URLs
    if (req.url ~ "/img" && req.http.X-Varnish-Hashed-On) {
        hash_data(req.http.X-Varnish-Hashed-On);
    }
    return (hash);
}

So far, it’s working fine on my test gallery and on a few images on this site, and I’ll probably roll it out site-wide soon (as well as merging the revised code into the public Yaki source as time permits).

Update: It bears remembering that you should take the time to strip all unnecessary cookies in vcl_recv, passing only the resolution cookie on to your back-end as needed - otherwise there won’t really be any caching going on.
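
As a minimal sketch of that (assuming the same /img URL prefix used above and Varnish 3 syntax), a vcl_recv that whitelists only the resolution cookie could look something like this:

sub vcl_recv {
    if (req.http.Cookie) {
        # Keep only the resolution cookie and drop everything else,
        # so that requests stay cacheable
        set req.http.Cookie = ";" + req.http.Cookie;
        set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
        set req.http.Cookie = regsuball(req.http.Cookie, ";(resolution)=", "; \1=");
        set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
        set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
        if (req.http.Cookie == "") {
            unset req.http.Cookie;
        }
    }
    # The resolution cookie is deliberately kept, so force a cache lookup for
    # image requests - otherwise the default VCL would pass them straight to
    # the back-end because a cookie is still present
    if (req.url ~ "/img" && req.http.Cookie ~ "resolution") {
        return (lookup);
    }
}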


  1. A long, long time ago, in a galaxy far, far away, I worked on deep packet inspection and content adaptation, and every single solution that trusted what the user-agent reported as being its current bandwidth failed miserably in real-life mobile scenarios. So I don’t think things like the Network Information API will ever work properly, and would rather optimize file size based on unchanging criteria like the UA’s screen size. ↩︎

  2. Besides serving minimal resolutions directly, I leveraged Yaki’s built-in BLOB cache (so every size is only rendered once) and used ImageMagick’s -adaptive-resize flag instead of a plain convolution in order to render slightly sharper upscaled images. ↩︎
