When using Varnish Cache, one of the most important things to understand is how and why requests get labelled as they do. A “cache hit” and a “cache miss” are easily understood - a cache hit is a request that is served successfully from the cache, while a cache miss is a request that goes through the cache, fails to find the object there, and therefore has to be fetched from the origin.
A cache pass is a request which is labelled as uncacheable, meaning it will bypass the cache and go straight to the origin. This might be an item that is unique to each visitor, or one that hasn’t yet been cached.

When examining your Varnish Cache metrics and logs, you should aim for a high cache hit rate, which means you are successfully serving content from the cache to the majority of your visitors. This both speeds up response times for visitors and reduces load on your origin server, which only has to generate a response when a cached object expires. If you are caching the majority of your static content and your HTML documents, you should be able to achieve a cache hit rate of 95% or higher.
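A pass is usually configured explicitly in vcl_recv. The snippet below is a minimal sketch using Varnish 4.x VCL syntax; the URL pattern is a hypothetical example rather than a recommendation:

```vcl
sub vcl_recv {
    # Hypothetical example: per-user account pages and authorized
    # requests are unique to each visitor, so send them straight
    # to the origin instead of looking them up in the cache.
    if (req.url ~ "^/account" || req.http.Authorization) {
        return (pass);
    }
}
```

A request that takes this path is counted as a pass: it skips the cache lookup entirely and is always fetched from the origin.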
Varnish Cache Hit-for-Pass
So, if we know what a hit, miss, and pass are, what is a Varnish Cache hit-for-pass? This is a response you may occasionally see, and it can be somewhat confusing. A Varnish Cache hit-for-pass is related to the way Varnish Cache reacts when several visitors come to your website at the same time and encounter a cold cache - for example, one whose objects have just expired, so that files need to be fetched from the origin and cached again.
As Varnish Software explains, “When Varnish Cache is expecting a cacheable object and an uncacheable arrives it creates a hit-for-pass object.” Because Varnish Cache hasn’t been told to pass this object in vcl_recv and expects the fetched file to be cacheable, it will send only one request back to the origin even if five people encounter the cold cache at once. The other requests are queued so that, once the first request comes back and fills the cache, they can be served directly from the cache. This prevents your origin server from becoming overloaded with cacheable requests.
The hit-for-pass comes in when Varnish Cache realizes that one of the objects it has requested is uncacheable and will result in a pass. Rather than making all the queued requests wait only to discover that they must go to the backend anyway, Varnish Cache marks the request as a “hit-for-pass,” essentially caching the fact that this object is uncacheable. The next time someone requests this object they will be sent straight to the origin, and if multiple people request it at once they will all be sent to the origin rather than being queued.
The downside of a hit-for-pass is that Varnish Cache will not try to cache the object while the hit-for-pass marker is on it. Because of this, you should ensure a hit-for-pass object does not have a long Time To Live (TTL) - every request for that object goes to the origin for as long as the marker lasts, which could overwhelm your servers. The Varnish Software blog has a good explanation of how to avoid this and what VCL to use. You should also be aware that hit-for-pass is an internal, read-only marker within Varnish Cache, and it can cause problems if relied upon in cases where you expect an explicit cache pass.
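The behaviour described above can be seen in VCL. The sketch below is based on the default vcl_backend_response logic shipped with Varnish 4.x (simplified to two of its conditions); the 120-second TTL is the default value, which you may want to shorten if hit-for-pass objects linger too long:

```vcl
sub vcl_backend_response {
    # The backend sent a response Varnish cannot cache (zero TTL or
    # a Set-Cookie header), so store a hit-for-pass object instead
    # of a cached copy.
    if (beresp.ttl <= 0s || beresp.http.Set-Cookie) {
        set beresp.uncacheable = true;
        # Remember that this object is uncacheable for two minutes
        # only, so it gets another chance to be cached soon after.
        set beresp.ttl = 120s;
        return (deliver);
    }
}
```

Until this marker expires, requests for the object are sent straight to the origin without being queued behind a single backend fetch.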
Varnish Cache and Content Delivery
section.io gives users a choice of several Varnish Cache versions on our Edge PaaS, along with detailed Varnish metrics and logs to help you troubleshoot your Varnish Cache configuration. The section.io team includes experts in Varnish Cache and VCL, and we are always happy to help. Check out our community forum for common Varnish Cache questions, or contact us if you’d like to get started with section.io’s Edge PaaS and Varnish Cache.