We’re pleased to release Varnish Cache 101: A Technical Guide to Getting Started with Varnish Cache and VCL. Varnish Cache is a powerful HTTP accelerator that is popular due to its speed and flexibility, as it allows users to cache both static and dynamic content, resulting in extremely fast page load times. However, it can be difficult for new users to learn how Varnish works and how to write Varnish Configuration Language (VCL).
Varnish 101 is designed to give those looking to set up their own Varnish Cache server or learn VCL an introduction to how Varnish works, the Varnish Configuration Language, configuration examples, and options for installing Varnish. You can read a preview below or download the full Varnish Cache guide now.
What is Varnish Cache
Varnish Cache is a reverse proxy for caching HTTP, also sometimes known as an HTTP accelerator. It is most often used to cache content in front of the web server - anything from static images and CSS files to full HTML documents can be cached by Varnish. The key advantages of Varnish are its speed and flexibility: it can speed up delivery of content by 300-1000x, and because of the flexibility of its domain-specific language, Varnish Configuration Language (VCL), it can be configured to act as a load balancer, block IP addresses, and more. It is this combination of speed and configurability that has helped Varnish grow in popularity over other caching reverse proxies such as Nginx and Squid.
Varnish Cache is an open-source project first developed by Poul-Henning Kamp in 2005, meaning it can be downloaded and installed by anyone for free. There are also several paid services which provide Varnish as a service or hosted versions of Varnish, including Varnish Software (the commercial arm of Varnish Cache), Fastly (a Content Delivery Network running modified Varnish 2.1), and section.io (a Content Delivery Grid offering 7 versions of unmodified Varnish up to 5.1.2). In this guide we will go through the basics of Varnish Cache and what you need to know to get started with VCL. By the end of this guide you should have an understanding of:
- The flow of traffic through Varnish Cache and your web server
- Enforcing HTTPS with Varnish Cache
- What type of content you can cache with Varnish
- What VCL is and how each built-in subroutine handles HTTP traffic
- How to use the Built-in and Default VCL files
- Methods for caching static objects
- Considerations including cookies and your origin configurations
- Methods for caching dynamic content
- Caching pages with personalization using hole-punching
- Extending Varnish capabilities with VMODs
- Measuring the success of Varnish
Where the Varnish Cache Server Sits
Varnish Cache is deployed as a reverse proxy, a piece of software that sits in front of a web server and intercepts requests made to that server, therefore acting as a proxy to the server a visitor is trying to access. The reverse part comes in because a reverse proxy acts as the server itself, whereas a forward proxy acts on the client or user side to, for example, block access to certain sites within a company network.
Reverse proxies have a huge range of uses: They can examine traffic for threats, block bots, and serve cached content directly without traffic needing to go back to the origin server. The Varnish reverse proxy can be configured to do many things but for this paper we are focusing on its main use, caching content. Varnish sits in front of the origin server and any database servers and caches or stores copies of requests which can then be delivered back to visitors extremely quickly.
When a visitor attempts to visit your website or application by going to your IP address, they will be redirected to first go through your Varnish Cache instance. Varnish will immediately serve them any content that is stored in its cache, and then Varnish will make requests back to your origin server for any content that is not cached.
The amount of content that is served from Varnish depends on how you have configured your Varnish instance (e.g. if you have set it to cache only images) as well as how “warm” the cache is. When content is first cached, and every time the set cache time expires, the cache will be “cold”: the next time a visitor comes to your website, Varnish will need to fetch the content from the origin before delivering it to that visitor. Varnish fetches the content and, if it is cacheable, stores it so the next visitor can be served directly from cache.
Content that is labelled as uncacheable will never be stored in Varnish and will always be fetched from the origin. Content might be labelled uncacheable because it has a max-age of 0, because it has cookies attached, or simply because you don’t want it cached. Using VCL you can override headers set by your server that say content should not be cached, and even cache pages around content that is personalized. This means that in theory you should be able to cache the majority of your requests and serve a fast webpage to nearly all visitors. The next sections on VCL go into more detail on how to do this.
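As an illustration of this kind of override, a fragment like the following could be merged into default.vcl. The /static/ path and the one-hour TTL are placeholder assumptions for this sketch, not values from the guide:

```vcl
sub vcl_backend_response {
    # Hypothetical rule: force caching of /static/ responses even if the
    # origin sends Cache-Control: max-age=0 or attaches a Set-Cookie header.
    if (bereq.url ~ "^/static/") {
        unset beresp.http.Set-Cookie;  # cookies would normally prevent caching
        set beresp.ttl = 1h;           # cache for one hour regardless of origin headers
    }
}
```

Because beresp.ttl is set after the backend response arrives, this overrides whatever lifetime the origin's headers would otherwise dictate for these objects.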
Varnish and HTTPS
One hurdle of Varnish is that it is designed to accelerate HTTP, not the secure HTTPS. As more and more websites are moving all of their pages to HTTPS for better protection against attacks, this has become something many Varnish users have to work around. To enforce HTTPS with Varnish you will need to put an SSL/TLS terminator in front of Varnish to convert HTTPS to HTTP.
One way to do this is by using Nginx as the SSL/TLS terminator. Nginx is another reverse proxy that is sometimes used to cache content, though Varnish is much faster at caching. Because Nginx can terminate HTTPS traffic, you can install Nginx in front of Varnish to perform the HTTPS-to-HTTP conversion. You should also install Nginx behind Varnish to fetch content from your origin over HTTPS.
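One common layout, sketched below with assumed ports and certificate paths, has Nginx listening on 443 for TLS and proxying decrypted requests to Varnish’s default listen port:

```nginx
# Hypothetical Nginx front end terminating TLS and proxying to Varnish.
# The certificate paths, hostname, and ports are assumptions for illustration.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:6081;         # Varnish listening over plain HTTP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https; # preserve the original scheme
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The X-Forwarded-Proto header lets Varnish and the origin distinguish requests that originally arrived over HTTPS, which matters if you vary caching or redirects on the scheme.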
In the above graph, the TLS/SSL terminator (such as Nginx) is sitting both in front of Varnish to intercept HTTPS traffic before it gets to Varnish, and behind Varnish so that requests are converted back to HTTPS before going to your origin. As shown by steps 7 and 8, if Varnish already has an item or full page in its cache it will serve the content directly through the first Nginx instance and will not need to request via HTTPS back to the origin.
For detailed instruction on setting up Varnish with HTTPS read this Digital Ocean tutorial. If you are deploying Varnish via a paid service or content delivery solution they may be able to handle this for you: section.io provides free SSL/TLS certificates for users and handles the SSL/TLS termination so users do not need to configure it separately.
What Content to Cache with Varnish
Varnish is powerful because it is so fast, but even more importantly because it has such a wide range of abilities. Many caching proxies only focus on caching static items like images and CSS files, but due to its flexibility Varnish Cache can cache static items, the HTML document, and even pages that have personalized elements. When caching first came along in the 1990s it was usually tied to CDNs. CDNs focused on caching images and other content that is the same for all users, often on a separate URL from other content - for example, all images might be fetched from cdn.yoursite.com.
While caching static content solved the challenges of websites in the 1990s - namely that both end-users and the data centers content was served from had much lower bandwidth than those of today - now with huge server farms and fast connections on both sides, to win the speed game you need to do more than caching static items.
Caching a range of content is where Varnish’s flexibility really shines: with Varnish you can cache what is sometimes called “dynamic content” but usually refers to the HTML document, and cache content around personalized elements using a “hole punching” technique. Examples of content that can be cached with Varnish include:
- Images (png, jpg, gif)
- CSS stylesheets and fonts
- Downloadable files
- Full HTML documents
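For the static items in the list above, a sketch of a vcl_recv rule might look like the following; the file extensions matched here are illustrative, not a definitive list:

```vcl
sub vcl_recv {
    # Hypothetical rule: always look static assets up in the cache, and
    # drop any request cookies so they cannot prevent a cache hit.
    if (req.url ~ "\.(png|jpg|gif|css|woff2?)(\?.*)?$") {
        unset req.http.Cookie;
        return (hash);  # terminate vcl_recv and proceed to cache lookup
    }
}
```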
Caching the HTML document is where the real value of Varnish Cache comes in. The HTML document is the first piece of information delivered from the web server to the visitor’s browser and includes all the information needed to build a web page, including CSS files, text, and links to images. Before the HTML document is delivered to the browser, the visitor is looking at a blank page with no indication that the page is beginning to load. When the HTML document loads slowly it delays the time to first byte and start render time, which studies have shown are the most important metrics for both user experience and SEO.
If your web server needs to generate each HTML document individually, you will always need to plan for the peak amount of traffic you expect on your website, as your servers could get overloaded with HTML document requests. Even if images and other linked files are cached, the HTML document will need to be generated for each visitor. This would mean if you have 200 visitors in 1 minute, the web server needs to generate 200 HTML documents. By contrast, if you cache the HTML document you both reduce the time it takes for it to be delivered to the user and reduce the load on your origin servers.
If the HTML document is cached using Varnish and set to live for just one minute, then each minute Varnish will make one request to your backend server for the HTML document, and the other 199 requests will be served directly from Varnish. The speed that Varnish can serve a cached HTML document is extremely fast, often under 200ms, whereas a server generating and serving the HTML document often takes 1 second or longer.
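The one-request-per-minute scenario can be expressed in VCL by giving HTML responses a one-minute TTL. This fragment is a sketch that assumes your HTML is otherwise safe to cache:

```vcl
sub vcl_backend_response {
    # Give cacheable HTML documents a one-minute TTL, matching the
    # "one backend request per minute" scenario described above.
    if (beresp.http.Content-Type ~ "text/html") {
        set beresp.ttl = 1m;
    }
}
```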
By caching the HTML document the web server only needs to generate 1 HTML document per minute which dramatically reduces the number of servers a website needs to plan for peak traffic. In the below graph, a website’s backend needs 12 servers to adequately prepare for peak traffic without the HTML document cached, and only 2 when the HTML document is cached and they can predict exactly how many requests per minute the server will get. This practice saves hosting costs while keeping servers free for critical transactions, and improves user experience by reducing the time to first byte and start render time.
Varnish also allows for caching the HTML document even when it includes personalized elements such as cart size and account information. Using Varnish Configuration Language and a technique called “hole punching” websites can configure their pages so that a majority of the content can be served from cache. In the next section we go into VCL and how to use it to cache all types of content.
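One way to do hole punching in Varnish is with Edge Side Includes (ESI), where the cached page contains esi:include tags that Varnish expands with per-visitor fragments. The URLs below are hypothetical:

```vcl
sub vcl_backend_response {
    # Hypothetical ESI hole punching: the home page body contains
    # <esi:include src="/cart"/> tags that Varnish expands on delivery,
    # so the surrounding page is cached while /cart stays personalized.
    if (bereq.url == "/") {
        set beresp.do_esi = true;       # process ESI tags in this response
    }
    if (bereq.url == "/cart") {
        set beresp.ttl = 0s;            # never cache the personalized fragment
        set beresp.uncacheable = true;
    }
}
```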
Understanding Varnish Configuration Language
The reason Varnish is so flexible is due to Varnish Configuration Language (VCL), the domain specific language for Varnish. VCL controls how Varnish handles HTTP requests, and can be thought of as a programming language for HTTP just as PHP is used for server side scripting. Understanding how VCL works is vital to getting a good outcome out of Varnish, but this can be a challenge as most developers will not be familiar with VCL until they start working with Varnish. To understand VCL you must have a grasp of what each VCL subroutine achieves as well as how the built-in VCL file interacts with the default VCL.
Varnish Cache gives users two files on installation: default.vcl and builtin.vcl. The default file contains no active code, only comments; this is the file that you will edit to write VCL specific to your web application. The built-in VCL is triggered for any subroutine you have not overridden in the default VCL.
If the default VCL is not edited, Varnish will fall through to the built-in VCL logic. The built-in VCL does contain some instructions to cache objects; however, because its default behavior is not to cache requests with cookies and not to override headers set by the server, it often will not cache anything for modern web applications. Because of this, users getting started with Varnish must edit the default VCL to achieve a solid performance result for their application.
Although Varnish is built so that those requests that are not specifically called out in your default VCL go to the built-in VCL, at section.io we recommend programming your default VCL so that the vast majority of requests are handled there instead of falling back to the built-in logic. This gives you more control over how each request is handled.
If you have been caching static content with a basic system like Amazon’s CloudFront you could actually see a performance decrease if you switch to Varnish without configuring anything. Although Varnish is a faster and more sophisticated tool which will ultimately provide much better performance results, it requires some configuring to cache content for most visitors.
For this reason, we highly recommend reading through the next sections and using other resources like Varnish Software to get an understanding of what VCL will be needed for your application, rather than turning on Varnish with only the built-in VCL. While the built-in VCL is safe in that it will not cache uncachable content, with an understanding of VCL and your application you will get a much better result than with the built-in configurations and any other caching solution.
To be able to configure Varnish you will need to learn basic VCL syntax as well as what each subroutine achieves. It’s important to note that the built-in VCL is always running underneath the VCL you write: unless you write VCL that terminates a state, each request will fall back to the built-in code. For example, you would write VCL saying “if this request matches these parameters, then try to retrieve it from the cache,” and all requests that do not match those parameters would fall back to what is written in builtin.vcl.
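A minimal sketch of terminating a state, using a hypothetical /admin path:

```vcl
sub vcl_recv {
    # return(...) terminates this subroutine, so the built-in vcl_recv
    # logic never runs for matching requests.
    if (req.url ~ "^/admin") {
        return (pass);  # never cache admin pages; fetch from the backend every time
    }
    # No return here: any request that falls through continues into builtin.vcl
}
```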
The basic VCL request flow is illustrated below, where vcl_recv is the first subroutine that is executed after Varnish has examined the basic information on the request such as verifying it is a valid HTTP request. The flow is divided into client side and back end requests, with all back end requests starting with vcl_backend. While the full Varnish flow is more complex, this basic flow will get you started. It is important to understand the purpose of each subroutine in the below chart, however the ones you will likely alter to customize your Varnish configuration are vcl_recv, vcl_hash, vcl_backend_response, and vcl_deliver.
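A common first customization of vcl_deliver, sketched here, is a debugging header that reports whether a response was served from cache:

```vcl
sub vcl_deliver {
    # obj.hits counts how many times the delivered object has been served
    # from cache, so it distinguishes a cache HIT from a MISS.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
```

Checking this header in your browser’s developer tools is a quick way to verify that your vcl_recv and vcl_backend_response changes are actually producing cache hits.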
Each of the above subroutines has a different purpose and terminates with an action such as pass, pipe, hash, or deliver. To learn how to write Varnish Configuration Language to cache content for your application, please download the full Varnish Cache Guide. If you have specific questions about Varnish Cache and VCL, check out our community forum or contact us at email@example.com and one of our Varnish experts would be happy to help you.