Varnish Install Quick and Detailed

How To Install Varnish Cache

In this article we will work through the components required for a successful Varnish Cache implementation.

The quick version:

Ubuntu, Debian:

apt-get update
apt-get install varnish

RHEL, CentOS and Fedora:

yum update
yum install varnish

That’s it. Varnish Cache is now installed… Or is it?… Is it working?… Can you use it effectively?
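A quick way to answer those questions is to check the daemon directly. The following is a sketch that assumes a systemd-based distribution and the default Varnish listen port of 6081 used by most distro packages:

```shell
# Confirm the installed version and that the service is running
varnishd -V
systemctl status varnish

# Varnish listens on port 6081 by default on most distro packages;
# an X-Varnish header in the response confirms it is in the request path
curl -I http://localhost:6081/
```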


The longer (and closer to reality) version:

All high performance Varnish Cache implementations need to start with a quick architectural plan in order to understand how the solution fits into your environment.

Varnish is an HTTP accelerator (also called a reverse proxy) and hence needs to sit in front of your existing web server, as close as possible to your users.

A successful implementation may look like:

[Diagram: Varnish Install Architecture]

We will now work through each component required for a successful setup and provide links to specific code/tools where needed.


HTTPS / SSL termination:

Varnish does not support HTTPS/SSL traffic. Period. As the majority of the world’s websites are now served over HTTPS, this presents an obstacle that needs to be overcome for just about any implementation. The standard practice is to implement a layer in front of your Varnish Cache servers that performs the HTTPS negotiation and then passes traffic through to your Varnish Cache layer (commonly with the addition of an X-Forwarded-Proto request header set to “https”).
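As a sketch of how that header can then be used inside Varnish itself (assuming VCL 4.0 syntax and a TLS tier that reliably sets X-Forwarded-Proto), you can include the protocol in the cache key so HTTP and HTTPS variants of a page are cached separately:

```vcl
# default.vcl (fragment) - illustrative example only
vcl 4.0;

sub vcl_hash {
    # Add the original protocol to the hash; the built-in vcl_hash
    # still runs afterwards and adds the URL and Host header
    if (req.http.X-Forwarded-Proto) {
        hash_data(req.http.X-Forwarded-Proto);
    }
}
```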

A common solution to the lack of SSL support is to deploy an nginx tier in front of your Varnish Cache server to terminate HTTPS.
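For illustration, a minimal nginx TLS-termination server block might look like the following; the hostname, certificate paths, and the Varnish port of 6081 are placeholder assumptions:

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    # Placeholder certificate paths - substitute your own
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Hand the decrypted request to Varnish on its default port
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Tell the layers behind us the client connected over HTTPS
        proxy_set_header X-Forwarded-Proto https;
    }
}
```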

Load Balancing:

Installing Varnish Cache on a new server is great, but this introduces a single point of failure into your infrastructure. Options available to resolve this are:

  1. Deploy Varnish Cache on each of your web application servers. This can be a short-term win as it doesn’t require additional hardware, and as long as you have multiple application servers you then have redundancy across your Varnish Cache implementation.

    This approach falls short when you have more than two web servers: the cache is split between each web server, and the cache hit rate will fall off dramatically as the web server count increases.

  2. Implement a load balancer in front of your redundant Varnish Cache servers. This adds some complexity but is a scalable way to implement traffic balancing.
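As a sketch of option 2, an nginx layer can balance traffic across multiple Varnish instances; the IP addresses below are placeholders:

```nginx
# Hypothetical pool of two Varnish Cache servers
upstream varnish_pool {
    server 10.0.0.11:6081;
    server 10.0.0.12:6081;
}

server {
    listen 80;

    location / {
        proxy_pass http://varnish_pool;
        proxy_set_header Host $host;
    }
}
```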

HTTPS / SSL to the Origin application:

It’s important to consider how traffic from your Varnish Cache tier passes back to your origin servers. Compliance and general security best practice mean that traffic which arrived over HTTPS should be passed back to the origin application over HTTPS.

The best approach to resolving this is to have your front-end HTTPS/SSL termination add an X-Forwarded-Proto request header containing the value “https”. You can then add another nginx tier as a Varnish Cache backend with a proxy_pass statement that uses the X-Forwarded-Proto value to determine whether to connect to the origin servers via HTTP or HTTPS.
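One way to sketch that in nginx, assuming a hypothetical origin hostname of origin.example.com and a front tier that sets X-Forwarded-Proto as described:

```nginx
# Map the forwarded protocol to the scheme used towards the origin
# (map blocks live at the http context level)
map $http_x_forwarded_proto $origin_scheme {
    default "http";
    https   "https";
}

server {
    listen 8080;  # Varnish points its backend at this port

    location / {
        proxy_set_header Host $host;
        # Because proxy_pass contains a variable, nginx resolves the
        # hostname at request time and needs a resolver configured;
        # 127.0.0.53 (systemd-resolved) is a placeholder
        resolver 127.0.0.53;
        proxy_pass $origin_scheme://origin.example.com$request_uri;
    }
}
```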


Measuring performance:

Most Varnish Cache implementations focus on getting the solution installed into the environment. Tackling all of the above activities takes effort and architectural planning. An area where little time or focus is commonly spent is understanding how well your implementation performs once it is serving traffic. This is the critical step that delivers results over the short and long term. There are multiple areas of focus:

Individual requests

In order to understand how Varnish Cache is performing for specific requests there are several tools available. These tools provide information on the flow of execution through your VCL and whether each request was served from Varnish Cache or required the origin server to respond.

The best request specific debugger that’s quickly available on all Varnish Cache implementations is varnishlog. This tool allows you to send individual requests and analyse how each request has been handled.

Reference for: varnishlog

varnishtop is a second command that shows continuously updated counters of the log values you specify. Reference for: varnishtop
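By way of illustration, typical invocations of both tools look like this; the URL and header names are placeholders, and the commands must run on the Varnish server itself:

```shell
# Full transaction log, grouped per client request,
# filtered to a specific URL with a VSL query
varnishlog -g request -q 'ReqUrl eq "/checkout"'

# Continuously updated ranking of the most requested URLs
varnishtop -i ReqUrl

# Most common User-Agent request header values
varnishtop -I ReqHeader:User-Agent
```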

Summarised information on overall cache hit rates

Tracking Varnish Cache performance on an ongoing basis is critical to understanding whether the solution is performing effectively. This is an area that often requires investment in additional logging/metric platforms, as data over time is hard to gather and parse from the built-in Varnish Cache tools.

varnishstat is an option for short-term review as it is run at the command line. Here is an article describing how to use varnishstat

The most robust method for metric management is to send Varnish Cache access log data to a metric/log management product. This is generally a requirement for an enterprise-level deployment of Varnish Cache and can have its own implementation cost if nothing suitable is already available. In order to output access logs in an Apache-style format you need to use the varnishncsa command. Here is a guide to setting up the varnishncsa command
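For example, if you log the hit/miss outcome as the first field (varnishncsa supports a custom format via -F, e.g. one that includes %{Varnish:hitmiss}x), a quick hit-rate summary is a one-liner; the log lines below are made-up sample data standing in for real varnishncsa output:

```shell
# Compute the cache hit ratio from log lines whose first field is "hit"/"miss";
# the here-document stands in for real varnishncsa output
awk '{ total++; if ($1 == "hit") hits++ }
     END { printf "hit rate: %.1f%%\n", (hits / total) * 100 }' <<'EOF'
hit 200 /index.html
miss 200 /checkout
hit 200 /css/site.css
hit 304 /logo.png
EOF
# prints: hit rate: 75.0%
```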


On the surface, a Varnish Cache install is a single command. Underneath, however, a great deal of work goes into a Varnish Cache deployment that is effective and secure. That’s why we made - all of the above comes out of the box, is set up in seconds, and is free until you start to use a serious amount of data. Check it out if you want a quick Varnish Cache win -
