
Porting Your Fastly VCL Configuration to Varnish Cache on CloudFlow


This is a general guide to porting VCL (Varnish Configuration Language) from your Fastly CDN service to a Varnish Cache container running on CloudFlow’s Kubernetes Edge Interface (KEI).

In general, much of the VCL used on the Fastly platform ports directly over to Varnish Cache (we recommend running Varnish Cache 7.0.2 or later), but there are a few caveats and things to consider.



Origin shielding

  • Origin shielding is a Fastly infrastructure feature in which edge POPs forward cache misses to a designated ‘shield POP’ rather than directly to your origin, increasing cache hit ratio in certain situations. If you are running Varnish Cache on a large number of POPs and would like to see if CloudFlow can help you build a similar system through our Professional Services program, please contact Support.


Clustering

  • By default, Fastly uses an algorithm to route requests for the same URL to the same Varnish Cache instance in their POPs. This is known as ‘clustering’ and is not a default Varnish Cache behavior. If you have a large-scale CDN plan in mind, we recommend using a consistenthash container to provide similar functionality, increasing the cache hit ratio for cacheable assets. The improvement in cache hit ratio from deploying consistenthash will depend on the number of Varnish Cache containers and endpoints you run.


Logging

  • Fastly’s logging format is defined in the VCL itself. On CloudFlow, you instead define an external logging endpoint as we show in our Log Streaming guide. This has the advantage of being able to log all components to your logging system, so any A/B testing, IDS, or bot detection components you choose to deploy can also send logs to your storage, enabling visibility into the entire application stack.

Dictionaries and ACLs

  • Dictionaries are a proprietary Fastly feature, and there is not yet equivalent functionality in the open source Varnish Cache.
  • Note that Access Control Lists (ACLs) do work in OSS Varnish Cache, so if you are using ACLs on your Fastly service to control access on an IP level, they will be usable.
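
As a sketch, an IP allow-list ported to open-source Varnish Cache might look like the following. The ACL name, address ranges, and protected path are placeholder assumptions; substitute your own.

```vcl
vcl 4.0;

backend default { .host = "127.0.0.1"; .port = "80"; }

# Hypothetical allow-list; replace with your real address ranges.
acl trusted_ips {
    "192.0.2.0"/24;
    "203.0.113.10";
}

sub vcl_recv {
    # Reject requests to an admin path from outside the ACL.
    if (req.url ~ "^/admin" && client.ip !~ trusted_ips) {
        return (synth(403, "Forbidden"));
    }
}
```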

Code Considerations

Special variables


req.url.path

  • This variable contains the request URL without any query strings. This is a proprietary Fastly feature that has no out-of-the-box equivalent in OSS Varnish Cache. If you require this functionality, contact our Support Team to discuss your needs and a possible implementation.
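
If an approximation is enough, a common pattern in open-source Varnish Cache is to strip the query string in vcl_recv with regsub. This is only a sketch, and it assumes query parameters never affect your responses; note also that, unlike Fastly’s read-only variable, this rewrites the URL that is sent to the origin as well.

```vcl
sub vcl_recv {
    # Drop everything from the first '?' onward so that /page?a=1 and
    # /page?a=2 are cached and looked up as the same object.
    if (req.url ~ "\?") {
        set req.url = regsub(req.url, "\?.*$", "");
    }
}
```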



vcl_fetch

  • This is Fastly’s name for the vcl_backend_response subroutine, which runs immediately after the response headers are read from the backend and before the object is cached and delivered to the client (via vcl_deliver). If you would like to modify a response before it is cached or delivered, place that logic in vcl_backend_response.
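
For example, a TTL override that lived in Fastly’s vcl_fetch would move to vcl_backend_response. The status check and TTL values below are illustrative assumptions only:

```vcl
sub vcl_backend_response {
    # Cache successful responses for one hour; give everything else a
    # short TTL so origin problems are retried quickly.
    if (beresp.status == 200) {
        set beresp.ttl = 1h;
    } else {
        set beresp.ttl = 10s;
    }
}
```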

vcl_hit and vcl_miss

  • These subroutines are available to your Varnish Cache configuration, but are not present in the default config. Feel free to add them if they are needed.
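
A minimal sketch of adding them follows; the bodies shown simply restate the default behavior, so replace them with your own logic.

```vcl
sub vcl_hit {
    # Runs when the cache lookup finds a usable object.
    return (deliver);
}

sub vcl_miss {
    # Runs when the object is not in cache; fetch it from the backend.
    return (fetch);
}
```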

VCL example

Fastly creates a new service with a basic configuration suitable for general caching needs. In most cases, the default Varnish Cache VCL below is a great starting point.

# This is an extension on the default VCL that CloudFlow has created to get
# you up and running with Varnish Cache.
# Please note: there is an underlying default (built-in) Varnish Cache behavior
# that occurs after the VCL logic you see below.
# See the VCL chapters in the Varnish Users Guide for more examples.

# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;

# Tells Varnish Cache the location of the upstream. Do not change .host and .port.
backend default {
  .host = "";
  .port = "80";
  .first_byte_timeout = 125s;
  .between_bytes_timeout = 125s;
}

# The following VMODs are available for use if required:
#import std;    # see the vmod_std documentation
#import header; # see the vmod_header documentation

# Method: vcl_recv
# Description: Happens before we check if we have this in cache already.
# Purpose: Typically you clean up the request here, removing cookies you don't need,
# rewriting the request, etc.
sub vcl_recv {

  # CloudFlow default code
  # Purpose: If the request method is not GET, HEAD or PURGE, return pass.
  # Documentation: Reference documentation for vcl_recv.
  if (req.method != "GET" && req.method != "HEAD" && req.method != "PURGE") {
    return (pass);
  }

  # CloudFlow default code
  # Purpose: If the request contains an auth header, return pass.
  # Documentation: Reference documentation for vcl_recv.
  if (req.http.Authorization) {
    /* Not cacheable by default */
    return (pass);
  }
}

# Method: vcl_backend_fetch
# Description: Called before sending the backend request.
# Purpose: Typically you alter the request for the backend here: overriding
# the required hostname, upstream proto matching, etc.
sub vcl_backend_fetch {
  # No default CloudFlow code for vcl_backend_fetch
}

# Method: vcl_backend_response
# Description: Happens after reading the response headers from the backend.
# Purpose: Here you clean the response headers, removing Set-Cookie headers
# and other mistakes your backend may produce. This is also where you can
# manually set cache TTL periods.
sub vcl_backend_response {
  unset beresp.http.Vary;
}


# Method: vcl_deliver
# Description: Happens when we have all the pieces we need, and are about to
# send the response to the client.
# Purpose: You can do accounting logic or modify the final object here.
sub vcl_deliver {
  # CloudFlow default code
  # Purpose: We are setting 'HIT' or 'MISS' as a custom header for easy debugging.
  if (obj.hits > 0) {
    set resp.http.section-io-cache = "Hit";
  } else {
    set resp.http.section-io-cache = "Miss";
  }
  set resp.http.hits = obj.hits;
}


sub vcl_synth {
  # No default CloudFlow code for vcl_synth
}

# Method: vcl_hash
# Description: This method is used to build up a key to look up the object in
# Varnish Cache.
# Purpose: You can specify which headers you want to cache by.
sub vcl_hash {
  # CloudFlow default code
  # Purpose: Split cache by HTTP and HTTPS protocol.
  if (req.http.X-Forwarded-Proto) {
    hash_data(req.http.X-Forwarded-Proto);
  }
}