We are fans of agile development practices. We value the ability to push application changes to production quickly, safely and often.
We also love the incredible benefits reverse proxies can provide in the delivery chain for websites. If you want your website to load faster, cost less to deliver, stay up longer when traffic spikes, be more secure or generally deliver a richer browsing experience for your customers, then reverse proxy servers should be in your website delivery chain.
What we don’t enjoy…
…is seeing the way in which reverse proxy servers have been smashed into modern web delivery chains whilst ignoring the basic principles of agile development practices. We all know agile development environments should match production environments as closely as possible. We know reverse proxy servers have direct effects on the HTML and objects being delivered into users’ browsers; they are an extension of the application code base. However, up until now, reverse proxy servers have not been treated as first-class citizens of the development environment.
An interesting production example of the implications of this was the recent unfortunate incident experienced by store.steampowered.com. According to Valve’s post-mortem blog, the Steam Store was the subject of a DDoS attack and, in response, some new HTML caching configuration was rapidly deployed to the production reverse proxy servers. A quick DNS dig on steampowered.com indicates that the reverse proxies in question are part of Akamai’s Content Delivery Network (CDN).
I wonder if the HTML caching rules deployed went through Steam’s normal agile development cycle, working from the developers’ machines all the way through staging and test environments before hitting production? I would guess the answer is “no way”. If it takes hours for a config change on the Akamai network to propagate, and the developers don’t have the Akamai reverse proxies on their development machines, there is little chance the developers enjoyed the luxury of a fast feedback loop on their local machines. They would not have had the opportunity to thoroughly test and tweak the effect of the new HTML caching rules prior to release, nor to test application code changes against any HTML caching ruleset.
I am guessing there was a professional services engagement to implement the caching rules by Akamai folks who (through no fault of their own) just can’t have an intimate knowledge of the Steam Store application in the way the Steam Store developers do. In the midst of a DDoS attack there would have been pressure to move fast (like agile fast!) but unfortunately, modern CDNs were built to support waterfall development models, not agile CI/CD workflows. Modern CDNs were built to go through a one-off professional services engagement to get “set up” and then be left alone. This approach is not workable when application code is changing constantly and the application outcomes for users are so dependent on the interplay between application code and the reverse proxy configuration.
Broad and deep HTML caching, correctly implemented, is an excellent part of a website delivery chain. Unfortunately, many websites to date have not been able to avail themselves of the benefit, precisely because of the “session leak” issues which Steam Store experienced on their CDN. Without the opportunity to configure and test HTML caching properly in their development environments, and then to keep testing ongoing application code changes against the configured caching rules, many websites choose not to take advantage of the tremendous benefits of HTML caching.
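To make the “session leak” risk concrete, here is a minimal sketch of a cookie-aware HTML caching rule, written for Varnish (a popular open-source reverse proxy) rather than Akamai’s proprietary configuration. The backend address and the `sessionid` cookie name are assumptions for illustration; every application has its own session markers, which is exactly why these rules need testing against the real application.

```vcl
vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Never serve cached HTML to a logged-in user: if the request
    # carries a session cookie, bypass the cache entirely.
    # "sessionid" is a hypothetical cookie name for illustration.
    if (req.http.Cookie ~ "sessionid=") {
        return (pass);
    }
    # Anonymous requests: drop remaining cookies so they can all
    # share a single cached copy of the page.
    unset req.http.Cookie;
}

sub vcl_backend_response {
    # Cache anonymous HTML for five minutes.
    set beresp.ttl = 5m;
}
```

One missed cookie pattern in a rule like this is all it takes to serve one user’s personalised page to everyone — which is why it belongs in the same fast feedback loop as the application code it caches.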
“When Dev and Prod conflict, Production features are removed”
We see the same issues with Front End Optimisation reverse proxies such as Google’s mod_pagespeed, load balancers like HAProxy, Web Application Firewalls (WAFs) like ModSecurity, and all the proprietary reverse proxies powering commercial CDNs. Often we have seen customers buy the comfort of a WAF deployed on their chosen CDN. They take a waterfall approach to the configuration of the WAF, spending thousands on a one-off project to set it up. Then, as application code is changed in an agile fashion and deployed weekly, daily or hourly, the WAF settings are slowly turned down (or off) as the WAF configuration conflicts with the application code being deployed.
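The “slowly turned down” failure mode typically looks like a growing pile of rule exclusions. As an illustrative sketch using ModSecurity directives (the rule IDs follow the OWASP Core Rule Set numbering, but the scenario, paths and local rule ID are hypothetical — this is a cautionary example, not a recommendation):

```apache
# Illustrative ModSecurity tuning that accumulates as agile app
# deploys conflict with a waterfall-configured WAF.

# Sprint 12: new rich-text editor triggered an SQLi rule -> excluded.
SecRuleRemoveById 942100

# Sprint 15: JSON API requests tripped a protocol rule -> excluded
# for the whole (hypothetical) /api/ tree.
SecRule REQUEST_URI "@beginsWith /api/" \
    "id:1000001,phase:1,pass,nolog,ctl:ruleRemoveById=920420"

# Sprint 19: too many false positives overall -> blocking disabled.
SecRuleEngine DetectionOnly
```

Each exclusion is individually reasonable under deadline pressure; cumulatively, the expensive WAF ends up observing rather than protecting.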
Reverse proxies should absolutely be used to improve users’ experiences of web applications in the way the Steam Store attempted with Akamai. They reduce the cost to serve, they improve security, they improve user experience and they give developers a whole new toolset in their armoury with which to bring innovation to web applications. Hey, if you want to go green, reverse proxies can even reduce your carbon footprint through an overall reduction in compute!
However, unfortunate incidents such as the one experienced by the Valve team with the Steam Store, or the experience of the many websites which have purchased WAFs and then turned them off, will continue to be a part of reverse proxy life (whether those proxies are deployed as CDNs or even as Application Delivery Controllers (ADCs)) for as long as reverse proxies are not part of the agile development workflow.
Get more from your Reverse Proxies
We believe in empowering agile development and operations teams to take full advantage of the awesome benefits of reverse proxy servers for their web applications. Developers and Ops teams should run reverse proxies in their development environments and have full, open access to all the metrics, logs and alerts they need, in every environment, to configure and manage those reverse proxies.
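As a sketch of what “reverse proxies in the development environment” can look like in practice, a containerised dev stack can put the same proxy image and configuration in front of the application that production uses. The image tag, service names and file paths below are assumptions for illustration:

```yaml
# docker-compose.yml — hypothetical dev stack that runs the same
# reverse proxy in front of the app that production runs.
services:
  app:
    build: .
    expose:
      - "8080"
  proxy:
    image: varnish:7.4        # pin the same proxy version as production
    ports:
      - "80:80"
    volumes:
      # The identical caching config that ships to production,
      # version-controlled alongside the application code.
      - ./proxy/default.vcl:/etc/varnish/default.vcl:ro
    depends_on:
      - app
```

With a setup like this, developers hit the proxy on localhost and see caching behaviour, headers and rule conflicts immediately, instead of discovering them hours later after a slow production propagation.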