Section is the only content delivery platform specifically designed to accelerate, scale and secure web sites with the development lifecycle in mind. But before we get to why this is so important, you must understand how CDNs came about, what problems they were created to solve, and why they traditionally have stood far apart from the development cycle.
In the post below, I walk through a brief history of CDNs, explain why CDNs don’t integrate with continuous integration/continuous delivery (CI/CD) cycles, and show how Section’s Developer PoP solves these problems for dev, ops, SysOps and SecOps teams. Skip to the bottom if you just want to hear about how the DevPoP changes content delivery.
History of CDNs
When content delivery platforms were first conceived, they set out to solve a couple of network-layer problems:
- Datacenters in offices had tiny inbound bandwidth. As soon as a few users hit your site, the link would be overwhelmed and the site would go offline.
- Users had low-bandwidth, high-latency connections. Dial-up was the norm, and users had little choice but to wait for pages to load.
Web engineers quickly realized that the bulk of a web page’s data was images. HTTP caching directives like Expires and Cache-Control let the browser cache that content locally, taking some of the traffic off the link. As a user browsed the web site, they would not download the same content over and over again - but this only helped each individual user.
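To make this concrete, here’s a minimal sketch of a server emitting those caching directives. It uses only Python’s standard library; the asset and the max-age value are illustrative, not a recommendation:

```python
# A toy origin server that marks an image as cacheable, so browsers
# (and, later, CDN proxies) can reuse it without re-downloading.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CachingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<fake image bytes>"
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        # Any cache may store this response and reuse it for 24 hours.
        self.send_header("Cache-Control", "public, max-age=86400")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CachingHandler).serve_forever()
```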
Some smarter network engineers realized that they could introduce an HTTP reverse proxy between the user and the web server to cache some of the page’s content. Positioned strategically, these reverse proxies could help solve both of the aforementioned problems: limited bandwidth at the hosting end, and high latency and low bandwidth on the user side.
Thus, the CDN was born. By putting many of these servers between the user and the web server, the aggregated bandwidth was far greater than a single hosting facility could provide. Whenever one of the HTTP reverse proxies served an image from cache, it took workload off the web server, freeing it up to serve more users.
Simultaneously, because these proxies were distributed across many global locations rather than sitting in one spot, they moved the content closer to the end user. This reduced the effect of latency on the end user’s side, further improving the website’s performance for the user.
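The core mechanism is small enough to sketch. The toy below (standard library only; the upstream address is an assumption, and the cache-everything-forever policy is a deliberate simplification that no real proxy would use) shows the essential move: answer from a local cache when you can, fetch from the origin when you can’t:

```python
# A toy caching HTTP reverse proxy: serve from an in-memory cache when
# possible; otherwise fetch from the origin and remember the response.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ORIGIN = "http://localhost:8080"  # the web server being protected
cache = {}                        # path -> (content_type, body)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path not in cache:
            # Cache miss: go back to the origin, costing it bandwidth.
            with urllib.request.urlopen(ORIGIN + self.path) as resp:
                cache[self.path] = (resp.headers.get("Content-Type"), resp.read())
        content_type, body = cache[self.path]  # served from cache hereafter
        self.send_response(200)
        self.send_header("Content-Type", content_type or "application/octet-stream")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8081), ProxyHandler).serve_forever()
```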
Internet Advancements and CDNs Keeping Up
While CDNs did solve these two core problems of early websites, as the Internet advanced the problems were solved in other, more direct ways. Bandwidth at the web server became cheaper and links grew larger. Then cloud computing moved web server hosting out of the on-premise world and into mega-datacenters with enough bandwidth to serve an entire site without a CDN.
At the same time, end-user bandwidth and latency improved. Dial-up was replaced with DSL and cable, so connections on both the web server and end-user sides got much faster.
What then?
Despite these improvements in connections, CDNs were still around, and more vendors entered the market with little to set them apart except price. That price competition continues today among CDN companies that have not managed to differentiate themselves with higher-value services.
Some vendors then started to reach for higher-value services. At the time, the HTTP reverse proxies behind CDNs didn’t offer much control over what they cached; they relied primarily on the HTTP caching directives. Engineers therefore had to meticulously examine their server configurations to make sure the CDN would cache properly, and CDN vendors added features to override situations where the server was not, or could not be, configured correctly.
CDNs were also incapable of caching personalized content - that is, content unique to each user. This collided with web application framework defaults: frameworks like PHP and ASP.Net would unnecessarily set cookies on users, and those cookie headers made the content uncacheable. This gave birth to solutions such as the “dynamic site accelerator” and “edge side includes”, which aimed to work around these problems.
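A shared cache has to be conservative here. The following is a simplified sketch of the kind of check involved (real proxies honor many more directives and edge cases than this):

```python
def is_cacheable(status: int, headers: dict) -> bool:
    """Decide whether a shared cache may store a response (simplified)."""
    # A Set-Cookie header usually marks the response as personalized,
    # so a shared cache must not hand it to other users.
    if any(name.lower() == "set-cookie" for name in headers):
        return False
    cache_control = headers.get("Cache-Control", "").lower()
    if "private" in cache_control or "no-store" in cache_control:
        return False
    return status == 200
```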
Dynamic site accelerators (DSAs) attempt to improve the delivery speed (though not so much the scalability) of web pages that are not cacheable by removing overhead at the network layer. Instead of using default Internet routes to reach the web servers, the HTTP reverse proxies inside the CDN would measure connectivity, latency and throughput to the web servers. Where hopping through one of their other proxies improved performance, they would route the traffic through that server instead of reaching back to the web servers directly.
They also improved performance by reducing the number of TCP connects and disconnects between the CDN’s reverse proxies and the web servers. Fewer TCP connections helped in many places, including at any load balancer running in front of the web servers and at the web servers themselves. SSL was also computationally expensive at the time, so reducing the number of SSL negotiations made a real impact.
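The same principle is easy to demonstrate from the client side: reusing one connection means only the first request pays the TCP (and SSL) handshake cost. A small sketch using the `requests` library, with a placeholder host and paths:

```python
import requests

# One Session keeps the underlying TCP/TLS connection open across
# requests, so only the first request pays the handshake cost.
# This is the same trick a CDN proxy uses toward the origin.
with requests.Session() as session:
    for path in ("/", "/about", "/pricing"):
        response = session.get("https://example.com" + path, timeout=5)
        print(path, response.status_code)
```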
However, the DSA didn’t deal with the underlying problem - developers found it difficult to build applications that were cacheable and the CDN did very little to help them.
CDNs Enter the Website Security Market
CDN vendors also reached for higher-value services on the security front. As web sites transitioned from static content to dynamic applications, security holes were introduced. Most dynamic applications were built on frameworks with security vulnerabilities, and the applications themselves had vulnerabilities of their own. Basically, anywhere you could enter data into a web site was an attack vector.
One common attack was SQL injection. Hackers would find a web site’s login screen, then type something like “;select * from users” into the username field. Sometimes the application would not only fail the login - it would also dump a list of every user in the system into the web page!
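The root cause is building the SQL query by pasting user input directly into it. A minimal sketch, using Python and an in-memory SQLite database as stand-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

username = "'; SELECT * FROM users; --"  # attacker-controlled input

# VULNERABLE: user input is pasted straight into the SQL text, so the
# attacker's quote and semicolon rewrite the query itself.
query = f"SELECT * FROM users WHERE name = '{username}'"
print(query)  # note the injected second statement

# SAFE: a parameterized query treats the input as data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
print(rows)  # [] - no user literally has that name
```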
Since an HTTP reverse proxy already sees all of these HTTP requests, it is a logical place to inspect the traffic. A Web Application Firewall (WAF), then, is simply an HTTP reverse proxy that examines the properties of each request for suspicious things - like a “SELECT” keyword, which should not appear in a request - and blocks the requests that match.
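Here is a deliberately naive sketch of such a rule. Production WAFs such as ModSecurity use large, curated rulesets rather than a single regex, but the shape of the check is the same:

```python
import re

# Flag request fields containing SQL keywords that have no business
# appearing in a username or search box.
SQLI_PATTERN = re.compile(r"\b(select|union|insert|update|drop)\b", re.IGNORECASE)

def waf_allows(form_fields: dict) -> bool:
    """Return False if any submitted field looks like SQL injection."""
    return not any(SQLI_PATTERN.search(value) for value in form_fields.values())

print(waf_allows({"username": "alice"}))                      # True
print(waf_allows({"username": "'; select * from users;--"}))  # False -> block with 403
```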
CDNs and Agile Development Workflows
As we saw, CDNs started out fixing problems with pipes, but as better datacenters and better end-user connectivity solved those problems, that value either evaporated or became commoditized. CDN vendors needed to add more value to maintain profitability, so they started to tackle problems in the application stack, like uncacheable content and security holes.
This worked well for a while, especially in a waterfall project-management world. However, as application complexity increased, the waterfall method became very expensive and error prone. Software development projects continually ran late and shipped with a lot of bugs, often because integrating the work of smaller teams failed. For example, the system architecture would be designed at the beginning of a project, two teams would each build a component, and only when finished would they try to connect their components together. This act of connecting is called integration, and when it failed or surfaced conflicts, it cost teams huge headaches and lost time.
The project management system needed a top-down rebuild. Some early adopters tried methods that they eventually called “agile”. Instead of working in isolation with a huge project plan and strict contracts between teams, they did the opposite: they made small incremental changes and integrated them very often, reducing the overall cost of software development. As a colleague once said to me, “when you are not good at doing something, do it more often” - referring to the need to continually perform the integration phase. This was the birth of continuous integration, or CI.
Agile methods and engineering practices like CI took hold across the software ecosystem. Teams adopted these principles with success, applying them to application code and database structures alike. Innovation in software started to accelerate.
So did consumer demand for Internet products. Traffic to web sites started to increase, so teams reached for CDNs to help solve performance and availability problems.
However, because the original CDN design was only available as a service on the Internet, CDNs broke the fundamentals of these new agile practices. We saw earlier that the CDN had moved into the application space, yet it could not operate within the CI systems that application developers were using.
This meant teams were again faced with a late-integration problem, just as they had been in their waterfall days.
This late integration of CDN still manifests itself today. Examples of common problems that are caused by this late integration include:
- Failing to cache things that are cacheable
- Accidentally caching things and sharing them with the wrong users
- Accidentally blocking legitimate traffic with the WAF
These problems happen because the CDN only lives on the Internet, not on an engineer’s computer, where they do their work and are trying to continuously integrate that work.
As a programmer changes an application, they test their changes on their computer. When they are happy with the changes they submit them for review (typically in a source control system like git). These reviews might be peer reviews, or the changes might be run through an automated test suite. If the review is successful, the changes are released to the live site.
Because the programmer cannot see the entire system - including the CDN - until the changes hit the live site, they make mistakes that are not caught in review and that can impact the production site, end users, and the site’s reputation.
The Developer PoP: Bringing Continuous Integration to Content Delivery
When we founded Section, we asked: what if this were not the case? What if the CDN were treated like a proper layer in the stack, the way the application and the database are today?
That’s exactly what Section’s Developer PoP does. Not only does our Edge PaaS behave as well as the traditional CDNs you’re used to, but it also empowers the entire team to drive their content delivery harder, getting better performance and security results.
Our Developer PoP is a virtual machine that runs on your local computer and in your CI/CD processes. It is a mirror of what you can expect on our global delivery grid, because it pulls the configuration of our global PoPs down onto the developer’s machine.
How Does a Developer Use the Developer PoP?
If you’re an application developer, you can launch the Developer PoP on your local computer and browse your site through all of its layers. In the old workflow, you would have browsed directly to your application, skipping the CDN entirely. You couldn’t see what the CDN was doing as you worked, and you just hoped your changes wouldn’t cause problems for your business and embarrassment for you.
Now, you can see exactly what Section and any reverse proxies deployed within Section will do as you change your application. This can give you confidence that your changes are going to actually improve the system, and not break it.
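That visibility also makes the edge testable like the rest of your code. As a hypothetical illustration (the local address, the X-Cache header name, and the test itself are assumptions for the sketch, not Section’s documented interface), a developer could assert cache behavior through the local PoP before ever deploying:

```python
import requests

EDGE = "http://localhost:8080"  # assumed address of the local Developer PoP

def test_homepage_served_from_cache_on_second_request():
    requests.get(EDGE + "/", timeout=5)       # first request warms the cache
    second = requests.get(EDGE + "/", timeout=5)
    # Many proxies report cache hits via an X-Cache-style header
    # (an assumption here; the exact header varies by proxy).
    assert second.headers.get("X-Cache", "").lower().startswith("hit")
```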
Think of it as reducing the number of problems you get in production, letting you focus on building a better application, rather than fighting fires.
How do SysOps use the Developer PoP?
If you’re in an organization with a dedicated SysOps team responsible for the CDN configuration, I bet you are sick of being alerted when developers release code that conflicts with the CDN and doesn’t integrate properly. This can be solved.
Don’t blame the developers. They have the best intentions in mind; they just haven’t been given the right tools by their CDN.
As the SysOps engineer responsible for CDN configuration, you don’t relinquish any control with the Developer PoP. You still control the CDN and can keep the configuration tight. However, your developers can run your configuration on their computers as they write code, which means they can see when their changes fail to integrate properly.
Think of it as empowering the developers so that they don’t interrupt you with failed integrations - better results for the application (customers) and fewer alerts for you.
How do SecOps use the Developer PoP?
In SecOps, your mind is on protecting users and the business by keeping the application safe. One problem SecOps teams find with CDNs is that when the application changes, the security rules need to be reconfigured. Unfortunately, because the entire CDN industry hasn’t realized that it lives in the application stack, code deployments often mean turning off WAF rules to resolve production incidents.
With the Developer PoP, developers actually see the problems their application changes cause, because the WAF is running on their own computer.
This doesn’t mean that they can turn on and off the rules - they can just verify that their changes are not introducing new problems.
Additionally, as you decide to improve the security of the application, you too can use the Developer PoP to test new rules before promoting them to production.
A Better CDN Development Experience is Possible
The Developer PoP is for everyone - devs, ops, SysOps, and SecOps. Improving the tools your team uses to build applications is something every engineer loves. No one in the CDN industry has tried to tackle this. Until now. Contact us if you want to learn more about the DevPoP or to see a full demo of how it works.