Content Delivery Networks can often seem shrouded in mystery: they appear on countless lists as one of the top ways to improve website performance, and there are many Content Delivery Network companies globally, but the basics of how CDNs actually accomplish what they promise (such as improved website speed, the ability to handle more visitors, and protection from attacks) are still unknown to many.
Content Delivery Network History
To unravel how CDNs work, we have to go back to the origin of CDNs. In the past few years the CDN industry has grown rapidly, and now almost 50% of web traffic is served through a CDN. Companies such as Netflix serve so much traffic that they have built their own internal CDNs to support the high volume of content they are serving to global visitors on a daily basis. However, CDNs were born in a very different time.
In the late 1990s Internet usage was taking off and more and more visitors were accessing websites. Those websites were still mostly hosted in office data centers which had very small pipes connecting the web servers to the Internet. Each time a visitor tried to access a website, the browser would connect to the web server and then make subsequent requests to collect all the objects needed to build that page.
Because of the small pipes connecting the web server to visitors on the Internet, as more people tried to access websites a bottleneck quickly formed at the website server, slowing response times for everyone and sometimes taking sites offline.
Content Delivery Networks came about in order to solve this problem of web servers getting overwhelmed by requests and slowing down. CDNs did this by installing a middleman in between browsers and servers that kept copies of the website in a cache. CDN servers had much bigger pipes than the web servers and were able to serve many requests at once.
When a user requested a page from a website with a CDN, they reached a CDN server first and were served a cached copy of the website without the request ever touching the website server. Only when the CDN server did not have a cached copy would it make a request back to the website server, greatly reducing the number of requests the website server had to handle.
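The cache hit/miss flow described above can be sketched in a few lines of Python. This is a simplified illustration, not any particular CDN's implementation; `origin_fetch` and the in-memory `cache` dictionary are stand-ins for the origin web server and the edge server's cache store.

```python
cache = {}  # path -> cached response body

def origin_fetch(path):
    # Stand-in for a request back to the origin web server.
    return f"<html>content for {path}</html>"

def handle_request(path):
    if path in cache:
        return cache[path], "HIT"   # served from the edge; origin untouched
    body = origin_fetch(path)       # cache miss: go back to the origin once
    cache[path] = body              # store it for subsequent visitors
    return body, "MISS"

# The first request misses and populates the cache; repeats are hits.
print(handle_request("/index.html")[1])  # MISS
print(handle_request("/index.html")[1])  # HIT
```

Every visitor after the first is served straight from the cache, which is why the origin sees only a tiny fraction of the total traffic.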
The other main thing CDNs did to improve website performance was use multiple servers, or Points of Presence, located at various points around the globe. At that time many visitors were also using slow connections such as dial-up modems, so if a visitor in California requested a web page from a server in New York, the geographical distance could have a large impact on load time. By installing multiple distributed servers, visitors could be directed to the server closest to them, reducing the distance requests needed to travel.
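A minimal sketch of that routing decision, assuming a hypothetical table of PoP coordinates: pick the PoP with the smallest great-circle distance to the visitor. Real CDNs use DNS and network measurements rather than raw geography, but the idea is the same.

```python
import math

# Hypothetical PoP locations (latitude, longitude); real CDNs have hundreds.
POPS = {
    "new-york": (40.71, -74.01),
    "san-jose": (37.34, -121.89),
    "london":   (51.51, -0.13),
}

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points in kilometers.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def closest_pop(visitor):
    return min(POPS, key=lambda name: haversine_km(visitor, POPS[name]))

# A visitor in Los Angeles is routed to San Jose, not New York.
print(closest_pop((34.05, -118.24)))  # san-jose
```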
There are two main layers of a Content Delivery Network which perform the tasks described above:
- A DNS layer that directs users to the server closest to them
- A reverse proxy layer that imitates the website server and has additional functions such as caching or firewall protection
These components work together to reduce the distance visitors' requests need to travel and to serve visitors from CDN servers rather than the website servers. Together they make up the essential features of a CDN; the reverse proxy layer can include many different reverse proxies, such as:
- A caching proxy like Varnish Cache
- A Web Application Firewall (WAF) like ModSecurity
- A bot blocking reverse proxy
- A reverse proxy that enables A/B testing
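These reverse proxies can be stacked in front of the origin, each handling a request before passing it along. Here is a toy sketch of that layering as function composition; the one-line "firewall" rule and handler names are purely illustrative, not how Varnish Cache or ModSecurity actually work.

```python
def origin(request):
    # Stand-in for the real website server.
    return f"200 OK: page {request['path']}"

def with_firewall(next_handler):
    # Toy rule standing in for a Web Application Firewall.
    def handler(request):
        if "<script>" in request["path"]:
            return "403 Forbidden"
        return next_handler(request)
    return handler

def with_cache(next_handler, store=None):
    # Toy caching proxy: remember responses by path.
    store = {} if store is None else store
    def handler(request):
        if request["path"] not in store:
            store[request["path"]] = next_handler(request)
        return store[request["path"]]
    return handler

# Requests pass through each proxy layer in order before reaching the origin.
edge = with_firewall(with_cache(origin))
print(edge({"path": "/home"}))      # 200 OK: page /home
print(edge({"path": "/<script>"}))  # 403 Forbidden
```

Swapping a layer in or out (adding a bot blocker, removing the firewall) is just a change to how the handlers are composed, which mirrors how CDN reverse proxies are mixed and matched.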
Although CDNs may focus on different areas, such as performance or security, they all rely on the same basic setup: distributed servers with reverse proxies installed on them.
At section.io we use open-source versions of Varnish Cache for performance and ModSecurity for website security. We give developers complete control over how these reverse proxies are configured, meaning they can customize them to work for their specific website. To try out our tools, sign up for a 14-day free CDN trial or contact our team to learn more.