As discussed in the first part of this series, The Complexities of Replicating the Cloud Developer Experience at the Edge, cloud computing has become developers' preferred deployment model over the past couple of decades. There are many reasons for this, but they can largely be summed up in one: the cloud simplifies the delivery of services to end users, with functions spanning compute, storage and delivery.
At the same time, many application creators have relied on complementary CDN technology to boost performance, security, and scalability. Placing parts of an application at the edge has provided obvious performance benefits, but it has also chipped away at the aforementioned simplicity of the cloud by adding a discrete, additional delivery layer.
With demand for faster user experiences being driven by emerging and evolving use cases such as IoT and gaming, application creators are increasingly looking to offload more services to the edge. At the same time, application operations teams are looking to simplify their delivery stack. Bringing more of the application delivery cycle into a single, cohesive edge delivery solution achieves both goals at once.
While cloud providers have the flexibility to support a diverse range of workloads, developers working in the cloud are limited to a single provider’s network, or alternatively are responsible for managing workload orchestration across multiple providers.
CDNs, meanwhile, may have expansive global networks of infrastructure, but they are typically unable to support general purpose workloads beyond basic content delivery.
Why CDNs Are Not the Best Option for Edge Computing
CDNs are often thought of as the first evolution of edge computing. However, content delivery encompasses only a small subset of all edge workloads, and as the diversity of those workloads has expanded, existing solutions have fallen short in what they are able to support.
Many CDNs were built around open source technologies such as Varnish Cache and ModSecurity. Typically, they have customized the code base so heavily over the years that developers using them are locked into "black box", proprietary solutions that don't offer the flexibility and control each application's unique requirements demand.
Furthermore, the growing adoption of container technology and serverless functions has completely changed the game, leaving many legacy CDNs unequipped to support modern applications. With Kubernetes now the preferred container orchestration platform, edge solutions built on Kubernetes are significantly better positioned to support the needs of modern developers.
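As a rough illustration of that fit, the sketch below uses the official Kubernetes Python client to pin a containerized workload onto nodes labeled as edge locations. The node label, image, and resource names are hypothetical, not any particular platform's convention:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the edge cluster

# Hypothetical Deployment that schedules onto nodes labeled as edge locations.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "edge-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-app"}),
            spec=client.V1PodSpec(
                # Only schedule onto nodes an operator has labeled as edge PoPs.
                node_selector={"topology.example.com/tier": "edge"},
                containers=[client.V1Container(
                    name="app",
                    image="registry.example.com/edge-app:1.0",  # hypothetical image
                )],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same declarative manifest works unchanged whether the nodes sit in a hyperscaler region or a far-edge data center, which is precisely what makes Kubernetes a natural substrate for edge platforms.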
The Complexities in Moving Diverse Workloads to the Edge
Now, let’s take a deeper dive into some of the complexities involved in moving more diverse workloads to the edge, including selection, deployment and ongoing management.
Web Application Firewalls (WAFs) & Bot Management
DevOps teams are increasingly choosing to deploy WAFs and bot mitigation tools across a distributed architecture, with the goal of detecting and mitigating threats faster. Managing a WAF or bot mitigation deployment across a multi-cloud/edge network is no simple feat, however, and developers are turning to Edge as a Service platforms for help.
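To make the workload concrete, here is a minimal, hypothetical sketch of the kind of request inspection such a rule performs at each edge location; real WAF and bot products ship far more sophisticated, continuously updated rule sets:

```python
import re

# Hypothetical rule set for illustration only; real WAFs use curated,
# continuously updated signatures and behavioral bot models.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)union\s+select"),  # naive SQL injection signature
    re.compile(r"(?i)<script\b"),       # naive XSS signature
]
KNOWN_BAD_AGENTS = {"sqlmap", "masscan"}  # example user-agent tokens

def inspect_request(path: str, query: str, user_agent: str) -> bool:
    """Return True if this edge location should block the request."""
    payload = f"{path}?{query}"
    if any(p.search(payload) for p in BLOCKED_PATTERNS):
        return True
    return any(token in user_agent.lower() for token in KNOWN_BAD_AGENTS)
```

Running the same check at every edge location stops threats close to the client; the operational challenge is keeping the rules consistent everywhere, which is exactly what the platform layer needs to manage.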
Additionally, while many best-in-class WAF and bot management technologies have emerged, from providers such as Wallarm, Snapt, ThreatX, and Signal Sciences for WAF to Radware Bot Manager and PerimeterX for bot management, most legacy CDNs still don't give developers the option of deploying third-party solutions. Fastly, for example, recently acquired Signal Sciences, recognizing the need for more advanced WAF technology beyond its own proprietary solution.
At Section, we often speak with developers who are frustrated with the “black box”, built-in solutions of legacy CDNs, and demand more choice and flexibility. Edge as a Service providers need to support developers’ software choices, particularly when it comes to as critical a service as security.
Image Optimization
Beyond simple caching of images, developers, especially in the e-commerce sector, are increasingly seeking out image optimization solutions (e.g. Optidash) that optimize and transform images on the fly; a minimal sketch of the pattern follows the list below. Benefits include:
- Faster page load times for end users
- Improved operational efficiency
- Reduced load on centralized infrastructure
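Here is the minimal sketch referenced above, assuming Pillow is available on the edge node, with hypothetical width and format parameters taken from the request's query string:

```python
from io import BytesIO

from PIL import Image  # assumes Pillow is installed on the edge node

def transform_image(original: bytes, width: int, fmt: str = "WEBP") -> bytes:
    """Resize and re-encode an image on the fly, e.g. for ?width=480&format=webp."""
    img = Image.open(BytesIO(original))
    # Preserve the aspect ratio while capping the width the client requested.
    ratio = width / img.width
    img = img.resize((width, max(1, round(img.height * ratio))))
    out = BytesIO()
    img.save(out, format=fmt, quality=80)
    return out.getvalue()
```

Because the transformation runs at the edge, the resized variant can be cached right where it was generated, and the origin never sees the request at all.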
Just as with security solutions, most legacy CDNs don't support third-party software that specializes in point solutions. What's more, if you're operating a multi-cloud/edge environment, you have to install and manage these image optimization tools across the entire network. Edge as a Service solves this by acting as the orchestration layer, ensuring that workloads run in the right edge location at the right time.
Testing & Experience Optimization
Marketers, product managers, developers and others need the ability to effectively test and optimize applications across client-side, server-side, single page application (SPA), mobile, redirect, and other scenarios. Conventional A/B testing solutions use JavaScript tags to manipulate content in the browser, which degrades site performance through flicker and added latency.
Modern tools like SiteSpect, however, rethink this model by sitting in the flow of HTTP traffic. This allows them to support multiple user experience optimization techniques, including client-side, server-side, redirects, and SPA optimization.
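As a rough sketch of why the in-flow model avoids flicker: the variant is chosen deterministically in the HTTP path, before any HTML reaches the browser. The bucketing scheme below is illustrative, not SiteSpect's implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into 'control' or 'treatment'."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "treatment" if bucket < split else "control"

# A proxy sitting in the HTTP flow reads a stable identifier (e.g. a cookie),
# calls assign_variant, and rewrites the upstream request or response before
# anything is rendered, so the browser only ever sees the final variant.
```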
Legacy CDNs can't support this new architectural model and therefore require extra hops in the HTTP delivery chain, ironically negating many of the performance benefits they aim to deliver. By supporting distributed deployment of more advanced workloads, Edge as a Service providers like Section make it easy to integrate solutions like SiteSpect into your edge stack.
“This goes to show how flexible both SiteSpect’s and Section’s platforms are, and how great their DevOps and technical support teams are in order to accommodate our needs. Since migrating to this new deployment, the traffic routing and customer experience have been seamless, and the performance and stability have improved tremendously.” - Mike Henriques, CIO, Temple & Webster | read full case study
Load Balancing
While most hyperscalers and edge providers offer load balancing, these solutions are often restricted to their own environments. Therefore, if you migrate your application to a different cloud or data center, the hyperscaler or edge provider’s proprietary load balancer won’t be able to follow.
When a traditional load balancer is deployed to the cloud, it runs as a virtual appliance. If you then decide to use a load balancer in a second cloud, another virtual appliance has to be configured there, and so on for every cloud or data center you operate in. These appliances don't communicate with one another, so you end up running two (or more) separate clouds that your teams must manage separately.
Organizations that use multi-cloud/edge networks are then faced with having to separately configure, monitor and manage delivery and security for each distinct environment. Similarly, for any application that changes hosting location, adjustments must be made on an individual basis. This not only increases complexity, but takes up valuable resources and limits much of the flexibility that is supposedly a key benefit of a multi-cloud/edge model.
Edge as a Service handles load balancing across multi-cloud/edge networks. For example, Section’s Adaptive Edge Engine has built-in health checks to monitor and automatically migrate traffic and workloads based on real-time traffic demands. Beyond this, a Layer 7 load balancer can sit on top of heterogeneous networks and route HTTP requests based on customized rules.
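As an illustration of the general pattern (not Section's Adaptive Edge Engine itself), a Layer 7 router inspects HTTP-level attributes and picks an upstream origin; the origins and rules below are hypothetical:

```python
# Hypothetical origins spread across different clouds and data centers.
ORIGINS = {
    "api": "https://api.aws-region.example.com",
    "static": "https://static.gcp-region.example.com",
    "default": "https://origin.on-prem.example.com",
}

def route(path: str, headers: dict) -> str:
    """Pick an upstream origin using HTTP-level (Layer 7) attributes."""
    if path.startswith("/api/"):
        return ORIGINS["api"]
    if path.startswith("/assets/") or headers.get("Accept", "").startswith("image/"):
        return ORIGINS["static"]
    return ORIGINS["default"]
```

Because the routing decision is made from the request itself rather than from any one provider's network primitives, the same rules keep working when an origin moves between clouds.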
Containers
In a small environment with only a handful of systems, managing and automating orchestration is fairly straightforward, but when an enterprise has thousands of individual systems that interact with each other on some level, orchestration automation is both powerful and essential.
Containers are lightweight by design, with a small footprint that makes them ideal candidates for running on edge devices. One key reason machine learning workloads are containerized is that even legacy devices can then interact with cloud AI/ML services while achieving fast computation in place.
Containers can be deployed to the device of your choosing and built for the architecture of your choice, so long as the device can run the container runtime. Updating containers in place is simple, particularly when an orchestration solution like Kubernetes is used.
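For example, with the official Kubernetes Python client, an in-place image update is a single patch; Kubernetes then rolls the new version out across every node running the workload. The deployment name, namespace, and image tag below are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes credentials for the edge cluster

# Patch just the container image; Kubernetes performs a zero-downtime
# rolling update across all nodes running this workload by default.
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "app", "image": "registry.example.com/edge-app:1.1"}  # hypothetical tag
]}}}}
client.AppsV1Api().patch_namespaced_deployment(
    name="edge-app", namespace="default", body=patch
)
```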
Consider SaaS providers who have traditionally offered on-premises or single point of presence installations. As customers increasingly demand distributed deployment models, these providers face the build vs. buy dilemma.
The management of these complex clusters of devices, services, and networks can get highly complicated very quickly. With Edge as a Service, SaaS providers can simply containerize their applications and accelerate their path to the edge, rather than building out and managing their own edge networks.
Serverless
Serverless computing, also called function as a service (FaaS), enables the execution of event-driven logic without the burden of managing the underlying infrastructure. The name reflects the freedom it gives developers to focus on building their applications without having to think about provisioning, managing, and scaling servers.
The concept of serverless was originally designed for cloud environments, eliminating the ‘always-on’ model to save on resource consumption, among other benefits. In recent years, advances in edge computing technology have led more developers to migrate serverless workloads to the edge. The benefits of serverless at the edge, when compared to alternatives like containers and VMs, include lighter resource consumption, improved cost efficiencies, code portability, and speed of deployment.
However, not all workloads are suitable for serverless models, and it's important to understand a given workload's requirements when determining the most appropriate deployment model. Considerations such as code dependencies, cold starts and their effect on performance, security, and resource requirements are critical when designing edge architectures.
Edge as a Service providers can help streamline serverless deployments by offering flexible language support that allows developers to simply ship code and offload the responsibilities of deployment, management, and scaling of the underlying infrastructure to the edge compute platform.
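As a minimal sketch, assuming a platform that invokes a per-request handler, with a hypothetical request shape rather than any specific provider's API:

```python
def handler(request: dict) -> dict:
    """Event-driven function: runs per request at the nearest edge location.

    There is no server to provision; the platform owns deployment and scaling.
    """
    country = request.get("headers", {}).get("x-geo-country", "unknown")
    return {
        "status": 200,
        "headers": {"content-type": "application/json"},
        "body": f'{{"greeting": "hello", "served_from": "{country}"}}',
    }
```

The developer ships only this function; where it runs, how many instances exist, and how traffic reaches it are all decisions made by the edge compute platform.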
Overcoming the Complexities with Section
As we’ve just started to look at in this series, the reality of deploying and managing workloads at the edge is far from simple.
The real power of the edge arises when application developers can seamlessly run the software of their choice at the edge, while application operations teams get the simplicity of a single delivery plane: a reduced operational footprint, even with a larger geographic delivery footprint.
Section's Edge as a Service simplifies every step involved in deploying your application to the edge. You also gain round-the-clock support from our dedicated team of expert engineers. We take care of the massive complexity and resources necessary to support distributed provisioning, orchestration, scaling, monitoring and routing, allowing you to focus on innovation.