When considering the merits of moving application workloads to the edge, it’s necessary to look not just at technical considerations – which we’ve spent a lot of time examining in previous posts – but also at overall business drivers and benefits, and how those apply to a particular organization, application and user base.
At its simplest level, the business decision around where to place application workloads boils down to: Am I able to cost-effectively deliver the application experience my users expect? Yet this relatively benign question hides a myriad of important business considerations. Let’s start by setting the playing field: user expectations.
For most organizations, the pandemic kicked digital transformation into overdrive. This is a topic that’s been beaten to death, but it’s still worth considering carefully – businesses have shifted more of their user interactions and engagement to application workloads, and users simply expect more from the applications they use. More features, more functionality, more responsiveness, more availability.
These expectations are only growing thanks to the ubiquity of mobile and SaaS apps in day-to-day life, and these expectations will never reset. Like, ever. The bar is high, and will only climb higher. Organizations that haven’t come to grips with this new reality will inevitably fall behind.
Alright, so users expect a lot. Fine. How does that impact application deployment decisions? Consider this: no matter how good your application’s features are, they can never overcome a poor user experience. That experience fundamentally boils down to application responsiveness and availability.
Amongst other factors, responsiveness is a function of latency, or how long it takes for data to transfer from one point on a network to another. According to a 2021 survey by Quadrant Strategies and Lumen, 86% of C-suite executives and senior IT decision makers agree that low-latency applications help differentiate their organizations from the competition. At the same time, 80% are concerned that high latency is impacting the quality of their applications. More than 60% of respondents went further, defining low latency for mission-critical apps as 10 milliseconds or less.
Let’s consider a simple example: deploying an application to the cloud with a user base that is evenly distributed across the U.S. Where do you host the application? On the east coast, knowing that performance will suffer for the half of your user base furthest away? Somewhere in the middle of the country, so that the experience for everyone is equally sub-par?
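To see why geography dominates this decision, here’s a rough back-of-the-envelope sketch. The 200,000 km/s figure is the approximate speed of light in optical fiber, and the distances are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope propagation delay: a signal in optical fiber travels
# at roughly 200,000 km/s (about two-thirds of c), so distance alone sets a
# hard floor on round-trip latency before any processing time is added.

FIBER_KM_PER_SEC = 200_000  # approximate signal speed in fiber (assumption)

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds over a given fiber distance."""
    return 2 * distance_km / FIBER_KM_PER_SEC * 1000

# A coast-to-coast path (~4,000 km, illustrative) can never beat ~40 ms
# round trip -- already 4x the 10 ms bar cited for mission-critical apps.
print(round_trip_ms(4_000))  # 40.0

# An edge node ~100 km from the user keeps the floor around 1 ms.
print(round_trip_ms(100))    # 1.0
```

Real-world latency is higher still once routing, queuing and processing are added, which is why no amount of server-side tuning can compensate for physical distance.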
Once you’ve accounted for all other factors, the only way to improve latency is to physically move workloads and data closer to users – in other words, toward the edge. The more geographically dispersed your user base, the more important this becomes. For a global user base, for instance, centralized cloud deployments quickly become untenable as the workload scales; the only answer is edge deployment.
Availability is the other side of the experience coin, and the dirty secret is that any given network will, inevitably, go down. The result is a steady stream of headlines about major cloud outages and frustrated users. The way around this is to build in redundancy and resiliency for application workloads. Centralized cloud deployments have little resilience, as they are dependent on a single cloud provider. When that provider’s network experiences an outage, so do the applications.
Edge deployments, on the other hand, can readily work around this, provided that the deployment isn’t tied to a single network operator. Workloads must be broadly distributed across heterogeneous providers, such that if (or rather, when) one goes down, traffic can be routed around the problem to ensure continued application availability.
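The routing-around idea can be sketched in a few lines. This is a toy illustration, not how any particular platform implements it – the provider names, latencies and health flags below are hypothetical, and real edge platforms make this decision at the network layer:

```python
# Toy failover sketch: send traffic to the lowest-latency provider that is
# currently healthy. All provider data here is hypothetical.

providers = [
    {"name": "provider-a", "latency_ms": 8,  "healthy": False},  # down
    {"name": "provider-b", "latency_ms": 12, "healthy": True},
    {"name": "provider-c", "latency_ms": 35, "healthy": True},
]

def pick_provider(providers):
    """Return the lowest-latency healthy provider, or None if all are down."""
    healthy = [p for p in providers if p["healthy"]]
    return min(healthy, key=lambda p: p["latency_ms"], default=None)

# provider-a would be fastest, but it is down, so traffic shifts to provider-b.
print(pick_provider(providers)["name"])  # provider-b
```

The point of the sketch is the invariant, not the code: as long as at least one heterogeneous provider remains healthy, the application stays reachable, which a single-provider deployment cannot guarantee.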
Which brings us to the third leg of the stool: how cost-effectively can I deliver the expected experience? This question can quickly devolve into technical considerations around workload scalability, allocation of compute resources, network operations, workload isolation, data compliance, etc. There are inevitable pros and cons to different deployment strategies that must be considered. However, all things being equal, the distributed edge beats centralized cloud every time.
But since we’re focused on business benefits, let’s try to step back and simplify the question by rephrasing it: Do I want my organization focused on delivering the best possible business applications and services, or on the cost and complexity of operating a distributed network? Because, as we’ve written about numerous times before, rolling your own edge is not for the faint of heart.
Many organizations are already rushing to modernize applications with multi-cluster Kubernetes deployments, and the edge is a natural extension of that strategy, delivering significant benefits in performance, scalability, resilience, isolation and more.
Those twin considerations – the modern edge provides for a significantly better application experience, but only if it can be simple and affordable to adopt – are the drivers behind the creation of Section’s Edge Hosting platform.
By delivering Edge as a Service that can simply be deployed as part of an existing containerized environment using familiar Kubernetes tooling, Section eliminates the challenge of deploying applications to the distributed edge. Section’s Composable Edge Cloud offers a federated on-demand multi-cloud network, ensuring high availability and resiliency worldwide. And the patent-pending Adaptive Edge Engine automatically orchestrates and intelligently scales workloads to meet real-time traffic demand, ensuring cost-effective low-latency responsiveness for users, no matter how many there are or where they’re located.
The business case for edge networking is that the edge is simply better, as long as it is simply deployed. That’s why we built Section’s Edge Hosting Platform.