Building Distributed K8s Orchestration on a Hyperscaler

At Section, we’ve been hard at work building a Cloud-Native Hosting system that continuously optimizes the orchestration of secure, reliable global infrastructure for application delivery. If you’re a customer, you know Section’s sophisticated, distributed, clusterless platform intelligently and adaptively manages workloads according to performance, reliability, compliance, and other developer intent, ensuring applications run in the right place at the right time.

In short, we make it easy to optimally run modern apps.

What would it take to roll your own version of the Section platform using off-the-shelf hyperscaler products from AWS, Azure, or GCP? And how close could you get to what Section offers?

Let’s run a little thought experiment using AWS as an example. To replicate the adaptive global Section platform, you’d first use AWS CloudFormation StackSets to simultaneously deploy identical Kubernetes clusters to all the AWS regions you select. Then, using AWS Global Accelerator, you could route users via anycast to the Kubernetes cluster in the lowest-latency region for each user. Finally, the standard Kubernetes Cluster Autoscaler and Horizontal Pod Autoscaler would manage each cluster so that every region runs only the minimum infrastructure needed to service its active workload.
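To make the per-region autoscaling concrete, here is a minimal Horizontal Pod Autoscaler manifest – a sketch that assumes a Deployment named `my-app` already exists in each regional cluster; the name and thresholds are illustrative, not prescriptive:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # hypothetical Deployment in each region
  minReplicas: 1              # the HPA alone cannot scale below 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above ~70% average CPU
```

Applied identically in every region, this keeps each cluster sized to its local traffic; the Cluster Autoscaler then adds or removes nodes to fit the scheduled pods.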

How is this like Section?

This scenario would make it possible to deploy and manage apps using standard Kubernetes tools like kubectl, replicating what Section provides with our Kubernetes Edge Interface (KEI). It would also allow you to deploy applications in regions appropriate to your particular application (mirroring a key aspect of Section’s value proposition) – but only as long as those locations correspond to AWS regions. And it would ensure you only pay for increased infrastructure in regions with an active workload (again, like Section).
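Because each regional cluster is vanilla Kubernetes, the same manifests work everywhere. A minimal Deployment sketch, where the app name, image, and port are all hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

You would then apply it once per region, e.g. `kubectl apply -f deployment.yaml --context <regional-cluster>`, repeating for every cluster you stood up.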

Okay, so the big question: what are you missing taking this approach versus using the Section platform?

First and foremost, this approach makes you responsible for monitoring and operating all global Kubernetes Nodes. In contrast, Section’s Adaptive Edge Engine intelligently and continuously tunes and reconfigures your distributed delivery network to ensure your workloads are running on the optimal compute for your application. In other words, we do this for you.

Second, this is a single-cloud deployment, which limits both reach and resilience. Your region selection is limited to those AWS provides, and when the inevitable outage hits, your application goes down with it. Section, on the other hand, uses a federated, multi-cloud, on-demand network – we call it our Composable Edge Cloud – that distributes Kubernetes clusters across a vendor-agnostic selection of leading infrastructure providers (including AWS, Azure, and GCP). This both extends the geographic locations where your application can run and means that even when one provider has an outage, your application workloads dynamically shift to run on another vendor’s network.

You will also pay for the minimum AWS infrastructure in each region, even when there is no traffic. With Spot Instance pricing this could be small enough to be negligible, and using KEDA you may even be able to scale to zero. However, this spend management and optimization is all on you – if you don’t invest the necessary administration time, be prepared to pay regardless of traffic. Section charges only for active workloads and can dynamically spin workloads up and down based on policy parameters (for example: run containers where there are at least 20 HTTP requests per second), so you won’t pay at all when there’s no traffic.
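For the scale-to-zero piece, KEDA’s ScaledObject can approximate the request-rate policy described above. A sketch assuming KEDA is installed and a Prometheus server is already scraping your request metrics – the server address, metric name, and query are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
spec:
  scaleTargetRef:
    name: my-app                 # hypothetical Deployment to scale
  minReplicaCount: 0             # scale to zero when idle
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring.svc:9090  # hypothetical
      query: sum(rate(http_requests_total{app="my-app"}[1m]))
      threshold: "20"            # target ~20 requests/sec per replica
```

Unlike a managed policy engine, installing, upgrading, and monitoring KEDA and Prometheus in every region remains your responsibility.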

Finally, you will be responsible for duplicating and maintaining clusters in each AWS region, and there won’t be any centralized management console to provide the necessary visibility into status and performance across all clusters and regions. At Section, the distributed network effectively becomes a cluster of Kubernetes clusters (i.e., clusterless), as our AEE automation and Composable Edge Cloud handle the global orchestration. Meanwhile, the Section Traffic Monitor and Section Console provide a single view of status, usage, traffic flow, and more.

So where does this leave us?

Overall, it’s possible to get many but not all of the same benefits you get with Section using a hyperscaler. However, with this sort of self-managed approach, it’s incumbent upon you to correctly configure and then continue to orchestrate the distributed network.

Or, as we like to say, you can just use Section. If you aren’t already a Section customer, get started with a platform demo today.
