Deploy Apps to the Edge as Easily as to a Single Cluster with Section’s New Kubernetes Edge Interface
The pandemic has kicked digital transformation into overdrive: businesses have shifted more user engagement to application workloads, while users expect more functionality, responsiveness and availability.
Moving workloads and data closer to users (to the edge) is the best solution, but managing the required distributed multi-cluster environments is hard – really hard. So hard, in fact, that companies avoid doing it, and cloud vendors like Google recommend against it. But what if you could skip all that and just manage edge workloads as if they were a single cluster? And use your existing Kubernetes and cloud-native tools?
We’re incredibly excited to launch our new patent-pending Kubernetes Edge Interface (KEI), making it possible for organizations to deploy application workloads to the distributed, federated edge as easily as they would to a single Kubernetes cluster.
As our CEO, Stewart McGrath, says, “Edge deployment is simply better than centralized data centers or single clouds in most every important metric – performance, scale, efficiency, resilience, usability, etc. Yet organizations historically put off edge adoption because it’s been complicated.” He goes on to say:
With Section’s KEI, teams don’t have to change tools or workflows; the distributed edge effectively becomes a cluster of Kubernetes clusters, and our AEE (Section’s patented Adaptive Edge Engine) automation and Composable Edge Cloud handle the rest.
– Stewart McGrath, CEO, Section
That highlights the three main breakthroughs with KEI:
- It effectively turns the edge into a “cluster of clusters,” meaning you can gain all the benefits of a distributed multi-cloud, multi-region, multi-provider deployment – i.e., Section’s Composable Edge Cloud – with the simplicity of managing a single cluster.
- As an extension of the Kubernetes API, KEI allows you to use familiar tools and workflows, like kubectl and Helm, to manage and control your edge environment. No more learning specialized tooling, different workflows or proprietary flavors of Kubernetes; just work as you always have.
- It lets you define policy-driven controls that AEE uses to automatically tune, shape and optimize application workloads in the background across Section’s Composable Edge Cloud.
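Because KEI presents itself as a Kubernetes API, the day-to-day workflow looks like any single-cluster deployment. A minimal sketch (the context name `section-kei`, manifest filename, and chart path are hypothetical placeholders, not documented values):

```shell
# Point kubectl at the KEI endpoint (context name is illustrative),
# then deploy exactly as you would to any single cluster.
kubectl config use-context section-kei
kubectl apply -f my-app.yaml

# Helm works against the same endpoint in the usual way.
helm upgrade --install my-app ./chart --namespace production
```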
For example, if you’re looking to apply a simple application workload policy such as “run containers where there are at least 20 HTTP requests per second”, you can define this with a simple declaration in your application manifest, apply the configuration using kubectl, and AEE will continuously find and execute the optimal edge orchestration for that outcome. It’s that easy.
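As a rough illustration of what such a declaration might look like (note: the annotation key and expression syntax below are hypothetical placeholders, not Section's published schema), the policy could ride along in an otherwise standard Deployment manifest:

```yaml
# Illustrative sketch only: the section.io/placement-policy key is a
# hypothetical stand-in for however KEI expresses workload policy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # "Run containers where there are at least 20 HTTP requests per second"
    section.io/placement-policy: "httpRequestsPerSecond >= 20"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0
```

You would apply this with `kubectl apply -f my-app.yaml`, and from there AEE continuously works out where the containers should run to satisfy the declared policy.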
Here are some other things you can do with KEI:
- Configure service discovery so users are routed to the best container instance
- Define complex applications, such as composite applications that consist of multiple containers
- Define system resource allocations
- Define scaling factors, such as the number of containers per location, and what signals should be used to scale in and out
- Enforce compliance requirements such as geographic boundaries or other network properties
- Maintain application code, configuration and deployment manifests in an organization’s own code management systems and image registries
- Control how the Adaptive Edge Engine schedules containers, performs health management, and routes traffic
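To make the capabilities above concrete, here is a purely illustrative fragment (every annotation key here is hypothetical; Section's actual schema may differ) showing how scaling factors and a compliance boundary might be declared alongside a workload:

```yaml
# Hypothetical sketch: keys below are placeholders for the kinds of
# controls described above, not Section's documented API.
metadata:
  annotations:
    section.io/min-replicas-per-location: "1"        # scaling floor per edge location
    section.io/scale-out-signal: "cpuUtilization > 70%"  # signal used to scale out
    section.io/geo-boundary: "EU"                    # compliance: keep workloads in-region
```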
Simply put, there is no easier, better or faster way to deploy and control application workloads at the edge. As you can imagine, we’ve been hard at work to bring KEI to life, and we’re proud and excited to share it with the larger community. Get in touch to start using the Kubernetes Edge Interface today.