The Edge Experience – Streamlining Multi-Cluster Kubernetes Deployment & Management

Considerations and approaches for streamlining and accelerating edge deployment and management.

Introduction

In our recent whitepaper on Why Organizations are Modernizing their Applications with Distributed, Multi-Cluster Kubernetes Deployments, we noted the strong correlation between Kubernetes containerization and workload distribution. The lightweight portability of containers makes them ideally suited to distribution, while their abstraction facilitates deployment to heterogeneous, federated compute platforms. Moreover, Kubernetes brings configurable orchestration at each infrastructure location to best coordinate this sort of distributed multi-region, multi-cluster topology.

In other words, we noted, organizations that have already adopted Kubernetes are primed to rapidly adopt modern edge deployments for their application workloads. Even those that are still at the single-cluster stage are in a position to rapidly leap-frog to the distributed edge. Recent research by SlashData on behalf of the Cloud Native Computing Foundation confirms this correlation between edge, containers, and Kubernetes, noting that developers working on edge computing have the highest usage of both containers (76%) and Kubernetes (63%) of any surveyed segment (CNCF survey).

Our earlier whitepaper talks in-depth about the “why” of modernizing through distributed, multi-cluster edge deployments, including broad benefits such as availability and resiliency, eliminating lock-in, improving performance and latency, increasing scalability, lowering cost, enhancing workload compliance and isolation, and more. If you haven’t read it yet, we encourage you to give it a look.

Once you’re clear on the benefits for your particular application workloads, how do you go about moving them to the edge? Or, if you’ve already started the journey, how do you streamline the process? Explaining the “how” of edge deployment is the subject of this paper.

The Complexity Conundrum

Complexity is frequently cited as one of the main challenges in adopting a multi-cluster topology, and the more distributed that topology, the more complex it can be to manage. This creates a conundrum, because as explained in our earlier paper, the more distributed your multi-cluster environment, the more benefits you reap.

Take latency, for example. This is one of the most important considerations for user experience, and one of the factors that’s directly impacted by edge compute. Simply put, the closer your application is to your user base, the more responsive that workload will be to user requests. This holds true regardless of whether your base is concentrated in a single region (for instance, the U.S.) or spread globally, but the broader the distribution, the more latency and workload placement need to be carefully considered.

Yet no less an authority than Google says that, while latency is one of the biggest factors in selecting compute regions (so important, they say, that you should regularly reevaluate and rebalance regions to reduce latency), concern over complexity can prevent organizations from addressing this issue. According to Google’s Best Practices for Region Selection: “Even if your app serves a global user base, in many cases, a single region is still the best choice. The lower latency benefits [of distribution] might not outweigh the added complexity of multi-region deployment.”

Clearly, complexity (and inherent cost) is a major challenge for distributed deployments, particularly when those deployments are federated – so much so that it can prevent organizations from making decisions that are clearly in their best interest. Where does this complexity come from?

Two factors are in play. The first involves the tools and processes that are used to run the application workload itself in a multi-cluster Kubernetes deployment. The second is the management and operation of the underlying edge network, which gets progressively more complicated and cost-sensitive as distribution (across regions and networks/operators) increases.

While there is considerable overlap in a modern DevOps environment, the development team (the app owner) tends to be more focused on the process pipeline, while the operations team (the infrastructure owner) wrestles with cluster management and network optimization.

[Figure: the DevOps lifecycle. Image source: edureka]

Both are important to how organizations manage overall application delivery, and each has its own complexity challenges to overcome. Let’s look in greater detail.

Application Deployment at the Edge – The Developer View

Moving application workloads to the edge differs from a centralized cloud deployment on two vectors: first, it’s multi-cluster instead of single cluster, and second, it’s distributed (and as part of that, often federated across different regions/operators). From a developer’s perspective, this distributed multi-cluster Kubernetes deployment impacts four elements of application ownership:

CI/CD Integration

Developers need to continuously build, modify and deliver/deploy containers. In typical cloud deployments, this involves a simple calculation of determining which single cloud location will deliver the best performance to the maximum number of users, then connecting your codebase/repository and automating build and deployment through CI/CD. But what happens when you add hundreds of edge endpoints to the mix, with different microservices being served from different edge locations at different times? Which new tools and modified processes do you need to consider?
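To make this concrete, one common pattern is to leave the pipeline itself largely unchanged and point the deploy step at a kubeconfig that represents the whole multi-cluster environment. The following is a minimal sketch, assuming a GitHub Actions pipeline and a KUBECONFIG secret containing that kubeconfig; the registry, image name, and deployment name are illustrative, not prescriptive.

    name: build-and-deploy
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Build and push the container image (registry/tag illustrative; registry login omitted for brevity)
          - run: |
              docker build -t registry.example.com/my-app:${{ github.sha }} .
              docker push registry.example.com/my-app:${{ github.sha }}
          # Deploy against a kubeconfig that targets the multi-cluster endpoint,
          # so the pipeline looks the same as a single-cluster deployment
          - run: |
              echo "${{ secrets.KUBECONFIG }}" > kubeconfig
              KUBECONFIG=./kubeconfig kubectl set image deployment/my-app \
                my-app=registry.example.com/my-app:${{ github.sha }}

The point is that adding edge locations shouldn’t force a new pipeline; the complexity of fanning a deployment out to many endpoints belongs below this interface.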

Cluster Discovery

With multiple distributed clusters, you’ll need to adjust processes and configuration for cluster discovery. Your systems need to be able to find clusters automatically, both to cope with the sheer volume of potential runtime locations and because the infrastructure team and tooling may change which locations are targeted or available. Just as importantly, you need to be able to decide how many edge endpoints your code should run on, and which ones, at any given time for optimal cost and performance.
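In plain Kubernetes terms, the starting point for discovery is often nothing more than a kubeconfig whose contexts enumerate the clusters your tooling can target. A minimal sketch follows; the cluster names and endpoints are illustrative.

    apiVersion: v1
    kind: Config
    clusters:
      - name: edge-us-east
        cluster:
          server: https://edge-us-east.example.com:6443
      - name: edge-eu-west
        cluster:
          server: https://edge-eu-west.example.com:6443
    contexts:
      - name: edge-us-east
        context:
          cluster: edge-us-east
          user: deployer
      - name: edge-eu-west
        context:
          cluster: edge-eu-west
          user: deployer
    current-context: edge-us-east
    users:
      - name: deployer
        user:
          token: REDACTED   # per-cluster credentials; see Credential Management below

Enumerating contexts by hand clearly doesn’t scale to hundreds of endpoints that come and go, which is why the discovery and placement decision ultimately needs to be automated rather than hard-coded.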

Orchestration and Failover Between Clusters

You’ll need to manage and optimize the constant orchestration across these nodes, which run on a heterogeneous mix of infrastructure, and, should something go wrong, understand and control failover between clusters or providers to ensure application availability.
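Within each cluster, the health signals that orchestration and failover rely on are declared on the workload itself. The sketch below shows standard readiness and liveness probes (image, path, and port are illustrative); a multi-cluster control plane can build on the same signals to shift traffic between clusters when a location degrades.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: registry.example.com/my-app:1.0.0
              ports:
                - containerPort: 8080
              readinessProbe:          # gates traffic to a pod until it reports ready
                httpGet:
                  path: /healthz
                  port: 8080
                periodSeconds: 5
              livenessProbe:           # restarts the container if it stops responding
                httpGet:
                  path: /healthz
                  port: 8080
                initialDelaySeconds: 10
                periodSeconds: 10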

Credential Management

Finally, you need to ensure proper credential management across these distributed clusters (rotating secrets, generating TLS certificates, etc.), which becomes exponentially more complex than in a centralized single-cluster deployment.
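One way to keep certificate handling tractable is to declare certificates as resources and let a controller issue and rotate them in every cluster. A minimal sketch, assuming cert-manager is installed and a ClusterIssuer named letsencrypt-prod exists (both are assumptions; the hostname is illustrative):

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: my-app-tls
      namespace: production
    spec:
      secretName: my-app-tls      # cert-manager writes the issued keypair into this Secret
      dnsNames:
        - app.example.com
      issuerRef:
        name: letsencrypt-prod    # assumed ClusterIssuer
        kind: ClusterIssuer
      renewBefore: 720h           # renew 30 days before expiry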

Application Management at the Edge – The Ops View

Not surprisingly, managing multiple distributed clusters is much more complex than managing a centralized deployment, especially as those clusters spread across regions and operators in a federated environment. The things that ops teams are concerned about include:

Cluster Connection and Tracking

Maybe you’ve got 5-10 clusters, maybe you’ve got a hundred. Where are they? What’s their status? How are you handling DNS and BGP/IP address management? How are you ensuring availability to the development team? How and when are you optimizing cluster placement as workloads and demand shift (hourly, daily, over time)?

End-to-End Security

A distributed system inherently increases your attack surface. How are you compensating for that to ensure workload isolation? What’s your DDoS mitigation strategy across Layers 3, 4, and 7 (if needed)? With multiple locations (and perhaps multiple operators) running simultaneously, how are you ensuring TLS certificate deployment?
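Workload isolation usually starts with a default-deny network posture applied consistently in every cluster. A minimal sketch using standard NetworkPolicy resources (the namespace and labels are illustrative, and this assumes your CNI enforces NetworkPolicy):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: production
    spec:
      podSelector: {}        # applies to every pod in the namespace
      policyTypes:
        - Ingress            # no ingress rules listed, so all inbound traffic is denied
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-ingress-controller
      namespace: production
    spec:
      podSelector:
        matchLabels:
          app: my-app
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: ingress-nginx   # assumes this namespace exists
          ports:
            - port: 8080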

Policy Lock-Down (RBAC, etc.)

It probably goes without saying, but it’s likely you only trust application teams to a point, so you’ll want to implement role-based access controls (RBAC) and other policy management tools to lock down the infrastructure as necessary. Yet implementing and managing those policy controls is considerably more complex across clusters, regions, and operators.
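As a concrete illustration, namespace-scoped RBAC of roughly this shape lets an application team deploy without granting cluster-wide control (the namespace and group names are illustrative):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: app-deployer
      namespace: team-a
    rules:
      - apiGroups: ["apps"]
        resources: ["deployments"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
      - apiGroups: [""]
        resources: ["pods", "services", "configmaps"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: app-deployer-binding
      namespace: team-a
    subjects:
      - kind: Group
        name: team-a-developers          # assumes this group exists in your identity provider
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: app-deployer
      apiGroup: rbac.authorization.k8s.io

Keeping definitions like these in version control and rolling them out through the same pipeline is what keeps policy consistent as the number of clusters grows.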

Resource Management

One of the key advantages of a multi-cluster deployment is flexible application scalability, particularly the ability to fine-tune and scale workloads as needed. In fact, Kubernetes supports three autoscaling approaches (horizontal pod autoscaling of replica counts, cluster autoscaling of nodes, and vertical pod autoscaling of resource requests). From an operations standpoint, this scalability creates two problems. First, you need to implement and manage it. Which particular workloads require scaling (and should that be vertical or horizontal scaling)? Is that scaling provider- or region-dependent? The second issue is resource contention. This easy scalability can quickly make multi-cluster environments popular with development teams. How does the ops team ensure adequate resource availability while minimizing the load on backend services and databases? How do you ensure that resource exhaustion doesn’t bring down your whole environment? Kubernetes has great capabilities for managing resource allocation, but now you’ve got to keep those synced across multiple distributed clusters.
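A per-namespace ResourceQuota, applied identically to every cluster, is one of the simplest guards against resource exhaustion; the values below are illustrative.

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "8"        # total CPU the namespace may request
        requests.memory: 16Gi
        limits.cpu: "16"
        limits.memory: 32Gi
        pods: "50"               # cap on pod count, guarding against runaway scaling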

How to Streamline Multi-Cluster Kubernetes Deployment & Management

So far we’ve identified a significant number of questions to ask and decisions to make for both the Dev and Ops deployment teams. As noted, edge deployment can be complex.

Earlier in this paper, we also noted Google’s admonition that most organizations should opt for a less-performant single region deployment to avoid the complexity involved in a distributed environment. We disagree. We think the right approach is to streamline deployments and eliminate complexity, so organizations can take full advantage of multi-cluster distribution.

The question is, how?

The solution must map to both teams – development and operations – and the specific challenges identified above. And it must simplify, and ideally abstract, distributed deployments so that teams can apply existing tools and familiar workflows to a multi-cluster environment.

We believe the right approach requires two elements working in concert:

[Diagram: a Kubernetes-native interface paired with a policy-driven backend automation engine]

For example, such a solution would enable a developer, using Kubernetes-native tools such as kubectl, to set a simple policy such as “run containers where there are at least 20 HTTP requests per second” and have the backend engine continuously find and execute the optimal edge orchestration.
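KEI’s actual policy syntax is covered in its documentation. The closest building block in vanilla Kubernetes is a HorizontalPodAutoscaler driven by a requests-per-second metric, sketched below; it assumes a custom-metrics adapter (such as Prometheus Adapter) exposes an http_requests_per_second metric, and it only adjusts replicas within one cluster rather than choosing locations.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
      minReplicas: 1
      maxReplicas: 20
      metrics:
        - type: Pods
          pods:
            metric:
              name: http_requests_per_second   # assumes a custom-metrics adapter exposes this
            target:
              type: AverageValue
              averageValue: "20"               # aim for roughly 20 requests per second per pod

A policy-driven backend engine extends the same idea across locations: instead of only adjusting replica counts inside one cluster, it decides which edge endpoints should run the workload at all.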

This approach offers two distinct benefits.

First, leveraging the Kubernetes API allows DevOps teams to continue using familiar cloud-native tools and workflows, such as kubectl and Helm, to deploy and manage applications across a distributed and federated multi-cluster environment. Teams interact with deployed applications as though running on a single cluster, dramatically simplifying the process of orchestrating a distributed environment. It also means that teams do not have to adopt different versions and flavors of managed Kubernetes service or other cloud-native tooling across different vendors in a federated topology. In short, it eliminates complexity and ensures consistency in the development process.

Second, employing policy-driven controls through an automated engine abstracts implementation of the distributed edge environment for an organization. Teams handle orchestration at the level of objectives and goals (i.e., policy) while the backend engine automates implementation. Even when teams choose to get more granular in tuning and shaping the environment, they can focus more on what they want to happen and let the backend system optimize and streamline execution. Teams can thus focus on building and managing their applications, not on operating a distributed network.

Among other benefits, this combination of Kubernetes-native tooling and backend automation addresses the commonalities shared by Dev and Ops teams. For example, both need to track clusters. Both need to manage and control credentials or policies. With this approach, it’s possible to use existing Kubernetes tools and processes to do the following (a brief manifest sketch follows the list):

  • Configure service discovery, routing users to the best container instance
  • Define complex applications, such as composite applications that consist of multiple containers
  • Define system resource allocations
  • Define scaling factors, such as the number of containers per location, and what signals should be used to scale in and out
  • Enforce compliance requirements such as geographic boundaries or other network properties
  • Maintain application code, configuration and deployment manifests in an organization’s own code management systems and image registries
  • Control how the backend engine schedules containers, performs health management, and routes traffic
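Most of the items above land in the ordinary manifests a team already maintains. A minimal sketch combining service discovery, resource allocation, and a per-location scaling factor (names and values are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: storefront
      labels:
        app: storefront
    spec:
      replicas: 2                        # scaling factor per location
      selector:
        matchLabels:
          app: storefront
      template:
        metadata:
          labels:
            app: storefront
        spec:
          containers:
            - name: storefront
              image: registry.example.com/storefront:2.3.1
              resources:                 # system resource allocations
                requests:
                  cpu: 250m
                  memory: 256Mi
                limits:
                  cpu: "1"
                  memory: 512Mi
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: storefront
    spec:
      selector:
        app: storefront                  # service discovery: routes traffic to healthy pods
      ports:
        - port: 80
          targetPort: 8080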

Let’s look at how this would work.

Deploying an Application on Section using the Kubernetes Edge Interface (KEI)

Section’s patent-pending KEI enables application teams to use standard Kubernetes tooling to deploy application workloads across a distributed edge as though it were a single cluster. KEI lets development teams already building Kubernetes applications continue using familiar tools and workflows (e.g. kubectl, Helm), yet deploy their application to a superior multi-cloud, multi-region and multi-provider network.

Teams interact with deployed applications as though running on a single cluster, while Section’s patented Adaptive Edge Engine (AEE) employs policy-driven controls to automatically tune, shape and optimize application workloads in the background across Section’s Composable Edge Cloud.
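In practice, “as though running on a single cluster” means pointing standard tooling at the KEI endpoint and applying ordinary manifests; nothing about the resources themselves changes. A minimal sketch, where section-kei is an illustrative kubeconfig context name (see the KEI documentation for the actual endpoint setup):

    # The same commands you would run against a single cluster, pointed at KEI:
    #   kubectl --context section-kei apply -f storefront.yaml
    #   helm upgrade --install storefront ./chart --kube-context section-kei
    # ("section-kei" is an illustrative context name.)
    apiVersion: v1
    kind: Namespace
    metadata:
      name: storefront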

Getting started with KEI is quick and easy.

[Figures: getting started with KEI, steps 1–3]

For a more in-depth look at getting started with KEI, including what you can do with the interface, how to use it, and an overview of supported resources, visit the KEI Documentation.

If you’d like to chat with a solutions engineer to understand how the Section Edge Platform can work for you, contact our team.

Ready to Jump In?

Start realizing the benefits of Edge with no upfront commitments.