Cloud-Native principles can help make edge computing business and operational models more viable.
In this post, we will look at why cloud-native has become such a popular approach to building modern applications and the benefits of bringing cloud-native principles to the edge.
Cloud-Native design principles increasingly guide the design of modern applications built to run in public, private and hybrid cloud architectures.
Cloud-native applications are designed to leverage the power of the cloud, take advantage of its ability to scale, and quickly recover in the event of server failure. The increased number of individual software modules and the demand to efficiently manage them in cloud-native environments has led to widespread adoption of container technology and dynamic orchestration of the containers to optimize resource utilization. This leads to a more flexible architecture and faster application delivery, helping to drive digital transformation.
The Cloud Native Computing Foundation (CNCF) definition of cloud-native:
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
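The declarative, self-healing style the CNCF definition describes can be illustrated with a toy reconciliation loop, the pattern Kubernetes controllers use: you declare a desired state, and the system continuously computes the actions needed to converge toward it. This is a minimal sketch; the names and data shapes are illustrative, not a real Kubernetes API.

```python
from dataclasses import dataclass

@dataclass
class DesiredState:
    replicas: int

def reconcile(desired: DesiredState, running: list[str]) -> list[str]:
    """Return the actions needed to converge the running set toward the desired state."""
    actions = []
    # Scale up: create replicas until the observed count matches the spec.
    for i in range(len(running), desired.replicas):
        actions.append(f"create replica-{i}")
    # Scale down: delete any surplus replicas.
    for name in running[desired.replicas:]:
        actions.append(f"delete {name}")
    return actions

# A real controller runs this loop continuously; here we show one pass.
print(reconcile(DesiredState(replicas=3), ["replica-0"]))
# ['create replica-1', 'create replica-2']
```

Because the loop compares observed state against declared state on every pass, a crashed replica is simply recreated on the next iteration, which is where the "quickly recover in the event of server failure" property comes from.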
There are many benefits to a cloud-native approach, notably:
- Gains in efficiency due to easier development and deployment than traditional and legacy models.
- Composability (i.e. the ability to build applications from component parts) and reusability (through ready-to-use infrastructure, reducing development complexity and offering control, visibility and self-service for developers).
- Scalability due to the microservices architecture (since each microservice handles a specific function within an organization, the application can be scaled by creating more instances of only the services that are necessary to handle demand).
- The potential for innovation due to the cultural movement away from segmented development practices to a more interconnected, agile approach.
6 Benefits of Bringing Cloud-Native Principles to the Edge
“Cloud native can help organisations fully leverage edge computing by providing the same operational consistency at the edge as it does in the cloud. It offers high levels of interoperability and compatibility through the use of open standards and serves as a launchpad for innovation based on the flexible nature of its container orchestration engine. It also enables remote DevOps teams to work faster and more efficiently.” - Priyanka Sharma, General Manager, CNCF
There are many advantages to bringing cloud-native principles to the edge. Here are six of the main benefits:
1. Using industrialized and proven capabilities
Developers can build on the momentum that CNCF (with projects like CloudEvents and NATS) and other organizations (such as the Reactive Foundation and the Reactive Manifesto) have created toward defining open standards, enabling greater consistency, accessibility, portability and productivity.
The use of cloud-native open source projects (Kubernetes, Docker, etc.) has similar advantages, helping provide consistency and resiliency while preventing businesses from becoming locked in to a single supplier and its legacy products.
2. Ease of deployment
A cloud-native approach to edge extends the principles supporting a continuous integration/continuous deployment (CI/CD) model, which can streamline the delivery of code changes, enabling more frequent changes and updates. This includes completing faster rollbacks and being able to more quickly fix edge deployments that break or introduce bugs.
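The fast-rollback property can be pictured with a toy model of a deployment's revision history, mirroring the semantics of `kubectl rollout undo` (where undoing re-deploys the previous revision as a new one). The class and names here are purely illustrative, not a real Kubernetes client API.

```python
class RolloutHistory:
    """Toy model of a deployment's revision history (illustrative only)."""

    def __init__(self, initial_version: str):
        self.revisions = [initial_version]

    @property
    def live(self) -> str:
        """The currently deployed revision."""
        return self.revisions[-1]

    def deploy(self, version: str) -> None:
        # Each rollout appends a new revision rather than mutating the old one.
        self.revisions.append(version)

    def rollback(self) -> str:
        # Undo re-deploys the previous revision as a fresh one,
        # which is how `kubectl rollout undo` behaves.
        if len(self.revisions) > 1:
            self.revisions.append(self.revisions[-2])
        return self.live

history = RolloutHistory("v1")
history.deploy("v2")          # a broken edge deployment goes live
print(history.rollback())     # v1 is redeployed -> prints "v1"
```

Because every revision is kept, a bad change at a remote edge site can be reverted without rebuilding anything, which is what makes frequent updates safe.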
Observability is essential for running edge workloads to maximize performance, security and efficiency. Kubernetes provides visibility into production workloads through its built-in resource metrics pipeline, enabling real-time insights and opportunities to optimize performance and efficiency. For a richer set of metrics, you can run a full metrics pipeline such as Prometheus.
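Prometheus exposes its data through an HTTP API whose instant-query endpoint (`GET /api/v1/query`) returns a documented JSON shape. Querying it requires a live Prometheus server, so this sketch just parses a canned response in that format; the instance labels and values are made up for illustration.

```python
def parse_instant_query(payload: dict) -> dict[str, float]:
    """Map each series' 'instance' label to its sampled value,
    given the JSON body of a Prometheus /api/v1/query response."""
    if payload.get("status") != "success":
        raise ValueError("query failed")
    results = payload["data"]["result"]
    # Each result carries a [timestamp, "value"] pair; the value is a string.
    return {r["metric"].get("instance", "unknown"): float(r["value"][1])
            for r in results}

# Canned response in the documented instant-query format (values illustrative).
sample = {
    "status": "success",
    "data": {
        "resultType": "vector",
        "result": [
            {"metric": {"instance": "edge-nyc:9100"}, "value": [1700000000, "0.42"]},
            {"metric": {"instance": "edge-lon:9100"}, "value": [1700000000, "0.17"]},
        ],
    },
}
print(parse_instant_query(sample))
```

In practice the same parsing works for any PromQL instant query, which is what lets one dashboard or alerting rule cover many distributed edge locations at once.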
3. Flexibility of the container orchestration engine
Just as in the cloud, Kubernetes enables organizations to run containers efficiently at the edge. There are three broad approaches to using Kubernetes in an edge-based architecture; the third, a hierarchical cloud-plus-edge design using Virtual Kubelet as a reference architecture, offers particular flexibility in how edge resources are consumed.
Flexible tooling allows developers to interact with the edge how and where they need to, reducing the complexities linked to running compute across distributed edge locations.
“Cloud native microservices provide an immensely flexible way of developing and delivering fine-grain service and control.” - William Fellows, Co-Founder and Research Director of 451 Research
4. Scalability
In order for distributed edge systems to be economically viable, there needs to be a scalable way to manage them. Cloud-native principles present the opportunity for unified large-scale application delivery, maintenance and control across distributed edge devices.
The entire deployment can be replicated to different sites using robust automation. The Kubernetes control plane, for instance, can handle tens of thousands of containers running across hundreds of nodes, which allows applications to scale as needed, ideally suiting the management of distributed edge workloads.
Scaling through software as opposed to people leads to reduced costs and higher resiliency.
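Replicating an entire deployment to many sites through automation amounts to stamping the same desired state out across every location. A minimal sketch, with hypothetical site names and manifest fields:

```python
def replicate(manifest: dict, sites: list[str]) -> dict[str, dict]:
    """Produce a per-site copy of the same desired state.
    Manifest shape and site names are illustrative."""
    return {site: {**manifest, "site": site} for site in sites}

# One declared state, pushed to every edge location by software, not by hand.
rollout = replicate({"app": "waf", "replicas": 2}, ["nyc", "lon", "syd"])
print(rollout["syd"])
```

Adding a new edge location then means appending one entry to the site list, which is how scaling through software rather than people keeps marginal cost low.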
5. Efficiency gains
At the edge, cost margins matter in making business models profitable. Dynamic scaling eliminates the need to consume resources that won’t be utilized, and extra resources can be switched off once spikes subside. A great example of this type of context-aware scaling and workload scheduling is Section’s Adaptive Edge Engine.
Cloud-Native enables an OpEx model, allowing providers to take a demand-based approach, using flexible workflows and tailored resource utilization to meet each application's unique requirements.
6. Reduced latency
Cloud-Native principles applied to the edge can further reduce latency, benefiting both from running workloads closer to end users and from cloud-native tooling that boosts performance. This makes them an essential enabler for edge-critical applications that depend on low latency.
The Kubernetes Horizontal Pod Autoscaler, for instance, can respond to latency and volume metrics when deciding how to scale the number of pods up or down, while traffic routing can then direct requests to the most optimal edge locations to reduce latency.
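The Horizontal Pod Autoscaler's documented scaling rule is simple enough to work through by hand: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). The latency numbers below are illustrative; latency would be supplied to the HPA as a custom metric.

```python
from math import ceil

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """The scaling rule documented for the Kubernetes HPA:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    return ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 200ms request latency against a 100ms target -> scale to 8.
print(desired_replicas(4, 200, 100))  # 8
# When the spike subsides, the same rule scales back down: ceil(8 * 40/100) = 4.
print(desired_replicas(8, 40, 100))   # 4
```

The same formula drives both directions, which is why extra edge capacity spun up for a spike is released automatically once demand falls, feeding directly into the efficiency argument above.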
Section’s Experience of Deploying Cloud-Native Principles at the Edge
Cloud native edge solutions are still comparatively rare. IDC predicts this will change “as cloud-native edge solutions become more widely available and mature and we have more use cases take advantage of cloud as part of their design.”
At Section, three of CNCF's cornerstone open source projects - Kubernetes, Prometheus, and CoreDNS - are at the heart of our Edge Compute Platform. Kubernetes has allowed us to improve and scale our platform in ways that would have been impossible without it. With Kubernetes, we can also remain infrastructure-agnostic and manage a diverse set of workloads running across a vendor-neutral global network of leading infrastructure providers. In addition, Prometheus acts as our monitoring solution, informing our ongoing decision-making, while CoreDNS is used both inside our Kubernetes clusters and separately to power our Section Hosted DNS offering.
Other notable cloud-native technologies on our radar include NATS, to replace our messaging system from pre-containerized days, and Jaeger/OpenTracing, to augment our observability capabilities beyond simple logging and metrics.
We continue to find that cloud-native solutions help power greater flexibility, improved resource efficiency, better scalability, lower downtime and improved performance across the entire production lifecycle.