What To Consider When Developing Workloads at the Edge

As the hype around edge computing continues to mount, engineering teams across the globe are asking themselves how they can achieve lower latency and greater efficiency by migrating more processing to the edge. From retail environments and fast food chains to wind energy and autonomous vehicles, the use cases for edge computing are seemingly endless. For developers building and operating in these increasingly distributed systems, there are some fundamental workload considerations to keep in mind at the edge.

Edge Workload Categories

There are two main types of workloads for developers to consider when it comes to edge computing: inline (or in-band) and out-of-band. The more straightforward of the two are inline workloads, which could also be categorized as synchronous or transactional. Within this kind of workload, a client issues a request and blocks on the response, such as in the case of static file delivery.

Out-of-band workloads are significantly more complex. Also thought of as asynchronous or non-transactional, these types of edge workloads contain custom logic to handle processing immediately upon ingestion of data, rather than sending it back to a centralized infrastructure to be processed. When out-of-band workloads are introduced at the edge, the entire computing model changes. Excitingly, this is where some of the most promising (and challenging!) possibilities for edge computing lie.
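The contrast between the two patterns can be sketched in plain Python. The function and field names here are illustrative, not any platform's actual API, and the in-memory queue stands in for what would be a durable message pipeline in practice:

```python
import queue
import threading

# Synchronous pattern: the caller blocks until a response comes back.
def handle_request(path, static_files):
    """Serve a static file: the client waits for this call to return."""
    return static_files.get(path, "404 Not Found")

# Asynchronous pattern: data is processed on ingestion at the edge,
# and only a reduced result is forwarded to central infrastructure.
events = queue.Queue()

def ingest(event):
    events.put(event)  # returns immediately; no caller is blocked

def edge_worker(forward):
    """Aggregate raw events locally, forwarding only a summary."""
    total = 0
    while True:
        event = events.get()
        if event is None:           # sentinel: flush the summary and stop
            forward({"count": total})
            return
        total += event["value"]     # custom logic runs at the edge
```

The essential difference: the first function holds a client connection open, while the second pair decouples ingestion from processing entirely.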

Edge Workload Components

From a developer’s perspective, there are several components that guide decisions when approaching edge workload logic.

Web Servers
To date, load balancing and reverse proxies have been the primary distribution methods for legacy CDNs. However, as edge computing enters the mix and workloads become more complex, software architects are increasingly turning to networks of containerized microservices to deliver greater flexibility and scalability.
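As a rough sketch (the service names are invented), the reverse-proxy side of this picture reduces to prefix-based routing, which is approximately how most proxies choose an upstream:

```python
# Minimal sketch of reverse-proxy routing: map path prefixes to upstream
# services, as a CDN or an edge microservice router might.
ROUTES = {
    "/api/":    "users-service:8080",   # hypothetical container names
    "/images/": "image-resizer:8080",
    "/":        "origin-server:443",
}

def resolve_upstream(path):
    """Pick the longest matching prefix, as most reverse proxies do."""
    matches = [prefix for prefix in ROUTES if path.startswith(prefix)]
    return ROUTES[max(matches, key=len)]
```

In a microservices deployment, each entry in that table becomes an independently scalable container rather than a single monolithic origin.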

Alternative Triggers
‘Serverless functions’ have seen increased adoption as a method of running logic closer to the end user. When serverless functions are combined with edge cron jobs that gather and send only the essential data back to the centralized infrastructure, this is where edge computing starts to take shape. As market demands continue to drive the need for increasingly specialized infrastructure, developers are now turning to a ‘serverless for containers’ model to run their containerized microservices at the edge without the hassle of managing the allocation and provisioning of servers in close proximity to users.
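A minimal sketch of such an edge cron job, assuming a hypothetical `send_upstream` callback supplied by the platform: the job reduces raw samples to a few essential statistics before anything leaves the node.

```python
import statistics

def edge_cron_flush(samples, send_upstream):
    """Periodically invoked job: reduce locally buffered samples to the
    essentials before shipping them to the centralized infrastructure."""
    if not samples:
        return                       # nothing buffered; send nothing
    summary = {
        "count": len(samples),
        "mean": statistics.fmean(samples),
        "max": max(samples),
    }
    send_upstream(summary)           # only the summary leaves the edge
    samples.clear()                  # raw data never leaves the node
```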

State Management
Currently, there are several different state management models that people refer to in relation to the edge: ephemeral, persistent, and distributed. Distributed state management is the most noteworthy and challenging model for edge computing. Consider a common use case for distributed state at the edge: web application firewalls, in which security administrators strive to block traffic at each endpoint, and as soon as one endpoint detects malicious traffic, the other endpoints need to know about it concurrently. When state is local (such as when an IoT device manages its own state) or the workload is stateless, edge computing is fairly straightforward. Likewise, when stateful computing is centralized in a cloud data center, it is relatively simple. However, attempting to perform stateful computing at the network or infrastructure edge is not an easy task: managing, coordinating and synchronizing state across a range of edge locations or nodes with a guarantee of consistency is difficult; some have even defined edge computing as “a distributed data problem”.
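A toy model of the WAF scenario makes the coordination problem concrete. The class and method names are invented, and a real system would also need delivery guarantees, ordering, and conflict resolution, which this naive broadcast deliberately ignores:

```python
class EdgeNode:
    """Toy model of distributed WAF state: when one node blocks an IP,
    it broadcasts the decision so peers converge on the same blocklist."""

    def __init__(self, peers):
        self.blocklist = set()
        self.peers = peers              # shared registry of all nodes

    def detect_malicious(self, ip):
        self.apply_block(ip)
        for peer in self.peers:         # naive broadcast: no retries,
            if peer is not self:        # no ordering, no partitions
                peer.apply_block(ip)

    def apply_block(self, ip):
        self.blocklist.add(ip)

    def allow(self, ip):
        return ip not in self.blocklist
```

Everything hard about distributed state hides in that broadcast loop: what happens when a peer is unreachable, receives updates out of order, or holds a conflicting decision.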

Every edge workload must be built to receive messages through low-latency global message delivery, and its APIs must be extensible enough to scale to new message types and use cases.
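One way to picture this is a tiny publish/subscribe bus. This is a sketch rather than any particular product's API, but it shows why per-topic handlers make a message API extensible: new message types can be supported without touching the delivery code.

```python
class EdgeMessageBus:
    """Tiny pub/sub sketch: workloads register handlers per topic, so
    new message types extend a node without changing delivery logic."""

    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        # Deliver to every subscriber; topics with no subscribers
        # are silently dropped rather than raising an error.
        for handler in self.handlers.get(topic, []):
            handler(message)
```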

When building distributed systems, traceability across the entire stack is essential. Developers and operators need effective mechanisms to identify and diagnose issues when they occur, and to determine where changes will optimize performance.
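A minimal sketch of trace propagation, assuming nothing beyond the standard library: each hop records the same trace ID, so a single request can be followed across every service it touches.

```python
import time
import uuid

def traced_call(trace_id, span_name, fn, log):
    """Wrap a unit of work so every hop records the shared trace ID,
    letting operators reconstruct a request's path across the stack."""
    start = time.perf_counter()
    try:
        return fn()
    finally:
        log.append({
            "trace_id": trace_id,
            "span": span_name,
            "duration_ms": (time.perf_counter() - start) * 1000,
        })

# A request enters at the edge; downstream hops reuse the trace ID.
def handle(log):
    trace_id = str(uuid.uuid4())
    return traced_call(trace_id, "edge-handler",
                       lambda: traced_call(trace_id, "origin-fetch",
                                           lambda: "response", log),
                       log)
```

Production systems standardize this idea (for instance, propagating the ID in request headers), but the principle is the same: correlate every span of work by a shared identifier.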

Needless to say, edge computing is evolving in an increasingly complex landscape, and the considerations outlined in this article are just the tip of the iceberg. As an edge compute platform, Section provides tooling intended to empower developers at the edge and to solve for these challenges, opportunities and beyond.
