
Adaptive Edge Engine

The Adaptive Edge Engine (AEE) is a collection of components that manage different aspects of dynamic, multi-provider, global edge deployments.

  1. LocationOptimizer manages the selection of locations where the workloads run.
  2. HealthChecker monitors workload health.
  3. TrafficDirector manages the routing of traffic to healthy workloads wherever they are deployed.

The above pieces work together in the following flow:

  • The AEE selects locations for your workload based upon your strategy definition in your location-optimizer ConfigMap (see below). These locations may be ones where the workload is currently running, or they may be new ones.
  • Next, the workload is deployed to any new selected locations. Workloads are tested, and when they become healthy they are "ready".
  • Finally, once a workload is ready, traffic becomes "directed", meaning we update records on the Internet so that traffic will reach the workload.
  • If the selected locations are the same as your current locations, then you will see no changes in these panels.

Other components manage additional dimensions of your Project in the background such as location orchestration and scaling.


Edge Location Strategies

The LocationOptimizer calls a strategy that provides the data and logic to obtain a solution: a set of locations where your Project will be deployed and to which traffic will be directed.

The SolverServiceV1 strategy implements an algorithm that selects a set of server locations for your containers to minimize the aggregate distance traveled by the incoming data, subject to additional considerations for the workload. Users pass parameters to the strategy to tune its operation toward the desired results.

The SolverServiceV1 strategy obtains its solution according to the parameters passed to it. A generic LocationOptimizer configuration object looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: location-optimizer
  namespace: default
data:
  strategy: |
    {
      "strategy": "SolverServiceV1",
      "params": {
        "policy": "dynamic",
        "mustInclude": [],
        "mustNotInclude": [],
        "minimumLocations": 2,
        "maximumLocations": 5,
        "minimumUsagePerLocation": 20
      }
    }
  • Strategy: The algorithm + data used to solve the location problem.
  • Params: (not all parameters are valid for every policy)
    • policy: a wrapper for variations on the strategy to obtain desired results. Must be included.
    • mustInclude: conditions that must be present in the solution. Exact interpretation depends on the policy. Default is NULL.
    • mustNotInclude: conditions that must not be present in the solution. Default is NULL.
    • minimumLocations: this establishes the minimum number of locations for your containers. The default is 2.
    • maximumLocations: This puts an upper limit on the locations that can be selected for your containers. This is a way to avoid accidentally deploying your containers to too many (excessively costly) locations. The default is 5.
    • minimumUsagePerLocation: This is another way to limit your exposure to too many locations. This puts a lower threshold on the aggregate usage (“traffic”) served out of a single location. The default is 20 http requests per second (rps). For example, if your containers experience a total of 50 rps at a given moment and you are using the default value of this parameter, the maximum number of locations you would obtain is 2. That is the maximum number of locations across which the traffic can be split and still keep all locations at or above the threshold. Reducing this threshold can result in your containers being deployed to more locations, but it also makes the selected locations more sensitive to variability (noise) in the traffic signal. If you set this too low, you may get highly variable locations sets and frequent deploy/undeploys.
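The interaction between these three parameters can be sketched in Python. This is an illustration of the documented relationship (traffic divided by the per-location threshold, clamped to the configured minimum and maximum), not the actual SolverServiceV1 algorithm:

```python
import math

def max_selectable_locations(total_rps: float,
                             minimum_locations: int = 2,
                             maximum_locations: int = 5,
                             minimum_usage_per_location: float = 20.0) -> int:
    """Illustrative upper bound on the number of locations that could be
    selected for a given aggregate traffic rate, per the parameter
    descriptions above. Not the real solver implementation."""
    # Maximum number of locations that keeps every location at or above
    # the per-location usage threshold.
    by_usage = math.floor(total_rps / minimum_usage_per_location)
    # Clamp to the configured minimum/maximum location counts.
    return min(maximum_locations, max(minimum_locations, by_usage))

# 50 rps at the default 20 rps threshold allows at most 2 locations.
print(max_selectable_locations(50))   # 2
# Low-traffic workloads still get minimumLocations.
print(max_selectable_locations(5))    # 2
```

Lowering `minimum_usage_per_location` in this sketch raises `by_usage`, which mirrors how reducing the threshold can spread a workload across more locations.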

Policy descriptions


development

This policy results in a set of locations dedicated to non-production and micro-environments.


anycast

This policy is available on the Enterprise plan. It should only be used under specific circumstances and requires supporting actions such as changing your DNS records. See the Anycast explanation to determine if and how you should use our Anycast networks.

This policy results in a set of locations that are part of our Anycast network.


static

This policy results in a fixed set of locations that meet additional, stated requirements.


dynamic

This policy uses usage data (e.g., HTTP requests per second) to find an optimal set of locations for your workload subject to your parameter specifications. With the dynamic policy, the selected set of locations is expected to change over time as traffic patterns to your workload change.

Example policies and configurations

“policy”: “development”

  • Deploys workload to canary locations reserved for development/testing
  • No other options are honoured as this policy returns a predetermined set of locations
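Because no other options are honoured, a complete strategy value for this policy can be minimal. The following sketch follows the SolverServiceV1 JSON shape used by the other policies on this page:

```json
{"strategy":"SolverServiceV1","params":{"policy":"development"}}
```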

“policy”: “anycast”

  • Deploys workload to Anycast locations.
  • The "network" option indicates which Anycast IP space you are using in your DNS records as explained here.

“policy”: “static”

{"strategy":"SolverServiceV1","params":{"policy":"static", "mustInclude":[{"region":"europe"},{"region":"oceania"}]}}
  • Deploys workload to a fixed set of locations that meet the mustInclude conditions. When the static policy is first applied, the SolverService solves for a set of locations that meets the specifications. This set continues to be used as long as it meets the specifications; if it does not, as when the specifications have been changed, a new set is obtained.

  • Required inputs

    • mustInclude: These conditions define the desired end state; we select one location per condition specified in the mustInclude array. For example, a static policy with mustInclude = [{"region": "europe"},{"region": "northamerica"},{"region": "northamerica"}] will result in three locations: one in Europe and two in North America. See table of available terms here.
  • Optional

    • chooseFrom: This feature is only available on the Enterprise plan.

“policy”: “dynamic”

  • Deploys workload to a set of locations that meet additional, stated requirements but the set of locations changes in response to the incoming usage (traffic) signal. See here for more information on the traffic signal.
  • All other parameters are optional. If left null, their default values (stated above) are used.
  • Optional inputs
    • minimumLocations: minimumLocations guarantees that the workload is deployed to at least this many locations, for availability.
    • maximumLocations: maximumLocations can be used to set a coarse upper limit on the number of locations to control costs.
    • minimumUsagePerLocation: This is also a means of controlling the upper limit of a workload’s footprint. When this parameter is 20, this means that each location must serve at least 20 requests per second to warrant selection. Low-traffic workloads may never exceed this threshold, but they will always be in at least as many locations as indicated by the minimumLocations parameter value. High-traffic workloads can expand into additional locations based on the traffic rates. Reducing this number can increase the number of locations, but it also makes your deployment more sensitive to variability (noise) in the traffic signal, which can be undesirable.
    • mustInclude: mustInclude conditions represent “include this in my solution” conditions. Multiple conditions can be specified as key:value pairs. See table of available terms here. With the dynamic policy, your selected locations may include additional locations determined by traffic.
    • mustNotInclude: These represent conditions that must not appear in the solution set. They are specified in the same manner as the mustInclude conditions.
    • chooseFrom: This feature is only available on the Enterprise plan.
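Putting these together, a dynamic configuration that overrides the defaults might look like the following sketch. The parameter values are illustrative, and the mustInclude condition reuses the region term shown in the static example above:

```json
{"strategy":"SolverServiceV1","params":{"policy":"dynamic","minimumLocations":3,"maximumLocations":8,"minimumUsagePerLocation":10,"mustInclude":[{"region":"europe"}]}}
```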


TrafficDirector

The TrafficDirector is responsible for routing traffic to edge deployments and has multiple strategies it can execute to manage this. Two DNS-based strategies are currently available; the default is a geo-DNS strategy that selects routes based on geographic proximity.


HealthChecker

The HealthChecker executes one or more strategies for each workload to determine whether the workload has been deployed/scheduled successfully and is ready to accept traffic. The HealthChecker also executes additional background strategies to monitor the health of the locations hosting workloads.

Two strategies are currently available to the HealthChecker configuration. Those strategies are:

  • deploymentMetricsHealthCheck: Monitors platform metrics to detect that the minimum replicas per container are running for each deployment.
  • envHTTPHealthCheck: An agent queries the workload with an HTTP POST request, then monitors and interprets the response.

The deploymentMetricsHealthCheck is included by default for all workloads. The envHTTPHealthCheck is included by default for HTTP workloads.
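For reference, a workload endpoint that can answer an HTTP POST health probe could be sketched as follows. The /healthz path and JSON response body are illustrative assumptions, not the platform's actual probe contract:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal handler that answers an HTTP POST health probe."""

    def do_POST(self):
        # The probe path and response body shown here are assumptions;
        # consult your health-check configuration for the real contract.
        if self.path == "/healthz":
            body = b'{"status":"ok"}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        # Silence per-request logging for this sketch.
        pass

# To serve: HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

An unhealthy response (non-2xx status, or a body the agent cannot interpret) would keep the workload from being marked ready.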