Kubernetes Edge Interface (KEI)
KEI - the Kubernetes Edge Interface - is the system that allows you to use standard Kubernetes tooling to deploy applications to the Section Edge.
For more information about Kubernetes, please refer to the Kubernetes Documentation.
KEI is a Kubernetes-compatible API that implements the Kubernetes Resources supported by the Section Edge.
What can you do with the KEI?
If you've created a containerized application, KEI is a simple way to:
- Deploy your container to multiple locations
- Configure service discovery, so that your users will be routed to the best container instance
- Define more complex applications, such as composite applications that consist of more than one container
- Define the resource allocations for your system
- Define scaling factors, such as the number of containers per location, and what signals should be used to scale in and out
- Maintain your application code, configuration and deployment manifests in your own code management systems and image registries
- Control how the Adaptive Edge Engine schedules containers, performs health management, and routes traffic
How do you use KEI?
Because KEI is compatible with the Kubernetes API, you can use standard Kubernetes tools to interact with the system.
The most common tool is kubectl, which is the tool this documentation is centered around.
When you use kubectl you will create a configuration context, which connects your tool to the Section KEI. From there you can continue to use kubectl as you normally would with a single cluster.
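As a sketch, a kubeconfig context for KEI might look like the following. The server URL, token, and all names here are placeholders for illustration, not actual Section endpoints or credentials:

```yaml
# Hypothetical kubeconfig entries; the server URL, token, and names
# are placeholders, not real Section values.
apiVersion: v1
kind: Config
clusters:
- name: section-kei
  cluster:
    server: https://kei.example.invalid   # placeholder endpoint
users:
- name: section-user
  user:
    token: <your-api-token>               # placeholder credential
contexts:
- name: section
  context:
    cluster: section-kei
    user: section-user
current-context: section
```

With a context like this in place, `kubectl config use-context section` would point your tooling at KEI, and subsequent kubectl commands behave as they would against a single cluster.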
As KEI is Kubernetes API compatible, you'll find that many existing tools work without complication. For example, you could use helm to manage your system.
Check out Getting started for a step-by-step guide.
What resources are supported by KEI?
In your manifests you can use the following Kubernetes Resources:
- Deployment
- Service (ClusterIP and ExternalName, but not NodePort or LoadBalancer)
- HorizontalPodAutoscaler
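As a sketch of how these resources might look in a manifest, the fragment below pairs a ClusterIP Service with a HorizontalPodAutoscaler. All names, ports, and thresholds are illustrative placeholders, not Section defaults:

```yaml
# Illustrative manifest using supported resource types.
# Names, ports, and scaling thresholds are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP          # NodePort and LoadBalancer are not supported
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app           # a Deployment defined elsewhere in your manifests
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```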
KEI will create and manage the following resources for you, i.e. you cannot create them yourself:
To specify location strategies, KEI recognizes a ConfigMap resource with a specific reserved name.
To engage KEI's HTTP Ingress, KEI recognizes a Service resource with a specific reserved name.
When you define your Deployment objects, you can specify the CPU and RAM requests or limits for each container instance.
Use the standard Kubernetes methods for specifying your container's requirements.
- Please refer to the product pricing information to understand how your requests can impact your billing.
- Section may alter your YAML to ensure that request = limit.
- If request and limit are not equal, Section will use the higher of the two values.
- If only one of request or limit is specified, that value will be used for both request and limit.
- You cannot request ephemeral storage directly. Section will automatically apply the ephemeral storage limits when the deployment is created.
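The notes above can be sketched as a container resources fragment. The values here are placeholders; setting requests equal to limits, as shown, means Section will not need to alter your YAML:

```yaml
# Fragment of a Deployment's container spec; CPU and memory
# values are placeholders, not recommended sizes.
resources:
  requests:
    cpu: "500m"
    memory: "512Mi"
  limits:
    cpu: "500m"       # equal to the request, so no rewrite is needed
    memory: "512Mi"
```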
What storage systems are available for KEI workloads?
Section only supports ephemeral storage.
Section will automatically apply ephemeral storage limits to your containers based on the container size you've selected.
Your application can use the ephemeral storage in its local filesystem to perform disk IO activities.
"Ephemeral" means that there is no long-term guarantee about durability. You should take this into consideration in your application design.