
Deploy a Container to Section

Once you have created an environment for KEI, you can deploy your first container.

Use kubectl config to create a context

Let's configure kubectl to communicate with your environment.

  • If you haven't already, obtain your API token.
  • Use the Section Console to navigate to your environment, where you will see your KEI_ENVIRONMENT_URL.
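The commands that follow use both of these values. One convenient (optional) approach is to export them as environment variables first; the values below are placeholders, not real credentials:

```shell
# Placeholder values: substitute the API token and environment URL
# from your own Section Console. These are not real credentials.
export SECTION_API_TOKEN="<your-api-token>"
export KEI_ENVIRONMENT_URL="<your-kei-environment-url>"
```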

Define Section as a cluster using your KEI_ENVIRONMENT_URL:

  • Ubuntu

    kubectl config set-cluster section \
    --certificate-authority=/etc/ssl/certs/ca-certificates.crt \
    --server=<KEI_ENVIRONMENT_URL>

  • MacOS

    kubectl config set-cluster section \
    --certificate-authority=/usr/local/etc/ca-certificates/cert.pem \
    --server=<KEI_ENVIRONMENT_URL>

If you don't have the file /etc/ssl/certs/ca-certificates.crt (for example, on Windows without WSL), you can obtain an equivalent file here: CA certificates

Save your API token

kubectl config set-credentials section-user \
--token=<your-api-token>

Create the execution context

kubectl config set-context my-section-application \
--cluster=section \
--user=section-user

Switch to the new context

kubectl config use-context my-section-application

Validate your setup

  kubectl version

More information about clusters, credentials, and contexts can be found in the Kubernetes documentation.

Deploy a web server to the Section Edge for HTTP workloads

Next, we'll place an nginx web server at the edge using a Kubernetes Deployment object. The nginx container used in this example comes from the official nginx image on the Docker Hub registry.

Create a Deployment object

Create a YAML file, such as my-first-edge-application.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.6
        imagePullPolicy: Always
        resources:
          limits:
            memory: ".5Gi"
            cpu: "500m"
          requests:
            memory: ".5Gi"
            cpu: "500m"
        ports:
        - containerPort: 80

Use kubectl to apply your Deployment

Deploy your application

  kubectl apply -f my-first-edge-application.yaml

See your deployment running on Section

  kubectl get deployment nginx-deployment

See the pods running on Section's network

  kubectl get pods -o wide

The -o wide switch reveals where each pod is running according to the default Adaptive Edge Engine (AEE) location optimization strategy. Ultimately you will have NxM pods running in the Section Composable Edge Cloud, where N is the number of replicas, and M is the number of edge locations where your workload is present.
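To make the NxM arithmetic concrete, here is a small sketch with hypothetical numbers (your actual replica count and number of edge locations will differ):

```shell
# Hypothetical example: replicas: 2 in the Deployment spec,
# and the workload present in 3 edge locations.
replicas=2
locations=3
echo "total pods: $((replicas * locations))"   # prints "total pods: 6"
```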

Next, let's set up ingress to your web server.