Section operates an edge compute platform. At first glance, you might think that’s just a content delivery network (or CDN).
The distinction between what we call an edge compute platform and the traditional CDN you might be familiar with really comes down to choice. The edge compute platform allows users to run any containerized workload that meets the platform module contract. In the simplest terms, this means you can run the tools you need in front of your origin (or at the edge of your application boundary).
While the tools/products you might typically find in a CDN can be run in the edge compute platform (things like Varnish Cache 4.x through 6.x, image optimizers like Cloudinary or Kraken, or security products like PerimeterX Bot Defender or Signal Sciences Web Application Firewall), the features you will find in the edge compute platform cannot necessarily be run in a traditional CDN – at least not universally. In effect, the edge compute platform flips the hardware-centric model of a CDN to a software-centric model of a platform. The result is a choice when it comes to what you run at your edge.
One example of a feature the edge compute platform offers that really separates it from traditional CDNs is the ability to deploy custom code at the edge, or “Bring Your Own Workload”.
Bring Your Own Workload
Bring Your Own Workload refers to the capability for a customer to leverage the edge compute platform to write and maintain their own logic and deploy it at the edge of their application boundary. The edge compute platform then takes responsibility for orchestrating its global distribution, availability, elasticity, and lifecycle. You build it, the edge compute platform runs it.
Before diving into what it might look like to write your own workload and deploy it on the edge compute platform, a discussion of the platform and how it operates might be useful.
Edge Compute Platform
Section’s Edge Compute Platform is based on the orchestration technology Kubernetes. The platform allows containerized workloads to be run in a “daisy-chain” configuration in front of an origin (your site or application). The goal is that each module (think pod, in Kubernetes terms) in that chain can provide a host of edge services. Some of the edge services powered by these modules provide features like content caching and enrichment, while others provide the security and blocking capabilities found in bot blockers and WAFs.
The platform is responsible for ensuring these workloads are globally orchestrated across the desired Points of Presence (PoPs), scaled based on resource demands, and wired together. The platform, as well as the modules that are deployed for each environment, is controlled using a GitOps-based workflow. In addition to the modules that are inserted into the pipeline, the platform handles activities such as SSL termination, geo enrichment, and routing between different origins and alternate origins.
Developing Your Own Workload
The first thing we really need to understand is the proxy contract for any workload on the platform.
As discussed, the platform takes on the responsibility of orchestrating your workloads’ availability, elasticity, and lifecycle. To do that, the system needs insight into how each workload is operating. The proxy contract provides that insight and tells the platform how to connect to the next module in the chain.
The proxy contract requires:
- The workload should handle unencrypted web traffic (typically, this will be on port 80)
- next-hop:80 resolves to the next proxy upstream in the chain
- All access logs should be written to stdout and all error logs to stderr, in JSON format
- A module validation script should be available at /opt/section/validate.sh
- Configuration files for the module should be in /opt/proxy_config/*
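To make the validation requirement concrete, here is a minimal sketch of what such a script could look like. It is illustrative only (the example-simple module discussed later ships its own validate.sh), and the nginx -t check is an assumption that your workload happens to be nginx-based:

#!/bin/bash
# Minimal illustrative validation script (the platform expects it at /opt/section/validate.sh).
# The platform runs this before the module is allowed to serve traffic; exit 0 means "valid".
set -e

# Example check, assuming an nginx-based workload: confirm the configuration parses cleanly.
nginx -t -c /etc/nginx/nginx.conf

# Any non-zero exit code marks the module invalid and stops the deployment.
exit 0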
Good candidates for edge workloads include things that direct traffic, enrich/decorate the request, or can live entirely at the edge. Less desirable candidates include applications with global persistence requirements or those that run Linux kernel-space code.
How to Build Your Own Workload on the Section Edge Compute Platform
Configuring DevPoP
To begin local workload development, the first thing you need to do is build the Developer PoP (DevPoP).
DevPoP is a local instance of one of the Points of Presence you’ll find in the edge compute platform. You can build the entire PoP and tear it down in minutes. At its core, DevPoP uses Minikube to build a single-node cluster that operates a Section PoP and enables you to fully test your edge before deploying it to your live environment.
First, let’s check that Minikube is up and running. At a bash prompt, type:
bash-4.4$ minikube status
The output should be similar to this:
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
NOTE: If any of these services are not running, revisit configuring DevPoP.
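If Minikube itself isn’t running at all, starting (or restarting) it is usually just:

bash-4.4$ minikube start

(The exact start flags depend on how you originally configured DevPoP, so treat this as a reminder rather than a full setup command.)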
Next, let’s check that DevPoP has been deployed.
bash-4.4$ kubectl get pods -A
The output should be similar to this:
kube-system coredns-6955765f44-65zkk 1/1 Running 7 34d
kube-system coredns-6955765f44-xt77g 1/1 Running 7 34d
kube-system etcd-minikube 1/1 Running 7 34d
kube-system kube-apiserver-minikube 1/1 Running 7 34d
kube-system kube-controller-manager-minikube 1/1 Running 7 34d
kube-system kube-proxy-bgrdj 1/1 Running 7 34d
kube-system kube-scheduler-minikube 1/1 Running 8 34d
kube-system storage-provisioner 1/1 Running 11 34d
kubernetes-dashboard dashboard-metrics-scraper-7b64584c5c-6n5f4 1/1 Running 4 26d
kubernetes-dashboard kubernetes-dashboard-79d9cd965-xjz9n 1/1 Running 7 26d
section-bootstrap bootstrap-c8d8f95fd-wwxpm 1/1 Running 1 3m19s
section-delivery default-http-backend-5dfbff8bc6-c5hkj 1/1 Running 0 3m3s
section-delivery nginx-ingress-controller-shared-mtzmd 1/1 Running 0 3m3s
section-shared api-proxy-678f9df47f-js4fg 1/1 Running 0 3m15s
section-shared event-handler-869cd99465-fl4c9 1/1 Running 0 3m14s
section-shared git-daemon-669c9fc6bb-5tcrc 1/1 Running 0 3m14s
section-shared message-bus-7d48b56994-sqrsm 1/1 Running 0 3m14s
section-shared package-sync-vfgpp 1/1 Running 0 3m15s
section-shared time-sync-c4t8d 1/1 Running 0 3m3s
section-shared webux-f7ffcc568-npkcl 2/2 Running 0 3m15s
NOTE: As the tutorial mentions, each PoP is based on Kubernetes, so you can leverage existing Kubernetes knowledge to inspect the components of the platform. kubectl is the common way an administrator interacts with Kubernetes.
You’re looking for pods in the section-shared namespace. If you don’t see these, you’ll need to configure DevPoP on your Minikube instance by running:
bash-4.4$ minikube ssh "docker run --rm --net=host -v /var/lib/minikube:/var/lib/minikube:ro sectionio/section-init"
You should see several pods start up in the section-shared namespace. Give them a few minutes to stabilize, and you’re ready to move on to deploying your environment.
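One way to watch them come up (assuming your kubectl context is pointed at Minikube) is:

bash-4.4$ watch kubectl get pods -n section-shared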
Deploying an Environment
Next, deploy an existing environment (or create a new one in Section’s Console) to your DevPoP. Once you have the environment, clone it down to your local machine.
git clone https://aperture.section.io/account/9999/application/9999/sample-application.git
Once cloned, open section.config.json. If it contains any modules at this point, remove them from the proxychain; we’ll be deploying initially without any modules.
At this point, the proxy chain in your section.config.json
should look something like this:
{
  "proxychain": [
  ],
  "environments": {
    "Production": {
      "origin": {
        "address": "my-s3-bucket.s3-website.us-east-2.amazonaws.com",
        "host_header": "my-s3-bucket.s3-website.us-east-2.amazonaws.com",
        "enable_sni": false,
        "verify_certificate": false
      }
    },
    "Development": {
      "origin": {
        "address": "my-s3-bucket-dev.s3-website.us-east-2.amazonaws.com"
      }
    }
  }
}
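Before pushing configuration changes, a quick local sanity check that the file is still valid JSON can save a failed deploy. For example, assuming you have jq installed (jq is not part of the Section tooling, just a convenient choice here):

bash-4.4$ jq . section.config.json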
Along with settings such as each environment’s origin and alternate origins, the section.config.json file lists each of the modules you have in your stack and the order in which they’re loaded.
At this point, we have built DevPoP, created an application in Section Console, and pulled it down locally. To deploy this environment to DevPoP, follow the normal workflow found in the Section docs.
It will look something like this:
bash-4.4$ git add .
bash-4.4$ git commit -m "Deploying to DevPop"
bash-4.4$ git push developer-pop
You should be able to test and validate that the site is working. Now it’s time to start building our own workload.
Building Your Workload
For this walkthrough, we’ll be using the example-simple reference, a simple nginx reverse proxy that participates in the request chain. (The reference contains two examples: example-full, a more complete implementation that leverages a go-handler to better manage the application lifecycle, and example-simple, the simple nginx reverse proxy we’ll be using in this tutorial.)
The main files we’ll be using and discussing in example-simple are:
create-module-assets.sh
Dockerfile
example-simple/.section-proxy.yaml
example-simple/prepare.sh
example-simple/validate.sh
example-simple/proxy_config/
example-simple/proxy/nginx.conf
create-module-assets.sh: A simple bash script that takes a single optional argument and builds either example-full or example-simple (for this tutorial, we’ll be using example-simple). The script packages your workload source code and then generates the configuration files required to deploy it onto the platform. Those configuration files are injected into the pod running the git-daemon. We’ll see how that works in just a bit.

Dockerfile: This is a standard Dockerfile. You can use it to configure the container running your workload.

example-simple/.section-proxy.yaml: This file contains the configuration information for your workload. The parts you need to pay particular attention to are the image, names, and container definitions. When Section operates your module, we will tune and recommend additional settings.

example-simple/prepare.sh and example-simple/validate.sh: These scripts are automatically called before the module is deployed during a scaling event (prepare.sh) and before it’s available to serve content (validate.sh). They should contain whatever is required for you to say your workload is prepped and ready to run. If you have no logic to run at these lifecycle stages, they must still be present and return an exit_val of 0. Any other exit_val indicates an invalid condition and will stop the deployment of the module.

example-simple/proxy_config/: A folder that contains (by convention) any files that you’d like deployed alongside your workload. These files are unique to an environment, so if you have an API key or other configuration that may differ between environments, you can deploy it into the module via this folder.

example-simple/proxy/nginx.conf: Files in this folder are available to all modules. nginx.conf is an example of such a file, required when setting up nginx. Unlike the per-environment proxy_config files, this file is common to all deployments of your module.
To build this example-simple module, clone the repo and change directory to its root. You should be in the same directory as create-module-assets.sh.
Run:
bash-4.4$ create-module-assets.sh example-simple
When the command prompt comes back, you should see:
Built: gcr.io/section-io/example-simple:1.0.0-k1
With this successful, you can use Docker to see the available images. To do this, you’ll have to make sure your Docker client is pointed at Minikube’s Docker daemon (the script does this for you in the context of its own execution; however, you can easily do it from the command line).
Run:
bash-4.4$ eval $(minikube docker-env)
Once you’ve configured things to point to your Minikube environment, type:
bash-4.4$ docker image ls
You should see the images you saw previously from the platform (images like nginx-ingress-controller, kube-proxy, and time-sync). In addition, you should see your new image:
REPOSITORY TAG IMAGE ID CREATED SIZE
...
gcr.io/section-io/example-simple 1.0.0-k1 11835f231f19 15 minutes ago 153MB
One of the functions of the create-module-assets.sh script is to add the configuration files required to deploy this module to the git-daemon. You can kubectl exec into that pod if you’d like to confirm the files have been correctly deployed. You’ll find them in a symlinked folder located at /opt/proxy-packages/active.
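For example, using the git-daemon pod name from the earlier kubectl get pods output (your pod suffix will differ, and /bin/sh is assumed to be available in the image):

bash-4.4$ kubectl exec -it -n section-shared git-daemon-669c9fc6bb-5tcrc -- /bin/sh

The ls below is then run from inside the pod, in /opt/proxy-packages/active.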
bash-4.4$ ls -lrta | grep example-simple
drwxr-xr-x 4 600 nobody 4096 May 5 21:17 example-simple
drwxr-xr-x 2 600 nobody 4096 May 5 21:17 example-simple@1.0.0
With the images in place, you can add this module to your environment and deploy it. Before we do that, let’s dive a little deeper into what we’re deploying with example-simple.
example-simple is an nginx reverse proxy. If you look in the proxy directory (of example-simple), you’ll see the nginx.conf that defines how the proxy works. Part of the module contract says that all modules should call next-hop on port 80. next-hop is always defined on the platform to be the next upstream module of the current module (pod, in Kubernetes language). next-hop is injected into the networking stack by the platform and should always resolve to a VIP (virtual IP) that load balances across its members.
Here is the portion of the nginx.conf that defines next-hop:
upstream next_hop_upstream {
    server next-hop:80;
    keepalive 1;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $host;
        proxy_pass "http://next_hop_upstream";
    }
}
This block of code shows the nginx configuration correctly calling next-hop. Custom Lua or nginx C modules can be invoked on the request by inserting code here.
While we’re in this file, it’s also worth mentioning logging. You can see the log_format that we’re creating and shipping into our logging pipeline here. The access and error logs are written to locations in /var/log/nginx/.
If you open the Dockerfile (the one inside the example-simple folder), you’ll see that the actual locations are symlinked to stdout and stderr:
RUN ln -sf /proc/$$/fd/1 /var/log/nginx/access.log
RUN ln -sf /proc/$$/fd/2 /var/log/nginx/error.log
By default, the platform picks up logs written to these files and ships them into the logging pipeline.
Deploying Your Workload
Open section.config.json and locate the proxychain block. At this point, it should still be an empty JSON array. Update it with your example-simple module. If you made no changes to the configuration, it should look like this:
1"proxychain": [
2 {
3 "name": "example",
4 "image": "example-simple:1.0.0"
5 }
6 ],
Name can be anything you like, provided that it’s alphanumeric. Image is defined in the .section-proxy.yaml file. You can see it defined on line 17:
 1  metadata:
 2    configurationMountPath: /opt/proxy_config
 3    httpContainer: example
 4    image: gcr.io/section-io/example-simple:1.0.0-k1
 5    logs:
 6      additional:
 7      - container: example
 8        name: error.log
 9        stream: stderr
10        handler: example
11      http:
12        container: example
13        stream: stdout
14    metrics:
15      path: /metrics
16      port: 9000
17    name: example-simple:1.0.0
18  spec:
19    containers:
20    - name: example
21      resources:
22        limits:
NOTE: Line 4 has the image gcr.io/section-io/example-simple:1.0.0-k1 defined. While the image and the name look similar, that’s only by convention; the image refers to an actual version of the build and is different from the name (line 17). The name that you use in your section.config.json file is matched, via this file, to the image name stored in the local repository (what you saw when you typed docker image ls).
One last configuration step before we deploy. The value you used in the name field (in this case “example”) should match a folder under the root of the environment you cloned from the Section Console. All configuration files for the individual module will be placed in this folder. Your push to DevPoP will fail if this folder is missing.
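If that folder doesn’t exist in your clone yet, create it before committing; an empty placeholder file is one way to make sure git tracks it (the .gitkeep name is only a convention, not something the platform requires):

bash-4.4$ mkdir -p example
bash-4.4$ touch example/.gitkeep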
Here’s what your cloned environment should look like for example-simple:
bash-4.4$ ls -a
. example
.. local.config.json.sample
.git outage_pages
.gitignore section.config.json
custom_errors
Once you’ve made these changes, type git add . to add the files to your local repository. Commit them with an appropriate message (e.g. git commit -m "adding module to stack") and then deploy to DevPoP (git push developer-pop):
bash-4.4$ git push developer-pop
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 16 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 344 bytes | 344.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Validating configuration for proxy example...
To http://192.168.99.125:30090/www.site.com.git
2d813bb..8157bd7 build-own-module -> build-own-module
Once complete, type:
bash-4.4$ watch kubectl get pods -A
Every 2.0s: kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-dlxvz 1/1 Running 2 6d21h
kube-system coredns-6955765f44-txjtl 1/1 Running 2 6d21h
kube-system etcd-minikube 1/1 Running 2 6d21h
kube-system kube-apiserver-minikube 1/1 Running 4 6d21h
kube-system kube-controller-manager-minikube 1/1 Running 2 6d21h
kube-system kube-proxy-dpdg6 1/1 Running 2 6d21h
kube-system kube-scheduler-minikube 1/1 Running 2 6d21h
kube-system storage-provisioner 1/1 Running 4 6d21h
kubernetes-dashboard dashboard-metrics-scraper-7b64584c5c-fgbwv 1/1 Running 2 5d18h
kubernetes-dashboard kubernetes-dashboard-79d9cd965-pxpgw 1/1 Running 4 5d18h
section-bootstrap bootstrap-c8d8f95fd-g5755 1/1 Running 0 20h
section-delivery default-http-backend-5dfbff8bc6-jwwn6 1/1 Running 0 20h
section-delivery nginx-ingress-controller-shared-tqnm4 1/1 Running 0 20h
section-shared api-proxy-678f9df47f-sw7nc 1/1 Running 0 20h
section-shared event-handler-869cd99465-g6rsh 1/1 Running 0 20h
section-shared git-daemon-669c9fc6bb-s487q 1/1 Running 0 20h
section-shared message-bus-7d48b56994-9x7fb 1/1 Running 0 20h
section-shared package-sync-sm8c2 1/1 Running 0 19h
section-shared time-sync-k69kr 1/1 Running 0 20h
section-shared webux-f7ffcc568-vqdls 2/2 Running 0 20h
section-wwwsitecom-master-95b50d99c9097 egress-68cc85b4fb-9hmzm 2/2 Running 1 20h
section-wwwsitecom-master-95b50d99c9097 environment-provisioner-8658b65cdb-lsks9 1/1 Running 0 20h
section-wwwsitecom-master-95b50d99c9097 example-655f95fcf6-cr58h 2/2 Running 0 70s
section-wwwsitecom-master-95b50d99c9097 private-ingress-5c857f6677-6hdfv 3/3 Running 0 20h
section-wwwsitecom-master-95b50d99c9097 static-server-85bbd5fff8-dvcd4 2/2 Running 0 20h
You will see the Kubernetes system pods running, the Section platform modules, and the pods associated with your environment. One of the pods will be the example module we just deployed; in this case, it’s example-655f95fcf6-cr58h.
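As an optional sanity check, you can confirm from inside your module’s container that next-hop resolves to the VIP described earlier. This is just a debugging sketch; it assumes getent is available in the container image (it is in the standard Debian-based nginx images):

bash-4.4$ kubectl exec -n section-wwwsitecom-master-95b50d99c9097 example-655f95fcf6-cr58h -c example -- getent hosts next-hop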
To get your Minikube IP address, type:
bash-4.4$ minikube ip
192.168.99.125
Add this IP address to your hosts file. Make sure that you’re pointing it at the domain that you used when you set up the site in the Section Console. Here’s what my hosts file looks like:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost www.webserver.com
255.255.255.255 broadcasthost
192.168.99.125 mydemo.sectiondemo.com
Now you can curl (or use a browser) to send a request through the proxychain and your new module in DevPoP.
bash-4.4$ curl mydemo.sectiondemo.com
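If you’d rather not edit your hosts file, curl’s --resolve flag achieves the same mapping for a one-off request (port 80 here mirrors the plain HTTP request above):

bash-4.4$ curl --resolve mydemo.sectiondemo.com:80:192.168.99.125 http://mydemo.sectiondemo.com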
One thing that is useful is to tail the logs your module generates when you make the request. You can use kubectl logs to see the requests as they go through your module:
bash-4.4$ kubectl logs -f -n section-wwwsitecom-master-95b50d99c9097 example-655f95fcf6-cr58h -c example
Note: To get the namespace and pod name, remember you can use kubectl get pods -A, and to get the container name, kubectl describe pod -n ... . Hint: you defined it in your .section-proxy.yaml file.
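For example (the namespace and pod suffix come from the earlier kubectl get pods output; yours will differ):

bash-4.4$ kubectl get pods -A | grep example
bash-4.4$ kubectl get pod -n section-wwwsitecom-master-95b50d99c9097 example-655f95fcf6-cr58h -o jsonpath='{.spec.containers[*].name}'

The second command prints just the container names, which is a quicker alternative to reading the full describe output.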
Wrap-up
The distinction between the edge compute platform and a legacy CDN comes down to the choice of products and tools you run at the edge of your system.
Section’s Edge Compute Platform allows you to run any of the prebuilt modules on the platform as well as your own workloads. In this tutorial, we walked through an example of building your own reverse proxy and inserting it into the pipeline. Using this pattern, you can insert your own logic to direct traffic, block requests, enrich data, or even serve requests directly at the edge.
Wesley Reisz is VP of Technology at Section, Chair of the LF Edge Landscape Working Group, Chair of QCon San Francisco, & Co-Host of The InfoQ Podcast. Wes enjoys driving innovation in the edge ecosystem through awareness, community, and technology.