Deploy a CF Worker-like App on CloudFlow

Why run a container instead of Functions-as-a-Service (FaaS)?

  • Cloudflare Workers are resource bound, with a fixed 128 MB of memory allocated.
  • Cloudflare Workers are time capped: 10 ms, 50 ms, or up to 30 s, depending on pricing tier.

If either of the above limits your workload, you should consider moving from FaaS to a container-hosted solution.

With containers, you can choose your own resource allocation, define your own timeout durations for long-running workloads (such as report generation), leverage other containers from the community, and develop solutions using any language and technology of your choice.

In this tutorial, we will show you how to convert your CF Worker to a container image, which you can then host on CloudFlow's multi-region, multi-cloud Kubernetes hosting solution.

This example app contains 2 function paths: "/", which returns a small HTML page, and "/section", which responds with another site.

Step by Step

Prerequisites

  • Docker or equivalent installed
  • A public container repository account (eg GitHub or Docker Hub)
  • An ExpressJS app
  • (optional) kubectl

Steps

  1. Folder structure
  2. Create a Dockerfile
  3. Build & push your container image
  4. Deploy to CloudFlow

Folder structure

You can refer to our example repository here - https://github.com/section/cfworker-tutorial

In our example, we have an existing Express setup, with a subfolder containing each function's code.

.
├── functions
│   ├── helloWorld.js
│   ├── respondWithAnotherSite.js
│   └── ...
├── app.js
├── Dockerfile
└── package.json
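
Each file in functions holds one handler. As an illustrative sketch (the exact markup is an assumption, not taken from the example repository), functions/helloWorld.js might look like this — the route handler awaits the function and sends whatever string it returns:

```javascript
// functions/helloWorld.js - illustrative sketch; your markup may differ.
// Mirrors the Cloudflare "Return HTML" example: the "/" route awaits
// this function and sends the resulting string as the HTTP response.
async function helloWorld() {
  return `<!DOCTYPE html>
<html>
  <body>
    <h1>Hello World</h1>
  </body>
</html>`;
}

module.exports = helloWorld;
```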

Within app.js, you write the specific routes that correspond to the functions to be run. In our example, the "/" route returns a basic Hello World HTML page, and the "/section" route responds with the contents of another site.

app.js
const express = require('express');
const helloWorld = require('./functions/helloWorld');
const respondWithAnotherSite = require('./functions/respondWithAnotherSite');

const app = express();
const port = process.env.PORT || 3000;

// Return small HTML page - https://developers.cloudflare.com/workers/examples/return-html/
app.get('/', async function (req, res) {
  res.send(await helloWorld());
});

// Respond with another site - https://developers.cloudflare.com/workers/examples/respond-with-another-site/
app.get('/section', async function (req, res) {
  res.send(await respondWithAnotherSite());
});

app.listen(port, () => {
  console.log(`App listening at http://localhost:${port}`);
});

To test that this works locally, run node app.js and open http://localhost:3000 in your browser.

Creating your Dockerfile

Create a Dockerfile in the root folder of your app. It should sit alongside package.json.

The Dockerfile's contents should be the following:

Dockerfile
FROM node:alpine AS runner
WORKDIR /app
COPY package*.json ./
RUN npm clean-install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]

This Dockerfile installs the app's dependencies with npm clean-install, then uses Node to run the Express app, which listens on port 3000 of the container for incoming requests.
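
Because COPY . . copies the entire build context into the image, it is also worth adding a .dockerignore file next to the Dockerfile so that local artifacts are not baked in. A minimal example (adjust to your project) — excluding node_modules matters in particular, since dependencies are reinstalled inside the container by npm clean-install:

.dockerignore
node_modules
npm-debug.log
.git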

Building the container image

Run the following commands from the Dockerfile's directory to build and tag your image.

# Replace these example values
USER=cloudflow
IMAGENAME=cf-worker
TAG=0.0.1

docker build . --tag ghcr.io/$USER/$IMAGENAME:$TAG
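
Before pushing, you can sanity-check the image locally. A minimal sketch, reusing the example values from the build step above:

```shell
# Replace these example values (they must match the build step)
USER=cloudflow
IMAGENAME=cf-worker
TAG=0.0.1
IMAGE=ghcr.io/$USER/$IMAGENAME:$TAG

# Run the image and map the container's port 3000 to localhost:3000,
# then open http://localhost:3000 and http://localhost:3000/section
# in your browser to check both routes. Ctrl-C stops the container.
docker run --rm -p 3000:3000 $IMAGE
```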

Push your image to a repository

We will push the container image to GitHub's container registry (ghcr.io) for this example. Run docker login in your terminal, as shown below, before running the push command.

GITHUB_TOKEN="" # https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token

echo $GITHUB_TOKEN | docker login ghcr.io -u $USER --password-stdin
docker push ghcr.io/$USER/$IMAGENAME:$TAG

Deploy to CloudFlow

Follow the steps in this doc - Deploy a Project - inserting the image name from the previous step and specifying port 3000.