Rust on CloudFlow

Learn how to run a default Rust app at the edge for low latency and high availability. You can use our repo as a template, or perform the steps yourself using the Kubernetes dashboard or kubectl commands.


Before starting, create a new CloudFlow Project and then delete the default Deployment and ingress-upstream Service to prepare the project for your new deployment.

Option 1 - Copy Our GitHub Repo

  1. Make a new repo from our template: in your browser, visit the template repo and select Use this template (don't clone or fork it; use the template). Choose yourself as the owner, give it a name of your choice, and make it Public (not Private).
  2. In your new GitHub repo, under Settings > Secrets > Actions, use New repository secret to add these two:
  3. Make a simple change to the message in src/ and watch your changes go live.

Every time you push to the repo your project will be built and deployed to CloudFlow automatically using GitHub Actions.
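The push-to-deploy pipeline can be sketched as a GitHub Actions workflow like the one below. This is a hypothetical sketch, not the template repo's actual workflow: the job layout, the image tag, and the final deploy step are assumptions, and the repository secrets you added above would be referenced where indicated.

```yaml
# Hypothetical sketch of a push-to-deploy workflow (not the template's actual file).
name: Deploy to CloudFlow
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to GitHub Packages
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
      # A final step would roll out the new image to your CloudFlow Project
      # using the repository secrets you created (details depend on the template).
```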

Option 2 - Step by Step

Following are step-by-step instructions to deploy a Rust app to the edge on CloudFlow. We'll Dockerize it, push it to GitHub Packages, and deploy it on CloudFlow.


  • You need Docker and Rust installed so that you can build a Docker image.

Create the Rust App

Create a Rust app via the Rust cargo command:

cargo new rust-tutorial

After the Rust app has been created, we need to set up the web server. We'll be using the Actix Web framework to handle this. Add this dependency to your Rust app in the Cargo.toml file:

[package]
name = "rust-tutorial"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at

[dependencies]
actix-web = "4"

Next, update the src/ file to the following:

use actix_web::{get, App, HttpResponse, HttpServer, Responder};

#[get("/")]
async fn hello() -> impl Responder {
    HttpResponse::Ok().body("Hello World from Rust on CloudFlow!")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(hello))
        .bind(("0.0.0.0", 8080))?
        .run()
        .await
}
Lastly, build and run the Rust app:

cd rust-tutorial

cargo build --release

cargo run --release

Test it by running curl http://localhost:8080 in your terminal or by visiting http://localhost:8080 in your browser. You should get a "Hello World from Rust on CloudFlow!" message.

Dockerize It

Let's build the container image that we'll deploy to CloudFlow. First, make a Dockerfile in your directory with the following content:

FROM rust:latest AS build

# Create an empty project so dependencies can be built and cached
# separately from the application source.
RUN cargo new --bin rust-tutorial
WORKDIR /rust-tutorial

COPY ./Cargo.lock ./Cargo.lock
COPY ./Cargo.toml ./Cargo.toml

# Build only the dependencies, then discard the placeholder source.
RUN cargo build --release
RUN rm src/*.rs

COPY ./src ./src

# Remove the cached placeholder binary and build the real application.
RUN rm ./target/release/deps/rust_tutorial*
RUN cargo build --release

# Run the compiled binary from a minimal runtime image.
FROM debian:buster-slim

COPY --from=build /rust-tutorial/target/release/rust-tutorial .

CMD ["./rust-tutorial"]

Create a .dockerignore file from the .gitignore file:

cp .gitignore .dockerignore

Build and tag the Docker image:

docker build . -t

Push It

Push it to GitHub Packages. This makes it available to CloudFlow.

docker push

Be sure to make it public. To see your packages and make this change, visit
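The build and push steps might look like the following. The image tag `ghcr.io/YOUR_GITHUB_USERNAME/rust-tutorial:latest` and the `CR_PAT` variable (a personal access token with package write scope) are assumed placeholders, not values from this guide.

```shell
# Log in to GitHub Packages (ghcr.io) with a personal access token
# stored in CR_PAT (placeholder name).
echo $CR_PAT | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin

# Build, tag, and push; the tag below is an assumed example.
docker build . -t ghcr.io/YOUR_GITHUB_USERNAME/rust-tutorial:latest
docker push ghcr.io/YOUR_GITHUB_USERNAME/rust-tutorial:latest
```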

Deploy It

Next, create a CloudFlow deployment for the Rust app with a rust-deployment.yaml file, substituting YOUR_GITHUB_USERNAME and the environment variables accordingly. This will direct CloudFlow to distribute the container you've pushed to GitHub Packages.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rust
  labels:
    app: rust
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rust
  template:
    metadata:
      labels:
        app: rust
    spec:
      containers:
        - name: rust
          image: ghcr.io/YOUR_GITHUB_USERNAME/rust-tutorial:latest
          imagePullPolicy: Always
          resources:
            requests:
              cpu: ".1"
              memory: ".1Gi"
            limits:
              cpu: ".1"
              memory: ".1Gi"
          ports:
            - containerPort: 8080

Apply this deployment resource to your Project with either the Kubernetes dashboard or kubectl apply -f rust-deployment.yaml.

Expose It

Expose it on the internet, mapping the container's port 8080.

apiVersion: v1
kind: Service
metadata:
  name: ingress-upstream
  labels:
    app: ingress-upstream
spec:
  selector:
    app: rust
  ports:
    - name: 80-to-8080
      protocol: TCP
      port: 80
      targetPort: 8080

Apply this service resource to your Project with either the Kubernetes dashboard or kubectl apply -f ingress-upstream.yaml.

See the pods running on CloudFlow's network with either the Kubernetes dashboard or kubectl get pods -o wide. The -o wide switch shows where your app is running under the default AEE location optimization strategy: your app is placed optimally according to its traffic, and in the absence of significant traffic it is deployed to default locations.

Finally, follow the instructions to configure DNS and TLS.

See What You've Built

See the Rust app you've built by visiting https://YOUR.DOMAIN.COM, substituting YOUR.DOMAIN.COM according to your DNS and HTTPS configuration.