
Kubernetes and CloudFlow

Our platform is designed to provide you with a simple on-ramp to a federated, dynamic, global cluster of Kubernetes clusters.

Working with the CloudFlow platform feels just as though you are working with a single Kubernetes cluster, and that cluster is provided for you "as a service".

Because we expose the Kubernetes API exactly as described in the Kubernetes docs, working with CloudFlow is a cloud-native dream.

You can use standard Kubernetes tooling to deploy applications to CloudFlow's distributed platform. We provide a Kubernetes Dashboard for every Project, or you can use the Kubernetes API endpoint provided for each Project.

Kubernetes API and CloudFlow

For more information about Kubernetes, please refer to the Kubernetes Documentation.

Using the Kubernetes API with CloudFlow allows you to work with the key Kubernetes resources that CloudFlow supports.

What can you use the Kubernetes API to do with CloudFlow?

Once you've created a containerized application, you can use the Kubernetes API to do the following on the CloudFlow platform (a minimal Deployment manifest is sketched after this list):

  • Deploy your container to multiple locations
  • Configure service discovery, so that your users will be routed to the best container instance
  • Define more complex applications, such as composite applications that consist of more than one container
  • Define the resource allocations for your system
  • Define scaling factors, such as the number of containers per location, and what signals should be used to scale in and out
  • Maintain your application code, configuration and deployment manifests in your own code management systems and image registries
  • Control how the Adaptive Edge Engine schedules containers, performs health management, and routes traffic
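
For illustration, a minimal Deployment manifest applied through the Kubernetes API might be sketched as follows. The name my-app, the image registry.example.com/my-app:1.0.0, the replica count, and the container port are placeholders; substitute the values for your own application.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                    # placeholder name
      labels:
        app: my-app
    spec:
      replicas: 2                     # placeholder replica count
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: registry.example.com/my-app:1.0.0   # placeholder image
              ports:
                - containerPort: 8080                    # placeholder port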

How can you use the Kubernetes API with CloudFlow?

Because CloudFlow is compatible with the Kubernetes API, you can use standard Kubernetes tools to interact with the system.

The most common tool is kubectl, which is the tool this documentation centers on.

When you use kubectl, you create a configuration context that connects the tool to CloudFlow. From there you can continue to use kubectl as you normally would with a single cluster.
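
For illustration, a kubeconfig context for a CloudFlow Project might look roughly like the sketch below. The cluster name, server URL, and token are placeholders; use the Kubernetes API endpoint and credentials provided for your Project.

    apiVersion: v1
    kind: Config
    clusters:
      - name: cloudflow-my-project                # placeholder name
        cluster:
          server: https://example-project.cloudflow.example/api   # placeholder; use your Project's API endpoint
    users:
      - name: cloudflow-user
        user:
          token: <your-api-token>                 # placeholder credential
    contexts:
      - name: cloudflow-my-project
        context:
          cluster: cloudflow-my-project
          user: cloudflow-user
    current-context: cloudflow-my-project

Once the context is selected (for example, with kubectl config use-context cloudflow-my-project), standard commands such as kubectl apply -f and kubectl get pods behave as they would against any single cluster.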

As CloudFlow is Kubernetes API compatible, you'll find that many existing tools work without complication. For example, you could use Helm to manage your system.

Check out Getting started with Kubernetes API for a step-by-step guide.

What Kubernetes resources are supported by CloudFlow?

In your manifests you can use the following Kubernetes resources:

  • Deployment
  • ConfigMap
  • Secret
  • Service (ClusterIP and ExternalName, but not NodePort or LoadBalancer)
  • HorizontalPodAutoscaler (see the example sketched below)
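
For reference, a HorizontalPodAutoscaler targeting a Deployment might be sketched as follows. The Deployment name my-app, the replica bounds, and the CPU target are placeholders chosen for illustration; depending on the Kubernetes version exposed by CloudFlow, you may need an earlier autoscaling API version.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app                  # placeholder Deployment name
      minReplicas: 1                  # placeholder bounds
      maxReplicas: 5
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70  # placeholder CPU target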

CloudFlow creates and manages the following resources for you; you cannot create them yourself:

  • Namespace
  • NetworkPolicy
  • ReplicaSet
  • Pod

To specify location strategies, CloudFlow recognizes a ConfigMap resource with the specific name location-optimizer.
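
As a rough sketch only, such a ConfigMap carries the recognized name location-optimizer; the data key and value shown here are hypothetical placeholders, so refer to the location strategy documentation for the actual schema.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: location-optimizer        # name CloudFlow recognizes
    data:
      # Hypothetical key and value for illustration only; the real
      # schema is defined in the location strategy documentation.
      strategy: "example-placeholder"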

To engage CloudFlow's HTTP ingress, CloudFlow recognizes a Service resource with the specific name ingress-upstream.
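
A minimal sketch of such a Service might look like the following. The selector label, port, and targetPort are placeholders; they should match whatever your Deployment's pod template actually defines.

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-upstream          # name CloudFlow recognizes for HTTP ingress
    spec:
      type: ClusterIP                 # NodePort and LoadBalancer are not supported
      selector:
        app: my-app                   # placeholder; match your Deployment's pod labels
      ports:
        - port: 80
          targetPort: 8080            # placeholder; match your container's port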

Container Resources

When you define your Deployment objects, you can specify the CPU and RAM requests or limits for each container instance.

Use the standard Kubernetes methods for specifying your container's requirements; a hedged example follows the notes below.

Some notes:

  • Please contact us to understand how your requests can impact your billing.
  • CloudFlow may alter your YAML to ensure that request = limit.
    • If request and limit are not equal, CloudFlow will use the higher of the two values.
    • If only one of request or limit is specified, that value will be used for both request and limit.
  • You cannot request ephemeral storage directly. CloudFlow will automatically apply the ephemeral storage limits when the deployment is created.
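
For example, a container spec within a Deployment's pod template that keeps request and limit equal (so CloudFlow does not need to adjust it) might be sketched like this; the CPU and memory values are placeholders.

    containers:
      - name: my-app
        image: registry.example.com/my-app:1.0.0   # placeholder image
        resources:
          requests:
            cpu: 500m                 # placeholder values; request and limit kept equal
            memory: 512Mi
          limits:
            cpu: 500m
            memory: 512Mi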

What storage systems are available for CloudFlow workloads?

CloudFlow supports ephemeral storage.

CloudFlow will automatically apply ephemeral storage limits to your containers based on the container size you've selected.

Your application can use the ephemeral storage in its local filesystem to perform disk I/O.

"Ephemeral" means that there is no long-term guarantee about durability. You should take this into consideration in your application design.