Multidomain and Path Routing with Nginx

In this step-by-step tutorial, we will use CloudFlow and Nginx to create microservices with multidomain and multipath routing. We will outline how we can have multiple services and deployments all talking to each other within a CloudFlow application.

You will create two Deployments, hello-node-deployment.yaml and hello-ruby-deployment.yaml, along with a Service for each. Each deployment returns “hello world” from either a Node.js or a Ruby application, depending on how the request is routed.

Prerequisites

Before starting, create a new CloudFlow Project and then delete the default Deployment and ingress-upstream Service to prepare the project for your new deployment.

Add the Domains

Add the domains you want your CloudFlow application to handle. In this example, we will add the domains www.example-domain-1.com and www.example-domain-2.com.

Setup Project Structure

Create the following /k8s directory and structure:

.
└── /k8s/
    ├── /base
    ├── hello-node-deployment.yaml
    ├── hello-ruby-deployment.yaml
    ├── hello-node-service.yaml
    ├── hello-ruby-service.yaml
    ├── ingress-service.yaml
    ├── kustomization.yaml
    ├── router.yaml
    └── router.conf
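The layout above can be scaffolded from a shell. A quick sketch (the /base directory is left empty here for any Kustomize overlays you add later):

```shell
# Create the /k8s directory layout used in this tutorial.
mkdir -p k8s/base
touch k8s/hello-node-deployment.yaml k8s/hello-ruby-deployment.yaml \
      k8s/hello-node-service.yaml k8s/hello-ruby-service.yaml \
      k8s/ingress-service.yaml k8s/kustomization.yaml \
      k8s/router.yaml k8s/router.conf
```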

Create Deployment Files

Create hello-node-deployment.yaml and hello-ruby-deployment.yaml, two Deployments that each run a single pod: one serving a Node.js “hello world” application and the other a Ruby “hello world” application.

Copy the following code into hello-node-deployment.yaml:

hello-node-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node-deployment
  labels:
    app: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
        - name: hello-node
          image: pvermeyden/nodejs-hello-world:a1e8cf1edcc04e6d905078aed9861807f6da0da4
          imagePullPolicy: Always
          ports:
          - containerPort: 80
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "500m"

Copy the following code into hello-ruby-deployment.yaml:

hello-ruby-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-ruby-deployment
  labels:
    app: hello-ruby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-ruby
  template:
    metadata:
      labels:
        app: hello-ruby
    spec:
      containers:
        - name: hello-ruby
          image: sebp/ruby-hello-world
          imagePullPolicy: Always
          ports:
          - containerPort: 80
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "500m"

Create Service Files

Create hello-node-service.yaml and hello-ruby-service.yaml files.

Copy the following code into hello-node-service.yaml:

hello-node-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-node-service
  name: hello-node-service
  namespace: default
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: hello-node
  sessionAffinity: None
  type: ClusterIP

Copy the following code into hello-ruby-service.yaml:

hello-ruby-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-ruby-service
  name: hello-ruby-service
  namespace: default
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: hello-ruby
  sessionAffinity: None
  type: ClusterIP

Use configMapGenerator to Apply the Nginx Configuration

Create a kustomization.yaml file in the /k8s directory and add the following content:

kustomization.yaml
configMapGenerator:
- name: router-config-mount
  files:
  - ./router.conf

resources:
- hello-node-deployment.yaml
- hello-node-service.yaml
- hello-ruby-deployment.yaml
- hello-ruby-service.yaml
- router.yaml
- ingress-service.yaml

The configMapGenerator builds a ConfigMap named router-config-mount from the contents of the file ./router.conf, so the nginx configuration can be mounted into the router pod.
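You can preview the generated ConfigMap with kubectl kustomize. The output looks roughly like the sketch below; note that kustomize appends a hash derived from the file contents to the ConfigMap name, and automatically rewrites every reference to router-config-mount (such as the configMap volume in router.yaml) to the generated name:

```yaml
# Illustrative `kubectl kustomize` output — not a file you create yourself.
# The hash suffix is content-derived, so yours will differ.
apiVersion: v1
kind: ConfigMap
metadata:
  name: router-config-mount-7h2tf69m5b
data:
  router.conf: |
    server {
        listen 80;
        ...
    }
```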

Create Router Deployment

Create a router.yaml file in the /k8s directory and add the following content. Mounting the generated ConfigMap at /etc/nginx/conf.d replaces the image’s default configuration (default.conf), so nginx loads only router.conf:

router.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21.6
          imagePullPolicy: Always
          volumeMounts:
            - name: router-config
              mountPath: "/etc/nginx/conf.d"
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          ports:
            - containerPort: 80
      volumes:
        - name: router-config
          configMap:
            name: router-config-mount

Create Nginx Configuration

Create a router.conf file in the /k8s directory and add the following content:

router.conf
server {
    listen       80;
    listen  [::]:80;
    server_name  www.example-domain-1.com;

    location /node {
        proxy_pass http://hello-node-service;
    }

    location /ruby {
        proxy_pass http://hello-ruby-service;
    }
}

server {
    listen       80;
    listen  [::]:80;
    server_name  www.example-domain-2.com;

    location /node {
        proxy_pass http://hello-node-service;
    }

    location /ruby {
        proxy_pass http://hello-ruby-service;
    }
}

This configuration controls the routing for the two domains. In this example both domains route each location block to the same application, but they could just as easily point at different services. The key point is that proxy_pass can target any Service by name: Kubernetes cluster DNS resolves hello-node-service and hello-ruby-service to their ClusterIPs.
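Create Ingress Service

The kustomization also lists ingress-service.yaml, which the steps above don't show. A minimal sketch, assuming CloudFlow sends edge traffic to a Service named ingress-upstream (the name of the default Service you deleted in the Prerequisites) and that port 80 should be forwarded to the nginx router pods:

```yaml
# ingress-service.yaml — a hypothetical sketch; adjust the name and ports
# to whatever your CloudFlow project expects as its ingress upstream.
apiVersion: v1
kind: Service
metadata:
  name: ingress-upstream
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx       # matches the router Deployment's pod labels
  type: ClusterIP
```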

Apply the Configuration

Now that you have created all the necessary files, apply the configuration using kubectl:

kubectl apply -k /path/to/your/k8s/directory

This architecture allows us to have multiple applications and microservices all in one CloudFlow application. With relatively simple configuration, we can communicate back and forth between different pods and services.