Creating Multidomain and Path Routing with Nginx

Introduction

In this tutorial, we will use the Section Kubernetes Edge Interface (KEI) and Nginx to create microservices with multidomain and multipath routing. We will outline how we can have multiple services and deployments all talking to each other within a KEI application.

You will create two deployments, hello-node-deployment.yaml and hello-ruby-deployment.yaml, as well as a service for each. Each deployment returns “hello world” from either Node.js or Ruby, depending on the routing.

Many scenarios can benefit from this architecture. For example, Section has three different frontend microservices, all serving different applications (repositories) to the same website: www.section.io, www.section.io/docs, and www.section.io/engineering-education, which are all static websites. Our single KEI application also includes an application programming interface (API) that we use to fetch data from the Google Analytics API in order to display each author’s page views in the EngEd program. Each of these repositories has its own CI/CD pipeline, allowing us to mix public and private repos with added flexibility.

Prerequisites

This tutorial assumes that you know the basics of Kubernetes and have experience working with containerized applications.

Add the Domains

Add the domains you’d like to handle routing for in your KEI application. In this example, we will add the domains www.example-domain-1.com and www.example-domain-2.com.

Set Up the Project Structure

Follow the Getting Started steps in the Docs if you’re just starting out with Section and KEI. If you already have a KEI application, point your ingress-service container to an Nginx container (a sketch of ingress-service.yaml is shown after router.yaml below). It may help to use Kustomize if you aren’t already: with Kustomize, we can use the configMapGenerator to volume mount our Nginx configuration.

Create the following /k8s directory and structure:

/k8s
  /base
    hello-node-deployment.yaml
    hello-ruby-deployment.yaml
    hello-node-service.yaml
    hello-ruby-service.yaml
    ingress-service.yaml
    kustomization.yaml
    router.yaml
    router.conf

For this example, we will create hello-node-deployment.yaml and hello-ruby-deployment.yaml as two deployments running different applications.

hello-node-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node-deployment
  labels:
    app: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: pvermeyden/nodejs-hello-world:a1e8cf1edcc04e6d905078aed9861807f6da0da4
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "1Gi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "500m"

hello-ruby-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-ruby-deployment
  labels:
    app: hello-ruby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-ruby
  template:
    metadata:
      labels:
        app: hello-ruby
    spec:
      containers:
      - name: hello-ruby
        image: sebp/ruby-hello-world
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "1Gi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "500m"

hello-node-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-node-service
  name: hello-node-service
  namespace: default
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: hello-node
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

hello-ruby-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-ruby-service
  name: hello-ruby-service
  namespace: default
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: hello-ruby
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Use configMapGenerator to apply the Nginx Configuration

Create a ConfigMap and volume mount to load a custom Nginx configuration into the Deployment in router.yaml, and define the resources you’d like Kustomize to manage.

kustomization.yaml
configMapGenerator:
- name: router-config-mount
  files:
  - ./router.conf

resources:
- hello-node-deployment.yaml
- hello-node-service.yaml
- hello-ruby-deployment.yaml
- hello-ruby-service.yaml
- router.yaml
- ingress-service.yaml

The configMapGenerator defines the name of the ConfigMap and the location of the file to mount. In this case, the file path is ./router.conf and the generated ConfigMap is named router-config-mount.
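
Note that Kustomize appends a content hash to the names of generated ConfigMaps and rewrites references to them in the resources it manages, so the name router-config-mount used in router.yaml below is updated automatically. As a rough sketch, the rendered output of kubectl kustomize k8s/base includes a ConfigMap like the following (the hash suffix here is purely illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: router-config-mount-5f9h8k2ttm   # illustrative suffix; Kustomize generates the real hash
data:
  router.conf: |
    # the contents of ./router.conf are embedded here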

Apply the configMap via volumes and volumeMounts

In router.yaml below, we add the ConfigMap we just created via volumes and volumeMounts.

router.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.6
        volumeMounts:
        - name: router-config
          mountPath: "/etc/nginx/conf.d"
        resources:
          requests:
            memory: ".5Gi"
            cpu: "500m"
          limits:
            memory: ".5Gi"
            cpu: "500m"
        ports:
        - containerPort: 80
      volumes:
      - name: router-config
        configMap:
          name: router-config-mount
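
The ingress-service.yaml listed in the directory structure is not specific to this example; its exact contents depend on how your KEI application was set up in Getting Started. As a minimal sketch, assuming the ingress Service simply forwards port 80 traffic to the Nginx router pods defined in router.yaml, it might look like this (the name ingress-service is a placeholder; use the Service name your KEI application expects):

ingress-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-service   # placeholder name; match your KEI application's ingress Service
  namespace: default
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx   # matches the pod labels in router.yaml
  type: ClusterIP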

Nginx configuration

Create a router.conf file in the k8s/base directory. This configuration controls the routing for the two different domains. In this example, both domains return the same application for each location block; however, they could point to different services. The important thing to note here is that we can use proxy_pass to route requests to the different services we’ve created.

router.conf
server {
    listen 80;
    listen [::]:80;
    server_name www.example-domain-1.com;

    location /node {
        proxy_pass http://hello-node-service;
    }

    location /ruby {
        proxy_pass http://hello-ruby-service;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name www.example-domain-2.com;

    location /node {
        proxy_pass http://hello-node-service;
    }

    location /ruby {
        proxy_pass http://hello-ruby-service;
    }
}

By adding multiple server blocks, we can create routing for different domains. Each server block can contain multiple location blocks. In each location block, we can make use of Nginx’s reverse proxy: with the proxy_pass directive we specify the service we want that location to route to, for example:

location / {
    proxy_pass http://hello-node-service;
}

In the example above, we are handling the routing for both domains, www.example-domain-1.com and www.example-domain-2.com. When a request comes in for www.example-domain-1.com/node, we route it using proxy_pass to hello-node-service, which returns “hello world” from Node.js.

Similarly, a request for www.example-domain-1.com/ruby is routed to hello-ruby-service, returning “hello world” from Ruby. As you can see, the code above uses the same configuration for both domains; in a real-world scenario, each domain could have a different configuration serving different types of applications.
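
One detail worth noting (not part of the configuration above, but a common variation): proxy_pass as written forwards the request URI unchanged, so the upstream services receive the /node and /ruby prefixes. If an application expects to be served from /, you can strip the prefix by giving proxy_pass a URI, for example:

location /node/ {
    proxy_pass http://hello-node-service/;
}

With the trailing slashes, a request for /node/health is proxied to hello-node-service as /health.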

This architecture allows us to have multiple applications and microservices in a single KEI application. With a relatively simple configuration, we can route traffic back and forth between different pods and services.