Create and Mount a Persistent Volume Claim

Cloud applications often need a place to persist data. Containers inside pods have ephemeral filesystems whose contents are lost when a pod restarts or terminates. Persistent volumes solve this problem by letting data outlive any individual pod. This tutorial explains how to create a PersistentVolumeClaim, mount it in a pod, and then demonstrate that data written by one replica is visible to another.

Make the Claim

First, here is the YAML that makes a claim to a volume. The underlying volume will be provisioned dynamically once the claim is first mounted by a pod.

  • The size of my volume below is set to 30Mi.
  • The access mode for a volume intended to be shared between many pods should be set to ReadWriteMany.
  • The volume mode is set to Filesystem, which mounts it as a directory in the pod's filesystem.
pvc-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx
  namespace: default
spec:
  resources:
    requests:
      storage: 30Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany # access by many pods
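
Note that ReadWriteMany only works if the underlying storage backend supports it; NFS-style backends generally do, while many default block-storage classes offer only ReadWriteOnce. Because this claim doesn't name a storageClassName, it will use your cluster's default storage class, which you can inspect with:

$ kubectl get storageclass

The PROVISIONER and VOLUMEBINDINGMODE columns tell you which driver will create the volume and when it will be bound.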

Apply and check the PersistentVolumeClaim resource.

$ kubectl apply -f pvc-claim.yaml
persistentvolumeclaim/pvc-nginx created
$ kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nginx   Pending                                                     14m
$
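
Don't be alarmed by the Pending status. If your storage class's volumeBindingMode is WaitForFirstConsumer, the claim stays Pending until a pod actually mounts it. The Events section of the describe output will confirm whether that's what is happening:

$ kubectl describe pvc pvc-nginx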

Mount It

Next, we'll mount the claim in a simple NGINX deployment. Mounting the claim is what triggers dynamic provisioning of the underlying volume. Things to note:

  • We've specifically asked for 4 replicas so that we can demonstrate access to the volume from more than one of them.
  • We've asked for the volume to be mounted in our pod's filesystem at /tmp/nginx-data.
pvc-mount.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-data
              mountPath: /tmp/nginx-data/
          resources:
            requests:
              cpu: 150m
              memory: 100Mi
            limits:
              cpu: 150m
              memory: 100Mi
      volumes:
        - name: nginx-data
          persistentVolumeClaim:
            claimName: pvc-nginx

Apply and check the deployment resource.

$ kubectl apply -f pvc-mount.yaml 
deployment.apps/nginx created
$ kubectl get pods -o wide
NAME                     READY   STATUS              RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
nginx-65bbb4d46c-2rwl8   0/1     ContainerCreating   0          72s   <none>          nyc-adljs   <none>           <none>
nginx-65bbb4d46c-4hp4s   1/1     Running             0          72s   10.244.9.148    sfo-gwuie   <none>           <none>
nginx-65bbb4d46c-dng7q   1/1     Running             0          72s   10.245.160.39   nyc-adljs   <none>           <none>
nginx-65bbb4d46c-mg9pr   1/1     Running             0          71s   10.245.182.89   sfo-gwuie   <none>           <none>
$
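
Now that pods have the claim mounted, the underlying volume should have been provisioned, and (assuming provisioning succeeded on your cluster) the claim should report Bound rather than Pending:

$ kubectl get pvc
$ kubectl get pv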

Test It

Now, let's exec into one of the pods running on the NYC node (nginx-65bbb4d46c-dng7q). We'll place a file in the mounted directory, and then demonstrate that it is also visible from the other pod on that node.

$ kubectl exec -it nginx-65bbb4d46c-dng7q -- sh
# cd /tmp/nginx-data
# echo hi there > file.txt
# ls -l
total 4
-rw-r--r-- 1 nobody nogroup 9 Jan 4 22:09 file.txt
# exit
$

Let's now exec into the other pod running on the NYC node and check for that same file.

$ kubectl exec -it nginx-65bbb4d46c-2rwl8 -- sh
# cd /tmp/nginx-data
# ls -l
total 4
-rw-r--r-- 1 nobody nogroup 9 Jan 4 22:09 file.txt
# cat file.txt
hi there
# exit
$
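
If your storage backend supports ReadWriteMany across nodes, the same file should also be readable from one of the pods on the SFO node; for example, this is what you should see:

$ kubectl exec -it nginx-65bbb4d46c-4hp4s -- cat /tmp/nginx-data/file.txt
hi there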

Success!

Finally, to shut everything down, we first need to delete the deployment, so that no pods have the volume mounted, and then delete the PVC itself.

$ kubectl delete deploy nginx
deployment.apps "nginx" deleted
$ kubectl delete pvc pvc-nginx
persistentvolumeclaim "pvc-nginx" deleted
$
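
Whether deleting the PVC also deletes the underlying volume and its data depends on your storage class's reclaim policy: Delete, the common default for dynamically provisioned volumes, removes it, while Retain keeps it around. You can confirm nothing was left behind with:

$ kubectl get pv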

Alternatively, you can delete these resources from the Kubernetes dashboard as well.

And that's it!

Next, try installing Postgres on your PVC.