Pods by themselves are useful, but many workloads require exchanging data between containers, or persisting some form of data.
For this task we have Volumes, Persistent Volumes, Persistent Volume Claims, and Storage Classes.
- Storage
- Index
- Before you Begin
- Volumes
- Persistent Volumes and Claims
- Storage Classes
- Helpful Resources
Kind comes with a default storage class provisioner that can get in the way when trying to explore how storage is used within a Kubernetes cluster. For these exercises, it should be disabled.
$ kubectl annotate --overwrite sc standard storageclass.kubernetes.io/is-default-class="false"
When done with the exercises, set the annotation back to "true" to restore the default Storage Class behavior.
$ kubectl annotate --overwrite sc standard storageclass.kubernetes.io/is-default-class="true"
Volumes within Kubernetes are storage that is tied to the Pod's lifecycle.
A Pod can have one or more types of Volumes attached to it. These Volumes are consumable by any of the containers within the Pod.
They can survive Pod restarts; however, their durability beyond that is dependent on the Volume Type.
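As an aside, even the simple emptyDir type has tunable durability characteristics. The fragment below is a hedged sketch (the volume name scratch is hypothetical) showing an emptyDir backed by memory instead of node disk:

```yaml
volumes:
- name: scratch        # hypothetical volume name
  emptyDir:
    medium: Memory     # back the volume with tmpfs instead of node disk
    sizeLimit: 64Mi    # cap its size; usage counts against container memory
```

Memory-backed emptyDir volumes are faster, but their contents consume container memory and are lost when the Pod is removed from the node.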
Objective: Understand how to add and reference volumes to a Pod and their containers.
- Create a Pod from the manifest manifests/volume-example.yaml or the yaml below.
manifests/volume-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-example
spec:
  containers:
  - name: nginx
    image: nginx:stable-alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: content
    image: alpine:latest
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
    - while true; do
      echo $(date)"<br />" >> /html/index.html;
      sleep 5;
      done
  volumes:
  - name: html
    emptyDir: {}
Command
$ kubectl create -f manifests/volume-example.yaml
Note the relationship between volumes in the Pod spec, and the volumeMounts directive in each container.
- Exec into the content container within the volume-example Pod, and cat the /html/index.html file.
$ kubectl exec volume-example -c content -- /bin/sh -c "cat /html/index.html"
You should see a list of date time-stamps. This is generated by the script being used as the entrypoint (args) of the content container.
- Now do the same within the nginx container, using cat to see the content of /usr/share/nginx/html/index.html.
$ kubectl exec volume-example -c nginx -- /bin/sh -c "cat /usr/share/nginx/html/index.html"
You should see the same file.
- Now try to append "nginx" to index.html from the nginx container.
$ kubectl exec volume-example -c nginx -- /bin/sh -c "echo nginx >> /usr/share/nginx/html/index.html"
It should error out and complain about the file being read-only. The nginx container has no reason to write to the file, and mounts the same Volume as read-only. Writing to the file is handled by the content container.
Summary: Pods may have multiple Volumes using different Volume types. Those Volumes in turn can be mounted to one or more containers within the Pod by adding them to the volumeMounts list. This is done by referencing their name and supplying their mountPath. Additionally, Volumes may be mounted either read-write or read-only depending on the application, enabling a variety of use-cases.
Clean Up Command
$ kubectl delete pod volume-example
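A related pattern worth knowing before moving on: a single Volume can also be shared at finer granularity using subPath. This is a minimal sketch (the blog directory and mount point are hypothetical) of mounting only a subdirectory of a Volume into a container:

```yaml
volumeMounts:
- name: html                             # references a volume defined in spec.volumes
  mountPath: /usr/share/nginx/html/blog  # hypothetical mount point
  subPath: blog                          # mount only the "blog" subdirectory of the volume
```

This lets multiple containers carve up one Volume without seeing each other's files.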
Persistent Volumes and Claims work in conjunction to serve as the direct method by which a Pod consumes persistent storage.
A PersistentVolume (PV) is a representation of a cluster-wide storage resource that is linked to a backing storage provider: NFS, GCEPersistentDisk, RBD, etc.
A PersistentVolumeClaim (PVC) acts as a namespaced request for storage that satisfies a set of requirements instead of mapping to the storage resource directly.
This separation of PV and PVC ensures that an application’s ‘claim’ for storage is portable across numerous backends or providers.
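To illustrate that portability, the sketch below (PV name, class name, server, and export path are all hypothetical) swaps the backing provider to NFS; a PVC requesting the same class and access mode could bind to this PV just as easily as to a hostPath-backed one:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-nfs-example        # hypothetical PV name
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  storageClassName: myclass   # hypothetical class name
  nfs:                        # NFS backend instead of hostPath
    server: nfs.example.com   # hypothetical NFS server
    path: /exports/myclass    # hypothetical export path
```

Only the backend stanza changes; the claim side of the contract stays the same.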
Objective: Gain an understanding of the relationship between Persistent Volumes, Persistent Volume Claims, and the multiple ways they may be selected.
- Create PV pv-sc-example from the manifest manifests/pv-sc-example.yaml or use the yaml below. Note that it is labeled with type=hostpath, its Storage Class Name is set to mypvsc, and it uses Delete for the Reclaim Policy.
manifests/pv-sc-example.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-sc-example
  labels:
    type: hostpath
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: mypvsc
  hostPath:
    type: DirectoryOrCreate
    path: "/data/mypvsc"
Command
$ kubectl create -f manifests/pv-sc-example.yaml
- Once created, list the available Persistent Volumes.
$ kubectl get pv
You should see the single PV pv-sc-example flagged with the status Available, meaning no claim has been issued that targets it.
- Create PVC pvc-selector-example from the manifest manifests/pvc-selector-example.yaml or the yaml below.
manifests/pvc-selector-example.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-selector-example
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: hostpath
Command
$ kubectl create -f manifests/pvc-selector-example.yaml
Note that the selector targets type=hostpath.
- Then describe the newly created PVC
$ kubectl describe pvc pvc-selector-example
The PVC pvc-selector-example should be in a Pending state with the Error Event FailedBinding and the message "no Persistent Volumes available for this claim and no storage class is set". If a PV is given a storageClassName, ONLY PVCs that request that Storage Class may use it, even if the selector has a valid target.
- Now create the PV pv-selector-example from the manifest manifests/pv-selector-example.yaml or the yaml below.
manifests/pv-selector-example.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-selector-example
  labels:
    type: hostpath
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    type: DirectoryOrCreate
    path: "/data/mypvselector"
Command
$ kubectl create -f manifests/pv-selector-example.yaml
- Give it a few moments and then look at the Persistent Volumes once again.
$ kubectl get pv
The PV pv-selector-example should now be in a Bound state, meaning that a PVC has been mapped, or "bound," to it. Once bound, NO other PVCs may make a claim against the PV.
- Create the PVC pvc-sc-example from the manifest manifests/pvc-sc-example.yaml or use the yaml below.
manifests/pvc-sc-example.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-sc-example
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: mypvsc
  resources:
    requests:
      storage: 1Gi
Command
$ kubectl create -f manifests/pvc-sc-example.yaml
Note that this PVC has a storageClassName reference and no selector.
- Give it a few seconds and then view the current PVCs.
$ kubectl get pvc
The pvc-sc-example PVC should be bound to the pv-sc-example Volume. It consumed the PV with the corresponding storageClassName.
- Delete both PVCs.
$ kubectl delete pvc pvc-sc-example pvc-selector-example
- Then list the PVs once again.
$ kubectl get pv
The pv-sc-example PV will not be listed. This is because it was created with a persistentVolumeReclaimPolicy of Delete, meaning that as soon as the PVC was deleted, the PV itself was deleted.
The PV pv-selector-example was created without specifying a persistentVolumeReclaimPolicy, and so received the default for PVs: Retain. Its state of Released means that its associated PVC has been deleted.
In this state no other PVCs may claim it, even if pvc-selector-example were created again. The PV must be manually reclaimed or deleted. This ensures the preservation of the Volume's state in the event that its PVC was accidentally deleted, giving an administrator time to do something with the data before reclaiming it.
- Delete the PV pv-selector-example.
$ kubectl delete pv pv-selector-example
Summary: Persistent Volumes and Persistent Volume Claims, when bound together, provide the primary method of attaching durable storage to Pods. Claims may reference PVs by specifying a storageClassName, by targeting them with a selector, or by a combination of both. Once a PV is bound to a PVC, the relationship is tightly coupled and no further PVCs may issue a claim against the PV, even if the binding PVC is deleted. How PVs are reclaimed is configured via the PV attribute persistentVolumeReclaimPolicy: they can either be deleted automatically when set to Delete, or require manual intervention when set to Retain as a data-preservation safeguard.
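One administrative pattern follows from the Retain behavior: a Released PV can be made claimable again by clearing its claimRef binding. The merge-patch fragment below is a hedged sketch (the file name is hypothetical); on recent kubectl versions it could be applied with something like kubectl patch pv <pv-name> --patch-file pv-unbind.yaml, after which the PV returns to Available:

```yaml
# pv-unbind.yaml (hypothetical file name)
# Setting claimRef to null removes the record of the old binding so the
# Released PV becomes Available again. Do this only after inspecting or
# migrating the data still sitting on the backing store.
spec:
  claimRef: null
```

This does not touch the data itself; it only makes the PV eligible for binding again.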
Objective: Learn how to consume a Persistent Volume Claim within a Pod, and explore some of the ways they may be used.
- Create the PV and associated PVC html using the manifest manifests/html-vol.yaml or use the yaml below.
manifests/html-vol.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: html
  labels:
    type: hostpath
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: html
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    type: DirectoryOrCreate
    path: "/tmp/html"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: html
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: html
  resources:
    requests:
      storage: 1Gi
Command
$ kubectl create -f manifests/html-vol.yaml
- Create Deployment writer from the manifest manifests/writer.yaml or use the yaml below. It is similar to the volume-example Pod from the first exercise, but now uses a persistentVolumeClaim Volume instead of an emptyDir.
manifests/writer.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: writer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: writer
  template:
    metadata:
      labels:
        app: writer
    spec:
      containers:
      - name: content
        image: alpine:latest
        volumeMounts:
        - name: html
          mountPath: /html
        command: ["/bin/sh", "-c"]
        args:
        - while true; do
          date >> /html/index.html;
          sleep 5;
          done
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: html
Command
$ kubectl create -f manifests/writer.yaml
Note that the claimName references the previously created PVC defined in the html-vol manifest.
- Create a Deployment and Service reader from the manifest manifests/reader.yaml or use the yaml below.
manifests/reader.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reader
spec:
  replicas: 3
  selector:
    matchLabels:
      app: reader
  template:
    metadata:
      labels:
        app: reader
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: html
---
apiVersion: v1
kind: Service
metadata:
  name: reader
spec:
  selector:
    app: reader
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Command
$ kubectl create -f manifests/reader.yaml
- With the reader Deployment and Service created, use kubectl proxy to view the reader Service.
$ kubectl proxy
URL
http://127.0.0.1:8001/api/v1/namespaces/default/services/reader/proxy/
The reader Pods can reference the same Claim as the writer Pod. This is possible because the PV and PVC were created with the access mode ReadWriteMany.
- Now try to append "nginx" to index.html from one of the reader Pods.
$ kubectl exec reader-<pod-hash>-<pod-id> -- /bin/sh -c "echo nginx >> /usr/share/nginx/html/index.html"
The reader Pods have mounted the Volume as read-only. Just as in exercise 1, the command should error out with a message complaining about not being able to modify a read-only filesystem.
Summary: Using Persistent Volume Claims with Pods is quite easy. The attribute persistentVolumeClaim.claimName in the Pod's Volume definition simply must reference the name of the desired PVC. Multiple Pods may reference the same PVC as long as the access mode supports it.
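The access modes referenced in this exercise come from a small fixed set. A quick-reference fragment (the first three are long-standing; ReadWriteOncePod is newer and requires a CSI driver and a recent Kubernetes release):

```yaml
accessModes:
- ReadWriteMany       # read-write by many nodes (used in this exercise)
# - ReadWriteOnce     # read-write by a single node
# - ReadOnlyMany      # read-only by many nodes
# - ReadWriteOncePod  # read-write by exactly one Pod; CSI-only, newer clusters
```

A PV and PVC can only bind when the PV offers an access mode the claim requests.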
Clean Up Command
$ kubectl delete -f manifests/reader.yaml -f manifests/writer.yaml -f manifests/html-vol.yaml
Storage classes are an abstraction on top of an external storage resource (PV). They work directly with the external storage system to enable dynamic provisioning and remove the need for the cluster admin to pre-provision Persistent Volumes.
Objective: Understand how it's possible for a Persistent Volume Claim to consume dynamically provisioned storage via a Storage Class.
- Re-enable the kind default Storage Class, and wait for it to become available.
$ kubectl annotate --overwrite sc standard storageclass.kubernetes.io/is-default-class="true"
- Describe the new Storage Class
$ kubectl describe sc standard
Note the fields IsDefaultClass, Provisioner, and ReclaimPolicy. The Provisioner attribute references the "driver" for the Storage Class. Kind comes with its own driver, rancher.io/local-path, which simply mounts a hostPath from within the node as a Volume.
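The exercises never show a StorageClass object itself, so for reference here is a hedged sketch of roughly what kind's standard class looks like (fields recalled from memory; verify against kubectl get sc standard -o yaml on your cluster):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path       # the "driver" that provisions PVs
reclaimPolicy: Delete                    # applied to every PV it creates
volumeBindingMode: WaitForFirstConsumer  # delay binding until a Pod uses the claim
```

Unlike PVs and PVCs, a StorageClass is not namespaced and has no spec block; provisioner and its sibling fields sit at the top level.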
- Create PVC pvc-standard from the manifest manifests/pvc-standard.yaml or use the yaml below.
manifests/pvc-standard.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-standard
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
Command
$ kubectl create -f manifests/pvc-standard.yaml
- Describe the PVC pvc-standard.
$ kubectl describe pvc pvc-standard
The Events list the actions that occurred when the PVC was created: the external provisioner provisioned a Volume for the claim default/pvc-standard, and the resulting PV was assigned the name pvc-<pvc-standard uid>.
- List the PVs.
$ kubectl get pv
The PV pvc-<pvc-standard uid> will be the exact size requested by the associated PVC.
- Now create the PVC pvc-selector-example from the manifest manifests/pvc-selector-example.yaml or use the yaml below.
manifests/pvc-selector-example.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-selector-example
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: hostpath
Command
$ kubectl create -f manifests/pvc-selector-example.yaml
- List the PVCs.
$ kubectl get pvc
The PVC pvc-selector-example was bound to a PV automatically, even without a valid selector target. The standard Storage Class was configured as the default, meaning that any PVCs without a valid target fall back to using the standard Storage Class.
- Delete both PVCs.
$ kubectl delete pvc pvc-standard pvc-selector-example
- List the PVs once again.
$ kubectl get pv
The PVs were automatically reclaimed, following the ReclaimPolicy set by the Storage Class.
Summary: Storage Classes provide a method of dynamically provisioning Persistent Volumes from an external storage system. The provisioned Volumes have the same attributes as normal PVs, and have their own methods of being garbage collected. A Storage Class may be targeted by name using storageClassName within a Persistent Volume Claim request, or it may be configured as the default, ensuring that Claims can be fulfilled even when there is no valid selector target.