    Dynamic Provisioning in Kubernetes

    Introduction#

    Welcome to the world of Dynamic Provisioning in Kubernetes! If you’ve ever wished storage could just appear for your apps without manual setup, you’re in for a treat.

Imagine you’re at a magical library where you ask for a diary, and—poof!—one is created just for you, perfectly sized. That’s what Dynamic Provisioning does in Kubernetes: it automatically creates Persistent Volumes (PVs) when your app’s Persistent Volume Claim (PVC) asks for storage. Honestly, when I first tackled storage, I spent hours making PVs by hand 😴. Dynamic Provisioning is like having a storage wizard, and I’m here to show you how it works with super simple examples.

    In this blog, we’ll explore what Dynamic Provisioning is, why it’s a game-changer, and how to set it up. By the end, you’ll be conjuring storage like a Kubernetes pro.

    What Is Dynamic Provisioning?#

In Kubernetes, Persistent Volume Claims (PVCs) are like requests for diaries (PVs) to store your app’s data. Without Dynamic Provisioning, you (or an admin) must manually create PVs, like crafting diaries before anyone can use them. Dynamic Provisioning automates this: when a PVC asks for storage, the cluster creates a PV on the spot, using a StorageClass to define the rules.

    Think of a StorageClass as a recipe for diaries—how big, what type (cloud or local), and how to clean up. You might think Dynamic Provisioning is only for fancy cloud setups, but that’s not true—it works anywhere with the right setup. It’s like ordering a custom pizza without kneading the dough!

    Why Dynamic Provisioning Is Awesome#

    • Saves Time: No need to manually create PVs—the cluster does it for you.
    • Super Flexible: Works with cloud storage (like AWS EBS) or on-premises systems (like NFS).
    • Scales Easily: Apps can request storage anytime, perfect for growing workloads.

    When to Use Dynamic Provisioning#

    Use Dynamic Provisioning whenever your apps need storage without the hassle of pre-making PVs, like for databases, web apps, or user data. It’s a must for real-world clusters, especially in the cloud.

    How Does Dynamic Provisioning Work?#

    Dynamic Provisioning is like a vending machine for storage. You press a button (create a PVC), and out pops a diary (PV). Let’s break it down:

    1. Set Up a StorageClass: Define a recipe for PVs, like “make 10GiB diaries on Google Cloud.”
    2. Create a PVC: Your app requests storage, referencing the StorageClass.
    3. Cluster Magic: The cluster uses the StorageClass to create a PV and binds it to the PVC.
    4. App Uses Storage: Your Pod or Deployment writes to the PV via the PVC, keeping data safe.

    I once tried manually creating PVs for a dozen apps—total chaos! 😅 Dynamic Provisioning saved my sanity by automating it. Let’s see it in action with some easy examples.

    Example 1: Saving a Journal Entry with Dynamic Provisioning#

    Let’s make a Pod save a journal entry, like “Loved the beach,” using Dynamic Provisioning on Google Kubernetes Engine (GKE). We’ll create a StorageClass, a PVC, and a Pod to use it.

    Step 1: Create a StorageClass#

    First, we need a StorageClass to define how PVs are made.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gke-standard
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true

    Step-by-Step Breakdown of the YAML:

1. Name the recipe: metadata: name: gke-standard. This names the StorageClass, like labeling a diary blueprint.
2. Set the maker: provisioner: pd.csi.storage.gke.io. This tells GKE to create PVs using Google Cloud’s Persistent Disk CSI driver.
3. Choose the type: parameters: type: pd-standard. This picks standard persistent disks, like choosing a basic diary.
4. Clean up rule: reclaimPolicy: Delete. When the PVC is deleted, the diary (PV) is erased, like shredding it.
5. Bind right away: volumeBindingMode: Immediate. The PV is created as soon as the PVC is submitted, without waiting for a Pod.
6. Allow growth: allowVolumeExpansion: true. This lets you resize the diary later if your app needs more pages.

    Apply it:

    kubectl apply -f gke-standard-sc.yaml

    Output:

    storageclass.storage.k8s.io/gke-standard created

    Check the StorageClass:

    kubectl get storageclass

    Output:

NAME           PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
gke-standard   pd.csi.storage.gke.io   Delete          Immediate           true                   10s

    Step 2: Create a Persistent Volume Claim#

    Now, let’s create a PVC that uses this StorageClass.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: journal-claim
  namespace: simple-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gke-standard

    Step-by-Step Breakdown of the YAML:

1. Ask for a diary: resources: requests: storage: 10Gi. This requests a diary with 10GiB of space.
2. Set access: accessModes: - ReadWriteOnce. The diary can be mounted for writing by only one node at a time.
3. Use the recipe: storageClassName: gke-standard. This tells the cluster to use the gke-standard StorageClass to create a PV.
4. Name the request: metadata: name: journal-claim. This names the PVC, like a library slip.

    Apply it:

    kubectl apply -f journal-claim.yaml

    Output:

    persistentvolumeclaim/journal-claim created

    Check the PVC:

    kubectl get pvc -n simple-namespace

    Output:

NAME            STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
journal-claim   Bound    pvc-123-90ab-cdef-1234   10Gi       RWO            gke-standard   10s

    The cluster created a PV automatically, and the PVC is Bound to it!
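
Curious what the cluster conjured? You can inspect the auto-created PV, too. The output below is illustrative; your volume name and ages will differ:

kubectl get pv

Output:

NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   AGE
pvc-123-90ab-cdef-1234   10Gi       RWO            Delete           Bound    simple-namespace/journal-claim   gke-standard   10s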

    Step 3: Use the PVC in a Pod#

    Let’s make a Pod use this PVC to save “Loved the beach.”

apiVersion: v1
kind: Pod
metadata:
  name: journal-pod
  namespace: simple-namespace
spec:
  containers:
    - name: note-writer
      image: busybox
      command: ["/bin/sh", "-c", "echo 'Loved the beach' > /notes/journal.txt; sleep 3600"]
      volumeMounts:
        - name: note-storage
          mountPath: /notes
  volumes:
    - name: note-storage
      persistentVolumeClaim:
        claimName: journal-claim

    Step-by-Step Breakdown of the YAML:

1. Link the diary: volumeMounts: - name: note-storage, mountPath: /notes. This mounts the diary at the Pod’s /notes folder.
2. Use the diary: volumes: - name: note-storage, persistentVolumeClaim: claimName: journal-claim. This links the Pod to the diary via the PVC.
3. Write the entry: command: ["/bin/sh", "-c", "echo 'Loved the beach' > /notes/journal.txt; sleep 3600"]. This saves “Loved the beach” in the diary, then sleeps so the Pod stays running.

    Apply it:

    kubectl apply -f journal-pod.yaml

    Output:

    pod/journal-pod created

    Check the note:

    kubectl exec -it journal-pod -n simple-namespace -- cat /notes/journal.txt

    Output:

    Loved the beach

    The cluster created a PV dynamically, and the Pod used it via the PVC. If the Pod crashes, “Loved the beach” stays safe. You might mess this up the first time—totally fine, just try again!
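
Want to prove the diary really outlives the Pod? Here’s a quick test: write a second file (extra.txt is just a name made up for this experiment), delete the Pod, recreate it, and read the file back. We use a separate file because the Pod’s startup command overwrites journal.txt on every start:

kubectl exec -it journal-pod -n simple-namespace -- sh -c "echo 'Built a sandcastle' > /notes/extra.txt"
kubectl delete pod journal-pod -n simple-namespace
kubectl apply -f journal-pod.yaml
kubectl exec -it journal-pod -n simple-namespace -- cat /notes/extra.txt

If everything is wired up, you should see Built a sandcastle: the PVC stayed Bound to the same PV, and the new Pod mounted the same disk.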

    Example 2: Saving a To-Do List with Dynamic Provisioning and a Deployment#

Since Deployments are super common for apps, let’s save a to-do list, like “Buy flowers,” using Dynamic Provisioning on Amazon Elastic Kubernetes Service (EKS) with a Deployment.

    Step 1: Create a StorageClass#

Here’s a StorageClass for AWS EBS. This assumes the EBS CSI driver (available as an EKS add-on) is installed in your cluster:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: eks-ebs
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

    Step-by-Step Breakdown of the YAML:

1. Name the recipe: metadata: name: eks-ebs. This names the StorageClass, like a diary blueprint.
2. Set the maker: provisioner: ebs.csi.aws.com. This tells EKS to create PVs using AWS EBS.
3. Choose the type: parameters: type: gp3. This picks gp3 disks, like a sturdy diary.
4. Clean up rule: reclaimPolicy: Delete. The diary is erased when the PVC is deleted.
5. Wait to bind: volumeBindingMode: WaitForFirstConsumer. The PV is created only when a Pod needs it, like making a diary on demand.
6. Allow growth: allowVolumeExpansion: true. This lets you resize the diary later.

    Apply it:

    kubectl apply -f eks-ebs-sc.yaml

    Output:

    storageclass.storage.k8s.io/eks-ebs created

    Check the StorageClass:

    kubectl get storageclass

    Output:

NAME      PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
eks-ebs   ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   10s

    Step 2: Create a Persistent Volume Claim#

    Let’s create a PVC using this StorageClass.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: todo-claim
  namespace: simple-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: eks-ebs

    Step-by-Step Breakdown of the YAML:

1. Ask for a diary: resources: requests: storage: 5Gi. This requests a diary with 5GiB of space.
2. Set access: accessModes: - ReadWriteOnce. The diary can be mounted for writing by only one node at a time.
3. Use the recipe: storageClassName: eks-ebs. This uses the eks-ebs StorageClass to create a PV.
4. Name the request: metadata: name: todo-claim. This names the PVC, like a library slip.

    Apply it:

    kubectl apply -f todo-claim.yaml

    Output:

    persistentvolumeclaim/todo-claim created
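
One difference from Example 1: because eks-ebs uses volumeBindingMode: WaitForFirstConsumer, this PVC stays Pending, with no PV created, until a Pod actually uses it. Checking now should show something like this (output illustrative):

kubectl get pvc -n simple-namespace

NAME         STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
todo-claim   Pending                                      eks-ebs        5s

Don’t panic: this is expected, and the claim will bind in Step 3.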

    Step 3: Use the PVC in a Deployment#

    Now, let’s create a Deployment to save “Buy flowers.”

apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo-deployment
  namespace: simple-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todo
  template:
    metadata:
      labels:
        app: todo
    spec:
      containers:
        - name: note-writer
          image: busybox
          command: ["/bin/sh", "-c", "echo 'Buy flowers' > /notes/todo.txt; sleep 3600"]
          volumeMounts:
            - name: note-storage
              mountPath: /notes
      volumes:
        - name: note-storage
          persistentVolumeClaim:
            claimName: todo-claim

    Step-by-Step Breakdown of the YAML:

1. Set up the app: replicas: 1 and selector: matchLabels: app: todo. This runs one Pod, labeled “todo,” managed by the Deployment.
2. Link the diary: volumeMounts: - name: note-storage, mountPath: /notes. This mounts the diary at the Pod’s /notes folder.
3. Use the diary: volumes: - name: note-storage, persistentVolumeClaim: claimName: todo-claim. This links the Pod to the diary via the PVC.
4. Write the entry: command: ["/bin/sh", "-c", "echo 'Buy flowers' > /notes/todo.txt; sleep 3600"]. This saves “Buy flowers” in the diary, then sleeps to keep the Pod running.

    Apply it:

    kubectl apply -f todo-deployment.yaml

    Output:

    deployment.apps/todo-deployment created

    Check the Pod:

    kubectl get pods -n simple-namespace

    Output:

NAME                            READY   STATUS    RESTARTS   AGE
todo-deployment-abc123-xyz789   1/1     Running   0          10s

    Check the note:

    kubectl exec -it todo-deployment-abc123-xyz789 -n simple-namespace -- cat /notes/todo.txt

    Output:

    Buy flowers

    The cluster created a PV dynamically, and the Deployment used it. If the Pod crashes, “Buy flowers” stays safe, and a new Pod picks it up.
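
You can even simulate a crash to watch the Deployment heal itself. Replace the Pod name below with whatever kubectl get pods shows in your cluster:

kubectl delete pod todo-deployment-abc123-xyz789 -n simple-namespace
kubectl get pods -n simple-namespace

The ReplicaSet immediately starts a replacement Pod, which mounts the same PVC; running cat /notes/todo.txt in the new Pod should still print “Buy flowers.”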

    Common Dynamic Provisioning Questions (FAQ)#

    Here are some questions I had as a beginner:

    • What if I don’t have a StorageClass? Without a StorageClass, you must create PVs manually—Dynamic Provisioning won’t work. It’s like a library with no diary-making machine.
    • Can I use Dynamic Provisioning on my own server? Yes, with provisioners like NFS or local storage, but cloud providers like GKE or EKS make it easier.
• What happens if I delete the PVC? With reclaimPolicy: Delete, the PV is erased; with Retain, it stays but needs manual cleanup (see the sketch after this list).
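
If you’d rather keep the data, here’s a minimal sketch of a Retain-style StorageClass. The name keep-my-data is made up, and the provisioner shown is GKE’s Persistent Disk CSI driver from Example 1; swap in whatever your cluster actually uses:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: keep-my-data            # hypothetical name for this sketch
provisioner: pd.csi.storage.gke.io   # example provisioner; use your cluster's
parameters:
  type: pd-standard
reclaimPolicy: Retain           # PV survives PVC deletion
volumeBindingMode: WaitForFirstConsumer

With Retain, deleting the PVC leaves the PV in a Released state until an admin deletes it or makes it available again.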

    Best Practices for Dynamic Provisioning#

    To master Dynamic Provisioning, try these tips:

    • Pick the Right StorageClass: Choose a StorageClass that matches your needs, like pd-standard for GKE or gp3 for EKS.
    • Test First: Try Dynamic Provisioning in a small cluster before production, like practicing with a mini library.
    • Set Reclaim Policies: Use Delete for auto-cleanup or Retain for keeping data, depending on your app.
• Monitor Storage: Check kubectl get pvc and kubectl get pv to ensure PVs are created and bound (see the commands after this list).
• Use Namespaces: Organize PVCs by namespace; note that StorageClasses themselves are cluster-wide, not namespaced.
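
For the monitoring tip above, a few commands cover most day-to-day checks (the names come from this post’s examples):

kubectl get pvc -n simple-namespace
kubectl get pv
kubectl describe pvc journal-claim -n simple-namespace

The Events section at the bottom of kubectl describe pvc is usually where a stuck Pending claim explains what went wrong.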

    Conclusion#

    In this blog, you learned how Dynamic Provisioning makes Kubernetes storage effortless, creating Persistent Volumes automatically for your Persistent Volume Claims. We explored what it is, how it works, and set up examples with a Pod and a Deployment, saving journal entries and to-do lists on GKE and EKS.

    Last updated on May 06, 2025