Kubernetes Architecture Overview
Kubernetes is a lot like a well-rehearsed orchestra — every component plays a distinct role, and together they ensure your containerized applications run harmoniously. But before you start deploying apps or scaling services, it’s critical to understand what makes this powerhouse tick.
In our previous blog, we explored What is Kubernetes, unraveling the basics and why it’s a game-changer. Now, let’s dive into the core of it all — the Kubernetes architecture.
Why Should You Understand Kubernetes Architecture?#
Think of Kubernetes like a city. The architecture tells you how roads (networking), buildings (pods), rules (controllers), and services (load balancers, schedulers) are planned and governed. If you know how the city is laid out, it becomes way easier to build, scale, fix, and innovate.
Without knowing the inner workings, you'd be flying blind. And trust me, Kubernetes has its quirks. But once you understand its brain and body, everything else makes sense.
So, let’s pop the hood.
Kubernetes Architecture#

Control Plane (The Brain)#
The Control Plane is responsible for managing the overall Kubernetes cluster. It makes decisions about scheduling, responding to changes, and maintaining the desired state of the system. Think of the Control Plane as the brain of Kubernetes.
The key components of the Control Plane include:
API Server – The "Front Desk"#
This is the front-facing part of Kubernetes. Whenever you interact with the cluster — whether via `kubectl`, Helm, or other tools — it all goes through the API Server.
- What it does: Accepts REST requests, validates them, and updates the system accordingly.
- Why it matters: It’s the entry point for all administrative tasks.
Example: When you run `kubectl get pods`, you’re actually talking to the API Server, which then fetches the data from etcd.
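To make that concrete, here is a minimal, illustrative Pod manifest. Running `kubectl apply -f nginx-pod.yaml` turns it into a REST request that the API Server validates before storing the object in etcd:

```yaml
# nginx-pod.yaml: a minimal Pod spec for illustration.
# `kubectl apply -f nginx-pod.yaml` sends this to the API Server as a REST
# request; the API Server validates it and persists the object in etcd.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
```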
Controller Manager – The "Manager on Duty"#
This component ensures that the cluster is always in the desired state.
- What it does: Runs various controllers that watch the cluster and make changes as needed.
- Key Controllers:
- Node Controller: Watches node availability.
- Deployment Controller: Ensures deployments match the spec.
- Replication Controller: Ensures the correct number of pod replicas.
Example: If a node crashes and a pod disappears, the Controller Manager notices the shortfall and creates a replacement pod, which the Scheduler then places on another node.
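For instance, a Deployment like this hypothetical one declares a desired state of three replicas; the controllers keep comparing what is actually running against this spec and create or delete pods to close the gap:

```yaml
# A Deployment declares *desired* state: three replicas of this pod template.
# The controllers in the Controller Manager continuously reconcile the actual
# number of running pods with this number, recreating any that disappear.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:1.25
```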
Scheduler – The "Matchmaker"#
The Scheduler is responsible for assigning workloads (pods) to worker nodes based on resource availability and constraints.
- What it does: Evaluates available nodes and decides where to place new pods.
- Why it’s useful: It ensures optimized distribution of workloads across nodes.
Example: Node A is busy but Node B has room — the Scheduler picks Node B for your new app.
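Here is a sketch of the hints the Scheduler works with. The resource requests tell it how much room the pod needs, and the nodeSelector (the `disktype: ssd` label is just an example) restricts which nodes are even eligible:

```yaml
# The Scheduler only considers nodes that can satisfy these resource
# requests and that carry the `disktype: ssd` label (an illustrative
# constraint, not something your cluster necessarily has).
apiVersion: v1
kind: Pod
metadata:
  name: backend-api
spec:
  nodeSelector:
    disktype: ssd
  containers:
    - name: api
      image: node:18-alpine
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
```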
etcd – Kubernetes’ Memory#
etcd is a distributed key-value store that holds all cluster data; it is the single source of truth for everything Kubernetes knows about the cluster.
- What it does: Saves desired and actual cluster states, configs, secrets, and service info.
- Why it matters: If components crash or restart, the rest of the cluster can be rebuilt from the last known good state recorded in etcd.
Example: etcd stores your deployment details so the system can recover them even if multiple components fail.
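As a rough mental model (the exact key paths vary by version, and you normally read this state through the API Server rather than etcd directly), Kubernetes objects live in etcd under keys like these:

```yaml
# Illustrative layout only; in practice you read this state through the
# API Server (e.g. `kubectl get deployment weather-app -o yaml`), not etcd.
#
#   /registry/deployments/default/weather-app     -> Deployment spec + status
#   /registry/pods/default/weather-app-7d4b9-abc  -> Pod spec + status
#   /registry/services/specs/default/weather-app  -> Service definition
```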
Worker Nodes (The Hands)#
Worker Nodes are where the actual workloads (applications) run. Each node in a Kubernetes cluster contains the following essential components:
Kubelet – The Executor#
- What it does: Talks to the API Server and ensures that the containers described in pod specs are running correctly.
- How it works: Continuously monitors containers and reports status.
Example: A new pod is assigned to a node → the Kubelet spins it up.
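That monitoring loop is also what runs health checks. In this illustrative spec, the Kubelet starts the container, probes it every 10 seconds, and restarts it if the probe keeps failing:

```yaml
# The Kubelet on this node starts the container, then calls GET / on port 80
# every 10 seconds; if the probe keeps failing, the Kubelet restarts the
# container and reports the event back to the API Server.
apiVersion: v1
kind: Pod
metadata:
  name: healthy-app
spec:
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```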
Container Runtime – The Worker Engine#
- What it does: Responsible for pulling container images, starting/stopping containers, and managing execution.
- Supported Runtimes: containerd and CRI-O (Docker Engine still works via cri-dockerd; the built-in dockershim was removed in Kubernetes 1.24).
Example: The runtime pulls your Node.js image and runs your app.
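The runtime only cares about the container-level part of the spec. This illustrative snippet tells it which image to pull and when (`IfNotPresent` means reuse a local copy if the node already has one):

```yaml
# The Kubelet asks the container runtime (containerd, CRI-O, ...) to pull
# this image and start the container.
apiVersion: v1
kind: Pod
metadata:
  name: node-app
spec:
  containers:
    - name: app
      image: node:18-alpine
      imagePullPolicy: IfNotPresent
      # Placeholder command so the example container keeps running.
      command: ["node", "-e", "setInterval(() => {}, 1000)"]
```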
Kube Proxy – The Network Enabler#
- What it does: Manages networking rules on each node and routes traffic to the appropriate pods and Services.
- Why it’s important: It enables load balancing and service discovery.
Example: A frontend app needs to talk to the backend → Kube Proxy makes it happen.
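In practice, that usually means a Service. In this illustrative example, kube-proxy programs the node-level rules (iptables or IPVS) that forward traffic hitting the Service's cluster IP on port 80 to port 8080 on any pod labelled `app: backend`:

```yaml
# Traffic sent to this Service's cluster IP on port 80 is forwarded by
# kube-proxy to port 8080 on any pod that matches the selector below.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```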
How These Components Work Together#
Let’s bring this to life:
- A developer applies a YAML file using `kubectl apply`.
- API Server receives and validates the request.
- etcd stores the desired state.
- Controller Manager notices what’s missing and takes action.
- Scheduler assigns pods to a node with sufficient resources.
- Kubelet on the selected node pulls images and launches containers.
- Kube Proxy sets up the necessary networking.
Everything works in harmony to ensure your app is up, running, and reachable.
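To ground that flow, here is the kind of manifest the developer might apply in step 1 (a hypothetical `weather-app`, which also sets up the fault-tolerance example below); everything from step 2 onward is Kubernetes reacting to this one file:

```yaml
# weather-app.yaml: applied with `kubectl apply -f weather-app.yaml`.
# The API Server validates it, etcd records "3 replicas desired", the
# controllers and Scheduler place the pods, and the Kubelets start them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: weather-app
  template:
    metadata:
      labels:
        app: weather-app
    spec:
      containers:
        - name: weather
          image: ghcr.io/example/weather-app:1.0   # placeholder image
          ports:
            - containerPort: 3000
```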
Control Plane vs Worker Nodes#
| Control Plane | Worker Nodes |
| --- | --- |
| API Server, Scheduler, etcd, Controllers | Kubelet, Kube Proxy, Container Runtime |
| Thinks, plans, orchestrates | Executes, runs, connects |
This separation of roles ensures scalability, fault tolerance, and manageability.
Real-World Example: A Fault-Tolerant Web App#
Let’s say you’re running a weather forecast app with 3 pods. Suddenly, Node A crashes:
- Node Controller detects the node failure.
- etcd still remembers that 3 pods are needed.
- Controller Manager realizes only 2 are running.
- Scheduler finds a healthy node and schedules the 3rd pod.
- Kubelet spins up the pod again.
- Kube Proxy ensures the new pod can communicate.
The app stays online. No panic. Kubernetes handled everything behind the scenes.
Conclusion#
In this blog, you explored the inner workings of the Kubernetes Architecture, demystifying key components like the API Server, etcd, Scheduler, and more. You now know who’s calling the shots (Control Plane), and who’s doing the heavy lifting (Nodes).
Understanding the architecture is like getting a map before exploring a new city — it makes everything else easier. And you’ve just earned that map.
Up Next: We’ll get hands-on and show you how to set up your own Kubernetes cluster using Minikube in our next blog. Get ready to roll up your sleeves!