In this article, I will cover the process of deploying a minimal Kubernetes cluster, consisting of one control-plane node and one worker node, using the kubeadm tool. We will walk through a few steps: OS preparation, installing a container runtime and the necessary Kubernetes packages, and forming the cluster. In addition, we will install Cilium as the Container Network Interface.
My setup includes two VMs:
Control-plane – k8s-control-01
Worker node – k8s-worker-01
Both systems run under Ubuntu 24.04 with the latest updates.
As the container runtime I will use containerd (which implements the Container Runtime Interface, CRI), and for the Container Network Interface (CNI) I will install Cilium, which is widely adopted nowadays.
The overall procedure is simple and consists of the following steps:
1. OS Preparation and installing all the packages;
2. Creating the Cluster;
3. Installing Cilium CNI.
1. OS Preparation
Follow the steps below for each Kubernetes node.
Disable swap, if enabled:
sudo sed -i '/swap.img/d' /etc/fstab
sudo swapoff /swap.img

Alternatively, you can keep swap enabled, but the kubelet must then be explicitly configured to tolerate it (see the Kubernetes documentation on swap memory management).
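To see exactly what the sed expression does, here is a runnable sketch against a throwaway copy of an fstab-style file (the sample content is illustrative, not your real /etc/fstab):

```shell
# Build a sample fstab with a swap entry, then apply the same sed expression.
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

sed -i '/swap.img/d' /tmp/fstab.demo

if grep -q swap.img /tmp/fstab.demo; then
  echo "swap entry still present"
else
  echo "swap entry removed"   # this branch runs: the line is gone
fi
```

On the real node, swapon --show should produce no output once swap is fully disabled.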
Enable IPv4 Forwarding:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

Verification:

sysctl net.ipv4.ip_forward

Install containerd:
sudo apt-get update
sudo apt-get install -y containerd

Generate the default containerd config file and configure the systemd cgroup driver:
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

Restart containerd and verify it’s running:
sudo systemctl restart containerd
sudo systemctl status containerd

Add the Kubernetes apt repository:
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Install kubelet, kubeadm and kubectl:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet

At this point, the node is ready to create or join a cluster. If you performed these steps on the control-plane node, repeat them on the worker as well.
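One step worth double-checking before moving on is the SystemdCgroup edit from the containerd section. The sketch below applies the same sed substitution to a small sample fragment (a stand-in, not a full containerd config) so you can see its effect:

```shell
# Sample fragment mimicking the relevant part of /etc/containerd/config.toml.
cat > /tmp/containerd-demo.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Same substitution as in the preparation step above.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-demo.toml

grep SystemdCgroup /tmp/containerd-demo.toml   # the line now reads: SystemdCgroup = true
```

On the real node, grep SystemdCgroup /etc/containerd/config.toml should report true before you continue.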
2. Creating the Cluster
Now let’s create a Kubernetes cluster from the control-plane node.
sudo kubeadm init --service-dns-domain "vmik.lab" --upload-certs --pod-network-cidr=172.16.0.0/16 --service-cidr=172.17.0.0/16

In this example, I use a pod network CIDR and a service CIDR from the 172.16.0.0/16 and 172.17.0.0/16 ranges. You can keep the default ranges, but make sure they do not overlap with your production network.
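If you want to confirm that the two ranges cannot collide, python3 (present by default on Ubuntu 24.04) can check it with the standard ipaddress module; a quick sketch:

```shell
python3 - <<'EOF'
import ipaddress

pod = ipaddress.ip_network("172.16.0.0/16")   # --pod-network-cidr
svc = ipaddress.ip_network("172.17.0.0/16")   # --service-cidr
print("pod/service overlap:", pod.overlaps(svc))
EOF
# prints: pod/service overlap: False
```

You can add your node or production subnets to the same check before running kubeadm init.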
If everything is correct, and you performed all the preparation steps, you should see output similar to this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.251:6443 --token 64xjl9.jvcyw82bek9eqr8c \
--discovery-token-ca-cert-hash sha256:1b7b6d13de65427e8b831020f7fd6848925dcac9547c276f647b371a0cc561c8

First, copy the kubeadm join … command:
kubeadm join 10.0.0.251:6443 --token 64xjl9.jvcyw82bek9eqr8c \
--discovery-token-ca-cert-hash sha256:1b7b6d13de65427e8b831020f7fd6848925dcac9547c276f647b371a0cc561c8

We will use this command later to join the worker node to the cluster.
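Note that bootstrap tokens expire after 24 hours by default; if you lose this command, you can generate a fresh one on the control plane with kubeadm token create --print-join-command. The CA certificate hash can also be recomputed by hand: it is simply the SHA-256 of the CA public key in DER form (this pipeline follows the kubeadm reference documentation). The sketch below uses a throwaway self-signed certificate as a stand-in for /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA certificate (stand-in for /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Hash the public key; run against the real ca.crt, this prints the hex value
# that follows "sha256:" in the kubeadm join command.
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```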
Now we can check the status of the cluster, but let’s copy the kubeconfig file to the home directory first:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

By default, kubectl (the Kubernetes management tool) looks for its configuration file in the .kube directory in the user’s home directory.
Check cluster nodes and running pods:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-control-01 NotReady control-plane 2m44s v1.35.4
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7d764666f9-8v98c 0/1 Pending 0 2m26s
kube-system coredns-7d764666f9-cqvz8 0/1 Pending 0 2m26s
kube-system etcd-k8s-control-01 1/1 Running 0 2m33s
kube-system kube-apiserver-k8s-control-01 1/1 Running 0 2m32s
kube-system kube-controller-manager-k8s-control-01 1/1 Running 0 2m32s
kube-system kube-proxy-vfl4d 1/1 Running 0 2m27s
kube-system kube-scheduler-k8s-control-01 1/1 Running 0 2m33s
As you can see, for now there is only one node in the cluster, and only system pods are running.
It’s time to connect a worker. Connect to the worker node and run the kubeadm join command:
sudo kubeadm join 10.0.0.251:6443 --token 64xjl9.jvcyw82bek9eqr8c \
--discovery-token-ca-cert-hash sha256:1b7b6d13de65427e8b831020f7fd6848925dcac9547c276f647b371a0cc561c8

You should see output like this:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Run kubectl get nodes on the control plane and check the status:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-control-01 NotReady control-plane 11m v1.35.4
k8s-worker-01 NotReady <none> 3m26s v1.35.4

We can see two nodes: the first with the control-plane role, and a worker node without any role assigned.
You may notice that both nodes are NotReady. Several issues can put a node into the NotReady status, but in a freshly deployed cluster the usual cause is a missing CNI plugin. We need to install one.
3. Installing Cilium CNI
In this example, we will use the Cilium CLI to install the Cilium CNI. Alternatively, we could use Helm.
Let’s install the Cilium CLI on a node with an active kubeconfig file. In this example, I will do this on the control plane:
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

Before installing Cilium into the Kubernetes cluster, make sure you have copied the kubeconfig file to the ~/.kube directory, as described above.
After downloading the CLI, install Cilium to the Kubernetes cluster:
cilium install --version 1.19.3 --set ipam.mode=kubernetes
ℹ️ Using Cilium version 1.19.3
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has been installed

By default, Cilium does not follow the Kubernetes pod and Service network settings and allocates pod addresses from its own 10.0.0.0 range instead. To make Cilium respect Kubernetes IPAM, add the --set ipam.mode=kubernetes parameter to the cilium install command.
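If you prefer Helm over the Cilium CLI, the same setting can be expressed as a values fragment; the chart name and value below follow the upstream Cilium Helm chart:

```yaml
# values.yaml — Helm equivalent of "--set ipam.mode=kubernetes"
ipam:
  mode: kubernetes
```

Usage: helm repo add cilium https://helm.cilium.io/, then helm install cilium cilium/cilium --version 1.19.3 --namespace kube-system -f values.yaml.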
If you type kubectl get pods -n kube-system, you can see new pods in the kube-system namespace:
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
cilium-brvhq 0/1 Init:0/6 0 13s
cilium-envoy-qdmwm 0/1 ContainerCreating 0 13s
cilium-envoy-r7wwp 0/1 ContainerCreating 0 13s
cilium-j78l2 0/1 Init:0/6 0 13s
cilium-operator-86b4d5df4f-mgbrj 0/1 ContainerCreating 0 13s

We can also check the Cilium status using the corresponding command:
cilium status
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: OK
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet cilium-envoy Desired: 1, Ready: 1/1, Available: 1/1
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 1
cilium-envoy Running: 1
cilium-operator Running: 1
clustermesh-apiserver
hubble-relay
Cluster Pods: 2/2 managed by Cilium
Helm chart version: 1.19.3
Image versions cilium quay.io/cilium/cilium:v1.19.3@sha256:2e61680593cddca8b6c055f6d4c849d87a26a1c91c7e3b8b56c7fb76ab7b7b10: 1
cilium-envoy quay.io/cilium/cilium-envoy:v1.36.6-1776000132-2437d2edeaf4d9b56ef279bd0d71127440c067aa@sha256:ba0ab8adac082d50d525fd2c5ba096c8facea3a471561b7c61c7a5b9c2e0de0d: 1
cilium-operator quay.io/cilium/operator-generic:v1.19.3@sha256:205b09b0ed6accbf9fe688d312a9f0fcfc6a316fc081c23fbffb472af5dd62cd: 1

If we check the cluster nodes now, we will see that both of them are Ready:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-control-01 Ready control-plane 5m16s v1.35.4
k8s-worker-01 Ready <none> 4m35s v1.35.4

And this is how the node’s conditions will look:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 30 Apr 2026 21:45:53 +0000 Thu, 30 Apr 2026 21:45:53 +0000 CiliumIsUp Cilium is running on this node
MemoryPressure False Thu, 30 Apr 2026 22:20:59 +0000 Thu, 30 Apr 2026 21:42:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 30 Apr 2026 22:20:59 +0000 Thu, 30 Apr 2026 21:42:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 30 Apr 2026 22:20:59 +0000 Thu, 30 Apr 2026 21:42:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 30 Apr 2026 22:20:59 +0000 Thu, 30 Apr 2026 21:45:55 +0000 KubeletReady kubelet is posting ready status

4. Verification
It’s time to run our first pod:
kubectl run nginx --image=nginx
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 18s

If we inspect the pod, we will see that it received an address from the Kubernetes pod CIDR instead of the default 10.0.0.0 range:
kubectl describe pod nginx
Status: Running
IP: 172.16.1.176

If we create a Service, we should see the same:
kubectl create service clusterip --tcp=32222:80 test
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.17.0.1 <none> 443/TCP 6m45s
test ClusterIP 172.17.46.49 <none> 32222/TCP 3s
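As a final sanity check, we can confirm with python3 that both addresses landed in the ranges we passed to kubeadm init:

```shell
python3 - <<'EOF'
import ipaddress

# Pod IP vs --pod-network-cidr, ClusterIP vs --service-cidr
print(ipaddress.ip_address("172.16.1.176") in ipaddress.ip_network("172.16.0.0/16"))
print(ipaddress.ip_address("172.17.46.49") in ipaddress.ip_network("172.17.0.0/16"))
EOF
# prints: True (twice)
```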
We successfully created a small Kubernetes cluster consisting of one control-plane node and one worker node, installed the Cilium CNI, and deployed a simple pod and Service.
In the next article, I will cover the process of creating a highly available cluster, including three control-plane nodes and a load balancer.