How to Set Up a Kubernetes Cluster Using Kubeadm

In this blog, you will learn how to set up a Kubernetes cluster using the kubeadm utility. It is a step-by-step guide for beginners.

This kubeadm setup can be used as a lab for Kubernetes certification preparation (CKA, CKAD, and CKS). The certification environment is based on the latest Kubernetes version.

Kubernetes Cluster Prerequisites

To follow this setup, you need to have a minimum of two Ubuntu servers with the following CPU and memory requirements.

  1. Server 01 – Control Plane Node (minimum 2 vCPU and 2 GB RAM)
  2. Server 02 – Worker Node (minimum 1 vCPU and 2 GB RAM)

If your servers are running in the cloud, ensure the following ports are allowed between the servers and from the workstation you will use to connect to the cluster.

For example, port 6443 and the 30000-32767 range are required to access the API server and applications running on NodePort.

Component           Port Range    Purpose                    Used By
API Server          6443          Kubernetes API server      All
etcd                2379-2380     etcd server client API     kube-apiserver, etcd
Kubelet API         10250         Kubelet API                Self, Control plane
kube-scheduler      10259         kube-scheduler             Self
kube-controller     10257         kube-controller-manager    Self
NodePort Services   30000-32767   NodePort Services          All
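
For example, if you manage host firewalls with ufw, a minimal sketch for opening these ports could look like the following (adjust the rules to restrict source addresses to your own network):

# On the control plane node
sudo ufw allow 6443/tcp          # Kubernetes API server
sudo ufw allow 2379:2380/tcp     # etcd client API
sudo ufw allow 10250/tcp         # Kubelet API
sudo ufw allow 10259/tcp         # kube-scheduler
sudo ufw allow 10257/tcp         # kube-controller-manager

# On the worker nodes
sudo ufw allow 10250/tcp         # Kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort services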

Installing a Kubernetes cluster is easy as long as you have the prerequisites set up.

Note: This setup is based on the Ubuntu 22.04 server version. It should work on all recent Ubuntu-based server releases.

Install Container Runtime on All Nodes

First, we need to install a container runtime on all the nodes.

Tip: Make use of tmux to execute commands on all the nodes in parallel.
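
For example, assuming you have a tmux window with one pane SSHed into each node, you can toggle mirroring of your keystrokes to every pane:

# Send input to all panes in the current tmux window at once
tmux set-window-option synchronize-panes on

# Turn mirroring off again for per-node commands
tmux set-window-option synchronize-panes off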

Note: We will use the CRI-O container runtime because it is used in all the certification exam environments like CKA, CKAD, and CKS. You will need to use the crictl utility to interact with the containers on the node for troubleshooting purposes, especially for the CKS exam.

First, execute the following commands on all the nodes to enable iptables bridged traffic and to disable swap, both of which are requirements for kubeadm.

Note: Execute all the commands as root
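
If you are logged in as a regular user, you can switch to a root shell first:

sudo -i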

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

sudo swapoff -a
(crontab -l 2>/dev/null; echo "@reboot /sbin/swapoff -a") | crontab - || true
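
You can verify that the kernel modules are loaded and that the sysctl parameters took effect; all three parameters should report a value of 1:

# Confirm the kernel modules are loaded
lsmod | grep br_netfilter
lsmod | grep overlay

# Confirm the sysctl parameters are set to 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward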

Execute the following commands to install the CRI-O container runtime.

OS="xUbuntu_22.04"

VERSION="1.28"

cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -

sudo apt-get update
sudo apt-get install cri-o cri-o-runc cri-tools -y

sudo systemctl daemon-reload
sudo systemctl enable crio --now
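
Once the installation completes, you can verify that the CRI-O service is active and that crictl can reach the runtime:

# Check that the crio service is running
sudo systemctl status crio

# Confirm crictl can talk to the CRI-O socket
sudo crictl info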

Install Kubeadm, Kubelet, and Kubectl on All the Nodes

Kubeadm, kubelet, and kubectl need to be installed on all the nodes.

Execute the following commands to install them on all the nodes.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://dl.k8s.io/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update -y

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl
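
Note: The apt.kubernetes.io repository used above has been deprecated in favor of the community-owned pkgs.k8s.io repositories. If the commands above fail because the legacy repository is no longer served, the equivalent setup for the new repository looks like the following (the v1.29 in the URL pins the minor version and is only an example):

sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl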

Note: The latest Kubernetes version at the time of writing is 1.29. If you want an older Kubernetes version, you need to install that specific version explicitly.
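
For example, a minimal sketch of pinning all three components to a 1.28 patch release (the exact version string is an example; check what your configured repository offers first):

# List the versions available in the configured repository
apt-cache madison kubeadm

# Install a specific version of all three components (example version string)
sudo apt-get install -y kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00
sudo apt-mark hold kubelet kubeadm kubectl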

Set KUBELET_EXTRA_ARGS to Node IP on All Nodes

You need to set KUBELET_EXTRA_ARGS to the node's private IP so that the kubelet advertises the correct address.

Execute the following command to set the variable.

Note: The command below uses the eth1 interface and the inet address family to get the private IP. If your node uses a different interface, replace eth1 in the command.

sudo apt-get install -y jq
local_ip="$(ip --json a s | jq -r '.[] | if .ifname == "eth1" then .addr_info[] | if .family == "inet" then .local else empty end else empty end')"
cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=$local_ip
EOF

Verify the IP update using the following command.

cat /etc/default/kubelet

Initialize the Control Plane

Create a file named kubeadm.config on the control plane node with the following content. Replace 192.168.201.10 with the IP of your control plane node.

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs:
    - 127.0.0.1
    - 192.168.201.10
  extraArgs:
    bind-address: "0.0.0.0"
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
controllerManager:
  extraArgs:
    bind-address: "0.0.0.0"
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
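
Optionally, you can preview what kubeadm would do with this configuration without changing anything on the node:

# Prints the manifests and steps kubeadm would apply, without applying them
sudo kubeadm init --config=kubeadm.config --dry-run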

Execute the following command to initialize the cluster.

kubeadm init --config=kubeadm.config

On a successful execution, you will get two sets of commands in the output: one to set up the kubeconfig and one to join nodes. Note down the node join command, which is required to join the worker nodes to the control plane.

Execute the following commands to copy the kubeconfig file to your home folder so that you can use the kubectl command with the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
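
Alternatively, if you are running everything as root, you can point kubectl at the admin kubeconfig directly:

export KUBECONFIG=/etc/kubernetes/admin.conf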

Validate the cluster control plane using the following command.

kubectl get pods --all-namespaces

You should get an output as shown below, with all the control plane pods in the Ready and Running state.

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-5dd5756b68-8bzhx               1/1     Running   0          3m50s
kube-system   coredns-5dd5756b68-g5psg               1/1     Running   0          3m50s
kube-system   etcd-controlplane                      1/1     Running   0          4m4s
kube-system   kube-apiserver-controlplane            1/1     Running   0          4m5s
kube-system   kube-controller-manager-controlplane   1/1     Running   0          4m4s
kube-system   kube-proxy-77tll                       1/1     Running   0          3m50s
kube-system   kube-scheduler-controlplane            1/1     Running   0          4m4s

Add Worker Nodes to Control Plane

Execute the node join command you noted down on all the worker nodes.

For example,

kubeadm join 192.168.249.131:6443 --token ro4vsa.bt6u81o18y2iibc7 --discovery-token-ca-cert-hash sha256:1fc7175f359ad5fb2d6e8d7a55773767e7958d939511ce36f19a9ba5c8b975de

If you don't have the command, execute the following on the control plane node to regenerate the join command.

kubeadm token create --print-join-command

Once the nodes are added successfully, you should be able to list the nodes in Ready status as shown below.

$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
controlplane   Ready    control-plane   7m33s   v1.28.2
node01         Ready    <none>          31s     v1.28.2
node02         Ready    <none>          29s     v1.28.2
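
The ROLES column shows <none> for the workers because kubeadm does not assign them a role label. Optionally, you can add a cosmetic worker label yourself (node01 and node02 are the example node names from the output above):

kubectl label node node01 node-role.kubernetes.io/worker=worker
kubectl label node node02 node-role.kubernetes.io/worker=worker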

Install Network Plugin

To enable pod networking, you need to install a network plugin. We will use the Calico network plugin.

Download the plugin manifest on the control plane node.

curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/calico.yaml -O

Apply the manifest.

kubectl apply -f calico.yaml

Validate the Calico deployment by listing the Calico pods.

kubectl get pods -n kube-system

You should see the Calico pods in the Running state, as shown below.

root@controlplane:~# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7c968b5878-w526t   1/1     Running   0          2m35s
calico-node-8wxkq                          1/1     Running   0          2m35s
calico-node-bkltz                          1/1     Running   0          2m35s
calico-node-jvpn5                          1/1     Running   0          2m35s
coredns-5dd5756b68-8bzhx                   1/1     Running   0          52m
coredns-5dd5756b68-g5psg                   1/1     Running   0          52m
etcd-controlplane                          1/1     Running   0          52m
kube-apiserver-controlplane                1/1     Running   0          52m
kube-controller-manager-controlplane       1/1     Running   0          52m
kube-proxy-28fpr                           1/1     Running   0          45m
kube-proxy-4zrj9                           1/1     Running   0          45m
kube-proxy-77tll                           1/1     Running   0          52m
kube-scheduler-controlplane                1/1     Running   0          52m
root@controlplane:~#

Validate the Cluster by Deploying an Application

Apply the following YAML on the control plane node to deploy an Nginx application exposed on NodePort 32000.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32000
EOF

Get the service endpoint details.

kubectl get svc -o wide

You should be able to get the Nginx homepage on any of the worker node IPs on port 32000.
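
For example, you can test it from your workstation with curl (replace the placeholder with an actual worker node IP):

curl http://<worker-node-ip>:32000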

Conclusion

A kubeadm cluster is one of the best ways to run a multi-node Kubernetes cluster, both locally and in the cloud.

Also, it serves as a lab setup for Kubernetes certifications.

If you are planning for Kubernetes certifications, check out the Linux Foundation Coupons to get the latest certification voucher codes.
