Install a Kubernetes Cluster Using kubeadm on Ubuntu 22.04

Requirements:

  • 3 machines running Ubuntu 22.04 (Jammy)
  • 4 GiB or more of RAM per machine – any less leaves little room for your apps.
  • At least 2 CPUs on the machine that you use as a control-plane node.
  • Full network connectivity among all machines in the cluster. You can use either a public or a private network.

Prepare the nodes

reference : https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd

Make sure the date and time are correct and synced.

We use systemd-timesyncd; check its status:

systemctl status systemd-timesyncd

Set your NTP server in /etc/systemd/timesyncd.conf:

root@node1:/home/chairul# cat /etc/systemd/timesyncd.conf                  
[Time]

NTP=10.0.0.21

Restart the service

systemctl restart systemd-timesyncd

Check the time and timezone, and set the timezone if necessary:

$timedatectl

$timedatectl set-timezone Asia/Jakarta

Remove Swap

Disable swap now, and remove the swap entry from /etc/fstab so it stays off after a reboot:

$sudo swapoff -a
$sudo sed -i '/\tswap\t/d' /etc/fstab
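Note that the sed pattern only matches tab-delimited swap entries (the Ubuntu default layout). You can dry-run it against a scratch copy first; the file below is a hypothetical example, not your real fstab:

```shell
# Dry-run the swap-removal pattern on a scratch copy (hypothetical entries)
printf 'UUID=abcd / ext4 defaults 0 1\n/swap.img\tnone\tswap\tsw\t0\t0\n' > /tmp/fstab.test
sed -i '/\tswap\t/d' /tmp/fstab.test
cat /tmp/fstab.test   # the swap line is gone, the root entry remains
```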

etcd sometimes has problems running on Ubuntu 22.04. Edit the GRUB config to boot with cgroup v1, then apply it and reboot:

#cat /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="systemd.unified_cgroup_hierarchy=0"

#update-grub
#reboot
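If your GRUB_CMDLINE_LINUX_DEFAULT already carries other parameters (e.g. quiet splash), append the cgroup option rather than replacing the line. A sed sketch, shown here against a scratch copy rather than the real /etc/default/grub:

```shell
# Append the cgroup v1 fallback to an existing GRUB_CMDLINE_LINUX_DEFAULT
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub.test
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"$/GRUB_CMDLINE_LINUX_DEFAULT="\1 systemd.unified_cgroup_hierarchy=0"/' /tmp/grub.test
cat /tmp/grub.test
```

Remember to run update-grub and reboot after editing the real file.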

On every node, run as root:

$sudo -s
#apt update

Configure kernel modules and sysctl parameters:

#cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

$sudo modprobe overlay
$sudo modprobe br_netfilter

### sysctl params required by setup, params persist across reboots
#cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

### Apply sysctl params without reboot
#sudo sysctl --system

### Verify
#lsmod | grep br_netfilter
#lsmod | grep overlay

#sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Install Containerd

reference : https://docs.docker.com/engine/install/ubuntu/

For all nodes :

Update package

$sudo apt update

Setup the Repository

### Add Docker's official GPG key:
$sudo apt-get update
$sudo apt-get install ca-certificates curl gnupg
$sudo install -m 0755 -d /etc/apt/keyrings
$curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$sudo chmod a+r /etc/apt/keyrings/docker.gpg

### Add the repository to Apt sources:
$echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$sudo apt-get update
$sudo apt-get install containerd.io

###Verify
$sudo systemctl status containerd

Use the correct cgroup driver

Check which init system is running:

$ps -p 1

If PID 1 is systemd (the default on Ubuntu 22.04), configure containerd to use the systemd cgroup driver.

Edit /etc/containerd/config.toml and remove all of its content (the containerd.io package ships a config that disables the CRI plugin), then replace it with the following:

#cat /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Raise containerd's resource limits with a systemd drop-in:

#mkdir -p /etc/systemd/system/containerd.service.d

#cat <<EOF | tee /etc/systemd/system/containerd.service.d/override.conf
[Service]
LimitMEMLOCK=4194304
LimitNOFILE=1048576
EOF

#cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

#systemctl daemon-reload
#systemctl restart containerd

Installing kubeadm, kubelet and kubectl

Reference : https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Do this on all nodes

Update the apt package index and install packages needed to use the Kubernetes apt repository:

$sudo apt-get update
### apt-transport-https may be a dummy package; if so, you can skip that package
$sudo apt-get install -y apt-transport-https ca-certificates curl

Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:

#curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the appropriate Kubernetes apt repository:

### This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the apt package index, install kubelet, kubeadm and kubectl, and pin their version:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Deploy the cluster

Do this on the Master node

Get the IP address of the main interface of the control-plane node:

ip a

root@node1:/home/chairul# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 32:5d:39:78:34:01 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 10.6.12.101/24 brd 10.6.12.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::305d:39ff:fe78:3401/64 scope link 
       valid_lft forever preferred_lft forever
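To script the address lookup instead of reading it off the screen, you can parse the `ip` output. The snippet below parses the captured sample above; on a live node you would pipe in `ip addr show ens18` directly (ens18 is the interface name from this example):

```shell
# Extract the IPv4 address that follows the "inet" keyword
sample='2: ens18    inet 10.6.12.101/24 brd 10.6.12.255 scope global ens18'
addr=$(printf '%s\n' "$sample" | awk '{for(i=1;i<NF;i++) if($i=="inet"){split($(i+1),a,"/"); print a[1]}}')
echo "$addr"   # prints 10.6.12.101
```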

Make the master hostname resolve locally to the listener IP by adding it to /etc/hosts:

root@node1:~# cat /etc/hosts
127.0.0.1 localhost
10.6.12.101 node1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Initialize the Kubernetes control plane with that IP:

$sudo kubeadm init \
   --apiserver-cert-extra-sans=node1 \
   --apiserver-advertise-address 10.6.12.101 \
   --pod-network-cidr=10.24.0.0/16
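The --pod-network-cidr must not overlap the node network (10.6.12.0/24 in this walkthrough), or routing breaks. A quick overlap check in plain shell arithmetic, offered as a sketch (IPv4 only):

```shell
# ip2int: dotted quad -> 32-bit integer
ip2int() {
  oldifs=$IFS; IFS=.; set -- $1; IFS=$oldifs
  echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 ))
}
# cidr_overlap: succeed (exit 0) if the two IPv4 CIDRs overlap
cidr_overlap() {
  i1=$(ip2int "${1%/*}"); i2=$(ip2int "${2%/*}")
  p1=${1#*/}; p2=${2#*/}
  p=$(( p1 < p2 ? p1 : p2 ))                       # compare under the shorter prefix
  m=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  [ $(( i1 & m )) -eq $(( i2 & m )) ]
}
cidr_overlap 10.24.0.0/16 10.6.12.0/24 && echo overlap || echo ok   # prints ok
```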


###At the end of the output you should see this:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.6.12.101:6443 --token ivto1s.cgkjbmkqk4ombnf1 \
        --discovery-token-ca-cert-hash sha256:318200ac981c5fed6de6fe2465f8bb08f8e05f13604380f03f1aef4f1469a2ae 

This means the control plane was initialized successfully.

Then follow the instructions from the output:

$mkdir -p $HOME/.kube
$sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify the cluster is running:

#kubectl get pods -A

root@node1:/home/chairul# kubectl get pods -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS       AGE
kube-system   coredns-5dd5756b68-74jw7   0/1     Pending   0              88s
kube-system   coredns-5dd5756b68-vnhk2   0/1     Pending   0              89s
kube-system   kube-apiserver-node1       0/1     Running   1 (2m3s ago)   2m37s
kube-system   kube-proxy-qvss9           1/1     Running   1 (88s ago)    89s

Set up the Pod Network Add-on

Open the add-ons link from the kubeadm init output:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

In this part we will use Weave Net.

reference : https://www.weave.works/docs/net/latest/kubernetes/kube-addon/

On the master node, install the Weave Net daemonset:

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

This is the output of #kubectl get pods -A at this stage. Make sure all pods are Running:

root@node1:/home/chairul# kubectl get pods -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS         AGE
kube-system   coredns-5dd5756b68-52vpg        1/1     Running   0                32m
kube-system   coredns-5dd5756b68-p6cgk        1/1     Running   0                32m
kube-system   etcd-node1                      1/1     Running   9 (8m13s ago)    31m
kube-system   kube-apiserver-node1            1/1     Running   7 (6m ago)       32m
kube-system   kube-controller-manager-node1   1/1     Running   14 (9m14s ago)   31m
kube-system   kube-proxy-9l2gv                1/1     Running   11 (6m ago)      32m
kube-system   kube-scheduler-node1            1/1     Running   12 (8m43s ago)   31m
kube-system   weave-net-w7k6x                 2/2     Running   1 (48s ago)      64s

Add worker nodes to the cluster

Run the join command from the kubeadm init output on each worker node, as root:


kubeadm join 10.6.12.101:6443 --token ivto1s.cgkjbmkqk4ombnf1 \
        --discovery-token-ca-cert-hash sha256:318200ac981c5fed6de6fe2465f8bb08f8e05f13604380f03f1aef4f1469a2ae 
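The token from kubeadm init expires after 24 hours; a fresh join command can be printed on the master with `kubeadm token create --print-join-command`. The discovery hash itself is just the SHA-256 of the cluster CA's DER-encoded public key. The pipeline below demonstrates the computation on a throwaway self-signed certificate; on a real master you would read /etc/kubernetes/pki/ca.crt instead:

```shell
# Make a throwaway cert so the pipeline can be demonstrated end to end
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -days 1 -nodes -subj "/CN=demo-ca" 2>/dev/null
# SHA-256 of the DER-encoded public key, as expected by --discovery-token-ca-cert-hash
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```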

Verify the cluster

###Check component status (the componentstatuses API is deprecated, but still informative)
#kubectl get componentstatuses

###Check Node
#kubectl get node
root@node1:/home/chairul# kubectl get nodes
NAME    STATUS   ROLES           AGE    VERSION
node1   Ready    control-plane   43m    v1.28.2
node2   Ready    <none>          116s   v1.28.2
node3   Ready    <none>          98s    v1.28.2
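For a scripted health check you can count the nodes reporting Ready. Shown here against the captured table; on a live cluster you would pipe in `kubectl get nodes --no-headers` instead:

```shell
# Count rows whose STATUS column reads "Ready"
sample='node1   Ready    control-plane   43m    v1.28.2
node2   Ready    <none>          116s   v1.28.2
node3   Ready    <none>          98s    v1.28.2'
ready=$(printf '%s\n' "$sample" | awk '$2 == "Ready" {n++} END {print n}')
echo "$ready"   # prints 3
```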


###Check Pods
#kubectl get pods -A
root@node1:/home/chairul# kubectl get pods -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS       AGE
kube-system   coredns-5dd5756b68-52vpg        1/1     Running   0              42m
kube-system   coredns-5dd5756b68-p6cgk        1/1     Running   0              42m
kube-system   etcd-node1                      1/1     Running   9 (18m ago)    42m
kube-system   kube-apiserver-node1            1/1     Running   7 (16m ago)    43m
kube-system   kube-controller-manager-node1   1/1     Running   14 (19m ago)   42m
kube-system   kube-proxy-9l2gv                1/1     Running   11 (16m ago)   42m
kube-system   kube-proxy-ksvbt                1/1     Running   2 (18s ago)    113s
kube-system   kube-proxy-sxkmr                1/1     Running   1 (74s ago)    95s
kube-system   kube-scheduler-node1            1/1     Running   12 (19m ago)   42m
kube-system   weave-net-dcfsf                 2/2     Running   2 (36s ago)    95s
kube-system   weave-net-vfzkf                 2/2     Running   2 (55s ago)    113s
kube-system   weave-net-w7k6x                 2/2     Running   1 (11m ago)    11m

Troubleshooting and Verification Commands

###Check component status
#kubectl get componentstatuses

###Dump the system journal if the kubelet is failing
#sudo journalctl --since "10min ago" > all_logs.txt

###Use the containerd CLI to inspect kube-system containers, for example the kube-apiserver container
# crictl pods | grep kube-apiserver
# crictl ps --pod a40aa4b396b9b
# crictl logs 0072c84f747ce |& tail -2
