Kubernetes Standardization
To standardize a Kubernetes cluster,
a cluster must have the following:
– Dynamic provisioning storage (NFS or any backend storage)
– Container Network Interface (Calico/Weave)
– Load balancer network provisioning (MetalLB)
– DNS to resolve (8.8.8.8/<any>)
– containerd configured
Below are the steps to set up all of the requirements above.
THE REQUIREMENTS TO RUN THIS GUIDE ARE THE FOLLOWING:
– MUST HAVE LINUX KNOWLEDGE
– MUST HAVE KNOWLEDGE ABOUT CONTAINERS AND BASIC NETWORKING
– MUST KNOW THE BASIC CONCEPT OF DYNAMIC PROVISIONING
– MUST BE ABLE TO UNDERSTAND YAML (INDENTATION IN THIS CASE)
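On the YAML point: indentation defines structure, and a single misplaced space changes the meaning of a manifest. A minimal illustrative fragment (the names here are made up, not part of any manifest in this guide):

```yaml
# Two spaces per level define nesting; tabs are not allowed in YAML.
metadata:
  name: example       # "name" is a child of "metadata"
  labels:
    app: demo         # "app" is nested one level deeper, under "labels"
```

If `app: demo` were indented at the same level as `labels:`, it would become a direct child of `metadata` and the manifest would mean something different (or be rejected).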
Installing jq
sudo apt install -y jq
Setting up containerd modules
cat <<- EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Setting up Modprobe
sudo modprobe overlay
sudo modprobe br_netfilter
Setting kernel networking configuration
cat <<- EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Apply the changes
sudo sysctl --system
download containerd package.
wget https://github.com/containerd/containerd/releases/download/v1.7.16/containerd-1.7.16-linux-amd64.tar.gz
extract downloaded package to /usr/local
sudo tar xvf containerd-1.7.16-linux-amd64.tar.gz -C /usr/local
create new directory
sudo mkdir -p /etc/containerd
create containerd config
cat <<- TOML | sudo tee /etc/containerd/config.toml
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".containerd]
      discard_unpacked_layers = true
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
TOML
Download runc for container
wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc
Download cni plugins for container
wget https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-amd64-v1.4.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.4.1.tgz
Create the containerd systemd service
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now containerd.service
Install kubelet, kubeadm and kubectl with their prerequisites
sudo apt-get install -y apt-transport-https curl
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Deploy the Kubernetes cluster
sudo kubeadm init
Create a kube directory and copy the admin config (access to the cluster)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install CRDs for networking (Calico)
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/custom-resources.yaml
Note: do not also apply the standalone calico.yaml manifest on top of the operator-based install above; the two installation methods conflict with each other, so pick one.
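One caveat worth checking before applying custom-resources.yaml: its stock IP pool is 192.168.0.0/16, which overlaps the 192.168.122.0/24 host network used elsewhere in this guide. The pool can be changed by editing the Installation resource in a downloaded copy first. A sketch, where the CIDR value is an assumption that must match the pod CIDR chosen at kubeadm init time (via --pod-network-cidr) and must not overlap your node network:

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16          # assumed pod CIDR; replace with your own choice
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
```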
Deploy MetalLB
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml
IF PASTING THE COMMAND BELOW PRODUCES AN ERROR, PLEASE RETYPE THE COMMAND MANUALLY
cat <<- EOF | tee ./metallb-l2config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.122.200-192.168.122.219
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2configuration
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
EOF
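The usual cause of the paste errors warned about above is rich text turning plain "-" into typographic dashes and straight quotes into curly ones, which YAML and TOML reject. A cleanup sketch (the demo file written here is hypothetical, purely to show the substitution):

```shell
# Write a demo file containing a typographic en-dash, as a pasted heredoc might.
printf 'addresses:\n  – 192.168.122.200-192.168.122.219\n' > demo.yaml
# Normalize en-dashes, em-dashes, and curly double quotes back to plain ASCII.
sed -i 's/–/-/g; s/—/---/g; s/“/"/g; s/”/"/g' demo.yaml
cat demo.yaml
```

The same sed expression can be pointed at ./metallb-l2config.yaml after pasting it, instead of retyping the whole file.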
Apply the created file to Kubernetes
kubectl apply -f ./metallb-l2config.yaml
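To confirm that MetalLB hands out addresses from first-pool, a throwaway Service of type LoadBalancer can be created. The name and selector below are hypothetical and assume a matching deployment already exists:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lb-test            # hypothetical name, for a quick test only
spec:
  type: LoadBalancer
  selector:
    app: lb-test           # assumes pods labeled app=lb-test exist
  ports:
  - port: 80
    targetPort: 80
```

`kubectl get svc lb-test` should then show an EXTERNAL-IP from the 192.168.122.200-219 range; delete the Service afterwards to return the address to the pool.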
Deploy the ingress-nginx controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
Install NFS subdir external provisioner
On the NFS server (or on the control plane)
sudo apt update
sudo apt install nfs-kernel-server
sudo mkdir -p /srv/nfs/share
sudo chown nobody:nogroup /srv/nfs/share
sudo chmod 777 /srv/nfs/share
sudo vi /etc/exports
# add this line to /etc/exports:
/srv/nfs/share 192.168.122.0/24(rw,sync,no_subtree_check)
sudo exportfs -a
sudo systemctl enable nfs-kernel-server
sudo systemctl start nfs-kernel-server
On Worker Nodes
sudo apt update
sudo apt install nfs-common
showmount -e <nfs-server-ip>
On Control Plane Node
wget https://get.helm.sh/helm-v3.14.4-linux-amd64.tar.gz
tar zxvf ./helm-v3.14.4-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
# nfs.server must be your NFS server's IP address, and nfs.path the
# exported directory (/srv/nfs/share as configured above)
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.3.202 \
  --set nfs.path=/srv/nfs/share
Template for a Dynamic Provisioning test volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
spec:
  storageClassName: nfs-client # same name as the StorageClass
  accessModes:
  - ReadWriteMany # must match the PersistentVolume
  resources:
    requests:
      storage: 16Mi
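To exercise the claim end to end, a throwaway pod can write through the mount; the pod name and image below are assumptions, not part of the original guide:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod       # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data     # backed by the claim below
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-pvc-test
```

If dynamic provisioning is working, `kubectl get pvc nfs-pvc-test` should show STATUS Bound, and hello.txt should appear under the export directory on the NFS server.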