Well, there is quite a bit to setting up a Kubernetes (K8s) cluster. This post is a full K8s cluster setup, with all the steps needed to get Kubernetes running on Debian 11.
Found this article: https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/
This includes the following:
2 master nodes and 3 worker nodes. All will have Docker installed and all will be running as VMs in Proxmox.
Also, I will be setting up an HAProxy server on Debian 11 as a load balancer between the master nodes. Its address becomes the one used to access the master nodes: this VIP will balance and fail over between any master nodes configured in the cluster. At least two masters are recommended in an HA setup. It would also be possible to add this configuration to pfSense and avoid the extra HAProxy server, but for this demo I am leaving pfSense out of the picture. In the end, I will include a new series of posts outlining my entire HA Proxmox setup that runs the K8s cluster.
Pre-Step 1: Define IPs and DNS Names
It is best to plan out all the names, domain info, and IPs needed for this configuration.
| Hostname | IP Address | Role | Description |
|---|---|---|---|
| ha-01.dmz.alshowto.com | 192.168.2.222 | Load Balancer | Main load balancer for master node access |
| k8-m-01.dmz.alshowto.com | 192.168.2.200 | Master Node | First master node in cluster |
| k8-m-02.dmz.alshowto.com | 192.168.2.201 | Master Node | Second master node in cluster |
| k8-n-01.dmz.alshowto.com | 192.168.2.202 | Worker Node | First worker node in cluster |
| k8-n-02.dmz.alshowto.com | 192.168.2.203 | Worker Node | Second worker node in cluster |
| k8-n-03.dmz.alshowto.com | 192.168.2.204 | Worker Node | Third worker node in cluster |
Also, it is important to add all of these names to DNS and make sure each host is pingable by name from the others. At a minimum, put the names and IPs in the /etc/hosts file on every server that will interact with the cluster.
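If you go the /etc/hosts route instead of DNS, the entries on each machine would look like this (using the addresses from the table above; the short aliases are my own addition):

```
192.168.2.222  ha-01.dmz.alshowto.com   ha-01
192.168.2.200  k8-m-01.dmz.alshowto.com k8-m-01
192.168.2.201  k8-m-02.dmz.alshowto.com k8-m-02
192.168.2.202  k8-n-01.dmz.alshowto.com k8-n-01
192.168.2.203  k8-n-02.dmz.alshowto.com k8-n-02
192.168.2.204  k8-n-03.dmz.alshowto.com k8-n-03
```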
Step 1: Install a Debian 11 VM on Proxmox
Step 2: Make HAProxy VM
Simple: just create a Debian 11 container or VM and install HAProxy on it.
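A minimal haproxy.cfg sketch for balancing the Kubernetes API across the two masters might look like this. The names and addresses come from the table above; the health-check and balance settings are my assumptions, not from the original article, so adjust to taste:

```
# /etc/haproxy/haproxy.cfg (fragment) -- hypothetical sketch
frontend k8s-api
    bind 192.168.2.222:6443
    mode tcp
    option tcplog
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server k8-m-01 192.168.2.200:6443 check fall 3 rise 2
    server k8-m-02 192.168.2.201:6443 check fall 3 rise 2
```

Note the load balancer runs in TCP mode: the Kubernetes API is TLS end-to-end, so HAProxy just passes the encrypted stream through rather than terminating it.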
Step 3: Remove Swap
Note: if you are using Debian 11 cloud images, there is no swap, so this step can be skipped. Kubernetes requires swap to be disabled.
nano /etc/fstab
Comment out the swap line:
# swap was on /dev/sda6 during installation
# UUID=18b43798-486f-499d-9edf-2c551b34b5a1 none swap sw 0 0
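Equivalently, swap can be turned off for the running system and the fstab entry commented non-interactively, a sketch (double-check /etc/fstab afterward):

```shell
# Turn off all swap for the current boot
sudo swapoff -a
# Comment out any uncommented swap entries in /etc/fstab so the change persists
sudo sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab
```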
Step 4: Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Then add your user to the docker group and verify the install:
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world
Step 5: Install K8s on the VM That Will Become the Template
Install the needed packages:
KEYRING=/usr/share/keyrings/kubernetes.gpg
# Get the key and add it to the keyring
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor | sudo tee "$KEYRING" >/dev/null
# Add the associated package repo to apt
echo 'deb [signed-by=/usr/share/keyrings/kubernetes.gpg] http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list >/dev/null
sudo apt update
sudo apt install kubelet kubeadm kubectl -y
Then hold the packages so a routine apt upgrade does not move the cluster version:
sudo apt-mark hold kubelet kubeadm kubectl
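While still working in the template, it may also be worth regenerating containerd's configuration: the config.toml shipped on Debian disables containerd's CRI plugin, which is exactly what causes the kubeadm preflight error shown in the master-node step below. A sketch (the systemd cgroup-driver setting is the kubeadm docs' recommendation, not something from the original article):

```shell
# Regenerate a full default containerd config; the stock Debian file
# disables the CRI plugin that kubeadm needs
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
# Switch the runc runtime to the systemd cgroup driver, recommended with kubeadm
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```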
Step 6: Make This VM a Template
At this point, it is a good idea to make this a template. Then it can be used for all Master and Worker nodes going forward.
Step 7: Make First Master Node
Clone the template, set its hostname and IP, and run sudo kubeadm init. On Debian 11 the preflight checks will likely fail with the following container runtime (CRI) error:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: E1007 05:47:14.357349 847 remote_runtime.go:948] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-10-07T05:47:14Z" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
The fix is to remove the shipped containerd config (which disables the CRI plugin), restart containerd, and run the init again:
sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
sudo kubeadm init
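Since the HAProxy VIP is supposed to front both masters, the init also needs to advertise the load balancer as the control-plane endpoint, otherwise the second master cannot join the control plane behind the VIP. One way to express this is a kubeadm config file; this is a sketch using the names from the table above, and the pod subnet is an assumption that must match your CNI:

```yaml
# kubeadm-config.yaml -- hypothetical sketch
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "ha-01.dmz.alshowto.com:6443"
networking:
  podSubnet: "10.244.0.0/16"   # assumed; match your pod network add-on
```

Then initialize with sudo kubeadm init --config kubeadm-config.yaml --upload-certs; the --upload-certs flag is what makes the --control-plane join of the second master straightforward.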
Step 8: Install K8s Worker Nodes
Clone a VM from the template for each worker, then run the join command that kubeadm init printed (the token and hash below are from my run; yours will differ):
kubeadm join 192.168.1.139:6443 --token 4bp4wb.kpfgj1x7zulq5sy1 \
    --discovery-token-ca-cert-hash sha256:eeb7acbd33ae54874c0481af94a6d3b8abcd84c61a7552d5e19099ec8498ece8
To join the second master as a control-plane node instead, add the --control-plane flag:
kubeadm join 192.168.1.139:6443 --token 4bp4wb.kpfgj1x7zulq5sy1 \
    --discovery-token-ca-cert-hash sha256:eeb7acbd33ae54874c0481af94a6d3b8abcd84c61a7552d5e19099ec8498ece8 \
    --control-plane
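If the bootstrap token has expired by the time you get to the workers (they last 24 hours), a fresh join command can be printed on a master with kubeadm token create --print-join-command. The discovery hash can also be recomputed by hand from the cluster CA; this pipeline comes from the kubeadm documentation:

```shell
# Recompute the --discovery-token-ca-cert-hash value from the cluster CA
# (run on a master node)
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```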