Kubernetes cluster creation using Kubekey
--
“Gratitude is an abstraction in life that creates Happiness. All we have to do is, use it more often and save the time we spend searching for happiness…”
Creating and setting up a Kubernetes cluster the native way is often not straightforward. Administrators typically use kubeadm to bootstrap the cluster, which involves multiple installation and configuration steps. I covered those steps in a previous article: https://medium.com/geekculture/a-step-by-step-demo-on-kubernetes-cluster-creation-f183823c0411
I recently came across Kubekey, a product of the KubeSphere project, a distributed operating system for cloud-native applications that uses Kubernetes as its kernel. You can learn more about KubeSphere at https://kubesphere.io.
Kubekey adds a level of abstraction on top of k8s cluster creation: we can create a k8s cluster with a single command, passing the cluster node details in a YAML file as an argument. It eases the life of a cluster administrator in many more ways than just cluster creation. In this article I will walk through its features and share my experience using the tool.
Installation of the kubekey binary is pretty straightforward:
root@asish-lab:~/lab/kubekey# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -
Downloading kubekey v1.2.1 from https://github.com/kubesphere/kubekey/releases/download/v1.2.1/kubekey-v1.2.1-linux-amd64.tar.gz ...
Kubekey v1.2.1 Download Complete!
root@asish-lab:~/lab/kubekey# ls -ltr
total 23112
-rw-r--r-- 1 1001 121 24468 Dec 17 06:44 README_zh-CN.md
-rw-r--r-- 1 1001 121 24539 Dec 17 06:44 README.md
-rwxr-xr-x 1 1001 121 11970124 Dec 17 06:46 kk
-rw-r--r-- 1 root root 11640487 Jan 2 02:39 kubekey-v1.2.1-linux-amd64.tar.gz
First, let's check the k8s versions supported by kubekey. Only the versions in this list can be used. (I wish there were more.)
root@asish-lab:~/lab/kubekey# ./kk version --show-supported-k8s
v1.15.12
v1.16.8
v1.16.10
v1.16.12
v1.16.13
v1.17.0
v1.17.4
v1.17.5
v1.17.6
v1.17.7
v1.17.8
v1.17.9
v1.18.3
v1.18.5
v1.18.6
v1.18.8
v1.19.0
v1.19.8
v1.19.9
v1.20.4
v1.20.6
v1.20.10
v1.21.4
v1.21.5
v1.22.1
We need to provide the details of the nodes on which the k8s cluster will be created. For this demo I am going to use LXC containers managed by LXD.
root@asish-lab:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=15GB]: 20GB
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
root@asish-lab:~# lxc launch ubuntu:18.04 kmaster1
Creating kmaster1
Starting kmaster1
root@asish-lab:~# lxc launch ubuntu:18.04 kworker1
Creating kworker1
Starting kworker1
root@asish-lab:~# lxc launch ubuntu:18.04 kworker2
Creating kworker2
Starting kworker2
root@asish-lab:~# lxc ls -c nsa4
+----------+---------+--------------+-----------------------+
| NAME | STATE | ARCHITECTURE | IPV4 |
+----------+---------+--------------+-----------------------+
| kmaster1 | RUNNING | x86_64 | 10.108.169.53 (eth0) |
+----------+---------+--------------+-----------------------+
| kworker1 | RUNNING | x86_64 | 10.108.169.148 (eth0) |
+----------+---------+--------------+-----------------------+
| kworker2 | RUNNING | x86_64 | 10.108.169.185 (eth0) |
+----------+---------+--------------+-----------------------+
Create a kubekey config file as below with the details of the cluster we are going to create: the IP address and SSH account details for each node (I wondered whether SSH keys could be passed instead of passwords; see the note after the config file), which servers will be master and worker nodes, and the cluster's internal IP ranges.
root@asish-lab:~/lab/kubekey# cat kubekey-config.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: mykubekey-cluster-1
spec:
  hosts:
  - {name: kubekey-master1, address: 10.67.130.163, internalAddress: 10.67.130.163, user: ubuntu, password: ubuntu}
  - {name: kubekey-worker1, address: 10.67.130.149, internalAddress: 10.67.130.149, user: ubuntu, password: ubuntu}
  - {name: kubekey-worker2, address: 10.67.130.53, internalAddress: 10.67.130.53, user: ubuntu, password: ubuntu}
  roleGroups:
    etcd:
    - kubekey-master1
    master:
    - kubekey-master1
    worker:
    - kubekey-worker1
    - kubekey-worker2
  controlPlaneEndpoint:
    domain: mykubekey-cluster1.local
    address: "10.67.130.163"
    port: 6443
  kubernetes:
    version: v1.22.1
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: flannel
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
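On the SSH question: the host entries do not have to carry plain-text passwords. Kubekey's sample configuration in the project repository includes a privateKeyPath field for key-based SSH authentication, so a host entry could instead look like the line below (a sketch; the key path is an assumption for my setup):
  - {name: kubekey-master1, address: 10.67.130.163, internalAddress: 10.67.130.163, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}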
Let's run the cluster-creation command against this config and see how it goes.
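The invocation simply points kk at the config file (the same command appears again later once everything is fixed):
./kk create cluster -f kubekey-config.yaml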
Kubekey first verifies that some required packages are available on the nodes:
sudo
curl
openssl
ebtables
socat
ipset
conntrack
docker
nfs client
ceph client
glusterfs client
In this case it is complaining about conntrack. I am going to install conntrack and see how it progresses.
root@asish:~/lab# cat bootstrap.sh
apt-get update
apt-get install conntrack -y
root@asish-lab:~/lab# for i in kmaster1 kworker1 kworker2; do cat bootstrap.sh | lxc exec $i -- bash ; done
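Only conntrack was missing in my case, but if more of the listed dependencies were absent, the same bootstrap script could be extended along these lines (a sketch using the standard Ubuntu package names):
apt-get update
apt-get install -y conntrack socat ebtables ipset curl openssl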
I select yes to continue the installation. Kubekey downloads the dependencies and installs the packages needed for cluster creation, such as kubeadm, kubelet, kubectl, helm, etcd and docker. Along the way I encountered some errors:
[kmaster1 10.108.169.53] MSG:
swapoff: Not superuser.
modprobe: FATAL: Module ip_vs not found in directory /lib/modules/4.15.0-163-generic
modprobe: FATAL: Module ip_vs_rr not found in directory /lib/modules/4.15.0-163-generic
modprobe: FATAL: Module ip_vs_wrr not found in directory /lib/modules/4.15.0-163-generic
modprobe: FATAL: Module ip_vs_sh not found in directory /lib/modules/4.15.0-163-generic
modprobe: FATAL: Module nf_conntrack not found in directory /lib/modules/4.15.0-163-generic
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-arptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
net.ipv4.ip_local_reserved_ports = 30000-32767
sysctl: permission denied on key 'vm.max_map_count'
sysctl: permission denied on key 'vm.swappiness'
sysctl: permission denied on key 'fs.inotify.max_user_instances'
/usr/local/bin/kube-scripts/initOS.sh: line 94: /proc/sys/vm/drop_caches: Permission denied
The installation nevertheless continued, but it finally failed with the error below:
p/kubekey/etcd-v3.4.13-linux-amd64.tar.gz Done
ERRO[12:45:00 UTC] Failed to install etcd binaries.: Failed to exec command: sudo -E /bin/sh -c "tar -zxf /tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz && cp -f etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/ && chmod +x /usr/local/bin/etcd* && rm -rf etcd-v3.4.13-linux-amd64"
tar: etcd-v3.4.13-linux-amd64/etcdctl: Cannot change ownership to uid 630384594, gid 600260513: Invalid argument
tar: etcd-v3.4.13-linux-amd64/README.md: Cannot change ownership to uid 630384594, gid 600260513: Invalid argument
It looks like this can be mitigated by passing "--no-same-owner" to the tar command. Let's try with a source build:
root@asish-lab:~/lab/kubekey# git remote -v
origin https://github.com/kubesphere/kubekey.git (fetch)
origin https://github.com/kubesphere/kubekey.git (push)
root@asish-lab:~/lab# cd kubekey/
root@asish-lab:~/lab/kubekey# grep -r 'tar -zxf' *
Dockerfile.binaries: wget https://get.helm.sh/helm-${HELMVERSION}-linux-${ARCH}.tar.gz && tar -zxf helm-${HELMVERSION}-linux-${ARCH}.tar.gz && \
Binary file output/kk matches
output/kubekey/logs/kubekey.log.20220104:install etcd binaries failed: Failed to exec command: sudo -E /bin/bash -c "tar -zxf /tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz && cp -f etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/ && chmod +x /usr/local/bin/etcd* && rm -rf etcd-v3.4.13-linux-amd64"
pkg/binaries/kubernetes.go: helm.GetCmd = fmt.Sprintf("%s && cd %s && tar -zxf helm-%s-linux-%s.tar.gz && mv linux-%s/helm . && rm -rf *linux-%s*", getCmd, filepath, helm.Version, helm.Arch, helm.Arch, helm.Arch)
pkg/binaries/k3s.go: helm.GetCmd = fmt.Sprintf("%s && cd %s && tar -zxf helm-%s-linux-%s.tar.gz && mv linux-%s/helm . && rm -rf *linux-%s*", getCmd, filepath, helm.Version, helm.Arch, helm.Arch, helm.Arch)
pkg/kubernetes/tasks.go: if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("tar -zxf %s -C /opt/cni/bin", dst), false); err != nil {
pkg/etcd/tasks.go: installCmd := fmt.Sprintf("tar -zxf %s/%s.tar.gz && cp -f %s/etcd* /usr/local/bin/ && chmod +x /usr/local/bin/etcd* && rm -rf %s", common.TmpDir, etcdFile, etcdFile, etcdFile)
pkg/container/containerd.go: fmt.Sprintf("mkdir -p /usr/bin && tar -zxf %s -C /usr/bin ", dst),
pkg/container/docker.go: fmt.Sprintf("mkdir -p /usr/bin && tar -zxf %s && mv docker/* /usr/bin && rm -rf docker", dst),
pkg/files/file.go: fmt.Sprintf("%s && cd %s && tar -zxf helm-%s-linux-%s.tar.gz && mv linux-%s/helm . && rm -rf *linux-%s*",
pkg/k3s/tasks.go: if _, err := runtime.GetRunner().SudoCmd(fmt.Sprintf("tar -zxf %s -C /opt/cni/bin", dst), false); err != nil {
I am going to add the --no-same-owner option to the tar commands in these Go source files.
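I made the edits by hand, but a one-liner along these lines (an untested sketch, run from the repository root) would patch every occurrence under pkg/ at once:
grep -rl 'tar -zxf' pkg/ | xargs sed -i 's/tar -zxf/tar --no-same-owner -zxf/g'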
With the option added to every affected file, I ran the build below. (I had to install docker for the build to complete.)
root@asish-lab:~/lab/kubekey# ./build.sh
.....
root@asish-lab:~/lab/kubekey# find . -name kk
./output/kk
root@asish-lab:~/lab/kubekey# cd output/
root@asish-lab:~/lab/kubekey/output# ./kk create cluster -f ../../kube-key.conf --debug
This run stopped with the message: "When the environment is not HA, the LB address does not need to be entered, so delete the corresponding value." Since this is not an HA setup, I removed the "controlPlaneEndpoint" section and continued the installation.
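For reference, kubekey's sample config handles the non-HA case by keeping the section but leaving the load-balancer address empty, roughly like this (a sketch; the domain value is the sample's default):
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443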
This time it went much further and installed everything, but kubeadm's preflight checks on the nodes complained:
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[ERROR Swap]: running with swap on is not supported. Please disable swap
[ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-163-generic\n", err: exit status 1
[ERROR SystemVerification]: unsupported graph driver: btrfs
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
The containers created with the default LXD profile get only one CPU and lack the kernel settings kubeadm expects, so let's use a custom profile.
root@asish-lab:~/lab# lxc profile copy default k8s
root@asish-lab:~/lab# export EDITOR=vim
root@asish-lab:~/lab# lxc profile edit k8s
config:
  limits.cpu: "2"
  limits.memory: 2GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,nf_nat,overlay,br_netfilter
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
  security.nesting: "true"
  security.privileged: "true"
description: Kubernetes LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: k8s
I removed the old containers and created new ones with the k8s profile.
root@asish-lab:~/lab# lxc delete kmaster1 kworker1 kworker2 --force
root@asish-lab:~/lab# lxc launch images:ubuntu/18.04 kmaster1 --profile k8s
Creating kmaster1
Starting kmaster1
root@asish-lab:~/lab# lxc launch images:ubuntu/18.04 kworker1 --profile k8s
Creating kworker1
Starting kworker1
root@asish-lab:~/lab# lxc launch images:ubuntu/18.04 kworker2 --profile k8s
Creating kworker2
Starting kworker2
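Before re-running kubekey, a quick sanity check (not captured in my original session) that the new profile took effect; each container should now report two CPUs:
lxc exec kmaster1 -- nproc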
Now it's time to run kubekey again against the new containers and complete the cluster creation.
root@asish-lab:~/lab/kubekey# ./kk create cluster -f kubekey-config.yaml
This time the run went through and created the cluster with one master and two worker nodes.
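As a quick verification (not shown in my session output), listing the nodes with kubectl on the master, assuming the admin kubeconfig is in place there, should show one control-plane node and two workers in Ready state:
kubectl get nodes -o wide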
Overall I find kubekey helpful for creating a Kubernetes cluster, adding new nodes to it and maintaining the node details in a config file.
To add a new node to the cluster, we first add its details to the hosts and worker sections of the config file and then run the command below:
root@asish-lab:~/lab/kubekey# ./kk add nodes -f kubekey-config.yaml
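For example, adding a (hypothetical) third worker would mean appending one host entry like the one below and listing kubekey-worker3 under roleGroups.worker before running the command above; the name and address are placeholders:
  - {name: kubekey-worker3, address: 10.67.130.200, internalAddress: 10.67.130.200, user: ubuntu, password: ubuntu}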
We can delete a node as below
root@asish-lab:~/lab/kubekey# ./kk delete node kubekey-worker2 -f kubekey-config.yaml
After deleting the node, we should also remove it from the worker list in the kubekey config file.
We can delete the whole cluster as below
root@asish-lab:~/lab/kubekey# ./kk delete cluster -f kubekey-config.yaml
There is also provision to upgrade the cluster using kubekey:
./kk upgrade [--with-kubernetes version] [--with-kubesphere version]
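Run against our config file, an upgrade invocation would look something like the line below (I have not tried this yet; the target version is a placeholder):
./kk upgrade --with-kubernetes <target-version> -f kubekey-config.yaml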
It also allows you to federate multiple clusters using cluster role. I am yet to try both cluster upgrade and federation using kubekey.
Conclusion
Overall I find kubekey extremely helpful in creating and managing Kubernetes clusters. I used LXC containers as the cluster nodes and faced some challenges, but in real environments the nodes will usually be VMs or physical servers, where these issues should be rare. I like the way the cluster nodes are defined in a config file that is passed to the kk command for creating and updating the cluster. I did have to put SSH passwords in plain text in that file; using SSH keys instead (see the note after the config file earlier) avoids this.
We can achieve similar results using Ansible, which will be the topic of a future article: "To create and manage Kubernetes clusters using Ansible and kubeadm".
If you liked this article, kindly like and share. Follow me for more such articles.