Creating a Kubernetes cluster using Vagrant machines as nodes and containerd as the container runtime
---
- This role is an automation of the steps in the aCloudGuru CKS lesson "Building a Kubernetes Cluster".
- Parts of the roles are borrowed from my previous projects for creating a K8s cluster.
- The Vagrant Ansible local provisioner is used to execute the roles on the target hosts.
- The Kubernetes version to be used can be modified by changing the fact inside the kontainerd role:

```yaml
- set_fact:
    k8s_version: 1.20.1-00 # Change to whatever desired version
```
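As a rough sketch of how such a fact is typically consumed (the task below is illustrative, not copied from the role), the version string is interpolated into the apt package pin:

```yaml
# Illustrative only: pin an apt package to the k8s_version fact set above
- name: Install kubeadm at the pinned version
  apt:
    name: "kubeadm={{ k8s_version }}"
    state: present
```

The `-00` suffix is the Debian package revision used by the Kubernetes apt repositories, so the fact must follow the `<major>.<minor>.<patch>-00` format.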
| Machine | Address | FQDN |
|---|---|---|
| master | 192.168.100.11 | master master.com |
| worker | 192.168.100.10 | worker worker.com |
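The FQDN column implies static name resolution between the nodes; a minimal `/etc/hosts` sketch matching the table (assuming the Vagrantfile or role manages these entries) would be:

```
192.168.100.11 master master.com
192.168.100.10 worker worker.com
```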
```bash
# Clone the repo
git clone https://github.com/theJaxon/Kontainerd.git
cd Kontainerd

# Start the machines
vagrant up

# SSH into either of the machines
vagrant ssh <master|worker>
```
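Once provisioning finishes, the cluster can be sanity-checked from the master node with standard kubectl commands (these are generic kubeadm checks, not specific to this repo):

```bash
vagrant ssh master
kubectl get nodes -o wide        # both nodes should report Ready once the CNI is up
kubectl get pods -n kube-system  # control-plane pods should be Running
```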
- Since Docker isn't available, an alternative is to use Podman as the container engine. To do this, just include `podman.yml` in the kontainerd role:

```yaml
- name: Install Podman
  include_tasks: podman.yml
```
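The actual contents of `podman.yml` live in the role; as a sketch of what such a task file typically looks like on an Ubuntu box (the package name and modules here are assumptions, not copied from the repo):

```yaml
# podman.yml (illustrative): install Podman from the distribution repositories
- name: Install the Podman package
  apt:
    name: podman
    state: present
    update_cache: yes
```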
- One of the quirks I've faced was with installing the Kubernetes packages (kubeadm, kubectl and kubelet); the problem has to do with the installation order. I started by installing `kubelet`, which pulled in `kubectl` as a dependency, and this broke the installation because it installed the latest kubectl version rather than the version I was specifying. When the play later tried to install kubectl explicitly at the pinned (older) version, apt treated it as a downgrade, so Ansible errored out.
- The workaround was to change the sequence and start by installing the desired `kubectl` version first.
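The working order described above can be sketched as follows (task names are illustrative, not taken from the role): pin kubectl first, so that installing kubelet never drags in a newer kubectl as a dependency:

```yaml
# Illustrative ordering fix: install kubectl at the pinned version first
- name: Install kubectl before kubelet to avoid a dependency downgrade
  apt:
    name: "kubectl={{ k8s_version }}"
    state: present

# kubelet's kubectl dependency is already satisfied, so nothing gets upgraded
- name: Install kubelet and kubeadm at the same pinned version
  apt:
    name:
      - "kubelet={{ k8s_version }}"
      - "kubeadm={{ k8s_version }}"
    state: present
```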