In February 2020, I created a gist with instructions to set up minikube here.
Instead, this README provides instructions to set up Kubernetes (kubeadm) on a gcloud n1-standard-1 Ubuntu instance.
As a bonus stage, I also added instructions to register Go apps to Docker Hub and deploy them with MetalLB, nginx-ingress and persistent volume claims.
Table of Contents

- Create a new gcloud project
- Ubuntu VM setup (Terraform)
- Terraform
- Install kubeadm on the Ubuntu VM
- Init kubeadm
- Install the flannel network fabric
- Install the Kubernetes Dashboard
- Log in to the Kubernetes Dashboard from kubectl on your laptop
- Deploy my Golang apps
- Register my Golang apps to Docker Hub
- MetalLB host IP
- Reduce CPU requests
- Nginx-ingress reverse proxy
- Final deployment with PVC
settings.json for VS Code (two-space indent for HCL files):

    "[hcl]": {
      "editor.tabSize": 2
    },
In the GCP console:

- Create a service account.
- Add the Compute Admin role.
- Create a JSON key.
- Enable the Compute Engine API (all four steps are scripted below).
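The same four steps can also be scripted with the gcloud CLI. A sketch, assuming the project id `kubeadm20200717` from the logs below; the service account name `terraform` and the key file name are placeholders:

    # Create a service account for Terraform (name is a placeholder)
    gcloud iam service-accounts create terraform \
      --project kubeadm20200717 --display-name terraform
    # Grant the Compute Admin role
    gcloud projects add-iam-policy-binding kubeadm20200717 \
      --member "serviceAccount:terraform@kubeadm20200717.iam.gserviceaccount.com" \
      --role roles/compute.admin
    # Create a JSON key for Terraform to authenticate with
    gcloud iam service-accounts keys create account.json \
      --iam-account terraform@kubeadm20200717.iam.gserviceaccount.com
    # Enable the Compute Engine API
    gcloud services enable compute.googleapis.com --project kubeadm20200717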
    ryoji@ubuntu:/media/local/bin$ wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip
    ryoji@ubuntu:/media/local/bin$ unzip terraform_0.12.28_linux_amd64.zip
    Archive:  terraform_0.12.28_linux_amd64.zip
      inflating: terraform
    ryoji@ubuntu:/media/local/bin$ rm terraform_0.12.28_linux_amd64.zip
    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ terraform init

    Initializing the backend...

    Successfully configured the backend "local"! Terraform will automatically
    use this backend unless the backend configuration changes.

    Initializing provider plugins...
    - Checking for available provider plugins...
    - Downloading plugin for provider "google" (hashicorp/google) 3.30.0...

    Terraform has been successfully initialized!

    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.

    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ terraform plan -out out.plan
    Refreshing Terraform state in-memory prior to plan...
    The refreshed state will be used to calculate this plan, but will not be
    persisted to local or remote state storage.

    ------------------------------------------------------------------------

    An execution plan has been generated and is shown below.
    Resource actions are indicated with the following symbols:
      + create

    Terraform will perform the following actions:

      # google_compute_firewall.allow-external will be created
      + resource "google_compute_firewall" "allow-external" {
          + creation_timestamp = (known after apply)
          + destination_ranges = (known after apply)
          + direction          = (known after apply)
          + id                 = (known after apply)
          + name               = "k8s-allow-external"
          + network            = "kubeadm"
          + priority           = 1000
          + project            = (known after apply)
          + self_link          = (known after apply)
          + source_ranges      = [
              + "0.0.0.0/0",
            ]
          + target_tags        = [
              + "k8s-node",
            ]

          + allow {
              + ports    = [
                  + "22",
                  + "6443",
                ]
              + protocol = "tcp"
            }
          + allow {
              + ports    = []
              + protocol = "icmp"
            }
        }

      # google_compute_firewall.allow-internal will be created
      + resource "google_compute_firewall" "allow-internal" {
          + creation_timestamp = (known after apply)
          + destination_ranges = (known after apply)
          + direction          = (known after apply)
          + id                 = (known after apply)
          + name               = "k8s-allow-internal"
          + network            = "kubeadm"
          + priority           = 1000
          + project            = (known after apply)
          + self_link          = (known after apply)
          + source_ranges      = [
              + "10.240.0.0/24",
            ]
          + target_tags        = [
              + "k8s-node",
            ]

          + allow {
              + ports    = []
              + protocol = "icmp"
            }
          + allow {
              + ports    = []
              + protocol = "ipip"
            }
          + allow {
              + ports    = []
              + protocol = "tcp"
            }
          + allow {
              + ports    = []
              + protocol = "udp"
            }
        }

      # google_compute_instance.primary_node will be created
      + resource "google_compute_instance" "primary_node" {
          + can_ip_forward       = false
          + cpu_platform         = (known after apply)
          + current_status       = (known after apply)
          + deletion_protection  = false
          + guest_accelerator    = (known after apply)
          + id                   = (known after apply)
          + instance_id          = (known after apply)
          + label_fingerprint    = (known after apply)
          + machine_type         = "n1-standard-1"
          + metadata             = {
              + "block-project-ssh-keys" = "true"
              + "sshKeys"                = <<~EOT
                    ryoji:ssh-rsa AAAAB3NzaC1yc2EAAAADA...
                EOT
            }
          + metadata_fingerprint = (known after apply)
          + min_cpu_platform     = (known after apply)
          + name                 = "primary-node"
          + project              = (known after apply)
          + self_link            = (known after apply)
          + tags                 = [
              + "k8s-node",
            ]
          + tags_fingerprint     = (known after apply)
          + zone                 = "europe-north1-a"

          + boot_disk {
              + auto_delete                = true
              + device_name                = (known after apply)
              + disk_encryption_key_sha256 = (known after apply)
              + kms_key_self_link          = (known after apply)
              + mode                       = "READ_WRITE"
              + source                     = (known after apply)

              + initialize_params {
                  + image  = "ubuntu-2004-focal-v20200701"
                  + labels = (known after apply)
                  + size   = 10
                  + type   = "pd-ssd"
                }
            }

          + network_interface {
              + name               = (known after apply)
              + network            = (known after apply)
              + network_ip         = (known after apply)
              + subnetwork         = "k8s-nodes"
              + subnetwork_project = (known after apply)

              + access_config {
                  + nat_ip       = (known after apply)
                  + network_tier = (known after apply)
                }
            }

          + scheduling {
              + automatic_restart   = (known after apply)
              + on_host_maintenance = (known after apply)
              + preemptible         = (known after apply)

              + node_affinities {
                  + key      = (known after apply)
                  + operator = (known after apply)
                  + values   = (known after apply)
                }
            }
        }

      # google_compute_network.kubeadm will be created
      + resource "google_compute_network" "kubeadm" {
          + auto_create_subnetworks         = false
          + delete_default_routes_on_create = false
          + gateway_ipv4                    = (known after apply)
          + id                              = (known after apply)
          + ipv4_range                      = (known after apply)
          + name                            = "kubeadm"
          + project                         = (known after apply)
          + routing_mode                    = (known after apply)
          + self_link                       = (known after apply)
        }

      # google_compute_subnetwork.kubeadm will be created
      + resource "google_compute_subnetwork" "kubeadm" {
          + creation_timestamp = (known after apply)
          + enable_flow_logs   = (known after apply)
          + fingerprint        = (known after apply)
          + gateway_address    = (known after apply)
          + id                 = (known after apply)
          + ip_cidr_range      = "10.240.0.0/24"
          + name               = "k8s-nodes"
          + network            = "kubeadm"
          + project            = (known after apply)
          + region             = "europe-north1"
          + secondary_ip_range = (known after apply)
          + self_link          = (known after apply)
        }

    Plan: 5 to add, 0 to change, 0 to destroy.

    ------------------------------------------------------------------------

    This plan was saved to: out.plan

    To perform exactly these actions, run the following command to apply:
        terraform apply "out.plan"
    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ terraform apply "out.plan"
    google_compute_network.kubeadm: Creating...
    google_compute_network.kubeadm: Still creating... [10s elapsed]
    google_compute_network.kubeadm: Still creating... [20s elapsed]
    google_compute_network.kubeadm: Creation complete after 24s [id=projects/kubeadm20200717/global/networks/kubeadm]
    google_compute_subnetwork.kubeadm: Creating...
    google_compute_firewall.allow-external: Creating...
    google_compute_firewall.allow-internal: Creating...
    google_compute_subnetwork.kubeadm: Still creating... [10s elapsed]
    google_compute_firewall.allow-external: Still creating... [10s elapsed]
    google_compute_firewall.allow-internal: Still creating... [10s elapsed]
    google_compute_firewall.allow-internal: Creation complete after 12s [id=projects/kubeadm20200717/global/firewalls/k8s-allow-internal]
    google_compute_firewall.allow-external: Creation complete after 12s [id=projects/kubeadm20200717/global/firewalls/k8s-allow-external]
    google_compute_subnetwork.kubeadm: Still creating... [20s elapsed]
    google_compute_subnetwork.kubeadm: Creation complete after 24s [id=projects/kubeadm20200717/regions/europe-north1/subnetworks/k8s-nodes]
    google_compute_instance.primary_node: Creating...
    google_compute_instance.primary_node: Still creating... [10s elapsed]
    google_compute_instance.primary_node: Creation complete after 14s [id=projects/kubeadm20200717/zones/europe-north1-a/instances/primary-node]

    Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

    The state of your infrastructure has been saved to the path
    below. This state is required to modify and destroy your
    infrastructure, so keep it safe. To inspect the complete state
    use the `terraform show` command.

    State path: .private/terraform.tfstate
References:
- https://www.terraform.io/docs/providers/google/r/compute_instance.html
- https://docs.projectcalico.org/getting-started/kubernetes/self-managed-public-cloud/gce
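The .tf file itself is not shown in this README; here is a minimal sketch reconstructed from the plan output above. The backend path, project id, region and resource attributes all come from the logs; the credentials path and the SSH public key are placeholders:

    cat > main.tf <<'EOF'
    terraform {
      backend "local" {
        path = ".private/terraform.tfstate"
      }
    }

    provider "google" {
      version     = "~> 3.30"
      credentials = file("account.json")   # key created earlier (placeholder path)
      project     = "kubeadm20200717"
      region      = "europe-north1"
    }

    resource "google_compute_network" "kubeadm" {
      name                    = "kubeadm"
      auto_create_subnetworks = false
    }

    resource "google_compute_subnetwork" "kubeadm" {
      name          = "k8s-nodes"
      network       = google_compute_network.kubeadm.name
      ip_cidr_range = "10.240.0.0/24"
      region        = "europe-north1"
    }

    resource "google_compute_firewall" "allow-external" {
      name          = "k8s-allow-external"
      network       = google_compute_network.kubeadm.name
      source_ranges = ["0.0.0.0/0"]
      target_tags   = ["k8s-node"]

      allow {
        protocol = "tcp"
        ports    = ["22", "6443"]
      }
      allow {
        protocol = "icmp"
      }
    }

    resource "google_compute_firewall" "allow-internal" {
      name          = "k8s-allow-internal"
      network       = google_compute_network.kubeadm.name
      source_ranges = ["10.240.0.0/24"]
      target_tags   = ["k8s-node"]

      allow { protocol = "icmp" }
      allow { protocol = "ipip" }
      allow { protocol = "tcp" }
      allow { protocol = "udp" }
    }

    resource "google_compute_instance" "primary_node" {
      name         = "primary-node"
      machine_type = "n1-standard-1"
      zone         = "europe-north1-a"
      tags         = ["k8s-node"]

      boot_disk {
        initialize_params {
          image = "ubuntu-2004-focal-v20200701"
          size  = 10
          type  = "pd-ssd"
        }
      }

      network_interface {
        subnetwork = google_compute_subnetwork.kubeadm.name
        access_config {}   # ephemeral external IP
      }

      metadata = {
        "block-project-ssh-keys" = "true"
        "sshKeys"                = "ryoji:ssh-rsa AAAAB3NzaC1yc2EAAAADA..."   # your public key
      }
    }
    EOF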
    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ ssh ryoji@35.228.189.126
    Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-1019-gcp x86_64)

     * Documentation:  https://help.ubuntu.com
     * Management:     https://landscape.canonical.com
     * Support:        https://ubuntu.com/advantage

      System information as of Fri Jul 17 17:45:33 UTC 2020

      System load:  0.0               Processes:             99
      Usage of /:   13.8% of 9.52GB   Users logged in:       0
      Memory usage: 6%                IPv4 address for ens4: 10.240.0.2
      Swap usage:   0%

    0 updates can be installed immediately.
    0 of these updates are security updates.

    The list of available updates is more than a week old.
    To check for new updates run: sudo apt update

    The programs included with the Ubuntu system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.

    Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
    applicable law.
    ryoji@primary-node:~$ sudo apt-get update
    ryoji@primary-node:~$ sudo apt-get install -y apt-transport-https curl docker.io
    ryoji@primary-node:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    OK
    ryoji@primary-node:~$ echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    ryoji@primary-node:~$ sudo apt-get update
Verify that swap is off; kubeadm refuses to run with swap enabled, and `swapon -s` printing nothing means no swap device is active:

    ryoji@primary-node:~$ sudo swapon -s
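If it did list a swap device, you could disable it like this:

    sudo swapoff -a                             # turn swap off immediately
    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab  # comment it out to keep it off across reboots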
    ryoji@primary-node:~$ sudo apt-get install -y kubelet kubeadm kubectl
    ryoji@primary-node:~$ sudo apt-mark hold kubelet kubeadm kubectl
    kubelet set on hold.
    kubeadm set on hold.
    kubectl set on hold.
    ryoji@primary-node:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    W0717 18:02:27.766223   17123 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [init] Using Kubernetes version: v1.18.6
    [preflight] Running pre-flight checks
            [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    error execution phase preflight: [preflight] Some fatal errors occurred:
            [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher
    ryoji@primary-node:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU
    W0717 18:03:42.756173   17475 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [init] Using Kubernetes version: v1.18.6
    [preflight] Running pre-flight checks
            [WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
            [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [primary-node kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.240.0.2]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [primary-node localhost] and IPs [10.240.0.2 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [primary-node localhost] and IPs [10.240.0.2 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    W0717 18:04:11.877869   17475 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    W0717 18:04:11.880165   17475 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 20.002549 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node primary-node as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node primary-node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: kfxhxv.qjhv4zdm1p2aogmp
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 10.240.0.2:6443 --token kfxhxv.qjhv4zdm1p2aogmp \
        --discovery-token-ca-cert-hash sha256:8f8e3287d763379feca311829d764d83b8c093f527bcedf5d260ded12c1de154
    ryoji@primary-node:~$ sudo vim /etc/docker/daemon.json
    ryoji@primary-node:~$ cat /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
Add Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd" to the kubelet systemd drop-in:
    ryoji@primary-node:~$ sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    ryoji@primary-node:~$ sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # Note: This dropin only works with kubeadm and kubelet v1.11+
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd"
    # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
    # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
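Instead of rebooting (as done below), reloading systemd and restarting the services should also pick up the new cgroup driver:

    sudo systemctl daemon-reload
    sudo systemctl restart docker kubelet
    sudo systemctl enable docker.service   # also silences kubeadm's Service-Docker warning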
    ryoji@primary-node:~$ mkdir -p $HOME/.kube
    ryoji@primary-node:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    ryoji@primary-node:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
    ryoji@primary-node:~$ sudo reboot
    ryoji@primary-node:~$ Connection to 35.228.189.126 closed by remote host.
    Connection to 35.228.189.126 closed.
https://github.com/coreos/flannel
    ryoji@primary-node:~$ sudo systemctl status kubelet
    ● kubelet.service - kubelet: The Kubernetes Node Agent
         Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
        Drop-In: /etc/systemd/system/kubelet.service.d
                 └─10-kubeadm.conf
         Active: active (running) since Fri 2020-07-17 18:12:34 UTC; 1min 3s ago
           Docs: https://kubernetes.io/docs/home/
       Main PID: 552 (kubelet)
          Tasks: 14 (limit: 4410)
         Memory: 101.1M
         CGroup: /system.slice/kubelet.service
                 └─552 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs ->
    ryoji@primary-node:~$ kubectl get node
    NAME           STATUS     ROLES    AGE   VERSION
    primary-node   NotReady   master   10m   v1.18.6
    ryoji@primary-node:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds-amd64 created
    daemonset.apps/kube-flannel-ds-arm64 created
    daemonset.apps/kube-flannel-ds-arm created
    daemonset.apps/kube-flannel-ds-ppc64le created
    daemonset.apps/kube-flannel-ds-s390x created
    ryoji@primary-node:~$ kubectl get node
    NAME           STATUS   ROLES    AGE   VERSION
    primary-node   Ready    master   12m   v1.18.6
    ryoji@primary-node:~$ kubectl cluster-info
    Kubernetes master is running at https://10.240.0.2:6443
    KubeDNS is running at https://10.240.0.2:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    ryoji@primary-node:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
    namespace/kubernetes-dashboard created
    serviceaccount/kubernetes-dashboard created
    service/kubernetes-dashboard created
    secret/kubernetes-dashboard-certs created
    secret/kubernetes-dashboard-csrf created
    secret/kubernetes-dashboard-key-holder created
    configmap/kubernetes-dashboard-settings created
    role.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    deployment.apps/kubernetes-dashboard created
    service/dashboard-metrics-scraper created
    deployment.apps/dashboard-metrics-scraper created
Re-create the service account, this time bound to the cluster-admin role:
    ryoji@primary-node:~$ vim kube-dashboard-access.yaml
    ryoji@primary-node:~$ cat kube-dashboard-access.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
      labels:
        k8s-app: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    ryoji@primary-node:~$ kubectl delete -f kube-dashboard-access.yaml
    serviceaccount "kubernetes-dashboard" deleted
    clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
    ryoji@primary-node:~$ kubectl create -f kube-dashboard-access.yaml
    serviceaccount/kubernetes-dashboard created
    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
Restart the dashboard pod.
    ryoji@primary-node:~$ kubectl get pods --all-namespaces
    NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
    kube-system            coredns-66bff467f8-cfgbt                     1/1     Running   0          18m
    kube-system            coredns-66bff467f8-jdj9t                     1/1     Running   0          18m
    kube-system            etcd-primary-node                            1/1     Running   1          18m
    kube-system            kube-apiserver-primary-node                  1/1     Running   1          18m
    kube-system            kube-controller-manager-primary-node         1/1     Running   1          18m
    kube-system            kube-flannel-ds-amd64-4scsr                  1/1     Running   0          7m23s
    kube-system            kube-proxy-zvmll                             1/1     Running   1          18m
    kube-system            kube-scheduler-primary-node                  1/1     Running   1          18m
    kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-mnsxf   1/1     Running   0          3m56s
    kubernetes-dashboard   kubernetes-dashboard-7b544877d5-sqkcv        1/1     Running   0          3m56s
    ryoji@primary-node:~$ kubectl delete pod kubernetes-dashboard-7b544877d5-sqkcv -n kubernetes-dashboard
    pod "kubernetes-dashboard-7b544877d5-sqkcv" deleted
Now you should be able to log in to the Kubernetes Dashboard with this kubernetes-dashboard token.
    ryoji@primary-node:~$ kubectl get secrets -n kubernetes-dashboard
    NAME                               TYPE                                  DATA   AGE
    default-token-dshzr                kubernetes.io/service-account-token   3      5m20s
    kubernetes-dashboard-certs         Opaque                                0      5m20s
    kubernetes-dashboard-csrf          Opaque                                1      5m20s
    kubernetes-dashboard-key-holder    Opaque                                2      5m20s
    kubernetes-dashboard-token-2r5sf   kubernetes.io/service-account-token   3      2m3s
    ryoji@primary-node:~$ kubectl describe secret kubernetes-dashboard-token-2r5sf -n kubernetes-dashboard
    Name:         kubernetes-dashboard-token-2r5sf
    Namespace:    kubernetes-dashboard
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
                  kubernetes.io/service-account.uid: 85b3b976-643c-4039-83a8-42034ca852ab

    Type:  kubernetes.io/service-account-token

    Data
    ====
    ca.crt:     1025 bytes
    namespace:  20 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImptR3c3TFJrSmlsbDNIdzZYc2ZQRWdyX....
Get .kube/config of the newly created k8s cluster.
    ryoji@primary-node:~$ cat .kube/config
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRU....=
        server: https://10.240.0.2:6443
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: kubernetes-admin
      name: kubernetes-admin@kubernetes
    current-context: kubernetes-admin@kubernetes
    kind: Config
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: LS0tLS1.....
        client-key-data: LS0tLS1CRUdJTiBSU......==
On your laptop, swap in or merge the new kubeconfig.
    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ mv ~/.kube/config ~/.kube/config.back-20200717
    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ vim ~/.kube/config

Use the public IP address of the VM:

    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ sed -i 's/10.240.0.2/35.228.189.126/' ~/.kube/config
    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ kubectl --insecure-skip-tls-verify cluster-info
    Kubernetes master is running at https://35.228.189.126:6443
    KubeDNS is running at https://35.228.189.126:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ kubectl --insecure-skip-tls-verify proxy
    Starting to serve on 127.0.0.1:8001
Get the token to log in.
    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ kubectl --insecure-skip-tls-verify -n kubernetes-dashboard describe secret $(kubectl --insecure-skip-tls-verify -n kubernetes-dashboard get secret | grep kubernetes-dashboard-token | awk '{print $1}') | grep token: | awk '{print $2}'
    eyJhbGciOiJSUzI1NiIsImtpZCI6ImptR3c3TFJrSmlsb...
To copy it to your clipboard directly:
    ryoji@ubuntu:/media/VirtualBox VMs/vm-k8s$ kubectl --insecure-skip-tls-verify -n kubernetes-dashboard describe secret $(kubectl --insecure-skip-tls-verify -n kubernetes-dashboard get secret | grep kubernetes-dashboard-token | awk '{print $1}') | grep token: | awk '{print $2}' | xclip -i -selection clipboard
Log in to the dashboard at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ and paste the token.
Useful command:
    ryoji@primary-node:~$ kubectl get role,rolebinding -n kubernetes-dashboard
    NAME                                                  CREATED AT
    role.rbac.authorization.k8s.io/kubernetes-dashboard   2020-07-17T18:19:14Z

    NAME                                                         ROLE                        AGE
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard   Role/kubernetes-dashboard   27m
In January 2019, I created a tiny Golang app to scrape Japanese Yahoo News and another one to read the scraped articles. I reuse them this time.
- https://github.com/growingspaghetti/20190220-ynews/tree/master/golang/scraper/sqlite -> kubernetes CronJob
- https://github.com/growingspaghetti/20190220-ynews/tree/master/golang/viewer/sqlite -> kubernetes Service
- Create a repository.
- Link your GitHub account.
- Edit the build configuration with the correct path to the Dockerfile.

With that, a mini CI is working: pushes to GitHub trigger automated image builds (a Dockerfile sketch follows the links below).
- https://hub.docker.com/r/ryojikodakari/ynews-mini-scraper-20200718
- https://hub.docker.com/r/ryojikodakari/ynews-mini-viewer-20200718
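The actual Dockerfiles live in the repos above. As an illustration only, a hypothetical multi-stage build for the viewer might look like this (the Go version, paths and cgo for the sqlite driver are assumptions):

    cat > Dockerfile <<'EOF'
    FROM golang:1.14 AS build
    WORKDIR /src
    COPY . .
    # go-sqlite3 needs cgo, so keep CGO_ENABLED=1 (hypothetical build line)
    RUN CGO_ENABLED=1 GOOS=linux go build -o /out/viewer .

    FROM debian:buster-slim
    WORKDIR /app
    COPY --from=build /out/viewer /app/viewer
    EXPOSE 8080
    ENTRYPOINT ["/app/viewer"]
    EOF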
First, reduce the CPU requests (see the section below). Since this is a single-node cluster, also allow pods to be scheduled on the master by removing the taint:

    kubectl taint nodes --all node-role.kubernetes.io/master-
Enable strictARP in kube-proxy (MetalLB requires this when kube-proxy runs in IPVS mode); preview the change, then apply it:

    kubectl get configmap kube-proxy -n kube-system -o yaml | \
    sed -e "s/strictARP: false/strictARP: true/" | \
    kubectl diff -f - -n kube-system

    kubectl get configmap kube-proxy -n kube-system -o yaml | \
    sed -e "s/strictARP: false/strictARP: true/" | \
    kubectl apply -f - -n kube-system
Install MetalLB v0.9.3 and create the memberlist secret:

    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
    kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Configure MetalLB with the external IP address of this Ubuntu VM:
    ryoji@primary-node:~$ cat metallb-layer2-config.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: my-ip-space
          protocol: layer2
          addresses:
          - 35.228.189.126/32
    ryoji@primary-node:~$ kubectl apply -f metallb-layer2-config.yaml
    configmap/config created
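To check that MetalLB came up and picked up the ConfigMap (label selector per the v0.9.3 manifests):

    kubectl -n metallb-system get pods
    kubectl -n metallb-system logs -l component=speaker --tail=20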
At this point, the node's CPU requests sit at about 95% and need to be reduced.
In kube-system, modify the ReplicaSets, DaemonSets and Deployments: reduce the desired pod number from 2 to 1, and set the CPU request to 25m (a patch sketch follows the snippet).
    spec:
      containers:
      - resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 25m
            memory: 10Mi
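For example, coredns (the one deployment running 2 replicas) can be scaled down and patched from the command line instead of via the dashboard; the values match the snippet above, and `kubectl patch` accepts a YAML strategic-merge patch:

    # Drop coredns from 2 replicas to 1
    kubectl -n kube-system scale deployment coredns --replicas=1
    # Lower the CPU/memory requests and limits
    kubectl -n kube-system patch deployment coredns --patch '
    spec:
      template:
        spec:
          containers:
          - name: coredns
            resources:
              limits:
                cpu: 100m
                memory: 50Mi
              requests:
                cpu: 25m
                memory: 10Mi
    '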
    ryoji@primary-node:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/baremetal/deploy.yaml
    namespace/ingress-nginx created
    serviceaccount/ingress-nginx created
    configmap/ingress-nginx-controller created
    clusterrole.rbac.authorization.k8s.io/ingress-nginx created
    clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
    role.rbac.authorization.k8s.io/ingress-nginx created
    rolebinding.rbac.authorization.k8s.io/ingress-nginx created
    service/ingress-nginx-controller-admission created
    service/ingress-nginx-controller created
    deployment.apps/ingress-nginx-controller created
    validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
    clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
    clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
    job.batch/ingress-nginx-admission-create created
    job.batch/ingress-nginx-admission-patch created
    role.rbac.authorization.k8s.io/ingress-nginx-admission created
    rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
    serviceaccount/ingress-nginx-admission created
Change the type of the ingress-nginx-controller service from NodePort to LoadBalancer.
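This can be done with a one-line patch (service name as created by the manifest above):

    kubectl -n ingress-nginx patch svc ingress-nginx-controller \
      -p '{"spec": {"type": "LoadBalancer"}}'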
Then you can see that the External Endpoints are set to your Ubuntu VM's external IP address.
To bind nginx-ingress to this single node's host network, add this to the pod spec of the ingress-nginx-controller deployment:

    hostNetwork: true
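This too can be patched in rather than edited by hand:

    kubectl -n ingress-nginx patch deployment ingress-nginx-controller \
      -p '{"spec": {"template": {"spec": {"hostNetwork": true}}}}'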
In GCP, add another firewall rule for ports 80 and 443.
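For example with gcloud (the rule name is a placeholder; the network and tag match the Terraform setup above):

    gcloud compute firewall-rules create k8s-allow-web \
      --project kubeadm20200717 \
      --network kubeadm \
      --allow tcp:80,tcp:443 \
      --source-ranges 0.0.0.0/0 \
      --target-tags k8s-node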
Now the nginx -> MetalLB -> Kubernetes cluster route works.
Create the /mnt/data directory, owned by 1000:1000.
    ryoji@primary-node:/mnt/data$ cd ..
    ryoji@primary-node:/mnt$ sudo chown -R 1000:1000 data
    ryoji@primary-node:/mnt$ cd data
    ryoji@primary-node:/mnt/data$ ls -la
    total 8
    drwxr-xr-x 2 ubuntu ubuntu 4096 Jul 18 07:42 .
    drwxr-xr-x 3 root   root   4096 Jul 18 07:42 ..
    ryoji@primary-node:~$ cat pv.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: ynews-mini-pv
      labels:
        type: local
    spec:
      storageClassName: manual
      capacity:
        storage: 0.5Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/mnt/data"
    ryoji@primary-node:~$ cat pv-claim.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ynews-mini-pv-claim
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 0.4Gi
    ryoji@primary-node:~$ cat cron.yaml
    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: ynews-mini-scraper-20200718
    spec:
      schedule: "0 12 */5 * *"
      jobTemplate:
        spec:
          template:
            spec:
              volumes:
                - name: claim-volume
                  persistentVolumeClaim:
                    claimName: ynews-mini-pv-claim
              containers:
                - name: ynews-mini-scraper-20200718
                  image: ryojikodakari/ynews-mini-scraper-20200718
                  args: ["https://headlines.yahoo.co.jp/list/?m=kyodonews"]
                  volumeMounts:
                    - mountPath: "/app/data"
                      name: claim-volume
                  securityContext:
                    runAsUser: 1000
                    runAsGroup: 1000
                  resources:
                    requests:
                      cpu: 10m
              restartPolicy: OnFailure
(Fetching 10 articles every 5 days is just enough for this purpose.)
    ryoji@primary-node:~$ cat deploy.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ynews-mini-viewer-20200718
      labels:
        app: ynews-mini-viewer-20200718
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ynews-mini-viewer-20200718
      template:
        metadata:
          labels:
            app: ynews-mini-viewer-20200718
        spec:
          volumes:
            - name: claim-volume
              persistentVolumeClaim:
                claimName: ynews-mini-pv-claim
          containers:
            - name: ynews-mini-viewer-20200718
              image: ryojikodakari/ynews-mini-viewer-20200718
              ports:
                - containerPort: 8080
              volumeMounts:
                - mountPath: "/app/data"
                  name: claim-volume
              securityContext:
                runAsUser: 1000
                runAsGroup: 1000
              resources:
                requests:
                  cpu: 10m
    ryoji@primary-node:~$ cat service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: ynews-mini-viewer-20200718
    spec:
      selector:
        app: ynews-mini-viewer-20200718
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
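Apply the five manifests and check that the claim binds before wiring up the ingress:

    kubectl apply -f pv.yaml -f pv-claim.yaml -f cron.yaml -f deploy.yaml -f service.yaml
    kubectl get pv,pvc        # ynews-mini-pv-claim should be Bound to ynews-mini-pv
    kubectl get cronjob,deployment,service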
Create a secret for basic authentication.
    ryoji@ubuntu:~$ htpasswd -c auth ryoji
    New password:
    Re-type new password:
    Adding password for user ryoji
    ryoji@ubuntu:~$ kubectl --insecure-skip-tls-verify create secret generic basic-auth --from-file=auth
    secret/basic-auth created
    ryoji@primary-node:~$ cat ingress.yaml
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: ynews-mini-viewer-20200718
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        nginx.ingress.kubernetes.io/auth-type: basic
        nginx.ingress.kubernetes.io/auth-secret: basic-auth
        nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
    spec:
      rules:
      - http:
          paths:
          - path: /
            backend:
              serviceName: ynews-mini-viewer-20200718
              servicePort: 80
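Apply it and confirm the ingress is served through nginx-ingress:

    kubectl apply -f ingress.yaml
    kubectl get ingress ynews-mini-viewer-20200718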
Open https://35.228.189.126/ (u: ryoji, p: k8s).
CPU requests are now at 78.5% 💦