
replicatedhq / kurl

Stars: 717 · Watchers: 23 · Forks: 77 · Size: 41.7 MB

Production-grade, airgapped Kubernetes installer combining upstream k8s with overlays and popular components

Home Page: https://kurl.sh

License: Apache License 2.0

Dockerfile 0.29% Makefile 0.49% Shell 93.60% JavaScript 0.26% Go 5.29% Jsonnet 0.06% Smarty 0.01% Open Policy Agent 0.01%
kubernetes kubernetes-installation kubeadm rook ceph contour kurl airgapped kubernetes-installer kubernetes-cluster

kurl's People

Contributors

alexanderjophus, areed, camilamacedo86, chase-replicated, dependabot[bot], diamonwiggins, divolgin, e3b0c442, emosbaugh, garcialuis, github-actions[bot], grantmiller, graysonnull, jala-dx, kcboyle, laverya, marccampbell, moonlight16, murphybytes, mzaneri, paha, replicated-ci-kots, replicated-ci-kurl, replicatedbot, ricardomaraschini, rrpolanco, sgalsaleh, stascool, stefanrepl, syntastical

kurl's Issues

Request for calling out minimum resource requirements for default control plane node

It would be helpful to a first-time user to have a recommended configuration for the control plane node called out in the README. Especially in the case of a vanilla k8s env (not Nodeless), where the workloads run on the master node(s), the minimum requirements needed to run the default Replicated + Weave + Rook + Contour pods would be helpful.

I winged the install with a t2.medium instance on AWS (2 vCPUs, 4 GB). This was a freshly provisioned EC2 instance with nothing else running on it. With the default Replicated stack (weave, rook, contour), CPU is at 10% utilization and memory is at 30% utilization. Hence, it would be useful to get guidance on minimum requirements from the README. Could you please consider calling out the reqs? Thanks a lot!

Include krew in every k8s installation

We should install the krew plugin manager (https://krew.dev) in every Kubernetes installation. This should not be optional; it doesn't change anything on the server.

Additionally, we should install the preflight and support-bundle plugins to make managing these servers easier.
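
For illustration, a minimal sketch of what that install step could look like, following krew's documented bootstrap flow (Linux amd64 assumed; this is not kurl's actual implementation):

# Bootstrap krew per its documented install flow (Linux amd64 assumed)
cd "$(mktemp -d)"
curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew-linux_amd64.tar.gz"
tar zxf krew-linux_amd64.tar.gz
./krew-linux_amd64 install krew
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

# Then add the Replicated troubleshooting plugins proposed above
kubectl krew install preflight
kubectl krew install support-bundle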

Kotsadm Addon: ability to scale kotsadm replicas

Right now there's no way to make kotsadm HA via an installer spec. It would be great to be able to increase the replicas for at least the stateless components like kotsadm-api, kotsadm, and kurl-proxy.
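
Until the installer spec supports this, a manual workaround sketch (deployment names are taken from this issue; kurl-proxy's actual deployment name may differ, and the default namespace is assumed):

# Scale the stateless kotsadm components by hand
kubectl scale deployment kotsadm --replicas=3
kubectl scale deployment kotsadm-api --replicas=3
kubectl scale deployment kurl-proxy-kotsadm --replicas=3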

node join scripts do not include kubeadm token ca hash when generated on additional masters

The generated node join script does not include the kubeadm-token-ca-hash parameter when run on a new master node following the destruction of the original master node. For example:

To add worker nodes to this installation, run the following script on your other nodes
    curl https://staging.kurl.sh/62906b1/join.sh | sudo bash -s kubernetes-master-address=10.2.131.201 kubeadm-token=1oc3h1.cql0qqc7evfyju16 kubeadm-token-ca-hash= kubernetes-version=1.15.3

This prevents new nodes from being joined to the cluster.
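
As a workaround, the hash can be recomputed on any functioning master using the standard kubeadm recipe, and then passed explicitly as kubeadm-token-ca-hash=sha256:<hash>:

# Recompute the discovery token CA cert hash from the cluster CA
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'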

There should be a prometheus addon

  • We should start with support for the newest version (2.12.0 at this time)
  • The addon should conform to the standard kurl.sh add-ons and require no additional configuration for a basic prom install
  • The default "latest" kurl.sh should be updated to include prom
  • There should be a subfield for grafana version
  • This should install the default, usable kubernetes monitoring dashboards

apiVersion: kurl.sh/v1beta1
kind: Installer
metadata:
  name: ""
spec:
  kubernetes:
    version: "latest"
  prometheus:
    version: "latest"
    grafana:
      version: "latest"   

Add support for deploying a docker registry

This should be opt-in via the YAML: we should support deploying a Docker registry to the cluster and make it pullable from all nodes (deploying an imagePullSecret along with it).
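
For the pull-secret half, a hedged sketch of the wiring (the registry address and credentials below are hypothetical, not kurl's actual values):

# Create an imagePullSecret that pods can use to pull from the in-cluster registry
kubectl create secret docker-registry registry-creds \
  --docker-server="registry.kurl.svc.cluster.local" \
  --docker-username="kurl" \
  --docker-password="$(openssl rand -hex 16)"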

Support for windows

It should be possible to run a kurl.sh installer on Windows.

We can approach this in two phases: the first is to allow worker nodes to install on Windows. This is useful by itself to have a cluster with some Linux and some Windows nodes.

Once we have this, then we can plan a release that supports Windows for the master node(s) also.

Error message when upgrading to lower k8s version needs line breaks

When attempting to run a kurl installer on a node that already has a later k8s version installed, the displayed error message is missing separators between sentences.

actual:
The currently installed kubernetes version is 1.16.4The requested version to upgrade to is 1.15.3Since the currently installed version is newer than the requested version, no action will be taken

expected:
The currently installed kubernetes version is 1.16.4. The requested version to upgrade to is 1.15.3. Since the currently installed version is newer than the requested version, no action will be taken.
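
The likely fix is to terminate each sentence and emit a newline explicitly, e.g. (variable names hypothetical):

printf 'The currently installed kubernetes version is %s.\n' "$CURRENT_VERSION"
printf 'The requested version to upgrade to is %s.\n' "$REQUESTED_VERSION"
printf 'Since the currently installed version is newer than the requested version, no action will be taken.\n'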

Airgap bundles using "latest" aren't rebuilt

If you create an installer that mixes fixed versions and latest, then when a new version is released for a component pinned to latest, the airgap bundle is not regenerated.

For example, if you POST this yaml when Contour latest is 0.14.0, you'd get back the install URL https://kurl.sh/68970f9 and the airgap bundle would be available at https://kurl.sh/bundle/68970f9.tar.gz.

apiVersion: kurl.sh/v1beta1
spec:
  kubernetes:
    version: "1.15.3"
  weave:
    version: "2.5.2"
  contour:
    version: "latest"

Then if Contour latest were updated to 0.14.1 the online install URL would correctly install that version but the airgap bundle would continue to install 0.14.0.
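
Until bundles are rebuilt automatically, a workaround is to pin the exact version so a new installer hash (and a fresh bundle) gets generated. A sketch, assuming the kurl.sh installer API accepts a POST of the spec as in the example above:

# Pin contour to the exact release instead of "latest"
cat <<'EOF' | curl -X POST -H 'Content-Type: text/yaml' --data-binary @- https://kurl.sh/installer
apiVersion: kurl.sh/v1beta1
spec:
  kubernetes:
    version: "1.15.3"
  weave:
    version: "2.5.2"
  contour:
    version: "0.14.1"
EOF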

"reset" flag fails to uninstall k8s env

  1. On a freshly provisioned t2.medium Ubuntu 18.04 machine, ran curl https://kurl.sh/latest | sudo bash. The k8s cluster came up fine:
root@ip-172-31-44-19:~# kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
ip-172-31-44-19   Ready    master   43m   v1.15.1

root@ip-172-31-44-19:~# kubectl get pods --all-namespaces
NAMESPACE        NAME                                          READY   STATUS      RESTARTS   AGE
heptio-contour   contour-7cbd8fbb7d-vblt6                      2/2     Running     1          41m
heptio-contour   contour-7cbd8fbb7d-vdtds                      2/2     Running     0          41m
kube-system      coredns-5c98db65d4-s9v2t                      1/1     Running     0          43m
kube-system      coredns-5c98db65d4-wlzmd                      1/1     Running     0          43m
kube-system      etcd-ip-172-31-44-19                          1/1     Running     0          42m
kube-system      kube-apiserver-ip-172-31-44-19                1/1     Running     0          42m
kube-system      kube-controller-manager-ip-172-31-44-19       1/1     Running     0          31m
kube-system      kube-proxy-8wx88                              1/1     Running     0          43m
kube-system      kube-scheduler-ip-172-31-44-19                1/1     Running     0          42m
kube-system      weave-net-t7fx7                               2/2     Running     0          43m
rook-ceph        rook-ceph-agent-m854d                         1/1     Running     0          42m
rook-ceph        rook-ceph-mgr-a-5cbf7bd5d9-v8xtz              1/1     Running     0          41m
rook-ceph        rook-ceph-mon-a-549b788f4c-6zf79              1/1     Running     0          42m
rook-ceph        rook-ceph-operator-54b54c5765-cp6g7           1/1     Running     1          43m
rook-ceph        rook-ceph-osd-prepare-ip-172-31-44-19-bsl6f   0/2     Completed   0          30m
rook-ceph        rook-discover-ln6m2                           1/1     Running     0          42m
root@ip-172-31-44-19:~# 
  2. Attempted to clean up the k8s cluster. The reset option did not clean up the env; it attempted to recreate the k8s env:
root@ip-172-31-44-19:~# curl https://kurl.sh/latest | sudo bash -s reset
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 53115  100 53115    0     0   648k      0 --:--:-- --:--:-- --:--:--  648k
The installer was unable to automatically detect the private IP address of this machine.
Please choose one of the following network interfaces:
[0] eth0 	172.31.44.19
[1] docker0	172.17.0.1
[2] kube-ipvs0	10.96.0.10
[3] kube-ipvs0	10.96.0.1
[4] kube-ipvs0	10.97.63.57
[5] kube-ipvs0	10.98.108.189
[6] kube-ipvs0	10.101.244.64
[7] kube-ipvs0	10.96.78.238
[8] kube-ipvs0	10.111.115.231
[9] weave	10.32.0.1
Enter desired number (0-9): 0
The installer will use network interface 'eth0' (with IP address '172.31.44.19').
⚙  Install kubelet, kubeadm, kubectl and cni binaries
✔ Kubernetes components already installed
⚙  Initialize Kubernetes
⚙  generate kubernetes bootstrap token
Kubernetes bootstrap token: rkjyhb.hjvl6irz1w2npebd
This token will expire in 24 hours
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   358  100   358    0     0  10529      0 --:--:-- --:--:-- --:--:-- 10529
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    89  100    89    0     0   4684      0 --:--:-- --:--:-- --:--:--  4684
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   418  100   418    0     0  13933      0 --:--:-- --:--:-- --:--:-- 13933
main: line 1393: is_rook_1: command not found
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING Port-6443]: Port 6443 is in use
	[WARNING Port-10251]: Port 10251 is in use
	[WARNING Port-10252]: Port 10252 is in use
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING IPVSProxierCheck]: error getting ipset version, error: executable file not found in $PATH
	[WARNING Port-2379]: Port 2379 is in use
	[WARNING Port-2380]: Port 2380 is in use
	[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 0.024622 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ip-172-31-44-19 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[bootstrap-token] Using token: rkjyhb.hjvl6irz1w2npebd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.44.19:6443 --token rkjyhb.hjvl6irz1w2npebd \
    --discovery-token-ca-cert-hash sha256:5311f4ddd40dd4b2659c734e5917602dab931806ee7625158a0efcec3e80cd8a 
main: line 1401: is_rook_1: command not found
✔ Kubernetes Master Initialized
Kubernetes master is running at https://172.31.44.19:6443
KubeDNS is running at https://172.31.44.19:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
✔ Cluster Initialized
⚙  Addon weave 2.5.2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2116  100  2116    0     0   114k      0 --:--:-- --:--:-- --:--:--  114k
serviceaccount/weave-net unchanged
role.rbac.authorization.k8s.io/weave-net unchanged
clusterrole.rbac.authorization.k8s.io/weave-net unchanged
rolebinding.rbac.authorization.k8s.io/weave-net unchanged
clusterrolebinding.rbac.authorization.k8s.io/weave-net unchanged
daemonset.apps/weave-net configured
⚙  Addon rook 1.0.4
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  8488  100  8488    0     0   207k      0 --:--:-- --:--:-- --:--:--  207k
namespace/rook-ceph unchanged
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io unchanged
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io unchanged
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io unchanged
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io unchanged
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io unchanged
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io unchanged
serviceaccount/rook-ceph-mgr unchanged
serviceaccount/rook-ceph-osd unchanged
serviceaccount/rook-ceph-system unchanged
role.rbac.authorization.k8s.io/rook-ceph-mgr unchanged
role.rbac.authorization.k8s.io/rook-ceph-osd unchanged
role.rbac.authorization.k8s.io/rook-ceph-system unchanged
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system-rules unchanged
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system configured
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt-rules unchanged
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt configured
clusterrole.rbac.authorization.k8s.io/rook-ceph-global-rules unchanged
clusterrole.rbac.authorization.k8s.io/rook-ceph-global configured
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster-rules unchanged
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster configured
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt unchanged
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system unchanged
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr unchanged
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd unchanged
rolebinding.rbac.authorization.k8s.io/rook-ceph-system unchanged
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global unchanged
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster unchanged
deployment.apps/rook-ceph-operator unchanged
storageclass.storage.k8s.io/default unchanged
cephblockpool.ceph.rook.io/replicapool unchanged
cephcluster.ceph.rook.io/rook-ceph configured
⚙  Addon contour 0.14.0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3334  100  3334    0     0   171k      0 --:--:-- --:--:-- --:--:--  171k
namespace/heptio-contour unchanged
customresourcedefinition.apiextensions.k8s.io/ingressroutes.contour.heptio.com unchanged
customresourcedefinition.apiextensions.k8s.io/tlscertificatedelegations.contour.heptio.com unchanged
serviceaccount/contour unchanged
clusterrole.rbac.authorization.k8s.io/contour unchanged
clusterrolebinding.rbac.authorization.k8s.io/contour unchanged
service/contour unchanged
deployment.apps/contour unchanged


		Installation
		  Complete ✔

To access the cluster with kubectl, reload your shell:

    bash -l


To add worker nodes to this installation, run the following script on your other nodes
    curl https://kurl.sh/join.sh | sudo bash -s kubernetes-master-address=172.31.44.19 kubeadm-token=rkjyhb.hjvl6irz1w2npebd kubeadm-token-ca-hash=sha256:5311f4ddd40dd4b2659c734e5917602dab931806ee7625158a0efcec3e80cd8a kubernetes-version=1.15.1 


root@ip-172-31-44-19:~# 
root@ip-172-31-44-19:~# kubectl get pods --all-namespaces
NAMESPACE        NAME                                          READY   STATUS      RESTARTS   AGE
heptio-contour   contour-7cbd8fbb7d-vblt6                      2/2     Running     1          38m
heptio-contour   contour-7cbd8fbb7d-vdtds                      2/2     Running     0          38m
kube-system      coredns-5c98db65d4-s9v2t                      1/1     Running     0          40m
kube-system      coredns-5c98db65d4-wlzmd                      1/1     Running     0          40m
kube-system      etcd-ip-172-31-44-19                          1/1     Running     0          39m
kube-system      kube-apiserver-ip-172-31-44-19                1/1     Running     0          39m
kube-system      kube-controller-manager-ip-172-31-44-19       1/1     Running     0          28m
kube-system      kube-proxy-8wx88                              1/1     Running     0          40m
kube-system      kube-scheduler-ip-172-31-44-19                1/1     Running     0          39m
kube-system      weave-net-t7fx7                               2/2     Running     0          40m
rook-ceph        rook-ceph-agent-m854d                         1/1     Running     0          40m
rook-ceph        rook-ceph-mgr-a-5cbf7bd5d9-v8xtz              1/1     Running     0          39m
rook-ceph        rook-ceph-mon-a-549b788f4c-6zf79              1/1     Running     0          39m
rook-ceph        rook-ceph-operator-54b54c5765-cp6g7           1/1     Running     1          40m
rook-ceph        rook-ceph-osd-prepare-ip-172-31-44-19-bsl6f   0/2     Completed   0          27m
rook-ceph        rook-discover-ln6m2                           1/1     Running     0          40m
root@ip-172-31-44-19:~#
  3. Noticed a force-reset flag in the kurl script. Tried it instead of reset, but observed the same behavior (reinstall instead of cleanup).
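
For reference, later kurl releases moved reset into a separate tasks.sh entry point so it can't be conflated with the install script; the invocation there looks like this (hedged; verify the exact URL against current docs):

curl -sSL https://kurl.sh/latest/tasks.sh | sudo bash -s reset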

`getNoProxyAddresses` doesn't exist

The latest kurl.sh references a function "getNoProxyAddresses" that does not exist, so when a customer tries to specify a proxy, the install script fails. See the command below highlighting the issue.

$ curl -sSL https://kurl.sh/latest | grep -n -5 'getNoProxyAddresses'
1827-        if [ -z "$PROXY_ADDRESS" ] && [ "$AIRGAP" != "1" ]; then
1828-            promptForProxy
1829-        fi
1830-
1831-        if [ -n "$PROXY_ADDRESS" ]; then
1832:            getNoProxyAddresses "$PRIVATE_ADDRESS" "$SERVICE_CIDR"
1833-        fi
1834-    fi
1835-
1836-    prompt_airgap_preload_images
1837-}

EDIT: example output here:

The installer will use network interface 'xxx' (with IP address 'x.x.x.x')

Does this machine require a proxy to access the Internet? (y/N) y

Enter desired HTTP proxy address: https://proxy-name:8080

The installer will use the proxy at 'https://proxy-name:8080'

./script.sh: line 1832: getNoProxyAddresses: command not found
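
For illustration only, a hypothetical sketch of what the missing function might do, based on how it is called at line 1832 (this is not the actual kurl definition):

getNoProxyAddresses() {
    # Build a comma-separated NO_PROXY list from the addresses the installer
    # already knows about (private IP, service CIDR, ...)
    NO_PROXY_ADDRESSES="localhost,127.0.0.1"
    local addr
    for addr in "$@"; do
        NO_PROXY_ADDRESSES="${NO_PROXY_ADDRESSES},${addr}"
    done
    export NO_PROXY="$NO_PROXY_ADDRESSES"
}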

Install script hangs if weave add-on isn't deployed successfully

Currently, the default kurl install process attempts to deploy weave, and then continues after applying the weave yaml to the cluster, regardless of whether the weave deployment succeeds.

If there are problems with the weave deployment, no error is shown to the user, and the install proceeds to try to install the rook addon, which will likely hang during its rook_ready_spinner stage. This is a problem, because it makes a weave problem look like a rook problem.

The weave add-on should use a similar pattern to the rook add-on, waiting for evidence that the weave deployment was successful, and exiting the install with an error if the deployment doesn't complete within a reasonable timeout period.

The simplest way to reproduce the weave failure is to use the ip-alloc-range parameter when running the kurl.sh install script, specifying an IP range that deliberately conflicts with reserved IPs.

For example, to reproduce the problem on a GCP instance within the us-west2 region, you could run curl https://kurl.sh/latest | sudo bash -s ip-alloc-range=10.168.0.1/16

This problem reproduces with the default kurl.sh install config, but should also reproduce with any other kurl config that includes both weave and rook.
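
A hedged sketch of the readiness gate proposed above, modeled on the rook_ready_spinner pattern (the function name, timeout, and error handling here are assumptions, not kurl's actual code):

weave_ready_spinner() {
    # Wait up to ~5 minutes for the weave-net daemonset to roll out
    local n=0
    while ! kubectl -n kube-system rollout status daemonset/weave-net --timeout=10s >/dev/null 2>&1; do
        n=$((n + 1))
        if [ "$n" -ge 30 ]; then
            echo "Error: weave deployment did not become ready in time" >&2
            exit 1
        fi
    done
}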

The output after installing is too hard to grok to understand next steps

What if we printed very simple, clear next steps, and added a command to view detailed logs and output?

For example, when using kurl with kotsadm, the next steps are to copy the generated password and visit a URL (:8800) to continue. Ideally this output would be all that's displayed, along with a command that can be used to view additional output.

Installation URL filters User-Agent HTTP header

Why does the installation URL only serve the bash script to HTTP requests with the User-Agent HTTP header set to curl?

Steps to reproduce

# This returns the bash script
curl -v https://kurl.sh/latest | less

Opening https://kurl.sh/latest in a browser or performing a GET request using Postman returns a 404 Not Found.
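
The behavior is easy to demonstrate by varying only the User-Agent header:

# Default curl User-Agent: serves the script (HTTP 200)
curl -s -o /dev/null -w '%{http_code}\n' https://kurl.sh/latest
# Browser-style User-Agent: 404 Not Found, per this report
curl -s -o /dev/null -w '%{http_code}\n' -A 'Mozilla/5.0' https://kurl.sh/latest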

Feature: Support ACME cert creation on the TLS page of kURL

The TLS page of kURL prompts to upload a key and cert to secure the Admin Console, or to continue with a self-signed cert. For clusters that are connected to the public internet, we could start an HTTP challenge server and provision a cert from Let's Encrypt or another ACME provider here. Then, the kURL proxy service should also be responsible for renewing the cert from the issuer before expiration.

Proposal: Add a "script" addon

The script add-on would simply run the given script on the server.

for example:

apiVersion: "kurl.sh/v1beta1"
kind: "Installer"
metadata: 
  name: ""
spec: 
  kubernetes: 
    version: "latest"
  script: |
    #! /bin/bash
    ...
    ...

The script should be saved to a tmp file named with the hash of its contents, made executable, and then executed.
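
A sketch of that flow ($INSTALLER_SCRIPT is a hypothetical variable holding the spec's script field):

# Save the script to a content-addressed tmp file, then run it
script_hash="$(printf '%s' "$INSTALLER_SCRIPT" | sha256sum | awk '{print $1}')"
script_path="/tmp/kurl-script-${script_hash}.sh"
printf '%s' "$INSTALLER_SCRIPT" > "$script_path"
chmod +x "$script_path"
"$script_path"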

More user task oriented docs

A task I've found error-prone and also poorly documented is decommissioning worker/master nodes in a cluster. It'd be really helpful to have some kurl-specific docs on https://kurl.sh/docs.

The same goes for rebooting worker/master nodes.
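
For reference, a sketch of the generic upstream procedure such docs could start from (hedged; a kurl cluster may need additional steps, e.g. safely draining Rook/Ceph storage first):

NODE_NAME="worker-1"   # hypothetical node name
kubectl drain "$NODE_NAME" --ignore-daemonsets --delete-local-data
kubectl delete node "$NODE_NAME"
# then, on the decommissioned node itself:
sudo kubeadm reset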

Kotsadm addon: standard type for TLS secret

Right now with the kotsadm addon, kurl.sh will create a secret called kotsadm-tls of type Opaque. It would be great if this could be of type kubernetes.io/tls so it can be re-used in Ingress specs and other K8s tools which process Secrets of this type.

(Per @areed this was done so we can store additional TLS-related fields in the secret)

I'd like to be able to re-use these certs in any applications being managed by kotsadm, specifically to integrate with the contour addon in this case.

There's an open question around whether kotsadm should further manage this by propagating secrets to application namespaces, but I think I'd consider that out of scope for a v1 of this.
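
In the meantime, a hedged workaround sketch: extract the PEM data and create a parallel kubernetes.io/tls secret (this assumes the Opaque secret stores the material under tls.crt/tls.key keys):

# Copy the cert/key out of the Opaque secret and recreate as a standard TLS secret
kubectl get secret kotsadm-tls -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
kubectl get secret kotsadm-tls -o jsonpath='{.data.tls\.key}' | base64 -d > tls.key
kubectl create secret tls kotsadm-tls-standard --cert=tls.crt --key=tls.key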

Support for non-Docker environments

Let's track adding support for operating systems and environments that don't have Docker. There are various options being discussed (containerd, podman) and this issue can be used to track the overall progress.

Once completed, this will enable:

  • Any operating system that doesn't have Docker
  • Building installers without a docker plugin (even for OSes with Docker support)
  • CentOS 8
  • RHEL 8

kURL fails with kotsadm

Hi,

I use Vagrant to set up an Ubuntu 18.04 VM. It meets all the requirements defined in the docs: https://kurl.sh/docs/install-with-kurl/system-requirements

When installing without the kotsadm addon, everything works perfectly.
However, when installing with kotsadm, the installation fails.

Here is the error:

[...]
    default: ⚙  Addon kotsadm 1.13.6
    default:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
    default:                                  
    default: Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- -
    default: -:--:-- --:--:--    
    default:  0
    default: curl: (6) Could not resolve host: kotsadm
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

The kURL config:

apiVersion: "https://kurl.sh/v1beta1"
kind: "Installer"

metadata: 
  name: ""

spec: 
  kubernetes: 
    version: "1.16.4"
  docker: 
    version: "18.09.8"
  prometheus: 
    version: "0.33.0"
  registry: 
    version: "2.7.1"
  kotsadm: 
    version: "1.13.6"

The Vagrantfile I use:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"

  config.vm.provider "virtualbox" do |vb|
    vb.name = "app"
    vb.memory = 8192
    vb.cpus = 4
    vb.gui = false
  end

  config.vm.network "forwarded_port", guest: 6443, host: 6443, protocol: "tcp"
  config.vm.network "forwarded_port", guest: 6783, host: 6783, protocol: "tcp"
  config.vm.network "forwarded_port", guest: 6783, host: 6783, protocol: "udp"
  config.vm.network "forwarded_port", guest: 6784, host: 6784, protocol: "udp"

  config.vm.network "forwarded_port", guest: 5000, host: 5000, protocol: "tcp"
  config.vm.network "forwarded_port", guest: 8800, host: 8800, protocol: "tcp"

end

Any idea why?

Thank you for your time.

"make watchrsync" hangs after rsyncing addons

Below is verbose output from the last rsync (for addons):

root@ip-172-31-71-50:~/src/github.com/elotl/kurl# HOST=ec2-18-212-119-34.compute-1.amazonaws.com USER=ubuntu make watchrsync
rsync -r build/ubuntu-18.04 ubuntu@ec2-18-212-119-34.compute-1.amazonaws.com:kurl
rsync -r build/rhel-7 ubuntu@ec2-18-212-119-34.compute-1.amazonaws.com:kurl
bin/watchrsync.js
rsync Manifest ubuntu@ec2-18-212-119-34.compute-1.amazonaws.com:kurl
rsync -r scripts ubuntu@ec2-18-212-119-34.compute-1.amazonaws.com:kurl
rsync -r yaml ubuntu@ec2-18-212-119-34.compute-1.amazonaws.com:kurl
rsync -vvv -r addons ubuntu@ec2-18-212-119-34.compute-1.amazonaws.com:kurl
opening connection using: ssh -l ubuntu ec2-18-212-119-34.compute-1.amazonaws.com rsync --server -vvvre.iLsfx . kurl  (9 args)
sending incremental file list
[sender] make_file(addons,*,0)
send_file_list done
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/
[sender] make_file(addons/contour,*,2)
[sender] make_file(addons/weave,*,2)
[sender] make_file(addons/rook,*,2)
send_files starting
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/contour/
[sender] make_file(addons/contour/0.14.0,*,2)
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/contour/0.14.0/
[sender] make_file(addons/contour/0.14.0/service.yaml,*,2)
[sender] make_file(addons/contour/0.14.0/install.sh,*,2)
[sender] make_file(addons/contour/0.14.0/kustomization.yaml,*,2)
[sender] make_file(addons/contour/0.14.0/rbac.yaml,*,2)
[sender] make_file(addons/contour/0.14.0/deployment.yaml,*,2)
[sender] make_file(addons/contour/0.14.0/patches,*,2)
[sender] make_file(addons/contour/0.14.0/common.yaml,*,2)
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/contour/0.14.0/patches/
[sender] make_file(addons/contour/0.14.0/patches/service-node-port.yaml,*,2)
[sender] make_file(addons/contour/0.14.0/patches/deployment-images.yaml,*,2)
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/rook/
[sender] make_file(addons/rook/1.0.4,*,2)
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/rook/1.0.4/
[sender] make_file(addons/rook/1.0.4/install.sh,*,2)
[sender] make_file(addons/rook/1.0.4/cluster,*,2)
[sender] make_file(addons/rook/1.0.4/operator,*,2)
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/rook/1.0.4/cluster/
[sender] make_file(addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml,*,2)
[sender] make_file(addons/rook/1.0.4/cluster/ceph-block-pool.yaml,*,2)
[sender] make_file(addons/rook/1.0.4/cluster/ceph-cluster.yaml,*,2)
[sender] make_file(addons/rook/1.0.4/cluster/kustomization.yaml,*,2)
[sender] make_file(addons/rook/1.0.4/cluster/patches,*,2)
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/rook/1.0.4/cluster/patches/
[sender] make_file(addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml,*,2)
[sender] make_file(addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml,*,2)
[sender] make_file(addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml,*,2)
[sender] make_file(addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml,*,2)
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/rook/1.0.4/operator/
[sender] make_file(addons/rook/1.0.4/operator/ceph-operator.yaml,*,2)
[sender] make_file(addons/rook/1.0.4/operator/ceph-common.yaml,*,2)
[sender] make_file(addons/rook/1.0.4/operator/kustomization.yaml,*,2)
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/weave/
[sender] make_file(addons/weave/2.5.2,*,2)
[sender] pushing local filters for /root/src/github.com/elotl/kurl/addons/weave/2.5.2/
[sender] make_file(addons/weave/2.5.2/daemonset.yaml,*,2)
[sender] make_file(addons/weave/2.5.2/install.sh,*,2)
[sender] make_file(addons/weave/2.5.2/kustomization.yaml,*,2)
[sender] make_file(addons/weave/2.5.2/encrypt.yaml,*,2)
[sender] make_file(addons/weave/2.5.2/rbac.yaml,*,2)
[sender] make_file(addons/weave/2.5.2/tmpl-ip-alloc-range.yaml,*,2)
[sender] make_file(addons/weave/2.5.2/tmpl-secret.yaml,*,2)
server_recv(2) starting pid=18534
recv_file_name(addons)
received 1 names
recv_file_list done
recv_file_name(addons/contour)
recv_file_name(addons/weave)
recv_file_name(addons/rook)
received 3 names
recv_file_list done
get_local_name count=4 kurl
generator starting pid=18534
delta-transmission enabled
recv_generator(addons,1)
recv_generator(addons,2)
recv_generator(addons/contour,3)
recv_generator(addons/rook,4)
recv_generator(addons/weave,5)
recv_files(1) starting
send_files(2, addons)
[receiver] receiving flist for dir 1
recv_file_name(addons/contour/0.14.0)
received 1 names
recv_file_list done
[receiver] receiving flist for dir 4
recv_file_name(addons/contour/0.14.0/service.yaml)
recv_file_name(addons/contour/0.14.0/install.sh)
recv_file_name(addons/contour/0.14.0/kustomization.yaml)
recv_file_name(addons/contour/0.14.0/rbac.yaml)
recv_file_name(addons/contour/0.14.0/deployment.yaml)
recv_file_name(addons/contour/0.14.0/patches)
recv_file_name(addons/contour/0.14.0/common.yaml)
received 7 names
recv_file_list done
[receiver] receiving flist for dir 5
recv_file_name(addons/contour/0.14.0/patches/service-node-port.yaml)
recv_file_name(addons/contour/0.14.0/patches/deployment-images.yaml)
received 2 names
recv_file_list done
[receiver] receiving flist for dir 2
recv_file_name(addons/rook/1.0.4)
received 1 names
recv_file_list done
[receiver] receiving flist for dir 6
recv_file_name(addons/rook/1.0.4/install.sh)
recv_file_name(addons/rook/1.0.4/cluster)
recv_file_name(addons/rook/1.0.4/operator)
received 3 names
recv_file_list done
[receiver] receiving flist for dir 7
recv_file_name(addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml)
recv_file_name(addons/rook/1.0.4/cluster/ceph-block-pool.yaml)
recv_file_name(addons/rook/1.0.4/cluster/ceph-cluster.yaml)
recv_file_name(addons/rook/1.0.4/cluster/kustomization.yaml)
recv_file_name(addons/rook/1.0.4/cluster/patches)
received 5 names
recv_file_list done
[receiver] receiving flist for dir 9
recv_file_name(addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml)
recv_file_name(addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml)
recv_file_name(addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml)
recv_file_name(addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml)
received 4 names
recv_file_list done
[receiver] receiving flist for dir 8
recv_file_name(addons/rook/1.0.4/operator/ceph-operator.yaml)
recv_file_name(addons/rook/1.0.4/operator/ceph-common.yaml)
recv_file_name(addons/rook/1.0.4/operator/kustomization.yaml)
received 3 names
recv_file_list done
[receiver] receiving flist for dir 3
recv_file_name(addons/weave/2.5.2)
received 1 names
recv_file_list done
[receiver] receiving flist for dir 10
recv_file_name(addons/weave/2.5.2/daemonset.yaml)
recv_file_name(addons/weave/2.5.2/install.sh)
recv_file_name(addons/weave/2.5.2/kustomization.yaml)
recv_file_name(addons/weave/2.5.2/encrypt.yaml)
recv_file_name(addons/weave/2.5.2/rbac.yaml)
recv_file_name(addons/weave/2.5.2/tmpl-ip-alloc-range.yaml)
recv_file_name(addons/weave/2.5.2/tmpl-secret.yaml)
received 7 names
recv_file_list done
[generator] receiving flist for dir 1
recv_file_name(addons/contour/0.14.0)
received 1 names
recv_file_list done
recv_generator(addons/contour,6)
recv_generator(addons/contour/0.14.0,7)
[generator] receiving flist for dir 4
recv_file_name(addons/contour/0.14.0/service.yaml)
recv_file_name(addons/contour/0.14.0/install.sh)
recv_file_name(addons/contour/0.14.0/kustomization.yaml)
recv_file_name(addons/contour/0.14.0/rbac.yaml)
recv_file_name(addons/contour/0.14.0/deployment.yaml)
recv_file_name(addons/contour/0.14.0/patches)
recv_file_name(addons/contour/0.14.0/common.yaml)
received 7 names
recv_file_list done
recv_generator(addons/contour/0.14.0,8)
recv_generator(addons/contour/0.14.0/common.yaml,9)
generating and sending sums for 9
count=9 rem=564 blength=700 s2length=2 flength=6164
recv_generator(addons/contour/0.14.0/deployment.yaml,10)
generating and sending sums for 10
count=4 rem=539 blength=700 s2length=2 flength=2639
recv_generator(addons/contour/0.14.0/install.sh,11)
generating and sending sums for 11
count=1 rem=424 blength=700 s2length=2 flength=424
recv_generator(addons/contour/0.14.0/kustomization.yaml,12)
generating and sending sums for 12
count=1 rem=144 blength=700 s2length=2 flength=144
recv_generator(addons/contour/0.14.0/rbac.yaml,13)
generating and sending sums for 13
count=2 rem=260 blength=700 s2length=2 flength=960
recv_generator(addons/contour/0.14.0/service.yaml,14)
generating and sending sums for 14
count=2 rem=313 blength=700 s2length=2 flength=1013
recv_generator(addons/contour/0.14.0/patches,15)
[generator] receiving flist for dir 5
recv_file_name(addons/contour/0.14.0/patches/service-node-port.yaml)
recv_file_name(addons/contour/0.14.0/patches/deployment-images.yaml)
received 2 names
recv_file_list done
recv_generator(addons/contour/0.14.0/patches,16)
recv_generator(addons/contour/0.14.0/patches/deployment-images.yaml,17)
generating and sending sums for 17
count=1 rem=403 blength=700 s2length=2 flength=403
recv_generator(addons/contour/0.14.0/patches/service-node-port.yaml,18)
generating and sending sums for 18
count=1 rem=103 blength=700 s2length=2 flength=103
[generator] receiving flist for dir 2
recv_file_name(addons/rook/1.0.4)
received 1 names
recv_file_list done
recv_generator(addons/rook,19)
recv_generator(addons/rook/1.0.4,20)
[generator] receiving flist for dir 6
recv_file_name(addons/rook/1.0.4/install.sh)
recv_file_name(addons/rook/1.0.4/cluster)
recv_file_name(addons/rook/1.0.4/operator)
received 3 names
recv_file_list done
recv_generator(addons/rook/1.0.4,21)
recv_generator(addons/rook/1.0.4/install.sh,22)
generating and sending sums for 22
count=6 rem=166 blength=700 s2length=2 flength=3666
recv_generator(addons/rook/1.0.4/cluster,23)
recv_generator(addons/rook/1.0.4/operator,24)
[generator] receiving flist for dir 7
recv_file_name(addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml)
recv_file_name(addons/rook/1.0.4/cluster/ceph-block-pool.yaml)
recv_file_name(addons/rook/1.0.4/cluster/ceph-cluster.yaml)
recv_file_name(addons/rook/1.0.4/cluster/kustomization.yaml)
recv_file_name(addons/rook/1.0.4/cluster/patches)
received 5 names
recv_file_list done
recv_generator(addons/rook/1.0.4/cluster,25)
recv_generator(addons/rook/1.0.4/cluster/ceph-block-pool.yaml,26)
generating and sending sums for 26
count=2 rem=223 blength=700 s2length=2 flength=923
recv_generator(addons/rook/1.0.4/cluster/ceph-cluster.yaml,27)
generating and sending sums for 27
count=9 rem=391 blength=700 s2length=2 flength=5991
recv_generator(addons/rook/1.0.4/cluster/kustomization.yaml,28)
generating and sending sums for 28
count=1 rem=215 blength=700 s2length=2 flength=215
recv_generator(addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml,29)
generating and sending sums for 29
count=1 rem=255 blength=700 s2length=2 flength=255
recv_generator(addons/rook/1.0.4/cluster/patches,30)
[generator] receiving flist for dir 9
recv_file_name(addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml)
recv_file_name(addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml)
recv_file_name(addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml)
recv_file_name(addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml)
received 4 names
recv_file_list done
recv_generator(addons/rook/1.0.4/cluster/patches,31)
recv_generator(addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml,32)
generating and sending sums for 32
count=1 rem=145 blength=700 s2length=2 flength=145
recv_generator(addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml,33)
generating and sending sums for 33
count=1 rem=153 blength=700 s2length=2 flength=153
recv_generator(addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml,34)
generating and sending sums for 34
count=1 rem=156 blength=700 s2length=2 flength=156
recv_generator(addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml,35)
generating and sending sums for 35
count=1 rem=184 blength=700 s2length=2 flength=184
[generator] receiving flist for dir 8
recv_file_name(addons/rook/1.0.4/operator/ceph-operator.yaml)
recv_file_name(addons/rook/1.0.4/operator/ceph-common.yaml)
recv_file_name(addons/rook/1.0.4/operator/kustomization.yaml)
received 3 names
recv_file_list done
recv_generator(addons/rook/1.0.4/operator,36)
recv_generator(addons/rook/1.0.4/operator/ceph-common.yaml,37)
generating and sending sums for 37
count=21 rem=152 blength=700 s2length=2 flength=14152
recv_generator(addons/rook/1.0.4/operator/ceph-operator.yaml,38)
generating and sending sums for 38
count=10 rem=254 blength=700 s2length=2 flength=6554
recv_generator(addons/rook/1.0.4/operator/kustomization.yaml,39)
generating and sending sums for 39
count=1 rem=51 blength=700 s2length=2 flength=51
[generator] receiving flist for dir 3
recv_file_name(addons/weave/2.5.2)
received 1 names
recv_file_list done
recv_generator(addons/weave,40)
recv_generator(addons/weave/2.5.2,41)
[generator] receiving flist for dir 10
recv_file_name(addons/weave/2.5.2/daemonset.yaml)
recv_file_name(addons/weave/2.5.2/install.sh)
recv_file_name(addons/weave/2.5.2/kustomization.yaml)
recv_file_name(addons/weave/2.5.2/encrypt.yaml)
recv_file_name(addons/weave/2.5.2/rbac.yaml)
recv_file_name(addons/weave/2.5.2/tmpl-ip-alloc-range.yaml)
recv_file_name(addons/weave/2.5.2/tmpl-secret.yaml)
received 7 names
recv_file_list done
recv_generator(addons/weave/2.5.2,42)
recv_generator(addons/weave/2.5.2/daemonset.yaml,43)
generating and sending sums for 43
count=4 rem=587 blength=700 s2length=2 flength=2687
recv_generator(addons/weave/2.5.2/encrypt.yaml,44)
generating and sending sums for 44
count=1 rem=313 blength=700 s2length=2 flength=313
recv_generator(addons/weave/2.5.2/install.sh,45)
generating and sending sums for 45
count=3 rem=609 blength=700 s2length=2 flength=2009
recv_generator(addons/weave/2.5.2/kustomization.yaml,46)
generating and sending sums for 46
count=1 rem=40 blength=700 s2length=2 flength=40
recv_generator(addons/weave/2.5.2/rbac.yaml,47)
generating and sending sums for 47
count=3 rem=208 blength=700 s2length=2 flength=1608
recv_generator(addons/weave/2.5.2/tmpl-ip-alloc-range.yaml,48)
generating and sending sums for 48
count=1 rem=243 blength=700 s2length=2 flength=243
recv_generator(addons/weave/2.5.2/tmpl-secret.yaml,49)
generating and sending sums for 49
count=1 rem=132 blength=700 s2length=2 flength=132
generate_files phase=1
send_files(6, addons/contour)
send_files(8, addons/contour/0.14.0)
send_files(9, addons/contour/0.14.0/common.yaml)
send_files mapped addons/contour/0.14.0/common.yaml of size 6164
calling match_sums addons/contour/0.14.0/common.yaml
addons/contour/0.14.0/common.yaml
built hash table
hash search b=700 len=6164
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=700 n=0
match at 1400 last_match=1400 j=2 len=700 n=0
match at 2100 last_match=2100 j=3 len=700 n=0
match at 2800 last_match=2800 j=4 len=700 n=0
match at 3500 last_match=3500 j=5 len=700 n=0
match at 4200 last_match=4200 j=6 len=700 n=0
match at 4900 last_match=4900 j=7 len=700 n=0
match at 5600 last_match=5600 j=8 len=564 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=9 matches=9
sender finished addons/contour/0.14.0/common.yaml
send_files(10, addons/contour/0.14.0/deployment.yaml)
send_files mapped addons/contour/0.14.0/deployment.yaml of size 2639
calling match_sums addons/contour/0.14.0/deployment.yaml
addons/contour/0.14.0/deployment.yaml
built hash table
hash search b=700 len=2639
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=700 n=0
match at 1400 last_match=1400 j=2 len=700 n=0
match at 2100 last_match=2100 j=3 len=539 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=4 matches=4
sender finished addons/contour/0.14.0/deployment.yaml
send_files(11, addons/contour/0.14.0/install.sh)
send_files mapped addons/contour/0.14.0/install.sh of size 424
calling match_sums addons/contour/0.14.0/install.sh
addons/contour/0.14.0/install.sh
built hash table
hash search b=700 len=424
match at 0 last_match=0 j=0 len=424 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/contour/0.14.0/install.sh
send_files(12, addons/contour/0.14.0/kustomization.yaml)
send_files mapped addons/contour/0.14.0/kustomization.yaml of size 144
calling match_sums addons/contour/0.14.0/kustomization.yaml
addons/contour/0.14.0/kustomization.yaml
built hash table
hash search b=700 len=144
match at 0 last_match=0 j=0 len=144 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/contour/0.14.0/kustomization.yaml
send_files(13, addons/contour/0.14.0/rbac.yaml)
send_files mapped addons/contour/0.14.0/rbac.yaml of size 960
calling match_sums addons/contour/0.14.0/rbac.yaml
addons/contour/0.14.0/rbac.yaml
built hash table
hash search b=700 len=960
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=260 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=2 matches=2
sender finished addons/contour/0.14.0/rbac.yaml
send_files(14, addons/contour/0.14.0/service.yaml)
send_files mapped addons/contour/0.14.0/service.yaml of size 1013
calling match_sums addons/contour/0.14.0/service.yaml
addons/contour/0.14.0/service.yaml
built hash table
hash search b=700 len=1013
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=313 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=2 matches=2
sender finished addons/contour/0.14.0/service.yaml
send_files(16, addons/contour/0.14.0/patches)
send_files(17, addons/contour/0.14.0/patches/deployment-images.yaml)
send_files mapped addons/contour/0.14.0/patches/deployment-images.yaml of size 403
calling match_sums addons/contour/0.14.0/patches/deployment-images.yaml
addons/contour/0.14.0/patches/deployment-images.yaml
built hash table
hash search b=700 len=403
match at 0 last_match=0 j=0 len=403 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/contour/0.14.0/patches/deployment-images.yaml
send_files(18, addons/contour/0.14.0/patches/service-node-port.yaml)
send_files mapped addons/contour/0.14.0/patches/service-node-port.yaml of size 103
calling match_sums addons/contour/0.14.0/patches/service-node-port.yaml
addons/contour/0.14.0/patches/service-node-port.yaml
built hash table
hash search b=700 len=103
match at 0 last_match=0 j=0 len=103 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/contour/0.14.0/patches/service-node-port.yaml
send_files(19, addons/rook)
send_files(21, addons/rook/1.0.4)
send_files(22, addons/rook/1.0.4/install.sh)
send_files mapped addons/rook/1.0.4/install.sh of size 3666
calling match_sums addons/rook/1.0.4/install.sh
addons/rook/1.0.4/install.sh
built hash table
hash search b=700 len=3666
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=700 n=0
match at 1400 last_match=1400 j=2 len=700 n=0
match at 2100 last_match=2100 j=3 len=700 n=0
match at 2800 last_match=2800 j=4 len=700 n=0
match at 3500 last_match=3500 j=5 len=166 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=6 matches=6
sender finished addons/rook/1.0.4/install.sh
send_files(25, addons/rook/1.0.4/cluster)
send_files(26, addons/rook/1.0.4/cluster/ceph-block-pool.yaml)
send_files mapped addons/rook/1.0.4/cluster/ceph-block-pool.yaml of size 923
calling match_sums addons/rook/1.0.4/cluster/ceph-block-pool.yaml
addons/rook/1.0.4/cluster/ceph-block-pool.yaml
built hash table
hash search b=700 len=923
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=223 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=2 matches=2
sender finished addons/rook/1.0.4/cluster/ceph-block-pool.yaml
send_files(27, addons/rook/1.0.4/cluster/ceph-cluster.yaml)
send_files mapped addons/rook/1.0.4/cluster/ceph-cluster.yaml of size 5991
calling match_sums addons/rook/1.0.4/cluster/ceph-cluster.yaml
addons/rook/1.0.4/cluster/ceph-cluster.yaml
built hash table
hash search b=700 len=5991
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=700 n=0
match at 1400 last_match=1400 j=2 len=700 n=0
match at 2100 last_match=2100 j=3 len=700 n=0
match at 2800 last_match=2800 j=4 len=700 n=0
match at 3500 last_match=3500 j=5 len=700 n=0
match at 4200 last_match=4200 j=6 len=700 n=0
match at 4900 last_match=4900 j=7 len=700 n=0
match at 5600 last_match=5600 j=8 len=391 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=9 matches=9
sender finished addons/rook/1.0.4/cluster/ceph-cluster.yaml
send_files(28, addons/rook/1.0.4/cluster/kustomization.yaml)
send_files mapped addons/rook/1.0.4/cluster/kustomization.yaml of size 215
calling match_sums addons/rook/1.0.4/cluster/kustomization.yaml
addons/rook/1.0.4/cluster/kustomization.yaml
built hash table
hash search b=700 len=215
match at 0 last_match=0 j=0 len=215 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/rook/1.0.4/cluster/kustomization.yaml
send_files(29, addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml)
send_files mapped addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml of size 255
calling match_sums addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml
addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml
built hash table
hash search b=700 len=255
match at 0 last_match=0 j=0 len=255 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml
send_files(31, addons/rook/1.0.4/cluster/patches)
send_files(32, addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml)
send_files mapped addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml of size 145
calling match_sums addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml
addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml
built hash table
hash search b=700 len=145
match at 0 last_match=0 j=0 len=145 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml
send_files(33, addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml)
send_files mapped addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml of size 153
calling match_sums addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml
addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml
built hash table
hash search b=700 len=153
match at 0 last_match=0 j=0 len=153 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml
send_files(34, addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml)
send_files mapped addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml of size 156
calling match_sums addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml
addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml
built hash table
hash search b=700 len=156
match at 0 last_match=0 j=0 len=156 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml
send_files(35, addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml)
send_files mapped addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml of size 184
calling match_sums addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml
addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml
built hash table
hash search b=700 len=184
match at 0 last_match=0 j=0 len=184 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml
send_files(36, addons/rook/1.0.4/operator)
send_files(37, addons/rook/1.0.4/operator/ceph-common.yaml)
send_files mapped addons/rook/1.0.4/operator/ceph-common.yaml of size 14152
calling match_sums addons/rook/1.0.4/operator/ceph-common.yaml
addons/rook/1.0.4/operator/ceph-common.yaml
built hash table
hash search b=700 len=14152
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=700 n=0
match at 1400 last_match=1400 j=2 len=700 n=0
match at 2100 last_match=2100 j=3 len=700 n=0
match at 2800 last_match=2800 j=4 len=700 n=0
match at 3500 last_match=3500 j=5 len=700 n=0
match at 4200 last_match=4200 j=6 len=700 n=0
match at 4900 last_match=4900 j=7 len=700 n=0
match at 5600 last_match=5600 j=8 len=700 n=0
match at 6300 last_match=6300 j=9 len=700 n=0
match at 7000 last_match=7000 j=10 len=700 n=0
match at 7700 last_match=7700 j=11 len=700 n=0
match at 8400 last_match=8400 j=12 len=700 n=0
match at 9100 last_match=9100 j=13 len=700 n=0
match at 9800 last_match=9800 j=14 len=700 n=0
match at 10500 last_match=10500 j=15 len=700 n=0
match at 11200 last_match=11200 j=16 len=700 n=0
match at 11900 last_match=11900 j=17 len=700 n=0
match at 12600 last_match=12600 j=18 len=700 n=0
match at 13300 last_match=13300 j=19 len=700 n=0
match at 14000 last_match=14000 j=20 len=152 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=21 matches=21
sender finished addons/rook/1.0.4/operator/ceph-common.yaml
send_files(38, addons/rook/1.0.4/operator/ceph-operator.yaml)
send_files mapped addons/rook/1.0.4/operator/ceph-operator.yaml of size 6554
calling match_sums addons/rook/1.0.4/operator/ceph-operator.yaml
addons/rook/1.0.4/operator/ceph-operator.yaml
built hash table
hash search b=700 len=6554
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=700 n=0
match at 1400 last_match=1400 j=2 len=700 n=0
match at 2100 last_match=2100 j=3 len=700 n=0
match at 2800 last_match=2800 j=4 len=700 n=0
match at 3500 last_match=3500 j=5 len=700 n=0
match at 4200 last_match=4200 j=6 len=700 n=0
match at 4900 last_match=4900 j=7 len=700 n=0
match at 5600 last_match=5600 j=8 len=700 n=0
match at 6300 last_match=6300 j=9 len=254 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=10 matches=10
sender finished addons/rook/1.0.4/operator/ceph-operator.yaml
send_files(39, addons/rook/1.0.4/operator/kustomization.yaml)
send_files mapped addons/rook/1.0.4/operator/kustomization.yaml of size 51
calling match_sums addons/rook/1.0.4/operator/kustomization.yaml
addons/rook/1.0.4/operator/kustomization.yaml
built hash table
hash search b=700 len=51
match at 0 last_match=0 j=0 len=51 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/rook/1.0.4/operator/kustomization.yaml
send_files(40, addons/weave)
send_files(42, addons/weave/2.5.2)
send_files(43, addons/weave/2.5.2/daemonset.yaml)
send_files mapped addons/weave/2.5.2/daemonset.yaml of size 2687
calling match_sums addons/weave/2.5.2/daemonset.yaml
addons/weave/2.5.2/daemonset.yaml
built hash table
hash search b=700 len=2687
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=700 n=0
match at 1400 last_match=1400 j=2 len=700 n=0
match at 2100 last_match=2100 j=3 len=587 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=4 matches=4
sender finished addons/weave/2.5.2/daemonset.yaml
send_files(44, addons/weave/2.5.2/encrypt.yaml)
send_files mapped addons/weave/2.5.2/encrypt.yaml of size 313
calling match_sums addons/weave/2.5.2/encrypt.yaml
addons/weave/2.5.2/encrypt.yaml
built hash table
hash search b=700 len=313
match at 0 last_match=0 j=0 len=313 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/weave/2.5.2/encrypt.yaml
send_files(45, addons/weave/2.5.2/install.sh)
send_files mapped addons/weave/2.5.2/install.sh of size 2009
calling match_sums addons/weave/2.5.2/install.sh
addons/weave/2.5.2/install.sh
built hash table
hash search b=700 len=2009
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=700 n=0
match at 1400 last_match=1400 j=2 len=609 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=3 matches=3
sender finished addons/weave/2.5.2/install.sh
send_files(46, addons/weave/2.5.2/kustomization.yaml)
send_files mapped addons/weave/2.5.2/kustomization.yaml of size 40
calling match_sums addons/weave/2.5.2/kustomization.yaml
addons/weave/2.5.2/kustomization.yaml
built hash table
hash search b=700 len=40
match at 0 last_match=0 j=0 len=40 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/weave/2.5.2/kustomization.yaml
send_files(47, addons/weave/2.5.2/rbac.yaml)
send_files mapped addons/weave/2.5.2/rbac.yaml of size 1608
calling match_sums addons/weave/2.5.2/rbac.yaml
addons/weave/2.5.2/rbac.yaml
built hash table
hash search b=700 len=1608
match at 0 last_match=0 j=0 len=700 n=0
match at 700 last_match=700 j=1 len=700 n=0
match at 1400 last_match=1400 j=2 len=208 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=3 matches=3
sender finished addons/weave/2.5.2/rbac.yaml
send_files(48, addons/weave/2.5.2/tmpl-ip-alloc-range.yaml)
send_files mapped addons/weave/2.5.2/tmpl-ip-alloc-range.yaml of size 243
calling match_sums addons/weave/2.5.2/tmpl-ip-alloc-range.yaml
addons/weave/2.5.2/tmpl-ip-alloc-range.yaml
built hash table
hash search b=700 len=243
match at 0 last_match=0 j=0 len=243 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/weave/2.5.2/tmpl-ip-alloc-range.yaml
send_files(49, addons/weave/2.5.2/tmpl-secret.yaml)
send_files mapped addons/weave/2.5.2/tmpl-secret.yaml of size 132
calling match_sums addons/weave/2.5.2/tmpl-secret.yaml
addons/weave/2.5.2/tmpl-secret.yaml
built hash table
hash search b=700 len=132
match at 0 last_match=0 j=0 len=132 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=1 matches=1
sender finished addons/weave/2.5.2/tmpl-secret.yaml
recv_files(addons)
recv_files(addons/contour)
recv_files(addons/contour/0.14.0)
recv_files(addons/contour/0.14.0/common.yaml)
recv mapped addons/contour/0.14.0/common.yaml of size 6164
got file_sum
renaming addons/contour/0.14.0/.common.yaml.mzHYt1 to addons/contour/0.14.0/common.yaml
recv_files(addons/contour/0.14.0/deployment.yaml)
recv mapped addons/contour/0.14.0/deployment.yaml of size 2639
got file_sum
renaming addons/contour/0.14.0/.deployment.yaml.1REmA9 to addons/contour/0.14.0/deployment.yaml
recv_files(addons/contour/0.14.0/install.sh)
recv mapped addons/contour/0.14.0/install.sh of size 424
got file_sum
renaming addons/contour/0.14.0/.install.sh.lZFLGh to addons/contour/0.14.0/install.sh
recv_files(addons/contour/0.14.0/kustomization.yaml)
recv mapped addons/contour/0.14.0/kustomization.yaml of size 144
got file_sum
renaming addons/contour/0.14.0/.kustomization.yaml.ilibNp to addons/contour/0.14.0/kustomization.yaml
recv_files(addons/contour/0.14.0/rbac.yaml)
recv mapped addons/contour/0.14.0/rbac.yaml of size 960
got file_sum
renaming addons/contour/0.14.0/.rbac.yaml.8PDBTx to addons/contour/0.14.0/rbac.yaml
recv_files(addons/contour/0.14.0/service.yaml)
recv mapped addons/contour/0.14.0/service.yaml of size 1013
got file_sum
renaming addons/contour/0.14.0/.service.yaml.70F2ZF to addons/contour/0.14.0/service.yaml
recv_files(addons/contour/0.14.0/patches)
recv_files(addons/contour/0.14.0/patches/deployment-images.yaml)
recv mapped addons/contour/0.14.0/patches/deployment-images.yaml of size 403
got file_sum
renaming addons/contour/0.14.0/patches/.deployment-images.yaml.diuu6N to addons/contour/0.14.0/patches/deployment-images.yaml
recv_files(addons/contour/0.14.0/patches/service-node-port.yaml)
recv mapped addons/contour/0.14.0/patches/service-node-port.yaml of size 103
got file_sum
renaming addons/contour/0.14.0/patches/.service-node-port.yaml.TObXcW to addons/contour/0.14.0/patches/service-node-port.yaml
recv_files(addons/rook)
recv_files(addons/rook/1.0.4)
recv_files(addons/rook/1.0.4/install.sh)
recv mapped addons/rook/1.0.4/install.sh of size 3666
got file_sum
renaming addons/rook/1.0.4/.install.sh.GCCqj4 to addons/rook/1.0.4/install.sh
recv_files(addons/rook/1.0.4/cluster)
recv_files(addons/rook/1.0.4/cluster/ceph-block-pool.yaml)
recv mapped addons/rook/1.0.4/cluster/ceph-block-pool.yaml of size 923
got file_sum
renaming addons/rook/1.0.4/cluster/.ceph-block-pool.yaml.W20Upc to addons/rook/1.0.4/cluster/ceph-block-pool.yaml
recv_files(addons/rook/1.0.4/cluster/ceph-cluster.yaml)
recv mapped addons/rook/1.0.4/cluster/ceph-cluster.yaml of size 5991
got file_sum
renaming addons/rook/1.0.4/cluster/.ceph-cluster.yaml.yV1pwk to addons/rook/1.0.4/cluster/ceph-cluster.yaml
recv_files(addons/rook/1.0.4/cluster/kustomization.yaml)
recv mapped addons/rook/1.0.4/cluster/kustomization.yaml of size 215
got file_sum
renaming addons/rook/1.0.4/cluster/.kustomization.yaml.X5XVCs to addons/rook/1.0.4/cluster/kustomization.yaml
recv_files(addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml)
recv mapped addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml of size 255
got file_sum
renaming addons/rook/1.0.4/cluster/.tmpl-ceph-storage-class.yaml.DjDsJA to addons/rook/1.0.4/cluster/tmpl-ceph-storage-class.yaml
recv_files(addons/rook/1.0.4/cluster/patches)
recv_files(addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml)
recv mapped addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml of size 145
got file_sum
renaming addons/rook/1.0.4/cluster/patches/.ceph-cluster-mons.yaml.Ga8ZPI to addons/rook/1.0.4/cluster/patches/ceph-cluster-mons.yaml
recv_files(addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml)
recv mapped addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml of size 153
got file_sum
renaming addons/rook/1.0.4/cluster/patches/.tmpl-ceph-block-pool-replicas.yaml.aPuyWQ to addons/rook/1.0.4/cluster/patches/tmpl-ceph-block-pool-replicas.yaml
recv_files(addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml)
recv mapped addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml of size 156
got file_sum
renaming addons/rook/1.0.4/cluster/patches/.tmpl-ceph-cluster-image.yaml.f2z72Y to addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-image.yaml
recv_files(addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml)
recv mapped addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml of size 184
got file_sum
renaming addons/rook/1.0.4/cluster/patches/.tmpl-ceph-cluster-storage.yaml.YbeH96 to addons/rook/1.0.4/cluster/patches/tmpl-ceph-cluster-storage.yaml
recv_files(addons/rook/1.0.4/operator)
recv_files(addons/rook/1.0.4/operator/ceph-common.yaml)
recv mapped addons/rook/1.0.4/operator/ceph-common.yaml of size 14152
got file_sum
renaming addons/rook/1.0.4/operator/.ceph-common.yaml.G3Rhgf to addons/rook/1.0.4/operator/ceph-common.yaml
recv_files(addons/rook/1.0.4/operator/ceph-operator.yaml)
recv mapped addons/rook/1.0.4/operator/ceph-operator.yaml of size 6554
got file_sum
renaming addons/rook/1.0.4/operator/.ceph-operator.yaml.1a5Tmn to addons/rook/1.0.4/operator/ceph-operator.yaml
recv_files(addons/rook/1.0.4/operator/kustomization.yaml)
recv mapped addons/rook/1.0.4/operator/kustomization.yaml of size 51
got file_sum
renaming addons/rook/1.0.4/operator/.kustomization.yaml.lJfxtv to addons/rook/1.0.4/operator/kustomization.yaml
recv_files(addons/weave)
recv_files(addons/weave/2.5.2)
recv_files(addons/weave/2.5.2/daemonset.yaml)
recv mapped addons/weave/2.5.2/daemonset.yaml of size 2687
got file_sum
renaming addons/weave/2.5.2/.daemonset.yaml.O0ebAD to addons/weave/2.5.2/daemonset.yaml
recv_files(addons/weave/2.5.2/encrypt.yaml)
recv mapped addons/weave/2.5.2/encrypt.yaml of size 313
got file_sum
renaming addons/weave/2.5.2/.encrypt.yaml.ij5PGL to addons/weave/2.5.2/encrypt.yaml
recv_files(addons/weave/2.5.2/install.sh)
recv mapped addons/weave/2.5.2/install.sh of size 2009
got file_sum
renaming addons/weave/2.5.2/.install.sh.SxvvNT to addons/weave/2.5.2/install.sh
recv_files(addons/weave/2.5.2/kustomization.yaml)
recv mapped addons/weave/2.5.2/kustomization.yaml of size 40
got file_sum
renaming addons/weave/2.5.2/.kustomization.yaml.etCbU1 to addons/weave/2.5.2/kustomization.yaml
recv_files(addons/weave/2.5.2/rbac.yaml)
recv mapped addons/weave/2.5.2/rbac.yaml of size 1608
got file_sum
renaming addons/weave/2.5.2/.rbac.yaml.u5MS09 to addons/weave/2.5.2/rbac.yaml
recv_files(addons/weave/2.5.2/tmpl-ip-alloc-range.yaml)
recv mapped addons/weave/2.5.2/tmpl-ip-alloc-range.yaml of size 243
got file_sum
renaming addons/weave/2.5.2/.tmpl-ip-alloc-range.yaml.vlzA7h to addons/weave/2.5.2/tmpl-ip-alloc-range.yaml
recv_files(addons/weave/2.5.2/tmpl-secret.yaml)
recv mapped addons/weave/2.5.2/tmpl-secret.yaml of size 132
got file_sum
renaming addons/weave/2.5.2/.tmpl-secret.yaml.vadjeq to addons/weave/2.5.2/tmpl-secret.yaml
send_files phase=1
recv_files phase=1
generate_files phase=2
send_files phase=2
send files finished
total: matches=90  hash_hits=90  false_alarms=0 data=0
recv_files phase=2
recv_files finished
generate_files phase=3
generate_files finished

sent 2,597 bytes  received 20,033 bytes  15,086.67 bytes/sec
total size is 51,327  speedup is 2.27
[sender] _exit_cleanup(code=0, file=main.c, line=1183): about to call exit(0)

On the remote machine, the destination kurl tree looks fine, and I was able to create a k8s cluster successfully.

ubuntu@ip-172-31-44-19:~/kurl$ ls -R
.:
Manifest  addons  rhel-7  scripts  ubuntu-18.04  yaml

./addons:
contour  rook  weave

./addons/contour:
0.14.0

./addons/contour/0.14.0:
common.yaml  deployment.yaml  install.sh  kustomization.yaml  patches  rbac.yaml  service.yaml

./addons/contour/0.14.0/patches:
deployment-images.yaml  service-node-port.yaml

./addons/rook:
1.0.4

./addons/rook/1.0.4:
cluster  install.sh  operator

./addons/rook/1.0.4/cluster:
ceph-block-pool.yaml  ceph-cluster.yaml  kustomization.yaml  patches  tmpl-ceph-storage-class.yaml

./addons/rook/1.0.4/cluster/patches:
ceph-cluster-mons.yaml  tmpl-ceph-block-pool-replicas.yaml  tmpl-ceph-cluster-image.yaml  tmpl-ceph-cluster-storage.yaml

./addons/rook/1.0.4/operator:
ceph-common.yaml  ceph-operator.yaml  kustomization.yaml

./addons/weave:
2.5.2

./addons/weave/2.5.2:
daemonset.yaml  encrypt.yaml  install.sh  kustomization.yaml  rbac.yaml  tmpl-ip-alloc-range.yaml  tmpl-secret.yaml

./rhel-7:
packages

./rhel-7/packages:
k8s

./rhel-7/packages/k8s:
14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm
548a0dcd865c16a50980420ddfa5fbccb8b59621179798e6dc905c9bf8af3b34-kubernetes-cni-0.7.5-0.x86_64.rpm
aa386b8f2cac67415283227ccb01dc043d718aec142e32e1a2ba6dbd5173317b-kubeadm-1.15.1-0.x86_64.rpm
conntrack-tools-1.4.4-4.el7.x86_64.rpm
ebtables-2.0.10-16.el7.x86_64.rpm
ethtool-4.8-9.el7.x86_64.rpm
f27b0d7e1770ae83c9fce9ab30a5a7eba4453727cdc53ee96dc4542c8577a464-kubectl-1.15.1-0.x86_64.rpm
f5edc025972c2d092ac41b05877c89b50cedaa7177978d9e5e49b5a2979dbc85-kubelet-1.15.1-0.x86_64.rpm
iproute-4.11.0-14.el7_6.2.x86_64.rpm
iptables-1.4.21-28.el7.x86_64.rpm
kubeadm
libmnl-1.0.3-7.el7.x86_64.rpm
libnetfilter_conntrack-1.0.6-1.el7_3.x86_64.rpm
libnetfilter_cthelper-1.0.0-9.el7.x86_64.rpm
libnetfilter_cttimeout-1.0.0-6.el7.x86_64.rpm
libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
libnfnetlink-1.0.1-4.el7.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
tcp_wrappers-libs-7.6-77.el7.x86_64.rpm

./scripts:
common  install.sh  join.sh

./scripts/common:
addon.sh  common.sh  discover.sh  flags.sh  preflights.sh  prepare.sh  prompts.sh  rook.sh  test  yaml.sh

./scripts/common/test:
cli-script-test.sh  docker-test.sh          ip-address-test.sh  semver-test.sh
common-test.sh      docker-version-test.sh  proxy-test.sh       test.sh

./ubuntu-18.04:
packages

./ubuntu-18.04/packages:
k8s

./ubuntu-18.04/packages/k8s:
conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb  libelf1_0.170-0.4ubuntu0.1_amd64.deb
cri-tools_1.13.0-00_amd64.deb                            libip4tc0_1.6.1-2ubuntu2_amd64.deb
ebtables_2.0.10.4-3.5ubuntu2.18.04.3_amd64.deb           libip6tc0_1.6.1-2ubuntu2_amd64.deb
ethtool_1%3a4.15-0ubuntu1_amd64.deb                      libiptc0_1.6.1-2ubuntu2_amd64.deb
iproute2_4.15.0-2ubuntu1_amd64.deb                       libkmod2_24-1ubuntu3.2_amd64.deb
iptables_1.6.1-2ubuntu2_amd64.deb                        libmnl0_1.0.4-2_amd64.deb
kmod_24-1ubuntu3.2_amd64.deb                             libnetfilter-conntrack3_1.0.6-2_amd64.deb
kubeadm                                                  libnfnetlink0_1.0.1-3_amd64.deb
kubeadm_1.15.1-00_amd64.deb                              libwrap0_7.6.q-27_amd64.deb
kubectl_1.15.1-00_amd64.deb                              libxtables12_1.6.1-2ubuntu2_amd64.deb
kubelet_1.15.1-00_amd64.deb                              multiarch-support_2.27-3ubuntu1_amd64.deb
kubernetes-cni_0.7.5-00_amd64.deb                        partial
libatm1_1%3a2.5.1-2build1_amd64.deb                      socat_1.7.3.2-2ubuntu2_amd64.deb

./ubuntu-18.04/packages/k8s/partial:

./yaml:
kubeadm-cluster-config-v1beta2.yml  kubeadm-join-config-v1beta2.yaml
kubeadm-init-config-v1beta2.yml     kubeproxy-config-v1alpha1.yml
ubuntu@ip-172-31-44-19:~/kurl$ 
ubuntu@ip-172-31-44-19:~/kurl$ cd ~/kurl && sudo bash scripts/install.sh
...

		Installation
		  Complete ✔

To access the cluster with kubectl, reload your shell:

    bash -l


To add worker nodes to this installation, run the following script on your other nodes
    curl /join.sh | sudo bash -s kubernetes-master-address=172.31.44.19 kubeadm-token=mn10pm.87aek8ewxnwgst1c kubeadm-token-ca-hash=sha256:5311f4ddd40dd4b2659c734e5917602dab931806ee7625158a0efcec3e80cd8a kubernetes-version=1.15.1 


ubuntu@ip-172-31-44-19:~/kurl$

Proposal: Package kURL as a kubectl plugin

Today, for kURL to be useful, the kurl.sh site is required. The site is where anyone can upload a new kURL spec, and it serves the install scripts and handles building and serving airgap packages. This proposal explores ways to make kURL more useful as a standalone tool.

If we package kURL as a kubectl plugin, then it will be possible to use it locally. This would support both online and offline capabilities and create new ways to run kURL.

Installing a new cluster

Given a kURL spec saved locally as a file named my-cluster.yaml, installing on the local machine could be as easy as:

$ kubectl kurl apply -f ./my-cluster.yaml
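
For reference, here is a minimal sketch of what my-cluster.yaml might contain, written as a shell heredoc so it can be pasted into a terminal. The apiVersion and field names are assumptions based on the kurl.sh Installer format, and the add-on versions simply mirror the ones bundled in the tree shown above.

# Write a hypothetical Installer spec; field names assume the kurl.sh format
cat > my-cluster.yaml <<'EOF'
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: my-cluster
spec:
  kubernetes:
    version: 1.15.1
  weave:
    version: 2.5.2
  rook:
    version: 1.0.4
  contour:
    version: 0.14.0
EOF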

This will continue to support URLs on remote servers and even the kurl.sh site:

$ kubectl kurl apply -f https://kurl.sh/latest

Upgrading a cluster

Once a kURL cluster is running, if my-cluster.yaml is updated (version numbers, a new add-on, etc.), the changes could be applied to the cluster with:

$ kubectl kurl apply -f ./my-cluster.yaml
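
For instance, bumping an add-on version in the spec and re-applying might look like the following; the newer Contour version is purely illustrative, and sed is just one way to edit the file.

# Bump Contour to a hypothetical newer version in the spec, then re-apply
sed -i 's/version: 0.14.0/version: 0.15.0/' my-cluster.yaml
kubectl kurl apply -f ./my-cluster.yaml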

We should also support a dry-run on the upgrade command:

$ kubectl kurl apply -f ./my-cluster.yaml --dry-run

In this mode, the output is printed but no changes are applied to the cluster. This is useful for detecting whether anything would change and for previewing what those changes are.

Building Airgap

If you have the kURL plugin installed locally (on your workstation), you could build a new airgap package for server-side installation using:

$ kubectl kurl build -f ./my-cluster.yaml

This will create a file named my-cluster.tar.gz that contains the required packages and artifacts for the cluster described in my-cluster.yaml. Running build will require internet access, as it downloads and collects the artifacts into a single package. This can be run in a DMZ, in a CI service, or even in-cluster (no root access is needed).

After transferring this file to the server, it can be installed via:

$ kubectl kurl apply -f ./my-cluster.tar.gz

The plugin will detect that this is a tar.gz, not a yaml file, and will extract and run offline. No outbound internet access will be required or attempted.
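
Putting the pieces together, an end-to-end airgap workflow might look like the sketch below. The kubectl kurl commands are the hypothetical ones proposed here, and the host name is a placeholder.

# Build the bundle where internet access is available (workstation, CI, DMZ)
kubectl kurl build -f ./my-cluster.yaml

# Move the bundle to the airgapped server (placeholder host name)
scp my-cluster.tar.gz user@airgapped-host:~/

# Install on the server; the plugin detects the tar.gz and runs fully offline
ssh user@airgapped-host 'kubectl kurl apply -f ./my-cluster.tar.gz'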

Implementation

This should be written in Go and packaged as a krew plugin for distribution in the krew index. The build command already exists as a Go package in kURL. The apply command would have to be written, since that's currently a shell script. We could shell out to the existing script at times, and interact with the cluster using client-go when needed.
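
For distribution, krew plugins are described by a small manifest. Below is a rough sketch of what a kurl plugin manifest could look like; the version, download URI, and sha256 are placeholders, and only the overall shape follows krew's Plugin format.

# Sketch of a krew Plugin manifest (placeholders throughout)
cat > kurl.yaml <<'EOF'
apiVersion: krew.googlecontainertools.github.com/v1alpha2
kind: Plugin
metadata:
  name: kurl
spec:
  version: v0.1.0
  homepage: https://kurl.sh
  shortDescription: Install and manage kURL clusters
  platforms:
  - selector:
      matchLabels:
        os: linux
        arch: amd64
    uri: https://example.com/kubectl-kurl_linux_amd64.tar.gz
    sha256: "0000000000000000000000000000000000000000000000000000000000000000"
    files:
    - from: kubectl-kurl
      to: .
    bin: kubectl-kurl
EOF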

Questions

  • Why kubectl plugin instead of a binary that's installable via brew/snap/etc?
    For ease of distribution across multiple platforms.

  • Should the plugin be able to execute client side? For example, if I turn on a new EC2 instance, can I run kubectl kurl from my workstation to install?
    Possibly. I think starting with "localhost" as the target is incredibly useful.

  • What about Cluster API, doesn't that project handle some of this also?
    There is some overlap between Cluster API and kURL; kURL should support Cluster API natively. The big difference is that kURL supports add-ons, while Cluster API provides a declarative way to deploy a cluster.

  • What other questions?

Getting the add worker nodes script with parameters after installation is completed

At the end of a kURL installation we get a nice reference on how to add worker nodes, e.g.:

To add worker nodes to this installation, run the following script on your other nodes
    curl -sSL https://kurl.sh/xxx/join.sh | sudo bash -s ...

Can I re-create this join command on demand? If not, would it be possible to write it to some reference file during installation in the first place?
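
While waiting for a built-in answer, the kubeadm-token and kubeadm-token-ca-hash values can be reconstructed on demand with stock tooling. The sketch below uses standard kubeadm and openssl commands (the CA-hash pipeline is the one documented by kubeadm); it does not re-create kURL's join.sh invocation itself, only its parameters.

# Mint a fresh bootstrap token (the original may have expired)
sudo kubeadm token create --ttl 24h

# Recompute the discovery CA hash; prefix the output with "sha256:"
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'

# For plain kubeadm (not kURL's join.sh), this prints a ready-made join command
sudo kubeadm token create --print-join-command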

kurl.sh does not support adding node labels

There should be a way to set labels on the original node as well as on any additional nodes that are added to the cluster. Labels should not be required to be identical across nodes, either.

This can be done manually with kubectl after the fact in many situations, but that also invites race conditions if labels are used for scheduling and the cluster is in active use.
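
For comparison, the two workarounds available today are sketched below; the label key and node name are examples, and the kubelet-flag approach assumes a kubeadm-managed Debian/Ubuntu node where /etc/default/kubelet is sourced (RHEL uses /etc/sysconfig/kubelet).

# After the fact, with kubectl (racy if scheduling already depends on the label)
kubectl label node node-1 role=worker

# Before the node joins: register it with the label atomically via kubelet
echo 'KUBELET_EXTRA_ARGS=--node-labels=role=worker' | sudo tee /etc/default/kubelet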

node join scripts use the instance IP in HA clusters

When creating a node join script for an HA cluster, the master address should be the DNS name for the Kubernetes API, not the node IP (when one is provided). The script should continue to function even if the original master no longer exists.
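
Concretely, the generated command should look something like the sketch below, where api.example.com is a placeholder for a load-balanced DNS name in front of all control-plane nodes; the elided token values are left elided.

curl -sSL https://kurl.sh/xxx/join.sh | sudo bash -s \
    kubernetes-master-address=api.example.com:6443 \
    kubeadm-token=... \
    kubeadm-token-ca-hash=sha256:... \
    kubernetes-version=1.15.1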
