
hetzner-k3s's People

Contributors

acschm1d, aleksasiriski, compi653, creib, cwilhelm, derlinuxer, easystartup-io, floppy012, funzinator, janosmiko, jpetazzo, khustochka, lloesche, malte-j, mgalesloot, mike667, n3rdc4ptn, pysen, quorak, systeemkabouter, szepeviktor, tunatoksoz, vitobotta


hetzner-k3s's Issues

Ruby installation issues

Hey Vito,

thanks for this amazing project! :)
I wanted to play around with it a bit, but with Ubuntu 20.04 and the standard-repo Ruby I get errors that some packages are too new. Before I start digging into this, since I have exactly 0 Ruby knowledge, I'd like to hear at least something about your Ruby runtime.

I'd be happy to enhance Readme afterwards :)

Kind Regards,
Nico

Mysql for k3s db

Hi, as per the k3s HA installation guide an external DB is needed, since the k3s DB can get quite large.

Did you see any issues with the embedded DB?

chmod on kubeconfig

Thanks for your project, it saved me a lot of time.

I got a WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: ~/kubeconfig., which I fixed using chmod go-r ~/.kube/config. Maybe you could set similar permissions when writing the file.
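A minimal sketch of what that could look like inside the tool, assuming Ruby since the project is written in it (the helper name is made up, not the project's actual code):

# Hypothetical helper: write the kubeconfig with owner-only permissions so
# kubectl stops warning that the file is group-readable. 0o600 is the
# equivalent of running `chmod go-rw` on the file afterwards.
def save_kubeconfig(path, contents)
  File.write(path, contents)
  File.chmod(0o600, path)
end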

cannot connect to server

When I run hetzner-k3s create-cluster --config-file hetzner-cluster.yaml the program finishes in an error state: the connection to the server cannot be established. I'm using an SSH key to authenticate, and the key is already loaded into my keyring. After the creation of the servers, I can connect to them with ssh [email protected].

hetzner-k3s create-cluster --config-file hetzner-cluster.yaml

Firewall already exists, skipping.


Private network already exists, skipping.


SSH key already exists, skipping.


API load balancer already exists, skipping.





Server kubeedge-cpx21-master2 already exists, skipping.

Server kubeedge-cpx21-master1 already exists, skipping.

Server kubeedge-cpx21-master3 already exists, skipping.

Server kubeedge-cpx21-pool-small-worker1 already exists, skipping.


Waiting for server kubeedge-cpx21-pool-small-worker1 to be up...
Waiting for server kubeedge-cpx21-master3 to be up...
Waiting for server kubeedge-cpx21-master1 to be up...
Waiting for server kubeedge-cpx21-master2 to be up...
#<Thread:0x00007f16c80435e0 /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:146 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	11: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:146:in `block (2 levels) in create_resources'
	10: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:448:in `wait_for_ssh'
	 9: from /usr/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	 8: from /usr/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	 7: from /usr/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	 6: from /usr/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	 5: from /usr/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
	 4: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:453:in `block in wait_for_ssh'
	 3: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:453:in `loop'
	 2: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:454:in `block (2 levels) in wait_for_ssh'
	 1: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.3.2/lib/hetzner/k3s/cluster.rb:468:in `ssh'
/var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:268:in `start': Authentication failed for user [email protected] (Net::SSH::AuthenticationFailed)

Hetzner Cloud Console Account got closed down!

I used this create-cluster script (see its output below).

As a result, Hetzner closed our Cloud Console account, because the "script / tool" malfunctioned and made too many requests to the Hetzner API.

They wrote:

"Es scheint so, als ob Sie viele API Requests zum Anlegen von Servern erzeugt haben, wo der Name aber bereits an einen anderen Server vergeben wurde. Können Sie dies einmal überprüfen? Anschließend würden wir den Zugriff auf die API wieder freischalten.

Wir arbeiten bereits an einer Lösung bei uns, sodass solche API Requests keine solchen Auswirkungen mehr hat.
"

Cheers,
rené


hetzner_token: "ea2*********************w"
cluster_name: "zooo-k3s"
kubeconfig_path: "../kubeconfig"
k3s_version: v1.21.3+k3s1
public_ssh_key_path: '/.ssh/id_rsa.pub'
private_ssh_key_path: '
/.ssh/id_rsa'
ssh_allowed_networks:

  • 0.0.0.0/0
    verify_host_key: false
    location: fsn1
    masters:
    instance_type: cpx11
    instance_count: 3
    worker_node_pools:
  • name: small
    instance_type: cpx11
    instance_count: 4
  • name: big
    instance_type: cpx21
    instance_count: 2

OUTPUT:

Creating server zooo-k3s-cpx11-master1...
Creating server zooo-k3s-cpx11-pool-small-worker3...
Creating server zooo-k3s-cpx11-master3...
Creating server zooo-k3s-cpx11-master2...
Creating server zooo-k3s-cpx21-pool-big-worker1...
Creating server zooo-k3s-cpx11-pool-small-worker2...
Creating server zooo-k3s-cpx11-pool-small-worker1...
Creating server zooo-k3s-cpx11-pool-small-worker4...
Creating server zooo-k3s-cpx21-pool-big-worker2...
...server zooo-k3s-cpx11-pool-small-worker1 created.

Error creating server zooo-k3s-cpx11-master1. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:17 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1259", "ratelimit-reset"=>"1637088138", "x-correlation-id"=>"47ca9b2d-a61c-42fc-a263-52bc6b6be110", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx11-pool-small-worker3. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:17 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1258", "ratelimit-reset"=>"1637088139", "x-correlation-id"=>"b37320da-4f11-4309-96f0-086424565fc4", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx21-pool-big-worker2. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:20 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1258", "ratelimit-reset"=>"1637088142", "x-correlation-id"=>"4c983418-1542-4e4e-8793-535dee5b3040", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx11-pool-small-worker4. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:20 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1257", "ratelimit-reset"=>"1637088143", "x-correlation-id"=>"02156b5c-be08-4009-b0bb-7e0f918bb145", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx11-master2. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:22 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1256", "ratelimit-reset"=>"1637088146", "x-correlation-id"=>"eb8df639-e8b9-48e3-8e2e-3ac7ef5dcf07", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx21-pool-big-worker1. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:22 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1255", "ratelimit-reset"=>"1637088147", "x-correlation-id"=>"b450319d-f78f-4fd1-b5e1-d53dcfcb75cc", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx11-pool-small-worker2. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:26 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1253", "ratelimit-reset"=>"1637088153", "x-correlation-id"=>"7fa772e7-8629-49fd-86e6-2109707ba68e", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server zooo-k3s-cpx11-master3. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Tue, 16 Nov 2021 18:03:28 GMT", "Content-Type"=>"application/json", "Content-Length"=>"98", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"1253", "ratelimit-reset"=>"1637088155", "x-correlation-id"=>"09da4401-3890-48f6-8627-ea28f5cebb90", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>

question about annotation

Hi, me again. Yeah, sorry, I've been quite busy with the cluster lately.
I applied the following custom configuration to the ingress nginx:

controller:
  kind: DaemonSet
  service:
    annotations:
      load-balancer.hetzner.cloud/location: nbg1
      load-balancer.hetzner.cloud/name: kluster-ingress-nginx
      load-balancer.hetzner.cloud/use-private-ip: "true"
      load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'

but for some reason my oauth2-proxy service is not working correctly. This page describes basically my issue. Now I see that I'm missing the hostname annotation; this could be related (not sure, but I need to find out):
load-balancer.hetzner.cloud/hostname: <a valid fqdn>

Can you tell me what this should be?
The same as the load-balancer.hetzner.cloud/name of the nginx ingress load balancer (kluster-ingress-nginx)? Or the IP? But an FQDN is only a name, right?
Or should this be the domain name I'm using in cert-manager?
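For reference, a hedged sketch of what the values could look like, assuming (as the hcloud CCM docs describe) that the hostname annotation takes an FQDN whose DNS A record points at the load balancer's public IP, and that the CCM then publishes that hostname in the Service status instead of the IP; the FQDN below is a placeholder:

controller:
  kind: DaemonSet
  service:
    annotations:
      load-balancer.hetzner.cloud/location: nbg1
      load-balancer.hetzner.cloud/name: kluster-ingress-nginx
      load-balancer.hetzner.cloud/use-private-ip: "true"
      load-balancer.hetzner.cloud/uses-proxyprotocol: "true"
      # Placeholder FQDN: create a DNS A record for it pointing at the
      # load balancer's public IP before relying on it.
      load-balancer.hetzner.cloud/hostname: ingress.example.com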

thanks again!

missing documentation on how to specify a target project

I've tried to read the documentation thoroughly, but I could not find where to specify the Hetzner Cloud project in which the cluster will be provisioned, until I noticed that the hetzner_token in the YAML config file is project-specific.

It would be good to have this documented in the README.md.

Looking forward to testing this for the first time! Thank you @vitobotta !

No kubeconfig with docker

Hi, I tried to use your tool by installing it on Ubuntu, but I had problems with old Ruby versions, so I decided to try your Docker image. The problem I'm encountering is that the kubeconfig doesn't get generated for some reason. The output says the k3s service failed: "Job for k3s.service failed because the control process exited with error code." I don't know if that is related. All the servers, a load balancer, and a firewall get created, but I don't know about k3s, since I cannot connect to the cluster without a kubeconfig. Also, I'm running this on Windows inside WSL2, if that makes a difference.

I would appreciate it if you could look into this.

cluster_config.yaml

---
hetzner_token: <TOKEN>
cluster_name: k3s
kubeconfig_path: "./kubeconfig"
k3s_version: v1.21.4+k3s1
ssh_key_path: "/tmp/.ssh/id_rsa.pub"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: nbg1
masters:
  instance_type: cx11
  instance_count: 3
worker_node_pools:
- name: small
  instance_type: cx11
  instance_count: 1

Command and output:

docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.0 create-cluster --config-file /cluster/cluster_config.yaml

Creating firewall...
...firewall created.


Creating private network...
...private network created.


Creating SSH key...
...SSH key created.


Creating API load_balancer...
...API load balancer created.





Creating server k3s-cx11-master2...
Creating server k3s-cx11-master1...
Creating server k3s-cx11-master3...
Creating server k3s-cx11-pool-small-worker1...
...server k3s-cx11-master3 created.

...server k3s-cx11-master1 created.

...server k3s-cx11-master2 created.

...server k3s-cx11-pool-small-worker1 created.


Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master2 to be up...
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
Waiting for server k3s-cx11-master1 to be up...
Waiting for server k3s-cx11-master2 to be up...
...server k3s-cx11-master1 is now up.
...server k3s-cx11-master2 is now up.
Waiting for server k3s-cx11-master3 to be up...
Waiting for server k3s-cx11-pool-small-worker1 to be up...
...server k3s-cx11-master3 is now up.
...server k3s-cx11-pool-small-worker1 is now up.

Deploying k3s to first master (k3s-cx11-master1)...
[INFO]  Using v1.21.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

...k3s has been deployed to first master.

Deploying k3s to master k3s-cx11-master2...

Deploying k3s to master k3s-cx11-master3...
[INFO]  Using v1.21.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/sha256sum-amd64.txt
[INFO]  Using v1.21.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/k3s
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
[INFO]  systemd: Starting k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details.

...k3s has been deployed to master k3s-cx11-master2.

...k3s has been deployed to master k3s-cx11-master3.

Deploying k3s to worker (k3s-cx11-pool-small-worker1)...
[INFO]  Using v1.21.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent

...k3s has been deployed to worker (k3s-cx11-pool-small-worker1).

Deploying Hetzner Cloud Controller Manager...
...Cloud Controller Manager deployed

Deploying Hetzner CSI Driver...
...CSI Driver deployed

Deploying k3s System Upgrade Controller...
...k3s System Upgrade Controller deployed

Little help needed for my https deployment on own domain

Hi,
first of all: sorry for the stupid questions, maybe not quite related to what you made, but I could really use a little help right now.

I have created the cluster (not via the Docker route) and am trying to use my own (sub)domains in my deployments. Over HTTP I get this to work, but unfortunately not over HTTPS.
I also have to say that the documentation on the Hetzner Cloud Controller Manager isn't very good either... I can't find any good instructions on the internet.

You shared an example of your Service annotations:

  service:
    annotations:
      load-balancer.hetzner.cloud/hostname: <a valid fqdn>
      load-balancer.hetzner.cloud/http-redirect-https: 'false'
      load-balancer.hetzner.cloud/location: nbg1
      load-balancer.hetzner.cloud/name: <lb name>
      load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'
      load-balancer.hetzner.cloud/use-private-ip: "true"

but shouldn't these 2 lines be added too?

load-balancer.hetzner.cloud/http-certificates
load-balancer.hetzner.cloud/protocol

But when I add the protocol annotation to the Service, the load balancer crashes in the Hetzner cloud.
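For what it's worth, a hedged sketch of how those two annotations are meant to be combined according to the hcloud CCM README: the certificate has to exist in the Hetzner Cloud project first, and its ID (or name) goes into the http-certificates annotation. All values below are placeholders:

annotations:
  # Terminate TLS at the Hetzner load balancer itself.
  load-balancer.hetzner.cloud/protocol: "https"
  # Comma-separated IDs (or names) of certificates already created in the
  # Hetzner Cloud project; "12345" is a placeholder.
  load-balancer.hetzner.cloud/http-certificates: "12345"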

Anyway, this is what my whoami.yaml deployment file looks like:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: containous/whoami
  selector:
    matchLabels:
      app: whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  labels:
    app: whoami
  annotations:
    load-balancer.hetzner.cloud/hostname: 'whoami.mydomain.com'
    load-balancer.hetzner.cloud/http-redirect-https: 'false'
    load-balancer.hetzner.cloud/location: 'nbg1'
    load-balancer.hetzner.cloud/name: 'a0377f5249b9myidd34a497800858'
    load-balancer.hetzner.cloud/uses-proxyprotocol: 'false'
    load-balancer.hetzner.cloud/use-private-ip: 'true'
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 80
  selector:
    app: whoami

Then at Cloudflare, which manages the DNS, I created an A record pointing mydomain.com to the IP of the load balancer,
and a second A record for whoami.mydomain.com to the same IP. Not sure if both are needed, though.

When I apply the deployment, a load balancer is created and the whoami service becomes available at:

http://whoami.dockerjourney.ovh:443/
and http://loadbalancerip:443

but NOT on the HTTPS port haha... Is Let's Encrypt not included in the Hetzner Cloud Controller Manager?

Maybe something needs to be set manually in the Hetzner cloud? For example, under the load balancer's Networking / Public Network settings you can fill in a Reserved DNS name. But I am not sure...
Or do I need to create a certificate in the Hetzner cloud, and then use the Service annotations?

Thanks in advance for your help, I would really appreciate it

Use of dedicated servers

Hi,
Any plans or guidance for setting up k3s on dedicated servers? I know you are running dynablogger on dedicated servers now 😀

wait until route to host is up

I know this from my Ansible scripts: just retry 3 times and everything runs.

Traceback (most recent call last):
	14: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:144:in `block (2 levels) in create_resources'
	13: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:438:in `wait_for_ssh'
	12: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:438:in `loop'
	11: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:439:in `block in wait_for_ssh'
	10: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:452:in `ssh'
	 9: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `start'
	 8: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `new'
	 7: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:73:in `initialize'
	 6: from /usr/lib/ruby/2.7.0/socket.rb:632:in `tcp'
	 5: from /usr/lib/ruby/2.7.0/socket.rb:227:in `foreach'
	 4: from /usr/lib/ruby/2.7.0/socket.rb:227:in `each'
	 3: from /usr/lib/ruby/2.7.0/socket.rb:642:in `block in tcp'
	 2: from /usr/lib/ruby/2.7.0/socket.rb:137:in `connect'
	 1: from /usr/lib/ruby/2.7.0/socket.rb:64:in `connect_internal'
/usr/lib/ruby/2.7.0/socket.rb:64:in `connect': No route to host - connect(2) for 116.203.69.124:22 (Errno::EHOSTUNREACH)
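A minimal sketch of the retry the reporter suggests, assuming the net-ssh gem that appears in the traceback (the helper name, retry count, and delay are made up):

require "net/ssh"

# Treat "no route to host" like any other not-ready-yet condition:
# retry a few times with a short pause before bubbling the error up.
def ssh_when_routable(host, user, attempts: 3, delay: 5)
  tries = 0
  begin
    Net::SSH.start(host, user) { |ssh| ssh.exec!("echo ok") }
  rescue Errno::EHOSTUNREACH, Errno::ECONNREFUSED
    tries += 1
    raise if tries >= attempts
    sleep delay
    retry
  end
end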

Issues on the execution of main command

Hi there! It looks like it is not able to execute:

I did both a clean attempt and a Docker attempt:

hetzner-k3s create-cluster --config-file=./cluster/test_filled.yaml 

________________________________________________________________________________
| ~/Documents/Code/ubloquity/terraform-k8s-hetzner-DigitalOcean-Federation/hetzner_03 @ jperez-mbp (jperez)
| => hetzner-k3s create-cluster --config-file=./cluster/test_filled.yaml

Firewall already exists, skipping.


Private network already exists, skipping.


Creating SSH key...
...SSH key created.

Traceback (most recent call last):
	10: from /usr/local/bin/hetzner-k3s:23:in `<main>'
	 9: from /usr/local/bin/hetzner-k3s:23:in `load'
	 8: from /Library/Ruby/Gems/2.6.0/gems/hetzner-k3s-0.4.2/exe/hetzner-k3s:4:in `<top (required)>'
	 7: from /Library/Ruby/Gems/2.6.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
	 6: from /Library/Ruby/Gems/2.6.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
	 5: from /Library/Ruby/Gems/2.6.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
	 4: from /Library/Ruby/Gems/2.6.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
	 3: from /Library/Ruby/Gems/2.6.0/gems/hetzner-k3s-0.4.2/lib/hetzner/k3s/cli.rb:28:in `create_cluster'
	 2: from /Library/Ruby/Gems/2.6.0/gems/hetzner-k3s-0.4.2/lib/hetzner/k3s/cluster.rb:39:in `create'
	 1: from /Library/Ruby/Gems/2.6.0/gems/hetzner-k3s-0.4.2/lib/hetzner/k3s/cluster.rb:105:in `create_resources'
/Library/Ruby/Gems/2.6.0/gems/hetzner-k3s-0.4.2/lib/hetzner/infra/ssh_key.rb:26:in `create': undefined method `[]' for nil:NilClass (NoMethodError)
________________________________________________________________________________
| ~/Documents/Code/ubloquity/terraform-k8s-hetzner-DigitalOcean-Federation/hetzner_03 @ jperez-mbp (jperez)

What I am doing first is an envsubst pass:

envsubst < ./cluster/test.yaml | tee ./cluster/test_filled.yaml 

Where I am redacting:

hetzner_token: 0lQ5BEtHPChange_me9YVUOSIiOj8Kt68LNM2bV
cluster_name: test
kubeconfig_path: "./kubeconfig"
k3s_version: v1.21.3+k3s1
public_ssh_key_path: "~/.ssh/id_rsa_no_pass.pub"
private_ssh_key_path: "~/.ssh/id_rsa_no_pass"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: hel1
schedule_workloads_on_masters: false
masters:
  instance_type: cpx21
  instance_count: 3
worker_node_pools:
- name: small
  instance_type: cpx21
  instance_count: 3
- name: big
  instance_type: cpx31
  instance_count: 2

Any ideas? Cheers!

Feature Request: add possibility to install custom packages on worker nodes

When I create a worker node using hetzner-k3s, there are some packages which I have to install manually. The most useful example in my case is nfs-common, which is required for multiple storage classes (e.g. Longhorn PVCs in RWX mode).

Could you make it possible to add an additional_packages list to the config.yaml? These packages could become part of the user-data in the Hetzner install; see the cloud-init sketch after the example config below.

e.g.:

---
hetzner_token: 1234
cluster_name: name
kubeconfig_path: "/cluster/kubeconfig"
k3s_version: v1.21.6+k3s1
public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: nbg1
schedule_workloads_on_masters: false
additional_packages:
  - "nfs-common"
masters:
  instance_type: cpx41
  instance_count: 1
worker_node_pools:
  - name: worker
    instance_type: cpx51
    instance_count: 3
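For illustration, a hedged sketch of the cloud-init user-data such an option could translate to. The packages key is standard cloud-config; the mapping from the proposed additional_packages option is hypothetical:

#cloud-config
# Standard cloud-config: install the listed packages on first boot.
packages:
  - nfs-common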

Timeout when trying to create 9+ servers at once

Hi!
I found that when I try to create 9 servers as per your example, only 8 servers get created and the script times out while "waiting for server to be up" for the 9th (the script also exits before completion).

#<Thread:/home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:146 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:146:in `block (2 levels) in create_resources'
	6: from /home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:448:in `wait_for_ssh'
	5: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:449:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
#<Thread:/home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:146 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:146:in `block (2 levels) in create_resources'
	6: from /home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:448:in `wait_for_ssh'
	5: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /usr/share/rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/home/xx/.rvm/gems/ruby-2.7.2/gems/hetzner-k3s-0.3.7/lib/hetzner/k3s/cluster.rb:449:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)

I tried with 6 servers and it worked well. :)

Improvement: HTTP.follow.get retries

Thanks for the excellent repo!

A little improvement that could save a lot of time for those living in China or other oppressed countries would be to add a retry policy to the HTTP.follow.get calls (or add an option for air-gapped installation).

GitHub calls mostly work, but raw.githubusercontent.com is typically either blocked outright or DNS-poisoned, so we have to run Docker with --network host plus export https_proxy... to get anything downloaded, and even that sometimes requires 10+ reruns of the script.
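A minimal sketch of such a retry policy, assuming the http gem the tool already uses (HTTP::Response appears in its output); the helper name and retry parameters are made up:

require "http"

# Retry transient network failures with a short pause between attempts,
# instead of failing the whole run on the first dropped connection.
def fetch_with_retries(url, attempts: 5, delay: 2)
  tries = 0
  begin
    HTTP.follow.get(url)
  rescue HTTP::ConnectionError, HTTP::TimeoutError
    tries += 1
    raise if tries >= attempts
    sleep delay
    retry
  end
end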

[Windows] Create cluster - No such file or directory

Hello,

the program stops during the Cloud Controller Manager setup. Is there a way to fix it?

Deploying Hetzner Cloud Controller Manager...
Traceback (most recent call last):
        10: from C:/Ruby27-x64/bin/hetzner-k3s:23:in `<main>'
         9: from C:/Ruby27-x64/bin/hetzner-k3s:23:in `load'
         8: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/exe/hetzner-k3s:4:in `<top (required)>'
         7: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
         6: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
         5: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
         4: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
         3: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cli.rb:28:in `create_cluster'
         2: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:46:in `create'
         1: from C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:346:in `deploy_cloud_controller_manager'
C:/Ruby27-x64/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:346:in `write': No such file or directory @ rb_sysopen - /tmp/cloud-controller-manager.yaml (Errno::ENOENT)

Thanks in advance!

Regards
ludgart

Undefined method `[]`

Hi. When I run the command hetzner-k3s create-cluster --config-file cluster_config.yaml I get a few of these errors:

Traceback (most recent call last):
        7: from /home/luna/.rvm/gems/ruby-2.6.0/gems/hetzner-k3s-0.3.5/lib/hetzner/k3s/cluster.rb:146:in `block (2 levels) in create_resources'
        6: from /home/luna/.rvm/gems/ruby-2.6.0/gems/hetzner-k3s-0.3.5/lib/hetzner/k3s/cluster.rb:448:in `wait_for_ssh'
        5: from /home/luna/.rvm/rubies/ruby-2.6.0/lib/ruby/2.6.0/timeout.rb:108:in `timeout'
        4: from /home/luna/.rvm/rubies/ruby-2.6.0/lib/ruby/2.6.0/timeout.rb:33:in `catch'
        3: from /home/luna/.rvm/rubies/ruby-2.6.0/lib/ruby/2.6.0/timeout.rb:33:in `catch'
        2: from /home/luna/.rvm/rubies/ruby-2.6.0/lib/ruby/2.6.0/timeout.rb:33:in `block in catch'
        1: from /home/luna/.rvm/rubies/ruby-2.6.0/lib/ruby/2.6.0/timeout.rb:93:in `block in timeout'
/home/luna/.rvm/gems/ruby-2.6.0/gems/hetzner-k3s-0.3.5/lib/hetzner/k3s/cluster.rb:449:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)

Not sure how to fix this or what to even do. The system I'm running it from is Manjaro. I updated hetzner-k3s to the latest version and still have the issue.

Edit: Fixed the codeblock.

New Master not added to Cluster

Hi

Deleted my problematic master node and reran the create-cluster CLI, but the node was created as a standalone k3s node.

kubectl get nodes lists only the new master.

little help needed

Sorry, I've been very busy recently, but I've got time now 👍
I still have the problem that I can't reach the hello-world app on my own domain; it works well with port-forward. I really can't tell whether it is because of the way the cluster is made (firewall, load balancers, etc.) or whether it is my own fault.
It would mean a lot to me if you could help me!

How I created the cluster:

---
hetzner_token: myapikeyisremoved
cluster_name: kluster
kubeconfig_path: "/cluster/kubeconfig"
k3s_version: v1.22.3+k3s1
public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: nbg1
schedule_workloads_on_masters: false
masters:
  instance_type: cpx21
  instance_count: 3
worker_node_pools:
- name: small
  instance_type: cpx21
  instance_count: 2

I executed the following commands to deploy nginx:

  1. helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  2. helm repo update
  3. helm upgrade --install --namespace ingress-nginx --create-namespace -f C:\kluster\ingress-nginx.yaml ingress-nginx ingress-nginx/ingress-nginx

What the ingress-nginx.yaml looks like:

controller:
  kind: DaemonSet
  service:
    annotations:
      load-balancer.hetzner.cloud/location: nbg1
      load-balancer.hetzner.cloud/name: kluster-ingress-nginx
      load-balancer.hetzner.cloud/use-private-ip: "true"

So I am trying to deploy this hello-world app; it is basically this file:
https://gist.githubusercontent.com/vitobotta/6e73f724c5b94355ec21b9eee6f626f1/raw/3036d4c4283a08ab82b99fffea8df3dded1d1f78/deployment.yaml
One thing changed though:

spec:
  rules:
  - host: mydomain.com

Everything is running well; when I port-forward, the pod gives me the Hello-World page.
But I can't get this to work on my own domain...

When I describe the hello-world service it shows me 10.43.229.204. That's an internal IP, I think, right?

PS C:\kluster> kubectl describe service hello-world -n ingress-nginx
Name:              hello-world
Namespace:         ingress-nginx
Labels:            <none>
Annotations:       <none>
Selector:          app=hello-world
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.43.229.204
IPs:               10.43.229.204
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.4.7:80
Session Affinity:  None
Events:            <none>

When I describe the ingress used by hello-world it gives me Host mydomain.com, so this seems to be OK, right?

PS C:\kluster> kubectl describe ingress hello-world -n ingress-nginx
Name:             hello-world
Namespace:        ingress-nginx
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  mydomain.com  
              /   hello-world:80 (10.244.4.7:80)
Annotations:  <none>
Events:       <none>

When I execute this one, it gives me the external IP of the load balancer (162.55.152.65).
This is the IP that I need to use in the DNS settings, right?

PS C:\kluster> kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP                                   PORT(S)                      AGE
hello-world                          ClusterIP      10.43.229.204   <none>                                        80/TCP                       15m
ingress-nginx-controller             LoadBalancer   10.43.131.110   10.0.0.8,162.55.152.65,2a01:4f8:1c1d:201::1   80:30584/TCP,443:31524/TCP   22m
ingress-nginx-controller-admission   ClusterIP      10.43.59.144    <none>                                        443/TCP                      22m

This is how my DNS settings look, so basically the same IP as the public load balancer (ingress-nginx-controller):

A | mydomain.com | 162.55.152.65
A | www | 162.55.152.65

But when I go to mydomain.com it gives me an error; nothing is there to see...
Is the IP I'm trying to use in the DNS settings correct?
Or do I need to add some annotations?
thanks in advance

Originally posted by @jboesh in #30 (comment)

rancher

Hi Vito

Short question: do you run Rancher on your own cluster?
I followed your instructions (here #13) to install it, but can't get it to work properly.

I get a 404 error

Is there anything missing from the instructions? Which helm values are you using?
I don't think it has anything to do with creating the cluster. If you have some time for it, I'll be very grateful!

Status of the deployment seems to be OK:

PS D:\kluster> kubectl -n cattle-system rollout status deploy/rancher
deployment "rancher" successfully rolled out

the deployments:

PS D:\kluster> kubectl get deployments -n cattle-system

NAME              READY   UP-TO-DATE   AVAILABLE   AGE
rancher           3/3     3            3           10m
rancher-webhook   1/1     1            1           9m13s

the ingress:

PS D:\kluster> kubectl get ingress -n cattle-system

NAME                        CLASS    HOSTS                ADDRESS   PORTS     AGE
cm-acme-http-solver-bwjcq   <none>   rancher.mydomain.com             80        10m
rancher                     <none>   rancher.mydomain.com             80, 443   10m

certificate and ingress seems to be OK:

PS D:\kluster> kubectl describe ingress -n cattle-system

Name:             cm-acme-http-solver-bwjcq
Namespace:        cattle-system
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                Path  Backends
  ----                ----  --------
  rancher.mydomain.com
                      /.well-known/acme-challenge/YrJZvovWuc7sAMX0UDoDU5iCKRix3nsIOE6dBndGlvU   cm-acme-http-solver-gk64f:8089 (10.244.2.223:8089)
Annotations:          nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0
Events:               <none>


Name:             rancher
Namespace:        cattle-system
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  tls-rancher-ingress terminates rancher.mydomain.com
Rules:
  Host                Path  Backends
  ----                ----  --------
  rancher.mydomain.com  
                         rancher:80 (10.244.1.245:80,10.244.2.221:80,10.244.2.222:80)
Annotations:          cert-manager.io/issuer: rancher
                      cert-manager.io/issuer-kind: Issuer
                      meta.helm.sh/release-name: rancher
                      meta.helm.sh/release-namespace: cattle-system
                      nginx.ingress.kubernetes.io/proxy-connect-timeout: 30
                      nginx.ingress.kubernetes.io/proxy-read-timeout: 1800
                      nginx.ingress.kubernetes.io/proxy-send-timeout: 1800
Events:
  Type    Reason             Age   From          Message
  ----    ------             ----  ----          -------
  Normal  CreateCertificate  11m   cert-manager  Successfully created Certificate "tls-rancher-ingress"

Certificate:

PS D:\kluster> kubectl get certificate -n cattle-system

NAME                  READY   SECRET                AGE
tls-rancher-ingress   True    tls-rancher-ingress   12m

All seems to be OK, so I don't get it.

Allow customizing the image used

It would be nice to be able to use a custom image, e.g. a snapshot.

I installed Longhorn on my cluster and wanted to use an RWX PVC. NFS support must be installed on the system. I would like to install it once, create a snapshot, and use that as the base for the nodes.
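A hypothetical sketch of what such a config option could look like (the image key does not exist in the tool today, and the snapshot ID is a placeholder):

# Hypothetical option: use a prepared snapshot instead of the default image.
image: "12345678"  # placeholder snapshot ID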

Load balancer annotations.

Edit: Made some parts easier to read.

This isn't an issue with this script or anything; it seems that I'm just very stupid 😓. I don't know where else to ask except here, since this script is the easier starting point.

So, I give up lol. I've looked through so many docs and guides and whatnot, but I'm so confused about how I'm supposed to get Rancher to work. I mean, Rancher and cert-manager boot up and the pods get ready, but the whole load balancer part confuses the hell out of me: where I would place those annotations and so on.

I thought I made progress when I found the load balancer README of the hcloud CCM, but even that confuses me, and I'm not sure how I'm supposed to set it all up starting from this script.
I do feel like I still made some progress, especially after using Lens, as that made it a bit easier to have a look at everything going on in the cluster.

I assume the hostname would be load.example.com, with the DNS set up on Cloudflare as a type A record where load.example.com points to the load balancer's public IP. I might be wrong about even that.

So basically, I would be forever grateful for some help and guidance with this. Because either I'm missing something in all the docs and guides I've been reading, or I can't read, or I'm simply just stupid lol.

Pass API Token as Environment Variable

Hey, thank you for this awesome tool, it saved me a lot of headaches :)

I'd like to run the tool from CI to create multiple clusters. Is there any way to pass the API token as an environment variable (HCLOUD_TOKEN) instead of putting it in the config? Something like the sketch below.
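A minimal sketch of the fallback being asked for, in the project's Ruby (nothing here is the tool's actual behaviour):

require "yaml"

# Prefer HCLOUD_TOKEN from the environment; fall back to the config file.
config = YAML.load_file("cluster_config.yaml")
hetzner_token = ENV.fetch("HCLOUD_TOKEN") { config["hetzner_token"] }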

Cheers

block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)

On the first run I got these exceptions. On the second run the script installed Kubernetes as expected. Maybe increase the timeout?

#<Thread:0x000000011009bd00 /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:162 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:162:in `block (2 levels) in create_resources'
	6: from /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:454:in `wait_for_ssh'
	5: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:455:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
#<Thread:0x000000011009bc10 /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:162 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:162:in `block (2 levels) in create_resources'
	6: from /Users/xxx/.rvm/gems/ruby-2.7.3/gems/hetzner-k3s-0.4.5/lib/hetzner/k3s/cluster.rb:454:in `wait_for_ssh'
	5: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /Users/xxx/.rvm/rubies/ruby-2.7.3/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'

Having problems with NodePort

Hi there, loving this tool!

I'm having some problems getting NodePort services to work. For some reason, even if I disable the automatically created firewalls, attempting to access a service through a NodePort doesn't seem to work. Do you have any suggestions?

cluster_config.yaml.example - needs update

I copied the cluster_config.yaml.example to my personal k3s-hetzner.yaml

it took me some time to figure out that

  • ssh_key_path is now public_ssh_key_path
  • private_ssh_key_path is missing
  • ssh_allowed_networks is missing

cluster_config.yaml.example does not match the readme section "Creating a cluster". See the key names below.
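For reference, the SSH-related keys a current config uses (taken verbatim from the working configs quoted elsewhere in this tracker):

public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
ssh_allowed_networks:
  - 0.0.0.0/0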

issues creating the cluster

Hi again. Well, I am giving it another try, as I am following the project, and it seems I am stuck on something here. While I can see the resources created on the cloud side, it seems like something doesn't follow along...

jc@infra-0:~$ hetzner-k3s create-cluster --config-file cluster_config.yaml

Placement group already exists, skipping.
Firewall already exists, skipping.
Private network already exists, skipping.
SSH key already exists, skipping.
API load balancer already exists, skipping.


Creating server k3s-wireguard-cpx21-master1...
Creating server k3s-wireguard-cpx21-pool-small-worker2...
Creating server k3s-wireguard-cpx21-pool-small-worker3...
Creating server k3s-wireguard-cpx21-master3...
Creating server k3s-wireguard-cpx21-pool-small-worker1...
Creating server k3s-wireguard-cpx21-master2...
...server k3s-wireguard-cpx21-pool-small-worker3 created.

Error creating server k3s-wireguard-cpx21-master2. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Wed, 01 Dec 2021 18:28:58 GMT", "Content-Type"=>"application/json", "Content-Length"=>"202", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"3583", "ratelimit-reset"=>"1638383354", "x-correlation-id"=>"74604510-3a34-4d9b-ad6f-815e3ece5f4a", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
Error creating server k3s-wireguard-cpx21-pool-small-worker1. Response details below:

#<HTTP::Response/1.1 403 Forbidden {"Date"=>"Wed, 01 Dec 2021 18:28:58 GMT", "Content-Type"=>"application/json", "Content-Length"=>"202", "Connection"=>"close", "ratelimit-limit"=>"3600", "ratelimit-remaining"=>"3584", "ratelimit-reset"=>"1638383353", "x-correlation-id"=>"0a07d11e-a88e-4e57-876f-ad7722144ce2", "Strict-Transport-Security"=>"max-age=15724800; includeSubDomains", "Access-Control-Allow-Origin"=>"*", "Access-Control-Allow-Credentials"=>"true"}>
...server k3s-wireguard-cpx21-pool-small-worker2 created.

...server k3s-wireguard-cpx21-master3 created.

...server k3s-wireguard-cpx21-master1 created.


Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
#<Thread:0x00007ff16c059cf8 /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:168 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:168:in `block (2 levels) in create_resources'
	6: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:460:in `wait_for_ssh'
	5: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:461:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
#<Thread:0x00007ff16c059c08 /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:168 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:168:in `block (2 levels) in create_resources'
	6: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:460:in `wait_for_ssh'
	5: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:461:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master1 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
...server k3s-wireguard-cpx21-master1 is now up.
Waiting for server k3s-wireguard-cpx21-pool-small-worker2 to be up...
Waiting for server k3s-wireguard-cpx21-master3 to be up...
Waiting for server k3s-wireguard-cpx21-pool-small-worker3 to be up...
...server k3s-wireguard-cpx21-pool-small-worker3 is now up.
Traceback (most recent call last):
	7: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:168:in `block (2 levels) in create_resources'
	6: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:460:in `wait_for_ssh'
	5: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /home/jc/.rbenv/versions/2.7.5/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/home/jc/.rbenv/versions/2.7.5/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:461:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
jc@infra-0:~$

My configuration looks like:

cat << 'EOF' > cluster_config.yaml
---
hetzner_token: "2qSTPIXRedactedY7l7hQ4L"
cluster_name: k3s-wireguard
kubeconfig_path: "./kubeconfig"
k3s_version: v1.21.3+k3s1
public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: hel1
schedule_workloads_on_masters: false
masters:
  instance_type: cpx21
  instance_count: 3
worker_node_pools:
- name: small
  instance_type: cpx21
  instance_count: 3
EOF

My env looks like:

jc@infra-0:~$ docker --version
Docker version 20.10.11, build dea9396
jc@infra-0:~$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
jc@infra-0:~$ ruby -v
ruby 2.7.5p203 (2021-11-24 revision f69aeb8314) [x86_64-linux]
jc@infra-0:~$

So, as you can see, I can connect over SSH:

jc@infra-0:~$ ssh -i ~/.ssh/id_rsa [email protected]
Welcome to Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-90-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed 01 Dec 2021 06:38:15 PM UTC

  System load:             0.04
  Usage of /:              2.1% of 74.82GB
  Memory usage:            5%
  Swap usage:              0%
  Processes:               150
  Users logged in:         0
  IPv4 address for enp7s0: 10.0.0.6
  IPv4 address for eth0:   95.217.21.109
  IPv6 address for eth0:   2a01:4f9:c010:3690::1


5 updates can be applied immediately.
4 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable


Last login: Wed Dec  1 18:37:34 2021 from 95.111.222.111
root@k3s-wireguard-cpx21-pool-small-worker2:~#

Thanks!

ArgumentError when deploying Cloud Controller Manager

For some reason, my cluster creations have recently started failing at the Cloud Controller Manager step.

...snip...
...k3s has been deployed to worker (test-cpx21-pool-small-worker1).

Deploying Hetzner Cloud Controller Manager...
/var/lib/gems/2.7.0/gems/dry-types-0.13.4/lib/dry/types/hash/schema.rb:151: warning: Capturing the given block using Proc.new is deprecated; use `&block` instead
Traceback (most recent call last):
        13: from /usr/local/bin/hetzner-k3s:23:in `<main>'
        12: from /usr/local/bin/hetzner-k3s:23:in `load'
        11: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.4.0/exe/hetzner-k3s:4:in `<top (required)>'
        10: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
         9: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
         8: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
         7: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
         6: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cli.rb:28:in `create_cluster'
         5: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:42:in `create'
         4: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:320:in `deploy_cloud_controller_manager'
         3: from /var/lib/gems/2.7.0/gems/k8s-ruby-0.10.5/lib/k8s/resource.rb:38:in `from_files'
         2: from /var/lib/gems/2.7.0/gems/yaml-safe_load_stream-0.1.1/lib/yaml/safe_load_stream.rb:56:in `safe_load_stream'
         1: from /var/lib/gems/2.7.0/gems/yaml-safe_load_stream-0.1.1/lib/yaml/safe_load_stream.rb:16:in `safe_load_stream'
/var/lib/gems/2.7.0/gems/psych-4.0.1/lib/psych.rb:452:in `parse_stream': wrong number of arguments (given 2, expected 1) (ArgumentError)
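
If it helps: Psych 4 (pulled in by newer Ruby setups) changed the YAML parsing API that this gem version calls through yaml-safe_load_stream, which matches the `parse_stream': wrong number of arguments` error above. A possible workaround, assuming the locally installed gems are the culprit, is to run the pinned Docker image instead, as other reports in this thread do (the v0.4.4 tag is confirmed to exist elsewhere in this thread):

# run the release image so the gem versions are the ones it was built with,
# sidestepping the locally installed psych 4
docker run --rm -it \
  -v ${PWD}:/cluster \
  -v ${HOME}/.ssh:/tmp/.ssh \
  vitobotta/hetzner-k3s:v0.4.4 create-cluster --config-file /cluster/cluster.yaml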

Further Access Restrictions using the firewall

Hey,
the tool creates a load balancer that forwards 6443 to the internal network.
As far as I understand, this makes the firewall rule that allows access to 6443 obsolete; it should be removed so that the nodes cannot be accessed directly.

Furthermore, I think it would be helpful to have an option to restrict SSH access to specific IPs (e.g. a company network) to further harden the cluster, as sketched below.
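
A minimal sketch of what that could look like in the config file, reusing the ssh_allowed_networks key that appears in other configs in this thread (the CIDR is a made-up example):

# allow SSH only from a company network instead of 0.0.0.0/0;
# 203.0.113.0/24 is a placeholder, replace with your own ranges
ssh_allowed_networks:
  - 203.0.113.0/24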

Api-Platform integration issues

I procrastinated for a long time before creating this thread; I know it's not a mistake on your part, but I don't know where else to ask.

I have created a cluster with your tool and everything works so far. I use API Platform and am now trying to integrate a load balancer. My expected result is that a load balancer is created with my worker nodes as targets.

In the values.yaml I have tried inserting the following in various places, always under annotations.

    load-balancer.hetzner.cloud/name: cluster-name-ingress-nginx
    load-balancer.hetzner.cloud/use-private-ip: "true"

among others, the following:

ingress:
  enabled: true
  annotations:
    load-balancer.hetzner.cloud/hostname: "random-hostname.com"
    load-balancer.hetzner.cloud/http-redirect-https: 'false'
    load-balancer.hetzner.cloud/location: nbg1
    load-balancer.hetzner.cloud/health-check-port: "30787"
    #load-balancer.hetzner.cloud/name: "backend-lb-prod"
    load-balancer.hetzner.cloud/uses-proxyprotocol: 'true' # load-balancer.hetzner.cloud/name: 'backend-lb-prod'
    load-balancer.hetzner.cloud/use-private-ip: 'true'
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
  hosts:
    - host: random-hostname.com
      paths:
        - path: "/"
          pathType: "Prefix"
          backend:
            service:
              name: "test"
              port:
                number: 80
  tls: []

Since that didn't work, I also tried writing the whole thing directly into the ingress.yaml, under the keys explained in the instructions.

After deployment, no load balancer is created.

Is there anyone here who can help me? I would be very grateful; I've been dealing with this for over a week now and can't get any further. I think other API Platform users would appreciate it too.

EDIT:
If I run
kubectl describe ingress main-api-platform

I get:

Name:             main-api-platform
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  random-hostname.com  
                       /   main-api-platform:80 (10.244.2.20:80)
Annotations:           kubernetes.io/ingress.class: nginx
                       kubernetes.io/tls-acme: true
                       load-balancer.hetzner.cloud/health-check-port: 30787
                       load-balancer.hetzner.cloud/hostname: random-hostname.com
                       load-balancer.hetzner.cloud/http-redirect-https: false
                       load-balancer.hetzner.cloud/location: nbg1
                       load-balancer.hetzner.cloud/use-private-ip: true
                       load-balancer.hetzner.cloud/uses-proxyprotocol: true
                       meta.helm.sh/release-name: main
                       meta.helm.sh/release-namespace: default
Events:                <none>

Don't know if this info helps, but it seems the annotations are being picked up... I guess in the wrong place? (See the sketch below.)
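
For what it's worth, the Hetzner Cloud Controller Manager provisions load balancers for Service objects of type LoadBalancer, not for Ingress resources, so the load-balancer.hetzner.cloud/* annotations would normally go on the ingress controller's Service rather than on the Ingress. A minimal sketch under that assumption (name, namespace and selector are illustrative, not taken from any chart):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # illustrative name
  namespace: ingress-nginx         # illustrative namespace
  annotations:
    load-balancer.hetzner.cloud/location: nbg1
    load-balancer.hetzner.cloud/use-private-ip: "true"
spec:
  type: LoadBalancer               # this is what triggers LB creation
  selector:
    app.kubernetes.io/name: ingress-nginx   # illustrative selector
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443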

Unable to create cluster

Hello,

I tried to create a cluster via Docker (docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.4 create-cluster --config-file /cluster/cluster.yaml).

My config looks like this:

---
hetzner_token: xxx
cluster_name: test
kubeconfig_path: "./kubeconfig"
k3s_version: v1.21.3+k3s1
public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/id_rsa"
ssh_allowed_networks:
  - 0.0.0.0/0
verify_host_key: false
location: nbg1
schedule_workloads_on_masters: false
masters:
  instance_type: cx11
  instance_count: 1
worker_node_pools:
- name: small
  instance_type: cx21
  instance_count: 1

But it gave me this error:


Placement group already exists, skipping.


Firewall already exists, skipping.


Private network already exists, skipping.


SSH key already exists, skipping.



Creating server test-cx11-master1...
Creating server test-cx21-pool-small-worker1...
...server test-cx11-master1 created.

...server test-cx21-pool-small-worker1 created.


#<Thread:0x000000400a44ec98 /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:162 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:162:in `block (2 levels) in create_resources'
	6: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:454:in `wait_for_ssh'
	5: from /usr/local/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /usr/local/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:455:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
#<Thread:0x000000400a44eba8 /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:162 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	7: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:162:in `block (2 levels) in create_resources'
	6: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:454:in `wait_for_ssh'
	5: from /usr/local/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /usr/local/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:455:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)
Traceback (most recent call last):
	7: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:162:in `block (2 levels) in create_resources'
	6: from /usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:454:in `wait_for_ssh'
	5: from /usr/local/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	4: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	3: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	2: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	1: from /usr/local/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
/usr/local/bundle/gems/hetzner-k3s-0.4.4/lib/hetzner/k3s/cluster.rb:455:in `block in wait_for_ssh': undefined method `[]' for nil:NilClass (NoMethodError)

host key mismatch

I'm pretty sure the reason is that I had this IP address in the past.

#<Thread:0x0000558fa7638238 /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:144 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	22: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:144:in `block (2 levels) in create_resources'
	21: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:438:in `wait_for_ssh'
	20: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:438:in `loop'
	19: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:439:in `block in wait_for_ssh'
	18: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:452:in `ssh'
	17: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `start'
	16: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `new'
	15: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:90:in `initialize'
	14: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:223:in `wait'
	13: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:223:in `loop'
	12: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:225:in `block in wait'
	11: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:190:in `poll_message'
	10: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:190:in `loop'
	 9: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:210:in `block in poll_message'
	 8: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:184:in `accept_kexinit'
	 7: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:245:in `proceed!'
	 6: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:445:in `exchange_keys'
	 5: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/kex/abstract.rb:49:in `exchange_keys'
	 4: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/transport/kex/abstract.rb:77:in `verify_server_key'
	 3: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/accept_new_or_local_tunnel.rb:17:in `verify'
	 2: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/accept_new.rb:18:in `verify'
	 1: from /var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/always.rb:32:in `verify'
/var/lib/gems/2.7.0/gems/net-ssh-6.1.0/lib/net/ssh/verifiers/always.rb:50:in `process_cache_miss': fingerprint SHA256:67JzxGepeDTloRSotU9vlZ7OuucQBji3F5Qw7otu6xU does not match for "116.203.224.30" (Net::SSH::HostKeyMismatch)
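Since cloud IPs get recycled, the stale known_hosts entry can be removed before re-running the tool; ssh-keygen has a flag for exactly this:

# drop every known_hosts entry for the recycled address
ssh-keygen -R 116.203.224.30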

Stuck on Pending and CrashLoopBackOff

More issues, haha.
I followed the guide, and I showed my yaml in the other issue I opened.
When I run the command to get all pods in all namespaces, this is the result:

NAMESPACE        NAME                                              READY   STATUS             RESTARTS   AGE
kube-system      coredns-7448499f4d-5zwgx                          0/1     Pending            0          18m
kube-system      hcloud-cloud-controller-manager-9546b6cc6-8wgrs   1/1     Running            0          17m
kube-system      hcloud-csi-controller-0                           0/5     Pending            0          17m
kube-system      hcloud-csi-node-bb5pw                             2/3     CrashLoopBackOff   9          17m
kube-system      hcloud-csi-node-bgqfx                             2/3     CrashLoopBackOff   9          17m
kube-system      hcloud-csi-node-ht5d7                             2/3     CrashLoopBackOff   9          17m
kube-system      hcloud-csi-node-nzbw4                             2/3     CrashLoopBackOff   9          17m
kube-system      hcloud-csi-node-vhlkg                             2/3     CrashLoopBackOff   9          17m
kube-system      hcloud-csi-node-znmzx                             2/3     CrashLoopBackOff   9          17m
system-upgrade   system-upgrade-controller-677965cc4d-cdrvp        0/1     Pending            0          17m

It stays like that, and it's the same if I install cert-manager: it just stays pending. The output in the code block above is from a newly created cluster; the very first one I created after fixing the last issue has been like this since it was created.

When I run the command to describe the pods, this is the message most of them show (give or take small differences such as the ready counts):

Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  5m38s  default-scheduler  0/6 nodes are available: 6 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  5m36s  default-scheduler  0/6 nodes are available: 6 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.

Not really sure how to fix this.
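
One hedged way to debug this: the node.cloudprovider.kubernetes.io/uninitialized taint is normally removed by the cloud controller manager once it has recognized each node, so its logs are the first place to look (assuming the standard Deployment name behind the pod shown in the list above):

# logs of the component responsible for removing the taint
kubectl -n kube-system logs deployment/hcloud-cloud-controller-manager

# show which taints are still present on each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'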

Another question: since I've seen k3s set up with an external database, is that something I still need to set up when using this tool?
I'm still fairly new to all of this 😅

delete-cluster deletes all resources in hetzner-cloud project!

hetzner-k3s delete-cluster --config-file cluster_config.yam

expected: Should delete only the resources which were previously created with "create-cluster"

actual: Deletes all resources in the selected Hetzner Cloud project! Backups included!

I have to say: I didn't like that! Of course I forgot to protect another resource I had created just before, but I did not expect that deletion, especially since it hadn't occurred in the predecessor: https://github.com/vitobotta/hetzner-cloud-k3s
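
One mitigation in the meantime (an assumption, using the official hcloud CLI rather than anything this project provides) is to enable Hetzner's delete protection on resources that must survive:

# delete protection has to be lifted explicitly before the API can remove these
hcloud server enable-protection my-other-server delete
hcloud volume enable-protection my-important-volume delete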

NetworkUnavailable - Could not create route

Hi,

Thank you so much for your work!

After I create the k3s cluster, I get on every node the message that the network is unavailable:
(combined from similar events): Could not create route ddb43c79-3777-40bb-9a53-7d75689f101f 10.244.0.0/24 for node agirancher-cpx21-master1 after 2.331535684s: hcloud/CreateRoute: hcops/AllServersCache.ByName: agirancher-cpx21-master1 hcops/AllServersCache.getCache: not found

Do you have any idea what the problem could be?

Thanks,
Basti
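
One thing worth checking, based purely on the error text (hcops/AllServersCache.ByName ... not found suggests a name lookup failure): the CCM resolves routes by server name, so the Kubernetes node names need to match the Hetzner server names exactly. A quick comparison, assuming the official hcloud CLI is installed:

kubectl get nodes -o name
hcloud server list -o columns=name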

username and password to access the server (for the volume). I need to edit a file in a volume

Hi!

I need to edit some values in a config file which is saved on a volume (on a specific Hetzner cloud server).
What is the way to do this?
When I log on to Hetzner Cloud, go to Volumes, and click on the cloud server and then 'Console', it asks for a username and password.
These are not the Hetzner Cloud credentials.
(screenshot: console login prompt)

A reply from their support:

As mentioned the login for a cloud server is always "root".

If you created the ssh key after creating the server it is not deposited on the remote system. This means you have to reset the password of your server.

If you have created the server and selected the ssh key during initial setup you can login without a password.

Does your script generate the SSH key before or after creating the server?
Anyway, when I try various combinations (username root, with and without a password), none of them work.

What am I doing wrong?

Can I reset the password without any problems? The server should just keep working as part of the cluster. (A password-free approach is sketched below.)
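
Since the tool registers your public SSH key before creating the servers (the cluster outputs in this thread show the SSH key step happening before server creation), no password should be needed at all; plain SSH as root works. A sketch of editing a file on a mounted volume that way (the IP is a placeholder, and the HC_Volume naming is how Hetzner volumes usually appear, so treat it as an assumption):

ssh -i ~/.ssh/id_rsa root@<node-public-ip>

# once on the node, find where the CSI driver mounted the volume
mount | grep HC_Volume
# then edit the config file at the mount point shown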

Create cluster with docker - fails because of an error

Hi!

I'm trying to create a cluster with the docker command, but no matter what I try it doesn't work.
Do you have an example that doesn't use ${PWD} and ${HOME}, i.e. one with full paths?
Somehow those aren't recognized on my Win10 laptop (I'm using Docker Desktop).

Folder structure:

C:\Users\John\Downloads\vitobotta\cluster.yaml
C:\Users\John\Downloads\vitobotta\.ssh

What I tried:

C:\Users\John\Downloads\vitobotta>docker run --rm -it -v C:\Users\John\Downloads\vitobotta:/cluster -v C:\Users\John\Downloads\vitobotta.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.3 create-cluster --config-file cluster.yaml

It gives me this error message every time:

chmod: /root/.ssh/*: No such file or directory
chmod: /root/.ssh/*.pub: No such file or directory
Please specify a correct path for the config file.
Traceback (most recent call last):
        8: from /usr/local/bundle/bin/hetzner-k3s:23:in `<main>'
        7: from /usr/local/bundle/bin/hetzner-k3s:23:in `load'
        6: from /usr/local/bundle/gems/hetzner-k3s-0.4.3/exe/hetzner-k3s:4:in `<top (required)>'
        5: from /usr/local/bundle/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
        4: from /usr/local/bundle/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
        3: from /usr/local/bundle/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
        2: from /usr/local/bundle/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
        1: from /usr/local/bundle/gems/hetzner-k3s-0.4.3/lib/hetzner/k3s/cli.rb:28:in `create_cluster'
/usr/local/bundle/gems/hetzner-k3s-0.4.3/lib/hetzner/k3s/cli.rb:355:in `find_hetzner_token': undefined method `dig' for nil:NilClass (NoMethodError)

When I execute this one:
docker run --rm -it -v ${PWD}:/cluster -v ${HOME}/.ssh:/tmp/.ssh vitobotta/hetzner-k3s:v0.4.3 create-cluster --config-file cluster.yaml
it gives me:

docker: Error response from daemon: create ${PWD}: "${PWD}" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.

That's why I've tried to change the two volumes (PWD and HOME). I can change the PWD volume without getting an error, but the HOME volume keeps giving me errors. (A full-path example is sketched below.)

Thanks!
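
A sketch using absolute Windows paths in cmd.exe. Note that the second mount in the failing command above reads vitobotta.ssh and is missing a backslash before .ssh; also, the keys Docker should see are presumably the ones in the user profile's .ssh directory, and --config-file must point at the path inside the container (both of these are assumptions based on the documented Docker usage elsewhere in this thread):

docker run --rm -it ^
  -v C:\Users\John\Downloads\vitobotta:/cluster ^
  -v C:\Users\John\.ssh:/tmp/.ssh ^
  vitobotta/hetzner-k3s:v0.4.3 create-cluster --config-file /cluster/cluster.yaml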

Error on existing ssh key

Creating SSH key...
...SSH key created.

Traceback (most recent call last):
	10: from /usr/local/bin/hetzner-k3s:23:in `<main>'
	 9: from /usr/local/bin/hetzner-k3s:23:in `load'
	 8: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/exe/hetzner-k3s:4:in `<top (required)>'
	 7: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
	 6: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
	 5: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
	 4: from /var/lib/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
	 3: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cli.rb:20:in `create_cluster'
	 2: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:34:in `create'
	 1: from /var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/k3s/cluster.rb:92:in `create_resources'
/var/lib/gems/2.7.0/gems/hetzner-k3s-0.2.0/lib/hetzner/infra/ssh_key.rb:26:in `create': undefined method `[]' for nil:NilClass (NoMethodError)

It works after I deleted my SSH key in the Hetzner GUI.
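
For reference, the same cleanup can be done without the GUI, assuming the official hcloud CLI is installed:

# list keys, then delete the stale one by name or id
hcloud ssh-key list
hcloud ssh-key delete <key-name-or-id>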

hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:279:in `deploy_kubernetes': undefined method `[]' for nil:NilClass (NoMethodError)

I seem to have an issue with this ruby gem:

hetzner-k3s create-cluster --config-file cluster_config.yaml

Placement group already exists, skipping.


Creating firewall...
...firewall created.


Creating private network...
...private network created.


SSH key already exists, skipping.




Creating server kubernetes-cx11-pool-small-worker1...
Creating server kubernetes-cx21-pool-big-worker1...
Creating server kubernetes-cpx31-master1...
...server kubernetes-cx21-pool-big-worker1 created.

...server kubernetes-cx11-pool-small-worker1 created.

...server kubernetes-cpx31-master1 created.


Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
Waiting for server kubernetes-cpx31-master1 to be up...
Waiting for server kubernetes-cx21-pool-big-worker1 to be up...
...server kubernetes-cx21-pool-big-worker1 is now up.
...server kubernetes-cpx31-master1 is now up.
Waiting for server kubernetes-cx11-pool-small-worker1 to be up...
...server kubernetes-cx11-pool-small-worker1 is now up.

Traceback (most recent call last):
        9: from /root/.rbenv/versions/2.7.4/bin/hetzner-k3s:23:in `<main>'
        8: from /root/.rbenv/versions/2.7.4/bin/hetzner-k3s:23:in `load'
        7: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/exe/hetzner-k3s:4:in `<top (required)>'
        6: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
        5: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
        4: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
        3: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
        2: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cli.rb:28:in `create_cluster'
        1: from /root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:42:in `create'
/root/.rbenv/versions/2.7.4/lib/ruby/gems/2.7.0/gems/hetzner-k3s-0.4.8/lib/hetzner/k3s/cluster.rb:279:in `deploy_kubernetes': undefined method `[]' for nil:NilClass (NoMethodError)

invalid byte sequence in UTF-8 (ArgumentError)

Hi @vitobotta,

I tried it out and all resources are created (load balancer, servers, network), but after that it crashes with the following error:

Traceback (most recent call last):
	28: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:147:in `block (2 levels) in create_resources'
	27: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:425:in `wait_for_ssh'
	26: from /usr/local/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
	25: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	24: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `catch'
	23: from /usr/local/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
	22: from /usr/local/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
	21: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:430:in `block in wait_for_ssh'
	20: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:430:in `loop'
	19: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:431:in `block (2 levels) in wait_for_ssh'
	18: from /usr/local/bundle/gems/hetzner-k3s-0.4.0/lib/hetzner/k3s/cluster.rb:445:in `ssh'
	17: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `start'
	16: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh.rb:251:in `new'
	15: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:88:in `initialize'
	14: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:88:in `new'
	13: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:153:in `initialize'
	12: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/transport/algorithms.rb:277:in `prepare_preferred_algorithms!'
	11: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/transport/session.rb:98:in `host_keys'
	10: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:55:in `search_for'
	 9: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:61:in `search_in'
	 8: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:61:in `flat_map'
	 7: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:61:in `each'
	 6: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:61:in `block in search_in'
	 5: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:131:in `keys_for'
	 4: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:131:in `open'
	 3: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:132:in `block in keys_for'
	 2: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:132:in `each_line'
	 1: from /usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:133:in `block (2 levels) in keys_for'
/usr/local/bundle/gems/net-ssh-6.1.0/lib/net/ssh/known_hosts.rb:133:in `split': invalid byte sequence in UTF-8 (ArgumentError)

I have tried both the gem installation (rvm with Ruby 2.7.4) and the Docker image. I am on macOS (M1).
I also checked the config file: YAML lint and UTF-8 encoding. Any idea what might cause this?
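
The trace points at ~/.ssh/known_hosts: net-ssh iterates over every line of that file (the keys_for/each_line frames above), and a single non-UTF-8 entry raises exactly this error. One hedged way to find the offending line (GNU grep in a UTF-8 locale; lines that fail to match '.*' contain invalid byte sequences):

grep -naxv '.*' ~/.ssh/known_hosts

Deleting or re-adding the reported lines should let the run proceed.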

is downgrade k3s possible?

Hi!
A quick question, because the readme only describes the k3s upgrade process: is a downgrade also possible?
The reason I am asking is that I have version 1.22.3+k3s1, but Rancher requires < 1.22.0-0:

PS C:\kluster> helm upgrade --install --namespace cattle-system --set hostname=rancher.mydomain.com --set ingress.tls.source=letsEncrypt --set [email protected] rancher rancher-stable/rancher

Release "rancher" does not exist. Installing it now.
helm : Error: chart requires kubeVersion: < 1.22.0-0 which is incompatible with Kubernetes v1.22.3+k3s1
At line:1 char:1
+ helm upgrade --install --namespace cattle-system --set hostname=ranch ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (Error: chart re...es v1.22.3+k3s1:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
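
Downgrading a running control plane is generally not supported by Kubernetes or k3s, so the usual route (an assumption about the tool, based on the configs in this thread) is to pin an older version in the config and create the cluster with it from the start:

# pick a pre-1.22 release so Rancher's kubeVersion constraint is satisfied;
# v1.21.3+k3s1 is the version other configs in this thread use
k3s_version: v1.21.3+k3s1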

Allow to configure private ssh key

Right now the tool falls back to the SSH keys provided by the ssh-agent.
This can become complicated when running on a CI server, as the agent does not necessarily contain the correct keys.
It would be great to have an option that sets the private SSH key to be used (ideally both as a value in the config and as an environment variable), as sketched below.
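
A sketch of how such an option could look in the config file (private_ssh_key_path already appears in newer configs in this thread; the CI key path below is hypothetical):

public_ssh_key_path: "~/.ssh/id_rsa.pub"
private_ssh_key_path: "~/.ssh/ci_deploy_key"   # hypothetical CI deploy key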
