
terraform-module-k3s's Introduction

terraform-module-k3s


Terraform module to create a k3s cluster with multi-server support and annotation/label/taint management.

⚠️ Security disclosure

Because Terraform has deprecated the use of external references in destroy provisioners, this module must store information inside each resource in order to support features such as automatic node draining and field management. As a result, several fields, including the connection block, end up in your Terraform state, which means that any password or private key used there is readable in clear text in that state.
Please be very careful to store your Terraform state securely if you use a private key or password in the connection block.
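For example, a remote state backend with encryption at rest and restricted access limits this exposure. A minimal sketch, assuming an S3 backend and a hypothetical bucket name:

terraform {
  backend "s3" {
    bucket  = "my-tf-state"           # hypothetical bucket name
    key     = "k3s/terraform.tfstate"
    region  = "eu-central-1"          # hypothetical region
    encrypt = true                    # encrypt the state object at rest
  }
}

Access to the state (and therefore to the stored credentials) is then governed by the bucket's access policy.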

Example (based on Hetzner Cloud example)

module "k3s" {
  source = "xunleii/k3s/module"

  depends_on_    = hcloud_server.agents
  k3s_version    = "latest"
  cluster_domain = "cluster.local"
  cidr = {
    pods     = "10.42.0.0/16"
    services = "10.43.0.0/16"
  }
  drain_timeout  = "30s"
  managed_fields = ["label", "taint"] // ignore annotations

  global_flags = [
    "--flannel-iface ens10",
    "--kubelet-arg cloud-provider=external" // required to use https://github.com/hetznercloud/hcloud-cloud-controller-manager
  ]

  servers = {
    for i in range(length(hcloud_server.control_planes)) :
    hcloud_server.control_planes[i].name => {
      ip = hcloud_server_network.control_planes[i].ip
      connection = {
        host        = hcloud_server.control_planes[i].ipv4_address
        private_key = trimspace(tls_private_key.ed25519_provisioning.private_key_pem)
      }
      flags = [
        "--disable-cloud-controller",
        "--tls-san ${hcloud_server.control_planes[0].ipv4_address}",
      ]
      annotations = { "server_id" : i } // these annotations will not be managed by this module
    }
  }

  agents = {
    for i in range(length(hcloud_server.agents)) :
    "${hcloud_server.agents[i].name}_node" => {
      name = hcloud_server.agents[i].name
      ip   = hcloud_server_network.agents_network[i].ip
      connection = {
        host        = hcloud_server.agents[i].ipv4_address
        private_key = trimspace(tls_private_key.ed25519_provisioning.private_key_pem)
      }

      labels = { "node.kubernetes.io/pool" = hcloud_server.agents[i].labels.nodepool }
      taints = { "dedicated" : hcloud_server.agents[i].labels.nodepool == "gpu" ? "gpu:NoSchedule" : null }
    }
  }
}

Inputs

Name | Description | Type | Default | Required
servers | K3s server nodes definition. The key is used as the node name if no name is provided. | map(any) | n/a | yes
agents | K3s agent nodes definitions. The key is used as the node name if no name is provided. | map(any) | {} | no
cidr | K3s network CIDRs (see https://rancher.com/docs/k3s/latest/en/installation/install-options/). | object({ pods = string, services = string }) | { pods = "10.42.0.0/16", services = "10.43.0.0/16" } | no
cluster_domain | K3s cluster domain name (see https://rancher.com/docs/k3s/latest/en/installation/install-options/). | string | "cluster.local" | no
depends_on_ | Resource dependency of this module. | any | null | no
drain_timeout | The length of time to wait before giving up on draining a node. Infinite by default. | string | "0s" | no
generate_ca_certificates | If true, this module will generate the CA certificates (see k3s-io/k3s#1868 (comment)). Otherwise rancher will generate them. This is required to generate the kubeconfig. | bool | true | no
global_flags | Additional installation flags, used by all nodes (see https://rancher.com/docs/k3s/latest/en/installation/install-options/). | list(string) | [] | no
k3s_install_env_vars | Map of environment variables that are passed to the k3s installation script (see https://docs.k3s.io/reference/env-variables). | map(string) | {} | no
k3s_version | Specify the k3s version. You can choose from the available release channels or pin the version directly. | string | "latest" | no
kubernetes_certificates | A list of maps of certificate-name.[crt/key] : certificate-value to be copied to /var/lib/rancher/k3s/server/tls. If this option is used, generate_ca_certificates will be treated as false. | list(object({ file_name = string, file_content = string })) | [] | no
managed_fields | List of fields which must be managed by this module (can be annotation, label and/or taint). | list(string) | ["annotation", "label", "taint"] | no
name | K3s cluster domain name (see https://rancher.com/docs/k3s/latest/en/installation/install-options/). This input is deprecated and will be removed in the next major release. Use cluster_domain instead. | string | "!!!DEPRECATED!!!" | no
separator | Separator used to separate the node name and field name (used to manage annotations, labels and taints). | string | "|" | no
use_sudo | Whether or not to use kubectl with sudo during cluster setup. | bool | false | no
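For illustration, a minimal sketch combining a few of these inputs; the environment variable and the values shown are assumptions, not requirements:

module "k3s" {
  source = "xunleii/k3s/module"

  k3s_version    = "v1.28.4+k3s1"     # pinned instead of "latest"
  drain_timeout  = "300s"             # stop waiting for a drain after 5 minutes
  managed_fields = ["label", "taint"] # ignore annotations

  k3s_install_env_vars = {
    # hypothetical value; any variable supported by the k3s install script can go here
    INSTALL_K3S_SKIP_SELINUX_RPM = "true"
  }

  # servers = { ... }  (as in the Hetzner example above)
}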

Outputs

Name | Description
kube_config | Generated kubeconfig.
kubernetes | Authentication credentials of Kubernetes (full administrator).
kubernetes_cluster_secret | Secret token used to join nodes to the cluster.
kubernetes_ready | Dependency endpoint to synchronize k3s installation and provisioning.
summary | Current state of k3s (version & nodes).
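For example, the kubernetes output can be wired directly into a Kubernetes provider; a minimal sketch, assuming the hashicorp/kubernetes provider is declared elsewhere (the attribute names below are the ones used in the FAQ):

provider "kubernetes" {
  host                   = module.k3s.kubernetes.api_endpoint
  cluster_ca_certificate = module.k3s.kubernetes.cluster_ca_certificate
  client_certificate     = module.k3s.kubernetes.client_certificate
  client_key             = module.k3s.kubernetes.client_key
}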

Providers

Name | Version
http | ~> 3.0
null | ~> 3.0
random | ~> 3.0
tls | ~> 4.0

Frequently Asked Questions

How to customise the generated kubeconfig

It is sometimes necessary to modify the context or the cluster name to adapt the kubeconfig to a third-party tool or to avoid conflicts with existing tooling. Although this is not the role of this module, it can easily be done with its outputs:

module "k3s" {
  ...
}

locals {
  kubeconfig = yamlencode({
    apiVersion      = "v1"
    kind            = "Config"
    current-context = "my-context-name"
    contexts = [{
      context = {
        cluster = "my-cluster-name"
        user    = "my-user-name"
      }
      name = "my-context-name"
    }]
    clusters = [{
      cluster = {
        certificate-authority-data = base64encode(module.k3s.kubernetes.cluster_ca_certificate)
        server                     = module.k3s.kubernetes.api_endpoint
      }
      name = "my-cluster-name"
    }]
    users = [{
      user = {
        client-certificate-data = base64encode(module.k3s.kubernetes.client_certificate)
        client-key-data         = base64encode(module.k3s.kubernetes.client_key)
      }
      name = "my-user-name"
    }]
  })
}
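To write the customised kubeconfig to disk, a small sketch using the hashicorp/local provider (the target path is an assumption):

resource "local_sensitive_file" "kubeconfig" {
  content  = local.kubeconfig                      # the locals value built above
  filename = pathexpand("~/.kube/my-cluster.yaml") # hypothetical path
}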

License

terraform-module-k3s is released under the MIT License. See the bundled LICENSE file for details.

Generated with ❤️ by terraform-docs

terraform-module-k3s's People

Contributors

bs2bot, caleb-devops, corwind, dblk, debovema, djh00t, falcosuessgott, github-actions[bot], guitcastro, hobbypunk90, n7knightone, nicowde, orf, ptu, pturbing, renovate-bot, renovate[bot], solidnerd, sschaeffner, tchoupinax, tedsteen, xunleii


terraform-module-k3s's Issues

Taking a node out of the configuration keeps the node within the cluster but cordoned

I have a total of four nodes in my K3S cluster, where three of them function as both master and worker nodes, and the 4th node serves as a worker node.

When I adjust the configuration to exclude the 4th node and then run 'terraform apply,' the 4th node is cordoned but remains in the K3S cluster with K3S still operational on it.

$ terraform apply
module.k3s.null_resource.agents_drain["node-4"] (remote-exec): node/node-4 drained
module.k3s.null_resource.agents_drain["node-4"]: Destruction complete after 1s
module.k3s.null_resource.agents_install["node-4"]: Destroying... [id=788767868902175907]
module.k3s.null_resource.agents_install["node-4"]: Destruction complete after 0s

Apply complete! Resources: 0 added, 0 changed, 3 destroyed.
$ kubectl get nodes -A -w
NAMESPACE   NAME     STATUS                     ROLES                       AGE     VERSION
            node-1   Ready                      control-plane,etcd,master   3h31m   v1.28.1+k3s1
            node-2   Ready                      control-plane,etcd,master   3h31m   v1.28.1+k3s1
            node-3   Ready                      control-plane,etcd,master   3h31m   v1.28.1+k3s1
            node-4   Ready,SchedulingDisabled   <none>                   3h31m    v1.28.1+k3s1

Make it possible to have additional flags per agent

It's not possible to pass additional agent flags per agent at the moment.
This becomes a problem when using a cloud-controller-manager, as each worker node needs the provider-id to be set to that specific agent's id; see here for an example.
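For reference, server nodes already accept a per-node flags list (see the README example); the request is for the same field on agents. A purely hypothetical sketch of what that could look like (not supported by the module today):

agents = {
  agent-one = {
    ip         = "10.0.0.10"               # hypothetical internal IP
    connection = { host = "203.0.113.10" } # hypothetical public IP
    # hypothetical per-agent flags, mirroring the existing per-server "flags" field
    flags = ["--kubelet-arg provider-id=hcloud://123456"]
  }
}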

mkdir: cannot create directory ‘/var/lib/rancher’: Permission denied

I'm not that familiar with terraform, so I'm wondering if there's something simple that I can do to work around not being able to ssh as root (the user I'm using to talk to this host has sudo, but needs a password - which is sadly "normal" for corp-managed hosts).

module.k3s.module.k3s.null_resource.k8s_ca_certificates_install[5] (remote-exec): Connected!
module.k3s.module.k3s.null_resource.k8s_ca_certificates_install[1] (remote-exec): mkdir: cannot create directory ‘/var/lib/rancher’: Permission denied



module.k3s.module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec): mkdir: cannot create directory ‘/var/lib/rancher’: Permission denied
module.k3s.module.k3s.null_resource.k8s_ca_certificates_install[2] (remote-exec): mkdir: cannot create directory ‘/var/lib/rancher’: Permission denied

Error: error executing "/tmp/terraform_1310554883.sh": Process exited with status 1
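For what it's worth, the module exposes a use_sudo input (see Inputs above), but per its description it only wraps kubectl in sudo; the provisioners most likely still need a user that can run commands without a sudo password prompt. A sketch under that assumption:

module "k3s" {
  source   = "xunleii/k3s/module"
  use_sudo = true # run kubectl through sudo during cluster setup

  servers = {
    node-1 = {
      ip = "10.0.0.1" # hypothetical internal IP
      connection = {
        host     = "203.0.113.1"    # hypothetical public IP
        user     = "deploy"         # non-root user, assumed to have passwordless sudo
        password = var.ssh_password # hypothetical variable
      }
    }
  }
}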

terraform destroy gets stuck while draining the last node

For a 3-node deployment with master + worker roles on each node, running terraform destroy drains 2 nodes out of 3 and gets stuck while draining the 3rd one.
No other pods are deployed except the k3s ones.

module.k3s.null_resource.servers_drain["node-1"]: Still destroying... [id=8973657412223471573, 4m40s elapsed]
module.k3s.null_resource.servers_drain["node-1"]: Still destroying... [id=8973657412223471573, 4m50s elapsed]
╷
│ Error: remote-exec provisioner error
│ 
│   with module.k3s.null_resource.servers_drain["node-1"],
│   on .terraform/modules/k3s/server_nodes.tf line 267, in resource "null_resource" "servers_drain":
│  267:   provisioner "remote-exec" {
│ 
│ timeout - last error: dial tcp <node ip>:22: connect: no route to host

Cluster state:

$ kubectl get nodes -A -w
NAMESPACE   NAME     STATUS                     ROLES                       AGE   VERSION
            node-1   Ready                      control-plane,etcd,master   21m   v1.28.1+k3s1
            node-2   Ready,SchedulingDisabled   control-plane,etcd,master   21m   v1.28.1+k3s1
            node-3   Ready,SchedulingDisabled   control-plane,etcd,master   21m   v1.28.1+k3s1
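A possible mitigation rather than a root-cause fix: bounding the drain with the drain_timeout input (set when the cluster is created, since destroy provisioners reuse the values stored in state) keeps terraform destroy from waiting indefinitely on a node that has already become unreachable. A sketch, with an arbitrary timeout:

module "k3s" {
  source        = "xunleii/k3s/module"
  drain_timeout = "120s" # the default "0s" waits forever for the drain to finish

  # servers / agents as usual
}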

Custom k3s cluster name inside of the admin kubeconfig

Hi,

I'm looking for a method to define the cluster name within the k3s admin kubeconfig file, particularly in situations where the kubeconfig is generated through Rancher. This becomes necessary when managing multiple distinct clusters, as the default name isn't sufficient.

At present, the admin kubeconfig contains the user and cluster name set as 'default'. One possible solution might involve substituting the 'default' string within the file with an alternative label. However, this adjustment could inadvertently affect the user value. I'm uncertain whether this could potentially cause any unforeseen issues.

Moreover, in cases where 'generate_ca_certificate = true', the cluster_domain is utilized. Nevertheless, its reliance on 'cluster.local' as a standard value could lead to a lack of uniqueness across clusters, considering it's utilized by k3s as a domain.

Thanks

Unable to use on Windows Terraform

I have created a simple install to build a 3 node cluster. My settings are as follows:

module "k3s" {
  source = "xunleii/k3s/module"

  depends_on = [
    time_sleep.wait_30_seconds
  ]

  drain_timeout  = "30s"
  managed_fields = ["label"] 

  generate_ca_certificates = false  # tried with and without this, same result

  global_flags = [
    "--tls-san 192.168.88.150"
  ]

  servers = {
    # The node name will be automatically provided by
    # the module using the field name... any usage of
    # --node-name in additional_flags will be ignored
    k3s_server_0 = {
      ip = "192.168.88.150" // internal node IP
      connection = {
        host     = "192.168.88.150" // public node IP
        user     = "mark"
        password = var.vm_password
      }
      labels = { "node.kubernetes.io/type" = "master" }
    }
    k3s_server_1 = {
      ip = "192.168.88.151"
      connection = {
        host     = "192.168.88.151" // public node IP
        user     = "mark"
        password = var.vm_password
      }
      labels = { "node.kubernetes.io/type" = "master" }
    }
    k3s_server_2 = {
      ip = "192.168.88.152"
      connection = {
        host     = "192.168.88.152" // public node IP
        user     = "mark"
        password = var.vm_password
      }
      labels = { "node.kubernetes.io/type" = "master" }
    }
  }
  agents = {}
}

However, when I run it, Terraform can ssh into the three Ubuntu hosts and appears to be running normally. But after about one minute, I receive the following warnings and errors.

Warning: Argument is deprecated
│
│   with module.k3s.tls_self_signed_cert.kubernetes_ca_certs,
│   on .terraform\modules\k3s\k3s_certificates.tf line 42, in resource "tls_self_signed_cert" "kubernetes_ca_certs":
│   42:   key_algorithm         = "ECDSA"
│
│ This is now ignored, as the key algorithm is inferred from the `private_key_pem`.
│
│ (and 2 more similar warnings elsewhere)
╵
╷
│ Warning: expected allowed_uses to be one of [any_extended cert_signing client_auth code_signing content_commitment crl_signing data_encipherment decipher_only digital_signature email_protection encipher_only ipsec_end_system ipsec_tunnel ipsec_user key_agreement key_encipherment microsoft_commercial_code_signing microsoft_kernel_code_signing microsoft_server_gated_crypto netscape_server_gated_crypto ocsp_signing server_auth timestamping], got critical so will ignored
│
│   with module.k3s.tls_self_signed_cert.kubernetes_ca_certs,
│   on .terraform\modules\k3s\k3s_certificates.tf line 44, in resource "tls_self_signed_cert" "kubernetes_ca_certs":
│   44:   allowed_uses          = ["critical", "digitalSignature", "keyEncipherment", "keyCertSign"]
│
╵
╷
│ Warning: expected allowed_uses to be one of [any_extended cert_signing client_auth code_signing content_commitment crl_signing data_encipherment decipher_only digital_signature email_protection encipher_only ipsec_end_system ipsec_tunnel ipsec_user key_agreement key_encipherment microsoft_commercial_code_signing microsoft_kernel_code_signing microsoft_server_gated_crypto netscape_server_gated_crypto ocsp_signing server_auth timestamping], got digitalSignature so will ignored
│
│   with module.k3s.tls_self_signed_cert.kubernetes_ca_certs,
│   on .terraform\modules\k3s\k3s_certificates.tf line 44, in resource "tls_self_signed_cert" "kubernetes_ca_certs":
│   44:   allowed_uses          = ["critical", "digitalSignature", "keyEncipherment", "keyCertSign"]
│
╵
╷
│ Warning: expected allowed_uses to be one of [any_extended cert_signing client_auth code_signing content_commitment crl_signing data_encipherment decipher_only digital_signature email_protection encipher_only ipsec_end_system ipsec_tunnel ipsec_user key_agreement key_encipherment microsoft_commercial_code_signing microsoft_kernel_code_signing microsoft_server_gated_crypto netscape_server_gated_crypto ocsp_signing server_auth timestamping], got keyEncipherment so will ignored
│
│   with module.k3s.tls_self_signed_cert.kubernetes_ca_certs,
│   on .terraform\modules\k3s\k3s_certificates.tf line 44, in resource "tls_self_signed_cert" "kubernetes_ca_certs":
│   44:   allowed_uses          = ["critical", "digitalSignature", "keyEncipherment", "keyCertSign"]
│
╵
╷
│ Warning: expected allowed_uses to be one of [any_extended cert_signing client_auth code_signing content_commitment crl_signing data_encipherment decipher_only digital_signature email_protection encipher_only ipsec_end_system ipsec_tunnel ipsec_user key_agreement key_encipherment microsoft_commercial_code_signing microsoft_kernel_code_signing microsoft_server_gated_crypto netscape_server_gated_crypto ocsp_signing server_auth timestamping], got keyCertSign so will ignored
│
│   with module.k3s.tls_self_signed_cert.kubernetes_ca_certs,
│   on .terraform\modules\k3s\k3s_certificates.tf line 44, in resource "tls_self_signed_cert" "kubernetes_ca_certs":
│   44:   allowed_uses          = ["critical", "digitalSignature", "keyEncipherment", "keyCertSign"]
│
╵
╷
│ Error: remote-exec provisioner error
│
│   with module.k3s.null_resource.servers_install["k3s_server_2"],
│   on .terraform\modules\k3s\server_nodes.tf line 206, in resource "null_resource" "servers_install":
│  206:   provisioner "remote-exec" {
│
│ error executing "/tmp/terraform_814904232.sh": wait: remote command exited without exit status or exit signal
╵
╷
│ Error: remote-exec provisioner error
│
│   with module.k3s.null_resource.servers_install["k3s_server_1"],
│   on .terraform\modules\k3s\server_nodes.tf line 206, in resource "null_resource" "servers_install":
│  206:   provisioner "remote-exec" {
│
│ error executing "/tmp/terraform_1578112108.sh": wait: remote command exited without exit status or exit signal
╵
╷
│ Error: remote-exec provisioner error
│
│   with module.k3s.null_resource.servers_install["k3s_server_0"],
│   on .terraform\modules\k3s\server_nodes.tf line 206, in resource "null_resource" "servers_install":
│  206:   provisioner "remote-exec" {
│
│ error executing "/tmp/terraform_294316203.sh": wait: remote command exited without exit status or exit signal

I have tried both with and without generate_ca_certificates (omitting it defaults to true), and I have also tried including it and setting it to false. I get the same results.

Am I missing something? Does this only work on Linux boxes? Can it be used against Ubuntu servers? Any help that can be provided will be greatly appreciated.

API URL broken in build script when using dual stack configs

Howdy!

I am building a dual-stack cluster; however, the installer doesn't work and needs to be manually hacked around to get servers to build, due to a malformed Kube API URL in the installer:

K8S Server Config

    k8s01 = {
      # name = k8s01
      ip = "172.18.100.2,XXX:XXX:XXX:102::1:1" // internal node IP
      connection = {
        host = "172.18.100.2" // public node IP
        user = "ord"
      }
      # flags  = ["--flannel-backend=none"]
      labels = { "node.kubernetes.io/type" = "master" }
      # taints = { "node.k3s.io/type" = "server:NoSchedule" }
    }
    k8s02 = {
      # name = k8s02
      ip = "172.18.100.6,XXX:XXX:XXX:102::1:2"
      connection = {
        host = "172.18.100.6" // public node IP
        user = "ord"
      }
      # flags  = ["--flannel-backend=none"]
      labels = { "node.kubernetes.io/type" = "master" }
      # taints = { "node.k3s.io/type" = "server:NoSchedule" }
    }

Script that is output

#!/bin/sh
INSTALL_K3S_VERSION=v1.26.1+k3s1 sh /tmp/k3s-installer server --node-ip 172.18.100.6,XXX:XXX:XXX:102::1:2 --node-name 'k8s02' --server https://172.18.100.2,XXX:XXX:XXX:102::1:1:6443 --cluster-cidr 10.255.0.0/16,XXX:XXX:XXX:200::/56 --service-cidr 172.18.100.0/24,XXX:XXX:XXX:102::/112 --token <redacted> --disable traefik --disable servicelb
until sudo kubectl get node k8s02; do sleep 1; done

I've taken a look at the code and I could use root_server_ip = values(var.servers)[0].connection.host, but that seems like a hack; perhaps the first value in the dual-stack pair should be parsed and used instead?
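For illustration, the parsing suggested above could be written like this; a sketch only, not the module's current code, assuming the ip field holds the dual-stack pair:

locals {
  # keep only the first (IPv4) address of a dual-stack "v4,v6" node IP
  root_server_ip = split(",", values(var.servers)[0].ip)[0]
}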

K3s Cluster Node(s) Upgrade

Hello @xunleii,

I did not see a discussions area for the repo, so I will ask my question here as it seems undocumented. Can this terraform module appropriately upgrade a k3s cluster, similar to what is described in the "manual" part of the documentation here? If not, is there another recommended way to handle version upgrades gracefully?

Otherwise, thanks for open sourcing this project. I haven't finished implementing it yet, but it makes operations a lot easier.
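For context, the installed version is driven by the k3s_version input; whether the module rolls nodes gracefully when that value changes is exactly the open question above. A sketch of pinning and bumping the version (values are examples):

module "k3s" {
  source      = "xunleii/k3s/module"
  k3s_version = "v1.28.1+k3s1" # bump this pin (e.g. to "v1.28.4+k3s1") to request an upgrade

  # servers / agents as usual
}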

Pod and Service cidrs must be passed on all masters (not just the 1st one)

Unless I totally missed something, if you try to change the service CIDR, the cluster-dns will change too and has to be passed to all masters, not just the first one.

This is already achievable through the individual flags, but it would be easier if it were handled the same way as the pod and service CIDRs.

Tell me if that makes sense to you; I can work on the PR.

Thanks for this great module btw.

Hetzner example doesn't work

Hi, I tried the exact same example but the process never finishes.

module.k3s.null_resource.k8s_ca_certificates_install[1] (remote-exec):   Checking Host Key: false
module.k3s.null_resource.k8s_ca_certificates_install[1] (remote-exec):   Target Platform: unix
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec): Connecting to remote host via SSH...
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec):   Host: xxx
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec):   User: root
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec):   Password: false
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec):   Private key: false
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec):   Certificate: false
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec):   SSH Agent: false
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec):   Checking Host Key: false
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec):   Target Platform: unix
module.k3s.null_resource.k8s_ca_certificates_install[5] (remote-exec): Connecting to remote host via SSH...
module.k3s.null_resource.k8s_ca_certificates_install[5] (remote-exec):   Host: xxx
module.k3s.null_resource.k8s_ca_certificates_install[5] (remote-exec):   User: root
module.k3s.null_resource.k8s_ca_certificates_install[5] (remote-exec):   Password: false
module.k3s.null_resource.k8s_ca_certificates_install[5] (remote-exec):   Private key: false
module.k3s.null_resource.k8s_ca_certificates_install[5] (remote-exec):   Certificate: false
module.k3s.null_resource.k8s_ca_certificates_install[5] (remote-exec):   SSH Agent: false
module.k3s.null_resource.k8s_ca_certificates_install[5] (remote-exec):   Checking Host Key: false
module.k3s.null_resource.k8s_ca_certificates_install[5] (remote-exec):   Target Platform: unix
module.k3s.null_resource.k8s_ca_certificates_install[4] (remote-exec): Connecting to remote host via SSH...
module.k3s.null_resource.k8s_ca_certificates_install[4] (remote-exec):   Host: xxx
module.k3s.null_resource.k8s_ca_certificates_install[4] (remote-exec):   User: root
module.k3s.null_resource.k8s_ca_certificates_install[4] (remote-exec):   Password: false
module.k3s.null_resource.k8s_ca_certificates_install[4] (remote-exec):   Private key: false
module.k3s.null_resource.k8s_ca_certificates_install[4] (remote-exec):   Certificate: false
module.k3s.null_resource.k8s_ca_certificates_install[4] (remote-exec):   SSH Agent: false
module.k3s.null_resource.k8s_ca_certificates_install[4] (remote-exec):   Checking Host Key: false
module.k3s.null_resource.k8s_ca_certificates_install[4] (remote-exec):   Target Platform: unix







Error: timeout - last error: SSH authentication failed (root@xxx:22): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none], no supported methods remain

(the same error is reported once per node)

Consider Integration Testing with k3d

@xunleii

Have you considered setting up github actions to run the terraform module against a k3d cluster which could be installed on a single github actions agent? This way it would be possible to verify functionality on each pull request. I'm writing my own tool in python for managing k3s lifecycle on a home cluster, but I would prefer terraform in the long term since it would be easier to maintain.

Error sensitive var.servers

 Error: Invalid for_each argument
│
│   on .terraform/modules/k3s/server_nodes.tf line 161, in resource "null_resource" "servers_install":
│  161:   for_each = var.servers
│     ├────────────────
│     │ var.servers has a sensitive value
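One possible workaround, a sketch assuming Terraform >= 0.15 and that exposing the map in plan output is acceptable: strip the sensitive marking before passing it to the module with nonsensitive().

variable "servers" {
  type      = map(any)
  sensitive = true
}

module "k3s" {
  source = "xunleii/k3s/module"

  # for_each cannot iterate over a sensitive value, so unwrap it here;
  # keep truly secret attributes (passwords, keys) in their own sensitive variables
  servers = nonsensitive(var.servers)
}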

hcloud-k3s doesn't work with v3.3.0

When I try to run the terraform plan I get the following error:

╷
│ Warning: Redundant empty provider block
│
│   on .terraform/modules/k3s_example_hcloud-k3s/examples/hcloud-k3s/main.tf line 1:
│    1: provider "hcloud" {}
│
│ Earlier versions of Terraform used empty provider blocks ("proxy provider configurations") for child modules to declare their need to be passed a provider configuration by their
│ callers. That approach was ambiguous and is now deprecated.
│
│ If you control this module, you can migrate to the new declaration syntax by removing all of the empty provider "hcloud" blocks and then adding or updating an entry like the
│ following to the required_providers block of module.k3s_example_hcloud-k3s:
│     hcloud = {
│       source = "hetznercloud/hcloud"
│     }
╵
╷
│ Error: Iteration over null value
│
│   on .terraform/modules/k3s_example_hcloud-k3s/server_nodes.tf line 12, in locals:
│   12:   install_env_vars = join(" ", [for k, v in var.k3s_install_env_vars : "${k}=${v}"])
│     ├────────────────
│     │ var.k3s_install_env_vars is null
│
│ A null value cannot be used as the collection in a 'for' expression.
╵

The warning can be fixed by removing the provider block from main.tf in hcloud-k3s:

provider "hcloud" {}

And the error can be fixed by adding this null check to the variable in server_nodes.tf in the main module:

install_env_vars = var.k3s_install_env_vars == null ? "" : join(" ", [for k, v in var.k3s_install_env_vars : "${k}=${v}"])

Error "Variable `name` is deprecated"

Hello!

I'm trying to set up a simple cluster on an EC2 instance, but I always get the same problem executing terraform plan:

(screenshot of the terraform plan error omitted)

I used my own configuration and the example one in the readme, but both result in the same error message.

Do you know what it could be?
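If the configuration sets the name input, switching to cluster_domain should avoid the deprecation message (see the Inputs table above). A minimal sketch:

module "k3s" {
  source         = "xunleii/k3s/module"
  cluster_domain = "cluster.local" # use this instead of the deprecated "name" input

  # servers / agents as usual
}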

Deprecated attribute with Terraform 1.3.7

First of all, thanks for this great module 💮
I'm getting this deprecation warning with tf v1.3.7:

╷
│ Warning: Deprecated attribute
│ 
│   on .terraform/modules/k3s/agent_nodes.tf line 109, in resource "null_resource" "agents_install":
│  109:     content     = data.http.k3s_installer.body
│ 
│ The attribute "body" is deprecated. Refer to the provider documentation for details.
│ 
│ (and 5 more similar warnings elsewhere)
╵

Nothing urgent.
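For reference, the hashicorp/http provider v3 renamed body to response_body, so the fix is presumably a one-line change inside the module. A sketch, assuming the attribute feeds a file provisioner that writes the installer script:

provisioner "file" {
  content     = data.http.k3s_installer.response_body # "body" is deprecated since http provider v3
  destination = "/tmp/k3s-installer"                   # path used by the module's install command
}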

rename variable name to cluster_domain

I do think it would be clearer if var.name were renamed to cluster_domain or similar.

Got tricked by this until I realised that my services were all ending in aws.cluster :-)

Deprecation of network_id in `hcloud_server_network`

I changed the network_id in hcloud_server_network to fix the deprecation warning. When I did this I got a new error that the network subnet id is invalid.

Warning: "network_id": [DEPRECATED] use subnet_id instead

  on server_instances.tf line 16, in resource "hcloud_server_network" "control_planes":
  16: resource hcloud_server_network control_planes {

Warning: Experimental feature "variable_validation" is active

  on ../../main.tf line 3, in terraform:
   3:   experiments      = [variable_validation]

Experimental features are subject to breaking changes in future minor or patch
releases, based on feedback.

If you have feedback on the design of this feature, please open a GitHub issue
to discuss it.

Error: invalid network subnet id

  on agent_instances.tf line 17, in resource "hcloud_server_network" "agents_network":
  17: resource hcloud_server_network agents_network {

(the same error is reported for each agent network resource)

I think the change was introduced with hcloud provider 1.19.2.

:bug: Cannot scale up server nodes

🔥 What happened?

  1. Create a k3s cluster with only one server node, the cluster works as intended:

Config

k3s_version = "v1.27.5+k3s1"
k3s_install_env_vars = {}
drain_timeout = "300s"
private_key_path = "/home/coder/.ssh/id_rsa"
cidr = {
    pods = "10.0.0.0/16"
    services = "10.1.0.0/16"
}
managed_fields = ["label", "taint"]
servers = {
    # The node name will be automatically provided by
    # the module using the field name... any usage of
    # --node-name in additional_flags will be ignored
    node-1 = {
        ip = "10.195.64.82" // internal node IP
        connection = {
            host = "10.195.64.82" // public node IP
            user = "ubuntu"
        }
        flags = ["--tls-san cluster.local", "--write-kubeconfig-mode '0644'", "--disable-network-policy"]
        labels = {"node.kubernetes.io/type" = "master"}
    }
}
  2. Try to scale the server nodes to 3 with the following config:
k3s_version = "v1.27.5+k3s1"
k3s_install_env_vars = {}
drain_timeout = "300s"
private_key_path = "/home/coder/.ssh/id_rsa"
cidr = {
    pods = "10.0.0.0/16"
    services = "10.1.0.0/16"
}
managed_fields = ["label", "taint"]
servers = {
    # The node name will be automatically provided by
    # the module using the field name... any usage of
    # --node-name in additional_flags will be ignored
    node-1 = {
        ip = "10.195.64.82" // internal node IP
        connection = {
            host = "10.195.64.82" // public node IP
            user = "ubuntu"
        }
        flags = ["--tls-san cluster.local", "--write-kubeconfig-mode '0644'", "--disable-network-policy"]
        labels = {"node.kubernetes.io/type" = "master"}
    },
    node-2 = {
        ip = "10.195.64.223" // internal node IP
        connection = {
            host = "10.195.64.223" // public node IP
            user = "ubuntu"
        }
        flags = ["--tls-san cluster.local", "--write-kubeconfig-mode '0644'", "--disable-network-policy"]
        labels = {"node.kubernetes.io/type" = "master"}
    },
    node-3 = {
        ip = "10.195.64.208" // internal node IP
        connection = {
            host = "10.195.64.208" // public node IP
            user = "ubuntu"
        }
        flags = ["--tls-san cluster.local", "--write-kubeconfig-mode '0644'", "--disable-network-policy"]
        labels = {"node.kubernetes.io/type" = "master"}
    }
}

The logs on node-2 and node-3 show the following error; they are not able to join the initial cluster:

journalctl -xef

Nov 14 08:17:29 ip-10-195-64-103 sh[24077]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Nov 14 08:17:29 ip-10-195-64-103 sh[24078]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Nov 14 08:17:30 ip-10-195-64-103 k3s[24081]: time="2023-11-14T08:17:30Z" level=info msg="Starting k3s v1.27.5+k3s1 (8d074ecb)"
Nov 14 08:17:30 ip-10-195-64-103 k3s[24081]: time="2023-11-14T08:17:30Z" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Nov 14 08:17:30 ip-10-195-64-103 k3s[24081]: time="2023-11-14T08:17:30Z" level=info msg="Managed etcd cluster not yet initialized"
Nov 14 08:17:30 ip-10-195-64-103 k3s[24081]: time="2023-11-14T08:17:30Z" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Nov 14 08:17:30 ip-10-195-64-103 k3s[24081]: time="2023-11-14T08:17:30Z" level=fatal msg="starting kubernetes: preparing server: https://10.195.64.124:6443/v1-k3s/server-bootstrap: 400 Bad Request"
Nov 14 08:17:30 ip-10-195-64-103 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE

👍 What did you expect to happen?

The new servers should have joined the existing cluster.

🔍 How can we reproduce the issue?

The steps for reproducing are in the first section

🔧 Module version

"3.3.0"

🔧 Terraform version

1.6.3

🔧 Terraform providers

Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/null] 3.2.1
└── module.k3s
    ├── provider[registry.terraform.io/hashicorp/null] ~> 3.0
    ├── provider[registry.terraform.io/hashicorp/random] ~> 3.0
    ├── provider[registry.terraform.io/hashicorp/tls] ~> 4.0
    └── provider[registry.terraform.io/hashicorp/http] ~> 3.0

📋 Additional information

No response

Cluster CA certificate is not trusted

module version: v3.1.0
k3s version: v1.23.3+k3s1

When setting cluster_domain, subsequent server nodes fail to join the cluster with this error:

Feb 22 11:56:41 n02 k3s[3988]: time="2022-02-22T11:56:41-05:00" level=info msg="Starting k3s v1.23.3+k3s1 (5fb370e5)"
Feb 22 11:56:41 n02 k3s[3988]: time="2022-02-22T11:56:41-05:00" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Feb 22 11:56:41 n02 k3s[3988]: time="2022-02-22T11:56:41-05:00" level=info msg="Managed etcd cluster not yet initialized"
Feb 22 11:56:41 n02 k3s[3988]: time="2022-02-22T11:56:41-05:00" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Feb 22 11:56:41 n02 k3s[3988]: time="2022-02-22T11:56:41-05:00" level=fatal msg="starting kubernetes: preparing server: failed to validate server configuration: critical configuration value mismatch"

I've tried to set tls-san with additional global flags to see if it helps but it does not:

"--tls-san ${var.my_cluster_domain}",
"--tls-san cluster.local",

If I remove cluster_domain from the module altogether, the cluster successfully bootstraps with the default cluster domain cluster.local.

Is there something I'm missing? Thanks!

Support for K3S AirGap deployments

The current implementation of the module deploys K3S clusters as long as the data center is connected to the internet.

It would be a nice addition to extend the functionality with K3S deployments in AirGap environments where one cannot download files from the internet but relies on the files being present on the execution box: https://docs.k3s.io/installation/airgap

cdktf compatibility

I'm trying to use the module with cdktf but I'm getting errors when calling cdktf get to generate the cdk constructs:

NOTE: Temp directory retained due to an error: /tmp/temp-rYawex
Error: jsii compilation failed with non-zero exit code: 1
  | [2023-08-07T17:40:49.119] [ERROR] jsii/compiler - Compilation errors prevented the JSII assembly from being created
  | warning JSII6: A "peerDependency" on "constructs" at "10.2.69" means you should take a "devDependency" on "constructs" at "10.2.69" (found "undefined")
  | warning JSII6: A "peerDependency" on "cdktf" at "0.17.3" means you should take a "devDependency" on "cdktf" at "0.17.3" (found "undefined")
  | warning JSII3: There is no "README.md" file. It is required in order to generate valid PyPI (Python) packages.
  | modules/k3s.ts:5:1 - error JSII5015: Interface "k3s.K3SConfig" re-declares member "dependsOn". This is not supported as it results in invalid C#.
  |   5 export interface K3SConfig extends TerraformModuleUserConfig {
  |     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  |   6   /**
  |     ~~~~~
  | ... 
  |  80   readonly useSudo?: boolean;
  |     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  |  81 }
  |     ~
  | modules/k3s.ts:87:14 - error JSII8000: Type names must be PascalCased. Rename "K3S" to "K3s"
  | 87 export class K3S extends TerraformModule {
  |                 ~~~
  | modules/k3s.ts:128:14 - error TS2611: 'dependsOn' is defined as a property in class 'TerraformModule', but is overridden here in 'K3S' as an accessor.
  | 128   public get dependsOn(): any | undefined {
  |                  ~~~~~~~~~
  +----------------------------------------------------------------------------------
  | Command: /home/linuxbrew/.linuxbrew/Cellar/cdktf/0.17.3/libexec/lib/node_modules/cdktf-cli/node_modules/jsii/bin/jsii --silence-warnings reserved-word
  | Workdir: /tmp/temp-rYawex
  +----------------------------------------------------------------------------------
    at p (/home/linuxbrew/.linuxbrew/Cellar/cdktf/0.17.3/libexec/lib/node_modules/cdktf-cli/bundle/bin/cmds/handlers.js:51:1581)
    at ChildProcess.<anonymous> (/home/linuxbrew/.linuxbrew/Cellar/cdktf/0.17.3/libexec/lib/node_modules/cdktf-cli/bundle/bin/cmds/handlers.js:54:112)
    at Object.onceWrapper (node:events:629:26)
    at ChildProcess.emit (node:events:514:28)
    at ChildProcess.emit (node:domain:489:12)
    at ChildProcess._handle.onexit (node:internal/child_process:294:12)

kube_config output missing

Giving this a try for the first time, I was able to successfully spin up a K3s cluster. Thank you.

However, I cannot get the kube_config out of the process, at least not based on the examples that were provided.

Is there anything obvious I might be doing wrong? This would be with K3s 1.20
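Module outputs are only shown when they are re-exported from the root module, so something like the following sketch is usually needed on the caller's side (not inside the module):

output "kube_config" {
  value     = module.k3s.kube_config
  sensitive = true # the kubeconfig embeds the client key
}

The value can then be read with terraform output -raw kube_config.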

register: metadata.name: Invalid value

Hi,

Using your example leads to this error when k3s tries to register the nodes:

Feb 10 09:18:24 ip-xx-xx-xx-xx k3s[7199]: E0210 09:18:24.566878    7199 kubelet_node_status.go:93] Unable to register node "server_one" with API server: Node "server_one" is invalid: metadata.name: Invalid value: "server_one": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Is it possible to add an assertion, or at least update your usage example?

Thanks,
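As the error message says, node names must be valid DNS-1123 subdomains, so the map keys (or the name attribute) should use hyphens rather than underscores. A sketch with hypothetical values:

servers = {
  # "server_one" is rejected by the kubelet; "server-one" is a valid DNS-1123 name
  server-one = {
    ip = "10.0.0.1" # hypothetical internal IP
    connection = {
      host = "203.0.113.1" # hypothetical public IP
      user = "root"
    }
  }
}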

Refresh kubeconfig when terraform state is lost

Hi,

I have a scenario, where I need to refresh the local kubeconfig with the one from the target host. Right now, terraform does not offer a way to copy a remote file to the local host.

hashicorp/terraform#3379

Is there a way to refresh the kubeconfig with the current module if I lose the tfstate, or is there any other alternative solution that can be automated via terraform?

Thanks

Publish a new version on the Terraform registry

Hey!
This is a very handy module, thank you!

Would it be possible to update the version on the terraform registry with the latest changes? Specifically the use_sudo argument (added in #57) would be handy to have.

For anyone else seeing this, you can pull the latest master with:

module "k3s" {
  source      = "github.com/xunleii/terraform-module-k3s.git?ref=45e073c6ea4784ded7260467a9f6f0f6e6f8f636"
 
  ...
}

Error: Invalid Attribute Value Match

At some point the accepted values for allowed_uses with the tls_self_signed_cert resource changed, and when using the currently released version of this module Terraform errors with an Error: Invalid Attribute Value Match. This was fixed in e160154 but this commit is not included in the most recent release.

Can you please release an updated version of this module which includes this fix (plus probably others)?

Remove or fix the 'latest' feature

Because of this feature I broke one of my clusters 🤦 ... because the k3s team fixed a CVE (thanks for their work by the way), the last version available from the GitHub API was kube v1.16.13+k3s1 ... so my cluster jumped from v1.18 to v1.16 ...

⚠️ I DON'T RECOMMEND USING THIS FEATURE TODAY ... I need to find another way to do this or remove the feature

failed to start k3s node with label `node-role.kubernetes.io/***`

It's currently impossible to run a k3s node with a label inside the kubernetes.io namespace.

Feb 10 13:34:22 aa992fdc2446e727c330a3cc4b9c2f467e438d6e k3s[1996]: F0210 13:34:22.858484    1996 server.go:186] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/agent node-role.kubernetes.io/nodepool-default]
Feb 10 13:34:22 aa992fdc2446e727c330a3cc4b9c2f467e438d6e k3s[1996]: --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/os, node.kubernetes.io/instance-type, topology.kubernetes.io/region, topology.kubernetes.io/zone)

The quick win to solve this issue is to avoid --node-label flags ... applying labels only after the node is running.
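As a Terraform-side workaround, labels under one of the allowed prefixes listed in the error (for example node.kubernetes.io, which the README example already uses) pass the kubelet validation. A sketch with hypothetical values:

agents = {
  agent-one = {
    ip         = "10.0.0.20"               # hypothetical internal IP
    connection = { host = "203.0.113.20" } # hypothetical public IP
    # node-role.kubernetes.io/* is rejected by --node-labels; node.kubernetes.io/* is allowed
    labels = { "node.kubernetes.io/pool" = "default" }
  }
}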

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

asdf
.tool-versions
  • act 0.2.57
  • terraform 1.6.6
  • terraform-docs 0.17.0
  • trivy 0.48.2
github-actions
.github/workflows/github.documentation.yaml
  • actions/checkout v4.1.1@b4ffde65f46336ab88eb53be808477a3936bae11
  • heinrichreimer/github-changelog-generator-action v2.4@e60b5a2bd9fcd88dadf6345ff8327863fb8b490f
  • EndBug/add-and-commit v9.1.3@1bad3abcf0d6ec49a5857d124b0bfb52dc7bb081
.github/workflows/github.labeler.yaml
  • actions/checkout v4.1.1@b4ffde65f46336ab88eb53be808477a3936bae11
  • micnncim/action-label-syncer v1.3.0@3abd5ab72fda571e69fffd97bd4e0033dd5f495c
.github/workflows/github.stale.yaml
  • actions/stale v9.0.0@28ca1036281a5e5922ead5184a1bbf96e5fc984e
.github/workflows/security.terraform.yaml
  • actions/checkout v4.1.1@b4ffde65f46336ab88eb53be808477a3936bae11
  • aquasecurity/trivy-action 0.16.1@d43c1f16c00cfd3978dde6c07f4bbcf9eb6993ca
.github/workflows/security.workflows.yaml
  • actions/checkout v4.1.1@b4ffde65f46336ab88eb53be808477a3936bae11
  • zgosalvez/github-actions-ensure-sha-pinned-actions v3.0.3@ba37328d4ea95eaf8b3bd6c6cef308f709a5f2ec
.github/workflows/templates.terraform.pull_requests.lint.yaml
  • actions/checkout v4.1.1@b4ffde65f46336ab88eb53be808477a3936bae11
  • hashicorp/setup-terraform v3.0.0@a1502cd9e758c50496cc9ac5308c4843bcd56d36
  • marocchino/sticky-pull-request-comment v2.8.0@efaaab3fd41a9c3de579aba759d2552635e590fd
  • marocchino/sticky-pull-request-comment v2.8.0@efaaab3fd41a9c3de579aba759d2552635e590fd
  • marocchino/sticky-pull-request-comment v2.8.0@efaaab3fd41a9c3de579aba759d2552635e590fd
  • marocchino/sticky-pull-request-comment v2.8.0@efaaab3fd41a9c3de579aba759d2552635e590fd
.github/workflows/templates.terraform.pull_requests.plan.yaml
  • actions/checkout v4.1.1@b4ffde65f46336ab88eb53be808477a3936bae11
  • hashicorp/setup-terraform v3.0.0@a1502cd9e758c50496cc9ac5308c4843bcd56d36
  • marocchino/sticky-pull-request-comment v2.8.0@efaaab3fd41a9c3de579aba759d2552635e590fd
  • marocchino/sticky-pull-request-comment v2.8.0@efaaab3fd41a9c3de579aba759d2552635e590fd
  • marocchino/sticky-pull-request-comment v2.8.0@efaaab3fd41a9c3de579aba759d2552635e590fd
.github/workflows/terraform.plan.yaml
  • actions-ecosystem/action-remove-labels d05162525702062b6bdef750ed8594fc024b3ed7
terraform
versions.tf
  • http ~> 3.0
  • null ~> 3.0
  • random ~> 3.0
  • tls ~> 4.0
  • hashicorp/terraform ~> 1.0

  • Check this box to trigger a request for Renovate to run again on this repository

Generated kubeconfig cannot be used (certificate signed by unknown authority)

Disclaimer: I'm new to Terraform and am just getting started provisioning a small k3s setup at home.

After running apply, the cluster is bootstrapped as expected and seems to be working. However, the kube_config output does not appear to generate a valid config file that allows communication with the cluster itself:

$ kubectl get node
Unable to connect to the server: x509: certificate signed by unknown authority

If I scp the k3s kubeconfig to my machine and replace the IP, it works just as expected and I am able to communicate with the cluster.

I'll attach my main.tf, which is extremely barebones for now, to show my setup.

terraform {
  required_providers {}
}

module "k3s" {
  source      = "xunleii/k3s/module"
  version     = "3.2.0"
  k3s_version = "v1.25.2+k3s1"
  use_sudo    = true

  servers = {
    k3s-01 = {
      ip = "<ip>"
      connection = {
        user        = "<username>"
        private_key = file("/home/<me>/.ssh/id_rsa")
      }
    }
  }
}

output "kubeconfig" {
  value     = module.k3s.kube_config
  sensitive = true
}

resource "local_sensitive_file" "kubeconfig_file" {
  content  = module.k3s.kube_config
  filename = "/home/<me>/.kube/config"
}

Am I missing something? Is this an issue with the k3s version I'm using being newer than tested configurations? I would expect a generated kubeconfig from this module to be usable against the generated cluster.

Any help would be appreciated; this module seems like a great way to bootstrap a cluster without using something like ansible, which does not seem to handle state changes as well as terraform.

Servers must have an odd number of nodes

Trying to create HA k3s cluster based on https://docs.k3s.io/datastore/ha:

Two or more server nodes that will serve the Kubernetes API and run other control plane services

But I cannot create two nodes because of:

  validation {
    condition     = length(var.servers) % 2 == 1
    error_message = "Servers must have an odd number of nodes."
  }

error_message = "Servers must have an odd number of nodes."

Why does the module restrict the server count to an odd number of nodes?

Windows Terraform - SSH authentication failed

Hi, I'm trying to create a k3s cluster on Hetzner Cloud with this terraform script, but the script runs into a timeout when connecting to the machine over SSH. I manually created a server and assigned the key, so the key itself works fine. But when I start the script it unfortunately does not work. Where does it look for the private key?

Error: timeout - last error: SSH authentication failed (root@XXX.XXX.XXX.XXX:22): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none], no supported methods remain

module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec): Connecting to remote host via SSH...
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec): Host: XXX.XXX.XXX.XXX
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec): User: root
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec): Password: false
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec): Private key: false
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec): Certificate: false
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec): SSH Agent: false
module.k3s.null_resource.k8s_ca_certificates_install[0] (remote-exec): Checking Host Key: false

My windows commands to generate a key

# Change the Windows Service config for "OpenSSH Authentication Agent"
sc config "ssh-agent" start=delayed-auto
sc start "ssh-agent"

# Create a private/public key pair
ssh-keygen -t ecdsa -b 521 -f myKey

ssh-add myKey
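The provisioner only uses what the connection block gives it, and on Windows Terraform's SSH connection historically only supports Pageant as an agent, so loading the key into the OpenSSH agent is likely not enough. Passing the generated key explicitly should work; a sketch with hypothetical addresses and the key file generated above:

servers = {
  node-1 = {
    ip = "10.0.0.1" # hypothetical internal IP
    connection = {
      host        = "203.0.113.1"   # hypothetical public IP
      user        = "root"
      private_key = file("./myKey") # the private key generated with ssh-keygen above
      # the matching public key must also be deployed on the server (e.g. via hcloud_ssh_key)
    }
  }
}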

🚧 Refresh this repository

This project is a little old and needs to be refreshed.


Definition of done

  • Update the README with #132 and #136 information
  • Include the existing example inside the README to avoid non-working examples (like in #136)
  • Add some automation like a stale bot
  • Add pre-filled templates for issues
  • Add some security checks for TF
  • Add example for at least one big cloud provider (#43)

NOTE¹: I removed #132 from this issue's scope because of its complexity; it's not just simple documentation but requires more investigation too
NOTE²: I removed the 'how to contribute' task while waiting for #133 to be implemented

Remove 'scp' dependency

Currently, this module uses 'scp' in order to get the node secret token. Instead, we could use K3S_CLUSTER_SECRET on each node and generate a random string with random_string.
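A sketch of that approach with the hashicorp/random provider (the resource name is hypothetical; the generated value would then be exported as K3S_CLUSTER_SECRET on every node instead of copying the token back with scp):

resource "random_string" "k3s_cluster_secret" {
  length  = 48
  special = false # keep the token shell-safe
}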
