
Terraform Kubernetes on Hetzner Cloud

This repository helps you set up an opinionated Kubernetes cluster with kubeadm on Hetzner Cloud.

Usage

$ git clone https://github.com/solidnerd/terraform-k8s-hcloud.git
$ cd terraform-k8s-hcloud
$ terraform init
$ terraform apply

Example

$ terraform init
$ terraform apply
$ KUBECONFIG=secrets/admin.conf kubectl get nodes
$ KUBECONFIG=secrets/admin.conf kubectl apply -f https://docs.projectcalico.org/archive/v3.15/manifests/calico.yaml
$ KUBECONFIG=secrets/admin.conf kubectl get pods --namespace=kube-system -o wide
$ KUBECONFIG=secrets/admin.conf kubectl run nginx --image=nginx
$ KUBECONFIG=secrets/admin.conf kubectl expose deploy nginx --port=80 --type NodePort

Variables

| Name | Default | Description | Required |
|------|---------|-------------|----------|
| hcloud_token | `` | API token generated in your Hetzner Cloud project (https://console.hetzner.cloud/projects) | Yes |
| master_count | 1 | Number of masters to create | No |
| master_image | ubuntu-20.04 | Predefined image used to spin up the machines (currently supported: ubuntu-20.04, ubuntu-18.04) | No |
| master_type | cx11 | Machine type; for more types see https://www.hetzner.de/cloud | No |
| node_count | 1 | Number of nodes to create | No |
| node_image | ubuntu-20.04 | Predefined image used to spin up the machines (currently supported: ubuntu-20.04, ubuntu-18.04) | No |
| node_type | cx11 | Machine type; for more types see https://www.hetzner.de/cloud | No |
| ssh_private_key | ~/.ssh/id_ed25519 | Private key used to access the machines | No |
| ssh_public_key | ~/.ssh/id_ed25519.pub | Public key used to authorize access to the machines | No |
| docker_version | 19.03 | Docker CE version to install | No |
| kubernetes_version | 1.18.6 | Kubernetes version to install | No |
| feature_gates | `` | Add your own feature gates for kubeadm | No |
| calico_enabled | false | Installs the Calico network provider after the master comes up | No |

All variables can be passed via environment variables, a tfvars file, or command-line arguments.

An example terraform.tfvars file would look like the following:

# terraform.tfvars
hcloud_token = "<yourgeneratedtoken>"
master_type = "cx21"
master_count = 1
node_type = "cx31"
node_count = 2
kubernetes_version = "1.18.6"
docker_version = "19.03"

Or pass them directly as arguments:

$ terraform apply \
  -var hcloud_token="<yourgeneratedtoken>" \
  -var docker_version=19.03 \
  -var kubernetes_version=1.18.6 \
  -var master_type=cx21 \
  -var master_count=1 \
  -var node_type=cx31 \
  -var node_count=2
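
Variables can also be supplied as environment variables with the TF_VAR_ prefix, which Terraform picks up automatically. A minimal sketch:

$ export TF_VAR_hcloud_token="<yourgeneratedtoken>"
$ export TF_VAR_node_count=2
$ terraform apply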

Contributing

Bug Reports & Feature Requests

Please use the issue tracker to report any bugs or file feature requests.

Tested with

terraform-k8s-hcloud's People

Contributors

anon6789, fogs, solidnerd, splattael, tillepille


terraform-k8s-hcloud's Issues

Error running command - could not find a ready tiller pod

Hi team, looks like there are some issues with the scripts:

null_resource.kube-cni: Provisioning with 'local-exec'...
null_resource.kube-cni (local-exec): Executing: ["/bin/sh" "-c" "KUBECONFIG=secrets/admin.conf helm install -n kube-system hcloud-csi-driver mlohr/hcloud-csi-driver --set csiDriver.secret.create=true --set csiDriver.secret.hcloudApiToken=0lQ5BEtHPUxodken3TCqF6pR7ZA112DSFmf5K71mEqM9YVUOSIiOj8Kt68LNM2bV"]
null_resource.kube-cni (local-exec): Error: could not find a ready tiller pod
╷
│ Error: local-exec provisioner error
│
│   with null_resource.kube-cni,
│   on 03-kube-post-init.tf line 59, in resource "null_resource" "kube-cni":
│   59:   provisioner "local-exec" {
│
│ Error running command 'KUBECONFIG=secrets/admin.conf helm install -n kube-system hcloud-csi-driver mlohr/hcloud-csi-driver --set csiDriver.secret.create=true --set
│ csiDriver.secret.hcloudApiToken=0lQ5BEtHPUxodken3TCqF6pR7ZA112DSFmf5K71mEqM9YVUOSIiOj8Kt68LNM2bV': exit status 1. Output: Error: could not find a ready tiller pod
│
╵

I am trying to deploy a cluster with 3 masters and 2 workers, and it looks like something is going wrong with the CNI and CoreDNS pods:

| => KUBECONFIG=secrets/admin.conf kubectl get nodes
NAME                    STATUS     ROLES    AGE     VERSION
k8s-helsinki-master-1   NotReady   master   10m     v1.18.6
k8s-helsinki-master-2   NotReady   master   8m52s   v1.18.6
k8s-helsinki-master-3   NotReady   master   7m41s   v1.18.6
k8s-helsinki-node-1     NotReady   <none>   5m53s   v1.18.6
k8s-helsinki-node-2     NotReady   <none>   6m13s   v1.18.6
________________________________________________________________________________
| ~/Documents/Code/ubloquity/terraform-k8s-hetzner-DigitalOcean-Federation/hetzner_01 @ jperez-mbp (jperez)
| => KUBECONFIG=secrets/admin.conf kubectl get pods -A -o wide
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE     IP              NODE                    NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-9rj9r                        0/1     Pending   0          10m     <none>          <none>                  <none>           <none>
kube-system   coredns-66bff467f8-qqzvp                        0/1     Pending   0          10m     <none>          <none>                  <none>           <none>
kube-system   etcd-k8s-helsinki-master-1                      1/1     Running   0          10m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-system   etcd-k8s-helsinki-master-2                      1/1     Running   0          8m48s   65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-system   etcd-k8s-helsinki-master-3                      1/1     Running   0          7m37s   65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-system   kube-apiserver-k8s-helsinki-master-1            1/1     Running   0          10m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-system   kube-apiserver-k8s-helsinki-master-2            1/1     Running   0          8m51s   65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-system   kube-apiserver-k8s-helsinki-master-3            1/1     Running   0          7m40s   65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-system   kube-controller-manager-k8s-helsinki-master-1   1/1     Running   1          10m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-system   kube-controller-manager-k8s-helsinki-master-2   1/1     Running   0          8m51s   65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-system   kube-controller-manager-k8s-helsinki-master-3   1/1     Running   0          7m41s   65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-system   kube-proxy-6mhh7                                1/1     Running   0          10m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-system   kube-proxy-fxmhr                                1/1     Running   0          7m42s   65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-system   kube-proxy-h4lt9                                1/1     Running   0          5m54s   65.21.251.5     k8s-helsinki-node-1     <none>           <none>
kube-system   kube-proxy-r85mj                                1/1     Running   0          8m52s   65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-system   kube-proxy-v2fvk                                1/1     Running   0          6m14s   65.108.86.224   k8s-helsinki-node-2     <none>           <none>
kube-system   kube-scheduler-k8s-helsinki-master-1            1/1     Running   1          10m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-system   kube-scheduler-k8s-helsinki-master-2            1/1     Running   0          8m52s   65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-system   kube-scheduler-k8s-helsinki-master-3            1/1     Running   0          7m40s   65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-system   tiller-deploy-56b574c76d-5t8bs                  0/1     Pending   0          5m43s   <none>          <none>                  <none>           <none>
kube-system   tiller-deploy-587d84cd48-jl9nl                  0/1     Pending   0          5m47s   <none>          <none>                  <none>           <none>

Any ideas?

| => KUBECONFIG=secrets/admin.conf kubectl get pods --namespace=kube-system -o wide

NAME                                            READY   STATUS    RESTARTS   AGE     IP              NODE                    NOMINATED NODE   READINESS GATES
coredns-66bff467f8-9rj9r                        0/1     Pending   0          11m     <none>          <none>                  <none>           <none>
coredns-66bff467f8-qqzvp                        0/1     Pending   0          11m     <none>          <none>                  <none>           <none>
etcd-k8s-helsinki-master-1                      1/1     Running   0          11m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
etcd-k8s-helsinki-master-2                      1/1     Running   0          10m     65.21.251.220   k8s-helsinki-master-2   <none>           <none>
etcd-k8s-helsinki-master-3                      1/1     Running   0          9m1s    65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-apiserver-k8s-helsinki-master-1            1/1     Running   0          11m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-apiserver-k8s-helsinki-master-2            1/1     Running   0          10m     65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-apiserver-k8s-helsinki-master-3            1/1     Running   0          9m4s    65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-controller-manager-k8s-helsinki-master-1   1/1     Running   1          11m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-controller-manager-k8s-helsinki-master-2   1/1     Running   0          10m     65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-controller-manager-k8s-helsinki-master-3   1/1     Running   0          9m5s    65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-proxy-6mhh7                                1/1     Running   0          11m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-proxy-fxmhr                                1/1     Running   0          9m6s    65.21.4.190     k8s-helsinki-master-3   <none>           <none>
kube-proxy-h4lt9                                1/1     Running   0          7m18s   65.21.251.5     k8s-helsinki-node-1     <none>           <none>
kube-proxy-r85mj                                1/1     Running   0          10m     65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-proxy-v2fvk                                1/1     Running   0          7m38s   65.108.86.224   k8s-helsinki-node-2     <none>           <none>
kube-scheduler-k8s-helsinki-master-1            1/1     Running   1          11m     65.21.0.135     k8s-helsinki-master-1   <none>           <none>
kube-scheduler-k8s-helsinki-master-2            1/1     Running   0          10m     65.21.251.220   k8s-helsinki-master-2   <none>           <none>
kube-scheduler-k8s-helsinki-master-3            1/1     Running   0          9m4s    65.21.4.190     k8s-helsinki-master-3   <none>           <none>
tiller-deploy-56b574c76d-5t8bs                  0/1     Pending   0          7m7s    <none>          <none>                  <none>           <none>
tiller-deploy-587d84cd48-jl9nl                  0/1     Pending   0          7m11s   <none>          <none>                  <none>           <none>

Following some links, this looks similar to:
coredns-pod-is-not-running-in-kubernetes

| => KUBECONFIG=secrets/admin.conf kubectl describe pods coredns-66bff467f8-9rj9r --namespace=kube-system
Name:                 coredns-66bff467f8-9rj9r
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 <none>
Labels:               k8s-app=kube-dns
                      pod-template-hash=66bff467f8
Annotations:          <none>
Status:               Pending
IP:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  11m                default-scheduler  0/2 nodes are available: 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  10m                default-scheduler  0/3 nodes are available: 3 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  9m7s               default-scheduler  0/5 nodes are available: 5 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  9m7s               default-scheduler  0/5 nodes are available: 5 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  12m (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  12m (x3 over 12m)  default-scheduler  0/2 nodes are available: 2 node(s) had taint {node.cloudprovider.kubernetes.io/uninitialized: true}, that the pod didn't tolerate.
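
The node.cloudprovider.kubernetes.io/uninitialized taint is normally removed by a cloud controller manager once it has initialized each node; until that happens, CoreDNS and tiller stay Pending. A minimal diagnostic sketch (assuming the same secrets/admin.conf kubeconfig; remove the taint manually only if no cloud controller manager will run in this cluster):

$ KUBECONFIG=secrets/admin.conf kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
$ KUBECONFIG=secrets/admin.conf kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized-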

Could not get lock /var/lib/apt/lists/lock

It looks like background apt processes on a newly installed server are causing an error during terraform apply:

hcloud_ssh_key.k8s_admin: Creating...
hcloud_ssh_key.k8s_admin: Creation complete after 0s [id=1107844]
hcloud_server.master[0]: Creating...
hcloud_server.master[0]: Still creating... [10s elapsed]
hcloud_server.master[0]: Provisioning with 'file'...
hcloud_server.master[0]: Still creating... [20s elapsed]
hcloud_server.master[0]: Still creating... [30s elapsed]
hcloud_server.master[0]: Provisioning with 'file'...
hcloud_server.master[0]: Provisioning with 'remote-exec'...
hcloud_server.master[0] (remote-exec): Connecting to remote host via SSH...
hcloud_server.master[0] (remote-exec):   Host: 78.47.128.104
hcloud_server.master[0] (remote-exec):   User: root
hcloud_server.master[0] (remote-exec):   Password: false
hcloud_server.master[0] (remote-exec):   Private key: true
hcloud_server.master[0] (remote-exec):   Certificate: false
hcloud_server.master[0] (remote-exec):   SSH Agent: false
hcloud_server.master[0] (remote-exec):   Checking Host Key: false
hcloud_server.master[0] (remote-exec): Connected!
hcloud_server.master[0]: Still creating... [40s elapsed]
hcloud_server.master[0]: Still creating... [50s elapsed]
hcloud_server.master[0]: Still creating... [1m0s elapsed]
hcloud_server.master[0]: Still creating... [1m10s elapsed]
hcloud_server.master[0]: Still creating... [1m20s elapsed]
hcloud_server.master[0] (remote-exec): E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)
hcloud_server.master[0] (remote-exec): E: Unable to lock directory /var/lib/apt/lists/


Error: error executing "/tmp/terraform_1748166156.sh": Process exited with status 100
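
A common workaround (a sketch only; not something bootstrap.sh currently does) is to wait until the automatic apt/unattended-upgrades processes on a fresh server release their locks before installing packages:

# Wait for other apt/dpkg processes to release their locks.
while fuser /var/lib/apt/lists/lock /var/lib/dpkg/lock >/dev/null 2>&1; do
  echo "Waiting for other apt/dpkg processes to finish..."
  sleep 5
done
apt-get update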

Cilium.yaml

Error: Error running command 'KUBECONFIG=secrets/admin.conf kubectl apply -f ./cilium-firewall.yaml': exit status 1. Output: error: unable to recognize "./cilium-firewall.yaml": no matches for kind "CiliumClusterwideNetworkPolicy" in version "cilium.io/v2"
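
This error usually means the CiliumClusterwideNetworkPolicy CRD was not yet registered when the policy was applied. A hedged sketch: install Cilium first, wait for the CRD to become established, then apply the firewall policy:

$ KUBECONFIG=secrets/admin.conf kubectl wait --for condition=established --timeout=60s crd/ciliumclusterwidenetworkpolicies.cilium.io
$ KUBECONFIG=secrets/admin.conf kubectl apply -f ./cilium-firewall.yaml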

Error in package dependencies during bootstrap.sh

apt-get -q install -y kubelet kubeadm
Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 kubeadm : Depends: kubernetes-cni (= 0.6.0) but 0.7.5-00 is to be installed
 kubelet : Depends: kubernetes-cni (= 0.6.0) but 0.7.5-00 is to be installed
E: Unable to correct problems, you have held broken packages.
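
A possible fix (a sketch; the exact version strings depend on the Kubernetes release and what the apt repository offers) is to pin all related packages explicitly instead of letting apt resolve kubernetes-cni to the newest version:

apt-cache madison kubeadm   # list the versions available in the repository
apt-get -q install -y kubelet=<version> kubeadm=<version> kubernetes-cni=0.6.0-00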

ingress examples don't work

Hi there, I tried to follow the examples provided on the web:
https://alexslubsky.medium.com/setup-highly-available-kubernetus-cluster-with-hetzner-cloud-and-terraform-941a9e25ddf6

KUBECONFIG=secrets/admin.conf helm install -n kube-system tmp-ingress ingress-nginx/ingress-nginx -f demo-ingress.yml
KUBECONFIG=secrets/admin.conf kubectl apply -f demo-app.yml

That generates two services, one of them of type LoadBalancer:

| => KUBECONFIG=secrets/admin.conf helm install -n kube-system tmp-ingress ingress-nginx/ingress-nginx -f demo-ingress.yml

NAME: tmp-ingress
LAST DEPLOYED: Thu Dec 16 02:50:32 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace kube-system get services -o wide -w tmp-ingress-ingress-nginx-controller'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

and then the app

| ~/Documents/code/ubloquity/wireguard/rancher @ jperez-mbp (jperez)
| => KUBECONFIG=secrets/admin.conf kubectl apply -f demo-app.yml
service/hello-kubernetes-custom unchanged
deployment.apps/hello-kubernetes-custom unchanged
ingress.networking.k8s.io/hello-kubernetes-ingress created
ingress.networking.k8s.io/hello-kubernetes-ingress-world created

I can see the load balancer in the Hetzner UI, but curl against it only returns a 404:

| => KUBECONFIG=secrets/admin.conf kubectl get services --all-namespaces=true
NAMESPACE     NAME                                             TYPE           CLUSTER-IP       EXTERNAL-IP                                    PORT(S)                      AGE
default       hello-kubernetes-custom                          ClusterIP      10.100.61.10     <none>                                         80/TCP                       63s
default       kubernetes                                       ClusterIP      10.96.0.1        <none>                                         443/TCP                      73m
kube-system   cilium-agent                                     ClusterIP      None             <none>                                         9095/TCP                     71m
kube-system   hcloud-csi-driver-controller-metrics             ClusterIP      10.105.159.126   <none>                                         9189/TCP                     71m
kube-system   hcloud-csi-driver-node-metrics                   ClusterIP      10.109.99.41     <none>                                         9189/TCP                     71m
kube-system   kube-dns                                         ClusterIP      10.96.0.10       <none>                                         53/UDP,53/TCP,9153/TCP       73m
kube-system   tmp-ingress-ingress-nginx-controller             LoadBalancer   10.102.124.233   10.88.0.4,2a01:4f9:c01e:447::1,95.217.171.73   80:32760/TCP,443:31786/TCP   71s
kube-system   tmp-ingress-ingress-nginx-controller-admission   ClusterIP      10.100.43.236    <none>                                         443/TCP                      71s

and curl:

| ~/Documents/code/ubloquity/wireguard/rancher @ jperez-mbp (jperez)
| => curl -I http://95.217.171.73/testpath

HTTP/1.1 404 Not Found
Date: Thu, 16 Dec 2021 01:52:15 GMT
Content-Type: text/html
Content-Length: 146
Connection: keep-alive
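
A 404 from the controller means the request did reach nginx but no Ingress rule matched; the rules usually match on the Host header. A quick check (the hostname below is a hypothetical placeholder; substitute the host defined in demo-app.yml):

$ curl -I -H "Host: hello-kubernetes.example.com" http://95.217.171.73/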

Features wanted? (hcloud-cloud-controller-manager, CSI driver, firewalls)

Hi,
I tried the Terraform project to set up a cluster in the Hetzner cloud. It works quite well, thank you.
However, I added some more features in my fork, for example the hcloud-cloud-controller-manager, the CSI driver, and firewalls.

Do you want any of these changes as a PR in this project?
Greetings

Always newest docker version installed

Despite

variable "docker_version" {
  default = "17.03"
}

the installed version of Docker is 18.03.1-ce. That produces an error:

[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03

edit:
indeed, the newest version is always installed, because the setup only adds Docker's apt repository without pinning a version:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

Add autoscaling

I am thinking of migrating my private cluster to Hetzner to save some costs. I would really prefer autoscaling up to a fixed number of worker nodes. If you are interested, I can submit a pull request once I have finished it.
