hcloud-cloud-controller-manager's Issues

No matches for kind "Deployment" in version "extensions/v1beta1"

Kubernetes v1.16.1

I get the error no matches for kind "Deployment" in version "extensions/v1beta1" with the following command: kubectl apply -f https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/v1.4.0-networks.yaml

Is this my fault or is this a bug?
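For context: Kubernetes v1.16 stopped serving Deployment from the extensions/v1beta1 API group, so a manifest that still uses the old group fails exactly like this; Deployment is only available as apps/v1. As a rough sketch (not the project's actual manifest — the labels and image tag below are placeholders), the Deployment header in the deploy file would need to look like this:

apiVersion: apps/v1            # was: extensions/v1beta1
kind: Deployment
metadata:
  name: hcloud-cloud-controller-manager
  namespace: kube-system
spec:
  replicas: 1
  selector:                    # a selector is mandatory in apps/v1
    matchLabels:
      app: hcloud-cloud-controller-manager
  template:
    metadata:
      labels:
        app: hcloud-cloud-controller-manager
    spec:
      containers:
        - name: hcloud-cloud-controller-manager
          image: hetznercloud/hcloud-cloud-controller-manager:v1.4.0  # placeholder tag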

Support bare metal servers too

Is there a plan for the Hetzner Cloud API to support bare metal servers?
If yes, what is the estimated timeline?

Then it would be nice to have this support :)

Support overwrite of values via labels

Hello,

I am using a patched version of this controller in my kubernetes cluster to allow overwriting specific values. In hetzneronline/community-content#119 it was suggested to contribute this back to the upstream controller.

I'm not quite sure if this is something you would want in the controller, though. My use case is to allow spinning up a WireGuard instance in the cluster and overwriting the (nonexistent) internal IP via a label. Opinions from your side? :)

If you are interested in this, I'll have to refurbish the changes to make them a little more generally approachable (current changes: master...cbeneke:master)

Cheers

LoadBalancer - targets and services not automatically getting registered

Hi,
I am trying to deploy a Kubernetes cluster via kubespray and use the newly offered load balancers as the front-facing component, and I am running into a few problems. Mainly, a few things (such as adding targets and services) have to be done manually. More details below:

Process/Deployment

  1. Created private network and subnet in 172.16.0.0/12 range
  2. Deployed Kubernetes v1.17.6 with calico as CNI with IP in IP mode enabled.
  3. Installed the cloud controller manager with Network support
  4. Installed the nginx ingress controller with the following values and annotations:
controller:
  kind: DaemonSet
  service:
    annotations:
      load-balancer.hetzner.cloud/location: fsn1
      load-balancer.hetzner.cloud/protocol: "tcp"
      load-balancer.hetzner.cloud/use-private-ip: "true"
      load-balancer.hetzner.cloud/name: "applications"
      load-balancer.hetzner.cloud/disable-public-network: "false"
      load-balancer.hetzner.cloud/hostname: "apps-loadbalancer"

with

$ helm install ingress-nginx ingress-nginx/ingress-nginx -f values.yaml 

Problem

Now the load balancer is provisioned by the controller, but the following problems occur:

  • No nodes are added as targets to the load balancer. I had to manually add the targets via the Cloud Console.
  • No load balancer services were created, so I had to manually create two TCP services mapping port 80 to 31739 and port 443 to 30034.
  • The service always remains in the pending state, as shown below:
$ kubectl get service
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
adminer                              ClusterIP      10.233.42.145   <none>        80/TCP                       8h
ingress-nginx-controller             LoadBalancer   10.233.44.37    <pending>     80:31739/TCP,443:30034/TCP   28m
ingress-nginx-controller-admission   ClusterIP      10.233.2.112    <none>        443/TCP                      28m
kubernetes                           ClusterIP      10.233.0.1      <none>        443/TCP                      9h
mysql                                ClusterIP      10.233.39.172   <none>        3306/TCP                     8h

And for reference, here is the full service definition as installed by the helm chart:

apiVersion: v1
kind: Service
metadata:
  annotations:
    load-balancer.hetzner.cloud/disable-public-network: "false"
    load-balancer.hetzner.cloud/hostname: apps-loadbalancer
    load-balancer.hetzner.cloud/location: fsn1
    load-balancer.hetzner.cloud/name: applications
    load-balancer.hetzner.cloud/protocol: tcp
    load-balancer.hetzner.cloud/use-private-ip: "true"
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2020-06-29T21:31:15Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    helm.sh/chart: ingress-nginx-2.9.0
  name: ingress-nginx-controller
  namespace: default
  resourceVersion: "94540"
  selfLink: /api/v1/namespaces/default/services/ingress-nginx-controller
  uid: ddb44648-d0fd-4708-a0cf-c9084a1e7d4f
spec:
  clusterIP: 10.233.44.37
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31739
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30034
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}

So as you can see, these are the problems:

  • Targets are not automatically registered. Is this expected, and are they to be registered manually?
  • Services for the load balancer are not created and have to be created manually. Is this expected?
  • The Kubernetes service always remains in the Pending state.

Additional information

The hcloud controller does not add additional annotations to the Node resources:

apiVersion: v1
kind: Node
metadata:
  annotations:
    alpha.kubernetes.io/provided-node-ip: 172.16.0.5
    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2020-06-29T12:46:02Z"
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: kubernetes-node-0
    kubernetes.io/os: linux
  name: kubernetes-node-0
  resourceVersion: "99825"
  selfLink: /api/v1/nodes/kubernetes-node-0
  uid: b79e30ff-4095-4100-a0d7-cd58c2d22eee
spec:
  podCIDR: 10.233.67.0/24
  podCIDRs:
  - 10.233.67.0/24
status:
  addresses:
  - address: kubernetes-node-0
    type: Hostname
  - address: 159.x.x.x
    type: ExternalIP
  - address: 172.16.0.5
    type: InternalIP
  allocatable:
    cpu: 1900m
    ephemeral-storage: "72456848060"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 7620980Ki
    pods: "110"
  capacity:
    cpu: "2"
    ephemeral-storage: 78620712Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 7973380Ki
    pods: "110"

  nodeInfo:
    architecture: amd64
    bootID: eedb8135-a35b-4a25-89b3-73096860db30
    containerRuntimeVersion: docker://18.9.7
    kernelVersion: 4.15.0-99-generic
    kubeProxyVersion: v1.17.6
    kubeletVersion: v1.17.6
    machineID: 158faf6e3fc94edba9bb12d5e5b48db7
    operatingSystem: linux
    osImage: Ubuntu 18.04.4 LTS
    systemUUID: 158FAF6E-3FC9-4EDB-A9BB-12D5E5B48DB7

The logs for the hcloud controller contain the following:

E0629 21:07:10.125243       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479481
E0629 21:07:10.658447       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479482
E0629 21:07:11.111480       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479484
E0629 21:07:11.576658       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479483
E0629 21:12:12.059902       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479483
E0629 21:12:12.513601       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479481
E0629 21:12:12.923371       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479482
E0629 21:12:13.398487       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479484
E0629 21:17:13.829197       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479483
E0629 21:17:14.261718       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479481
E0629 21:17:14.751126       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479482
E0629 21:17:15.227245       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479484
E0629 21:22:15.724172       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479483
E0629 21:22:16.180591       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479481
E0629 21:22:16.637356       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479482
E0629 21:22:17.497877       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479484
E0629 21:27:18.003477       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479483
E0629 21:27:18.521352       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479481
E0629 21:27:19.015129       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479482
E0629 21:27:19.442692       1 node_controller.go:237] hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6479484
I0629 21:31:15.054597       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"ingress-nginx-controller", UID:"ddb44648-d0fd-4708-a0cf-c9084a1e7d4f", APIVersion:"v1", ResourceVersion:"92882", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
I0629 21:31:15.067065       1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="ingress-nginx-controller" nodes=[kubernetes-node-1 kubernetes-node-0 kubernetes-node-2]
E0629 21:31:16.864055       1 controller.go:244] error processing service default/ingress-nginx-controller (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: hcops/providerIDToServerID: missing prefix hcloud://: 
I0629 21:31:16.864760       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"ingress-nginx-controller", UID:"ddb44648-d0fd-4708-a0cf-c9084a1e7d4f", APIVersion:"v1", ResourceVersion:"92882", FieldPath:""}): type: 'Warning' reason: 'SyncLoadBalancerFailed' Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: hcops/providerIDToServerID: missing prefix hcloud://: 
I0629 21:31:21.865008       1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="ingress-nginx-controller" nodes=[kubernetes-node-1 kubernetes-node-0 kubernetes-node-2]
I0629 21:31:21.866657       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"ingress-nginx-controller", UID:"ddb44648-d0fd-4708-a0cf-c9084a1e7d4f", APIVersion:"v1", ResourceVersion:"92885", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
I0629 21:31:21.960424       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"ingress-nginx-controller", UID:"ddb44648-d0fd-4708-a0cf-c9084a1e7d4f", APIVersion:"v1", ResourceVersion:"92885", FieldPath:""}): type: 'Warning' reason: 'SyncLoadBalancerFailed' Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: hcops/providerIDToServerID: missing prefix hcloud://: 
E0629 21:31:21.959360       1 controller.go:244] error processing service default/ingress-nginx-controller (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: hcops/providerIDToServerID: missing prefix hcloud://: 
I0629 21:31:31.961844       1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="ingress-nginx-controller" nodes=[kubernetes-node-2 kubernetes-node-1 kubernetes-node-0]
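The repeated "missing prefix hcloud://" errors indicate that the Node objects carry a bare numeric spec.providerID (e.g. 6479481) instead of the hcloud://<server-id> form the controller parses. As a minimal, illustrative sketch (the server ID is taken from the log lines above), a node the controller can match would look like:

apiVersion: v1
kind: Node
metadata:
  name: kubernetes-node-0
spec:
  providerID: hcloud://6479481   # the log above shows a bare "6479481" here

With a bare ID in place, InstanceExistsByProviderID and ReconcileHCLBTargets both fail, which matches why no targets get attached and the Service stays in <pending>.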

Helm Chart?

Hi,

are you planning to provide a Helm Chart for installing hcloud-cloud-controller-manager? If not, I would provide one if you are interested.

Best regards
Matthias

Crash when LBs are deactivated

I set the new env option to disable the LB handling.
However, that causes a crash in the module:

goroutine 197 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x18c91c0, 0x2ba3980)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x82
panic(0x18c91c0, 0x2ba3980)
	/usr/local/go/src/runtime/panic.go:969 +0x166
github.com/hetznercloud/hcloud-cloud-controller-manager/hcloud.(*loadBalancers).EnsureLoadBalancer(0x0, 0x1e109a0, 0xc000046030, 0x1b3466c, 0xa, 0xc0008d6840, 0xc000838000, 0x6, 0x8, 0x1b2eb43, ...)
	/maschine-controller/src/hcloud/load_balancers.go:84 +0x28e
k8s.io/kubernetes/pkg/controller/service.(*Controller).ensureLoadBalancer(0xc00066cb60, 0xc0008d6840, 0x0, 0x0, 0x6)
	/go/pkg/mod/k8s.io/[email protected]/pkg/controller/service/controller.go:389 +0xde
k8s.io/kubernetes/pkg/controller/service.(*Controller).syncLoadBalancerIfNeeded(0xc00066cb60, 0xc0008d6840, 0xc0004034c0, 0x1b, 0xc000111580, 0xc000969cb0, 0x14b7743)
	/go/pkg/mod/k8s.io/[email protected]/pkg/controller/service/controller.go:344 +0x879
k8s.io/kubernetes/pkg/controller/service.(*Controller).processServiceCreateOrUpdate(0xc00066cb60, 0xc0008d6840, 0xc0004034c0, 0x1b, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/controller/service/controller.go:280 +0xdf
k8s.io/kubernetes/pkg/controller/service.(*Controller).syncService(0xc00066cb60, 0xc0004034c0, 0x1b, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/controller/service/controller.go:759 +0x30a
k8s.io/kubernetes/pkg/controller/service.(*Controller).processNextWorkItem(0xc00066cb60, 0x203000)
	/go/pkg/mod/k8s.io/[email protected]/pkg/controller/service/controller.go:238 +0xf0
k8s.io/kubernetes/pkg/controller/service.(*Controller).worker(0xc00066cb60)
	/go/pkg/mod/k8s.io/[email protected]/pkg/controller/service/controller.go:227 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00040f4f0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00040f4f0, 0x1dc8960, 0xc000327f80, 0xc0003adb01, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00040f4f0, 0x3b9aca00, 0x0, 0x1bf9e01, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc00040f4f0, 0x3b9aca00, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/pkg/controller/service.(*Controller).Run
	/go/pkg/mod/k8s.io/[email protected]/pkg/controller/service/controller.go:216 +0x205
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x170053e]
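For reference, "the new env option" presumably refers to disabling the load balancer feature via the controller's environment; a minimal sketch of that setting in the Deployment follows (the variable name HCLOUD_LOAD_BALANCERS_ENABLED is an assumption on my side — check the README of the release you run):

env:
  - name: HCLOUD_TOKEN
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: token
  - name: HCLOUD_LOAD_BALANCERS_ENABLED   # assumed name of the disable switch
    value: "false"

The crash itself happens because EnsureLoadBalancer is still invoked on a nil loadBalancers receiver (the 0x0 in the trace, panicking at load_balancers.go:84), so the service controller keeps calling into the disabled component.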

ExternalIP still required when running Calico instead of flannel

I am running the latest hcloud-cloud-controller-manager, with Calico v3.13.2 and K8s v1.18.2, and the kubelet runs as follows:

1020 ?        Ssl    3:41 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --cloud-provider=external

I still need to specify the explicit externalIPs entry.

kind: Service
apiVersion: v1
metadata:
  name: router
spec:
  selector:
    app: router
  ports:
    - port: 80
      name: unencrypted
    - port: 443
      name: encrypted
  type: LoadBalancer
  externalTrafficPolicy: Local
  externalIPs:
    - 99.99.99.99

When I do not specify the IP, I get

default       service/router                          LoadBalancer   10.109.138.114   <pending>     80:31869/TCP,443:32569/TCP   3d4h

...which stays pending, and the router is not reachable. Is this issue on your radar, or can you advise me how I can get rid of the specific IP in my spec? Eventually I want to propagate the client IPs to a pod inside the K8s cluster. Therefore I have specified externalTrafficPolicy: Local, which does not have any effect. I assume that this is due to the specific IP address specification.

Assign static IP address to kubernetes cluster

Hello everyone,

I set up a kubernetes cluster on the Hetzner cloud and I want to run a web service in that cluster. Therefore I want to make the cluster accessible from the internet. Since I want this web service to be accessible through a domain, I need a public static IP to direct the traffic to. Initially I thought I could achieve this with the following setup:

[Internet] 
    -> [Hetzner cloud load balancer (with fixed static IP)] 
    -> [Ingress controller (e.g. traefik or nginx)] 
    -> [Kubernetes service for my web service] 
    -> [Pods for my web service]

My problem is that the load balancer is created dynamically and is assigned some IP I cannot control. I create the load balancer by deploying the Ingress controller's service resource (with spec.type = LoadBalancer).

I read the Hetzner Cloud load balancer documentation as well as the annotation documentation. It seems that none of the following alternatives I can think of are possible:

  1. Specify an existing Hetzner cloud load balancer to use. This would enable me to reuse this load balancer (and thereby its IP) in case I need to rebuild the kubernetes cluster.
  2. Specify a Hetzner cloud floating IP to use as static IP for the dynamically created LB, e.g. by referring to that reserved floating IP in the service resource similar to what is possible in Google's GKE by setting the service's spec.loadBalancerIP accordingly.
  3. Assign a Hetzner cloud floating IP to the dynamically created load balancer via the HCloud web interface / API. This is not possible as floating IPs can only be assigned to nodes, not load balancers. However, I would like to avoid this approach anyway since I prefer to manage this using Kubernetes resources only.

Am I missing something, or do the Hetzner Cloud load balancers that just left the beta phase not (yet) support this scenario?
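For illustration, this is what option 2 above would look like if it were supported. loadBalancerIP is the upstream Kubernetes Service field; whether the hcloud controller honors it is exactly the open question here, and the name and IP are placeholders:

kind: Service
apiVersion: v1
metadata:
  name: ingress-controller           # placeholder name
  annotations:
    load-balancer.hetzner.cloud/location: fsn1
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10       # the reserved floating/static IP to pin
  selector:
    app: ingress-controller
  ports:
    - name: http
      port: 80
    - name: https
      port: 443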

Share same load balancer for multiple services

Is there any way to make Kubernetes reuse / configure the same LoadBalancer for multiple services, as long as each service is configured on different ports?

I see that the load balancers in the Cloud UI accept up to 5 service definitions. If I have understood the goal of defining multiple services in a load balancer correctly, I guess it would be nice to be able to achieve this configuration directly via annotations on the k8s services.

For example with metallb we can do something like metallb.universe.tf/allow-shared-ip: "float-ip" to reuse the same external IP for multiple services (of course the ports cannot be reused).

I have tried setting the same load-balancer.hetzner.cloud/name annotation in 2 services but one of them overrides the definition of the other and I end up with only one service really working.

Apologies if I'm missing something and the question makes no sense ;)
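For reference, the attempt described above looks roughly like this (service names and selectors are placeholders); the open question is whether the controller can map both onto one Hetzner load balancer instead of letting the second definition override the first:

kind: Service
apiVersion: v1
metadata:
  name: service-a
  annotations:
    load-balancer.hetzner.cloud/name: "shared-lb"
spec:
  type: LoadBalancer
  selector:
    app: app-a
  ports:
    - port: 80
---
kind: Service
apiVersion: v1
metadata:
  name: service-b
  annotations:
    load-balancer.hetzner.cloud/name: "shared-lb"
spec:
  type: LoadBalancer
  selector:
    app: app-b
  ports:
    - port: 8080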

SyncLoadBalancerFailed

Hello,

I followed the instructions described in this blog https://community.hetzner.com/tutorials/install-kubernetes-cluster.

Everything went pretty smoothly and the cluster looks like it is set up ok. Unfortunately, when deploying something with type=LoadBalancer, the service is not responding after getting the external IP.

root@master-1:~# kubectl describe services my-service
Name: my-service
Namespace: default
Labels: app.kubernetes.io/name=load-balancer-example
Annotations:
Selector: app.kubernetes.io/name=load-balancer-example
Type: LoadBalancer
IP: 10.108.68.170
LoadBalancer Ingress: xxx.xx.xxx.xx
Port: 8080/TCP
TargetPort: 8080/TCP
NodePort: 31417/TCP
Endpoints: 10.244.3.10:8080,10.244.3.11:8080,10.244.3.12:8080 + 2 more...
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message


Warning AllocationFailed 4m21s (x2 over 4m21s) metallb-controller Failed to allocate IP for "default/my-service": no available IPs
Normal IPAllocated 2m9s metallb-controller Assigned IP "xxx.xx.xxx.xx"
Normal nodeAssigned 2m9s metallb-speaker announcing from node "worker-3"
Normal EnsuringLoadBalancer 106s (x6 over 4m21s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 106s (x6 over 4m21s) service-controller Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.Create: neither load-balancer.hetzner.cloud/location nor load-balancer.hetzner.cloud/network-zone set

I rechecked the walkthrough instructions and made sure I did everything in the same order as described.

I am not sure what to check or how to debug this properly. I hope you can help point me in the right direction.
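The last event names the missing piece: the controller needs to know where to create the load balancer, via either the load-balancer.hetzner.cloud/location or the load-balancer.hetzner.cloud/network-zone annotation. A minimal sketch of the Service with one of them set (fsn1 and eu-central are only example values — pick the location/zone of your servers):

kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    load-balancer.hetzner.cloud/location: fsn1
    # or instead: load-balancer.hetzner.cloud/network-zone: eu-central
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: load-balancer-example
  ports:
    - port: 8080
      targetPort: 8080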

hcloud provider network start fail

Hello all.
When I try to start hcloud-cloud-controller-manager with network support, I get these error logs:

kubectl logs -f hcloud-cloud-controller-manager-7f59845fb6-smtwn

Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
ERROR: logging before flag.Parse: W0916 13:48:44.704964 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
ERROR: logging before flag.Parse: W0916 13:48:44.712289 1 controllermanager.go:108] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues
ERROR: logging before flag.Parse: W0916 13:48:44.713737 1 authentication.go:55] Authentication is disabled
ERROR: logging before flag.Parse: I0916 13:48:44.713808 1 insecure_serving.go:49] Serving insecurely on [::]:10253
ERROR: logging before flag.Parse: I0916 13:48:44.717345 1 node_controller.go:89] Sending events to api server.
ERROR: logging before flag.Parse: E0916 13:48:44.721045 1 controllermanager.go:240] Failed to start service controller: the cloud provider does not support external load balancers
ERROR: logging before flag.Parse: I0916 13:48:44.721880 1 pvlcontroller.go:107] Starting PersistentVolumeLabelController
ERROR: logging before flag.Parse: I0916 13:48:44.721989 1 controller_utils.go:1025] Waiting for caches to sync for persistent volume label controller
ERROR: logging before flag.Parse: I0916 13:48:44.822307 1 controller_utils.go:1032] Caches are synced for persistent volume label controller
ERROR: logging before flag.Parse: I0916 13:48:45.020493 1 route_controller.go:102] Starting route controller
ERROR: logging before flag.Parse: I0916 13:48:45.020554 1 controller_utils.go:1025] Waiting for caches to sync for route controller
ERROR: logging before flag.Parse: E0916 13:48:45.060905 1 node_controller.go:161] providerID should start with hcloud://: 3295411
ERROR: logging before flag.Parse: I0916 13:48:45.120773 1 controller_utils.go:1032] Caches are synced for route controller
ERROR: logging before flag.Parse: E0916 13:48:45.121072 1 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:72
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:65
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/maschine-controller/src/hcloud/routes.go:30
/maschine-controller/src/hcloud/routes.go:58
/go/pkg/mod/k8s.io/[email protected]/pkg/controller/route/route_controller.go:128
/go/pkg/mod/k8s.io/[email protected]/pkg/controller/route/route_controller.go:119
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:98
/usr/local/go/src/runtime/asm_amd64.s:1337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1087f97]

goroutine 126 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:58 +0x105
panic(0x161ed00, 0x29720f0)
/usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/hetznercloud/hcloud-cloud-controller-manager/hcloud.(*routes).reloadNetwork(0xc0004ba860, 0x1ae5000, 0xc00009c018, 0x203000, 0xc0002af960)
/maschine-controller/src/hcloud/routes.go:30 +0x37
github.com/hetznercloud/hcloud-cloud-controller-manager/hcloud.(*routes).ListRoutes(0xc0004ba860, 0x1ae5000, 0xc00009c018, 0x18498ad, 0xa, 0xc00088f610, 0x44bdb6, 0xc000567880, 0x7, 0x10)
/maschine-controller/src/hcloud/routes.go:58 +0x5a
k8s.io/kubernetes/pkg/controller/route.(*RouteController).reconcileNodeRoutes(0xc0002491f0, 0x44bb4b, 0xc000120738)
/go/pkg/mod/k8s.io/[email protected]/pkg/controller/route/route_controller.go:128 +0x73
k8s.io/kubernetes/pkg/controller/route.(*RouteController).Run.func1()
/go/pkg/mod/k8s.io/[email protected]/pkg/controller/route/route_controller.go:119 +0x2e
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0004ccde0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x54
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004ccde0, 0x2540be400, 0x0, 0x100, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134 +0xf8
k8s.io/apimachinery/pkg/util/wait.NonSlidingUntil(0xc0004ccde0, 0x2540be400, 0x0)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:98 +0x4d
created by k8s.io/kubernetes/pkg/controller/route.(*RouteController).Run
/go/pkg/mod/k8s.io/[email protected]/pkg/controller/route/route_controller.go:118 +0x1f2

Kubernetes version: 1.15.3
CNI: flannel
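One thing to check (an assumption on my side, based on how the networks-enabled deployment is wired in later releases): the panic in routes.go:30 dereferences the controller's network handle, so the controller has to be told which Hetzner network to use, typically through an HCLOUD_NETWORK environment variable next to the token, for example:

env:
  - name: HCLOUD_TOKEN
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: token
  - name: HCLOUD_NETWORK          # assumed variable/secret key, check the deploy file you use
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: network

If that value is missing or empty, the route controller starts but has no network to list routes for, which matches the nil pointer dereference above.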

CRDs for API resources

It would be really nice to have CRD resources for servers, networks, block storage, etc., similar to how this can be done with the Google Cloud Config Connector.

This would make writing an extension for Gardener a bit easier as well.
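Purely as an illustration of the idea (nothing like this exists in this controller today — the API group, version, and fields below are invented for the sketch), such a resource could look like:

apiVersion: hcloud.example.com/v1alpha1   # hypothetical API group/version
kind: Server
metadata:
  name: worker-1
spec:
  serverType: cx21
  image: ubuntu-20.04
  location: fsn1
  network: k8s-nodes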

External IP in pending state when using hcloud-cloud-controller-manager

Hi,

I've followed the steps listed below in order to create a Kubernetes cluster on Hetzner. I have not created floating IPs or a load balancer, and did not use the networks deployment.
When I deploy an nginx and specify the service type as LoadBalancer, the external IP stays in the pending state. I was expecting the hcloud-cloud-controller-manager to assign a load balancer IP, as the docs state it "allows to use Hetzner Cloud Load Balancers with Kubernetes Services".

Is there any step which I've missed out or is anyone else facing a similar issue?

1. Created a Network

λ hcloud network list
ID            NAME            IP RANGE       SERVERS
<NetworkID>   <NetworkName>   10.98.0.0/16   3 servers


2. Created 3 servers using hcloud (1 master & 2 worker nodes)

λ hcloud server list
ID        NAME          STATUS   IPV4             IPV6                      DATACENTER
7293376   xp-master-1   off      95.217.157.204   2a01:4f9:c010:b802::/64   hel1-dc2
7293609   xp-worker-1   off      135.181.33.219   2a01:4f9:c010:6f10::/64   hel1-dc2
7293612   xp-worker-2   off      135.181.25.216   2a01:4f9:c010:a9be::/64   hel1-dc2

3. Edited /etc/systemd/system/kubelet.service.d/20-hetzner-cloud.conf on each of the servers with the following:
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"

4. Installed Docker & Kubernetes in each of the servers

5. Configured the following in /etc/sysctl.conf
/etc/sysctl.conf

# Allow IP forwarding for kubernetes
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1


6. Ran the following on Master node

kubeadm init --apiserver-advertise-address=<MasterIPAddress> --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.8 --ignore-preflight-errors=NumCPU --apiserver-cert-extra-sans 10.0.0.1
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl -n kube-system create secret generic hcloud --from-literal=token=<HCloudAPIToken>
kubectl apply -f https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/v1.7.0.yaml
kubectl -n kube-system patch daemonset kube-flannel-ds-amd64 --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
kubectl -n kube-system patch deployment coredns --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.14/pkg/crd/manifests/csidriver.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.14/pkg/crd/manifests/csinodeinfo.yaml
kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/master/deploy/kubernetes/hcloud-csi.yml

7. Installed nginx with "kubectl run nginx --image nginx"

8. Exposed the nginx service with "kubectl expose deploy nginx --port 80 --type LoadBalancer"

9. kubectl get all shows that the nginx service is stuck waiting for an external IP

λ kubectl get all --all-namespaces                                                                                                                   
NAMESPACE     NAME                                                   READY   STATUS                       RESTARTS   AGE                             
default       pod/nginx-76df748b9-gs29b                              1/1     Running                      1          10h                             
kube-system   pod/coredns-847cb494-46f77                             1/1     Running                      2          11h                             
kube-system   pod/coredns-847cb494-6z4rw                             1/1     Running                      2          11h                             
kube-system   pod/etcd-xp-master-1                                   1/1     Running                      2          11h                             
kube-system   pod/hcloud-cloud-controller-manager-565849f78f-zdkkd   1/1     Running                      2          11h                             
kube-system   pod/hcloud-csi-controller-0                            4/5     CreateContainerConfigError   8          11h                             
kube-system   pod/hcloud-csi-node-4l5t6                              2/3     CreateContainerConfigError   4          11h                             
kube-system   pod/hcloud-csi-node-4pxvt                              2/3     CreateContainerConfigError   4          11h                             
kube-system   pod/hcloud-csi-node-578vw                              2/3     CreateContainerConfigError   4          11h                             
kube-system   pod/kube-apiserver-xp-master-1                         1/1     Running                      2          11h                             
kube-system   pod/kube-controller-manager-xp-master-1                1/1     Running                      2          11h                             
kube-system   pod/kube-flannel-ds-amd64-4pfcm                        1/1     Running                      3          11h                             
kube-system   pod/kube-flannel-ds-amd64-dzxpx                        1/1     Running                      2          11h                             
kube-system   pod/kube-flannel-ds-amd64-zwl78                        1/1     Running                      2          11h                             
kube-system   pod/kube-proxy-6wvf2                                   1/1     Running                      2          11h                             
kube-system   pod/kube-proxy-p56b4                                   1/1     Running                      2          11h                             
kube-system   pod/kube-proxy-pqq8k                                   1/1     Running                      2          11h                             
kube-system   pod/kube-scheduler-xp-master-1                         1/1     Running                      2          11h                             
                                                                                                                                                     
                                                                                                                                                     
NAMESPACE     NAME                                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE                      
default       service/kubernetes                      ClusterIP      10.96.0.1       <none>        443/TCP                  11h                      
default       service/nginx                           LoadBalancer   10.102.223.92   <pending>     80:30685/TCP             10h                      
kube-system   service/hcloud-csi-controller-metrics   ClusterIP      10.107.101.44   <none>        9189/TCP                 11h                      
kube-system   service/hcloud-csi-node-metrics         ClusterIP      10.97.205.24    <none>        9189/TCP                 11h                      
kube-system   service/kube-dns                        ClusterIP      10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   11h                      
                                                                                                                                                     
NAMESPACE     NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE             
kube-system   daemonset.apps/hcloud-csi-node           3         3         0       3            0           <none>                   11h             
kube-system   daemonset.apps/kube-flannel-ds-amd64     3         3         3       3            3           <none>                   11h             
kube-system   daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           <none>                   11h             
kube-system   daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           <none>                   11h             
kube-system   daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                   11h             
kube-system   daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           <none>                   11h             
kube-system   daemonset.apps/kube-proxy                3         3         3       3            3           kubernetes.io/os=linux   11h             
                                                                                                                                                     
NAMESPACE     NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE                                                 
default       deployment.apps/nginx                             1/1     1            1           10h                                                 
kube-system   deployment.apps/coredns                           2/2     2            2           11h                                                 
kube-system   deployment.apps/hcloud-cloud-controller-manager   1/1     1            1           11h                                                 
                                                                                                                                                     
NAMESPACE     NAME                                                         DESIRED   CURRENT   READY   AGE                                           
default       replicaset.apps/nginx-76df748b9                              1         1         1       10h                                           
kube-system   replicaset.apps/coredns-66bff467f8                           0         0         0       11h                                           
kube-system   replicaset.apps/coredns-6cf46fccfd                           0         0         0       11h                                           
kube-system   replicaset.apps/coredns-847cb494                             2         2         2       11h                                           
kube-system   replicaset.apps/hcloud-cloud-controller-manager-565849f78f   1         1         1       11h                                           
                                                                                                                                                     
NAMESPACE     NAME                                     READY   AGE                                                                                   
kube-system   statefulset.apps/hcloud-csi-controller   0/1     11h                                                                                                  

Describe on hcloud-cloud-controller-manager gives the following output

λ kubectl describe pod hcloud-cloud-controller-manager-565849f78f-zdkkd -n kube-system
Name:           hcloud-cloud-controller-manager-565849f78f-zdkkd
Namespace:      kube-system
Priority:       0
Node:           xp-master-1/95.217.157.204
Start Time:     Sat, 22 Aug 2020 23:25:01 +0530
Labels:         app=hcloud-cloud-controller-manager
                pod-template-hash=565849f78f
Annotations:    scheduler.alpha.kubernetes.io/critical-pod:
Status:         Running
IP:             10.244.0.18
Controlled By:  ReplicaSet/hcloud-cloud-controller-manager-565849f78f
Containers:
  hcloud-cloud-controller-manager:
    Container ID:  docker://702af35035bb10d72f1adfa43100cd7da35bfe9ea932bf36e596e3abedae2729
    Image:         hetznercloud/hcloud-cloud-controller-manager:v1.7.0
    Image ID:      docker-pullable://hetznercloud/hcloud-cloud-controller-manager@sha256:ce9c86c7d12be487a116cc5d689bc7fa0ba5d7b948709eb1976b0e28a396a3f6
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/hcloud-cloud-controller-manager
      --cloud-provider=hcloud
      --leader-elect=false
      --allow-untagged-cloud
    State:          Running
      Started:      Sun, 23 Aug 2020 10:43:27 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Sun, 23 Aug 2020 00:05:14 +0530
      Finished:     Sun, 23 Aug 2020 10:42:57 +0530
    Ready:          True
    Restart Count:  2
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      NODE_NAME:      (v1:spec.nodeName)
      HCLOUD_TOKEN:  <set to the key 'token' in secret 'hcloud'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from cloud-controller-manager-token-khhmh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cloud-controller-manager-token-khhmh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cloud-controller-manager-token-khhmh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
                 node.kubernetes.io/not-ready:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                    From                  Message
  ----     ------                  ----                   ----                  -------
  Warning  FailedScheduling        11h (x5 over 11h)      default-scheduler     0/1 nodes are available: 1 Insufficient cpu.
  Normal   Scheduled               11h                    default-scheduler     Successfully assigned kube-system/hcloud-cloud-controller-manager-565849f78f-zdkkd to xp-master-1
  Normal   Pulled                  11h                    kubelet, xp-master-1  Container image "hetznercloud/hcloud-cloud-controller-manager:v1.7.0" already present on machine
  Normal   Created                 11h                    kubelet, xp-master-1  Created container hcloud-cloud-controller-manager
  Normal   Started                 11h                    kubelet, xp-master-1  Started container hcloud-cloud-controller-manager
  Warning  FailedCreatePodSandBox  10h                    kubelet, xp-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c272857b00dd073007b9bf97cbf8a074fd42e1f932f6460faec41b429fca88ec" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  10h                    kubelet, xp-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4f6944d4e39e6a8750501a8ab3a9bee2ad3d5abb4f50bde3fde0fb1488ba2e2f" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  10h                    kubelet, xp-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e88f93804b90fb55340c66b5ce0ee19377617066cf3274c0b1e83868efedc1cc" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  10h                    kubelet, xp-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "72c1778e72f21e2a076df9651d599a0d5b0f043d2a209f606367cd084853b3e5" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  10h                    kubelet, xp-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c8d2bd183b76817f1f15991465d62232043866490809dc61c63e4d7f7245a725" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  10h                    kubelet, xp-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "bb3adf7e1d2484b2108f4bff720f0a64de06c94a1873617c10f0dd015defb70b" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  10h                    kubelet, xp-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "972579ac36176af478298160b4e922da0b5f2518c329d14be10d15fea3619970" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  10h                    kubelet, xp-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c4e1c05e5dadc4d31ccf87c842f8357effe99b1bfe8b5c72edbf6a3c374cd801" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  10h                    kubelet, xp-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ebc90a078d0bbf24b634bc7dac2db1a601713e57651f374f9b46f136b82890e4" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  10h (x3 over 10h)      kubelet, xp-master-1  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "aa8d0c73a8bf90811b0f014433ccbcec6814a7dd258ae84788e8b0f1b3c9c82e" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Normal   SandboxChanged          10h (x13 over 10h)     kubelet, xp-master-1  Pod sandbox changed, it will be killed and re-created.
  Warning  FailedMount             9m56s                  kubelet, xp-master-1  MountVolume.SetUp failed for volume "cloud-controller-manager-token-khhmh" : failed to sync secret cache: timed out waiting for the condition
  Warning  FailedCreatePodSandBox  9m49s                  kubelet, xp-master-1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9dfe7a9374b10c589f52a001919951a25079e2d8ec7cd460380aab5f0fd57a19" network for pod "hcloud-cloud-controller-manager-565849f78f-zdkkd": networkPlugin cni failed to set up pod "hcloud-cloud-controller-manager-565849f78f-zdkkd_kube-system" network: open /run/flannel/subnet.env: no such file or directory
  Normal   SandboxChanged          9m48s (x2 over 9m54s)  kubelet, xp-master-1  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled                  9m46s                  kubelet, xp-master-1  Container image "hetznercloud/hcloud-cloud-controller-manager:v1.7.0" already present on machine
  Normal   Created                 9m46s                  kubelet, xp-master-1  Created container hcloud-cloud-controller-manager
  Normal   Started                 9m46s                  kubelet, xp-master-1  Started container hcloud-cloud-controller-manager

Kontena Pharos support

Kontena Pharos does not support Hetzner Cloud; there is no addon for it. I asked their support and they said this controller manager can be used for that.

I have no idea how to make Pharos addon. I'm too novice for that. Would you guys please add support for Kontena Pharos?

Failure in EnsureLoadBalancer: missing prefix hcloud://

I have a running cluster and I have previously provisioned a load balancer using the instructions provided for the ingress controller. It all works fine.

However, now I am trying to do a similar thing for another service and I keep getting this:

error processing service gitlabsync/production-auto-deploy-aspnetcore (will retry): failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: hcops/providerIDToServerID: missing prefix hcloud://:

I experience this issue with any new service of type LoadBalancer with hcloud annotations.

It seems some requirements have changed but it's not documented what do I suppose to do. It looks the same as #50 but I don't really want to re-provision the cluster since it works just fine and I have quite a lot of things running there.

error: a container name must be specified for pod hcloud-csi-controller-0

Hey,
I'm trying to set up a k8s cluster on CentOS 7 - after the setup seemed to have worked, I tried to get the dashboard working. It seems that there was something wrong with my CNI, related to this Stack Overflow article.

In short:
When I try to open the dashboard, I get this in the browser:
no endpoints available for service \"https:kubernetes-dashboard:\"

Then I saw this after running kubectl get pods --all-namespaces:

kube-system hcloud-csi-controller-0 3/5 CrashLoopBackOff 32 65m
kube-system hcloud-csi-node-wzs28 2/3 CrashLoopBackOff 16 58m

So, because I think fixing that would fix the issue with the dashboard, I want to investigate it. I tried kubectl logs -n kube-system hcloud-csi-controller-0 and kubectl logs -n kube-system hcloud-csi-node-wzs28 but am getting this:

error: a container name must be specified for pod hcloud-csi-controller-0, choose one of: [csi-attacher csi-resizer csi-provisioner hcloud-csi-driver liveness-probe]
error: a container name must be specified for pod hcloud-csi-node-wzs28, choose one of: [csi-node-driver-registrar hcloud-csi-driver liveness-probe]

I would need help to dig deeper, because I'm absolutely new to k8s and it's just a little cluster for learning.

Here is my complete setup - I think I have censored it completely:

MASTER 1:
ssh myuser@<master-hostname>

WORKER 1:
ssh myuser@<node-hostname>


myuser (passphrase PrivateKey)
<myuser-privkey>

SSH

root
<root-password>

myuser
<myuser-password>



LOCAL--##################################################

##INFOS
myuser ssh key id: <ssh-key>
k8s-dev Network ID <network-id>
Subnet k8s-dev added to network <network-id>
hcloud api key (meikel-hcloud): <hcloud-api-token> <network-id>

<master-hostname>:
Server 6163896 created
IPv4: <ip.of.master.server>

<node-hostname>:
Server 6163900 created
IPv4: <ip.of.node.server>


####################################################################################################
####################################################################################################
MASTER + WORKER--###################################################################################
####################################################################################################
####################################################################################################


yum install -y nano wget

adduser myuser
passwd myuser
passwd root
mkdir /home/myuser/.ssh
nano /home/myuser/.ssh/authorized_keys

ssh-rsa XXX myuser

usermod -aG wheel myuser

nano /etc/ssh/sshd_config

[...]
Port <ssh-port>
[...]
PermitRootLogin no
[...]
AuthorizedKeysFile .ssh/authorized_keys
[...]
PasswordAuthentication no 
[...]
ChallengeResponseAuthentication no
[...]
X11Forwarding no
[...]

systemctl restart sshd

yum update -y

yum install -y firewalld

nano /etc/firewalld/firewalld.conf

[...]
AllowZoneDrifting = no
[...]

systemctl enable firewalld
systemctl start firewalld

firewall-cmd --zone=public --permanent --service=ssh --add-port=<ssh-port>/tcp
firewall-cmd --zone=public --permanent --service=ssh --remove-port=22/tcp

firewall-cmd --reload

swapoff -a

nano /etc/hosts

[...]
10.98.0.2 <master-hostname>
10.98.0.3 <node-hostname>
[...]

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

sudo modprobe br_netfilter

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum update -y && yum install -y \
  containerd.io-1.2.13 \
  docker-ce-19.03.8 \
  docker-ce-cli-19.03.8
  
mkdir /etc/docker

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

systemctl daemon-reload
systemctl enable docker
systemctl restart docker

cat <<EOF>> /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl

systemctl enable kubelet


####################################################################################################
####################################################################################################
MASTER ONLY--#######################################################################################
####################################################################################################
####################################################################################################

firewall-cmd --zone=public --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp

firewall-cmd --reload

kubeadm config images pull

kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --kubernetes-version=v1.18.3 \
  --ignore-preflight-errors=NumCPU \
  --apiserver-cert-extra-sans 10.98.0.2


W0607 20:49:28.663378   11274 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [<master-hostname> kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 <ip.of.master.server> 10.98.0.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [<master-hostname> localhost] and IPs [<ip.of.master.server> 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [<master-hostname> localhost] and IPs [<ip.of.master.server> 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0607 20:49:33.792761   11274 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0607 20:49:33.794856   11274 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.005877 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node <master-hostname> as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node <master-hostname> as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: un0wzv.manhehldwocfseqx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <ip.of.master.server>:6443 --token un0wzv.manhehldwocfseqx \
    --discovery-token-ca-cert-hash sha256:c13a6f330ab32948mf04230t2nff23v2t2t2awc7d3c000ac90d180f38fc4577bbccdcd77 
[root@master-1 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.



mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: hcloud
  namespace: kube-system
stringData:
  token: "<hcloud-api-token>"
  network: "<network-id>"
---
apiVersion: v1
kind: Secret
metadata:
  name: hcloud-csi
  namespace: kube-system
stringData:
  token: "<hcloud-api-token>"
EOF

kubectl apply -f https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/v1.5.2-networks.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl -n kube-system patch daemonset kube-flannel-ds-amd64 --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
kubectl -n kube-system patch deployment coredns --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'

kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.14/pkg/crd/manifests/csidriver.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.14/pkg/crd/manifests/csinodeinfo.yaml
kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/master/deploy/kubernetes/hcloud-csi.yml



####################################################################################################
####################################################################################################
WORKER ONLY--#######################################################################################
####################################################################################################
####################################################################################################

firewall-cmd --zone=public --permanent --add-port={10250,30000-32767}/tcp

firewall-cmd --reload

kubeadm join <ip.of.master.server>:6443 --token un0wzv.manhehldwocfseqx \
    --discovery-token-ca-cert-hash sha256:c13a6f330abbd2cc287945nf23n939rb39rb32900ac90d180f38fc4577bbccdcd77 


Error Message: missing prefix hcloud://

You might see the error message missing prefix hcloud:// within your cloud controller logs. This message indicates that you didn't set up your cluster with:

Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"

(See Step 1. of our README.md)

You need to recreate the whole cluster with this flag. Setting the flag and restarting the kubelet won't work!
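
For reference, a minimal sketch of such a kubelet drop-in; it has to be in place before kubeadm init / kubeadm join runs, which is why simply restarting the kubelet afterwards is not enough:

mkdir -p /etc/systemd/system/kubelet.service.d/
cat <<EOF | tee /etc/systemd/system/kubelet.service.d/20-hcloud.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
EOF
systemctl daemon-reload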

providerID should start with hcloud://

Hello!
I installed hcloud-cloud-controller-manager into the newly created cluster, and after adding a second node I see the following logs from the controller:

ERROR: logging before flag.Parse: E1012 18:04:19.550861       1 node_controller.go:161] providerID should start with hcloud://: XXXXXX1
ERROR: logging before flag.Parse: E1012 18:09:20.964303       1 node_controller.go:161] providerID should start with hcloud://: XXXXXX1
ERROR: logging before flag.Parse: E1012 18:14:21.780274       1 node_controller.go:161] providerID should start with hcloud://: XXXXXX1

where XXXXXX1 is indeed an id of my second node in the Hetzner Cloud.
And no extra labels are added automatically to this second node.

I don't quite understand the reason for it, but apparently this log line comes from here -

err = fmt.Errorf("providerID should start with hcloud://: %s", providerID)

Maybe this is some sort of misconfiguration from my side? Should I add any extra annotations or labels to the node?
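
A quick way to see what is currently set on an affected node (the value is empty or a bare server ID when the kubelet was started without --cloud-provider=external; <node-name> is a placeholder):

kubectl get node <node-name> -o jsonpath='{.spec.providerID}' ; echo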

Add server labels as node labels

It would be quite nice to have server labels from Hetzner to be set as node labels by the controller.
It would then be possible to provision servers (e.g. with Terraform) as workers using cloud-init and cluster-join parameters, and to have the labels available for scheduling selection after node initialization.
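
Until such syncing exists, a hedged workaround from the provisioning side is to label the node right after it joins; the label keys and values below are only examples. Alternatively, the kubelet's --node-labels flag can be set in the same cloud-init that performs the join.

kubectl label node <node-name> pool=cpx21 role=worker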

"Specified Node IP not found in cloudprovider"

Firstly, great to see that you guys are investing time in this cloud controller!

The cloud controller manager does not seem to know how to find the node.
I have a cluster running with weave-net and a WireGuard peer-to-peer VPN for internal communication.
The kubelet registers itself with an internal IP address at the API server. It seems like the controller is using this IP to find the node via the Hetzner API.

Is it also possible to use the hostname to find the node?
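
If the lookup is indeed IP-based, as observed above, one hedged workaround is to make the kubelet register the address the Hetzner API knows about via its --node-ip flag, e.g. in the same drop-in that sets the external cloud provider (<public-ip> is a placeholder):

cat <<EOF | tee /etc/systemd/system/kubelet.service.d/20-hcloud.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external --node-ip=<public-ip>"
EOF
systemctl daemon-reload && systemctl restart kubelet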

Have you stopped maintaining this?

I have tried applying this to 1.13.3, Docker 18.06, Ubuntu 16.04, but the node taint node.cloudprovider.kubernetes.io/uninitialized doesn't go away. The hcloud-cloud-controller-manager pod didn't even start because it didn't tolerate node.kubernetes.io/not-ready:NoSchedule; I fixed this, but the controller still doesn't work. It's a pity that this is not maintained :(
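
For anyone debugging the same symptom, a quick way to check whether the taint is actually still present and whether the controller pod is running (the label below is the default from the deploy manifest and may differ in your setup):

kubectl describe node <node-name> | grep -i taints
kubectl -n kube-system get pod -l app=hcloud-cloud-controller-manager -o wide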

LoadBalancer service stuck at pending even though the LB has been created and has public IP

Hi, I have installed v1.6.0 in a test cluster, and the first thing I noticed is that no labels or annotations were added to the nodes, which is something that happened when I tested the controller a few months ago. In the controller log there is the error: hcloud/instances.InstanceExistsByProviderID: hcloud/providerIDToServerID: missing prefix hcloud://: 6393937

Then I tried to install Nginx ingress with a service of type LoadBalancer. The load balancer was created quickly and I could see from the HC console that it had a public IP, but the service got stuck at pending, without the IP. In the controller log I see this:

reason: 'SyncLoadBalancerFailed' Error syncing load balancer: failed to ensure load balancer: hcloud/loadBalancers.EnsureLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBTargets: hcops/providerIDToServerID: missing prefix hcloud://:

I added the cloud-provider: "external" extra arg to Kubelet before installing the controller. Not sure if it's relevant, but this is a cluster provisioned with Rancher and using Weave as CNI. Edit: Kubernetes 1.17.5.

Thanks in advance for any help.

health-check-interval: parsing "15s": invalid syntax

The health-check-interval annotation is supposed to be a time.Duration, but the controller then attempts to parse it as an int.

Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"echoserver", UID:"618a5944-ff89-4042-85c3-f398036c582d", APIVersion:"v1", ResourceVersion:"1538", FieldPath:""}): type: 'Warning' reason: 'UpdateLoadBalancerFailed' Error updating load balancer with new hosts map[ubuntu-2gb-hel1-3:{}]: hcloud/loadBalancers.UpdateLoadBalancer: hcops/LoadBalancerOps.ReconcileHCLBServices: hcops/hclbServiceOptsBuilder.buildUpdateServiceOpts: hcops/hclbServiceOptsBuilder.extractHealthCheck: annotation/Name.IntFromService: load-balancer.hetzner.cloud/health-check-interval: strconv.Atoi: parsing "15s": invalid syntax
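
Until a duration format is accepted, a hedged workaround that follows directly from the error (strconv.Atoi) is to pass a bare integer; whether the unit is interpreted as seconds is an assumption here:

kubectl annotate service echoserver --overwrite \
  load-balancer.hetzner.cloud/health-check-interval="15"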

Load balancer with failover IP

I don't know if it is in the scope of the tool, but it would be nice if, when we create a load balancer in Kubernetes, we got a failover IP that is attached to one of the worker nodes.

In case this node fails, the controller would be responsible for re-attaching the IP to a healthy worker.

As a user, I'd also like to limit the nodes that can receive the IP to a label, say for instance "role=edge".

Basic deployment (without Networks support) does not work

I am trying to setup hcloud-cloud-controller-manager without Networks support using

kubectl apply -f  https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/v1.5.0.yaml

but it fails with pod being in state CreateContainerConfigError due a missing secret:

Error: couldn't find key network in Secret kube-system/hcloud

Actually, I have only created the token but not the network entry, because I didn't want to use the Networks feature. Is the deployment manifest wrong, or how can I solve this?
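
A hedged workaround until the manifest is fixed: either download the manifest and remove the env entry that references the network key, or recreate the secret with both keys if enabling Networks support is acceptable (placeholders below):

kubectl -n kube-system delete secret hcloud
kubectl -n kube-system create secret generic hcloud \
  --from-literal=token=<hcloud API token> \
  --from-literal=network=<network name or ID>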

[1.6.1] LoadBalancer does not work with Calico or Cilium

I've installed a Kubernetes cluster on Hetzner Cloud using Kubespray. I've installed the hcloud-cloud-controller-manager, and the LoadBalancer resource gets deployed when I try to create one, but it seems like the traffic isn't forwarded to the applications.

I'm using:

  • Kubernetes v1.18.4 installed via Kubespray
  • hcloud-controller-manager v1.6.1
  • Calico 3.14.1

I guess the traffic should be routed using an underlying NodePort?
If that's true, then I assume that Calico may not be properly configured to allow this behaviour (even though I've not enabled any security constraints atm).

Maybe someone here can tell if that's a bug or a configuration issue?
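
One hedged way to narrow it down: the Hetzner LB forwards to the nodes, so the NodePort behind the LoadBalancer service should answer directly on each node. If it does not, the problem is in the CNI/kube-proxy layer rather than in the controller (<node-ip> and <node-port> are placeholders):

kubectl get svc <service-name>            # note the NodePort in the PORT(S) column
curl -v http://<node-ip>:<node-port>/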

Load Balancer with ACME / Let's Encrypt

I understand if this is out of the scope of the controller manager, which is why I'd like to ask a few questions that'd allow me to implement a cron job for automatic TLS certificates for the load balancer:

  1. Can the content of a certificate resource in the Hetzner API somehow be exchanged or would this require creating a new certificate resource? The API docs only mention name and labels in the update endpoint.
  2. If not, can the list of certificate IDs the load balancer uses be updated without introducing downtime in the load balancer itself?

In particular, I'd like to implement the Let's Encrypt DNS challenge through a Kubernetes cronjob using the Hetzner DNS API and then exchange the certificate the load balancer uses without downtime.

Nil pointer dereference when using network version with multiple subnets

A nil pointer dereference keeps occurring when I use the network version of the controller. I'm not sure whether this is a configuration issue or something else. The keys and network ID are configured and OK, as it worked when there were no subnets defined but broke once I defined subnets.

The cloud controller is deployed in Kubernetes v1.18.2.

Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
ERROR: logging before flag.Parse: W0518 06:55:01.586721       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
ERROR: logging before flag.Parse: W0518 06:55:02.606795       1 controllermanager.go:108] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
ERROR: logging before flag.Parse: W0518 06:55:02.607266       1 authentication.go:55] Authentication is disabled
ERROR: logging before flag.Parse: I0518 06:55:02.607434       1 insecure_serving.go:49] Serving insecurely on [::]:10253
ERROR: logging before flag.Parse: I0518 06:55:02.608997       1 node_controller.go:89] Sending events to api server.
ERROR: logging before flag.Parse: I0518 06:55:02.610693       1 pvlcontroller.go:107] Starting PersistentVolumeLabelController
ERROR: logging before flag.Parse: E0518 06:55:02.611437       1 controllermanager.go:240] Failed to start service controller: the cloud provider does not support external load balancers
ERROR: logging before flag.Parse: I0518 06:55:02.616541       1 controller_utils.go:1025] Waiting for caches to sync for persistent volume label controller
ERROR: logging before flag.Parse: I0518 06:55:02.716799       1 controller_utils.go:1032] Caches are synced for persistent volume label controller
ERROR: logging before flag.Parse: I0518 06:55:04.228080       1 route_controller.go:102] Starting route controller
ERROR: logging before flag.Parse: I0518 06:55:04.228114       1 controller_utils.go:1025] Waiting for caches to sync for route controller
ERROR: logging before flag.Parse: I0518 06:55:04.328390       1 controller_utils.go:1032] Caches are synced for route controller
ERROR: logging before flag.Parse: E0518 06:55:04.329253       1 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:72
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:65
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/maschine-controller/src/hcloud/routes.go:31
/maschine-controller/src/hcloud/routes.go:59
/go/pkg/mod/k8s.io/[email protected]/pkg/controller/route/route_controller.go:128
/go/pkg/mod/k8s.io/[email protected]/pkg/controller/route/route_controller.go:119
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:98
/usr/local/go/src/runtime/asm_amd64.s:1337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1088426]

goroutine 107 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:58 +0x105
panic(0x161ed40, 0x29730f0)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/hetznercloud/hcloud-cloud-controller-manager/hcloud.(*routes).reloadNetwork(0xc0004c0b60, 0x1ae52e0, 0xc00009c018, 0x203000, 0xc0004c0680)
        /maschine-controller/src/hcloud/routes.go:31 +0x36
github.com/hetznercloud/hcloud-cloud-controller-manager/hcloud.(*routes).ListRoutes(0xc0004c0b60, 0x1ae52e0, 0xc00009c018, 0x184992e, 0xa, 0xc0005f4610, 0x44bdb6, 0xc000420dc0, 0x7, 0x8)
        /maschine-controller/src/hcloud/routes.go:59 +0x5a
k8s.io/kubernetes/pkg/controller/route.(*RouteController).reconcileNodeRoutes(0xc00025c1c0, 0x44bb4b, 0xc00036c6e8)
        /go/pkg/mod/k8s.io/[email protected]/pkg/controller/route/route_controller.go:128 +0x73
k8s.io/kubernetes/pkg/controller/route.(*RouteController).Run.func1()
        /go/pkg/mod/k8s.io/[email protected]/pkg/controller/route/route_controller.go:119 +0x2e
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc0004f18d0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x54
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004f18d0, 0x2540be400, 0x0, 0x6d59744647610a00, 0x0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134 +0xf8
k8s.io/apimachinery/pkg/util/wait.NonSlidingUntil(0xc0004f18d0, 0x2540be400, 0x0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:98 +0x4d
created by k8s.io/kubernetes/pkg/controller/route.(*RouteController).Run
        /go/pkg/mod/k8s.io/[email protected]/pkg/controller/route/route_controller.go:118 +0x1f2

ingress controller

Hi,

Is it possible to use one LB as an ingress controller so you don't need to create a LB for every service?

Thanks,
Vuko
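
That is the usual pattern: expose only the ingress controller through a single Service of type LoadBalancer and publish everything else as Ingress resources backed by ClusterIP services. Below is a minimal sketch for a manual setup (Helm charts usually create this service already); the namespace and selector are assumptions based on the default ingress-nginx labels and must match your controller's pods:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-lb
  namespace: ingress-nginx
  annotations:
    load-balancer.hetzner.cloud/location: hel1
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
EOF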

v1.6.0: Targets are not being added to the load balancer

Probably I misconfigured something, but I can't figure out what exactly.

There are the logs from Cloud Controller Manager:

root@test:~# kubectl logs hcloud-cloud-controller-manager-656bbd88db-n8d7r -n kube-system
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
I0625 08:14:46.827619       1 serving.go:313] Generated self-signed cert in-memory
W0625 08:14:47.474831       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0625 08:14:47.477714       1 controllermanager.go:120] Version: v0.0.0-master+$Format:%h$
Hetzner Cloud k8s cloud controller v1.6.0 started
W0625 08:14:48.027637       1 controllermanager.go:132] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
I0625 08:14:48.028791       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0625 08:14:48.028815       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0625 08:14:48.028859       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0625 08:14:48.028866       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0625 08:14:48.029437       1 secure_serving.go:178] Serving securely on [::]:10258
I0625 08:14:48.029536       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0625 08:14:48.030593       1 node_lifecycle_controller.go:78] Sending events to api server
I0625 08:14:48.030656       1 controllermanager.go:247] Started "cloud-node-lifecycle"
I0625 08:14:48.031936       1 controllermanager.go:247] Started "service"
I0625 08:14:48.032093       1 controller.go:208] Starting service controller
I0625 08:14:48.032107       1 shared_informer.go:223] Waiting for caches to sync for service
I0625 08:14:48.125698       1 controllermanager.go:247] Started "route"
I0625 08:14:48.125895       1 route_controller.go:100] Starting route controller
I0625 08:14:48.125949       1 shared_informer.go:223] Waiting for caches to sync for route
I0625 08:14:48.126733       1 node_controller.go:110] Sending events to api server.
I0625 08:14:48.126796       1 controllermanager.go:247] Started "cloud-node"
I0625 08:14:48.129535       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0625 08:14:48.129685       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0625 08:14:48.144444       1 node_controller.go:325] Initializing node test with cloud provider
I0625 08:14:48.226209       1 shared_informer.go:230] Caches are synced for route
I0625 08:14:48.232277       1 shared_informer.go:230] Caches are synced for service
I0625 08:14:48.562019       1 route_controller.go:193] Creating route for node test 10.244.0.0/24 with hint c3a9fdd8-0b4e-4359-a30f-069f57cbd98c, throttled 862ns
I0625 08:14:49.726362       1 route_controller.go:213] Created route for node test 10.244.0.0/24 with hint c3a9fdd8-0b4e-4359-a30f-069f57cbd98c after 1.16434051s
I0625 08:14:49.785607       1 node_controller.go:397] Successfully initialized node test with cloud provider
I0625 08:15:40.063967       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"echoserver", UID:"cec7350c-c004-400a-be2c-f4526e467760", APIVersion:"v1", ResourceVersion:"805", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
I0625 08:15:40.076748       1 load_balancers.go:81] "ensure Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" service="echoserver" nodes=[]
I0625 08:15:40.077134       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"echoserver", UID:"cec7350c-c004-400a-be2c-f4526e467760", APIVersion:"v1", ResourceVersion:"805", FieldPath:""}): type: 'Warning' reason: 'UnAvailableLoadBalancer' There are no available nodes for LoadBalancer
I0625 08:15:54.987363       1 load_balancer.go:420] "add service" op="hcops/LoadBalancerOps.ReconcileHCLBServices" port=80 loadBalancerID=35028
I0625 08:15:55.824007       1 load_balancers.go:117] "reload HC Load Balancer" op="hcloud/loadBalancers.EnsureLoadBalancer" loadBalancerID=35028
I0625 08:15:55.919779       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"echoserver", UID:"cec7350c-c004-400a-be2c-f4526e467760", APIVersion:"v1", ResourceVersion:"805", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer

Steps to reproduce:

# network range: 10.0.0.0/8
# instance specs:
# image: ubuntu-18.04, type: cpx21, server is assigned to the 10.0.0.0/8 network with private IP: 10.0.0.2

curl https://get.docker.com | VERSION=19.03.12 sh
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF | tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet=1.16.11-00 kubeadm=1.16.11-00 kubectl=1.16.11-00
mkdir -p /etc/systemd/system/kubelet.service.d/
cat <<EOF | tee /etc/systemd/system/kubelet.service.d/20-hcloud.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
EOF
kubeadm init --apiserver-advertise-address=10.0.0.2 --pod-network-cidr=10.244.0.0/16
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl -n kube-system create secret generic hcloud --from-literal=token=$(cat ~/.htoken) --from-literal=network=my-network
kubectl apply -f https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/v1.6.0-networks.yaml

Verify that cloud controller manager works:

$ kubectl get node -L beta.kubernetes.io/instance-type -L failure-domain.beta.kubernetes.io/region -L failure-domain.beta.kubernetes.io/zone
NAME   STATUS   ROLES    AGE     VERSION    INSTANCE-TYPE   REGION   ZONE
test   Ready    master   2m39s   v1.16.11   cpx21           hel1     hel1-dc2

Create the Service with type LoadBalancer:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.10
        imagePullPolicy: Always
        name: echoserver
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    load-balancer.hetzner.cloud/location: hel1
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: echoserver

network configuration using rancher

Hi, and thanks for this great controller. It seems to work well setting the node metadata, but I cannot seem to get it to work with the network.

I use Flannel in a fresh installation of Rancher, everything at its defaults. It seems that after I apply the deployment, part of the networking is wrong and DNS is unreachable. When logging in to the node, the IP of kube-dns is indeed unreachable.

  • Created the network according to the docs
  • Set the secret for network and token using --from-literal
  • I changed --cluster-cidr=10.42.0.0/16, because of the following rancher defaults.
  • Applied the deployment to the cluster

Immediately, the nodes output the event 'RouteController failed to create a route':

0s Warning FailedToCreateRoute node/master1 Could not create route fe8eb8f5-8070-4632-995d-811545595a54 10.42.0.0/24 for node master1 after 228.108063ms: server master1 not found

The nodes seem to properly get the metadata; however, the network is not assigned to any of the nodes.

These are the last lines of the log of the pod

ERROR: logging before flag.Parse: I1216 00:09:21.374900       1 route_controller.go:176] Creating route for node worker1 10.42.1.0/24 with hint 30560909-ab51-4f84-a3a6-aa46fed1a46b, throttled 755ns
ERROR: logging before flag.Parse: I1216 00:09:21.374984       1 route_controller.go:176] Creating route for node master1 10.42.0.0/24 with hint fe8eb8f5-8070-4632-995d-811545595a54, throttled 1.92µs
ERROR: logging before flag.Parse: E1216 00:09:21.574934       1 route_controller.go:199] Could not create route 30560909-ab51-4f84-a3a6-aa46fed1a46b 10.42.1.0/24 for node worker1: server worker1 not found
ERROR: logging before flag.Parse: I1216 00:09:21.575333       1 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"worker1", UID:"worker1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FailedToCreateRoute' Could not create route 30560909-ab51-4f84-a3a6-aa46fed1a46b 10.42.1.0/24 for node worker1 after 200.008981ms: server worker1 not found
ERROR: logging before flag.Parse: E1216 00:09:21.596687       1 route_controller.go:199] Could not create route fe8eb8f5-8070-4632-995d-811545595a54 10.42.0.0/24 for node master1: server master1 not found
ERROR: logging before flag.Parse: I1216 00:09:21.596751       1 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"master1", UID:"master1", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FailedToCreateRoute' Could not create route fe8eb8f5-8070-4632-995d-811545595a54 10.42.0.0/24 for node master1 after 221.703601ms: server master1 not found

And here are some relevant rancher setting and defaults


  network:
    options:
      flannel_backend_type: vxlan
    plugin: flannel
# 
#    services:
#      kube-api:
#        service_cluster_ip_range: 10.43.0.0/16
#      kube-controller:
#        cluster_cidr: 10.42.0.0/16
#        service_cluster_ip_range: 10.43.0.0/16
#      kubelet:
#        cluster_domain: cluster.local
#        cluster_dns_server: 10.43.0.10

route -n

0.0.0.0         172.31.1.1      0.0.0.0         UG    0      0        0 eth0
10.42.0.0       10.42.0.0       255.255.255.0   UG    0      0        0 flannel.1
10.42.1.0       0.0.0.0         255.255.255.0   U     0      0        0 cni0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.31.1.1      0.0.0.0         255.255.255.255 UH    0      0        0 eth0

Not sure how to debug this. I think it is something simple with the network setting. Any ideas? Thanks!
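
A hedged first check, since the error literally says the server cannot be found: the Kubernetes node names have to match the Hetzner Cloud server names exactly, otherwise the controller cannot look them up in the API:

kubectl get nodes -o name
hcloud server list      # requires the hcloud CLI configured with an API token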

server doesn't have a resource type "docker-switch"

Hi
I'm at step 5

Patch the flannel deployment to tolerate the uninitialized taint:
kubectl -n kube-system patch ds kube-flannel-ds --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'

and I get this error:
error: the server doesn't have a resource type "docker-switch"

I googled a lot but did not find any clue what to add to the k8s/docker config to make it work.
If anyone has an idea..!

I use k8s version 1.15.4

cloud-controller-manager is deleting non-hetzner nodes

I'm currently implementing the feature to add external servers to a Hetzner Cloud Kubernetes cluster in hetzner-kube and faced the issue that the cloud-controller-manager deletes the node from the cluster shortly after it joined. I'm sure that's because it finds no server with this name in the Hetzner Cloud API and concludes it must be removed.

I think there needs to be some mechanism, like giving every node a label such as hetznercloud/is-hetzner. There are three states:

  • a node does not have this label: we need to check the API; if the server is there, set the label to true, otherwise to false
  • it's true: check whether the server is still listed in the API, and delete the node if it is not found
  • it's false: don't do anything

Bug: hcloud-cloud-controller-manager crashes with SIGSEGV

Hi,

I'm trying to set up a Kubernetes Cluster on the Hetzner Cloud with the network feature.

  • Kubernetes v1.17.7
  • Network plugin: Calico
  • Network Subnet: 10.244.0.0/16

I'm using the unmodified https://github.com/hetznercloud/hcloud-cloud-controller-manager/blob/master/deploy/v1.6.1-networks.yaml file.

After removing the taints, Calico/CoreDNS start, but the hcloud-cloud-controller-manager began to crash:

Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
I0701 05:47:43.542132       1 serving.go:313] Generated self-signed cert in-memory
W0701 05:47:43.968638       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0701 05:47:43.978618       1 controllermanager.go:120] Version: v0.0.0-master+$Format:%h$
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x16faad3]

goroutine 1 [running]:
github.com/hetznercloud/hcloud-cloud-controller-manager/hcloud.newCloud(0x0, 0x0, 0xc000222758, 0xc0003acd10, 0xc000222750, 0xa4)
        /maschine-controller/src/hcloud/cloud.go:83 +0x983
github.com/hetznercloud/hcloud-cloud-controller-manager/hcloud.init.0.func1(0x0, 0x0, 0x7ffcf9723504, 0x6, 0xc0002227d8, 0xc00003a101)
        /maschine-controller/src/hcloud/cloud.go:155 +0x35
k8s.io/cloud-provider.GetCloudProvider(0x7ffcf9723504, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/[email protected]/plugins.go:86 +0xcf
k8s.io/cloud-provider.InitCloudProvider(0x7ffcf9723504, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/[email protected]/plugins.go:134 +0x504
k8s.io/kubernetes/cmd/cloud-controller-manager/app.Run(0xc00000e878, 0xc0000820c0, 0xc0004d3a98, 0x4)
        /go/pkg/mod/k8s.io/[email protected]/cmd/cloud-controller-manager/app/controllermanager.go:122 +0x127
k8s.io/kubernetes/cmd/cloud-controller-manager/app.NewCloudControllerManagerCommand.func1(0xc000198a00, 0xc000080dc0, 0x0, 0x5)
        /go/pkg/mod/k8s.io/[email protected]/cmd/cloud-controller-manager/app/controllermanager.go:78 +0x204
github.com/spf13/cobra.(*Command).execute(0xc000198a00, 0xc00003a1f0, 0x5, 0x5, 0xc000198a00, 0xc00003a1f0)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:830 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0xc000198a00, 0x161d8acea0e98b5b, 0x2bbee40, 0xb)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main()
        /maschine-controller/src/main.go:44 +0xf3

Any idea?

Best regards
Matthias
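
A hedged first check, assuming the networks manifest reads both keys from the kube-system/hcloud secret: make sure token and network are present and non-empty (the values are base64-encoded):

kubectl -n kube-system get secret hcloud -o jsonpath='{.data}' ; echo
kubectl -n kube-system get secret hcloud -o jsonpath='{.data.network}' | base64 -d ; echo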

Problems with Rancher deployed clusters

Hi! I am testing this controller with Rancher clusters, and for some reason neither the metrics server nor the Prometheus/Grafana monitoring installed by Rancher seems to work. kubectl top nodes returns error: metrics not available yet even after waiting for a while, and the monitoring API never comes up.

I can't remember the details, but I did test this a bit a few months ago and had similar problems because of the IP addresses. Before installing the controller, kubectl get nodes -owide was showing the IP addresses as internal, while after installing the controller they are shown as external and there is no internal IP. I can't remember how I found out there was a link between this change and the metrics server/API not being available. Am I missing something? I made sure the kubelet is configured with cloud-provider = external.

Thanks!

LoadBalancers: Provide an option for disabling LoadBalancer management

Hey,

I'm trying to set up a Kubernetes cluster which has the hcloud-cloud-controller-manager installed. As expected, the hcloud-cloud-controller-manager will try to map my Kubernetes LoadBalancers to Hetzner Load Balancers. However, I would like to let MetalLB do that job in my case. Is there any way to prevent hcloud-cloud-controller-manager from grabbing a certain Kubernetes LoadBalancer? How can I define which LoadBalancer should be handled by which controller?

Best regards
Matthias

Don't add nodes with some service label to load balancer target

It would be cool to have the ability to label a node with some service label so that the node is ignored and not added as a target to the load balancer. For example, I have 4 nodes for the ingress controller that use this load balancer and another 100 nodes that do not, and I don't want all of those nodes to be added as targets in the LB.

1.6.0 does not work with networks and weave-net

Hi all,
I tried for multiple hours to update the cloud provider from 1.5.2 to 1.6.0 (always with fresh clusters), but it didn't work.
All nodes are always marked as NodeNetworkUnavailable: true. If I switch back to 1.5.2 without changing anything else, it works.

My setup:

  • 3 Master, 3 Worker with K8s 1.18.3 (kubeadm-setup)
  • OS: Ubuntu 18.04 LTS
  • CNI: Latest version of Weavenet (with IPALLOC_RANGE=11.32.0.0/12)
  • Followed the instructions in docs/deploy_with_networks.md (setting --cluster-cidr to 11.32.0.0/12, which is the same as in the kube-proxy config)
  • Setting service-cidr to 11.96.0.0/12 to avoid conflict with 10.0.0.0/8 from hcloud-network

Is there anything I missed there?
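
For comparing 1.5.2 and 1.6.0, a quick way to see which component set the condition and why (<node-name> is a placeholder):

kubectl get node <node-name> -o jsonpath='{.status.conditions[?(@.type=="NetworkUnavailable")]}' ; echo
kubectl describe node <node-name> | grep -A4 NetworkUnavailable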

Support Hetzner hybrid cloud setups

When creating a Kubernetes cluster that includes both cloud and bare-metal Hetzner servers, the hcloud-cloud-controller-manager deletes the bare-metal nodes because they are not found in the Hetzner Cloud.
(See #7)
