
Comments (18)

scotty-c commented on August 14, 2024

@micah The version of Docker you are running is not supported: https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-docker
It needs to be 1.12 or 17.03.
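
For reference, a quick way to confirm what a node is actually running (a sketch, assuming a shell on the node and that Docker is already installed):

# Should report 1.12.x or 17.03.x
docker version --format '{{.Server.Version}}'

# On RHEL/CentOS, list the installed Docker packages
rpm -qa | grep -i docker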


micah commented on August 14, 2024


micah commented on August 14, 2024


scotty-c commented on August 14, 2024

@micah What are the logs for Weave saying?
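
For reference, the Weave logs can be pulled from the weave-net pods in kube-system (a sketch; the name=weave-net label and the weave / weave-npc container names are what the weave-kube manifest normally uses, so adjust if your deployment differs):

kubectl -n kube-system get pods -l name=weave-net -o wide
kubectl -n kube-system logs <weave-net-pod> -c weave
kubectl -n kube-system logs <weave-net-pod> -c weave-npc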


micah commented on August 14, 2024


scotty-c commented on August 14, 2024

@micah Those logs are fine. What do you have set for your cluster and node CIDR?
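
If in doubt, the effective CIDRs can usually be read back from the kubeadm static manifests on the controller (a sketch, assuming the default /etc/kubernetes/manifests paths used elsewhere in this thread):

# Pod network CIDR handed to the controller manager
grep -- --cluster-cidr /etc/kubernetes/manifests/kube-controller-manager.yaml

# Service CIDR handed to the API server
grep -- --service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml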


micah commented on August 14, 2024


scotty-c commented on August 14, 2024

The only thing that stands out is that you are using tun0 for the API, and it looks like Weave can't speak to it. Have you tried testing it with a standard interface like eth*?
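
With this module that would mean pointing the API advertise address at the physical interface via Hiera, for example (a sketch; %{::ipaddress_eth0} is the Facter fact for eth0 and assumes that interface carries the node's routable address):

kubernetes::kube_api_advertise_address: "%{::ipaddress_eth0}"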


ndelic0 commented on August 14, 2024

@scotty-c @micah I'm using almost the same Hiera configs.

kubernetes::kubernetes_version: 1.9.2
kubernetes::kubernetes_package_version: 1.9.2
kubernetes::container_runtime: docker
kubernetes::cni_network_provider: https://git.io/weave-kube-1.6
kubernetes::cni_cluster_cidr: 10.32.0.0/12
kubernetes::cni_node_cidr: true
kubernetes::cluster_service_cidr: 10.96.0.0/12
kubernetes::kubernetes_fqdn: k8s-ctl-01.poc.local
kubernetes::bootstrap_controller_ip: 10.239.32.195
kubernetes::etcd_initial_cluster: etcd-kube-master=http://10.239.32.195:2380
kubernetes::etcd_ip: 10.239.32.195
kubernetes::kube_api_advertise_address: 10.239.32.195
kubernetes::install_dashboard: true
kubernetes::kube_api_service_ip: 10.96.0.1
kubernetes::kube_dns_ip: 10.96.0.10

[root@k8s-ctl-01 ~]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─kubernetes.conf
Active: active (running) since Fri 2018-05-04 22:51:03 MST; 40min ago
Docs: http://kubernetes.io/docs/
Main PID: 2443 (kubelet)
Memory: 40.0M
CGroup: /system.slice/kubelet.service
└─2443 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt

May 04 23:31:28 k8s-ctl-01.poc.local kubelet[2443]: false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 04 23:31:28 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:31:28.101403 2443 kuberuntime_manager.go:758] checking backoff for container "kube-apiserver" in pod "kube-apiserver-k8s-ctl-01.poc.local_kube-system(436f003a30752655fdbe89394361071e)"
May 04 23:31:28 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:31:28.101577 2443 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-k8s-ctl-01.poc.local_kube-system(436f003a30752655fdbe89394361071e)
May 04 23:31:28 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:31:28.101619 2443 pod_workers.go:186] Error syncing pod 436f003a30752655fdbe89394361071e ("kube-apiserver-k8s-ctl-01.poc.local_kube-system(436f003a30752655fdbe89394361071e)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-k8s-ctl-01.poc.local_kube-system(436f003a30752655fdbe89394361071e)"
May 04 23:31:28 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:31:28.846513 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:31:28 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:31:28.847322 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:31:28 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:31:28.848407 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:31:29 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:31:29.847440 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:31:29 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:31:29.848260 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:31:29 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:31:29.849501 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused

In my case the cluster never boots. The errors reported by kubelet don't help. Any suggestions?

May 04 23:13:16 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:16.067196 2443 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node "k8s-ctl-01.poc.local" not found
May 04 23:13:16 k8s-ctl-01.poc.local kubelet[2443]: W0504 23:13:16.523943 2443 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 04 23:13:16 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:16.524163 2443 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 04 23:13:16 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:16.852785 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:16 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:16.853772 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:16 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:16.855276 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:17 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:17.853728 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:17 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:17.854612 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:17 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:17.855820 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:18 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:18.179092 2443 event.go:209] Unable to write event: 'Patch https://10.239.32.195:6443/api/v1/namespaces/default/events/k8s-ctl-01.poc.local.152ba9ae0bb7bccb: dial tcp 10.239.32.195:6443: getsockopt: connection refused' (may retry after sleeping)
May 04 23:13:18 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:18.854557 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:18 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:18.855395 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:18 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:18.856505 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:19 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:19.855352 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:19 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:19.856449 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:19 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:19.857496 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:20 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:20.769406 2443 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
May 04 23:13:20 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:20.772226 2443 kubelet_node_status.go:82] Attempting to register node k8s-ctl-01.poc.local
May 04 23:13:20 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:20.772635 2443 kubelet_node_status.go:106] Unable to register node "k8s-ctl-01.poc.local" with API server: Post https://10.239.32.195:6443/api/v1/nodes: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:20 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:20.856062 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:20 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:20.857169 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:20 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:20.858140 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:21 k8s-ctl-01.poc.local kubelet[2443]: W0504 23:13:21.525639 2443 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 04 23:13:21 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:21.525846 2443 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 04 23:13:21 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:21.856873 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:21 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:21.858077 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:21 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:21.859221 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:22 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:22.857708 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:22 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:22.858887 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:22 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:22.859854 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:23 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:23.798278 2443 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
May 04 23:13:23 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:23.858557 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:23 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:23.859457 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:23 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:23.860705 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:24 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:24.101762 2443 kuberuntime_manager.go:514] Container {Name:kube-apiserver Image:gcr.io/google_containers/kube-apiserver-amd64:v1.9.2 Command:[kube-apiserver --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --allow-privileged=true --enable-bootstrap-token-auth=true --service-cluster-ip-range=10.96.0.0/12 --insecure-port=0 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --client-ca-file=/etc/kubernetes/pki/ca.crt --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --secure-port=6443 --storage-backend=etcd3 --requestheader-group-headers=X-Remote-Group --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --service-account-key-file=/etc/kubernetes/pki/sa.pub --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds --requestheader-username-headers=X-Remote-User --requestheader-allowed-names=front-proxy-client --authorization-mode=Node,RBAC --advertise-address=10.239.32.195 --etcd-servers=http://127.0.0.1:2379] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s ReadOnly:true MountPath:/etc/kubernetes/ SubPath: MountPropagation:} {Name:certs ReadOnly:false MountPath:/etc/ssl/certs SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:
May 04 23:13:24 k8s-ctl-01.poc.local kubelet[2443]: false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 04 23:13:24 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:24.101886 2443 kuberuntime_manager.go:758] checking backoff for container "kube-apiserver" in pod "kube-apiserver-k8s-ctl-01.poc.local_kube-system(436f003a30752655fdbe89394361071e)"
May 04 23:13:24 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:24.102076 2443 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-k8s-ctl-01.poc.local_kube-system(436f003a30752655fdbe89394361071e)
May 04 23:13:24 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:24.102122 2443 pod_workers.go:186] Error syncing pod 436f003a30752655fdbe89394361071e ("kube-apiserver-k8s-ctl-01.poc.local_kube-system(436f003a30752655fdbe89394361071e)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-k8s-ctl-01.poc.local_kube-system(436f003a30752655fdbe89394361071e)"
May 04 23:13:24 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:24.859386 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:24 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:24.860376 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:24 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:24.861383 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:25 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:25.860251 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:25 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:25.861684 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:25 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:25.862730 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:26 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:26.067486 2443 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node "k8s-ctl-01.poc.local" not found
May 04 23:13:26 k8s-ctl-01.poc.local kubelet[2443]: W0504 23:13:26.527250 2443 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 04 23:13:26 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:26.527920 2443 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 04 23:13:26 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:26.798284 2443 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
May 04 23:13:26 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:26.861059 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:26 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:26.862278 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:26 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:26.863316 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:27 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:27.772912 2443 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
May 04 23:13:27 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:27.775746 2443 kubelet_node_status.go:82] Attempting to register node k8s-ctl-01.poc.local
May 04 23:13:27 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:27.776234 2443 kubelet_node_status.go:106] Unable to register node "k8s-ctl-01.poc.local" with API server: Post https://10.239.32.195:6443/api/v1/nodes: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:27 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:27.861886 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:27 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:27.862940 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:27 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:27.863873 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:28 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:28.179963 2443 event.go:209] Unable to write event: 'Patch https://10.239.32.195:6443/api/v1/namespaces/default/events/k8s-ctl-01.poc.local.152ba9ae0bb7bccb: dial tcp 10.239.32.195:6443: getsockopt: connection refused' (may retry after sleeping)
May 04 23:13:28 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:28.798357 2443 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
May 04 23:13:28 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:28.799049 2443 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
May 04 23:13:28 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:28.862675 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:28 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:28.863471 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:28 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:28.864603 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:29 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:29.102507 2443 kuberuntime_manager.go:514] Container {Name:etcd Image:gcr.io/google_containers/etcd-amd64:3.1.11 Command:[etcd --name=etcd-k8s-ctl-01 --listen-client-urls=http://10.239.32.195:2379,http://127.0.0.1:2379 --listen-peer-urls=http://10.239.32.195:2380 --advertise-client-urls=http://10.239.32.195:2379 --data-dir=/var/lib/etcd --initial-cluster=etcd-kube-master=http://10.239.32.195:2380 --initial-advertise-peer-urls=http://10.239.32.195:2380 --initial-cluster-token=etcd-cluster-1 --initial-cluster-state=new --heartbeat-interval=250 --election-timeout=1250] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:certs ReadOnly:false MountPath:/etc/ssl/certs SubPath: MountPropagation:} {Name:etcd Rea:false MountPath:/var/lib/etcd SubPath: MountPropagation:} {Name:k8s ReadOnly:true MountPath:/etc/kubernetes/ SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:2379,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 04 23:13:29 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:29.102653 2443 kuberuntime_manager.go:758] checking backoff for container "etcd" in pod "etcd-k8s-ctl-01.poc.local_kube-system(9ccfe86965777524b41c676db8481c4b)"
May 04 23:13:29 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:29.102809 2443 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=etcd pod=etcd-k8s-ctl-01.poc.local_kube-system(9ccfe86965777524b41c676db8481c4b)
May 04 23:13:29 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:29.102847 2443 pod_workers.go:186] Error syncing pod 9ccfe86965777524b41c676db8481c4b ("etcd-k8s-ctl-01.poc.local_kube-system(9ccfe86965777524b41c676db8481c4b)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=etcd pod=etcd-k8s-ctl-01.poc.local_kube-system(9ccfe86965777524b41c676db8481c4b)"
May 04 23:13:29 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:29.863501 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:29 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:29.864490 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:29 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:29.865341 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:30 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:30.864266 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:30 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:30.865378 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:30 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:30.866318 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:31 k8s-ctl-01.poc.local kubelet[2443]: W0504 23:13:31.529396 2443 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 04 23:13:31 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:31.529636 2443 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 04 23:13:31 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:31.865122 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:31 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:31.865875 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:31 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:31.867430 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:32 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:32.865961 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:32 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:32.866753 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:32 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:32.867986 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:33 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:33.866774 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:33 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:33.868104 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:33 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:33.869074 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:34 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:34.776471 2443 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
May 04 23:13:34 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:34.779760 2443 kubelet_node_status.go:82] Attempting to register node k8s-ctl-01.poc.local
May 04 23:13:34 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:34.780193 2443 kubelet_node_status.go:106] Unable to register node "k8s-ctl-01.poc.local" with API server: Post https://10.239.32.195:6443/api/v1/nodes: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:34 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:34.867654 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:34 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:34.868718 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:34 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:34.870215 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:35 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:35.868620 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:35 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:35.869905 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:36 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:36.869414 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:36 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:36.870513 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:36 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:36.871789 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:37 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:37.870307 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:37 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:37.871429 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:37 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:37.872395 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:38 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:38.180739 2443 event.go:209] Unable to write event: 'Patch https://10.239.32.195:6443/api/v1/namespaces/default/events/k8s-ctl-01.poc.local.152ba9ae0bb7bccb: dial tcp 10.239.32.195:6443: getsockopt: connection refused' (may retry after sleeping)
May 04 23:13:38 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:38.798316 2443 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
May 04 23:13:38 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:38.871170 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.239.32.195:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:38 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:38.872465 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.239.32.195:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:38 k8s-ctl-01.poc.local kubelet[2443]: E0504 23:13:38.873440 2443 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.239.32.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-ctl-01.poc.local&limit=500&resourceVersion=0: dial tcp 10.239.32.195:6443: getsockopt: connection refused
May 04 23:13:39 k8s-ctl-01.poc.local kubelet[2443]: I0504 23:13:39.101818 2443 kuberuntime_manager.go:514] Container {Name:kube-apiserver Image:gcr.io/google_containers/kube-apiserver-amd64:v1.9.2 Command:[kube-apiserver --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --allow-privileged=true --enable-bootstrap-token-auth=true --service-cluster-ip-range=10.96.0.0/12 --insecure-port=0 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --client-ca-file=/etc/kubernetes/pki/ca.crt --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --secure-port=6443 --storage-backend=etcd3 --requestheader-group-headers=X-Remote-Group --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --service-account-key-file=/etc/kubernetes/pki/sa.pub --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds --requestheader-username-headers=X-Remote-User --requestheader-allowed-names=front-proxy-client --authorization-mode=Node,RBAC --advertise-address=10.239.32.195 --etcd-servers=http://127.0.0.1:2379] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s ReadOnly:true MountPath:/etc/kubernetes/ SubPath: MountPropagation:} {Name:certs ReadOnly:false MountPath:/etc/ssl/certs SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:
May 04 23:13:39 k8s-ctl-01.poc.local kubelet[2443]: false TTY:false} is dead, but RestartPolicy says that we should restart it.
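
Since the log above shows kube-apiserver (and etcd) in CrashLoopBackOff, the next step is usually to read the failing container's own output. kubectl is not available while the API server is down, so go through the container runtime directly (a sketch, assuming the Docker runtime configured above):

# Find the exited kube-apiserver container and dump its logs
docker ps -a | grep kube-apiserver
docker logs <container-id>

# Static pod logs are also kept on disk under /var/log/pods/<pod-uid>/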


scotty-c commented on August 14, 2024

@micah The kube-apiserver is in a crash loop. What do your etcd logs look like?


ndelic0 commented on August 14, 2024

[root@k8s-ctl-01 ~]# cat /var/log/pods/9ccfe86965777524b41c676db8481c4b/etcd_20.log
{"log":"2018-05-05 07:08:56.224618 I | etcdmain: etcd Version: 3.1.11\n","stream":"stderr","time":"2018-05-05T07:08:56.22616Z"}
{"log":"2018-05-05 07:08:56.224726 I | etcdmain: Git SHA: 960f4604b\n","stream":"stderr","time":"2018-05-05T07:08:56.22621552Z"}
{"log":"2018-05-05 07:08:56.224734 I | etcdmain: Go Version: go1.8.5\n","stream":"stderr","time":"2018-05-05T07:08:56.226225242Z"}
{"log":"2018-05-05 07:08:56.224740 I | etcdmain: Go OS/Arch: linux/amd64\n","stream":"stderr","time":"2018-05-05T07:08:56.22623247Z"}
{"log":"2018-05-05 07:08:56.224748 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2\n","stream":"stderr","time":"2018-05-05T07:08:56.226239567Z"}
{"log":"2018-05-05 07:08:56.224812 N | etcdmain: the server is already initialized as member before, starting as etcd member...\n","stream":"stderr","time":"2018-05-05T07:08:56.226246435Z"}
{"log":"2018-05-05 07:08:56.224939 I | embed: listening for peers on http://10.239.32.195:2380\n","stream":"stderr","time":"2018-05-05T07:08:56.226255192Z"}
{"log":"2018-05-05 07:08:56.225001 I | embed: listening for client requests on 10.239.32.195:2379\n","stream":"stderr","time":"2018-05-05T07:08:56.226261795Z"}
{"log":"2018-05-05 07:08:56.225059 I | embed: listening for client requests on 127.0.0.1:2379\n","stream":"stderr","time":"2018-05-05T07:08:56.226268428Z"}
{"log":"2018-05-05 07:08:56.230517 C | etcdmain: couldn't find local name "etcd-k8s-ctl-01" in the initial cluster configuration\n","stream":"stderr","time":"2018-05-05T07:08:56.232924155Z"}


esalberg commented on August 14, 2024

In case it helps:
We had to set up the configuration so that the etcd --name parameter matched what was set in --initial-cluster. If you look in /etc/kubernetes/manifests/etcd.yaml:

spec:
  containers:
  - command:
    - etcd
    - --name=etcd-kuberdev01
    - --listen-client-urls=http://:2379,http://127.0.0.1:2379
    - --listen-peer-urls=http://:2380
    - --advertise-client-urls=http://:2379
    - --data-dir=/var/lib/etcd
    - --initial-cluster=etcd-kuberdev01=http://:2380

So you might need to update
kubernetes::etcd_initial_cluster: etcd-kube-master=http://10.239.32.195:2380

to something like
kubernetes::etcd_initial_cluster: etcd-k8s-ctl-01=http://10.239.32.195:2380

Also, check /etc/hosts (e.g. if the server name is k8s-ctl-01, it should be set).
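
For example, a minimal /etc/hosts entry for the controller discussed in this thread might look like this (hypothetical, reusing the IP and hostname already shown above):

10.239.32.195   k8s-ctl-01.poc.local   k8s-ctl-01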


esalberg commented on August 14, 2024

We have the same issue right now as @micah (our etcd is fine, unlike @ndelic0's).

From this error:
ERROR: logging before flag.Parse: E0509 17:43:03.140347 30226 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:228: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host

What is responsible for setting up the 10.96.0.1 network address / listening on port 443? As far as I can tell, that error message is correct - there is no route to that host.

We just used the values that the initial docker run command provided. We're trying Weave right now, but we had similar issues with Flannel, and I'm wondering if there's something more fundamentally misconfigured in the local networking.

kubernetes::cni_cluster_cidr: 10.32.0.0/12
kubernetes::cni_node_cidr: true
kubernetes::cluster_service_cidr: 10.96.0.0/12
kubernetes::kubernetes_fqdn: kubernetes
kubernetes::bootstrap_controller_ip: <public_ip>

kubernetes::kube_api_advertise_address: "%{::ipaddress_eth0}"
kubernetes::install_dashboard: true
kubernetes::kube_api_service_ip: 10.96.0.1
kubernetes::kube_dns_ip: 10.96.0.10
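
For what it's worth, 10.96.0.1 is the in-cluster service VIP for the API server; it is not a routed address but is translated by kube-proxy's iptables rules on each node, so when it is unreachable it is worth checking kube-proxy first (a sketch; k8s-app=kube-proxy is the label kubeadm's DaemonSet normally uses):

# Is kube-proxy running on every node?
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide

# Are the service NAT rules present on the node reporting "no route to host"?
iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1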


ndelic0 commented on August 14, 2024

@esalberg Thanks - that solved the issue. Later on I had a problem with the hostname, which was fixed by overriding the node label parameter like this: kubernetes::node_label: "%{::fqdn}".
The connect: no route to host issue in my case was solved by replacing Weave with Flannel. Big thanks to @scotty-c.


scotty-c commented on August 14, 2024

@esalberg The kube-apiserver is responsible for setting up the address 10.96.0.1. This is done before the CNI is initialised.


esalberg commented on August 14, 2024

Yes, sorry, I didn't come back here because it wasn't our ticket.

We fixed the issue - it turns out kube-proxy wasn't coming up properly for a couple of different reasons.


tskirvin commented on August 14, 2024

I'm running into this problem now; what did you do to make the proxy come up properly?

...unless it's just "switch to flannel, start over cleanly". Hmm.


scotty-c commented on August 14, 2024

Closing due to the fixes in #106.

