lite-apiserver was configured in advance via /etc/kubernetes/manifests/lite-apiserver.yaml, with the file cache type enabled: - --file-cache-path=/data/lite-apiserver/cache.
The kubelet's server address was then pointed at 127.0.0.1:51003 and the kubelet started with `systemctl start kubelet`. At this point lite-apiserver starts normally, and `kubectl get node` shows the node registered, but the kubelet log keeps reporting "node not found". The kubelet does not work properly and DaemonSet pods are not created on the node automatically.
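For reference, pointing the kubelet at the local lite-apiserver is typically done by changing the cluster `server` field in the kubeconfig passed to the kubelet via `--kubeconfig`. A minimal sketch is below; the file path, cluster name, and CA path are assumptions for illustration and should match your node's actual kubelet configuration:

```yaml
# Assumed path; use whatever the kubelet's --kubeconfig flag points to.
# /etc/kubernetes/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: default-cluster            # illustrative name
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt   # assumed CA path
    # Route kubelet traffic through the local lite-apiserver
    # instead of the remote kube-apiserver:
    server: https://127.0.0.1:51003
contexts:
- name: default-context
  context:
    cluster: default-cluster
    user: kubelet
current-context: default-context
users:
- name: kubelet
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem  # assumed
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem          # assumed
```

Note that lite-apiserver forwards the kubelet's client credentials upstream, so if the kubelet's client certificate is not presented or not trusted, requests arrive at the apiserver as `system:anonymous` — consistent with the "User \"system:anonymous\" cannot list resource" errors in the log below.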
systemd[1]: Started kubelet: The Kubernetes Node Agent.
kubelet[4744]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kubelet[4744]: Flag --kube-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kubelet[4744]: Flag --feature-gates has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kubelet[4744]: Flag --eviction-hard has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kubelet[4744]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kubelet[4744]: Flag --kube-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kubelet[4744]: Flag --feature-gates has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kubelet[4744]: Flag --eviction-hard has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
kubelet[4744]: I0421 15:53:38.199076 4744 server.go:418] Version: v1.14.10-tk8s.1.4+2f7b4c68ce2039
kubelet[4744]: I0421 15:53:38.199326 4744 plugins.go:103] No cloud provider specified.
kubelet[4744]: I0421 15:53:38.259712 4744 server.go:629] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
kubelet[4744]: GetCgroupMounts.MountPoints: []cgroups.Mount{cgroups.Mount{Mountpoint:"/sys/fs/cgroup/systemd", Root:"/", Subsystems:[]string{"systemd"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/net_cls", Root:"/", Subsystems:[]string{"net_cls"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/cpu,cpuacct", Root:"/", Subsystems:[]string{"cpuacct", "cpu"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/hugetlb", Root:"/", Subsystems:[]string{"hugetlb"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/freezer", Root:"/", Subsystems:[]string{"freezer"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/oom", Root:"/", Subsystems:[]string{"oom"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/perf_event", Root:"/", Subsystems:[]string{"perf_event"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/devices", Root:"/", Subsystems:[]string{"devices"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/cpuset", Root:"/", Subsystems:[]string{"cpuset"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/memory", Root:"/", Subsystems:[]string{"memory"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/pids", Root:"/", Subsystems:[]string{"pids"}}, cgroups.Mount{Mountpoint:"/sys/fs/cgroup/blkio", Root:"/", Subsystems:[]string{"blkio"}}}
kubelet[4744]: I0421 15:53:38.295169 4744 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
kubelet[4744]: I0421 15:53:38.295182 4744 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:60 scale:-3} d:{Dec:<nil>} s:60m Format:DecimalSI} memory:{i:{value:167772160 scale:0} d:{Dec:<nil>} s: Format:BinarySI}] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:static ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms CPUReservedEnabled:false}
kubelet[4744]: I0421 15:53:38.295246 4744 container_manager_linux.go:286] Creating device plugin manager: true
kubelet[4744]: I0421 15:53:38.295356 4744 cpu_manager.go:122] [cpumanager] detected CPU topology: &{24 12 2 map[0:{0 0} 1:{0 1} 2:{0 2} 3:{0 3} 4:{0 4} 5:{0 5} 6:{1 6} 7:{1 7} 8:{1 8} 9:{1 9} 10:{1 10} 11:{1 11} 12:{0 0} 13:{0 1} 14:{0 2} 15:{0 3} 16:{0 4} 17:{0 5} 18:{1 6} 19:{1 7} 20:{1 8} 21:{1 9} 22:{1 10} 23:{1 11}]}
kubelet[4744]: I0421 15:53:38.295708 4744 policy_static.go:98] [cpumanager] reserved 1 CPUs ("0") not available for exclusive assignment
kubelet[4744]: I0421 15:53:38.295730 4744 state_mem.go:38] [cpumanager] initializing new in-memory state store
kubelet[4744]: I0421 15:53:38.297188 4744 kubelet.go:304] Watching apiserver
kubelet[4744]: I0421 15:53:38.298770 4744 client.go:75] Connecting to docker on unix:///var/run/docker.sock
kubelet[4744]: I0421 15:53:38.298786 4744 client.go:104] Start docker client with request timeout=2m0s
kubelet[4744]: W0421 15:53:38.309166 4744 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
kubelet[4744]: I0421 15:53:38.309262 4744 docker_service.go:238] Hairpin mode set to "hairpin-veth"
kubelet[4744]: W0421 15:53:38.311033 4744 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
kubelet[4744]: I0421 15:53:38.311188 4744 docker_service.go:253] Docker cri networking managed by cni
kubelet[4744]: I0421 15:53:38.316247 4744 docker_service.go:258] Docker Info: &{ID:LIBD:ZZBK:IBWK:2QOV:HFAH:CVCF:3U2O:DNVB:KXMD:P5MJ:7KH5:R2XH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2021-04-21T15:53:38.311837751+08:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.107-1-tlinux2-0051 OperatingSystem:Tencent tlinux 2.2 (Final) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0005c0310 NCPU:24 MemTotal:33268166656 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:TENCENT64.site Labels:[] ExperimentalBuild:false ServerVersion:19.03.9 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:true Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default]}
kubelet[4744]: I0421 15:53:38.316340 4744 docker_service.go:271] Setting cgroupDriver to cgroupfs
kubelet[4744]: I0421 15:53:38.321542 4744 remote_runtime.go:62] parsed scheme: ""
kubelet[4744]: I0421 15:53:38.321556 4744 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
kubelet[4744]: I0421 15:53:38.321584 4744 remote_image.go:50] parsed scheme: ""
kubelet[4744]: I0421 15:53:38.321590 4744 remote_image.go:50] scheme "" not registered, fallback to default scheme
kubelet[4744]: I0421 15:53:38.321673 4744 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 <nil>}]
kubelet[4744]: I0421 15:53:38.321694 4744 clientconn.go:796] ClientConn switching balancer to "pick_first"
kubelet[4744]: I0421 15:53:38.321730 4744 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 <nil>}]
kubelet[4744]: I0421 15:53:38.321748 4744 clientconn.go:796] ClientConn switching balancer to "pick_first"
kubelet[4744]: I0421 15:53:38.321764 4744 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0002fd4d0, CONNECTING
kubelet[4744]: I0421 15:53:38.321789 4744 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000235470, CONNECTING
kubelet[4744]: I0421 15:53:38.321942 4744 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0002fd4d0, READY
kubelet[4744]: I0421 15:53:38.321947 4744 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000235470, READY
kubelet[4744]: E0421 15:53:38.591631 4744 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
kubelet[4744]: E0421 15:53:38.604531 4744 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
kubelet[4744]: E0421 15:53:38.612940 4744 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "11-22-00-ff-bb-cd" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
kubelet[4744]: E0421 15:53:58.690462 4744 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
kubelet[4744]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
kubelet[4744]: I0421 15:53:58.695520 4744 kuberuntime_manager.go:215] Container runtime docker initialized, version: 19.03.9, apiVersion: 1.40.0
kubelet[4744]: I0421 15:53:58.699860 4744 server.go:1056] Started kubelet
kubelet[4744]: E0421 15:53:58.699948 4744 kubelet.go:1283] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
kubelet[4744]: I0421 15:53:58.699915 4744 server.go:141] Starting to listen on 0.0.0.0:10250
kubelet[4744]: I0421 15:53:58.700742 4744 server.go:344] Adding debug handlers to kubelet server.
kubelet[4744]: I0421 15:53:58.700746 4744 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
kubelet[4744]: I0421 15:53:58.700793 4744 status_manager.go:152] Starting to sync pod status with apiserver
kubelet[4744]: I0421 15:53:58.700822 4744 volume_manager.go:248] Starting Kubelet Volume Manager
kubelet[4744]: I0421 15:53:58.700830 4744 kubelet.go:1809] Starting kubelet main sync loop.
kubelet[4744]: I0421 15:53:58.700849 4744 desired_state_of_world_populator.go:130] Desired state populator starts to run
kubelet[4744]: I0421 15:53:58.700851 4744 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
kubelet[4744]: E0421 15:53:58.707219 4744 docker_sandbox.go:538] Failed to retrieve checkpoint for sandbox "a8904d14dded5d2f8c51e7da22da9009a7f8f93e8a99eae7e40c3a453bf0a371": checkpoint is not found
kubelet[4744]: I0421 15:53:58.782412 4744 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
kubelet[4744]: E0421 15:53:58.800896 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: I0421 15:53:58.800897 4744 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
kubelet[4744]: I0421 15:53:58.800962 4744 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet.
kubelet[4744]: E0421 15:53:58.900998 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: I0421 15:53:59.001063 4744 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet.
kubelet[4744]: E0421 15:53:59.001085 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:53:59.027756 4744 controller.go:204] failed to get node "11-22-00-ff-bb-cd" when trying to set owner ref to the node lease: nodes "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:53:59.050947 4744 streamwatcher.go:109] Unable to decode an event from the watch stream: got short buffer with n=0, base=168, cap=2688
kubelet[4744]: W0421 15:53:59.050970 4744 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:133: Unexpected watch close - watch lasted less than a second and no items received
kubelet[4744]: E0421 15:53:59.101164 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:53:59.201277 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: I0421 15:53:59.213443 4744 cpu_manager.go:202] [cpumanager] starting with static policy
kubelet[4744]: I0421 15:53:59.213452 4744 kubelet_node_status.go:72] Attempting to register node 11-22-00-ff-bb-cd
kubelet[4744]: I0421 15:53:59.213455 4744 cpu_manager.go:203] [cpumanager] reconciling every 10s
kubelet[4744]: I0421 15:53:59.213549 4744 state_mem.go:96] [cpumanager] updated default cpuset: "0-23"
kubelet[4744]: W0421 15:53:59.215017 4744 manager.go:540] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
kubelet[4744]: E0421 15:53:59.215285 4744 eviction_manager.go:287] eviction manager: failed to get summary stats: failed to get node info: node "11-22-00-ff-bb-cd" not found
kubelet[4744]: I0421 15:53:59.271316 4744 kubelet_node_status.go:75] Successfully registered node 11-22-00-ff-bb-cd
kubelet[4744]: E0421 15:53:59.301381 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: I0421 15:53:59.326104 4744 kuberuntime_manager.go:1030] updating runtime config through cri with podcidr 192.168.18.0/24
kubelet[4744]: I0421 15:53:59.326254 4744 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:192.168.18.0/24,},}
kubelet[4744]: I0421 15:53:59.326416 4744 kubelet_network.go:77] Setting Pod CIDR: -> 192.168.18.0/24
kubelet[4744]: W0421 15:53:59.401203 4744 pod_container_deletor.go:75] Container "7caa8bbd2ab47fb43bb3165cbadda047212a9ad4178caa01a06c63d3dcbf0304" not found in pod's containers
kubelet[4744]: W0421 15:53:59.401218 4744 pod_container_deletor.go:75] Container "80511f792a67d2aa510d2dc61365ee335cd230ad2f5df7c0323aa7c97a653a91" not found in pod's containers
kubelet[4744]: W0421 15:53:59.401226 4744 pod_container_deletor.go:75] Container "a8904d14dded5d2f8c51e7da22da9009a7f8f93e8a99eae7e40c3a453bf0a371" not found in pod's containers
kubelet[4744]: W0421 15:53:59.401232 4744 pod_container_deletor.go:75] Container "d127ca41b22e2b63de55a34de340bab7d0774f962aa94381fc73faa34ee40032" not found in pod's containers
kubelet[4744]: E0421 15:53:59.401467 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:53:59.501550 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: I0421 15:53:59.502100 4744 reconciler.go:154] Reconciler: start to sync state
kubelet[4744]: E0421 15:53:59.601658 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:53:59.701753 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:53:59.801846 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:53:59.901930 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:54:00.002009 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:54:00.102092 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:54:00.202185 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:54:00.302270 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
kubelet[4744]: E0421 15:54:00.402350 4744 kubelet.go:2247] node "11-22-00-ff-bb-cd" not found
proxy.go:69] New request: method->GET, url->/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/11-22-00-ff-bb-cd?timeout=10s
proxy.go:199] request resourceInfo=&{IsResourceRequest:true Path:/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/11-22-00-ff-bb-cd Verb:get APIPrefix:apis APIGroup:coordination.k8s.io APIVersion:v1beta1 Namespace:kube-node-lease Resource:leases Subresource: Name:11-22-00-ff-bb-cd Parts:[leases 11-22-00-ff-bb-cd]}
cache_mgr.go:60] cache for kubelet/v1.14.10_kube-node-lease_leases_11-22-00-ff-bb-cd_
cache_mgr.go:78] cache 586 bytes body from response for kubelet/v1.14.10_kube-node-lease_leases_11-22-00-ff-bb-cd_
proxy.go:69] New request: method->PUT, url->/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/11-22-00-ff-bb-cd?timeout=10s
proxy.go:199] request resourceInfo=&{IsResourceRequest:true Path:/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/11-22-00-ff-bb-cd Verb:update APIPrefix:apis APIGroup:coordination.k8s.io APIVersion:v1beta1 Namespace:kube-node-lease Resource:leases Subresource: Name:11-22-00-ff-bb-cd Parts:[leases 11-22-00-ff-bb-cd]}
proxy.go:69] New request: method->GET, url->/api/v1/nodes/11-22-00-ff-bb-cd?resourceVersion=0&timeout=10s
proxy.go:199] request resourceInfo=&{IsResourceRequest:true Path:/api/v1/nodes/11-22-00-ff-bb-cd Verb:get APIPrefix:api APIGroup: APIVersion:v1 Namespace: Resource:nodes Subresource: Name:11-22-00-ff-bb-cd Parts:[nodes 11-22-00-ff-bb-cd]}
cache_mgr.go:60] cache for kubelet/v1.14.10__nodes_11-22-00-ff-bb-cd_
cache_mgr.go:78] cache 3660 bytes body from response for kubelet/v1.14.10__nodes_11-22-00-ff-bb-cd_