The issue lies in our failure to correctly pass the context in SyncPod. As a result, when the context is canceled, the cancellation cannot be propagated to the PullImage method.
However, this is not a simple problem. It has been recorded before and remains unresolved; see #113606.
A few PRs block resolving this issue:
The pod worker is being improved to periodically resync missing pods - moderate refactor that must be completed #113145
If we were to propagate context into sync today, certain wait loops would return generic errors instead of context cancellation which would cause spurious errors - a refactor of the wait package that introduces new methods to handle this must be merged first #107826
After these two issues merge, we can propagate context into Sync methods and add tests that verify pods with long lifecycle hooks are terminated quickly and long grace period deletions can be reduced, as well as stress tests that verify that we do not generate excessive errors. This issue tracks this final work.
According to #113606, once #113145 and #107826 are merged, we should be able to propagate the context to SyncPod. It appears that the prerequisites have now been fulfilled, so I will attempt to pass the context to SyncPod and resolve the test issues arising from it.
/cc @smarterclayton @bobbypage @dashpole If I'm mistaken, please correct me.
from kubernetes.
/priority important-soon
We chatted about this at the SIG node meeting. We broke this down into two cases. Does the pod:
- get created and then eventually get deleted
- get created and then remain undeleted, reporting a success
Within that dichotomy, there's also the question of what the state is in the API server.
If the pod gets created and is eventually deleted, then that's probably not a bug (or at least, fixing it would be a feature).
If it gets created and leaks, then we should find out why the kubelet loses track of that deletion request.
@mimowo do you know which it is?
from kubernetes.
That's a relief! I don't know if there's anything we can do to fully interrupt the container creation now. I think I'd consider it a feature to have the kubelet actually halt creation when it gets a delete for a pod it's actively creating.
/priority backlog
/kind feature
from kubernetes.
This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
from kubernetes.
/sig node
/cc @SergeyKanzhelev
from kubernetes.
/cc @bobbypage
from kubernetes.
/cc
from kubernetes.
Thanks for looking into and discussing it. It was case (1) in all my testing: the pod got deleted from the API server.
from kubernetes.
From the API server, but from the kubelet as well? Like, is the pod discoverable with crictl or anything?
from kubernetes.
I don't see anything leaked on the node related to the pod.
Here are the kubelet logs, grepped by the pod name or UID, at level 6 on 1.29.4 on Kind:
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441540 268 config.go:398] "Receiving a new pod" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441583 268 kubelet.go:2415] "SyncLoop ADD" source="api" pods=["default/indexed-job-0-ppwc9"]
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441610 268 topology_manager.go:215] "Topology Admit Handler" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" podNamespace="default" podName="indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441696 268 pod_workers.go:768] "Pod is being synced for the first time" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" updateType="create"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441717 268 pod_workers.go:963] "Notifying pod of pending update" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" workType="sync"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441736 268 pod_workers.go:1230] "Processing pod event" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" updateType="sync"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441759 268 kubelet.go:1726] "SyncPod enter" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441788 268 kubelet_pods.go:1673] "Generating pod status" podIsTerminal=false pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441816 268 kubelet_pods.go:1686] "Got phase for pod" pod="default/indexed-job-0-ppwc9" oldPhase="Pending" phase="Pending"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441828 268 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.441869 268 status_manager.go:687] "updateStatusInternal" version=1 podIsFinished=false pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containers="(usybox state=waiting previous=<none>)"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.442028 268 status_manager.go:833] "Sync pod status" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" statusUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" version=1
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.444868 268 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/default/pods/indexed-job-0-ppwc9 200 OK in 2 milliseconds
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.451468 268 volume_manager.go:406] "Waiting for volumes to attach and mount for pod" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.454734 268 kubelet.go:2428] "SyncLoop RECONCILE" source="api" pods=["default/indexed-job-0-ppwc9"]
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.454740 268 round_trippers.go:553] PATCH https://kind-control-plane:6443/api/v1/namespaces/default/pods/indexed-job-0-ppwc9/status 200 OK in 9 milliseconds
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.454867 268 status_manager.go:874] "Patch status for pod" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" patch="{\"metadata\":{\"uid\":\"4a2edca4-18e7-41ee-94c6-170d1498fe93\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"PodReadyToStartContainers\"},{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastProbeTime\":null,\"lastTransitionTime\":\"2024-05-23T09:50:58Z\",\"status\":\"False\",\"type\":\"PodReadyToStartContainers\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2024-05-23T09:50:58Z\",\"status\":\"True\",\"type\":\"Initialized\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2024-05-23T09:50:58Z\",\"message\":\"containers with unready status: [usybox]\",\"reason\":\"ContainersNotReady\",\"status\":\"False\",\"type\":\"Ready\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2024-05-23T09:50:58Z\",\"message\":\"containers with unready status: [usybox]\",\"reason\":\"ContainersNotReady\",\"status\":\"False\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"image\":\"nvidia/cuda:11.8.0-devel-ubuntu22.04\",\"imageID\":\"\",\"lastState\":{},\"name\":\"usybox\",\"ready\":false,\"restartCount\":0,\"started\":false,\"state\":{\"waiting\":{\"reason\":\"ContainerCreating\"}}}],\"hostIP\":\"192.168.8.2\",\"hostIPs\":[{\"ip\":\"192.168.8.2\"}],\"startTime\":\"2024-05-23T09:50:58Z\"}}"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.454924 268 status_manager.go:883] "Status for pod updated successfully" pod="default/indexed-job-0-ppwc9" statusVersion=1 status={"phase":"Pending","conditions":[{"type":"PodReadyToStartContainers","status":"False","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z"},{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z","reason":"ContainersNotReady","message":"containers with unready status: [usybox]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z","reason":"ContainersNotReady","message":"containers with unready status: [usybox]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z"}],"hostIP":"192.168.8.2","hostIPs":[{"ip":"192.168.8.2"}],"startTime":"2024-05-23T09:50:58Z","containerStatuses":[{"name":"usybox","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{},"ready":false,"restartCount":0,"image":"nvidia/cuda:11.8.0-devel-ubuntu22.04","imageID":"","started":false}],"qosClass":"BestEffort"}
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.456063 268 desired_state_of_world_populator.go:334] "Added volume to desired state" pod="default/indexed-job-0-ppwc9" volumeName="kube-api-access-fcvq8" volumeSpecName="kube-api-access-fcvq8"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.473298 268 reconciler_common.go:248] "Starting operationExecutor.VerifyControllerAttachedVolume for volume \"kube-api-access-fcvq8\" (UniqueName: \"kubernetes.io/projected/4a2edca4-18e7-41ee-94c6-170d1498fe93-kube-api-access-fcvq8\") pod \"indexed-job-0-ppwc9\" (UID: \"4a2edca4-18e7-41ee-94c6-170d1498fe93\") " pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.473363 268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcvq8\" (UniqueName: \"kubernetes.io/projected/4a2edca4-18e7-41ee-94c6-170d1498fe93-kube-api-access-fcvq8\") pod \"indexed-job-0-ppwc9\" (UID: \"4a2edca4-18e7-41ee-94c6-170d1498fe93\") " pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.573853 268 reconciler_common.go:220] "Starting operationExecutor.MountVolume for volume \"kube-api-access-fcvq8\" (UniqueName: \"kubernetes.io/projected/4a2edca4-18e7-41ee-94c6-170d1498fe93-kube-api-access-fcvq8\") pod \"indexed-job-0-ppwc9\" (UID: \"4a2edca4-18e7-41ee-94c6-170d1498fe93\") " pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.573930 268 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fcvq8\" (UniqueName: \"kubernetes.io/projected/4a2edca4-18e7-41ee-94c6-170d1498fe93-kube-api-access-fcvq8\") pod \"indexed-job-0-ppwc9\" (UID: \"4a2edca4-18e7-41ee-94c6-170d1498fe93\") " pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.574007 268 projected.go:191] Setting up volume kube-api-access-fcvq8 for pod 4a2edca4-18e7-41ee-94c6-170d1498fe93 at /var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volumes/kubernetes.io~projected/kube-api-access-fcvq8
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.579760 268 empty_dir_linux.go:88] Determining mount medium of /var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volumes/kubernetes.io~projected/kube-api-access-fcvq8
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.579797 268 empty_dir_linux.go:99] Statfs_t of /var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volumes/kubernetes.io~projected/kube-api-access-fcvq8: {Type:61267 Bsize:4096 Blocks:1055169812 Bfree:773487668 Bavail:729830059 Files:268075008 Ffree:253944790 Fsid:{Val:[-404073003 1299020577]} Namelen:255 Frsize:4096 Flags:4128 Spare:[0 0 0 0]}
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.579820 268 empty_dir.go:340] pod 4a2edca4-18e7-41ee-94c6-170d1498fe93: mounting tmpfs for volume wrapped_kube-api-access-fcvq8
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.579838 268 mount_linux.go:218] Mounting cmd (mount) with arguments (-t tmpfs -o size=202612228096 tmpfs /var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volumes/kubernetes.io~projected/kube-api-access-fcvq8)
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.583771 268 atomic_writer.go:196] pod default/indexed-job-0-ppwc9 volume kube-api-access-fcvq8: performed write of new data to ts data directory: /var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volumes/kubernetes.io~projected/kube-api-access-fcvq8/..2024_05_23_09_50_58.2635322384
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.584008 268 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcvq8\" (UniqueName: \"kubernetes.io/projected/4a2edca4-18e7-41ee-94c6-170d1498fe93-kube-api-access-fcvq8\") pod \"indexed-job-0-ppwc9\" (UID: \"4a2edca4-18e7-41ee-94c6-170d1498fe93\") " pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.752563 268 volume_manager.go:442] "All volumes are attached and mounted for pod" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.752615 268 kuberuntime_manager.go:825] "Syncing Pod" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.752627 268 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.752652 268 kuberuntime_manager.go:1057] "computePodActions got for pod" podActions="KillPod: true, CreateSandbox: true, UpdatePodResources: false, Attempt: 0, InitContainersToStart: [], ContainersToStart: [0], EphemeralContainersToStart: [],ContainersToUpdate: map[], ContainersToKill: map[]" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.752671 268 kuberuntime_manager.go:1066] "SyncPod received new pod, will create a sandbox for it" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.752699 268 kuberuntime_manager.go:1073] "Stopping PodSandbox for pod, will start new one" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.752727 268 kuberuntime_manager.go:1128] "Creating PodSandbox for pod" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.913253 268 kuberuntime_manager.go:1180] "Created PodSandbox for pod" podSandboxID="968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b" pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.913861 268 kuberuntime_manager.go:1203] "Determined the ip for pod after sandbox changed" IPs=["10.244.1.2"] pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: > pod="default/indexed-job-0-ppwc9"
May 23 09:50:58 kind-worker kubelet[268]: I0523 09:50:58.914804 268 event.go:376] "Event occurred" object="default/indexed-job-0-ppwc9" fieldPath="spec.containers{usybox}" kind="Pod" apiVersion="v1" type="Normal" reason="Pulling" message="Pulling image \"nvidia/cuda:11.8.0-devel-ubuntu22.04\""
May 23 09:50:59 kind-worker kubelet[268]: I0523 09:50:59.900186 268 generic.go:184] "GenericPLEG" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containerID="968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b" oldState="non-existent" newState="running"
May 23 09:50:59 kind-worker kubelet[268]: I0523 09:50:59.900634 268 kuberuntime_manager.go:1439] "getSandboxIDByPodUID got sandbox IDs for pod" podSandboxID=["968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b"] pod="default/indexed-job-0-ppwc9"
May 23 09:50:59 kind-worker kubelet[268]: I0523 09:50:59.901583 268 generic.go:457] "PLEG: Write status" pod="default/indexed-job-0-ppwc9" podStatus={"ID":"4a2edca4-18e7-41ee-94c6-170d1498fe93","Name":"indexed-job-0-ppwc9","Namespace":"default","IPs":["10.244.1.2"],"ContainerStatuses":[],"SandboxStatuses":[{"id":"968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b","metadata":{"name":"indexed-job-0-ppwc9","uid":"4a2edca4-18e7-41ee-94c6-170d1498fe93","namespace":"default"},"created_at":1716457858766557828,"network":{"ip":"10.244.1.2"},"linux":{"namespaces":{"options":{"pid":1}}},"labels":{"batch.kubernetes.io/controller-uid":"f2ba7ffa-807f-40fc-a09e-84b90b674c25","batch.kubernetes.io/job-completion-index":"0","batch.kubernetes.io/job-name":"indexed-job","controller-uid":"f2ba7ffa-807f-40fc-a09e-84b90b674c25","io.kubernetes.pod.name":"indexed-job-0-ppwc9","io.kubernetes.pod.namespace":"default","io.kubernetes.pod.uid":"4a2edca4-18e7-41ee-94c6-170d1498fe93","job-name":"indexed-job"},"annotations":{"batch.kubernetes.io/job-completion-index":"0","kubernetes.io/config.seen":"2024-05-23T09:50:58.441548453Z","kubernetes.io/config.source":"api"}}],"TimeStamp":"0001-01-01T00:00:00Z"}
May 23 09:50:59 kind-worker kubelet[268]: I0523 09:50:59.901666 268 kubelet.go:2447] "SyncLoop (PLEG): event for pod" pod="default/indexed-job-0-ppwc9" event={"ID":"4a2edca4-18e7-41ee-94c6-170d1498fe93","Type":"ContainerStarted","Data":"968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b"}
May 23 09:50:59 kind-worker kubelet[268]: I0523 09:50:59.901696 268 pod_workers.go:963] "Notifying pod of pending update" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" workType="sync"
May 23 09:51:14 kind-worker kubelet[268]: I0523 09:51:14.520410 268 kubelet.go:2431] "SyncLoop DELETE" source="api" pods=["default/indexed-job-0-ppwc9"]
May 23 09:51:14 kind-worker kubelet[268]: I0523 09:51:14.520504 268 pod_workers.go:854] "Pod is marked for graceful deletion, begin teardown" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" updateType="update"
May 23 09:51:14 kind-worker kubelet[268]: I0523 09:51:14.520529 268 pod_workers.go:963] "Notifying pod of pending update" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" workType="terminating"
May 23 09:51:14 kind-worker kubelet[268]: I0523 09:51:14.520557 268 pod_workers.go:970] "Cancelling current pod sync" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" workType="terminating"
May 23 09:51:15 kind-worker kubelet[268]: I0523 09:51:15.789624 268 status_manager.go:939] "Delaying pod deletion as the phase is non-terminal" phase="Pending" localPhase="Pending" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:25 kind-worker kubelet[268]: I0523 09:51:25.789666 268 status_manager.go:939] "Delaying pod deletion as the phase is non-terminal" phase="Pending" localPhase="Pending" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:35 kind-worker kubelet[268]: I0523 09:51:35.789807 268 status_manager.go:939] "Delaying pod deletion as the phase is non-terminal" phase="Pending" localPhase="Pending" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.448515 268 event.go:376] "Event occurred" object="default/indexed-job-0-ppwc9" fieldPath="spec.containers{usybox}" kind="Pod" apiVersion="v1" type="Normal" reason="Pulled" message="Successfully pulled image \"nvidia/cuda:11.8.0-devel-ubuntu22.04\" in 40.533s (40.533s including waiting)"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.448539 268 kubelet_pods.go:161] "Creating hosts mount for container" pod="default/indexed-job-0-ppwc9" containerName="usybox" podIPs=["10.244.1.2"] path=true
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.448570 268 kubelet_pods.go:257] "Mount has propagation" pod="default/indexed-job-0-ppwc9" containerName="usybox" volumeMountName="kube-api-access-fcvq8" propagation="PROPAGATION_PRIVATE"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.450140 268 memory_manager.go:235] "No allocation is available" pod="default/indexed-job-0-ppwc9" containerName="usybox"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.470127 268 event.go:376] "Event occurred" object="default/indexed-job-0-ppwc9" fieldPath="spec.containers{usybox}" kind="Pod" apiVersion="v1" type="Normal" reason="Created" message="Created container usybox"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.601340 268 kubelet.go:1728] "SyncPod exit" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" isTerminal=false
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.601375 268 pod_workers.go:1510] "Pending update already queued" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.601400 268 pod_workers.go:1335] "Processing pod event done" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" updateType="sync"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.601416 268 pod_workers.go:1230] "Processing pod event" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" updateType="terminating"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.601349 268 event.go:376] "Event occurred" object="default/indexed-job-0-ppwc9" fieldPath="spec.containers{usybox}" kind="Pod" apiVersion="v1" type="Normal" reason="Started" message="Started container usybox"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.973943 268 generic.go:184] "GenericPLEG" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containerID="17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be" oldState="non-existent" newState="running"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.974355 268 kuberuntime_manager.go:1439] "getSandboxIDByPodUID got sandbox IDs for pod" podSandboxID=["968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b"] pod="default/indexed-job-0-ppwc9"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975516 268 generic.go:457] "PLEG: Write status" pod="default/indexed-job-0-ppwc9" podStatus={"ID":"4a2edca4-18e7-41ee-94c6-170d1498fe93","Name":"indexed-job-0-ppwc9","Namespace":"default","IPs":["10.244.1.2"],"ContainerStatuses":[{"ID":"containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be","Name":"usybox","State":"running","CreatedAt":"2024-05-23T09:51:39.467094463Z","StartedAt":"2024-05-23T09:51:39.597072426Z","FinishedAt":"0001-01-01T00:00:00Z","ExitCode":0,"Image":"docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04","ImageID":"docker.io/nvidia/cuda@sha256:94fd755736cb58979173d491504f0b573247b1745250249415b07fefc738e41f","ImageRuntimeHandler":"","Hash":226121593,"HashWithoutResources":0,"RestartCount":0,"Reason":"","Message":"","Resources":null}],"SandboxStatuses":[{"id":"968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b","metadata":{"name":"indexed-job-0-ppwc9","uid":"4a2edca4-18e7-41ee-94c6-170d1498fe93","namespace":"default"},"created_at":1716457858766557828,"network":{"ip":"10.244.1.2"},"linux":{"namespaces":{"options":{"pid":1}}},"labels":{"batch.kubernetes.io/controller-uid":"f2ba7ffa-807f-40fc-a09e-84b90b674c25","batch.kubernetes.io/job-completion-index":"0","batch.kubernetes.io/job-name":"indexed-job","controller-uid":"f2ba7ffa-807f-40fc-a09e-84b90b674c25","io.kubernetes.pod.name":"indexed-job-0-ppwc9","io.kubernetes.pod.namespace":"default","io.kubernetes.pod.uid":"4a2edca4-18e7-41ee-94c6-170d1498fe93","job-name":"indexed-job"},"annotations":{"batch.kubernetes.io/job-completion-index":"0","kubernetes.io/config.seen":"2024-05-23T09:50:58.441548453Z","kubernetes.io/config.source":"api"}}],"TimeStamp":"0001-01-01T00:00:00Z"}
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975566 268 kubelet.go:2447] "SyncLoop (PLEG): event for pod" pod="default/indexed-job-0-ppwc9" event={"ID":"4a2edca4-18e7-41ee-94c6-170d1498fe93","Type":"ContainerStarted","Data":"17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be"}
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975578 268 pod_workers.go:1352] "Pod worker has observed request to terminate" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975603 268 kubelet.go:2011] "SyncTerminatingPod enter" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975614 268 kubelet_pods.go:1673] "Generating pod status" podIsTerminal=false pod="default/indexed-job-0-ppwc9"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975643 268 kubelet_pods.go:1686] "Got phase for pod" pod="default/indexed-job-0-ppwc9" oldPhase="Pending" phase="Running"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975689 268 status_manager.go:687] "updateStatusInternal" version=2 podIsFinished=false pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containers="(usybox state=running previous=<none>)"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975703 268 kubelet.go:2021] "Pod terminating with grace period" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" gracePeriod=30
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975715 268 pod_workers.go:963] "Notifying pod of pending update" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" workType="terminating"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975744 268 kuberuntime_container.go:749] "Killing container with a grace period override" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containerName="usybox" containerID="containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be" gracePeriod=30
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975750 268 status_manager.go:833] "Sync pod status" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" statusUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" version=2
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975762 268 kuberuntime_container.go:770] "Killing container with a grace period" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containerName="usybox" containerID="containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be" gracePeriod=30
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.975894 268 event.go:376] "Event occurred" object="default/indexed-job-0-ppwc9" fieldPath="spec.containers{usybox}" kind="Pod" apiVersion="v1" type="Normal" reason="Killing" message="Stopping container usybox"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.978520 268 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/default/pods/indexed-job-0-ppwc9 200 OK in 2 milliseconds
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.986313 268 round_trippers.go:553] PATCH https://kind-control-plane:6443/api/v1/namespaces/default/pods/indexed-job-0-ppwc9/status 200 OK in 7 milliseconds
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.986664 268 status_manager.go:874] "Patch status for pod" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" patch="{\"metadata\":{\"uid\":\"4a2edca4-18e7-41ee-94c6-170d1498fe93\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"PodReadyToStartContainers\"},{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2024-05-23T09:51:39Z\",\"status\":\"True\",\"type\":\"PodReadyToStartContainers\"},{\"lastTransitionTime\":\"2024-05-23T09:51:39Z\",\"message\":null,\"reason\":null,\"status\":\"True\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2024-05-23T09:51:39Z\",\"message\":null,\"reason\":null,\"status\":\"True\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"containerID\":\"containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be\",\"image\":\"docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04\",\"imageID\":\"docker.io/nvidia/cuda@sha256:94fd755736cb58979173d491504f0b573247b1745250249415b07fefc738e41f\",\"lastState\":{},\"name\":\"usybox\",\"ready\":true,\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2024-05-23T09:51:39Z\"}}}],\"phase\":\"Running\",\"podIP\":\"10.244.1.2\",\"podIPs\":[{\"ip\":\"10.244.1.2\"}]}}"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.986736 268 status_manager.go:883] "Status for pod updated successfully" pod="default/indexed-job-0-ppwc9" statusVersion=2 status={"phase":"Running","conditions":[{"type":"PodReadyToStartContainers","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:51:39Z"},{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:51:39Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:51:39Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z"}],"hostIP":"192.168.8.2","hostIPs":[{"ip":"192.168.8.2"}],"podIP":"10.244.1.2","podIPs":[{"ip":"10.244.1.2"}],"startTime":"2024-05-23T09:50:58Z","containerStatuses":[{"name":"usybox","state":{"running":{"startedAt":"2024-05-23T09:51:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04","imageID":"docker.io/nvidia/cuda@sha256:94fd755736cb58979173d491504f0b573247b1745250249415b07fefc738e41f","containerID":"containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be","started":true}],"qosClass":"BestEffort"}
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.986762 268 pod_startup_latency_tracker.go:164] "Mark when the pod was running for the first time" pod="default/indexed-job-0-ppwc9" rv="650"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.986783 268 status_manager.go:939] "Delaying pod deletion as the phase is non-terminal" phase="Running" localPhase="Running" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:39 kind-worker kubelet[268]: I0523 09:51:39.986826 268 kubelet.go:2428] "SyncLoop RECONCILE" source="api" pods=["default/indexed-job-0-ppwc9"]
May 23 09:51:45 kind-worker kubelet[268]: I0523 09:51:45.789553 268 status_manager.go:939] "Delaying pod deletion as the phase is non-terminal" phase="Running" localPhase="Running" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.101879 268 kuberuntime_container.go:779] "Container exited normally" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containerName="usybox" containerID="containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.294946 268 kuberuntime_manager.go:1439] "getSandboxIDByPodUID got sandbox IDs for pod" podSandboxID=["968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b"] pod="default/indexed-job-0-ppwc9"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296349 268 kubelet.go:2072] "Post-termination container state" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containers=[{"Name":"usybox","State":"exited","ExitCode":0,"FinishedAt":"2024-05-23T09:51:40.608737822Z"}]
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296377 268 kubelet_pods.go:1673] "Generating pod status" podIsTerminal=true pod="default/indexed-job-0-ppwc9"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296403 268 kubelet_pods.go:1686] "Got phase for pod" pod="default/indexed-job-0-ppwc9" oldPhase="Running" phase="Succeeded"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296421 268 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="default/indexed-job-0-ppwc9"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296461 268 status_manager.go:687] "updateStatusInternal" version=3 podIsFinished=false pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containers="(usybox state=terminated=0 previous=<none>)"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296487 268 kubelet.go:2095] "Pod termination stopped all running containers" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296499 268 kubelet.go:2097] "SyncTerminatingPod exit" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296508 268 pod_workers.go:1399] "Pod terminated all containers successfully" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296520 268 pod_workers.go:1510] "Pending update already queued" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296536 268 pod_workers.go:1335] "Processing pod event done" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" updateType="terminating"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296546 268 pod_workers.go:1230] "Processing pod event" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" updateType="terminated"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.296572 268 status_manager.go:833] "Sync pod status" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" statusUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" version=3
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.298798 268 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/default/pods/indexed-job-0-ppwc9 200 OK in 2 milliseconds
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.306850 268 round_trippers.go:553] PATCH https://kind-control-plane:6443/api/v1/namespaces/default/pods/indexed-job-0-ppwc9/status 200 OK in 7 milliseconds
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.307024 268 status_manager.go:874] "Patch status for pod" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" patch="{\"metadata\":{\"uid\":\"4a2edca4-18e7-41ee-94c6-170d1498fe93\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"PodReadyToStartContainers\"},{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2024-05-23T09:51:46Z\",\"status\":\"False\",\"type\":\"PodReadyToStartContainers\"},{\"reason\":\"PodCompleted\",\"type\":\"Initialized\"},{\"lastTransitionTime\":\"2024-05-23T09:51:46Z\",\"reason\":\"PodCompleted\",\"status\":\"False\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2024-05-23T09:51:46Z\",\"reason\":\"PodCompleted\",\"status\":\"False\",\"type\":\"ContainersReady\"}],\"containerStatuses\":[{\"containerID\":\"containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be\",\"image\":\"docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04\",\"imageID\":\"docker.io/nvidia/cuda@sha256:94fd755736cb58979173d491504f0b573247b1745250249415b07fefc738e41f\",\"lastState\":{},\"name\":\"usybox\",\"ready\":false,\"restartCount\":0,\"started\":false,\"state\":{\"terminated\":{\"containerID\":\"containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be\",\"exitCode\":0,\"finishedAt\":\"2024-05-23T09:51:40Z\",\"reason\":\"Completed\",\"startedAt\":\"2024-05-23T09:51:39Z\"}}}],\"phase\":\"Succeeded\",\"podIP\":null,\"podIPs\":null}}"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.307079 268 status_manager.go:883] "Status for pod updated successfully" pod="default/indexed-job-0-ppwc9" statusVersion=3 status={"phase":"Succeeded","conditions":[{"type":"PodReadyToStartContainers","status":"False","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:51:46Z"},{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z","reason":"PodCompleted"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:51:46Z","reason":"PodCompleted"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:51:46Z","reason":"PodCompleted"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z"}],"hostIP":"192.168.8.2","hostIPs":[{"ip":"192.168.8.2"}],"startTime":"2024-05-23T09:50:58Z","containerStatuses":[{"name":"usybox","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2024-05-23T09:51:39Z","finishedAt":"2024-05-23T09:51:40Z","containerID":"containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be"}},"lastState":{},"ready":false,"restartCount":0,"image":"docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04","imageID":"docker.io/nvidia/cuda@sha256:94fd755736cb58979173d491504f0b573247b1745250249415b07fefc738e41f","containerID":"containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be","started":false}],"qosClass":"BestEffort"}
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.307192 268 kubelet.go:2428] "SyncLoop RECONCILE" source="api" pods=["default/indexed-job-0-ppwc9"]
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.373096 268 desired_state_of_world_populator.go:261] "Removing volume from desired state" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" volumeName="kube-api-access-fcvq8"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.393234 268 reconciler_common.go:165] "Starting operationExecutor.UnmountVolume for volume \"kube-api-access-fcvq8\" (UniqueName: \"kubernetes.io/projected/4a2edca4-18e7-41ee-94c6-170d1498fe93-kube-api-access-fcvq8\") pod \"4a2edca4-18e7-41ee-94c6-170d1498fe93\" (UID: \"4a2edca4-18e7-41ee-94c6-170d1498fe93\") "
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.393316 268 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcvq8\" (UniqueName: \"kubernetes.io/projected/4a2edca4-18e7-41ee-94c6-170d1498fe93-kube-api-access-fcvq8\") pod \"4a2edca4-18e7-41ee-94c6-170d1498fe93\" (UID: \"4a2edca4-18e7-41ee-94c6-170d1498fe93\") "
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.393343 268 subpath_linux.go:244] Cleaning up subpath mounts for /var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volume-subpaths/kube-api-access-fcvq8
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.393397 268 projected.go:410] Tearing down volume kube-api-access-fcvq8 for pod 4a2edca4-18e7-41ee-94c6-170d1498fe93 at /var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volumes/kubernetes.io~projected/kube-api-access-fcvq8
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.393697 268 empty_dir_linux.go:88] Determining mount medium of /var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volumes/kubernetes.io~projected/kube-api-access-fcvq8
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.393733 268 empty_dir_linux.go:99] Statfs_t of /var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volumes/kubernetes.io~projected/kube-api-access-fcvq8: {Type:16914836 Bsize:4096 Blocks:49465876 Bfree:49465873 Bavail:49465873 Files:24732938 Ffree:24732929 Fsid:{Val:[222801706 -1150928900]} Namelen:255 Frsize:4096 Flags:4128 Spare:[0 0 0 0]}
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.393767 268 mount_linux.go:360] Unmounting /var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volumes/kubernetes.io~projected/kube-api-access-fcvq8
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.397023 268 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a2edca4-18e7-41ee-94c6-170d1498fe93-kube-api-access-fcvq8" (OuterVolumeSpecName: "kube-api-access-fcvq8") pod "4a2edca4-18e7-41ee-94c6-170d1498fe93" (UID: "4a2edca4-18e7-41ee-94c6-170d1498fe93"). InnerVolumeSpecName "kube-api-access-fcvq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.494424 268 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fcvq8\" (UniqueName: \"kubernetes.io/projected/4a2edca4-18e7-41ee-94c6-170d1498fe93-kube-api-access-fcvq8\") on node \"kind-worker\" DevicePath \"\""
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.789504 268 kubelet.go:2466] "SyncLoop (SYNC) pods" total=1 pods=["default/indexed-job-0-ppwc9"]
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.789561 268 pod_workers.go:963] "Notifying pod of pending update" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" workType="terminated"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.989803 268 generic.go:184] "GenericPLEG" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containerID="17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be" oldState="running" newState="exited"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.989814 268 generic.go:184] "GenericPLEG" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containerID="968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b" oldState="running" newState="exited"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.990192 268 kuberuntime_manager.go:1439] "getSandboxIDByPodUID got sandbox IDs for pod" podSandboxID=["968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b"] pod="default/indexed-job-0-ppwc9"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.991195 268 generic.go:457] "PLEG: Write status" pod="default/indexed-job-0-ppwc9" podStatus={"ID":"4a2edca4-18e7-41ee-94c6-170d1498fe93","Name":"indexed-job-0-ppwc9","Namespace":"default","IPs":[],"ContainerStatuses":[{"ID":"containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be","Name":"usybox","State":"exited","CreatedAt":"2024-05-23T09:51:39.467094463Z","StartedAt":"2024-05-23T09:51:39.597072426Z","FinishedAt":"2024-05-23T09:51:40.608737822Z","ExitCode":0,"Image":"docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04","ImageID":"docker.io/nvidia/cuda@sha256:94fd755736cb58979173d491504f0b573247b1745250249415b07fefc738e41f","ImageRuntimeHandler":"","Hash":226121593,"HashWithoutResources":0,"RestartCount":0,"Reason":"Completed","Message":"","Resources":null}],"SandboxStatuses":[{"id":"968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b","metadata":{"name":"indexed-job-0-ppwc9","uid":"4a2edca4-18e7-41ee-94c6-170d1498fe93","namespace":"default"},"state":1,"created_at":1716457858766557828,"network":{},"linux":{"namespaces":{"options":{"pid":1}}},"labels":{"batch.kubernetes.io/controller-uid":"f2ba7ffa-807f-40fc-a09e-84b90b674c25","batch.kubernetes.io/job-completion-index":"0","batch.kubernetes.io/job-name":"indexed-job","controller-uid":"f2ba7ffa-807f-40fc-a09e-84b90b674c25","io.kubernetes.pod.name":"indexed-job-0-ppwc9","io.kubernetes.pod.namespace":"default","io.kubernetes.pod.uid":"4a2edca4-18e7-41ee-94c6-170d1498fe93","job-name":"indexed-job"},"annotations":{"batch.kubernetes.io/job-completion-index":"0","kubernetes.io/config.seen":"2024-05-23T09:50:58.441548453Z","kubernetes.io/config.source":"api"}}],"TimeStamp":"0001-01-01T00:00:00Z"}
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.991895 268 kubelet.go:2146] "SyncTerminatedPod enter" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.991904 268 generic.go:334] "Generic (PLEG): container finished" podID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containerID="17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be" exitCode=0
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.991954 268 kubelet.go:2447] "SyncLoop (PLEG): event for pod" pod="default/indexed-job-0-ppwc9" event={"ID":"4a2edca4-18e7-41ee-94c6-170d1498fe93","Type":"ContainerDied","Data":"17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be"}
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.991926 268 kubelet_pods.go:1673] "Generating pod status" podIsTerminal=true pod="default/indexed-job-0-ppwc9"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.991980 268 pod_workers.go:963] "Notifying pod of pending update" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" workType="terminated"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.992015 268 kubelet.go:2447] "SyncLoop (PLEG): event for pod" pod="default/indexed-job-0-ppwc9" event={"ID":"4a2edca4-18e7-41ee-94c6-170d1498fe93","Type":"ContainerDied","Data":"968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b"}
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.992030 268 pod_workers.go:963] "Notifying pod of pending update" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" workType="terminated"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.992033 268 kubelet_pods.go:1686] "Got phase for pod" pod="default/indexed-job-0-ppwc9" oldPhase="Succeeded" phase="Succeeded"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.992072 268 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="default/indexed-job-0-ppwc9"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.992127 268 status_manager.go:687] "updateStatusInternal" version=4 podIsFinished=false pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containers="(usybox state=terminated=0 previous=<none>)"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.992205 268 volume_manager.go:451] "Waiting for volumes to unmount for pod" pod="default/indexed-job-0-ppwc9"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.992256 268 volume_manager.go:480] "All volumes are unmounted for pod" pod="default/indexed-job-0-ppwc9"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.992268 268 kubelet.go:2160] "Pod termination unmounted volumes" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.992393 268 status_manager.go:833] "Sync pod status" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" statusUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" version=4
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.992478 268 kubelet.go:2174] "Pod termination cleaned up volume paths" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:46 kind-worker kubelet[268]: I0523 09:51:46.995323 268 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/default/pods/indexed-job-0-ppwc9 200 OK in 2 milliseconds
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.003496 268 kubelet.go:2196] "Pod termination removed cgroups" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.003538 268 status_manager.go:490] "TerminatePod calling updateStatusInternal" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.003573 268 status_manager.go:687] "updateStatusInternal" version=5 podIsFinished=true pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containers="(usybox state=terminated=0 previous=<none>)"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.003627 268 kubelet.go:2203] "Pod is terminated and will need no more status updates" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.003639 268 kubelet.go:2205] "SyncTerminatedPod exit" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.003654 268 pod_workers.go:1460] "Pod is complete and the worker can now stop" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.003677 268 pod_workers.go:1306] "Processing pod event done" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" updateType="terminated"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.003686 268 pod_workers.go:951] "Pod worker has stopped" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.004882 268 round_trippers.go:553] PATCH https://kind-control-plane:6443/api/v1/namespaces/default/pods/indexed-job-0-ppwc9/status 200 OK in 9 milliseconds
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.005119 268 status_manager.go:874] "Patch status for pod" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" patch="{\"metadata\":{\"uid\":\"4a2edca4-18e7-41ee-94c6-170d1498fe93\"},\"status\":{\"podIP\":\"10.244.1.2\",\"podIPs\":[{\"ip\":\"10.244.1.2\"}]}}"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.005184 268 status_manager.go:883] "Status for pod updated successfully" pod="default/indexed-job-0-ppwc9" statusVersion=4 status={"phase":"Succeeded","conditions":[{"type":"PodReadyToStartContainers","status":"False","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:51:46Z"},{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z","reason":"PodCompleted"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:51:46Z","reason":"PodCompleted"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:51:46Z","reason":"PodCompleted"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-05-23T09:50:58Z"}],"hostIP":"192.168.8.2","hostIPs":[{"ip":"192.168.8.2"}],"podIP":"10.244.1.2","podIPs":[{"ip":"10.244.1.2"}],"startTime":"2024-05-23T09:50:58Z","containerStatuses":[{"name":"usybox","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2024-05-23T09:51:39Z","finishedAt":"2024-05-23T09:51:40Z","containerID":"containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be"}},"lastState":{},"ready":false,"restartCount":0,"image":"docker.io/nvidia/cuda:11.8.0-devel-ubuntu22.04","imageID":"docker.io/nvidia/cuda@sha256:94fd755736cb58979173d491504f0b573247b1745250249415b07fefc738e41f","containerID":"containerd://17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be","started":false}],"qosClass":"BestEffort"}
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.005242 268 status_manager.go:833] "Sync pod status" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" statusUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" version=5
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.005257 268 kubelet.go:2428] "SyncLoop RECONCILE" source="api" pods=["default/indexed-job-0-ppwc9"]
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.007383 268 round_trippers.go:553] GET https://kind-control-plane:6443/api/v1/namespaces/default/pods/indexed-job-0-ppwc9 200 OK in 2 milliseconds
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.007761 268 status_manager.go:874] "Patch status for pod" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" patch="{\"metadata\":{\"uid\":\"4a2edca4-18e7-41ee-94c6-170d1498fe93\"}}"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.007787 268 status_manager.go:881] "Status for pod is up-to-date" pod="default/indexed-job-0-ppwc9" statusVersion=5
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.007805 268 status_manager.go:944] "The pod termination is finished as SyncTerminatedPod completes its execution" phase="Succeeded" localPhase="Succeeded" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.014164 268 kubelet.go:2431] "SyncLoop DELETE" source="api" pods=["default/indexed-job-0-ppwc9"]
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.014198 268 pod_workers.go:840] "Pod is finished processing, no further updates" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" updateType="update"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.017689 268 kubelet.go:2425] "SyncLoop REMOVE" source="api" pods=["default/indexed-job-0-ppwc9"]
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.017700 268 round_trippers.go:553] DELETE https://kind-control-plane:6443/api/v1/namespaces/default/pods/indexed-job-0-ppwc9 200 OK in 9 milliseconds
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.017857 268 kubelet.go:2259] "Pod has been deleted and must be killed" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.017865 268 status_manager.go:912] "Pod fully terminated and removed from etcd" pod="default/indexed-job-0-ppwc9"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.017885 268 pod_workers.go:840] "Pod is finished processing, no further updates" pod="default/indexed-job-0-ppwc9" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" updateType="kill"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.791636 268 pod_workers.go:1634] "Pod has been terminated and is no longer known to the kubelet, remove all history" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.793326 268 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" path="/var/lib/kubelet/pods/4a2edca4-18e7-41ee-94c6-170d1498fe93/volumes"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.793869 268 kubelet_volumes.go:250] "Orphaned pod found, removing" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.993283 268 generic.go:184] "GenericPLEG" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containerID="17267527cedf2eb896eef0e874658a43f03f11a093b3d6b543bb5491de9c99be" oldState="exited" newState="non-existent"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.993709 268 kuberuntime_manager.go:1439] "getSandboxIDByPodUID got sandbox IDs for pod" podSandboxID=["968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b"] pod="default/indexed-job-0-ppwc9"
May 23 09:51:47 kind-worker kubelet[268]: I0523 09:51:47.994276 268 generic.go:457] "PLEG: Write status" pod="default/indexed-job-0-ppwc9" podStatus={"ID":"4a2edca4-18e7-41ee-94c6-170d1498fe93","Name":"indexed-job-0-ppwc9","Namespace":"default","IPs":[],"ContainerStatuses":[],"SandboxStatuses":[{"id":"968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b","metadata":{"name":"indexed-job-0-ppwc9","uid":"4a2edca4-18e7-41ee-94c6-170d1498fe93","namespace":"default"},"state":1,"created_at":1716457858766557828,"network":{},"linux":{"namespaces":{"options":{"pid":1}}},"labels":{"batch.kubernetes.io/controller-uid":"f2ba7ffa-807f-40fc-a09e-84b90b674c25","batch.kubernetes.io/job-completion-index":"0","batch.kubernetes.io/job-name":"indexed-job","controller-uid":"f2ba7ffa-807f-40fc-a09e-84b90b674c25","io.kubernetes.pod.name":"indexed-job-0-ppwc9","io.kubernetes.pod.namespace":"default","io.kubernetes.pod.uid":"4a2edca4-18e7-41ee-94c6-170d1498fe93","job-name":"indexed-job"},"annotations":{"batch.kubernetes.io/job-completion-index":"0","kubernetes.io/config.seen":"2024-05-23T09:50:58.441548453Z","kubernetes.io/config.source":"api"}}],"TimeStamp":"0001-01-01T00:00:00Z"}
May 23 09:51:55 kind-worker kubelet[268]: I0523 09:51:55.808798 268 kuberuntime_gc.go:342] "Removing pod logs" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
May 23 09:51:56 kind-worker kubelet[268]: I0523 09:51:56.007697 268 generic.go:184] "GenericPLEG" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93" containerID="968710c8b236666f7805abd80538eab7700e7645639781ae329927f961a6502b" oldState="exited" newState="non-existent"
May 23 09:51:56 kind-worker kubelet[268]: I0523 09:51:56.007711 268 generic.go:436] "PLEG: Delete status for pod" podUID="4a2edca4-18e7-41ee-94c6-170d1498fe93"
from kubernetes.
/remove-kind bug
from kubernetes.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
from kubernetes.
/remove-lifecycle stale
from kubernetes.
/remove-priority important-soon
from kubernetes.
/assign
I will try to move it forward.
from kubernetes.