
gardener-extension-provider-openstack's Introduction


Project Gardener implements the automated management and operation of Kubernetes clusters as a service. Its main principle is to leverage Kubernetes concepts for all of its tasks.

Recently, most of the vendor-specific logic has been developed in-tree. However, the project has grown to a size where it is very hard to extend, maintain, and test. With GEP-1 we have proposed how the architecture can be changed to support external controllers that contain their very own vendor specifics. This way, we can keep the Gardener core clean and independent.

This controller implements Gardener's extension contract for the OpenStack provider.

An example of a ControllerRegistration resource that can be used to register this controller with Gardener can be found here.
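For illustration, a minimal ControllerRegistration sketch (a hedged example with only a few of the resource kinds; the example in the repository is authoritative):

apiVersion: core.gardener.cloud/v1beta1
kind: ControllerRegistration
metadata:
  name: provider-openstack
spec:
  resources:
  - kind: Infrastructure
    type: openstack
  - kind: ControlPlane
    type: openstack
  - kind: Worker
    type: openstack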

Please find more information regarding the extensibility concepts and a detailed proposal here.

Supported Kubernetes versions

This extension controller supports the following Kubernetes versions:

Version Support Conformance test results
Kubernetes 1.30 1.30.0+ N/A
Kubernetes 1.29 1.29.0+ Gardener v1.29 Conformance Tests
Kubernetes 1.28 1.28.0+ Gardener v1.28 Conformance Tests
Kubernetes 1.27 1.27.0+ Gardener v1.27 Conformance Tests
Kubernetes 1.26 1.26.0+ Gardener v1.26 Conformance Tests
Kubernetes 1.25 1.25.0+ Gardener v1.25 Conformance Tests

Please take a look here to see which versions are supported by Gardener in general.


Compatibility

The following lists known compatibility issues of this extension controller with other Gardener components.

OpenStack Extension Gardener Action Notes
OpenStack Extension: < v1.12.0
Gardener: > v1.10.0
Action: Update the provider version to >= v1.12.0 or disable the feature gate MountHostCADirectories in the Gardenlet.
Notes: Applies if the feature gate MountHostCADirectories is enabled in the Gardenlet. This prevents duplicate volume mounts to /usr/share/ca-certificates in the Shoot API server.
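If you opt to disable the feature gate instead of upgrading, this is a single entry in the Gardenlet component configuration; a minimal sketch, assuming the standard featureGates field of the GardenletConfiguration:

apiVersion: gardenlet.config.gardener.cloud/v1alpha1
kind: GardenletConfiguration
featureGates:
  MountHostCADirectories: false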

How to start using or developing this extension controller locally

You can run the controller locally on your machine by executing make start.

Static code checks and tests can be executed by running make verify. We use Go modules for dependency management and Ginkgo/Gomega for testing.

Feedback and Support

Feedback and contributions are always welcome. Please report bugs or suggestions as GitHub issues or join our Slack channel #gardener (please invite yourself to the Kubernetes workspace here).

Learn more!

Please find further resources about our project here:


gardener-extension-provider-openstack's Issues

Drop floating pool name validation

The OpenStack validator checks that the floatingPoolName used for shoots is one that was previously defined in the CloudProfile by a Gardener operator. However, some OpenStack environments don't manage the floating pools globally (at least not all of them) but sometimes also per project. Hence, such a list in the CloudProfile is never complete, and we should allow end-users to enter an arbitrary floating pool name in their shoot spec. The list in the CloudProfile can still be kept to help users and the dashboard discover default pools.
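For context, the floating pool name is set in the shoot's infrastructure configuration; a minimal sketch with example values:

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
floatingPoolName: my-team-fip-pool # today this must match an entry in the CloudProfile
networks:
  workers: 10.250.0.0/19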

Cannot delete infrastructure when credentials data keys are missing in secret

From gardener-attic/gardener-extensions#577

If the account secret does not contain a service account JSON, the cluster certainly cannot be created.
But deleting such a cluster then fails for the very same reason:

Waiting until shoot infrastructure has been destroyed
Last Error
task "Waiting until shoot infrastructure has been destroyed" failed: Failed to delete infrastructure: Error deleting infrastructure: secret shoot--berlin--rg-kyma/cloudprovider doesn't have a service account json

It is the same for the other providers as well; this is not something specific to GCP.

Infra integration test failing on PR

How to categorize this issue?

/area quality dev-productivity
/kind bug
/priority normal
/platform openstack

What happened:

Currently, the infra integration test fails when executed via the TM bot because the test run cannot reach the SAP-internal OpenStack installation (isolated environment):
see #179 (comment)

What you expected to happen:
The infrastructure integration test to be executable on a PR via the TM bot.

How to reproduce it (as minimally and precisely as possible):
/test-single

Anything else we need to know?:

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Add domain field to `openstackv1alpha1.FloatingPool`

What would you like to be added:
Let's add a new optional domain *string field to openstackv1alpha1.FloatingPool. It allows configuring that the given floating pool is only available in the specified domain. If no domain is specified, the pool is valid in all domains:

apiVersion: core.gardener.cloud/v1beta1
kind: CloudProfile
metadata:
  name: openstack
spec:
  type: openstack
  ...
  providerConfig:
    apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
    kind: CloudProfileConfig
    ...
    constraints:
      floatingPools:
      - name: fp-pool-1 # valid in `domain1` domain in `eu-1` region
        region: eu-1
        domain: domain1
      - name: fp-pool-2 # valid in all domains in `eu-1` region
        region: eu-1
      - name: fp-pool-3 # valid in all domains and all regions
      loadBalancerProviders:
      - name: haproxy

Why is this needed:
In some OpenStack environments the floating pools differ not only per region (already supported) but also per domain (not yet supported). Reflecting this in the CloudProfile improves the user experience and allows the Gardener Dashboard to offer the correct values in the cluster creation dialog.

☂️-Issue for "Support Open Telekom Cloud (OTC)"

How to categorize this issue?

/area control-plane
/area os
/kind enhancement
/priority normal
/platform openstack

Open Telekom Cloud (OTC) is mostly based on OpenStack (ref), and using this OpenStack provider extension seems to be the quickest and preferable way to support OTC.

Identified issues:

🚧 Egress traffic for worker machines not possible (#165).

❔ VM root disk size is not configurable (see the sketch after this list).

error message: {"badRequest": {"message": "Block Device Mapping is Invalid: Boot sequence for the instance and image/block device mapping combination is not valid.", "code": 400}}

❔ LoadBalancers can't be created via K8s services: kubernetes/cloud-provider-openstack#960
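Regarding the root disk size item: in current Gardener API terms, the disk for a worker machine is normally controlled via the volume section of a worker pool in the shoot spec; a minimal sketch with example values:

apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-otc-shoot
spec:
  provider:
    type: openstack
    workers:
    - name: worker-pool-1
      machine:
        type: m1.large # example flavor
      volume:
        size: 50Gi # the root disk size OTC currently rejects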


Status
completed
🚧 in progress
in clarification
incomplete

Mirror CCM and CSI images to GCR

How to categorize this issue?

/area control-plane
/kind enhancement
/priority normal
/platform openstack

What would you like to be added:
Use the public GCR repo to mirror the CSI and CCM images that are still pulled from Docker Hub and may incur rate limits.

Why is this needed:
Circumvent DockerHub's rate-limiting.
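Purely as an illustration (the image-list layout and mirror repository paths below are assumptions, not the actual image declarations of this repository), the mirrored references could look like:

images:
- name: openstack-cloud-controller-manager
  repository: eu.gcr.io/gardener-project/mirror/openstack-cloud-controller-manager # hypothetical mirror path
  tag: v1.18.0
- name: cinder-csi-plugin
  repository: eu.gcr.io/gardener-project/mirror/cinder-csi-plugin # hypothetical mirror path
  tag: v1.18.0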

Update credentials during Worker deletion

From gardener-attic/gardener-extensions#523

Steps to reproduce:

  1. Create a Shoot with valid cloud provider credentials my-secret.
  2. Ensure that the Shoot is successfully created.
  3. Invalidate the my-secret credentials.
  4. Delete the Shoot.
  5. Update my-secret credentials with valid ones.
  6. Ensure that the Shoot deletion fails while waiting for the Worker to be deleted.

Currently we do not sync the cloudprovider credentials into the <Provider>MachineClass during Worker deletion. Hence, machine-controller-manager fails to delete the machines because it still uses the invalid credentials.

Add the gardener cluster prefix to the OpenStack resources

What would you like to be added:

Add the gardener cluster prefix to the OpenStack resource names, especially key pairs, since key pair names must be unique within a project.

Why is this needed:

Multiple isolated Gardener clusters within the same OpenStack project cannot otherwise create shoots that would use the same key pair name:

openstack_compute_keypair_v2.ssh_key: Creating...
  fingerprint: "" => "<computed>"
  name:        "" => "shoot--garden--foo"
  private_key: "" => "<computed>"
  public_key:  "" => "ssh-rsa AAAAB3..."
  region:      "" => "<computed>"

Error: Error applying plan:

1 error occurred:
        * openstack_compute_keypair_v2.ssh_key: 1 error occurred:
        * openstack_compute_keypair_v2.ssh_key: Unable to create openstack_compute_keypair_v2 shoot--garden--foo: Expected HTTP response code [200] when accessing [POST https://openstack:443/v2.1/os-keypairs], but got 409 instead
{"conflictingRequest": {"message": "Key pair 'shoot--garden--foo' already exists.", "code": 409}}

The same could be done for network resources, e.g. networks, subnets, and routers.

Get rid of the auth_url field in user secret

How to categorize this issue?
/area open-source
/kind technical-debt
/priority normal
/platform openstack

What would you like to be added:
We should no longer use the auth_url field in the referenced Shoot secret provided by the user.

Why is this needed:
The auth_url is the same as the Keystone URL of the respective OpenStack environment, and that URL is already maintained in the CloudProfile via .spec.providerConfig.keystoneURL | .keystoneURLs[]. However, we do not rely on the keystoneURL from the CloudProfile everywhere, e.g. in the backup controller. We should change this so that the auth_url field in the user secret is always ignored and the keystoneURL is read only from the CloudProfile.
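For reference, a sketch of where the Keystone URL already lives in the CloudProfile provider configuration (example values):

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: CloudProfileConfig
keystoneURL: https://keystone.example.com:5000/v3
keystoneURLs: # optional region-specific overrides
- region: eu-1
  url: https://keystone.eu-1.example.com:5000/v3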

cc @kon-angelo

Remove insecure flag from infra TF config

What would you like to be added:
We should remove the insecure=true config here: https://github.com/gardener/gardener-extension-provider-openstack/blob/master/charts/internal/openstack-infra/templates/main.tf#L8. This probably requires specifying the CA in case the OpenStack API certificate is not signed by a commonly trusted CA.

Why is this needed:
We should not use insecure connections in post-PoC phases. This was never cleaned up and has existed since day one, so let's clean it up now.

/cc @dkistner @kayrus

Add OS_USER_DOMAIN_NAME support for the OpenStack auth

When the technical user is created in a different domain, authentication fails.

More background is here: gardener/garden-setup#58
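To make the gap concrete, a hedged sketch of the user-provided cloudprovider secret: domainName, tenantName, username, and password are the existing keys, while userDomainName is a hypothetical new key (mirroring the OS_USER_DOMAIN_NAME environment variable) that this issue asks for:

apiVersion: v1
kind: Secret
metadata:
  name: my-openstack-credentials
  namespace: garden-dev
stringData:
  domainName: project-domain # existing key: the project/tenant domain
  tenantName: my-tenant
  username: technical-user
  password: secret
  userDomainName: user-domain # hypothetical key: the domain the technical user lives in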

At least the following files have to be modified:

$ egrep -rli 'domainName|domain.name' controllers/provider-openstack/ | grep -v _test
controllers/provider-openstack/charts/internal/openstack-infra/values.yaml
controllers/provider-openstack/charts/internal/openstack-infra/templates/main.tf
controllers/provider-openstack/charts/internal/cloud-provider-config/values.yaml
controllers/provider-openstack/charts/internal/cloud-provider-config/templates/cloud-provider-config.yaml
controllers/provider-openstack/charts/internal/machineclass/values.yaml
controllers/provider-openstack/charts/internal/machineclass/templates/machineclass.yaml
controllers/provider-openstack/example/30-infrastructure.yaml
controllers/provider-openstack/example/30-worker.yaml
controllers/provider-openstack/example/30-controlplane.yaml
controllers/provider-openstack/example/30-etcd-backup-secret.yaml
controllers/provider-openstack/pkg/internal/credentials.go
controllers/provider-openstack/pkg/internal/infrastructure/terraform.go
controllers/provider-openstack/pkg/openstack/types.go
controllers/provider-openstack/pkg/controller/controlplane/valuesprovider.go
controllers/provider-openstack/pkg/controller/worker/machines.go
controllers/provider-openstack/pkg/webhook/controlplanebackup/ensurer.go

/cc @afritzler this is quite a critical issue for us.

Forbid replacing secret with new account for existing Shoots

What would you like to be added:
Currently we don't have a validation that would prevent users from replacing their cloudprovider secret with credentials for another account. Basically, we only have a warning in the dashboard - ref gardener/dashboard#422.

Steps to reproduce:

  1. Get an existing Shoot.
  2. Update its secret with credentials for another account.
  3. Ensure that on the next reconciliation, new infra resources are created in the new account. The old infra resources and machines in the old account will leak.
    For me the reconciliation failed at
    lastOperation:
      description: Waiting until the Kubernetes API server can connect to the Shoot
        workers
      lastUpdateTime: "2020-02-20T14:56:43Z"
      progress: 89
      state: Processing
      type: Reconcile

with reason

$ k describe svc -n kube-system vpn-shoot
Events:
  Type     Reason                   Age                  From                Message
  ----     ------                   ----                 ----                -------
  Normal   EnsuringLoadBalancer     7m38s (x6 over 10m)  service-controller  Ensuring load balancer
  Warning  SyncLoadBalancerFailed   7m37s (x6 over 10m)  service-controller  Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB

Why is this needed:
Prevent users from harming themselves.

Cannot update v1.18 Shoot to v1.19

How to categorize this issue?

/area quality storage
/kind bug
/platform openstack

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

  1. Create v1.18.8 Shoot cluster

  2. Create sample PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: app
        image: centos
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
        volumeMounts:
        - name: persistent-storage
          mountPath: /data
      volumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: ebs-claim
  3. Upgrade the Shoot cluster to v1.19.1

  4. Ensure that kube-controller-manager enters CrashLoopBackOff

$ k -n shoot--foo--bar get po
NAME                                          READY   STATUS             RESTARTS   AGE
cert-controller-manager-576c7dc6b9-ng72k      1/1     Running            0          32m
cloud-controller-manager-855bcb47b6-n7qqw     1/1     Running            0          24m
csi-driver-controller-6c967d976f-tbh25        6/6     Running            0          12m
csi-snapshot-controller-5646cb648b-nj4s7      1/1     Running            0          14m
etcd-events-0                                 2/2     Running            0          34m
etcd-main-0                                   2/2     Running            0          34m
gardener-resource-manager-845f588c4f-7fb8b    1/1     Running            0          32m
kube-apiserver-58496f9bcb-5kdjk               2/2     Running            0          14m
kube-controller-manager-5c95b6667c-qmz6t      0/1     CrashLoopBackOff   6          8m55s
kube-scheduler-68b7768cf6-brmgv               1/1     Running            0          13m
machine-controller-manager-7877b49ccb-z6k5m   1/1     Running            0          32m
shoot-dns-service-78489fd5b9-ntfqf            1/1     Running            0          32m

Logs of kube-controller-manager:

I0913 10:57:55.549520       1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0913 10:57:55.549644       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="kube-controller-manager-5c95b6667c-qmz6t_492e5efc-bdf9-4907-8746-f8e35759a889 became leader"
I0913 10:57:55.549667       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="kube-controller-manager-5c95b6667c-qmz6t_492e5efc-bdf9-4907-8746-f8e35759a889 became leader"
I0913 10:57:55.550923       1 controllermanager.go:224] using dynamic client builder
F0913 10:57:56.155544       1 controllermanager.go:244] error building controller context: cloud provider could not be initialized: could not init cloud provider "openstack": warning:
can't store data at section "BlockStorage", variable "rescan-on-resize"
goroutine 264 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc000c42a00, 0x100, 0x200)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x6a5a0c0, 0xc000000003, 0x0, 0x0, 0xc000baa070, 0x68b312d, 0x14, 0xf4, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printf(0x6a5a0c0, 0x3, 0x0, 0x0, 0x4495a7c, 0x25, 0xc001031ae0, 0x1, 0x1)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatalf(...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1456
k8s.io/kubernetes/cmd/kube-controller-manager/app.Run.func1(0x4a601e0, 0xc00060a440)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:244 +0x54e
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:208 +0x113

goroutine 1 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007fb6f8, 0x49f9ca0, 0xc001084540, 0xc00060a401, 0xc0010828a0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007fb6f8, 0x77359400, 0x0, 0xc00060a401, 0xc0010828a0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew(0xc000032240, 0x4a601e0, 0xc00060a480)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:263 +0x107
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc000032240, 0x4a601e0, 0xc00060a440)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:209 +0x13b
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection.RunOrDie(0x4a60220, 0xc000058040, 0x4a963a0, 0xc000ece3c0, 0x37e11d600, 0x2540be400, 0x77359400, 0xc000ece3a0, 0x45a55e0, 0x0, ...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:222 +0x9c
k8s.io/kubernetes/cmd/kube-controller-manager/app.Run(0xc00011a7a0, 0xc0000a20c0, 0xc000bace50, 0xc000414690)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:285 +0x979
k8s.io/kubernetes/cmd/kube-controller-manager/app.NewControllerManagerCommand.func2(0xc000516580, 0xc000517600, 0x0, 0x2a)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:124 +0x2b7
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000516580, 0xc00004c2d0, 0x2a, 0x2b, 0xc000516580, 0xc00004c2d0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000516580, 0x163452aa0807c40e, 0x6a59c80, 0x406505)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
main.main()
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/controller-manager.go:46 +0xe5

goroutine 6 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x6a5a0c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

goroutine 75 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc000315640)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:207 +0x66
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewBroadcaster
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/mux.go:75 +0xce

goroutine 137 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/authentication/request/headerrequest.(*RequestHeaderAuthRequestController).Run(0xc000bf9040, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:182 +0x2ff
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options.(*DynamicRequestHeaderController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options/authentication_dynamic_request_header.go:77 +0x94

goroutine 132 [IO wait]:
internal/poll.runtime_pollWait(0x7ff3fc9ead98, 0x72, 0x49fdb60)
	/usr/local/go/src/runtime/netpoll.go:220 +0x55
internal/poll.(*pollDesc).wait(0xc0003ba418, 0x72, 0xc0003ca800, 0x1670, 0x1670)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc0003ba400, 0xc0003ca800, 0x1670, 0x1670, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:159 +0x1b1
net.(*netFD).Read(0xc0003ba400, 0xc0003ca800, 0x1670, 0x1670, 0x203000, 0x7ff3fc543fa8, 0x3f)
	/usr/local/go/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc000768020, 0xc0003ca800, 0x1670, 0x1670, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:182 +0x8e
crypto/tls.(*atLeastReader).Read(0xc00101c7e0, 0xc0003ca800, 0x1670, 0x1670, 0x1e, 0x166b, 0xc00010f710)
	/usr/local/go/src/crypto/tls/conn.go:779 +0x62
bytes.(*Buffer).ReadFrom(0xc000bc4600, 0x49f48e0, 0xc00101c7e0, 0x40bcc5, 0x3c3d1c0, 0x430d9a0)
	/usr/local/go/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc000bc4380, 0x49faaa0, 0xc000768020, 0x5, 0xc000768020, 0xd)
	/usr/local/go/src/crypto/tls/conn.go:801 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc000bc4380, 0x0, 0x0, 0x442ecdf)
	/usr/local/go/src/crypto/tls/conn.go:608 +0x115
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:576
crypto/tls.(*Conn).Read(0xc000bc4380, 0xc0001c4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:1252 +0x15f
bufio.(*Reader).Read(0xc0009fcc00, 0xc00044e2d8, 0x9, 0x9, 0xc000ce8000, 0xc00010fd28, 0x405f8e)
	/usr/local/go/src/bufio/bufio.go:227 +0x222
io.ReadAtLeast(0x49f4720, 0xc0009fcc00, 0xc00044e2d8, 0x9, 0x9, 0x9, 0x4415506, 0x464840, 0xc000348600)
	/usr/local/go/src/io/io.go:314 +0x87
io.ReadFull(...)
	/usr/local/go/src/io/io.go:333
k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc00044e2d8, 0x9, 0x9, 0x49f4720, 0xc0009fcc00, 0x0, 0xc000000000, 0xc001016030, 0xc000aca090)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00044e2a0, 0xc001016030, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00010ffa8, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1794 +0xd8
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0009d8180)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1716 +0x6f
created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:695 +0x66e

goroutine 97 [select]:
k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc00057cff0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:154 +0x105
created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.opencensus.io/stats/view/worker.go:32 +0x57

goroutine 162 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0002f11a0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:231 +0x405
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:68 +0x185

goroutine 74 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x45ad030, 0x49f9ca0, 0xc000aaaea0, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x45ad030, 0x12a05f200, 0x0, 0xc00079db01, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x45ad030, 0x12a05f200)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a

goroutine 76 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x4a099e0, 0xc000c3d9e0, 0xc000c7e0f0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:301 +0xaa
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:299 +0x6e

goroutine 77 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x4a099e0, 0xc000c3db90, 0xc000c3db60)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:301 +0xaa
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:299 +0x6e

goroutine 78 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc00031b4a0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:198 +0xac
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:58 +0x135

goroutine 79 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00031b620)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:231 +0x405
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:68 +0x185

goroutine 88 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc00079cae0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:198 +0xac
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:58 +0x135

goroutine 89 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00079cd20)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:231 +0x405
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:68 +0x185

goroutine 90 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc00079cd80)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:198 +0xac
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:58 +0x135

goroutine 91 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00079cea0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:231 +0x405
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:68 +0x185

goroutine 92 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc00079cf60)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:198 +0xac
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:58 +0x135

goroutine 93 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00079d0e0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:231 +0x405
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:68 +0x185

goroutine 134 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).Run(0xc000bf66e0, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:222 +0x365
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.unionCAContent.Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/union_content.go:104 +0xcb

goroutine 135 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options.(*DynamicRequestHeaderController).Run(0xc0001a0790, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options/authentication_dynamic_request_header.go:78 +0xab
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.unionCAContent.Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/union_content.go:104 +0xcb

goroutine 163 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc0004357e8, 0x1)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc0004357d8)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc0004357c0, 0xc000fa6050, 0x0, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:488 +0x98
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000fa42d0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:179 +0x42
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000cd3e70)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cd3e70, 0x49f9ca0, 0xc000fa02d0, 0xc0008dd001, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000cd3e70, 0x3b9aca00, 0x0, 0xc000972201, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run(0xc000fa42d0, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:150 +0x2ce
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc00073fd60, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:410 +0x42a
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 +0x20a

goroutine 161 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc0002f0de0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:198 +0xac
created by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:58 +0x135

goroutine 136 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).Run(0xc000bf6790, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:222 +0x365
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options.(*DynamicRequestHeaderController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/options/authentication_dynamic_request_header.go:76 +0x5a

goroutine 138 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc0000c62a8, 0x1)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc0000c6298)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc0000c6280, 0xc0007760d0, 0x0, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:488 +0x98
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000cd8000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:179 +0x42
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000164e70)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000164e70, 0x49f9ca0, 0xc000cd6030, 0xc000c1c001, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000164e70, 0x3b9aca00, 0x0, 0xc000972301, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run(0xc000cd8000, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:150 +0x2ce
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc00073fe00, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:410 +0x42a
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/authentication/request/headerrequest.(*RequestHeaderAuthRequestController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 +0x20a

goroutine 149 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc000315f50, 0xc000000000)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc000315f40)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc00079cd80, 0x0, 0x0, 0x390db00)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:145 +0x89
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).processNextWorkItem(0xc000bf6790, 0x203000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:231 +0x6c
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).runWorker(0xc000bf6790)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:226 +0x2b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000ec4010)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000ec4010, 0x49f9ca0, 0xc000ab4060, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000ec4010, 0x3b9aca00, 0x0, 0x45ada01, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000ec4010, 0x3b9aca00, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:214 +0x2d1

goroutine 141 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc0008f22a8, 0x1)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc0008f2298)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc0008f2280, 0xc0002ab990, 0x0, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:488 +0x98
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000c56360)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:179 +0x42
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00016be70)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00016be70, 0x49f9ca0, 0xc00066b890, 0xc0003b1201, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00016be70, 0x3b9aca00, 0x0, 0xc000972101, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run(0xc000c56360, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:150 +0x2ce
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc00073fcc0, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:410 +0x42a
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 +0x20a

goroutine 150 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000ec6000, 0xc000ec4020, 0xc000ab2240, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539 +0x11d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc000ec4020, 0xc0000a20c0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492 +0xc5
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc000ec4020, 0xc0000a20c0, 0x1, 0xc000cd2f01)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:217 +0x348

goroutine 209 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:772 +0x5d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000072760)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000abdf60, 0x49f9ca0, 0xc000ab4000, 0x3adca01, 0xc000ab2000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000072760, 0x3b9aca00, 0x0, 0x1, 0xc000ab2000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc000c21000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:771 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bab5d0, 0xc001080000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 177 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedProcessor).run(0xc000bab570, 0xc000ab00c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c1c000, 0xc00034a360)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 178 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0000a20c0, 0xc000cd8000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:127 +0x34
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:126 +0xa5

goroutine 241 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc000764050, 0xc000000000)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc000764040)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc00079cf60, 0x0, 0x0, 0x390db00)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:145 +0x89
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/authentication/request/headerrequest.(*RequestHeaderAuthRequestController).processNextWorkItem(0xc000bf9040, 0x203000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:209 +0x66
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/authentication/request/headerrequest.(*RequestHeaderAuthRequestController).runWorker(0xc000bf9040)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:204 +0x2b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000abe0a0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000abe0a0, 0x49f9ca0, 0xc001016720, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000abe0a0, 0x3b9aca00, 0x0, 0x45ada01, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000abe0a0, 0x3b9aca00, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/authentication/request/headerrequest.(*RequestHeaderAuthRequestController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:180 +0x2e5

goroutine 225 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:772 +0x5d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0003d6f60)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000ab9f60, 0x49f9ca0, 0xc0007be000, 0x3adca01, 0xc0007bc000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003d6f60, 0x3b9aca00, 0x0, 0x1, 0xc0007bc000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc000c20f00)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:771 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bab560, 0xc0007b0000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 167 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedProcessor).run(0xc000bab500, 0xc0000a30e0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008dcf90, 0xc000f697c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 168 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0000a20c0, 0xc000fa42d0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:127 +0x34
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:126 +0xa5

goroutine 210 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).pop(0xc000c21000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:742 +0x157
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bab5d0, 0xc001080010)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 193 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedProcessor).run(0xc000bab490, 0xc00044c660)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0003b1190, 0xc0002ef300)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 194 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0000a20c0, 0xc000c56360)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:127 +0x34
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:126 +0xa5

goroutine 211 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:772 +0x5d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000cdb760)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000e47f60, 0x49f9ca0, 0xc001084000, 0x3adca01, 0xc001082000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000cdb760, 0x3b9aca00, 0x0, 0x1, 0xc001082000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc000c20b00)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:771 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bab4f0, 0xc001080020)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 212 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).pop(0xc000c20b00)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:742 +0x157
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bab4f0, 0xc001080030)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 226 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).pop(0xc000c20f00)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:742 +0x157
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bab560, 0xc0007b0010)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 179 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).watchHandler(0xc000aa9a00, 0xbfcf9ca8d4a2f67e, 0x7454cdc7, 0x6a59c80, 0x4a099a0, 0xc000764100, 0xc000e91b88, 0xc00079dce0, 0xc0000a20c0, 0x0, ...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:451 +0x1a5
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc000aa9a00, 0xc0000a20c0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:415 +0x657
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:209 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000165ee0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000e91ee0, 0x49f9c80, 0xc000432140, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000aa9a00, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:208 +0x196
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c1c040, 0xc00034a500)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 147 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicCertKeyPairContent).Run(0xc000babea0, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_serving_content.go:144 +0x2cd
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*SecureServingInfo).tlsConfig
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:113 +0x88b

goroutine 195 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).watchHandler(0xc00086de10, 0xbfcf9ca8cf69a83d, 0x6f1b7f92, 0x6a59c80, 0x4a099a0, 0xc0001801c0, 0xc000e95b88, 0xc000561260, 0xc0000a20c0, 0x0, ...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:451 +0x1a5
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc00086de10, 0xc0000a20c0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:415 +0x657
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:209 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000ebfee0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000e95ee0, 0x49f9c80, 0xc0000a0fa0, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00086de10, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:208 +0x196
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0003b1280, 0xc0002ef400)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 213 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func2(0xc00086de10, 0xc0000a20c0, 0xc001082060, 0xc000561260)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:361 +0x16f
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:355 +0x2a5

goroutine 148 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func2(0xc000aa9a00, 0xc0000a20c0, 0xc0001495c0, 0xc00079dce0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:361 +0x16f
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:355 +0x2a5

goroutine 118 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc000315c50, 0xc000000000)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc000315c40)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc00031b4a0, 0x0, 0x0, 0x390db00)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:145 +0x89
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicCertKeyPairContent).processNextWorkItem(0xc000babea0, 0x203000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_serving_content.go:153 +0x66
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicCertKeyPairContent).runWorker(0xc000babea0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_serving_content.go:148 +0x2b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000abe020)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000abe020, 0x49f9ca0, 0xc0007be030, 0x45ac601, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000abe020, 0x3b9aca00, 0x0, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000abe020, 0x3b9aca00, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicCertKeyPairContent).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_serving_content.go:134 +0x245

goroutine 119 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc000acc000, 0xc000abe030, 0xc000ab2060, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539 +0x11d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc000abe030, 0xc0000a20c0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492 +0xc5
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc000abe030, 0xc0000a20c0, 0xb, 0xc000e44f48)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicCertKeyPairContent).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_serving_content.go:137 +0x2b3

goroutine 120 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc0000a20c0, 0xc000abe050, 0x4a601e0, 0xc00005a680)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0xbd
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c

goroutine 121 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc000ab2120, 0xdf8475800, 0x0, 0xc000ab20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588 +0x17b
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571 +0x8c

goroutine 169 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).watchHandler(0xc0002b35f0, 0xbfcf9ca8cf796b1f, 0x6f2b4273, 0x6a59c80, 0x4a099a0, 0xc000180540, 0xc0007dbb88, 0xc00036f620, 0xc0000a20c0, 0x0, ...)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:451 +0x1a5
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc0002b35f0, 0xc0000a20c0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:415 +0x657
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:209 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000cd4ee0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007dbee0, 0x49f9c80, 0xc000f52370, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002b35f0, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:208 +0x196
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008dd030, 0xc000f69880)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

goroutine 197 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run(0xc00052bd80, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:254 +0x245
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*SecureServingInfo).tlsConfig
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:136 +0x5fa

goroutine 228 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc000147090, 0xc000000001)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc000147080)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc0002f0de0, 0x0, 0x0, 0x390db00)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:145 +0x89
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).processNextWorkItem(0xc00052bd80, 0x203000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:263 +0x66
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).runWorker(0xc00052bd80)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:258 +0x2b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0007b04f0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007b04f0, 0x49f9ca0, 0xc0007be4b0, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007b04f0, 0x3b9aca00, 0x0, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0007b04f0, 0x3b9aca00, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1b3

goroutine 233 [select]:
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func2(0xc0002b35f0, 0xc0000a20c0, 0xc0007bc240, 0xc00036f620)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:361 +0x16f
created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:355 +0x2a5

goroutine 198 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer.func1(0xc00044c960, 0xc0000a20c0, 0x0, 0xc00044e460)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:221 +0x65
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:219 +0x88

goroutine 199 [IO wait]:
internal/poll.runtime_pollWait(0x7ff3fc9eaf58, 0x72, 0x0)
	/usr/local/go/src/runtime/netpoll.go:220 +0x55
internal/poll.(*pollDesc).wait(0xc000c20c98, 0x72, 0x0, 0x0, 0x441974a)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc000c20c80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:394 +0x1fc
net.(*netFD).accept(0xc000c20c80, 0x203000, 0x203000, 0x45addd8)
	/usr/local/go/src/net/fd_unix.go:172 +0x45
net.(*TCPListener).accept(0xc000c63240, 0x7ff3fc692620, 0xc0007e02d0, 0x50)
	/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc000c63240, 0x30, 0x30, 0x7ff4236832f0, 0xc000400800)
	/usr/local/go/src/net/tcpsock.go:261 +0x65
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.tcpKeepAliveListener.Accept(0x4a5c2a0, 0xc000c63240, 0x7ff4236832f0, 0xc0009d9500, 0x50, 0x3f484c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:261 +0x35
crypto/tls.(*listener).Accept(0xc0007d0500, 0x4067d40, 0xc0007be540, 0x3b5f680, 0x6a20c50)
	/usr/local/go/src/crypto/tls/tls.go:67 +0x37
net/http.(*Server).Serve(0xc00044e460, 0x4a45a20, 0xc0007d0500, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:2937 +0x266
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer.func2(0x4a5c2a0, 0xc000c63240, 0xc00044e460, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:236 +0xe9
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/secure_serving.go:227 +0xc8

goroutine 229 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007b0500, 0x49f9ca0, 0xc0007be480, 0x45ac601, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007b0500, 0xdf8475800, 0x0, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0007b0500, 0xdf8475800, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x22b

goroutine 239 [IO wait]:
internal/poll.runtime_pollWait(0x7ff3fc9eae78, 0x72, 0x49fdb60)
	/usr/local/go/src/runtime/netpoll.go:220 +0x55
internal/poll.(*pollDesc).wait(0xc00111a198, 0x72, 0xc000ef0000, 0x18c0, 0x18c0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00111a180, 0xc000ef0000, 0x18c0, 0x18c0, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:159 +0x1b1
net.(*netFD).Read(0xc00111a180, 0xc000ef0000, 0x18c0, 0x18c0, 0x203000, 0x66761b, 0xc0003ae860)
	/usr/local/go/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc0005103e8, 0xc000ef0000, 0x18c0, 0x18c0, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:182 +0x8e
crypto/tls.(*atLeastReader).Read(0xc0002ee120, 0xc000ef0000, 0x18c0, 0x18c0, 0x151, 0x1894, 0xc000ebd710)
	/usr/local/go/src/crypto/tls/conn.go:779 +0x62
bytes.(*Buffer).ReadFrom(0xc0003ae980, 0x49f48e0, 0xc0002ee120, 0x40bcc5, 0x3c3d1c0, 0x430d9a0)
	/usr/local/go/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc0003ae700, 0x49faaa0, 0xc0005103e8, 0x5, 0xc0005103e8, 0x140)
	/usr/local/go/src/crypto/tls/conn.go:801 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc0003ae700, 0x0, 0x0, 0xc000ebdd18)
	/usr/local/go/src/crypto/tls/conn.go:608 +0x115
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:576
crypto/tls.(*Conn).Read(0xc0003ae700, 0xc00114d000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:1252 +0x15f
bufio.(*Reader).Read(0xc000758300, 0xc0007e22d8, 0x9, 0x9, 0xc000ebdd18, 0x45add00, 0x9be4ab)
	/usr/local/go/src/bufio/bufio.go:227 +0x222
io.ReadAtLeast(0x49f4720, 0xc000758300, 0xc0007e22d8, 0x9, 0x9, 0x9, 0xc00007a050, 0x0, 0x49f4b40)
	/usr/local/go/src/io/io.go:314 +0x87
io.ReadFull(...)
	/usr/local/go/src/io/io.go:333
k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc0007e22d8, 0x9, 0x9, 0x49f4720, 0xc000758300, 0x0, 0x0, 0xc000cafb60, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0007e22a0, 0xc000cafb60, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000ebdfa8, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1794 +0xd8
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0007b2a80)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1716 +0x6f
created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:695 +0x66e

goroutine 151 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc0000a20c0, 0xc000ec4030, 0x4a601e0, 0xc00005ba00)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0xbd
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c

goroutine 237 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc0007c1d20, 0xc000000000)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc0007c1d10)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*pipe).Read(0xc0007c1d08, 0xc00111ca00, 0x200, 0x200, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/pipe.go:65 +0x97
k8s.io/kubernetes/vendor/golang.org/x/net/http2.transportResponseBody.Read(0xc0007c1ce0, 0xc00111ca00, 0x200, 0x200, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2083 +0xaf
encoding/json.(*Decoder).refill(0xc00112e160, 0xc0007d0b40, 0x7ff3fc6928c8)
	/usr/local/go/src/encoding/json/stream.go:165 +0xeb
encoding/json.(*Decoder).readValue(0xc00112e160, 0x0, 0x0, 0x3b64860)
	/usr/local/go/src/encoding/json/stream.go:140 +0x1ff
encoding/json.(*Decoder).Decode(0xc00112e160, 0x3bf35a0, 0xc0007d0b40, 0x30, 0x492a421)
	/usr/local/go/src/encoding/json/stream.go:63 +0x79
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/framer.(*jsonFrameReader).Read(0xc0007beff0, 0xc0007cf000, 0x400, 0x400, 0x73ad06, 0x38, 0x38)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/framer/framer.go:150 +0x1a8
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/streaming.(*decoder).Decode(0xc0007e05f0, 0x0, 0x4a095a0, 0xc000180580, 0xc0003d4eb0, 0x40bb8c, 0xc0003d4eb0, 0xc0007b0530, 0xc00044e460)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/streaming/streaming.go:77 +0x89
k8s.io/kubernetes/vendor/k8s.io/client-go/rest/watch.(*Decoder).Decode(0xc0007d0b20, 0x0, 0x0, 0xc000058040, 0x0, 0x100000000000000, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/rest/watch/decoder.go:49 +0x6e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc000180540)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:104 +0x147
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:71 +0xbe

goroutine 232 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc00109a9e0, 0x0)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc00109a9d0)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*pipe).Read(0xc00109a9c8, 0xc00111c200, 0x200, 0x200, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/pipe.go:65 +0x97
k8s.io/kubernetes/vendor/golang.org/x/net/http2.transportResponseBody.Read(0xc00109a9a0, 0xc00111c200, 0x200, 0x200, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2083 +0xaf
encoding/json.(*Decoder).refill(0xc0007c11e0, 0xc0007d0680, 0x7ff3fc6928c8)
	/usr/local/go/src/encoding/json/stream.go:165 +0xeb
encoding/json.(*Decoder).readValue(0xc0007c11e0, 0x0, 0x0, 0x3b64860)
	/usr/local/go/src/encoding/json/stream.go:140 +0x1ff
encoding/json.(*Decoder).Decode(0xc0007c11e0, 0x3bf35a0, 0xc0007d0680, 0xc0004b0d80, 0x10e87dc)
	/usr/local/go/src/encoding/json/stream.go:63 +0x79
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/framer.(*jsonFrameReader).Read(0xc0007be900, 0xc0007ce400, 0x400, 0x400, 0x0, 0x38, 0x38)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/framer/framer.go:150 +0x1a8
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/streaming.(*decoder).Decode(0xc0007e04b0, 0x0, 0x4a095a0, 0xc000180240, 0x0, 0xc0004b0ea8, 0x10000000040701a, 0x45ac658, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/streaming/streaming.go:77 +0x89
k8s.io/kubernetes/vendor/k8s.io/client-go/rest/watch.(*Decoder).Decode(0xc0007d0660, 0x0, 0x7ff4236832f0, 0x0, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/rest/watch/decoder.go:49 +0x6e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc0001801c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:104 +0x147
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:71 +0xbe

goroutine 123 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0006c4140, 0xc001018b70, 0xc00044d200, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539 +0x11d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc001018b70, 0xc0000a20c0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492 +0xc5
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc001018b70, 0xc0000a20c0, 0x1, 0xc00016af01)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:217 +0x348

goroutine 207 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc000ac81a0, 0x0)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc000ac8190)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*pipe).Read(0xc000ac8188, 0xc00087a000, 0x200, 0x200, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/pipe.go:65 +0x97
k8s.io/kubernetes/vendor/golang.org/x/net/http2.transportResponseBody.Read(0xc000ac8160, 0xc00087a000, 0x200, 0x200, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2083 +0xaf
encoding/json.(*Decoder).refill(0xc001020580, 0xc00101c820, 0x7ff3fc543fa8)
	/usr/local/go/src/encoding/json/stream.go:165 +0xeb
encoding/json.(*Decoder).readValue(0xc001020580, 0x0, 0x0, 0x3b64860)
	/usr/local/go/src/encoding/json/stream.go:140 +0x1ff
encoding/json.(*Decoder).Decode(0xc001020580, 0x3bf35a0, 0xc00101c820, 0x0, 0x0)
	/usr/local/go/src/encoding/json/stream.go:63 +0x79
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/framer.(*jsonFrameReader).Read(0xc0010165d0, 0xc000e4c000, 0x400, 0x400, 0x0, 0x38, 0x38)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/framer/framer.go:150 +0x1a8
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/streaming.(*decoder).Decode(0xc000ad00a0, 0x0, 0x4a095a0, 0xc000764140, 0x0, 0x0, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/streaming/streaming.go:77 +0x89
k8s.io/kubernetes/vendor/k8s.io/client-go/rest/watch.(*Decoder).Decode(0xc00101c800, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/rest/watch/decoder.go:49 +0x6e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc000764100)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:104 +0x147
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:71 +0xbe

goroutine 122 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:312
sync.runtime_notifyListWait(0xc000315f10, 0xc000000000)
	/usr/local/go/src/runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc000315f00)
	/usr/local/go/src/sync/cond.go:56 +0x9d
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc00079cae0, 0x0, 0x0, 0x390db00)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:145 +0x89
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).processNextWorkItem(0xc000bf66e0, 0x203000)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:231 +0x6c
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).runWorker(0xc000bf66e0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:226 +0x2b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc001018b60)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001018b60, 0x49f9ca0, 0xc000ec2180, 0x1, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001018b60, 0x3b9aca00, 0x0, 0x45ada01, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc001018b60, 0x3b9aca00, 0xc0000a20c0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*ConfigMapCAController).Run
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:214 +0x2d1

goroutine 124 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc0000a20c0, 0xc001018b80, 0x4a601e0, 0xc000430200)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0xbd
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c

goroutine 125 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc00044d2c0, 0xdf8475800, 0x0, 0xc00044d260)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588 +0x17b
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571 +0x8c

goroutine 152 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc000ab2300, 0xdf8475800, 0x0, 0xc000ab22a0)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588 +0x17b
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571 +0x8c

goroutine 268 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001019f60, 0x49f9ca0, 0xc000cd7a10, 0x49fad01, 0xc001082840)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001019f60, 0x6fc23ac00, 0x0, 0xc000eead01, 0xc001082840)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc001019f60, 0x6fc23ac00, 0xc001082840)
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/cmd/kube-controller-manager/app.CreateControllerContext
	/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:481 +0x48b

Anything else we need to know?:

Environment:

  • Gardener version (if relevant): v1.10.0
  • Extension version: v1.11.2
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

Make FIP configuration part of the router config

How to categorize this issue?
/kind api-change
/priority normal
/platform openstack

What would you like to be added:
The floatingPoolName and floatingPoolSubnetName fields are actually properties of the router, as they define which floating IP (FIP) network and FIP subnet the router should be attached to. Therefore these fields should be moved to the router section with the next API version.

See discussion here: #92 (comment)
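
For illustration, the reworked InfrastructureConfig could then look roughly like this (a sketch only; the exact field names are assumptions until the next API version is settled):

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  router:
    # hypothetical: FIP attachment becomes a property of the router
    floatingPoolName: fip-network-one
    floatingPoolSubnetName: fip-subnet-one
  zones:
  - name: eu-1a
    workers: 10.250.0.0/19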

SeedNetworkPoliciesTest always fails

From gardener-attic/gardener-extensions#293

What happened:
The test defined in SeedNetworkPoliciesTest.yaml always fails.
Most of the time, the following 3 specs fail:

2019-07-29 11:32:33	Test Suite Failed
2019-07-29 11:32:33	Ginkgo ran 1 suite in 3m20.280138435s
2019-07-29 11:32:33	
2019-07-29 11:32:32	FAIL! -- 375 Passed | 3 Failed | 0 Pending | 126 Skipped
2019-07-29 11:32:32	Ran 378 of 504 Specs in 85.218 seconds
2019-07-29 11:32:32	
2019-07-29 11:32:32	> /go/src/github.com/gardener/gardener/test/integration/seeds/networkpolicies/aws/networkpolicy_aws_test.go:1194
2019-07-29 11:32:32	[Fail] Network Policy Testing egress for mirrored pods elasticsearch-logging [AfterEach] should block connection to "Garden Prometheus" prometheus-web.garden:80
2019-07-29 11:32:32	
2019-07-29 11:32:32	/go/src/github.com/gardener/gardener/test/integration/seeds/networkpolicies/aws/networkpolicy_aws_test.go:1062
2019-07-29 11:32:32	[Fail] Network Policy Testing components are selected by correct policies [AfterEach] gardener-resource-manager
2019-07-29 11:32:32	
2019-07-29 11:32:32	/go/src/github.com/gardener/gardener/test/integration/seeds/networkpolicies/aws/networkpolicy_aws_test.go:1194
2019-07-29 11:32:32	[Fail] Network Policy Testing egress for mirrored pods gardener-resource-manager [AfterEach] should block connection to "External host" 8.8.8.8:53

@mvladev can you please check?

Environment:
TestMachinery on all landscapes (dev, ..., live)

Add support for per-region image IDs

In OpenStack, similar to gardener-attic/gardener-extensions#482, we need to handle image IDs on a per-region level. The reason: in OpenStack environments where you don't have permission to publish public images but still need to roll out custom images to all projects, Glance offers community image visibility for exactly this purpose [0].

The MCM now has the feature to use image IDs instead of the image name (gardener/machine-controller-manager#374).

[0] https://wiki.openstack.org/wiki/Glance-v2-community-image-visibility-design
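
A sketch of how a per-region ID mapping could look in the controller's machine image configuration (the regions/id fields below are assumptions for illustration, mirroring the per-region approach from the AWS issue):

apiVersion: openstack.provider.extensions.config.gardener.cloud/v1alpha1
kind: ControllerConfiguration
machineImages:
- name: coreos
  version: 2023.5.0
  regions:
  # hypothetical: resolve by Glance image ID per region instead of by image name
  - name: eu-de-1
    id: 12345678-aaaa-bbbb-cccc-1234567890ab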

Implement `Infrastructure` controller for OpenStack provider

Similar to how we have implemented the Infrastructure extension resource controller for the AWS provider, let's please now do it for OpenStack.

Based on the current implementation the InfrastructureConfig should look like this:

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
# router:
#   id: uuid
  floatingPoolName: fip-network-one
  zones:
  - name: eu-1a
    workers: 10.250.0.0/19

Based on the current implementation the InfrastructureStatus should look like this:

---
apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureStatus
networks:
  router:
    id: uuid
  cluster:
    id: uuid
  floatingPool:
    id: uuid
  subnets:
  - purpose: nodes
    id: uuid
securityGroups:
- purpose: nodes
  id: uuid
  name: sec-group-name

The current infrastructure creation/deletion implementation can be found here. Please try to change as little as possible (with every change the risk that we break something increases!) and just move the code over into the extension's infrastructure actuator.

Implement `ControlPlane` controller for OpenStack provider

Similar to how we have implemented the ControlPlane extension resource controller for the AWS provider, let's please now do it for OpenStack.

Based on the current implementation the ControlPlaneConfig should look like this:

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: ControlPlaneConfig
cloudControllerManager:
  featureGates:
    CustomResourceValidation: true
loadBalancerProvider: haproxy

No ControlPlaneStatus needs to be implemented right now (not needed yet).

ServerGroup per Workerpool to define Node Affinity/Anti-Affinity

How to categorize this issue?

/area open-source
/kind enhancement
/priority normal
/platform openstack

What would you like to be added:
We want to allow assigning an affinity or anti-affinity policy to each worker pool of an OpenStack Shoot. If a policy is assigned to the worker pool, we need to create an OpenStack server group for the worker pool which enforces the policy for the machines of the worker pool.

Why is this needed:
We need to ensure that not all machines of a worker pool get scheduled onto the same host (node anti-affinity). We want to let the user define which OpenStack server group policy should be assigned to the worker pool. Gardener operators will define in the CloudProfile which policies are valid.
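
A minimal sketch of how a worker pool could reference such a policy (the kind and field names are assumptions for illustration, not a final API):

workers:
- name: pool-1
  providerConfig:
    apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
    kind: WorkerConfig             # assumed kind for pool-level provider config
    serverGroup:
      policy: soft-anti-affinity   # one of the policies allowed in the CloudProfile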

Encrypted Disks

In order to comply with certifications required for Gardener, encrypted disks shall be used (supported since OpenStack Kilo).

/cc @ThormaehlenFred

Add support for UserDomainName, TenantID and DomainID in OpenStack infra secret

What would you like to be added:
Currently when using OpenStack we only allow the usage of tenantName and domainName in the infrastructure secret [0]. We should also support ids instead of names here.

The integration has to happen in the following components

Why is this needed:
There might be OpenStack setups where users want to use ids instead of names all along. These scenarios should also be supported.

[0] https://github.com/gardener/gardener/blob/master/pkg/operation/cloudbotanist/openstackbotanist/types.go#L29
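
For illustration, an infrastructure secret using ids could look roughly like this (key names and values are placeholders for this sketch, not a confirmed API):

apiVersion: v1
kind: Secret
metadata:
  name: my-openstack-credentials   # hypothetical name
type: Opaque
stringData:
  authURL: <auth-url>
  domainID: <domain-id>            # instead of domainName
  tenantID: <tenant-id>            # instead of tenantName
  userDomainName: <user-domain-name>
  username: <username>
  password: <password>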

Minimal Permissions for user credentials

From gardener-attic/gardener-extensions#133

We have narrowed down the access permissions for AWS shoot clusters (potential remainder tracked in #178), but not yet for Azure, GCP and OpenStack, which this ticket is now about. We expect less success on these infrastructures as AWS's permission/policy options are very detailed. This may break the "shared account" idea on these infrastructures (Azure and GCP - OpenStack can be mitigated by programmatically creating tenants on the fly).

Out-of-the-box Manila CSI driver support for shoots

How to categorize this issue?

/area storage
/kind enhancement
/priority normal
/platform openstack

What would you like to be added:
We could support out-of-the-box deployment and configuration of the Manila CSI driver (only for shoots >= 1.19 where CSI is enabled).

Why is this needed:
Shared file system support for OpenStack

/cc @dkistner @kayrus

Implement `Worker` controller for OpenStack provider

Similar to how we have implemented the Worker extension resource controller for the AWS provider let's please now do it for OpenStack.

There is no special provider config required to be implemented; however, we should have a component configuration for the controller that should look as follows:

---
apiVersion: openstack.provider.extensions.config.gardener.cloud/v1alpha1
kind: ControllerConfiguration
machineImages:
- name: coreos
  version: 2023.5.0
  cloudProfiles:
  - name: eu-de-1
    image: coreos-2023.5.0

Configure mcm-settings from worker to machine-deployment.

How to categorize this issue?

/area usability
/kind enhancement
/priority normal
/platform openstack

What would you like to be added: Machine-controller-manager now allows configuring certain controller-settings, per machine-deployment. Currently, the following fields can be set:

Also, with the PR gardener/gardener#2563, these settings can be configured via the shoot resource as well.

We need to enhance the worker extensions to read these settings from the Worker object and set them accordingly on the MachineDeployment.

Similar PR on AWS worker-extension: gardener/gardener-extension-provider-aws#148
Dependencies:

  • Vendor the MCM 0.33.0
  • gardener/gardener#2563 should be merged.
  • g/g with the #2563 change should be vendored.

Why is this needed:
To allow fine-grained configuration of MCM via the Worker object.
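
A sketch of such per-pool settings on the shoot resource after gardener/gardener#2563 (the subset of fields and their values are illustrative):

spec:
  provider:
    workers:
    - name: pool-1
      machineControllerManager:
        machineCreationTimeout: 10m   # illustrative values
        machineDrainTimeout: 30m
        machineHealthTimeout: 10m
        maxEvictRetries: 10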

Adapt to terraform v0.12 language changes

How to categorize this issue?

/area open-source
/kind cleanup
/priority normal
/platform openstack

What would you like to be added:
provider-openstack needs an adaptation of the terraform configuration to v0.12. For provider-aws this is done with this PR - gardener/gardener-extension-provider-aws#111.

Why is this needed:
Currently the terraformer run is only emitting warnings, but in a future version of terraform the warnings will be switched to errors.

Update openstack terraform provider configuration

How to categorize this issue?

/area usability
/kind enhancement
/priority normal
/platform openstack

What would you like to be added:

Current provider configuration should be adjusted:

provider "openstack" {
auth_url = "{{ required "openstack.authURL is required" .Values.openstack.authURL }}"
domain_name = "{{ required "openstack.domainName is required" .Values.openstack.domainName }}"
tenant_name = "{{ required "openstack.tenantName is required" .Values.openstack.tenantName }}"
region = "{{ required "openstack.region is required" .Values.openstack.region }}"
user_name = var.USER_NAME
password = var.PASSWORD
insecure = true
}

Insecure flag must be removed:

insecure    = true

And a max_retries should be set in order to support the 429 response code with a Retry-After header.

Why is this needed:

  • Production environments must not contain the insecure=true parameter.
  • max_retries, starting from Terraform OpenStack provider v1.36, will support the 429 response code and sleep accordingly. This is needed in order to decrease the high amount of API calls when a technical user's password was changed and the user gets blocked by the identity service.
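
The adjusted provider block could then look like this (the concrete max_retries value is an assumption for this sketch):

provider "openstack" {
  auth_url    = "{{ required "openstack.authURL is required" .Values.openstack.authURL }}"
  domain_name = "{{ required "openstack.domainName is required" .Values.openstack.domainName }}"
  tenant_name = "{{ required "openstack.tenantName is required" .Values.openstack.tenantName }}"
  region      = "{{ required "openstack.region is required" .Values.openstack.region }}"
  user_name   = var.USER_NAME
  password    = var.PASSWORD
  # insecure = true removed; TLS verification stays enabled
  max_retries = 5   # illustrative value; honors 429 + Retry-After since provider v1.36
}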

Shoots in existing networks

How to categorize this issue?
/kind enhancement
/priority normal
/platform openstack

What would you like to be added:
Like for other cloud providers we want to enable OpenStack Shoot deployments into existing networks. As we do it on the other infrastructures, we will manage a subnet as the smallest unit in the network, and it needs to be disjoint from the other subnets in the network.

An example configuration could look like this:

apiVersion: openstack.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
  id: <id-of-my-network>
  workers: <cidr-of-subnet-in-my-network>

Segregate seed infra and end user shoot lb profiles

From gardener-attic/gardener-extensions#275

There is no way to create a seed cluster which works within network1 and a customer shoot cluster which works within network2 and doesn't see network1.

Faking the network name for the seed cluster doesn't help, because somehow the network ID is resolved:

I0817 10:30:19.793179       1 event.go:258] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"addons-nginx-ingress-controller", UID:"eca0462b-2da4-46f1-807b-ead2ca9d92be", APIVersion:"v1", ResourceVersion:"267", FieldPath:""}): type: 'Warning' reason: 'SyncLoadBalancerFailed' Error syncing load balancer: failed to ensure load balancer: error creating LB floatingip {Description:Floating IP for Kubernetes external service kube-system/addons-nginx-ingress-controller from cluster shoot--garden--region1-01-dev FloatingNetworkID:b2471289-8ca8-437e-aab1-d8012f741c66 FloatingIP: PortID:ed4c59a5-43fc-4280-ab40-7c92c2783c4e FixedIP: SubnetID:d2eda633-e930-464e-b285-ecb826e02861 TenantID: ProjectID:}: Bad request with: [POST https://network-3.region1/v2.0/floatingips], error message: {"NeutronError": {"message": "Invalid input for operation: Failed to create port on network e1fd2c63-c52f-41d5-84b5-d64fe8b9127d, because fixed_ips included invalid subnet d2eda633-e930-464e-b285-ecb826e02861.", "type": "InvalidInput", "detail": ""}}

/cc @afritzler

Don't tie the number of OpenStack AZs to the number of worker networks for shoots

What would you like to be added:

spec.cloud.openstack.networks.workers should be optional. If it is not specified, then the spec.cloud.openstack.networks.nodes CIDR should be used. The number of spec.cloud.openstack.zones should not be tied to the number of spec.cloud.openstack.networks.workers.

Why is this needed:

All AZs within the region in CCloud EE env share the same network CIDR. Current error: must specify as many workers networks as zones
The related code https://github.com/gardener/gardener/blob/a21316678614cf6b46b3792c90a5abd1e80755fd/pkg/apis/garden/validation/validation.go#L1202..L1204

Update credentials during Worker deletion

From gardener-attic/gardener-extensions#523

Steps to reproduce:

  1. Create a Shoot with valid cloud provider credentials my-secret.
  2. Ensure that the Shoot is successfully created.
  3. Invalidate the my-secret credentials.
  4. Delete the Shoot.
  5. Update my-secret credentials with valid ones.
  6. Ensure that the Shoot deletion fails waiting for the Worker to be deleted.

Currently we do not sync the cloudprovider credentials in the <Provider>MachineClass during Worker deletion. Hence machine-controller-manager fails to delete the machines because it still uses the invalid credentials.

Add a "--user-agent" CLI parameter to occm/csi components

How to categorize this issue?

/area control-plane
/kind enhancement
/priority normal
/platform openstack

What would you like to be added:

The --user-agent parameter adds an additional prefix to the User-Agent request header.

Why is this needed:

If we set --user-agent %openstack_domain_name% --user-agent %openstack_project_name% --user-agent %gardener_cluster_name%, the resulting User-Agent header will look like:

User-Agent: %domain% %project% %cluster_name% occm/version gophercloud/2.0.0

This information will help us to quickly identify the originating domain, project, and cluster through HTTP access logs when we face an issue.
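
A sketch of how the flags could be wired into the occm container (values are placeholders; this assumes the deployed occm version supports the flag):

containers:
- name: openstack-cloud-controller-manager
  args:
  - --cloud-provider=openstack
  - --user-agent=<openstack-domain-name>
  - --user-agent=<openstack-project-name>
  - --user-agent=<gardener-cluster-name>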

Gardener shoot loadbalancer cleanup race condition (OpenStack)

What happened:

When you remove the seed cluster, it gets stuck because load balancers cannot be removed.

What you expected to happen:

Seed cluster removal should not get stuck.

How to reproduce it (as minimally and precisely as possible):

Create a seed cluster in an OpenStack environment with LBaaS support, then remove the seed cluster. The issue is not permanent; from my perspective it occurred every second time.

Anything else we need to know?:

some info about the environment: https://gist.github.com/kayrus/540feef00222e32a214c9f34f88319cb

Looks like the kube-apiserver and kube-addon-manager deployments are removed without waiting for the successful removal of the public ingress service.

Environment:

  • Gardener version: 0.24.0
  • Kubernetes version (use kubectl version): soil cluster 1.12.9
  • Cloud provider or hardware configuration: OpenStack

make generate fails with missing executables

How to categorize this issue?
/area dev-productivity
/kind bug
/priority 5
/platform openstack

What happened:

Cloning the repository and running make generate returns:

❯ make generate
> Generate
Successfully generated controller registration at ../../example/controller-registration.yaml
pkg/apis/config/v1alpha1/doc.go:20: running "gen-crd-api-reference-docs": exec: "gen-crd-api-reference-docs": executable file not found in $PATH
../../../hack/update-codegen.sh: line 21: GOPATH: unbound variable
pkg/apis/openstack/doc.go:18: running "../../../hack/update-codegen.sh": exit status 1
pkg/apis/openstack/v1alpha1/doc.go:20: running "gen-crd-api-reference-docs": exec: "gen-crd-api-reference-docs": executable file not found in $PATH
pkg/imagevector/imagevector.go:15: running "packr2": exec: "packr2": executable file not found in $PATH
pkg/openstack/client/types.go:15: running "mockgen": exec: "mockgen": executable file not found in $PATH
netpol-gen/netpol-gen.go:20:2: cannot find package "github.com/gardener/gardener-extension-provider-openstack/test/e2e/netpol-gen/app" in any of:
	/usr/local/go/src/github.com/gardener/gardener-extension-provider-openstack/test/e2e/netpol-gen/app (from $GOROOT)
	/home/mmeyer/go/src/github.com/gardener/gardener-extension-provider-openstack/test/e2e/netpol-gen/app (from $GOPATH)
netpol-gen/netpol-gen.go:22:2: cannot find package "github.com/gardener/gardener/extensions/test/e2e/framework/networkpolicies/generators" in any of:
	/usr/local/go/src/github.com/gardener/gardener/extensions/test/e2e/framework/networkpolicies/generators (from $GOROOT)
	/home/mmeyer/go/src/github.com/gardener/gardener/extensions/test/e2e/framework/networkpolicies/generators (from $GOPATH)
netpol-gen/netpol-gen.go:23:2: cannot find package "k8s.io/gengo/args" in any of:
	/usr/local/go/src/k8s.io/gengo/args (from $GOROOT)
	/home/mmeyer/go/src/k8s.io/gengo/args (from $GOPATH)
netpol-gen/netpol-gen.go:24:2: cannot find package "k8s.io/klog" in any of:
	/usr/local/go/src/k8s.io/klog (from $GOROOT)
	/home/mmeyer/go/src/k8s.io/klog (from $GOPATH)
test/e2e/doc.go:15: running "go": exit status 1
make: *** [Makefile:127: generate] Error 1

This specifically happens when the generation script processes ./test/... and ./pkg/....

What you expected to happen:

make generate to run successfully

How to reproduce it (as minimally and precisely as possible):

  1. Clone the repository
  2. Run make generate

Anything else we need to know?:

I am most likely missing some setup step(s), but unfortunately I didn't see any documentation on that. Researching the error as a Go beginner didn't get me far.
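
For reference, a guess at the missing setup based on the errors above, assuming Go 1.16+ and that @latest matches the versions the repository expects (exact versions may differ):

export GOPATH="$(go env GOPATH)"     # hack/update-codegen.sh fails on an unset GOPATH
export PATH="${GOPATH}/bin:${PATH}"
go install github.com/ahmetb/gen-crd-api-reference-docs@latest
go install github.com/gobuffalo/packr/v2/packr2@latest
go install github.com/golang/mock/mockgen@latest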

Remove zone from ControlPlaneConfig

What would you like to be added:
The zone field should be removed from the ControlPlaneConfig. It was taken over during extensibility from the in-tree behaviour, but it is not needed. In fact, it only influences the zone parameter in the default StorageClass that gets installed to the shoot cluster; however, this does not work well with multi-zone shoots. The default behaviour of the Cinder volume plugin - if no zone parameter is specified - is to randomize the zones.

Why is this needed:
Better default configuration for OpenStack shoots.
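
For context, a sketch of the default StorageClass with the in-tree Cinder provisioner (availability is the in-tree plugin's zone parameter; the zone name is a placeholder):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: default
provisioner: kubernetes.io/cinder
parameters:
  availability: eu-de-1a   # what the zone field pins today; omitting it randomizes zones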

Add an ability to specify a subnet pattern for a router static IP

How to categorize this issue?

/area usability
/kind enhancement
/priority normal
/platform openstack

What would you like to be added:

Here is an example of the pattern which will allow creating a router on a specific FIP subnet:

data "openstack_networking_network_v2" "ext_network" {
  name = "FloatingIP-external"
}

data "openstack_networking_subnet_ids_v2" "ext_subnets" {
  name_regex = "FloatingIP-internet-"
  network_id = data.openstack_networking_network_v2.ext_network.id
}

resource "openstack_networking_router_v2" "router_1" {
  name                = "my_router"
  external_network_id = data.openstack_networking_network_v2.ext_network.id
  external_subnet_ids = data.openstack_networking_subnet_ids_v2.ext_subnets.ids
}

This structure will be available once the Terraform OpenStack provider v1.36 is released.

Why is this needed:

To control the router creation.

Stop using github.com/pkg/errors

How to categorize this issue?

/kind enhancement
/priority 3
/platform openstack

What would you like to be added:
Similar to gardener/gardener#4280 we should be using Go native error wrapping (available since Go 1.13).

$ grep -r '"github.com/pkg/errors"' | grep -v vendor/ | cut -f 1 -d ':' | cut -d '/' -f 1-3 | sort | uniq -c | sort
      1 pkg/apis/openstack
      1 pkg/controller/controlplane
      1 pkg/controller/infrastructure
      1 pkg/webhook/controlplane
      1 pkg/webhook/controlplaneexposure
      1 test/tm/generator.go
      4 pkg/controller/worker

Why is this needed:
Getting rid of vendors in favor of using stdlib is always nice. Others seem to do this as well - kubernetes/kubernetes#103043 and containerd/console#54.
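
A minimal sketch of the stdlib replacement pattern (the function and error names are made up for the example):

package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("conflict")

// createRouter wraps a (stand-in) client error with context using %w,
// which keeps the chain inspectable via errors.Is / errors.As - the
// stdlib equivalent of errors.Wrapf from github.com/pkg/errors.
func createRouter(name string) error {
	err := errConflict // stand-in for a real OpenStack client call
	if err != nil {
		return fmt.Errorf("could not create router %q: %w", name, err)
	}
	return nil
}

func main() {
	err := createRouter("my-router")
	fmt.Println(errors.Is(err, errConflict)) // true: %w preserves the chain
}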

Validate cloudprovider credentials

(recreating issue from the g/g repo: gardener/gardener#2293)

What would you like to be added:
Add validation for cloudprovider secret

Why is this needed:
Currently, when uploading secrets via the UI, all secret fields are required and validated. However, when creating those credentials via the cloudprovider secret, there is no validation. This results in errors such as the following (specific to Azure, but a similar error would be generated for OpenStack):

Flow "Shoot cluster reconciliation" encountered task errors: [task "Waiting until shoot infrastructure has been reconciled" failed: failed to create infrastructure: retry failed with context deadline exceeded, last error: extension encountered error during reconciliation: Error reconciling infrastructure: secret shoot--xxxx--xxxx/cloudprovider doesn't have a subscription ID] Operation will be retried.


Remove Credentials on OpenStack Nodes

What would you like to be added:
Currently the cloud-controller-manager and the kubelet need a credential file on the host. This can be prevented by using the out-of-tree OpenStack cloud provider together with the CSI plugin for Cinder.

Why is this needed:
Prevent users who can mount the host filesystem from reading infrastructure credentials on the host.

Increase CSI sidecar timeouts

How to categorize this issue?

/area storage
/kind bug
/priority normal
/platform openstack

What happened:

During the resize action the following error occurred: resize volume pv-test-11894292-abbf-4200-953d-24247c081184 failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded

What you expected to happen:

The resize action should not fail with a context deadline exceeded error.

How to reproduce it (as minimally and precisely as possible):

n/a

Anything else we need to know?:

see kubernetes/cloud-provider-openstack#1265
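
A sketch of raising the timeout on the csi-resizer sidecar (--timeout and --csi-address are upstream external-resizer flags; the 3m value is an assumption for illustration):

- name: csi-resizer
  args:
  - --csi-address=$(ADDRESS)
  - --timeout=3m   # raised from the sidecar default; long Cinder resizes exceed it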

Environment:

  • Gardener version (if relevant):
  • Extension version:
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Others:

cc @rfranzke

Make handling of floating pool (subnet) names more robust

How to categorize this issue?

/area robustness
/kind enhancement
/priority normal
/platform openstack
/exp intermediate
/topology garden seed

What would you like to be added:
Today it's possible to provide this data in the InfrastructureConfig:

      floatingPoolName: FloatingIP-foo-bar
      floatingPoolSubnetName: 744165f5-06dc-4fac-a9d9-3590d1f76f09

There is no validation to prevent submitting a GUID while a name is expected. Consequently, the infrastructure reconciliation (or deletion) will fail.
To improve this I see two options at the moment:

  1. Improve validation code to forbid specifying a GUID when a name is expected
  2. Improve infrastructure reconciliation to be capable of also working with GUIDs, i.e., dynamically determine (based on the input) whether to work with the name or with the GUID.

Personally, I'd prefer the second option.

Why is this needed:
Robustness, better user experience, less ops effort.

Provider-specific webhooks in Garden cluster

From gardener-attic/gardener-extensions#407

With the new core.gardener.cloud/v1alpha1.Shoot API Gardener does no longer understand the provider-specifics, e.g., the infrastructure config, control plane config, worker config, etc.
This allows end-users to harm themselves and create invalid Shoot resources in the Garden cluster. Errors will only become apparent during reconciliation, not during creation of the resource.

Also, it's not possible to default any of the provider specific sections. Hence, we could also think about mutating webhooks in the future.

As we are using the controller-runtime maintained by the Kubernetes SIGs, it should be relatively easy to implement these webhooks, as the library already abstracts most of the details.

We should have a separate, dedicated binary incorporating the webhooks for each provider, and a separate Helm chart for the deployment in the Garden cluster.

Similarly, the networking and OS extensions could have such webhooks as well to check on the providerConfig for the networking and operating system config.

Part of gardener/gardener#308

Add infrastructure permission documentation

How to categorize this issue?

/area documentation
/kind enhancement
/priority normal
/platform openstack

What would you like to be added:

We need more detailed docs regarding the required infrastructure permissions for both the operator and the end-user, similar to this documentation.

Why is this needed:
Better ops/user experience.
