
Kubernetes-in-Kubernetes

Deploy Kubernetes in Kubernetes using Helm

[demo recording]

Requirements

  • Kubernetes v1.21+
  • Helm v3
  • cert-manager v1.0.0+

Quick Start

Preparation
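
Before installing the chart, make sure cert-manager is running (a hard requirement, see above) and that a local-path StorageClass exists, since the quick start below references it. A minimal preparation sketch (the chart options and the provisioner manifest URL are common defaults; adjust for your cluster):

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml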

Installation

helm repo add kvaps https://kvaps.github.io/charts
helm install foo kvaps/kubernetes --version 0.13.5 \
  --namespace foo \
  --create-namespace \
  --set persistence.storageClassName=local-path

Cleanup

kubectl delete namespace foo

Usage

Kubernetes-in-Kubernetes deploys only a control plane; in most cases it is of little use without workers.
If you're looking for a real use case, check out the following projects that implement worker-node management:

  • Kubefarm - automated Kubernetes deployment and a farm of PXE-bootable servers
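
Even without workers you can reach the nested control plane through the admin container the chart deploys (the same entry point used in the issues below). A quick check, assuming the release name foo and a preconfigured kubectl inside the container:

kubectl exec -ti -n foo deploy/foo-kubernetes-admin -- kubectl get namespaces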

Contributors

egeneralov, gecube, ikeeip, kvaps, mrakopes, xom4ek


Issues

Templates for kubeadm configuration

The kubeadm configuration consists of 3 manifests:

# helm install foo kvaps/kubernetes --set persistence.enabled=false --set admin.job.enabled=false
# kubectl exec -ti deploy/foo-kubernetes-admin -- sh
# kubeadm init phase upload-config kubeadm --config /config/kubeadmcfg.yaml -v 10 2>&1 | sed -n 's/.*Request Body: //p'
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: foo-kubernetes-apiserver:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  local:\n    dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.19.3\nnetworking:\n  dnsDomain: cluster.local\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  foo-kubernetes-admin-679958fd46-jmh2d:\n    advertiseAddress: 10.112.2.112\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterStatus\n"}}
{"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
{"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}

These need to be converted to templates, possibly parametrized, and placed into the manifests directory.
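
As a starting point, the captured ClusterConfiguration might be templated roughly like this (a sketch only: the file name and the .Values keys are assumptions, not the chart's current layout):

cat > templates/kubeadm-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{ .Release.Name }}-kubernetes-apiserver:6443
    kubernetesVersion: {{ .Values.kubernetesVersion }}
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
EOF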

Setup bootstrap-tokens configuration

The bootstrap-token configuration consists of 9 manifests:

# helm install foo kvaps/kubernetes --set persistence.enabled=false --set admin.job.enabled=false
# kubectl exec -ti deploy/foo-kubernetes-admin -- sh
# kubeadm init phase bootstrap-token --config /config/kubeadmcfg.yaml --skip-token-print -v 10 2>&1 | sed -n 's/.*Request Body: //p'
{"kind":"Secret","apiVersion":"v1","metadata":{"name":"bootstrap-token-lusbhc","namespace":"kube-system","creationTimestamp":null},"data":{"auth-extra-groups":"c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=","expiration":"MjAyMC0xMS0xMFQyMToxNjoxNVo=","token-id":"bHVzYmhj","token-secret":"dmsyOGg2b2h4aW9kOGl2eg==","usage-bootstrap-authentication":"dHJ1ZQ==","usage-bootstrap-signing":"dHJ1ZQ=="},"type":"bootstrap.kubernetes.io/token"}
{"kind":"ClusterRole","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:get-nodes","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["nodes"]}]}
{"kind":"ClusterRoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:get-nodes","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"kubeadm:get-nodes"}}
{"kind":"ClusterRoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-bootstrap","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"system:node-bootstrapper"}}
{"kind":"ClusterRoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:node-autoapprove-bootstrap","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"system:certificates.k8s.io:certificatesigningrequests:nodeclient"}}
{"kind":"ClusterRoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:node-autoapprove-certificate-rotation","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"system:certificates.k8s.io:certificatesigningrequests:selfnodeclient"}}
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"cluster-info","namespace":"kube-public","creationTimestamp":null},"data":{"kubeconfig":"apiVersion: v1\nclusters:\n- cluster:\n    certificate-authority: /pki/admin-client/ca.crt\n    server: https://foo-kubernetes-apiserver:6443\n  name: \"\"\ncontexts: null\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers: null\n"}}
{"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:bootstrap-signer-clusterinfo","namespace":"kube-public","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["cluster-info"]}]}
{"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:bootstrap-signer-clusterinfo","namespace":"kube-public","creationTimestamp":null},"subjects":[{"kind":"User","name":"system:anonymous"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:bootstrap-signer-clusterinfo"}}

Let's omit the Secret and consider converting the rest to templates, parametrizing them, and placing them into the manifests directory.
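
The Secret itself carries a generated token and an expiration date, so templating it statically would make little sense; one alternative (an assumption, not current chart behavior) is to keep minting tokens at runtime from the admin container and template only the RBAC objects:

# kubectl exec -ti deploy/foo-kubernetes-admin -- kubeadm token create --config /config/kubeadmcfg.yaml --ttl 24h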

Compute Nodes

Once the cluster is deployed, how do you add resources (compute nodes)?
kube-dns is stuck pending with "no nodes available to schedule pods".
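
For reference, a worker can in principle be joined manually once the apiserver service is reachable from the node; a rough sketch (the token and hash are placeholders, and the Kubefarm project mentioned above automates all of this):

# inside the admin container, mint a join command
kubeadm token create --print-join-command
# then, on the worker node, run the printed command, shaped like:
kubeadm join foo-kubernetes-apiserver:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>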

Templates for kubelet configuration

The kubelet configuration consists of 3 manifests:

# helm install foo kvaps/kubernetes --set persistence.enabled=false --set admin.job.enabled=false
# kubectl exec -ti deploy/foo-kubernetes-admin -- sh
# kubeadm init phase upload-config kubelet --config /config/kubeadmcfg.yaml -v 10 2>&1 | sed -n 's/.*Request Body: //p'
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.19","namespace":"kube-system","creationTimestamp":null,"annotations":{"kubeadm.kubernetes.io/component-config.hash":"sha256:48eb5e62959095c1171e4f77ed97735d181a5f570dab19c21fc7166ee4b0fc1a"}},"data":{"kubelet":"apiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n  anonymous:\n    enabled: false\n  webhook:\n    cacheTTL: 0s\n    enabled: true\n  x509:\n    clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n  mode: Webhook\n  webhook:\n    cacheAuthorizedTTL: 0s\n    cacheUnauthorizedTTL: 0s\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\ncpuManagerReconcilePeriod: 0s\nevictionPressureTransitionPeriod: 0s\nfileCheckFrequency: 0s\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 0s\nimageMinimumGCAge: 0s\nkind: KubeletConfiguration\nlogging: {}\nnodeStatusReportFrequency: 0s\nnodeStatusUpdateFrequency: 0s\nrotateCertificates: true\nruntimeRequestTimeout: 0s\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 0s\nsyncFrequency: 0s\nvolumeStatsAggPeriod: 0s\n"}}
{"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.19","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.19"]}]}
{"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.19","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:nodes"},{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.19"}}

These need to be converted to templates, parametrized like konnectivity-agent, and placed into the manifests directory:

https://github.com/kvaps/kubernetes-in-kubernetes/blob/c181314d84f3545409cb5569599c217effbc0e6b/deploy/helm/kubernetes/values.yaml#L283-L308
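
Until then, the uploaded kubelet configuration can be inspected from the admin container to compare against any future template (release name foo assumed, with kubectl configured inside the container as in the Usage example above):

# kubectl exec -ti deploy/foo-kubernetes-admin -- kubectl -n kube-system get configmap kubelet-config-1.19 -o yaml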

konnectivity-agent as DaemonSet

Hi there,

Thank you for this great helm chart!

I noticed that the Kubernetes documentation recommends running the Konnectivity agent as a DaemonSet rather than a Deployment:
https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/
https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/admin/konnectivity/konnectivity-agent.yaml

This makes sense to me, as the agent ensures connectivity from the control plane to all nodes' kubelets. This way, nodes in isolated network zones can still be reached by the control plane, which is the use case I intend to use it for.

@kvaps Does it make sense to adopt the DaemonSet style?
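
For comparison, the upstream example can be pulled and set against the chart's rendered output (standard tooling only; the rendered file contains the whole release, so compare the agent sections by hand):

# curl -sLO https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/admin/konnectivity/konnectivity-agent.yaml
# helm template foo kvaps/kubernetes --set persistence.enabled=false > rendered.yaml
# diff konnectivity-agent.yaml rendered.yaml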

Thank you very much for the hard work put in so far.

Templates for coredns

CoreDNS consists of 6 manifests:

# helm install foo kvaps/kubernetes --set persistence.enabled=false --set admin.job.enabled=false
# kubectl exec -ti deploy/foo-kubernetes-admin -- sh
# kubeadm init phase addon coredns -v 10 2>&1 | sed -n 's/.*Request Body: //p'
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"coredns","namespace":"kube-system","creationTimestamp":null},"data":{"Corefile":".:53 {\n    errors\n    health {\n       lameduck 5s\n    }\n    ready\n    kubernetes cluster.local in-addr.arpa ip6.arpa {\n       pods insecure\n       fallthrough in-addr.arpa ip6.arpa\n       ttl 30\n    }\n    prometheus :9153\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n    loop\n    reload\n    loadbalance\n}\n"}}
{"kind":"ClusterRole","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"system:coredns","creationTimestamp":null},"rules":[{"verbs":["list","watch"],"apiGroups":[""],"resources":["endpoints","services","pods","namespaces"]},{"verbs":["get"],"apiGroups":[""],"resources":["nodes"]}]}
{"kind":"ClusterRoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"system:coredns","creationTimestamp":null},"subjects":[{"kind":"ServiceAccount","name":"coredns","namespace":"kube-system"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"system:coredns"}}
{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"coredns","namespace":"kube-system","creationTimestamp":null}}
{"kind":"Deployment","apiVersion":"apps/v1","metadata":{"name":"coredns","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"kube-dns"}},"spec":{"replicas":2,"selector":{"matchLabels":{"k8s-app":"kube-dns"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"k8s-app":"kube-dns"}},"spec":{"volumes":[{"name":"config-volume","configMap":{"name":"coredns","items":[{"key":"Corefile","path":"Corefile"}]}}],"containers":[{"name":"coredns","image":"k8s.gcr.io/coredns:1.7.0","args":["-conf","/etc/coredns/Corefile"],"ports":[{"name":"dns","containerPort":53,"protocol":"UDP"},{"name":"dns-tcp","containerPort":53,"protocol":"TCP"},{"name":"metrics","containerPort":9153,"protocol":"TCP"}],"resources":{"limits":{"memory":"170Mi"},"requests":{"cpu":"100m","memory":"70Mi"}},"volumeMounts":[{"name":"config-volume","readOnly":true,"mountPath":"/etc/coredns"}],"livenessProbe":{"httpGet":{"path":"/health","port":8080,"scheme":"HTTP"},"initialDelaySeconds":60,"timeoutSeconds":5,"successThreshold":1,"failureThreshold":5},"readinessProbe":{"httpGet":{"path":"/ready","port":8181,"scheme":"HTTP"}},"imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"add":["NET_BIND_SERVICE"],"drop":["all"]},"readOnlyRootFilesystem":true,"allowPrivilegeEscalation":false}}],"dnsPolicy":"Default","nodeSelector":{"kubernetes.io/os":"linux"},"serviceAccountName":"coredns","tolerations":[{"key":"CriticalAddonsOnly","operator":"Exists"},{"key":"node-role.kubernetes.io/master","effect":"NoSchedule"}],"priorityClassName":"system-cluster-critical"}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":1}}},"status":{}}
{"kind":"Service","apiVersion":"v1","metadata":{"name":"kube-dns","namespace":"kube-system","resourceVersion":"0","creationTimestamp":null,"labels":{"k8s-app":"kube-dns","kubernetes.io/cluster-service":"true","kubernetes.io/name":"KubeDNS"},"annotations":{"prometheus.io/port":"9153","prometheus.io/scrape":"true"}},"spec":{"ports":[{"name":"dns","protocol":"UDP","port":53,"targetPort":53},{"name":"dns-tcp","protocol":"TCP","port":53,"targetPort":53},{"name":"metrics","protocol":"TCP","port":9153,"targetPort":9153}],"selector":{"k8s-app":"kube-dns"},"clusterIP":"10.96.0.10"},"status":{"loadBalancer":{}}}

These need to be converted to templates, parametrized like konnectivity-agent, and placed into the manifests directory:

https://github.com/kvaps/kubernetes-in-kubernetes/blob/c181314d84f3545409cb5569599c217effbc0e6b/deploy/helm/kubernetes/values.yaml#L283-L308

Initial work has started on the coredns-manifests branch.
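
A first cut at parametrizing the Corefile might look like this (a sketch; the file path and the .Values key are assumptions):

cat > templates/coredns-configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        ready
        kubernetes {{ .Values.networking.dnsDomain }} in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
EOF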

Templates for kube-proxy

Kube-proxy consists of 6 manifests:

# helm install foo kvaps/kubernetes --set persistence.enabled=false --set admin.job.enabled=false
# kubectl exec -ti deploy/foo-kubernetes-admin -- sh
# kubeadm init phase addon kube-proxy -v 10 2>&1 | sed -n 's/.*Request Body: //p'
{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"kube-proxy","namespace":"kube-system","creationTimestamp":null}}
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kube-proxy","namespace":"kube-system","creationTimestamp":null,"labels":{"app":"kube-proxy"},"annotations":{"kubeadm.kubernetes.io/component-config.hash":"sha256:e77da6dcbaed695c37e260873762a84e5c347369c68d3d2249a2cf2c439b550a"}},"data":{"config.conf":"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nbindAddress: 0.0.0.0\nbindAddressHardFail: false\nclientConnection:\n  acceptContentTypes: \"\"\n  burst: 0\n  contentType: \"\"\n  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf\n  qps: 0\nclusterCIDR: \"\"\nconfigSyncPeriod: 0s\nconntrack:\n  maxPerCore: null\n  min: null\n  tcpCloseWaitTimeout: null\n  tcpEstablishedTimeout: null\ndetectLocalMode: \"\"\nenableProfiling: false\nhealthzBindAddress: \"\"\nhostnameOverride: \"\"\niptables:\n  masqueradeAll: false\n  masqueradeBit: null\n  minSyncPeriod: 0s\n  syncPeriod: 0s\nipvs:\n  excludeCIDRs: null\n  minSyncPeriod: 0s\n  scheduler: \"\"\n  strictARP: false\n  syncPeriod: 0s\n  tcpFinTimeout: 0s\n  tcpTimeout: 0s\n  udpTimeout: 0s\nkind: KubeProxyConfiguration\nmetricsBindAddress: \"\"\nmode: \"\"\nnodePortAddresses: null\noomScoreAdj: null\nportRange: \"\"\nshowHiddenMetricsForVersion: \"\"\nudpIdleTimeout: 0s\nwinkernel:\n  enableDSR: false\n  networkName: \"\"\n  sourceVip: \"\"","kubeconfig.conf":"apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\n    server: https://10.112.2.112:6443\n  name: default\ncontexts:\n- context:\n    cluster: default\n    namespace: default\n    user: default\n  name: default\ncurrent-context: default\nusers:\n- name: default\n  user:\n    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token"}}
{"kind":"DaemonSet","apiVersion":"apps/v1","metadata":{"name":"kube-proxy","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"kube-proxy"}},"spec":{"selector":{"matchLabels":{"k8s-app":"kube-proxy"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"k8s-app":"kube-proxy"}},"spec":{"volumes":[{"name":"kube-proxy","configMap":{"name":"kube-proxy"}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock","type":"FileOrCreate"}},{"name":"lib-modules","hostPath":{"path":"/lib/modules"}}],"containers":[{"name":"kube-proxy","image":"k8s.gcr.io/kube-proxy:v1.19.3","command":["/usr/local/bin/kube-proxy","--config=/var/lib/kube-proxy/config.conf","--hostname-override=$(NODE_NAME)"],"env":[{"name":"NODE_NAME","valueFrom":{"fieldRef":{"fieldPath":"spec.nodeName"}}}],"resources":{},"volumeMounts":[{"name":"kube-proxy","mountPath":"/var/lib/kube-proxy"},{"name":"xtables-lock","mountPath":"/run/xtables.lock"},{"name":"lib-modules","readOnly":true,"mountPath":"/lib/modules"}],"imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"nodeSelector":{"kubernetes.io/os":"linux"},"serviceAccountName":"kube-proxy","hostNetwork":true,"tolerations":[{"key":"CriticalAddonsOnly","operator":"Exists"},{"operator":"Exists"}],"priorityClassName":"system-node-critical"}},"updateStrategy":{"type":"RollingUpdate"}},"status":{"currentNumberScheduled":0,"numberMisscheduled":0,"desiredNumberScheduled":0,"numberReady":0}}
{"kind":"ClusterRoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:node-proxier","creationTimestamp":null},"subjects":[{"kind":"ServiceAccount","name":"kube-proxy","namespace":"kube-system"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"system:node-proxier"}}
{"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kube-proxy","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kube-proxy"]}]}
{"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kube-proxy","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kube-proxy"}}

These need to be converted to templates, parametrized like konnectivity-agent, and placed into the manifests directory:

https://github.com/kvaps/kubernetes-in-kubernetes/blob/c181314d84f3545409cb5569599c217effbc0e6b/deploy/helm/kubernetes/values.yaml#L283-L308

Initial work has started on the kube-proxy-manifests branch.
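
To bootstrap the templating, the captured JSON bodies can be converted straight to YAML using kubectl's client-side dry run (same capture pipeline as above):

# kubeadm init phase addon kube-proxy -v 10 2>&1 | sed -n 's/.*Request Body: //p' | \
#     while read -r body; do printf '%s' "$body" | kubectl create -f - --dry-run=client -o yaml; echo '---'; done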

Error: INSTALLATION FAILED

When I run the command:
helm install foo kvaps/kubernetes --version 0.13.3 --namespace foo --create-namespace --set persistence.storageClassName=local-path

it returns this:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/xrw/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/xrw/.kube/config
W0224 17:14:07.219758 293805 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
W0224 17:14:07.282543 293805 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
Error: INSTALLATION FAILED: Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": dial tcp 10.102.100.71:443: i/o timeout

What should I do?
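
The error means the Kubernetes API server could not reach the cert-manager webhook, so the problem lies in the cert-manager installation (or the cluster network) rather than in this chart. A generic first check:

kubectl -n cert-manager get pods
kubectl -n cert-manager get endpoints cert-manager-webhook

If the webhook pod is not Running or the endpoints list is empty, fix cert-manager (or your CNI) and retry the install.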
