
kube-lego

⚠️

kube-lego is no longer maintained. The officially endorsed successor is cert-manager.

If you are a current user of kube-lego, you can find a migration guide here.

⚠️

kube-lego automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt


Screencast

Kube Lego screencast

Features

  • Recognizes the need for a new certificate in these cases:
    • No certificate exists
    • The existing certificate does not contain all domain names
    • The existing certificate is expired or close to its expiry date (cf. option LEGO_MINIMUM_VALIDITY)
    • The existing certificate is unparseable, invalid or does not match the secret key
  • Creates a user account (incl. private key) for Let's Encrypt and stores it in Kubernetes secrets (the secret name is configurable via LEGO_SECRET_NAME)
  • Obtains the missing certificates from Let's Encrypt and authorizes the request with the HTTP-01 challenge
  • Makes sure that the relevant Kubernetes objects (Services, Ingress) contain the right configuration for the HTTP-01 challenge to succeed
  • Official Kubernetes Helm chart for simple deployment

Requirements

  • Kubernetes 1.2+
  • Compatible ingress controller (nginx or GCE; see the Ingress controllers section below)
  • Non-production use case 😆

Usage

run kube-lego

The default value of LEGO_URL is the Let's Encrypt staging environment. If you want to get "real" certificates, you have to configure their production environment (see Switching from staging to production below).
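
A quick way to try it out is to apply the nginx example manifests shipped in this repository (a sketch, assuming a checkout of this repo; adjust LEGO_EMAIL and LEGO_URL in the manifests first):

$ kubectl apply -f examples/nginx/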

how kube-lego works

As soon as the kube-lego daemon is running, it will create a user account with Let's Encrypt, create a service resource, and look for ingress resources that have this annotation:

metadata:
  annotations:
    kubernetes.io/tls-acme: "true"

Every ingress resource that has this annotation will be monitored by kube-lego (cluster-wide in all namespaces). The only part that is watched is the list spec.tls. Every element will get its own certificate through Let's Encrypt.
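
For an existing ingress, the annotation can also be added from the command line (a sketch; my-ingress is a placeholder name):

$ kubectl annotate ingress my-ingress kubernetes.io/tls-acme="true"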

Let's take a look at this ingress resource:

spec:
  tls:
  - secretName: mysql-tls
    hosts:
    - phpmyadmin.example.com
    - mysql.example.com
  - secretName: postgres-tls
    hosts:
    - postgres.example.com

On finding the above resource, the following happens:

  1. An ingress resource is created that routes the ACME challenge requests for those domains to kube-lego.

  2. kube-lego will then perform its own check, e.g. for http://mysql.example.com/.well-known/acme-challenge/_selftest, to ensure all is well before reaching out to Let's Encrypt.

  3. kube-lego will obtain two certificates (one with phpmyadmin.example.com and mysql.example.com, the other with postgres.example.com).

Please note:

  • The secretName statements have to be unique per namespace
  • secretName is required (even if no secret exists with that name, as it will be created by kube-lego)
  • Setups which utilize 1:1 NAT need to ensure internal resources can reach gateway-controlled public addresses.
  • Additionally, your domain must point to your externally available Load Balancer (either directly or via 1:1 NAT)

Switching from staging to production

At some point you'll be ready to use the Let's Encrypt production API URL. To make the switch in kube-lego, please do the following (a command-line sketch follows the list):

  • Update LEGO_URL to https://acme-v01.api.letsencrypt.org/directory.
  • Delete the existing k8s secret kube-lego-account.
  • Delete other secrets that hold data for certificates you want to replace.
  • Restart kube-lego.
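
A sketch of these steps, assuming kube-lego runs as a deployment named kube-lego in the namespace kube-lego and uses the default label app=kube-lego (adjust names to your setup):

$ kubectl set env deployment/kube-lego LEGO_URL=https://acme-v01.api.letsencrypt.org/directory -n kube-lego
$ kubectl delete secret kube-lego-account -n kube-lego
$ kubectl delete secret mysql-tls postgres-tls    # the certificate secrets you want to replace (names from the example above)
$ kubectl delete pod -l app=kube-lego -n kube-lego    # the deployment recreates the pod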

Ingress controllers

Nginx Ingress Controller

  • available through the image gcr.io/google_containers/nginx-ingress-controller
  • fully supports kube-lego from version 0.8 onwards

GCE Loadbalancers

  • you don't have to maintain the ingress controller yourself; you pay GCE to do that for you
  • every ingress resource creates one GCE load balancer
  • all services that you want to expose have to be Type=NodePort
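
If a service you want to expose through the GCE ingress is currently of type ClusterIP, it can be switched to NodePort, for example (a sketch; web-http is a placeholder service name):

$ kubectl patch svc web-http -p '{"spec": {"type": "NodePort"}}'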

Environment variables

Name Required Default Description
LEGO_EMAIL y - E-Mail address for the ACME account, used to recover from lost secrets
LEGO_POD_IP y - Pod IP address (use the downward API)
LEGO_NAMESPACE n default Namespace kube-lego is running in
LEGO_URL n https://acme-staging.api.letsencrypt.org/directory URL for the ACME server. To get "real" certificates set to the production API of Let's Encrypt: https://acme-v01.api.letsencrypt.org/directory
LEGO_SECRET_NAME n kube-lego-account Name of the secret in the same namespace that contains the ACME account secret
LEGO_SERVICE_SELECTOR n kube-lego Set the service selector to the kube-lego pod
LEGO_SERVICE_NAME_NGINX n kube-lego-nginx Service name for NGINX ingress
LEGO_SERVICE_NAME_GCE n kube-lego-gce Service name for GCE ingress
LEGO_SUPPORTED_INGRESS_CLASS n nginx,gce Specify the supported ingress class
LEGO_SUPPORTED_INGRESS_PROVIDER n nginx,gce Specify the supported ingress provider
LEGO_INGRESS_NAME_NGINX n kube-lego-nginx Ingress name which contains the routing for HTTP verification for nginx ingress
LEGO_PORT n 8080 Port where this daemon is listening for verification calls (HTTP method)
LEGO_CHECK_INTERVAL n 8h Interval for periodic certificate checks (to find expired certs)
LEGO_MINIMUM_VALIDITY n 720h (30 days) Request a renewal when the remaining certificate validity falls below that value
LEGO_DEFAULT_INGRESS_CLASS n nginx Default ingress class for resources without specification
LEGO_DEFAULT_INGRESS_PROVIDER n $LEGO_DEFAULT_INGRESS_CLASS Default ingress provider for resources without specification
LEGO_KUBE_API_URL n http://127.0.0.1:8080 API server URL
LEGO_LOG_LEVEL n info Set log level (debug, info, warn or error)
LEGO_LOG_TYPE n text Set the log type. Only json is supported as a custom value; everything else falls back to the default logrus text format
LEGO_KUBE_ANNOTATION n kubernetes.io/tls-acme Set the ingress annotation used by this instance of kube-lego to request certificates from Let's Encrypt. Allows you to run separate kube-lego instances against LE staging and production
LEGO_WATCH_NAMESPACE n `` Namespace that kube-lego should watch for ingresses and services
LEGO_RSA_KEYSIZE n 2048 Size of the private RSA key
LEGO_EXPONENTIAL_BACKOFF_MAX_ELAPSED_TIME n 5m Max time to wait for each domain authorization attempt
LEGO_EXPONENTIAL_BACKOFF_MAX_INITIAL_INTERVAL n 30s Initial interval to wait for each domain authorization attempt
LEGO_EXPONENTIAL_BACKOFF_MAX_MULTIPLIER n 2.0 Multiplier for every step
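
To see which of these are set on a running instance, the environment can be listed straight from the deployment (a sketch, assuming a deployment named kube-lego in the namespace kube-lego):

$ kubectl set env deployment/kube-lego --list -n kube-lego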

Full deployment examples

Troubleshooting

When interacting with kube-lego, it's a good idea to run with LEGO_LOG_LEVEL=debug for more verbose details. Additionally, be aware of the automatically created resources (see environment variables) when cleaning up or testing.
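
For example, to turn on debug logging and follow the output (a sketch, assuming a deployment named kube-lego in the namespace kube-lego; changing the env var triggers a rolling restart):

$ kubectl set env deployment/kube-lego LEGO_LOG_LEVEL=debug -n kube-lego
$ kubectl logs -f deployment/kube-lego -n kube-lego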

Possible resources for help:

  • The official channel is #cert-manager on kubernetes.slack.com (the old #kube-lego channel was renamed)

You also have a good chance of getting support for kube-lego in unofficial channels, but be aware that these are rather general Kubernetes discussion channels.

  • #coreos on freenode
  • Slack channels like #kubernetes-users or #kubernetes-novice on kubernetes.slack.com
  • If you absolutely just can't figure out your problem, file an issue.

Enable the pprof tool

To enable the pprof tool run kube-lego with environment LEGO_LOG_LEVEL=debug.

Capture 20 seconds of the execution trace:

$ wget http://localhost:8080/debug/pprof/trace?seconds=20 -O kube-lego.trace

You can inspect the trace sample by running:

$ go tool trace kube-lego.trace
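
Assuming the standard net/http/pprof handlers are registered on the same port (which the trace endpoint above suggests), CPU and heap profiles can be captured the same way (a sketch):

$ go tool pprof http://localhost:8080/debug/pprof/profile?seconds=20
$ go tool pprof http://localhost:8080/debug/pprof/heap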

Authors

Contributors

amcleodca, ankon, arno01, chepurko, docx, dylangrafmyre, elvinefendi, farcaller, gianrubio, haizaar, hvaara, jackhopner, jackzampolin, lestrrat, munnerz, nielsole, pavels, pgporada, philipcristiano, pierreozoux, renaudguerin, robszumski, rowdyelectron, rutsky, simonswine, simplyzee, stephenlacy, tmc, wallrj, wernight


kube-lego's Issues

Best practice when changing DNS pointer

Hi!

We've been working on a new web site that will replace an old one. When launching, we will change the DNS to point to the containers at GKE running kube-lego. When doing this, which is the best way?

Alternative 1

  1. Change DNS to point to new service
  2. Update ingress with TLS-names when DNS change has propagated

Alternative 2

  1. Update ingress with TLS-names
  2. Change DNS to point to new service

Alternative 2 assumes that kube-lego keeps trying to verify the new IP-address until it succeeds. Is that the way it works? This would likely also mean that you can prepare the site in advance and the switch will mean less downtime.

In case it is a SAN cert, will kube-lego continue requesting new certs even if one of the domains is not pointing at the GKE server at the moment, and will that exhaust the rate limit at Let's Encrypt?

SessionAffinity and nginx-ingress

Hi!

I got an environment up and running with two namespaces, production and staging. It is working fine with kube-lego and the nginx-ingress thanks to this project.

The other day I started a web socket server in production and all was nice and dandy. I set sessionAffinity to ClientIP for the services involved and it went smoothly. When creating the second web socket server in staging, it didn't go that well. Actually, I'm not certain where the error is at the moment, but I suspect that with two nginx-ingress replicas the sessionAffinity breaks. Setting the replicas of nginx-ingress to 1 makes it work.

Has anyone had the same experience and found a recipe for getting redundant nginx-ingresses to work?

Ingress tls secrets only updated if delete+create is used (and not update)?

(Warning: This report is a bit fuzzy)

I had kube-lego set up to update the certificates if they had 30 days or less to expire. Looking at the kubectl logs output, this was performed correctly on the kube-lego side. It was reporting that the certificate it was seeing had 70 days to go, which matched the timestamp of the secret associated with it.

However, no matter how long I waited, the actual certificate as seen by my external checker was the old one, with 20 days left to go.

I started looking for clues, and I found https://code.google.com/p/google-cloud-platform/issues/detail?id=105
This suggests using kubectl replace --force -f secret.json, so I did kubectl get -o json secret my-certificate and fed that back to the kubectl replace command, at which point the certificate seemed to have been updated. So maybe GCE HTTP(S) Load Balancers require us to destroy the resource first and re-create it, in order to force a reload?

I'm not 100% sure if this was the correct symptom/fix yet, but at this moment I highly suspect that this line here needs to be Delete+Create instead of update
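
For reference, the delete-and-recreate workaround described above boils down to the following (a sketch; my-certificate is a placeholder secret name):

$ kubectl get secret my-certificate -o json > secret.json
$ kubectl replace --force -f secret.json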


Again, sorry about the fuzzy report, but I'm also trying to figure out what is going on here :/

404 on setting up the GCE example.

So I had this running with the default setup. It fetched the certs and I was able to see the site up and running! Then I wanted to change some names and reorganize the files, so I tore down the cluster and rebuilt it. I haven't been able to get it running a second time! I've gone ahead and created a gist with the appropriate files.

I tried some of the troubleshooting steps from #15 but was again unable to get the certs to authenticate.

Any help would be greatly appreciated @simonswine!


Incorrect response handling

We use kube2sky for DNS inside our K8S cluster.
It was down for some reason, but I hadn't noticed that. When I created an ingress which required a certificate, kube-lego did not handle the DNS problem very well:

time="2016-12-09T17:16:54Z" level=info msg="requesting certificate for domain.foo.com" context="ingress_tls" name=https namespace="ece8168e-f5c7-4f41-9469-702f1eb2e4ec"
time="2016-12-09T17:16:54Z" level=info msg="creating new secret" context=secret name=kube-lego-account namespace="a12ddb33-13d4-43e4-9cfe-bf7e5b90935d"
time="2016-12-09T17:17:39Z" level=info msg="creating new secret" context=secret name=https namespace="ece8168e-f5c7-4f41-9469-702f1eb2e4ec"
time="2016-12-09T17:17:39Z" level=error msg="Error while process certificate requests: Secret \"https\" is invalid: [data[tls.crt]: Required value, data[tls.key]: Required value]" context=kubelego

Looks like if DNS is down kubelego fails silently and then tries to create an empty secret.

New ingress rules for `kube-lego-nginx` are not added

When I create more than one ingress resource with tls host entries, they aren't added to the rules of the kube-lego ingress config.

I have to manually add the hosts to make sure the /.well-known/... is proxied to the kube-lego service.

Seems to stop working

Kube-lego's log is shown below and it stops working. It works again after a pod restart:

E1121 12:01:09.104331       1 runtime.go:58] Recovered from panic: &runtime.TypeAssertionError{interfaceString:"interface {}", concreteString:"cache.DeletedFinalStateUnknown", assertedString:"*extensions.Ingress", missingMethod:""} (interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *extensions.Ingress)
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:52
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:40
/usr/local/go/src/runtime/asm_amd64.s:472
/usr/local/go/src/runtime/panic.go:443
/usr/local/go/src/runtime/iface.go:182
/go/src/github.com/jetstack/kube-lego/pkg/kubelego/watch.go:76
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/controller/framework/controller.go:178
<autogenerated>:25
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/controller/framework/controller.go:248
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/controller/framework/controller.go:122
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/controller/framework/controller.go:97
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:66
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:67
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:47
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/controller/framework/controller.go:97
/usr/local/go/src/runtime/asm_amd64.s:1998
E1122 01:55:41.157331       1 runtime.go:58] Recovered from panic: &runtime.TypeAssertionError{interfaceString:"interface {}", concreteString:"cache.DeletedFinalStateUnknown", assertedString:"*extensions.Ingress", missingMethod:""} (interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *extensions.Ingress)
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:52
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:40
/usr/local/go/src/runtime/asm_amd64.s:472
/usr/local/go/src/runtime/panic.go:443
/usr/local/go/src/runtime/iface.go:182
/go/src/github.com/jetstack/kube-lego/pkg/kubelego/watch.go:76
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/controller/framework/controller.go:178
<autogenerated>:25
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/controller/framework/controller.go:248
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/controller/framework/controller.go:122
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/controller/framework/controller.go:97
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:66
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:67
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:47
/go/src/github.com/jetstack/kube-lego/vendor/k8s.io/kubernetes/pkg/controller/framework/controller.go:97
/usr/local/go/src/runtime/asm_amd64.s:1998

k8s version:

Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6", GitCommit:"ae4550cc9c89a593bcda6678df201db1b208133b", GitTreeState:"clean", BuildDate:"2016-08-26T18:13:23Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6", GitCommit:"ae4550cc9c89a593bcda6678df201db1b208133b", GitTreeState:"clean", BuildDate:"2016-08-26T18:06:06Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}

Hardcoded service selector

Hi,

I was a bit confused recently why kube-lego is not working on my setup. After some debugging, I've noticed that it replaced all selectors I've defined in service.yml with just app=kube-lego.

Is it really needed? For example, I'm using slightly different naming and had to add an extra label just to make it work. If it can't be avoided, it would at least be great to document this somewhere.

Cert not renewed?

Maybe it's a misunderstanding on my side but I just activated kube-lego for a gce ingress with a manually preinstalled certificate which expires in 6 days.

This is the ingress configuration:

# ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
 name: ingress
 namespace: default
 annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - hosts:
    - sub.example.com
    secretName: tls-secret
  backend:
    serviceName: endpoint
    servicePort: api

  rules:
  - host: sub.example.com
    http:
      paths:
      - path: /something
        backend:
          serviceName: some-service
          servicePort: http

I explicitly set LEGO_MINIMUM_VALIDITY to "720h".

The log of the kube-lego pod is showing this:

time="2016-08-22T20:00:15Z" level=info msg="cert expires in 6.6 days, no renewal needed" context="ingress_tls" expire_time=2016-08-29 09:53:00 +0000 UTC name=ingress namespace=default 

So did I misunderstand the minimum validity, and if so, when is the cert renewed and how can I control the interval?

Greetings

Duplicate secret causes exhaustion of duplicate certificates for same name

When a secret already exists with the name configured in the TLS directive for an ingress, a duplicate or replacement secret cannot be created (error Invalid value: \"kubernetes.io/tls\": field is immutable").

As a result, kube-lego repeatedly requests a cert for the same subdomain, meaning you only have a few seconds before hitting the Duplicate Certificate rate limit:

time="2016-09-16T15:14:58Z" level=debug msg="testing reachablity of http://sub.example.com/.well-known/acme-challenge/_selftest" context=acme host=sub.example.com
time="2016-09-16T15:14:58Z" level=info msg="initialize lego acme connection" context=acme
2016/09/16 15:14:58 [INFO][sub.example.com] acme: Obtaining bundled SAN certificate
2016/09/16 15:14:58 [INFO][sub.example.com] acme: Trying to solve HTTP-01
2016/09/16 15:14:58 [INFO][sub.example.com] The server validated our request
2016/09/16 15:14:58 [INFO][sub.example.com] acme: Validations succeeded; requesting certificates
time="2016-09-16T15:14:59Z" level=warning msg="Error while obtaining certificate: Errors while obtaining cert: map[sub.example.com:acme: Error 429 - urn:acme:error:rateLimited - Error creating new cert :: Too many certificates already issued for exact set of domains: sub.example.com]" context=acme

This could be avoided in two ways:

  1. Check if a secret with the same name exists before attempting creation of a certificate
  2. When a cert has been retrieved successfully but then an error occurs, either cache or drop the cert and log the error, but do not request the cert again. This would allow the user to rectify the issue, and cert creation could be attempted again when the application is restarted.
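
A manual workaround for the situation described above is to remove the pre-existing secret so that kube-lego can recreate it with the expected kubernetes.io/tls type (a sketch; the secret name is whatever your ingress TLS block references):

$ kubectl get secret my-tls-secret -o yaml    # inspect the existing secret and its type
$ kubectl delete secret my-tls-secret         # kube-lego recreates it on the next check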

404 issue with GCE example

I followed the GCE example but continually see logs like this in the kube-lego pod:

time="2016-08-14T16:50:19Z" level=debug msg="testing reachablity of http://foo.redacted.com/.well-known/acme-challenge/_selftest" context=acme host=foo.redacted.com
time="2016-08-14T16:50:19Z" level=warning msg="wrong status code '404'" context=acme host=foo.redacted.com

My DNS record is pointed at the Ingress external IP for foo.redacted.com. I can curl it via HTTP, but only the HTML index is accessible: all of its CSS, JS and images 404.

Are there any suggested debugging steps? I previously had this working but wanted to try out the GCE solution.

Edit: I also see a bunch of wrong status code '502' logs after killing the kube-lego pod and watching the new one come up.

Edit 2: looks like the css/js/images issue was fixed by updating - path: / to - path: /* in my ingress object.

May be a bug?

Should the code in tls.go be as below?

    if !i.newCertNeeded(i.ingress.kubelego.LegoMinimumValidity()) {
        i.Log().Infof("no cert request needed")
        return nil
    }

Which IP do I point my domain at?

In the screencast example, it shows two ingress resources: echoserver and kube-lego. Each one has 2 external IP addresses. Which one do you use for the demo.kube-lego.jetstack.net A record?

It'd be helpful to include this in the docs.

[nginx] Ingress constantly drops and recreates static ip

$ kubectl describe ing proding --namespace prodenv
Name:           proding
Namespace:      prodenv
Address:        
Default backend:    default-http-backend:80 (10.48.2.7:8080)
TLS:
  proding-tls terminates www.{foo}.com,{foo}.com,www.{foo}.design,{foo}.design
Rules:
  Host          Path    Backends
  ----          ----    --------
  www.{foo}.com 
                /   jackserver:80 (<none>)
  {foo}.com 
                /   jackserver:80 (<none>)
  www.{foo}.design  
                /   pylonserver:80 (<none>)
  {foo}.design      
                /   pylonserver:80 (<none>)
Annotations:
Events:
  FirstSeen LastSeen    Count   From                SubobjectPath   Type        Reason  Message
  --------- --------    -----   ----                -------------   --------    ------  -------
  15m       15m     1   {nginx-ingress-controller }         Normal      CREATE  prodenv/proding
  15m       15m     1   {nginx-ingress-controller }         Normal      CREATE  ip: 104.196.29.121
  15m       15m     1   {nginx-ingress-controller }         Normal      UPDATE  prodenv/proding
  14m       14m     1   {nginx-ingress-controller }         Normal      DELETE  ip: 104.196.29.121
  14m       14m     1   {nginx-ingress-controller }         Normal      CREATE  prodenv/proding
  14m       14m     1   {nginx-ingress-controller }         Normal      UPDATE  prodenv/proding
  14m       14m     1   {nginx-ingress-controller }         Normal      CREATE  ip: 104.196.29.121
  9m        9m      1   {nginx-ingress-controller }         Normal      CREATE  prodenv/proding
  9m        9m      1   {nginx-ingress-controller }         Normal      CREATE  ip: 104.196.29.121
  9m        9m      1   {nginx-ingress-controller }         Normal      UPDATE  prodenv/proding
  8m        8m      1   {nginx-ingress-controller }         Normal      CREATE  prodenv/proding
  8m        8m      1   {nginx-ingress-controller }         Normal      DELETE  ip: 104.196.29.121
  8m        8m      1   {nginx-ingress-controller }         Normal      UPDATE  prodenv/proding
  8m        8m      1   {nginx-ingress-controller }         Normal      CREATE  ip: 104.196.29.121
  2m        2m      1   {nginx-ingress-controller }         Normal      CREATE  prodenv/proding
  2m        2m      1   {nginx-ingress-controller }         Normal      UPDATE  prodenv/proding
  2m        2m      1   {nginx-ingress-controller }         Normal      CREATE  ip: 104.196.29.121
  2m        2m      1   {nginx-ingress-controller }         Normal      CREATE  prodenv/proding
  2m        2m      1   {nginx-ingress-controller }         Normal      CREATE  ip: 104.196.29.121
  2m        2m      1   {nginx-ingress-controller }         Normal      UPDATE  prodenv/proding
  1m        1m      1   {nginx-ingress-controller }         Normal      DELETE  ip: 104.196.29.121

Gist with configs

The backends were reachable initially, but then the wonkiness started.

I am happy to provide any debugging information.

Multiple Kubernetes clusters sharing domain names

Are there any plans to support multiple clusters hosting the same domain? In our case we have Kubernetes cluster(s) in multiple regions and use DNS-based traffic routing. This has the issue that only a single cluster would be able to obtain the certificate with the current kube-lego integration.

I'm currently writing a patch that allows kube-lego to pass the challenge request through to other clusters (query DNS for the list of clusters (a bit torn between SRV and A records) and try to retrieve the challenge response from each; if any of them returns 200, return that to the client). Is this design something that could be upstreamed, or is it something that should be addressed in some other way / is simply out of scope for the project?

kube-lego deployed on GKE uses both NGINX and GLBC ingress controllers

When the kube-lego ingress is created, it uses both nginx-ingress-controller and loadbalancer-controller:

Name:                   kube-lego
Namespace:              default
Address:                5.6.7.8,1.2.3.4
Default backend:        default-http-backend:80 (10.84.0.3:8080)
Rules:
  Host                  Path    Backends
  ----                  ----    --------
  echo.example.com
                        /.well-known/acme-challenge     kube-lego:8080 (<none>)
Annotations:
  backends:             {"k8s-be-31590--64893abc10ca8449":"HEALTHY"}
  forwarding-rule:      k8s-fw-default-kube-lego--64893abc10ca8449
  ssl-redirect:         false
  target-proxy:         k8s-tp-default-kube-lego--64893abc10ca8449
  url-map:              k8s-um-default-kube-lego--64893abc10ca8449
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason  Message
  ---------     --------        -----   ----                            -------------   --------        ------  -------
  8m            8m              1       {nginx-ingress-controller }                     Normal          CREATE  default/kube-lego
  8m            8m              1       {loadbalancer-controller }                      Normal          ADD     default/kube-lego
  8m            7m              2       {nginx-ingress-controller }                     Normal          CREATE  ip: 1.2.3.4
  7m            7m              1       {loadbalancer-controller }                      Normal          CREATE  ip: 5.6.7.8
  7m            7m              1       {loadbalancer-controller }                      Warning         Status  Operation cannot be fulfilled on ingresses.extensions "kube-lego": the object has been modified; please apply your changes to the latest version and try again
  8m            2m              7       {nginx-ingress-controller }                     Normal          UPDATE  default/kube-lego

Probably it should be created with the annotation kubernetes.io/ingress.class: "nginx" to disable GLBC.
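
As a stop-gap, the annotation can be added to the existing kube-lego ingress manually (a sketch):

$ kubectl annotate ingress kube-lego kubernetes.io/ingress.class=nginx --overwrite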

AWS LoadBalancer Support

I would love to use this project for our Kubernetes cluster running in AWS. Unfortunately it doesn't seem that this project supports AWS Elastic Load Balancers yet. I know that lego itself supports Route 53, which is Amazon's DNS provider; we just need the load balancer integration.

Would this be possible to implement?

private key in logs

thank you for writing and publishing such a nice piece of software 👍

however, I'd prefer not to see my private keys in the log, or at least not at info level. Could you please change this?

2016/09/16 ..:..:.. [INFO][my.host.name] Server responded with a certificate.
time="2016-09-16T...." level=info msg="Got certs={my.host.name https://acme-v01.api.letsencrypt.org/acme/cert/.....  https://acme-v01.api.letsencrypt.org/acme/reg/..... -----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAu4iFuMTP/YxdVeZBfbrbbAdyvm .......

i'm using version jetstack/kube-lego:0.1.2

challenge backend failing health checks with GCE ingress

Background:

I have two existing GCE ingresses running without kube-lego. This is what the health checks look like:

$ gcloud compute http-health-checks list
NAME                            HOST  PORT   REQUEST_PATH
k8s-be-30804--eeba4b4d12737265        30804  /login
k8s-be-32516--eeba4b4d12737265        32516  /
k8s-be-32742--eeba4b4d12737265        32742  /healthz


The /login path is for a legacy service that does not expose a dedicated, unauthenticated health(z) endpoint. /login is used because it doesn't require authentication and returns a 200. The deployment behind this ingress exposes /login as a readinessProbe.

Issue:

Now I want to test kube-lego (0.1.2) with the echoserver sample. After deploying the echoserver ingress (and updating dns), I see a lot of errors in the kube-lego logs indicating that the reachability test is failing with wrong status code '502'. Sure enough, looking at the GCE load balancer, I can see that none of my nodes for the /.well-known/acme-challenge/* backend are healthy.


gcloud compute backend-services get-health k8s-be-32044--eeba4b4d12737265  | grep healthState
  - healthState: UNHEALTHY
  - healthState: UNHEALTHY
  - healthState: UNHEALTHY


When I look at the health checks again, I see that there are two /login paths. That doesn't seem right. The health check for /.well-known/acme-challenge/* should be /healthz, right?


$ gcloud compute http-health-checks list
NAME                            HOST  PORT   REQUEST_PATH
k8s-be-30804--eeba4b4d12737265        30804  /login
k8s-be-32044--eeba4b4d12737265        32044  /login
k8s-be-32243--eeba4b4d12737265        32243  /
k8s-be-32516--eeba4b4d12737265        32516  /
k8s-be-32742--eeba4b4d12737265        32742  /healthz

$ gcloud compute http-health-checks describe k8s-be-32044--eeba4b4d12737265
checkIntervalSec: 1
creationTimestamp: '2016-09-07T07:12:18.014-07:00'
description: kubernetes L7 health check from readiness probe.
healthyThreshold: 1
host: ''
id: '304508149891440301'
kind: compute#httpHealthCheck
name: k8s-be-32044--eeba4b4d12737265
port: 32044
requestPath: /login
selfLink: https://www.googleapis.com/compute/v1/projects/imm-gce/global/httpHealthChecks/k8s-be-32044--eeba4b4d12737265
timeoutSec: 1
unhealthyThreshold: 10

If I manually change the path for the k8s-be-32044--eeba4b4d12737265 health check to /healthz and then wait a few minutes, the backend for /.well-known/acme-challenge/* becomes healthy and kube-lego is able to get a certificate.
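
That manual change corresponds roughly to the following command (a sketch, using the health check name from this report):

$ gcloud compute http-health-checks update k8s-be-32044--eeba4b4d12737265 --request-path /healthz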

I can't figure out how the health check for the challenge endpoint is getting created with /login instead of /healthz. Is it somehow picking it up from the pre-existing ingress? Or am I doing something wrong here? Any ideas? Thanks!

Status code 401 during reachability test

Hello.

After adding the kube lego deployment these lines are printed :

time="2016-09-21T16:30:18Z" level=warning msg="wrong status code '401'" context=acme host=my-host.com 
time="2016-09-21T16:30:18Z" level=warning msg="Error while obtaining certificate: reachabily test failed for this cert" context=acme 
time="2016-09-21T16:30:23Z" level=debug msg="testing reachablity of http://my-host.com/.well-known/acme-challenge/_selftest" context=acme host=my-host.com

I'm using the nginx controller with HTTP auth.
Can this be the cause? How can I pass the reachability test?

Only adds one cert to load balancers on GCE

We have 3 different stages of our app. They are all running on a different domain.
Each of them already obtained a ssl certificate and saved it as a secret.

This is a screenshot showing the secrets:

[screenshot of the secrets]

But for only one of the stages is the SSL cert actually set in the LoadBalancer. All others are only available through HTTP and not HTTP & HTTPS.

Here are all SSL certs available in the LoadBalancer settings:

[screenshot of the load balancer SSL certificates]

Any idea why it only adds one certificate to the Google Cloud Engine Network settings?

allow changing the Let's Encrypt cert provider by annotation

Thanks for the nice tool. I am currently using it with the staging URL, but for some services I would prefer to get the cert from the production LE CA rather than staging. So the ability to choose the provider per ingress with an annotation would be quite handy. The scenario would be like this:
default: use the configured LEGO_URL value for every ingress where no annotation is specified to override it.
Support an annotation such as:
kubernetes.io/tls-acme-lego-url: "https://acme-staging.api.letsencrypt.org/directory"

The value can be staging or prod depending on the use case. This can help avoid rate limiting while you are testing things, and you can use production once you are done.

Question: Recommended usage for many different domains.

First off, this is awesome. Thank you so much for providing this amazing tool. It took me a couple hours, but I've got it up and running (using nginx), and everything is working great.

In my particular use case, I have an application that powers many different domains, say aaa.com, bbb.com, ccc.com, etc. Adding and removing domains needs to be automated.

What's the best (or recommended) way to handle this type of scenario? I'm thinking that creating separate ingresses within the application's namespace for each domain would be the best approach. (The ingresses would be created programmatically using the Kubernetes API.) I've manually tested this, but I'm not sure how scalable this approach is, or if there are other issues around having many ingresses.

I'm relatively new to Kubernetes and Let's Encrypt, any help is greatly appreciated. Thanks!

Getting staging certificate instead of production

The kube-lego container has LEGO_URL set to production:

Containers:
  kube-lego:
    Container ID:       docker://...
    Image:              jetstack/kube-lego:0.1.2
    Image ID:           docker://...
    Port:               8080/TCP
    State:              Running
    Ready:              True
    Restart Count:      0
    Readiness:          http-get http://:8080/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment Variables:
      LEGO_EMAIL:       [email protected]
      LEGO_NAMESPACE:   default (v1:metadata.namespace)
      LEGO_POD_IP:       (v1:status.podIP)
      LEGO_URL:         https://acme-v01.api.letsencrypt.org/directory

However it's retrieving the staging TLS certificate:

time="2016-08-31T11:51:50Z" level=info msg="CREATE foo/ingress" context=kubelego 
time="2016-08-31T11:51:50Z" level=info msg="creating new secret" context=secret name=foo-tls-certificate namespace=foo 
time="2016-08-31T11:51:50Z" level=info msg="no cert associated with ingress" context="ingress_tls" name=ingress namespace=foo 
time="2016-08-31T11:51:50Z" level=info msg="requesting certificate for foo.example.com" context="ingress_tls" name=ingress namespace=foo 
time="2016-08-31T11:51:50Z" level=debug msg="testing reachablity of http://foo.example.com/.well-known/acme-challenge/_selftest" context=acme host=foo.example.com 
time="2016-08-31T11:51:50Z" level=warning msg="wrong status code '502'" context=acme host=foo.example.com 
time="2016-08-31T11:52:08Z" level=debug msg="testing reachablity of http://foo.example.com/.well-known/acme-challenge/_selftest" context=acme host=foo.example.com 
time="2016-08-31T11:52:08Z" level=warning msg="wrong status code '503'" context=acme host=foo.example.com 
time="2016-08-31T11:52:16Z" level=debug msg="testing reachablity of http://foo.example.com/.well-known/acme-challenge/_selftest" context=acme host=foo.example.com 
2016/08/31 11:52:17 [INFO][foo.example.com] acme: Obtaining bundled SAN certificate
time="2016-08-31T11:52:18Z" level=warning msg="Error while obtaining certificate: Errors while obtaining cert: map[foo.example.com:acme: Error 400 - urn:acme:error:badNonce - JWS has invalid anti-replay nonce *************************************]" context=acme 
time="2016-08-31T11:52:34Z" level=debug msg="testing reachablity of http://foo.example.com/.well-known/acme-challenge/_selftest" context=acme host=foo.example.com 
2016/08/31 11:52:34 [INFO][foo.example.com] acme: Obtaining bundled SAN certificate
2016/08/31 11:52:35 [INFO][foo.example.com] acme: Trying to solve HTTP-01
2016/08/31 11:52:35 [INFO][foo.example.com] The server validated our request
2016/08/31 11:52:35 [INFO][foo.example.com] acme: Validations succeeded; requesting certificates
2016/08/31 11:52:36 [INFO][foo.example.com] Server responded with a certificate.
time="2016-08-31T11:52:36Z" level=info msg="Got certs={foo.example.com https://acme-staging.api.letsencrypt.org/acme/cert/*****************  https://acme-staging.api.letsencrypt.org/acme/reg/123456 -----BEGIN RSA PRIVATE KEY-----\n*************************

Note: I kept the Nginx Ingress controller hsts-include-subdomains at its default (true).

Should I expect to receive any e-mail at [email protected]? Should I delete the token or something else if I change from staging to production?
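
As described in the "Switching from staging to production" section above, the cached ACME account secret has to be removed so that kube-lego re-registers against the production endpoint (a sketch, assuming the default secret name and the default app=kube-lego label; run in the namespace kube-lego is deployed to):

$ kubectl delete secret kube-lego-account
$ kubectl delete pod -l app=kube-lego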

Explain a little Nginx Ingress config

kube-lego/examples/nginx/20-nginx-configmap.yaml contains a lot of overrides of the default values.

It's worth putting a comment for each to mark:

  • Is it optional or required?
  • Why do you suggest overriding it?

Support for same host duplicated across ingress objects

Hi,

For various reasons I need to have the different paths for a host spread across different ingress objects (mainly to enable http basic-auth on some paths only, since there's no way to set this at the "rule" level and it can only be done using ingress-level annotations)

I couldn't find a way to make kube-lego play nice with this. I do want to share the same certificate/secret for the same host across these different ingress objects, but it complains with:
the secret xxxxxx/default is used multiple times. These linked TLS ingress elements where ignored:

And it then ignores all TLS ingress elements, not just additional duplicates but even the first definition it encounters.

Is there a workaround I'm missing? Could kube-lego maybe handle duplicates more gracefully (gather the list of hosts/secrets needed across all ingresses, de-duplicate if needed, and request)?

Health Check

Would it be possible to build a health check into kube-lego itself? I feel like this would be a simple enough task, and it would allow the Kubernetes server to use liveness probes with kube-lego. If someone points me in the right direction, perhaps I could help with this.

the server has asked for the client to provide credentials error

I'm getting the following error (and that is all that is in the logs) when the pod starts. I assume this is when kube-lego tries to talk to my kube API.

2016-10-18T00:28:29.478223138Z time="2016-10-18T00:28:29Z" level=info msg="kube-lego 0.1.2-03c268d7 starting" context=kubelego 
2016-10-18T00:28:29.513084276Z time="2016-10-18T00:28:29Z" level=fatal msg="the server has asked for the client to provide credentials" context=kubelego 

500 warnings / badNonce

When kube-lego starts up and tries to obtain the cert, I see 3 repetitions of these logs (domain masked):

time="2016-06-06T12:54:05Z" level=debug msg="testing reachablity of http://foo.com/.well-known/acme-challenge/_selftest" context=acme host=foo.com
time="2016-06-06T12:54:05Z" level=warning msg="wrong status code '503'" context=acme host=foo.com

This results in:

time="2016-06-06T12:54:06Z" level=info msg="initialize lego acme connection" context=acme
2016/06/06 12:54:06 [INFO][foo.com] acme: Obtaining bundled SAN certificate
t
Error 400 - urn:acme:error:badNonce - JWS has invalid anti-replay nonce tlGC7MNn-udU86kjs_DOK1pTWynV5P3kYjNmDuVIruo]" context=acme

Are those related? I saw many others with similar JWS errors and read possible solutions (retry ~6 times, change email address) but still no luck.

Kube-lego works but connection gives default backend - 404

Hi!

To start with, thanks for your good work with kube-lego!

I've set up kube-lego with GCE and it works fine. The certificates are requested and deployed for two sites, one mobile and one desktop site. However, only the mobile site is reachable; the desktop site returns default backend - 404.

The setup files I've used are https://gist.github.com/johnparn/ce0e025e8c015de812c0b84ef8b1faf9

Containers for both mobile and desktop are exposed on port 80. The only difference that I've spotted is that in the GCE Load Balancer for the mobile service there is a path rule with All unmatched (default) for that particular host name.

[screenshot: GCE load balancer path rules for the mobile site]

This rule is obviously missing in GCE LB for desktop and I believe this is the problem.

[screenshot: GCE load balancer path rules for the desktop site]

However, I tried creating a corresponding rule for the desktop LB, but I don't seem to be able to create an All unmatched (default) rule for the desktop host, at least not using the GUI. And I want to make sure, in case I have to rerun the scripts, that the rule actually gets created.

Any insights appreciated!
// John

Renewal doesn't work

Version of kube-lego:

2016-10-24T13:22:49.775587542Z time="2016-10-24T13:22:49Z" level=info msg="kube-lego 0.1.2-a823e819 starting" context=kubelego 
2016-10-24T13:22:49.804969421Z time="2016-10-24T13:22:49Z" level=info msg="connected to kubernetes api v1.4.3" context=kubelego 

Logs for one domain:

2016-10-24T13:23:22.062415223Z time="2016-10-24T13:23:22Z" level=info msg="cert expires soon so renew" context="ingress_tls" expire_time=2016-11-14 16:27:00 +0000 UTC name=kubedash namespace=kube-system 
2016-10-24T13:23:22.062443256Z time="2016-10-24T13:23:22Z" level=info msg="requesting certificate for kubedash.test.gelato.tech" context="ingress_tls" name=kubedash namespace=kube-system 
2016-10-24T13:23:25.336489023Z time="2016-10-24T13:23:25Z" level=info msg="authorization successful" context=acme domain=kubedash.test.gelato.tech 
2016-10-24T13:23:25.336529610Z time="2016-10-24T13:23:25Z" level=warning msg="authorization failed for some domains" context=acme failed_domains=[ kubedash.test.gelato.tech] 

And one more error from logs:
2016-10-24T13:36:36.966341172Z time="2016-10-24T13:36:36Z" level=error msg="Error while process certificate requests: error getting certificate: 400 urn:acme:error:malformed: Error creating new cert :: at least one DNS name is required, error getting certificate: 400 urn:acme:error:malformed: Error creating new cert :: at least one DNS name is required, error getting certificate: 400 urn:acme:error:malformed: Error creating new cert :: at least one DNS name is required, error getting certificate: 400 urn:acme:error:malformed: Error creating new cert :: at least one DNS name is required, error getting certificate: 400 urn:acme:error:malformed: Error creating new cert :: at least one DNS name is required, error getting certificate: 400 urn:acme:error:malformed: Error creating new cert :: at least one DNS name is required, error getting certificate: 400 urn:acme:error:malformed: Error creating new cert :: at least one DNS name is required, error getting certificate: 400 urn:acme:error:malformed: Error creating new cert :: at least one DNS name is required, error getting certificate: 400 urn:acme:error:malformed: Error creating new cert :: at least one DNS name is required" context=kubelego

Support GCE ForwardingRule

Specially marked ingress resources (annotation: kubernetes.io/ingress.class: gce) should be set up in a way that no nginx is needed. (Single resource, kube-lego svc in the same NS)

(related to #6)

Better docs why we not default to production (yet)

LEGO_URL, like in nearly all ACME clients, should default to the Let's Encrypt production backend. Lego itself does that. However, the YAML given as an example should override it with the staging environment.

This is desirable not only for consistency but also because the Let's Encrypt staging URL is easy to find, while the production URL is harder to find (as it's considered the default in all clients).

Nginx-Ingress without Service supported?

I am currently trying to setup a small bare-metal one-node "cluster". For this I would like to use kube-lego. I have read the docs, but up until now, kube-lego is starting and then halting after

2016-10-05T20:24:41.039507691Z time="2016-10-05T20:24:41Z" level=info msg="server listening on http://:8080/" context=acme

I am using the "usual" nginx-ingress controller, but without a service (like found in your example). The config looks like:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{template "fullname" . }}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        # Required for the auto-create kube-lego-nginx service to work.
        app: "{{ .Chart.Name }}"
    spec:
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:canary
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: LEGO_KUBE_API_URL
          value: http://lego.xxx.yyy:32767
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: {{template "fullname" . }}
              key: lego.email
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: {{template "fullname" . }}
              key: lego.url
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1

My ingress is configured quite similar to the one found in your examples:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: "{{ .Chart.Name }}"
annotations:
  kubernetes.io/tls-acme: "true"
spec:
  tls:
  - secretName: {{ .Chart.Name }}-tls
    hosts:
    - {{ .Values.hostName }}
  rules:
  - host: {{ .Values.hostName }}
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ template "fullname" . }}
          servicePort: 80

For HTTP, everything is working fine, and now I would like to set up HTTPS. Any help on this one?

This is more of a support request than a bug, I know, but I was unsure how to reach you.

Can't seem to get the GCE LoadBalancer to work (502)

I only see error 502, no matter what I try.

My ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - hosts:
    - redacted.io
    - www.redacted.io
    secretName: tls
  - hosts:
    - staging.redacted.io
    secretName: tls-staging
  rules:
  - host: staging.redacted.io
    http:
      paths:
      - path: /*
        backend:
          serviceName: web-http-staging
          servicePort: 80
  - host: redacted.io
    http:
      paths:
      - path: /*
        backend:
          serviceName: web-http
          servicePort: 80
  - host: www.redacted.io
    http:
      paths:
      - path: /*
        backend:
          serviceName: web-http
          servicePort: 80

And one of my services (The other looks exactly the same):

apiVersion: v1
kind: Service
metadata:
  name: web-http-staging
  labels:
    app: web
    tier: frontend
spec:
  type: NodePort
  ports:
    # the port that this service should serve on
  - name: http
    port: 80
    protocol: TCP
  - name: https
    port: 443
    protocol: TCP
  selector:
    app: web-http-staging
    tier: frontend

And lastly, my pod listens on 80 and 443. If I curl the internal service IP from a node, I get the correct response (200). Therefore the load balancer itself must be failing.

The .well-known path is present in the GCE load balancer, and there are four backends -- half of them have the health status 0/4, while the other two have 4/4. I have no idea why they are reported as unhealthy, since none of my Pods have health checks and they return 200 if you connect directly to the service.

Help is greatly appreciated. Thank you.

Error 403 on HTTP-01 challenge

Hello.

I'm trying to use kube-lego with nginx controller.
I used echoserver as explained in the example.
However the HTTP-01 challenge fails:

time="2016-09-21T18:33:14Z" level=warning msg="Error while obtaining certificate: Errors while obtaining cert: map[mydomain.com:acme: Error 403 - urn:acme:error:unauthorized - Invalid response from http://mydomain.com/.well-known/acme-challenge/OHTuEazqigcmn1x-kSXMmzWnHNRJTrLFNfB0kZL7Py8: \"CLIENT VALUES:\r\nclient_address=('10.x.x.x', 47151) (10.x.x.x)\r\ncommand=GET\r\npath=/.well-known/acme-challenge/OHTuEazqigcmn1x-k\"\nError Detail:\n\tValidation for mydomain.com:80\n\tResolved to:\n\t\t104.y.y.y\n\tUsed: 104.y.y.y\n\n]" context=acme 

It looks normal, as the echoserver catches the request and responds with something not related to the token specified in the URL.

How is that possible? Is there something that I forgot, or something missing in the docs?

New tag / release?

From looking at the issues, it seems that 1.2 can fail to renew certs (or possibly always causes an 8 hour expiration?) because it was using CheckInterval instead of MinimalValidity. I see that this is fixed in the canary tag, but that appears to be tracking master, and I don't want to depend on a mutable tag.

Any thoughts on cutting a new release with the fixed validity issue?

edit: Though I suppose if you set checkinterval to 5 minutes[0], you shouldn't have more than 5 minutes of invalid certs.

[0]: I think this minimum value is too high, the check seems like a lightweight thing, I don't think setting it to something outrageously low like 10 seconds would really cause problems, I may be wrong

edit: I found a couple of bugs that I have patches for on my fork, will submit prs soon

Multiple domains for one service

Hi!

This is a question regarding the best way to handle multiple domains for one service using the nginx ingress-controller. Currently I add the new TLS domain names to the ingress and duplicate the rules for each domain. Could this be written in a less verbose manner?

When adding a new domain name to the list, a new SAN cert will be issued, hence it would make more sense to apply a few bulk changes instead of multiple small changes, to minimize the risk of being blocked from making further certificate requests.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mobile-web-ingress
  namespace: production
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - m.domain.org
    - m.domain.com
    - m.domain.net
    secretName: mobile-web-tls
  rules:
  - host: m.domain.org
    http:
      paths:
      - path: /
        backend:
          serviceName: mobile-web-svc
          servicePort: 80
  - host: m.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mobile-web-svc
          servicePort: 80
  - host: m.domain.net
    http:
      paths:
      - path: /
        backend:
          serviceName: mobile-web-svc
          servicePort: 80

Error during getting secret: resource name may not be empty

Looking in the logs for the kube-lego pod, I'm seeing "Error during getting secret: resource name may not be empty" appear quite frequently. kube-lego is able to fetch a certificate, but because of this error it appears not to store it and thus tries to fetch a new certificate, eventually hitting LE's rate limit on certificate issuance.

2016-11-20T12:17:22.492732131Z time="2016-11-20T12:17:22Z" level=info msg="requesting certificate for domain1.kingj.net,domain2.kingj.net" context="ingress_tls" name=domain1-ingress namespace=default 
2016-11-20T12:17:23.207569614Z time="2016-11-20T12:17:23Z" level=debug msg="testing reachablity of http://domain2.kingj.net/.well-known/acme-challenge/_selftest" context=acme domain=domain2.kingj.net 
2016-11-20T12:17:23.207887947Z time="2016-11-20T12:17:23Z" level=debug msg="testing reachablity of http://domain1.kingj.net/.well-known/acme-challenge/_selftest" context=acme domain=domain1.kingj.net 
2016-11-20T12:17:24.507967398Z time="2016-11-20T12:17:24Z" level=debug msg="got authorization: &{URI:https://acme-v01.api.letsencrypt.org/acme/challenge/REMOVED Status:valid Identifier:{Type: Value:} Challenges:[] Combinations:[]}" context=acme domain=domain1.kingj.net 
2016-11-20T12:17:24.508068395Z time="2016-11-20T12:17:24Z" level=info msg="authorization successful" context=acme domain=domain1.kingj.net 
2016-11-20T12:17:24.508862168Z time="2016-11-20T12:17:24Z" level=debug msg="got authorization: &{URI:https://acme-v01.api.letsencrypt.org/acme/challenge/REMOVED Status:valid Identifier:{Type: Value:} Challenges:[] Combinations:[]}" context=acme domain=domain2.kingj.net 
2016-11-20T12:17:24.508943840Z time="2016-11-20T12:17:24Z" level=info msg="authorization successful" context=acme domain=domain2.kingj.net 
2016-11-20T12:17:25.726043026Z time="2016-11-20T12:17:25Z" level=info msg="successfully got certificate: domains=[domain1.kingj.net domain2.kingj.net] url=https://acme-v01.api.letsencrypt.org/acme/cert/REMOVED" context=acme 
2016-11-20T12:17:25.726140077Z time="2016-11-20T12:17:25Z" level=debug msg="certificate pem data:"REMOVED" context=acme 
2016-11-20T12:17:25.726308003Z time="2016-11-20T12:17:25Z" level=warning msg="Error during getting secret: resource name may not be empty" context=kubelego 
2016-11-20T12:17:25.726400451Z time="2016-11-20T12:17:25Z" level=error msg="Error while process certificate requests: resource name may not be empty" context=kubelego 
2016-11-20T12:17:25.726414706Z time="2016-11-20T12:17:25Z" level=debug msg="worker: done processing true" context=kubelego 

The ingress I'm requesting a certificate for, the nginx LB, and kube-lego are all part of the default namespace.
