
aws-load-balancer-controller's Issues

controller deletes route53 entries not created by ingress

I currently have some Route 53 DNS entries that were created using a classic ELB (a Kubernetes Service of type LoadBalancer).

However, after deploying this controller and creating an Ingress object (with a hostname not already taken) that uses it, I noticed that some of my original DNS records (not associated with the Ingress object) are automatically deleted when the controller creates a Route 53 entry for the Ingress object.
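A safeguard along these lines could keep the controller from touching records it did not create: delete a record only when its alias target points at the controller's own ALB. A minimal sketch — the `record` type and `canDelete` helper are hypothetical, not actual controller code:

```go
package main

import "fmt"

// record is a minimal stand-in for a Route 53 resource record set.
type record struct {
	Name        string
	AliasTarget string
}

// canDelete reports whether the controller may delete a record: only
// records whose alias target points at the ALB it manages are fair game.
func canDelete(r record, managedALBDNS string) bool {
	return r.AliasTarget == managedALBDNS
}

func main() {
	// A pre-existing record pointing at an old classic ELB must survive.
	pre := record{Name: "legacy.example.com", AliasTarget: "old-classic-elb.amazonaws.com"}
	fmt.Println(canDelete(pre, "internal-dev-123.us-east-1.elb.amazonaws.com")) // false
}
```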

AWS doesn't allow you to add terminated EC2 instances to a TargetGroup

[ERROR]: InvalidTarget: The following targets are not in a running state and cannot be registered: 'i-0b75b61b1fe6ea77d', 'i-07bfeb82b48634b6b'

I was able to reproduce this by adding a worker and removing a worker at the same time. The added worker creates a new node in GetNodes(), triggering a modification of the target group. However, AWS sees that the removed worker is now terminated and throws the error above.
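One way to avoid the error would be to drop non-running instances before registering targets. A minimal sketch of that filter — the `instance` type and `filterRunning` helper are hypothetical stand-ins, not controller code:

```go
package main

import "fmt"

// instance is a minimal stand-in for the EC2 instance data the controller
// sees; "running" mirrors the EC2 instance-state name.
type instance struct {
	ID    string
	State string
}

// filterRunning returns only the instance IDs that are safe to register
// with a target group, i.e. those still in the "running" state.
func filterRunning(instances []instance) []string {
	var ids []string
	for _, in := range instances {
		if in.State == "running" {
			ids = append(ids, in.ID)
		}
	}
	return ids
}

func main() {
	nodes := []instance{
		{ID: "i-0b75b61b1fe6ea77d", State: "terminated"},
		{ID: "i-0aaaaaaaaaaaaaaaa", State: "running"},
	}
	fmt.Println(filterRunning(nodes)) // only the running instance survives
}
```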

Standardize logging usage

Our logging package provides standard conventions that make it easy to grep for what's relevant when debugging the controller.

Let's ensure all controller logging goes through this for consistency.

Add Support for Route53 Delegated Zones

We use delegated zones for each "stack" of our software. However, the alb-ingress-controller does not respect these.

We might have, for instance:

  - host: auth.some-env.some-corp.com

Currently, it will make an auth.some-env record in the some-corp.com zone, even if the some-env.some-corp.com zone exists. Instead, it should make the record in the delegated zone.
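One plausible fix is to pick the most specific hosted zone whose name is a suffix of the record's hostname, so a delegated zone wins over its parent. A sketch under that assumption — `bestZone` is a hypothetical helper, not controller code:

```go
package main

import (
	"fmt"
	"strings"
)

// bestZone picks the most specific (longest) hosted zone whose name is a
// suffix of the record's hostname, so delegated zones win over parents.
func bestZone(host string, zones []string) string {
	best := ""
	for _, z := range zones {
		if (host == z || strings.HasSuffix(host, "."+z)) && len(z) > len(best) {
			best = z
		}
	}
	return best
}

func main() {
	zones := []string{"some-corp.com", "some-env.some-corp.com"}
	fmt.Println(bestZone("auth.some-env.some-corp.com", zones)) // some-env.some-corp.com
}
```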

Support wider range of AWS certificate types

ACM has a lot of challenges, largely due to requirements for manual approval of certs.
This also rules out the mainstream signers.

Support for "server-certificate" ARNs as an alternative to ACM would be highly desirable.

Need to validate that no two specified subnets are in the same AZ

This can be added to our validation.go logic, as it applies whether you look up the subnets by ID or by tag.

I0413 17:24:34.240694       1 ec2.go:36] Request: ec2/&{DescribeSecurityGroups POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: {
  GroupIds: ["sg-1e84f777"]
}
I0413 17:24:34.272223       1 route53.go:52] Request: route53/&{ListHostedZonesByName GET /2013-04-01/hostedzonesbyname %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: {
  DNSName: "josh-test-dns.com"
}
I0413 17:24:34.358992       1 log.go:48] [ALB-INGRESS] [echoserver-echoserver] [INFO]: Start ELBV2 (ALB) creation.
I0413 17:24:34.359302       1 elbv2.go:43] Request: elasticloadbalancing/&{CreateLoadBalancer POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: {
  Name: "dev1-429acf8d129056e",
  Scheme: "internal",
  SecurityGroups: ["sg-1e84f777"],
  Subnets: [
    "subnet-0b20aa62",
    "subnet-390ae074",
    "subnet-63bf6318",
    "subnet-c327adaa",
    "subnet-e0ba669b"
  ],
  Tags: [{
      Key: "Namespace",
      Value: "echoserver"
    },{
      Key: "IngressName",
      Value: "echoserver"
    },{
      Key: "Hostname",
      Value: "aaaaaa.josh-test-dns.com"
    }]
}
I0413 17:24:34.487203       1 log.go:62] [ALB-INGRESS] [InvalidConfigurationRequest: A load balancer cannot be attached to multiple subnets in the same Availability Zone
	status code: 400, request id: 0fec11dd-206e-11e7-bfba-bf58a5ab05ef] [ERROR]: Failed to create ELBV2 (ALB). Error: %!s(MISSING)
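A check like the following, run before calling CreateLoadBalancer, would surface the problem during validation instead. The `subnet` type and `validateAZs` helper are illustrative, not actual controller code:

```go
package main

import "fmt"

// subnet pairs a subnet ID with its availability zone.
type subnet struct {
	ID string
	AZ string
}

// validateAZs returns an error if two subnets share an availability zone,
// mirroring the InvalidConfigurationRequest the ALB API would raise.
func validateAZs(subnets []subnet) error {
	seen := map[string]string{}
	for _, s := range subnets {
		if prev, ok := seen[s.AZ]; ok {
			return fmt.Errorf("subnets %s and %s are both in %s", prev, s.ID, s.AZ)
		}
		seen[s.AZ] = s.ID
	}
	return nil
}

func main() {
	err := validateAZs([]subnet{
		{ID: "subnet-0b20aa62", AZ: "us-east-1a"},
		{ID: "subnet-390ae074", AZ: "us-east-1a"},
	})
	fmt.Println(err) // error: both subnets are in us-east-1a
}
```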

Can't create DNS entry with three-part domain name

Hi, thanks for the great product. It's exactly what we need, and we're planning to use it in production.

However, after running it the first time, I got a panic similar to #61. It seems that the issue is in this line: https://github.com/coreos/alb-ingress-controller/blob/master/awsutil/route53.go#L73

getDomain() assumes that the last two components of a hostname are the domain. However, we are playing around in our sandbox AWS account, which owns *.dev.rockset.io and doesn't own *.rockset.io. This causes a failure in awsutil.Route53svc.GetZoneID because it's called with rockset.io instead of dev.rockset.io.
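Instead of assuming a two-label domain, getDomain() could walk the hostname from most to least specific and stop at the first candidate that is actually a hosted zone. A sketch of that idea — `zoneFor` is a hypothetical helper, and the zone lookup is reduced to a map for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// zoneFor walks the hostname from most to least specific, returning the
// first candidate that is actually a hosted zone, instead of assuming the
// last two labels form the domain.
func zoneFor(host string, hostedZones map[string]bool) (string, bool) {
	labels := strings.Split(host, ".")
	for i := 0; i < len(labels)-1; i++ {
		candidate := strings.Join(labels[i:], ".")
		if hostedZones[candidate] {
			return candidate, true
		}
	}
	return "", false
}

func main() {
	// The account owns only the delegated dev.rockset.io zone.
	zones := map[string]bool{"dev.rockset.io": true}
	fmt.Println(zoneFor("api.sandbox.dev.rockset.io", zones)) // dev.rockset.io true
}
```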

aws access key id and secret not being recognized

I created an IAM user, attached the sample policy to it, and set the environment variables AWS_ACCESS_KEY_ID/AWS_ACCESS_SECRET_KEY in my controller deployment, as stated in the docs.

However, the controller seems to be trying to connect to AWS using the IAM role of the worker nodes instead of the access key ID and secret:

2017-04-18T02:11:49.885187560Z F0418 02:11:49.884868       1 ingress.go:193] AccessDenied: User: arn:aws:sts::213232:assumed-role/yaw-Cluster-Stack-Workers-IAMRoleWorker-16S6JA1539L6M/i-0f0091907cc651398 is not authorized to perform: elasticloadbalancing:DescribeLoadBalancers
2017-04-18T02:11:49.885211637Z 	status code: 403, request id: 61d1674a-23dc-11e7-a94c-3502ef010ef0

It works fine when I attach the sample policy to the worker IAM role, but it would be great if we could just use the access key and secret instead of going through the worker roles and attaching the policy to each one.

running quay.io/coreos/alb-ingress-controller:0.8

Health Checks do not work if using multiple pods on routes

e.g.

  - host: test.com
    http:
      paths:
      - path: "/api/*"
        backend:
          serviceName: api
          servicePort: 80
      - path: "/*"
        backend:
          serviceName: frontend
          servicePort: 80

You can only set one health check on the Ingress, so if service api does not serve the same health-check route as frontend, you end up with an unusable system.

Publish Helm chart to Quay via Jenkins

Pushing to Quay using the Helm registry plugin is currently a manual process. Let's make it a function of CI. Should be able to use the same robot account that pushes the Docker image.

I'm happy to take it on.

Panic when hosted zone is a subdomain

I'm getting a panic when my ingress resource tries to describe a host that's under a subdomain, i.e.

spec:
  rules:
  - host: services.dev.mydomain.com

I think the controller is trying to modify the Route53 record for mydomain.com, instead of dev.mydomain.com. But mydomain.com is not part of this account's Route53 (it is hosted elsewhere), and only the subdomain dev.mydomain.com is available for modification in Route53. Either this is the problem, or I'm doing something else completely wrong :)

Here's the error I'm seeing:

2017-05-01T19:57:18.360353209Z I0501 19:57:18.356997       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"services", UID:"36b845a6-2ea8-11e7-bbe6-060f29d431aa", APIVersion:"extensions", ResourceVersion:"8473", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/services
2017-05-01T19:57:18.376081982Z I0501 19:57:18.376042       1 leaderelection.go:188] sucessfully acquired lease default/ingress-controller-leader
2017-05-01T19:57:19.357196268Z W0501 19:57:19.357089       1 queue.go:87] requeuing default/services, err deferring sync till endpoints controller has synced
2017-05-01T19:57:19.360285335Z W0501 19:57:19.360250       1 queue.go:87] requeuing kube-system/flannel-token-7mr23, err deferring sync till endpoints controller has synced
2017-05-01T19:57:29.460878489Z I0501 19:57:29.460788       1 log.go:62] [ALB-INGRESS] [default-services] [ERROR]: Unabled to locate ZoneId for %!s(*string=0xc4205fbd90).
2017-05-01T19:57:29.477453747Z I0501 19:57:29.477395       1 log.go:62] [ALB-INGRESS] [default-services] [ERROR]: Unabled to locate ZoneId for %!s(*string=0xc4205fbd90).
2017-05-01T19:57:29.477472523Z I0501 19:57:29.477415       1 log.go:48] [ALB-INGRESS] [default-services] [INFO]: Start ELBV2 (ALB) creation.
2017-05-01T19:57:30.014926314Z I0501 19:57:30.014836       1 log.go:48] [ALB-INGRESS] [default-services] [INFO]: Completed ELBV2 (ALB) creation. Name: k8s-dev-a721fba39cead27 | ARN: arn:aws:elasticloadbalancing:us-east-1:<redacted>
2017-05-01T19:57:30.015059577Z E0501 19:57:30.014974       1 runtime.go:64] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

Cache ingress resources satisfied to only operate on changes

Issue

Since validation (e.g. of subnets, security groups, etc.) is done during annotation parsing, we make calls to the AWS API. This means API calls on every ingress event, which is quite frequent.

If we cache each ingress resource that comes in and is satisfied, we can greatly reduce the calls. The cache key can be calculated based on all the resource properties.
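The key derivation could be as simple as hashing all of the resource's properties; if the key is unchanged since the last event, the AWS validation calls can be skipped. A minimal sketch, where `ingressSpec` is a tiny stand-in for the real annotated resource:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// ingressSpec is a tiny stand-in for the annotated ingress resource.
type ingressSpec struct {
	Host    string
	Subnets []string
	Ports   []int
}

// cacheKey derives a stable key from all resource properties; if the key
// is unchanged since the last event, AWS validation calls can be skipped.
func cacheKey(s ingressSpec) string {
	b, _ := json.Marshal(s)
	return fmt.Sprintf("%x", sha256.Sum256(b))
}

func main() {
	a := ingressSpec{Host: "app.example.com", Subnets: []string{"subnet-a"}, Ports: []int{80}}
	b := a
	b.Ports = []int{80, 443} // any property change yields a new key
	fmt.Println(cacheKey(a) == cacheKey(a), cacheKey(a) == cacheKey(b)) // true false
}
```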

Crash when target group exists

I0412 15:51:12.781889       1 log.go:48] [ALB-INGRESS] [prd123-prom-prometheus] [INFO]: Start ELBV2 (ALB) creation.
I0412 15:51:13.340316       1 log.go:48] [ALB-INGRESS] [prd123-prom-prometheus] [INFO]: Completed ELBV2 (ALB) creation. Name: dev-cbd8bc1517ff907 | ARN: arn:aws:elasticloadbalancing:us-east-1:343550350117:loadbalancer/app/dev-cbd8bc1517ff907/bedcd33bbcb39731
I0412 15:51:13.340335       1 log.go:48] [ALB-INGRESS] [prd123-prom-prometheus] [INFO]: Start Route53 resource record set creation.
I0412 15:51:28.516096       1 status.go:271] updating Ingress tectonic-system/tectonic-ingress status to [{10.188.165.195 }]
I0412 15:51:28.536119       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"tectonic-system", Name:"tectonic-ingress", UID:"3d031ca5-13e4-11e7-a2da-0e9384040afc", APIVersion:"extensions", ResourceVersion:"4751896", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress tectonic-system/tectonic-ingress
I0412 15:51:28.603953       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"tectonic-system", Name:"tectonic-ingress", UID:"3d031ca5-13e4-11e7-a2da-0e9384040afc", APIVersion:"extensions", ResourceVersion:"4751898", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress tectonic-system/tectonic-ingress
I0412 15:51:53.951876       1 log.go:48] [ALB-INGRESS] [prd123-prom-prometheus] [INFO]: Completed Route 53 resource record set modification. DNS: prometheus.prd123.dev.us-east-1.nonprod-tmaws.io. | Type: A | AliasTarget: {  DNSName: "internal-dev-cbd8bc1517ff907-117324894.us-east-1.elb.amazonaws.com.",  EvaluateTargetHealth: false,  HostedZoneId: "Z35SXDOTRQ7X7K"}
I0412 15:51:53.951924       1 log.go:48] [ALB-INGRESS] [prd123-prom-prometheus] [INFO]: Completed Route 53 resource record set creation. DNS: prometheus.prd123.dev.us-east-1.nonprod-tmaws.io | Type: A | Target: {  DNSName: "internal-dev-cbd8bc1517ff907-117324894.us-east-1.elb.amazonaws.com.",  EvaluateTargetHealth: false,  HostedZoneId: "Z35SXDOTRQ7X7K"}.
I0412 15:51:53.951934       1 log.go:48] [ALB-INGRESS] [prd123-prom-prometheus] [INFO]: Start TargetGroup creation.
I0412 15:51:54.157843       1 log.go:48] [ALB-INGRESS] [prd123-prom-prometheus] [INFO]: Failed TargetGroup creation. Error: DuplicateTargetGroupName: A target group with the same name 'dev-32162-HTTP-df48b9c' exists, but with different settings
	status code: 400, request id: f3466965-1f97-11e7-94ef-557fe64ed15c.
I0412 15:51:54.157871       1 log.go:48] [ALB-INGRESS] [prd123-prom-prometheus] [INFO]: Start Listener creation.
E0412 15:51:54.158045       1 runtime.go:64] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:70
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:63
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:49
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/asm_amd64.s:514
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/panic.go:489
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/panic.go:63
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/signal_unix.go:290
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/listener.go:89
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/listener.go:69
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/listeners.go:30
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/loadbalancers.go:26
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/ingress.go:344
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/controller.go:107
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:432
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:158
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:86
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:96
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:97
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:52
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/asm_amd64.s:2197
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0x15213d7]
goroutine 120 [running]:
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x126
panic(0x17802c0, 0x254eb40)
	/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/panic.go:489 +0x2cf
github.com/coreos/alb-ingress-controller/controller/alb.(*Listener).create(0xc420f1ffc0, 0xc4202f80a0, 0xc420331040, 0x16)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/listener.go:89 +0x87
github.com/coreos/alb-ingress-controller/controller/alb.(*Listener).SyncState(0xc420f1ffc0, 0xc4202f80a0, 0x0)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/listener.go:69 +0x17f
github.com/coreos/alb-ingress-controller/controller/alb.Listeners.SyncState(0xc420149630, 0x1, 0x1, 0xc4202f80a0, 0xc4202f80d0, 0x1, 0x1, 0x0)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/listeners.go:30 +0x81
github.com/coreos/alb-ingress-controller/controller/alb.LoadBalancers.SyncState(0xc420149658, 0x1, 0x1, 0xc420079380, 0x3ff0000000000000, 0x2503040)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/loadbalancers.go:26 +0x14b
github.com/coreos/alb-ingress-controller/controller.(*ALBIngress).SyncState(0xc42022a1e0)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/ingress.go:344 +0x81
github.com/coreos/alb-ingress-controller/controller.(*ALBController).Reload(0xc420138ab0, 0x2583680, 0x0, 0x0, 0xc422141040, 0x25, 0x25, 0x2583680, 0x0, 0x0)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/controller.go:107 +0x76
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.(*GenericController).sync(0xc42041db00, 0x169d0e0, 0xc420af41e0, 0xc420af41e0, 0xc420af4100)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:432 +0x626
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.(*GenericController).(github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.sync)-fm(0x169d0e0, 0xc420af41e0, 0xa, 0xc420911e00)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:158 +0x3e
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).worker(0xc4209aa030)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:86 +0xef
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).(github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.worker)-fm()
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49 +0x2a
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.JitterUntil.func1(0xc420923fa8)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:96 +0x5e
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc420f13fa8, 0x12a05f200, 0x0, 0x1, 0xc420951ce0)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:97 +0xad
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.Until(0xc420923fa8, 0x12a05f200, 0xc420951ce0)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:52 +0x4d
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).Run(0xc4209aa030, 0x12a05f200, 0xc420951ce0)
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49 +0x5e
created by github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.GenericController.Start
	/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:1081 +0x1f1
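The trace suggests listener.go:89 dereferences a target group that was never created after the DuplicateTargetGroupName failure. A plausible guard, sketched with hypothetical types (the controller's real Listener/TargetGroup structs differ):

```go
package main

import (
	"errors"
	"fmt"
)

// targetGroup stands in for the controller's TargetGroup; ARN is nil when
// creation failed (e.g. DuplicateTargetGroupName).
type targetGroup struct {
	ARN *string
}

// createListener refuses to proceed without a valid target group ARN,
// surfacing an error instead of dereferencing a nil pointer.
func createListener(tg *targetGroup) error {
	if tg == nil || tg.ARN == nil {
		return errors.New("cannot create listener: target group was not created")
	}
	fmt.Println("creating listener forwarding to", *tg.ARN)
	return nil
}

func main() {
	fmt.Println(createListener(&targetGroup{})) // surfaces an error, no panic
}
```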

Ingress creation is single threaded

At one point we ran ALBIngress.SyncState() in a goroutine; let's look at enabling this again to improve throughput when creating/modifying many ingresses.

I think as long as we lock on ALBController.Onupdate() & ALBController.Reload() we should be safe.
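The locking idea above might look roughly like this: serialize Reload() with a mutex while fanning each ingress's sync out to a goroutine. All names here are illustrative, not the controller's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

// controller serializes Reload with a mutex while letting each ingress
// sync in its own goroutine.
type controller struct {
	mu     sync.Mutex
	synced int
}

func (c *controller) Reload(ingresses []string) {
	c.mu.Lock() // only one Reload at a time
	defer c.mu.Unlock()

	var wg sync.WaitGroup
	var inner sync.Mutex
	for range ingresses {
		wg.Add(1)
		go func() {
			defer wg.Done()
			inner.Lock()
			c.synced++ // stand-in for ALBIngress.SyncState()
			inner.Unlock()
		}()
	}
	wg.Wait() // Reload returns only after every ingress has synced
}

func main() {
	c := &controller{}
	c.Reload([]string{"default/a", "default/b", "default/c"})
	fmt.Println(c.synced) // 3
}
```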

CLUSTER_NAME character length

This is limited to 11 characters in main.go; is that limit still needed?

The original function this appears to be related to, from commit be873671797b1bc2183bb465f9fcc1d38c9679090, no longer exists.

pkg/cmd/controller/elbv2.go

// Returns the ALB's name; maintains consistency amongst areas of code that must resolve this.
func (a *albIngress) Name() string {
	return fmt.Sprintf("%s-%s", a.clusterName, xid.New().String())
}

Enhance caching on controller for specific cases

We currently cache several AWS API calls. We need to enhance the cache mechanism with the following considerations.

  1. Should cache TTLs be configurable?
  2. We need to adjust the cache for certain edge cases (e.g. a single auth failure on a call shouldn't be cached for 30+ minutes)

Relates to #76

Make R53 integration optional

As this gets rolled out to the larger masses, the front-to-back integration may be a no-go when individuals find that Route 53 is coupled to the ALB creation.

It would be helpful to have a flag (ENV var) that disables state sync against Route 53, making the alb-ingress-controller concerned only with ELBV2s (ALBs), TargetGroups, Listeners, and Rules.

Helpful for those who:

  1. Manage their DNS outside of Route53.
  2. Want to plug in their own Route53-handling controller.
  3. Prefer to manually alter Route53 records.

/cc @bigkraig

TCP Protocol Support

Is it possible to add TCP as a supported protocol for alb.ingress.kubernetes.io/listen-ports?

This is the error I'm seeing:
I0504 19:11:20.814657 1 log.go:62] [ALB-INGRESS] [controller] [ERROR]: Error parsing annotations for ingress tectonic-system-spotinst-api-access. Error: invalid protocol provided. Must be HTTP or HTTPS and in order to use HTTPS you must have specified a certtificate ARN

Duplicated rules

I have deleted and recreated my Ingress object multiple times, and I just noticed that in AWS, under the ALB rules, there is a lot of duplication.

I have even gone into AWS and manually deleted all the ALBs and target groups, but each time the controller recreates the ALB, the rules are duplicated.

Error: InvalidParameter: 1 validation error(s) found

Is setting a rule like /api/v1/oauth2/sso* not permitted?

I0503 19:12:19.611756 1 log.go:62] [ALB-INGRESS] [auth-api-alb-1] [ERROR]: Failed Rule creation. Rule: { Actions: [{ TargetGroupArn: "arn:aws:elasticloadbalancing:us-east-1", Type: "forward" }], Conditions: [{ Field: "path-pattern", Values: ["/api/v1/oauth2/sso*"] }], IsDefault: false} | Error: InvalidParameter: 1 validation error(s) found.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: testing-alb
  namespace: auth
  annotations:
    kubernetes.io/ingress.class: "alb"
spec:
  rules:
  - host: apps.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app-bot
          servicePort: 80
      - path: /api/v1/oauth2/sso*
        backend:
          serviceName: "oauth2-client"
          servicePort: 80

Stop using master nodes

Stop adding master nodes to the target group; master nodes should not handle this traffic.
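A plausible approach is to filter out nodes carrying the master role label before registering targets. The label name and helper below are assumptions for illustration, not controller code:

```go
package main

import "fmt"

// node is a minimal stand-in for a Kubernetes node with its labels.
type node struct {
	Name   string
	Labels map[string]string
}

// workerNodes drops nodes carrying the (assumed) master role label so
// they are never registered in a target group.
func workerNodes(nodes []node) []string {
	var out []string
	for _, n := range nodes {
		if _, isMaster := n.Labels["node-role.kubernetes.io/master"]; isMaster {
			continue
		}
		out = append(out, n.Name)
	}
	return out
}

func main() {
	nodes := []node{
		{Name: "master-1", Labels: map[string]string{"node-role.kubernetes.io/master": ""}},
		{Name: "worker-1", Labels: map[string]string{}},
	}
	fmt.Println(workerNodes(nodes)) // [worker-1]
}
```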

Modifying Target Group params directly causes go panic/pod crash

Not sure what the ideal behavior should be here. Perhaps, when target group values are modified directly through the AWS API, the controller should observe the new values and update its own configuration. In this case, I changed HealthyThresholdCount from 5 to 2.

I0419 18:56:15.962284 1 ec2.go:35] Request: ec2/&{DescribeSubnets POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: { SubnetIds: ["subnet-40c35329","subnet-6066be1b","subnet-dd23d590"] } 
I0419 18:56:16.099481 1 ec2.go:35] Request: ec2/&{DescribeSubnets POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: { SubnetIds: ["subnet-40c35329","subnet-6066be1b","subnet-dd23d590"] } 
I0419 18:56:16.131379 1 ec2.go:35] Request: ec2/&{DescribeSecurityGroups POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: { GroupIds: ["sg-28a2d841"] } 
I0419 18:56:16.189473 1 log.go:48] [ALB-INGRESS] [2048-game-2048-alb-ingress] [INFO]: Start TargetGroup creation. 
I0419 18:56:16.189614 1 elbv2.go:43] Request: elasticloadbalancing/&{CreateTargetGroup POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: { HealthCheckIntervalSeconds: 30, HealthCheckPath: "/", HealthCheckPort: "traffic-port", HealthCheckProtocol: "HTTP", HealthCheckTimeoutSeconds: 5, HealthyThresholdCount: 5, Matcher: { HttpCode: "200" }, Name: "tectonic-30159-HTTP-c493193", Port: 30159, Protocol: "HTTP", UnhealthyThresholdCount: 2, VpcId: "vpc-6121a208" } I0419 18:56:16.355021 1 log.go:48] [ALB-INGRESS] [2048-game-2048-alb-ingress] [INFO]: Failed TargetGroup creation. 
Error: DuplicateTargetGroupName: A target group with the same name 'tectonic-30159-HTTP-c493193' exists, but with different settings status code: 400, request id: ddbf0cad-2531-11e7-a6ab-ff20619c78e9. 
I0419 18:56:16.355041 1 log.go:48] [ALB-INGRESS] [2048-game-2048-alb-ingress] [INFO]: Start Listener creation. 
E0419 18:56:16.355161 1 runtime.go:64] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:70 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:63 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:49 /usr/lib/go/src/runtime/asm_amd64.s:479 /usr/lib/go/src/runtime/panic.go:458 /usr/lib/go/src/runtime/panic.go:62 /usr/lib/go/src/runtime/sigpanic_unix.go:24 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/listener.go:89 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/listener.go:69 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/listeners.go:30 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/loadbalancers.go:26 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/ingress.go:344 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/controller.go:107 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:432 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:158 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:86 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:96 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:97 
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:52 /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49 /usr/lib/go/src/runtime/asm_amd64.s:2086 panic: runtime error: invalid memory address or nil pointer dereference [recovered] panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0x8474d6] goroutine 123 [running]: panic(0x16d4600, 0xc420014030) /usr/lib/go/src/runtime/panic.go:500 +0x1a1 github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x126 panic(0x16d4600, 0xc420014030) /usr/lib/go/src/runtime/panic.go:458 +0x243 github.com/coreos/alb-ingress-controller/controller/alb.(*Listener).create(0xc42068e840, 0xc4200bab40, 0xc420730c40, 0x1a) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/listener.go:89 +0x96 github.com/coreos/alb-ingress-controller/controller/alb.(*Listener).SyncState(0xc42068e840, 0xc4200bab40, 0x0) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/listener.go:69 +0x9c github.com/coreos/alb-ingress-controller/controller/alb.Listeners.SyncState(0xc420120b10, 0x1, 0x1, 0xc4200bab40, 0xc4200bab70, 0x1, 0x1, 0xc420657a50) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/listeners.go:30 +0x81 github.com/coreos/alb-ingress-controller/controller/alb.LoadBalancers.SyncState(0xc420022738, 0x1, 0x1, 0x2408700, 0xc4201c5aa0, 0x2408700) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/loadbalancers.go:26 +0x149 github.com/coreos/alb-ingress-controller/controller.(*ALBIngress).SyncState(0xc42061bc70) 
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/ingress.go:344 +0x81 github.com/coreos/alb-ingress-controller/controller.(*ALBController).Reload(0xc420104360, 0xc4201d2ce8, 0x0, 0x8, 0xc420626340, 0x2, 0x2, 0x2477f88, 0x0, 0x0) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/controller.go:107 +0x76 github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.(*GenericController).sync(0xc42077c360, 0x15f3960, 0xc420627820, 0xc420627820, 0xc420627700) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:432 +0x68f github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.(*GenericController).(github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.sync)-fm(0x15f3960, 0xc420627820, 0xa, 0xc4206d5dd0) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:158 +0x3e github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).worker(0xc4207509c0) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:86 +0x101 github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).(github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.worker)-fm() /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49 +0x2a github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.JitterUntil.func1(0xc420657f58) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:96 +0x5e github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc420657f58, 0x12a05f200, 0x0, 0x1, 0xc4206f9380) 
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:97 +0xad github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.Until(0xc420657f58, 0x12a05f200, 0xc4206f9380) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:52 +0x4d github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).Run(0xc4207509c0, 0x12a05f200, 0xc4206f9380) /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49 +0x55 created by github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.GenericController.Start /home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:1081 +0x1f1 

Create a new ALB only if there isn't one already associated with the ingress host

Is it possible to have the controller create a new ALB only if there isn't one already associated with the ingress host?

Reason:
I have one hostname, but the services I want to create rules for live in different namespaces, so I'm using multiple Ingress objects. Each Ingress object, however, creates a new ALB and updates the Route53 DNS record to point at the most recent one.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-namespaceAuth
  namespace: auth
spec:
  rules:
  - host: apps.example.com
    http:
      paths:
      - path: /service1InAuthNS/*
        backend:
          serviceName: service1
          servicePort: 443
      - path: /service2InAuthNS/*
        backend:
          serviceName: service2
          servicePort: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-namespaceMonitoring
  namespace: monitoring
spec:
  rules:
  - host: apps.example.com
    http:
      paths:
      - path: /service3InMonitoringNS/*
        backend:
          serviceName: service3
          servicePort: 443
      - path: /
        backend:
          serviceName: service4
          servicePort: 443
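The requested behavior amounts to a lookup-before-create step. A minimal sketch of the idea, using hypothetical helper names (the real controller would presumably key off the Hostname tag it already writes to each ALB):

```go
package main

import "fmt"

// albsByHost simulates ALBs discovered by their "Hostname" tag.
var albsByHost = map[string]string{}

// createALB stands in for the actual ELBV2 CreateLoadBalancer call.
func createALB(host string) string {
	return "arn:aws:elasticloadbalancing:region:account:loadbalancer/app/" + host
}

// ensureALB returns the ALB already associated with the ingress host,
// creating one only when none exists yet.
func ensureALB(host string) string {
	if arn, ok := albsByHost[host]; ok {
		return arn
	}
	arn := createALB(host)
	albsByHost[host] = arn
	return arn
}

func main() {
	a := ensureALB("apps.example.com") // first Ingress object: creates
	b := ensureALB("apps.example.com") // second Ingress object: reuses
	fmt.Println(a == b)
}
```

With this guard in place, the second Ingress for apps.example.com would attach its rules to the existing ALB instead of repointing the Route53 record at a brand-new one.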

Add support for multiple listeners of different protocols (HTTP/HTTPS)

The ALB ingress controller (as of #5) supports multiple listeners. The current way to specify multiple listeners, for ports 8080 and 9000, is as follows.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver
  namespace: echoserver
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/port: "8080,9000"
    alb.ingress.kubernetes.io/subnets: subnet-c327adaa,subnet-e0ba669b
    alb.ingress.kubernetes.io/security-groups: sg-c48ffcad
    alb.ingress.kubernetes.io/tags: Environment=dev5,ProductCode=PRD999,InventoryCode=echo-app

However, the listeners above are assumed to share a protocol; that is, both listeners will either be HTTPS (sharing the same certificate) or HTTP.

In order to implement this feature, we need a more complex annotation that can map a listener (port) to its protocol and possibly its certificate (ARN).

Two approaches have been discussed so far.

1: Use JSON in ports annotation

This approach allows us to specify protocol per port (Listener). It does assume that each HTTPS port defined would share the same AWS certificate.

alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 8080, "HTTPS": 443}]'
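If the JSON form were adopted, decoding it would be straightforward. A sketch, assuming the annotation shape proposed above (protocol names as keys, ports as values):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseListenPorts decodes the proposed listen-ports annotation into a
// protocol -> port map. Assumes the JSON shape shown above.
func parseListenPorts(annotation string) (map[string]int64, error) {
	var raw []map[string]int64
	if err := json.Unmarshal([]byte(annotation), &raw); err != nil {
		return nil, err
	}
	out := map[string]int64{}
	for _, entry := range raw {
		for proto, port := range entry {
			out[proto] = port
		}
	}
	return out, nil
}

func main() {
	ports, err := parseListenPorts(`[{"HTTP": 8080, "HTTPS": 443}]`)
	if err != nil {
		panic(err)
	}
	fmt.Println(ports["HTTP"], ports["HTTPS"]) // 8080 443
}
```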

2: Add optional field in certificate ARN annotation

This approach allows an optional :<port> at the end of each certificate ARN. In the example below, the certificate is mapped to the listener on port 443, and that listener is assumed to use HTTPS. Unlike the first approach, this one supports a unique certificate per listener.

alb.ingress.kubernetes.io/certificate-arn: "arn:<SOMESTUFF>:3423048230489284038290:443"
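Parsing the optional suffix has to account for the colons already present in an ARN. A sketch of a hypothetical helper that treats a numeric final segment as the port and defaults to 443 otherwise:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitCertARN splits an optional trailing ":<port>" off a certificate
// ARN. A bare ARN's final colon-separated segment (e.g. "certificate/...")
// is not numeric, so it is left intact and the port defaults to 443.
func splitCertARN(v string) (arn string, port int64) {
	if i := strings.LastIndex(v, ":"); i >= 0 {
		if p, err := strconv.ParseInt(v[i+1:], 10, 64); err == nil {
			return v[:i], p
		}
	}
	return v, 443
}

func main() {
	arn, port := splitCertARN("arn:aws:acm:us-west-2:111122223333:certificate/abc:8443")
	fmt.Println(arn, port)
	arn, port = splitCertARN("arn:aws:acm:us-west-2:111122223333:certificate/abc")
	fmt.Println(arn, port)
}
```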

ALB/R53 Cleanup For No Apparent Reason

The ALB Ingress Controller pod starts up fine and begins building the Route53/ALB resources, but deletes them after a short period. Sanitized logs for more detail:

I0421 18:15:44.199153       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Start TargetGroup creation.
I0421 18:15:44.915457       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Succeeded TargetGroup creation. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:targetgroup/dev-30400-HTTP-23e6718/fdd77f38193565e9 | Name: dev-30400-HTTP-23e6718.
I0421 18:15:44.915490       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Start Listener creation.
I0421 18:15:44.915530       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Located default rule. Rule: {  Actions: [{      Type: "forward"    }],  IsDefault: true,  Priority: "default"}
I0421 18:15:44.948026       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Completed Listener creation. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:listener/app/dev-b3b26c44786c5f2/fa43a2c9ac16d96c/84c7acc3a64d55aa | Port: %!s(int64=80) | Proto: HTTP.
I0421 18:15:44.948189       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start ELBV2 (ALB) creation.
I0421 18:15:45.395655       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed ELBV2 (ALB) creation. Name: dev-acdd6a6abb90bf5 | ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:loadbalancer/app/dev-acdd6a6abb90bf5/d9c88d0289add6a7
I0421 18:15:45.395687       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start Route53 resource record set creation.
I0421 18:15:55.921387       1 status.go:271] updating Ingress kube-system/prometheus status to [{10.148.220.175 }]
I0421 18:15:55.922531       1 status.go:271] updating Ingress kube-system/alertmanager status to [{10.148.220.175 }]
I0421 18:15:55.922920       1 status.go:271] updating Ingress selenium-hub/selenium-hub status to [{10.148.220.175 }]
I0421 18:15:55.930035       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"alertmanager", UID:"abd207ae-26a3-11e7-b39b-06ee75049f93", APIVersion:"extensions", ResourceVersion:"54359", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress kube-system/alertmanager
I0421 18:15:55.930228       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"prometheus", UID:"abd4f9a1-26a3-11e7-b39b-06ee75049f93", APIVersion:"extensions", ResourceVersion:"54358", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress kube-system/prometheus
I0421 18:15:55.930298       1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"selenium-hub", Name:"selenium-hub", UID:"71b22395-26bd-11e7-9ba4-02c13625dae1", APIVersion:"extensions", ResourceVersion:"54360", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress selenium-hub/selenium-hub
I0421 18:16:27.309991       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed Route 53 resource record set modification. DNS: prometheus.cluster.dev.us-west-2.example.com. | Type: A | AliasTarget: {  DNSName: "internal-dev-acdd6a6abb90bf5-529546578.us-west-2.elb.amazonaws.com.",  EvaluateTargetHealth: false,  HostedZoneId: "XXXXXXXXXX"}
I0421 18:16:27.310042       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed Route 53 resource record set creation. DNS: prometheus.cluster.dev.us-west-2.example.com | Type: A | Target: {  DNSName: "internal-dev-acdd6a6abb90bf5-529546578.us-west-2.elb.amazonaws.com.",  EvaluateTargetHealth: false,  HostedZoneId: "XXXXXXXXXX"}.
I0421 18:16:27.310060       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start TargetGroup creation.
I0421 18:16:27.931290       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Succeeded TargetGroup creation. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:targetgroup/dev-31253-HTTP-9fdc2df/d8ee59c5e4d4e7b4 | Name: dev-31253-HTTP-9fdc2df.
I0421 18:16:27.931319       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start Listener creation.
I0421 18:16:27.953745       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed Listener creation. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:listener/app/dev-acdd6a6abb90bf5/d9c88d0289add6a7/293036e33fbc6f97 | Port: %!s(int64=80) | Proto: HTTP.
I0421 18:16:27.953774       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start Rule creation.
I0421 18:16:27.973750       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed Rule creation. Rule: {  Actions: [{      TargetGroupArn: "arn:aws:elasticloadbalancing:us-west-2:<account_number>:targetgroup/dev-31253-HTTP-9fdc2df/d8ee59c5e4d4e7b4",      Type: "forward"    }],  Conditions: [{      Field: "path-pattern",      Values: ["/graph"]    }],  IsDefault: false,  Priority: "1",  RuleArn: "arn:aws:elasticloadbalancing:us-west-2:<account_number>:listener-rule/app/dev-acdd6a6abb90bf5/d9c88d0289add6a7/293036e33fbc6f97/53ff75b4f8580aaf"}
I0421 18:16:27.973783       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Start ELBV2 (ALB) creation.
I0421 18:16:28.888038       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Completed ELBV2 (ALB) creation. Name: dev-2848e345aecda24 | ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:loadbalancer/app/dev-2848e345aecda24/76c87f9d7f640fd0
I0421 18:16:28.888070       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Start Route53 resource record set creation.
I0421 18:17:10.779160       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Completed Route 53 resource record set modification. DNS: selenium.cluster.dev.us-west-2.example.com. | Type: A | AliasTarget: {  DNSName: "internal-dev-2848e345aecda24-1142498863.us-west-2.elb.amazonaws.com.",  EvaluateTargetHealth: false,  HostedZoneId: "XXXXXXXXXX"}
I0421 18:17:10.779209       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Completed Route 53 resource record set creation. DNS: selenium.cluster.dev.us-west-2.example.com | Type: A | Target: {  DNSName: "internal-dev-2848e345aecda24-1142498863.us-west-2.elb.amazonaws.com.",  EvaluateTargetHealth: false,  HostedZoneId: "XXXXXXXXXX"}.
I0421 18:17:10.779230       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Start TargetGroup creation.
I0421 18:17:11.444225       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Succeeded TargetGroup creation. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:targetgroup/dev-32156-HTTP-e41b33e/a13d3401482c4bc4 | Name: dev-32156-HTTP-e41b33e.
I0421 18:17:11.444260       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Start Listener creation.
I0421 18:17:11.444306       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Located default rule. Rule: {  Actions: [{      Type: "forward"    }],  IsDefault: true,  Priority: "default"}
I0421 18:17:11.465506       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Completed Listener creation. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:listener/app/dev-2848e345aecda24/76c87f9d7f640fd0/4abcdf1835652d0e | Port: %!s(int64=80) | Proto: HTTP.
I0421 18:17:11.465558       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:17:11.466076       1 log.go:62] [ALB-INGRESS] [controller] [ERROR]: Error parsing annotations for ingress tectonic-system-tectonic-ingress. Error: Necessary annotations missing. Must include alb.ingress.kubernetes.io/subnets
I0421 18:17:22.636607       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Start ELBV2 (ALB) modification.
I0421 18:17:22.636678       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Start ELBV2 tag modification.
I0421 18:17:22.680900       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Completed ELBV2 tag modification. Tags are [{    Key: "Hostname",    Value: "alertmanager.cluster.dev.us-west-2.example.com"  },{    Key: "IngressName",    Value: "alertmanager"  },{    Key: "Namespace",    Value: "kube-system"  }].
I0421 18:17:22.681039       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start ELBV2 (ALB) modification.
I0421 18:17:22.681139       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start ELBV2 tag modification.
I0421 18:17:22.699670       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed ELBV2 tag modification. Tags are [{    Key: "Hostname",    Value: "prometheus.cluster.dev.us-west-2.example.com"  },{    Key: "IngressName",    Value: "prometheus"  },{    Key: "Namespace",    Value: "kube-system"  }].
I0421 18:17:22.699830       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Start ELBV2 (ALB) modification.
I0421 18:17:22.699933       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Start ELBV2 tag modification.
I0421 18:17:22.719690       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Completed ELBV2 tag modification. Tags are [{    Key: "Hostname",    Value: "selenium.cluster.dev.us-west-2.example.com"  },{    Key: "IngressName",    Value: "selenium-hub"  },{    Key: "Namespace",    Value: "selenium-hub"  }].
I0421 18:17:22.719806       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:17:22.998759       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:17:26.096678       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:17:42.391956       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:17:46.093134       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:17:56.152808       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:18:07.212293       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:18:16.235665       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:18:26.342178       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:18:36.167791       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:18:46.101064       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:18:56.105837       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:19:08.083867       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:19:36.724330       1 log.go:62] [ALB-INGRESS] [controller] [ERROR]: Error parsing annotations for ingress kube-system-alertmanager. Error: RequestLimitExceeded: Request limit exceeded.
	status code: 503, request id: 27c95ff3-d41a-4ad1-8bb2-f5928c4e55d3
I0421 18:20:06.718581       1 log.go:62] [ALB-INGRESS] [controller] [ERROR]: Error parsing annotations for ingress kube-system-prometheus. Error: RequestLimitExceeded: Request limit exceeded.
	status code: 503, request id: 67ffcbf1-008f-4425-8597-854aadd52401
I0421 18:20:07.408810       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start ELBV2 (ALB) deletion.
I0421 18:20:07.516680       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed ELBV2 (ALB) deletion. Name: dev-acdd6a6abb90bf5 | ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:loadbalancer/app/dev-acdd6a6abb90bf5/d9c88d0289add6a7
I0421 18:20:07.516736       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start Route53 resource record set deletion.
I0421 18:20:07.927548       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed deletion of Route 53 resource record set. DNS: prometheus.cluster.dev.us-west-2.example.com | Type: A | Target: {  DNSName: "internal-dev-acdd6a6abb90bf5-529546578.us-west-2.elb.amazonaws.com.",  EvaluateTargetHealth: false,  HostedZoneId: "XXXXXXXXXX"}.
I0421 18:20:07.927580       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start TargetGroup deletion.
I0421 18:20:18.441585       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed TargetGroup deletion. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:targetgroup/dev-31253-HTTP-9fdc2df/d8ee59c5e4d4e7b4.
I0421 18:20:18.441616       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start Listener deletion.
I0421 18:20:18.454389       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed Listener deletion. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:listener/app/dev-acdd6a6abb90bf5/d9c88d0289add6a7/293036e33fbc6f97
I0421 18:20:18.454418       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Start Rule deletion.
I0421 18:20:18.528660       1 log.go:48] [ALB-INGRESS] [kube-system-prometheus] [INFO]: Completed Rule deletion. Rule: {  Actions: [{      TargetGroupArn: "arn:aws:elasticloadbalancing:us-west-2:<account_number>:targetgroup/dev-31253-HTTP-9fdc2df/d8ee59c5e4d4e7b4",      Type: "forward"    }],  Conditions: [{      Field: "path-pattern",      Values: ["/graph"]    }],  IsDefault: false,  Priority: "1",  RuleArn: "arn:aws:elasticloadbalancing:us-west-2:<account_number>:listener-rule/app/dev-acdd6a6abb90bf5/d9c88d0289add6a7/293036e33fbc6f97/53ff75b4f8580aaf"}
I0421 18:20:18.528689       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Start ELBV2 (ALB) deletion.
I0421 18:20:18.615675       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Completed ELBV2 (ALB) deletion. Name: dev-b3b26c44786c5f2 | ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:loadbalancer/app/dev-b3b26c44786c5f2/fa43a2c9ac16d96c
I0421 18:20:18.615698       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Start Route53 resource record set deletion.
I0421 18:20:19.027271       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Completed deletion of Route 53 resource record set. DNS: alertmanager.cluster.dev.us-west-2.example.com | Type: A | Target: {  DNSName: "internal-dev-b3b26c44786c5f2-999659070.us-west-2.elb.amazonaws.com.",  EvaluateTargetHealth: false,  HostedZoneId: "XXXXXXXXXX"}.
I0421 18:20:19.027498       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Start TargetGroup deletion.
I0421 18:20:29.602857       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Completed TargetGroup deletion. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:targetgroup/dev-30400-HTTP-23e6718/fdd77f38193565e9.
I0421 18:20:29.603039       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Start Listener deletion.
I0421 18:20:29.616709       1 log.go:48] [ALB-INGRESS] [kube-system-alertmanager] [INFO]: Completed Listener deletion. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:listener/app/dev-b3b26c44786c5f2/fa43a2c9ac16d96c/84c7acc3a64d55aa
I0421 18:20:29.616743       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:20:54.506410       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:20:55.137734       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:20:55.901936       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:21:26.812919       1 log.go:62] [ALB-INGRESS] [controller] [ERROR]: Error parsing annotations for ingress selenium-hub-selenium-hub. Error: RequestLimitExceeded: Request limit exceeded.
	status code: 503, request id: 60e02ce2-9100-442b-98c2-0532484d8ef9
I0421 18:21:26.812958       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Start ELBV2 (ALB) deletion.
I0421 18:21:26.931920       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Completed ELBV2 (ALB) deletion. Name: dev-2848e345aecda24 | ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:loadbalancer/app/dev-2848e345aecda24/76c87f9d7f640fd0
I0421 18:21:26.931952       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Start Route53 resource record set deletion.
I0421 18:21:27.303577       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Completed deletion of Route 53 resource record set. DNS: selenium.cluster.dev.us-west-2.example.com | Type: A | Target: {  DNSName: "internal-dev-2848e345aecda24-1142498863.us-west-2.elb.amazonaws.com.",  EvaluateTargetHealth: false,  HostedZoneId: "XXXXXXXXXX"}.
I0421 18:21:27.303611       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Start TargetGroup deletion.
I0421 18:21:37.822226       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Completed TargetGroup deletion. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:targetgroup/dev-32156-HTTP-e41b33e/a13d3401482c4bc4.
I0421 18:21:37.822258       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Start Listener deletion.
I0421 18:21:37.847217       1 log.go:48] [ALB-INGRESS] [selenium-hub-selenium-hub] [INFO]: Completed Listener deletion. ARN: arn:aws:elasticloadbalancing:us-west-2:<account_number>:listener/app/dev-2848e345aecda24/76c87f9d7f640fd0/4abcdf1835652d0e
I0421 18:21:37.847243       1 controller.go:439] ingress backend successfully reloaded...
I0421 18:21:37.848004       1 controller.go:439] ingress backend successfully reloaded...

Controller is unable to delete default rules

After a fresh start, the controller tries to replace the default rule on all of the ingresses it has imported:

I0411 22:59:19.462724       1 log.go:48] [ALB-INGRESS] [prd353-prom-east-prometheus] [INFO]: Start Rule deletion.
I0411 22:59:19.589145       1 log.go:48] [ALB-INGRESS] [prd353-prom-east-prometheus] [INFO]: Failed Rule deletion. Error: OperationNotPermitted: Default rule 'arn:aws:elasticloadbalancing:us-east-1:889499532989:listener-rule/app/prod-aac43af22cf061f/0ef164d437b4685f/049fb02ae3f4291e/83194f1f670d76f2' cannot be deleted
	status code: 400, request id: 7ec25bf6-1f0a-11e7-ae46-1d2c419de251
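A defensive fix is to skip default rules during reconciliation, since ELBV2 rejects DeleteRule on them (they are removed only with their listener). A minimal sketch with hypothetical stand-in types:

```go
package main

import "fmt"

// Rule is a stand-in for the controller's listener rule representation.
type Rule struct {
	IsDefault bool
	Priority  string
}

// deletableRules filters out default rules, which DeleteRule rejects
// with OperationNotPermitted; they disappear only with their listener.
func deletableRules(rules []Rule) []Rule {
	var out []Rule
	for _, r := range rules {
		if r.IsDefault {
			continue
		}
		out = append(out, r)
	}
	return out
}

func main() {
	rules := []Rule{
		{IsDefault: true, Priority: "default"},
		{IsDefault: false, Priority: "1"},
	}
	for _, r := range deletableRules(rules) {
		fmt.Println("deleting rule with priority", r.Priority)
	}
}
```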

Create an end-to-end example for echoservice

Story

Those looking to try out the ALB Ingress Controller would likely benefit from an end-to-end example, and the echo service seems like a good candidate. Let's write one up meeting the following criteria.

  • Generate ALB for echoserver
  • Show how logging works and how to understand what's happening
  • Include multiple listeners
  • Include HTTPS on 1 listener
  • Show how modifications are handled

Crash when SG does not exist.

Easy fix; logging it here so we don't forget. The controller should fail early rather than letting the sync continue downstream.

Logs:

I0409 01:55:08.131002       1 log.go:42] [ALB-INGRESS] [echoserver-echoserver] [INFO]: Start ELBV2 (ALB) creation.
I0409 01:55:08.131283       1 elbv2.go:33] Request: elasticloadbalancing/&{CreateLoadBalancer POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: {
  Name: "dev1-429acf8d129056e",
  Scheme: "internal",
  SecurityGroups: ["sg-ccaacaa5"],
  Subnets: ["subnet-0b20aa62","subnet-63bf6318"],
  Tags: [{
      Key: "Namespace",
      Value: "echoserver"
    },{
      Key: "IngressName",
      Value: "echoserver"
    },{
      Key: "Hostname",
      Value: "aaaaaa.josh-test-dns.com"
    }]
}
I0409 01:55:08.404006       1 log.go:54] [ALB-INGRESS] [InvalidConfigurationRequest: Security group 'sg-ccaacaa5' does not belong to VPC 'vpc-55af2b3c'
	status code: 400, request id: 8f00c03f-1cc7-11e7-bfba-bf58a5ab05ef] [ERROR]: Failed to create ELBV2 (ALB). Error: %!s(MISSING)
I0409 01:55:08.404043       1 log.go:42] [ALB-INGRESS] [echoserver-echoserver] [INFO]: Start Route53 resource record set creation.
E0409 01:55:08.404144       1 runtime.go:64] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:70
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:63
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:49
/usr/lib/go/src/runtime/asm_amd64.s:479
/usr/lib/go/src/runtime/panic.go:458
/usr/lib/go/src/runtime/panic.go:62
/usr/lib/go/src/runtime/sigpanic_unix.go:24
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/resourcerecordset.go:78
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/loadbalancers.go:24
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/ingress.go:342
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/controller.go:86
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:432
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:158
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:86
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:96
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:97
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:52
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49
/usr/lib/go/src/runtime/asm_amd64.s:2086
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x8482f2]
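The panic above comes from continuing on to Route53 record creation after CreateLoadBalancer has already failed, leaving a nil load balancer to dereference. Failing early avoids it; a sketch with hypothetical names simulating the flow:

```go
package main

import (
	"errors"
	"fmt"
)

// LoadBalancer is a stand-in for the controller's ALB representation.
type LoadBalancer struct{ DNSName string }

// createLoadBalancer simulates the ELBV2 call failing with
// InvalidConfigurationRequest when the security group is wrong.
func createLoadBalancer(sg string) (*LoadBalancer, error) {
	if sg != "sg-valid" {
		return nil, errors.New("InvalidConfigurationRequest: security group does not belong to VPC")
	}
	return &LoadBalancer{DNSName: "internal-example.us-west-2.elb.amazonaws.com"}, nil
}

// syncIngress fails early on creation errors instead of proceeding to
// Route53 record creation with a nil load balancer.
func syncIngress(sg string) error {
	lb, err := createLoadBalancer(sg)
	if err != nil {
		return fmt.Errorf("skipping Route53 sync: %w", err)
	}
	fmt.Println("creating record set for", lb.DNSName)
	return nil
}

func main() {
	if err := syncIngress("sg-ccaacaa5"); err != nil {
		fmt.Println("error:", err)
	}
}
```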

Panic upon failing to describe hostedzones

Despite the IAM role on my controllers having the necessary Route53 permissions, the controller appears to be failing on ListHostedZonesByName (I need to dig into this a bit more).

I0417 21:16:36.038770 1 log.go:48] [ALB-INGRESS] [controller] [INFO]: Log level read as "", defaulting to INFO. To change, set LOG_LEVEL environment variable to WARN, ERROR, or DEBUG.
I0417 21:16:36.039028 1 log.go:48] [ALB-INGRESS] [controller] [INFO]: Build up list of existing ingresses
I0417 21:16:36.041806 1 elbv2.go:43] Request: elasticloadbalancing/&{DescribeLoadBalancers POST / %!s(*request.Paginator=&{[Marker] [NextMarker] }) %!s(func(*request.Request) error=)}, Payload: {
PageSize: 100
}
I0417 21:16:36.156689 1 elbv2.go:43] Request: elasticloadbalancing/&{DescribeTags POST / %!s(*request.Paginator=) %!s(func(*request.Request) error=)}, Payload: {
ResourceArns: ["arn:aws:elasticloadbalancing:us-east-2:account-id:loadbalancer/app/tectonic-831d927098db84e/6fab3e78b7c91aef"]
}
I0417 21:16:36.198328 1 route53.go:52] Request: route53/&{ListHostedZonesByName GET /2013-04-01/hostedzonesbyname %!s(*request.Paginator=) %!s(func(*request.Request) error=)}, Payload: {
DNSName: "my-hosted-zone"
}
I0417 21:16:36.360550 1 log.go:48] [ALB-INGRESS] [controller] [INFO]: Failed to resolve 2048.my-hosted-zone zoneID. Returned error Error calling route53.ListHostedZonesByName: AccessDenied
I0417 21:16:36.360565 1 log.go:48] [ALB-INGRESS] [controller] [INFO]: Assembled 0 ingresses from existing AWS resources
I0417 21:16:36.360646 1 launch.go:96] &{ALB Ingress Controller 0.0.1 git-00000000 git://github.com/coreos/alb-ingress-controller}
I0417 21:16:36.360662 1 launch.go:99] Watching for ingress class: alb
I0417 21:16:36.360935 1 launch.go:232] Creating API server client for https://10.3.0.1:443
I0417 21:16:36.373465 1 launch.go:115] validated kube-system/default-http-backend as the default backend
I0417 21:16:36.382128 1 log.go:48] [ALB-INGRESS] [controller] [INFO]: Ingress class set to alb
I0417 21:16:36.382143 1 controller.go:1071] starting Ingress controller
I0417 21:16:36.389089 1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"2048-game", Name:"2048-alb-ingress", UID:"569d7123-23a7-11e7-9698-06c1e8830d57", APIVersion:"extensions", ResourceVersion:"917772", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress 2048-game/2048-alb-ingress
I0417 21:16:36.389110 1 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"tectonic-system", Name:"tectonic-ingress", UID:"5758a03c-206a-11e7-9698-06c1e8830d57", APIVersion:"extensions", ResourceVersion:"262317", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress tectonic-system/tectonic-ingress
I0417 21:16:36.401370 1 leaderelection.go:188] sucessfully acquired lease kube-system/ingress-controller-leader
W0417 21:16:37.389556 1 queue.go:87] requeuing tectonic-system/tectonic-ingress, err deferring sync till endpoints controller has synced
W0417 21:16:37.396584 1 queue.go:87] requeuing tectonic-system/tectonic-license, err deferring sync till endpoints controller has synced
I0417 21:16:46.698697 1 ec2.go:36] Request: ec2/&{DescribeSubnets POST / %!s(*request.Paginator=) %!s(func(*request.Request) error=)}, Payload: {
SubnetIds: ["subnet-40c35329","subnet-6066be1b","subnet-dd23d590"]
}
I0417 21:16:46.847916 1 ec2.go:36] Request: ec2/&{DescribeSecurityGroups POST / %!s(*request.Paginator=) %!s(func(*request.Request) error=)}, Payload: {
GroupIds: ["sg-28a2d841"]
}
I0417 21:16:46.904063 1 route53.go:52] Request: route53/&{ListHostedZonesByName GET /2013-04-01/hostedzonesbyname %!s(*request.Paginator=) %!s(func(*request.Request) error=)}, Payload: {
DNSName: "my-hosted-zone"
}
I0417 21:16:46.976519 1 log.go:62] [ALB-INGRESS] [2048-game-2048-alb-ingress] [ERROR]: Unabled to locate ZoneId for %!s(*string=0xc42054def0).
I0417 21:16:46.976537 1 log.go:48] [ALB-INGRESS] [2048-game-2048-alb-ingress] [INFO]: Start ELBV2 (ALB) creation.
I0417 21:16:46.976727 1 elbv2.go:43] Request: elasticloadbalancing/&{CreateLoadBalancer POST / %!s(*request.Paginator=) %!s(func(*request.Request) error=)}, Payload: {
Name: "tectonic-831d927098db84e",
Scheme: "internet-facing",
SecurityGroups: ["sg-28a2d841"],
Subnets: ["subnet-40c35329","subnet-6066be1b","subnet-dd23d590"],
Tags: [{
Key: "Namespace",
Value: "2048-game"
},{
Key: "IngressName",
Value: "2048-alb-ingress"
},{
Key: "Hostname",
Value: "2048.my-hosted-zone"
}]
}
I0417 21:16:47.390498 1 log.go:48] [ALB-INGRESS] [2048-game-2048-alb-ingress] [INFO]: Completed ELBV2 (ALB) creation. Name: tectonic-831d927098db84e | ARN: arn:aws:elasticloadbalancing:us-east-2::loadbalancer/app/tectonic-831d927098db84e/6fab3e78b7c91aef
E0417 21:16:47.390625 1 runtime.go:64] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:70
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:63
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:49
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/asm_amd64.s:514
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/panic.go:489
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/panic.go:63
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/signal_unix.go:290
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/resourcerecordset.go:55
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/loadbalancers.go:24
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/ingress.go:344
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/controller.go:107
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:432
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:158
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:86
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:96
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:97
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:52
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/asm_amd64.s:2197
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1525466]

goroutine 111 [running]:
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x126
panic(0x17802c0, 0x254eb40)
/home/travis/.gimme/versions/go1.8.linux.amd64/src/runtime/panic.go:489 +0x2cf
github.com/coreos/alb-ingress-controller/controller/alb.(*ResourceRecordSet).SyncState(0x0, 0xc4208212c0, 0x0)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/resourcerecordset.go:55 +0x26
github.com/coreos/alb-ingress-controller/controller/alb.LoadBalancers.SyncState(0xc42012a408, 0x1, 0x1, 0xc420128140, 0x3ff0000000000000, 0x2503040)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/alb/loadbalancers.go:24 +0xa0
github.com/coreos/alb-ingress-controller/controller.(*ALBIngress).SyncState(0xc420641a90)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/ingress.go:344 +0x81
github.com/coreos/alb-ingress-controller/controller.(*ALBController).Reload(0xc420168a20, 0x2583680, 0x0, 0x0, 0xc42056d790, 0x2, 0x2, 0x2583680, 0x0, 0x0)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/controller/controller.go:107 +0x76
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.(*GenericController).sync(0xc4201c5200, 0x169d0e0, 0xc42056d4a0, 0xc42056d4a0, 0xc42056d400)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:432 +0x626
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.(*GenericController).(github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.sync)-fm(0x169d0e0, 0xc42056d4a0, 0xa, 0xc4206fee00)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:158 +0x3e
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).worker(0xc4204e53b0)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:86 +0xef
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).(github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.worker)-fm()
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49 +0x2a
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.JitterUntil.func1(0xc42075cfa8)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:96 +0x5e
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc42050bfa8, 0x12a05f200, 0x0, 0x1, 0xc42072f020)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:97 +0xad
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.Until(0xc42075cfa8, 0x12a05f200, 0xc42072f020)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:52 +0x4d
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).Run(0xc4204e53b0, 0x12a05f200, 0xc42072f020)
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49 +0x5e
created by github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.GenericController.Start
/home/travis/gopath/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:1081 +0x1f1

Determine ALB subnets based on tags if annotation is not provided

If alb.ingress.kubernetes.io/subnets is not specified, we should use/emulate the existing cloudprovider functionality to determine the correct subnets in which to create the ALB. Specifically, internal ALBs will be created in subnets tagged with kubernetes.io/role/internal-elb, while internet-facing ones will be created in kubernetes.io/role/elb subnets.
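The tag-based selection described above can be sketched as a small filter. This is a hypothetical helper, not the controller's actual code; the subnet dicts mimic a simplified form of EC2 DescribeSubnets output, and only the kubernetes.io/role/* tag keys come from the issue itself:

```python
# Sketch of tag-based subnet discovery for an ALB, assuming the
# kubernetes.io/role/{elb,internal-elb} tagging convention described above.

INTERNAL_TAG = "kubernetes.io/role/internal-elb"
PUBLIC_TAG = "kubernetes.io/role/elb"

def discover_alb_subnets(subnets, scheme):
    """Return the IDs of subnets whose tags match the requested ALB scheme."""
    role_tag = INTERNAL_TAG if scheme == "internal" else PUBLIC_TAG
    return [
        s["SubnetId"]
        for s in subnets
        if any(t["Key"] == role_tag for t in s.get("Tags", []))
    ]

subnets = [
    {"SubnetId": "subnet-aaa", "Tags": [{"Key": INTERNAL_TAG, "Value": "1"}]},
    {"SubnetId": "subnet-bbb", "Tags": [{"Key": PUBLIC_TAG, "Value": "1"}]},
    {"SubnetId": "subnet-ccc", "Tags": []},  # untagged: never selected
]

print(discover_alb_subnets(subnets, "internal"))         # ['subnet-aaa']
print(discover_alb_subnets(subnets, "internet-facing"))  # ['subnet-bbb']
```

A real implementation would feed DescribeSubnets results (filtered to the cluster's VPC) through the same predicate.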

Invalid cert arn causes controller crash

Issue

When the certificate arn cannot be resolved in AWS, the controller continues to attempt to apply rules to that non-existent listener and eventually crashes.

How to Reproduce

Mess with a certificate ARN in your ingress resource and watch the logs:

I0410 18:11:12.688052       1 log.go:42] [ALB-INGRESS] [echoserver-echoserver] [INFO]: Located default rule. Rule: {  Actions: [{      Type: "forward"    }],  IsDefault: true,  Priority: "default"}
I0410 18:11:12.688182       1 elbv2.go:42] Request: elasticloadbalancing/&{CreateListener POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: {
  Certificates: [{
      CertificateArn: "arn:aws:acm:us-east-2:4432733164488:certificate/ffb5c027-5158-4705-ac09-2254a9a669ed"
    }],
  DefaultActions: [{
      TargetGroupArn: "arn:aws:elasticloadbalancing:us-east-2:432733164488:targetgroup/dev1-30946-HTTP-58efb8e/74185a08dfdb6047",
      Type: "forward"
    }],
  LoadBalancerArn: "arn:aws:elasticloadbalancing:us-east-2:432733164488:loadbalancer/app/dev1-429acf8d129056e/3198865e24d45513",
  Port: 443,
  Protocol: "HTTPS"
}
I0410 18:11:12.760072       1 log.go:54] [ALB-INGRESS] [echoserver-echoserver] [ERROR]: Failed Listener creation. Error: CertificateNotFound: Certificate 'arn:aws:acm:us-east-2:4432733164488:certificate/ffb5c027-5158-4705-ac09-2254a9a669ed' not found
	status code: 400, request id: 149c692f-1e19-11e7-8516-d1d147b45497.
I0410 18:11:12.760303       1 log.go:42] [ALB-INGRESS] [echoserver-echoserver] [INFO]: Start Listener creation.
I0410 18:11:12.760507       1 elbv2.go:42] Request: elasticloadbalancing/&{CreateListener POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: {
  DefaultActions: [{
      TargetGroupArn: "arn:aws:elasticloadbalancing:us-east-2:432733164488:targetgroup/dev1-30946-HTTP-58efb8e/74185a08dfdb6047",
      Type: "forward"
    }],
  LoadBalancerArn: "arn:aws:elasticloadbalancing:us-east-2:432733164488:loadbalancer/app/dev1-429acf8d129056e/3198865e24d45513",
  Port: 8080,
  Protocol: "HTTP"
}
I0410 18:11:12.826473       1 log.go:42] [ALB-INGRESS] [echoserver-echoserver] [INFO]: Completed Listener creation. ARN: arn:aws:elasticloadbalancing:us-east-2:432733164488:listener/app/dev1-429acf8d129056e/3198865e24d45513/578c33fe74804c97 | Port: %!s(int64=8080) | Proto: HTTP.
I0410 18:11:12.826591       1 log.go:42] [ALB-INGRESS] [echoserver-echoserver] [INFO]: Start Rule creation.
I0410 18:11:12.826765       1 elbv2.go:42] Request: elasticloadbalancing/&{CreateRule POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: {
  Actions: [{
      TargetGroupArn: "arn:aws:elasticloadbalancing:us-east-2:432733164488:targetgroup/dev1-32488-HTTP-58efb8e/3abf26a48b8af3a4",
      Type: "forward"
    }],
  Conditions: [{
      Field: "path-pattern",
      Values: ["/mespecial"]
    }],
  ListenerArn: "arn:aws:elasticloadbalancing:us-east-2:432733164488:listener/app/dev1-429acf8d129056e/3198865e24d45513/578c33fe74804c97",
  Priority: 1
}
I0410 18:11:12.852861       1 log.go:42] [ALB-INGRESS] [echoserver-echoserver] [INFO]: Completed Rule creation. Rule: {  Actions: [{      TargetGroupArn: "arn:aws:elasticloadbalancing:us-east-2:432733164488:targetgroup/dev1-32488-HTTP-58efb8e/3abf26a48b8af3a4",      Type: "forward"    }],  Conditions: [{      Field: "path-pattern",      Values: ["/mespecial"]    }],  IsDefault: false,  Priority: "1",  RuleArn: "arn:aws:elasticloadbalancing:us-east-2:432733164488:listener-rule/app/dev1-429acf8d129056e/3198865e24d45513/578c33fe74804c97/1a110b83682abfa5"}
I0410 18:11:12.852881       1 log.go:42] [ALB-INGRESS] [echoserver-echoserver] [INFO]: Start Listener creation.
I0410 18:11:12.852992       1 elbv2.go:42] Request: elasticloadbalancing/&{CreateListener POST / %!s(*request.Paginator=<nil>) %!s(func(*request.Request) error=<nil>)}, Payload: {
  Certificates: [{
      CertificateArn: "arn:aws:acm:us-east-2:4432733164488:certificate/ffb5c027-5158-4705-ac09-2254a9a669ed"
    }],
  DefaultActions: [{
      TargetGroupArn: "arn:aws:elasticloadbalancing:us-east-2:432733164488:targetgroup/dev1-30946-HTTP-58efb8e/74185a08dfdb6047",
      Type: "forward"
    }],
  LoadBalancerArn: "arn:aws:elasticloadbalancing:us-east-2:432733164488:loadbalancer/app/dev1-429acf8d129056e/3198865e24d45513",
  Port: 443,
  Protocol: "HTTPS"
}
I0410 18:11:12.911466       1 log.go:54] [ALB-INGRESS] [echoserver-echoserver] [ERROR]: Failed Listener creation. Error: CertificateNotFound: Certificate 'arn:aws:acm:us-east-2:4432733164488:certificate/ffb5c027-5158-4705-ac09-2254a9a669ed' not found
	status code: 400, request id: 14b410c7-1e19-11e7-9579-850515bb8d35.
I0410 18:11:12.911507       1 log.go:42] [ALB-INGRESS] [echoserver-echoserver] [INFO]: Start Rule creation.
E0410 18:11:12.911624       1 runtime.go:64] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:70
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:63
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:49
/usr/lib/go/src/runtime/asm_amd64.s:479
/usr/lib/go/src/runtime/panic.go:458
/usr/lib/go/src/runtime/panic.go:62
/usr/lib/go/src/runtime/sigpanic_unix.go:24
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/rule.go:90
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/rule.go:72
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/rules.go:13
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/listeners.go:31
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/loadbalancers.go:26
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/ingress.go:342
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/controller.go:86
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:432
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:158
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:86
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:96
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:97
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:52
/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49
/usr/lib/go/src/runtime/asm_amd64.s:2086
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x84bd55]

goroutine 90 [running]:
panic(0x16bbe60, 0xc420014030)
	/usr/lib/go/src/runtime/panic.go:500 +0x1a1
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x126
panic(0x16bbe60, 0xc420014030)
	/usr/lib/go/src/runtime/panic.go:458 +0x243
github.com/coreos/alb-ingress-controller/controller/alb.(*Rule).create(0xc42079a360, 0xc42016ff40, 0xc420535f80, 0x15, 0x0)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/rule.go:90 +0xe5
github.com/coreos/alb-ingress-controller/controller/alb.(*Rule).SyncState(0xc42079a360, 0xc42016ff40, 0xc420535f80, 0xc420a52270)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/rule.go:72 +0xb9
github.com/coreos/alb-ingress-controller/controller/alb.Rules.SyncState(0xc420022a50, 0x1, 0x1, 0xc42016ff40, 0xc420535f80, 0xc4203512a0, 0x2, 0x4)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/rules.go:13 +0x86
github.com/coreos/alb-ingress-controller/controller/alb.Listeners.SyncState(0xc4205096a0, 0x4, 0x4, 0xc42016ff40, 0xc42016ff70, 0x2, 0x2, 0xc42087fa50)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/listeners.go:31 +0xbc
github.com/coreos/alb-ingress-controller/controller/alb.LoadBalancers.SyncState(0xc420022a58, 0x1, 0x1, 0x8, 0x0, 0x1)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/alb/loadbalancers.go:26 +0x149
github.com/coreos/alb-ingress-controller/controller.(*ALBIngress).SyncState(0xc4201e1630)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/ingress.go:342 +0x81
github.com/coreos/alb-ingress-controller/controller.(*ALBController).Reload(0xc4203c8f80, 0xc420a52348, 0x0, 0x8, 0xc42011e9c0, 0x1, 0x1, 0x2448d88, 0x0, 0x0)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/controller/controller.go:86 +0x76
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.(*GenericController).sync(0xc4203acb40, 0x15dcf40, 0xc420544f00, 0xc420544f00, 0xc420544e00)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:432 +0x68f
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.(*GenericController).(github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.sync)-fm(0x15dcf40, 0xc420544f00, 0xa, 0xc42087fdb0)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:158 +0x3e
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).worker(0xc420737e30)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:86 +0x101
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).(github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.worker)-fm()
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49 +0x2a
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.JitterUntil.func1(0xc42087ff58)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:96 +0x5e
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc42087ff58, 0x12a05f200, 0x0, 0x1, 0xc4207483c0)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:97 +0xad
github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait.Until(0xc42087ff58, 0x12a05f200, 0xc4207483c0)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:52 +0x4d
github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task.(*Queue).Run(0xc420737e30, 0x12a05f200, 0xc4207483c0)
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/task/queue.go:49 +0x55
created by github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller.GenericController.Start
	/home/josh/dev/go/src/github.com/coreos/alb-ingress-controller/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:1081 +0x1f1

Resolution

  1. Add cert ARN validation logic to the annotation parsing step.
  2. Ensure listener creation failures do not allow downstream state syncs (e.g. rules) to occur.
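Both parts of the fix can be sketched as follows. The names are hypothetical (not the controller's API), and the regex only checks ARN *shape*; real validation would also call ACM to confirm the certificate exists:

```python
# Sketch of the two-part fix:
# 1. reject malformed certificate ARNs while parsing annotations;
# 2. skip rule syncing for a listener whose creation failed (listener is None),
#    instead of dereferencing it and crashing.
import re

ACM_ARN = re.compile(r"^arn:aws:acm:[a-z0-9-]+:\d{12}:certificate/[0-9a-f-]+$")

def parse_cert_annotation(arn):
    if not ACM_ARN.match(arn):
        raise ValueError(f"invalid certificate ARN: {arn}")
    return arn

def sync_rules(listener, rules):
    if listener is None:   # listener creation failed upstream
        return []          # nothing to sync; do not crash
    return [f"{listener}:{r}" for r in rules]

# A 13-digit account ID (as in the log above) fails shape validation early.
try:
    parse_cert_annotation("arn:aws:acm:us-east-2:4432733164488:certificate/ffb5c027")
except ValueError as e:
    print(e)

print(sync_rules(None, ["/mespecial"]))  # [] instead of a nil dereference
```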

How does it compare with kube-ingress-aws-controller?

How does this ALB controller compare with kube-ingress-aws-controller?

  • How does it create Application Load Balancers? (as far as I understand it will create one ALB per Ingress resource; kube-ingress-aws-controller creates one ALB per SSL cert, i.e. reuses ALBs across Ingress resources)
  • How does it select the right SSL certificate? (kube-ingress-aws-controller does autodiscovery of both AWS ACM and AWS IAM SSL certificates based on the hostname)
  • How does it handle DNS records? (kube-ingress-aws-controller does not create Route53 records, but instead relies on External DNS)
  • Does it support routing based on hostname? (as far as I understand it does not support this because AWS ALB only recently added the feature, kube-ingress-aws-controller does the host/path routing in the HTTP Proxy)
  • How does the stack compare to kube-ingress-aws-controller? (ALB + NodePort vs ALB + HTTP Proxy + Service)
  • When should a user use this ALB controller and when is the kube-ingress-aws-controller the better choice?
  • Who is running it in production and what are the experiences so far? (kube-ingress-aws-controller is running in production in Zalando in various clusters since January 2017)

The kube-ingress-aws-controller is also described on http://kubernetes-on-aws.readthedocs.io/en/latest/admin-guide/kubernetes-in-production.html#ingress

Federation support?

I know that ALB is not GCLB and does not support cross-region forwarding, but could federation be supported, at least:

  1. between clusters in the same region, which is useful when you're trying to shift traffic between them, e.g. when you don't want to switch to etcd3 in place
  2. at the Route 53 level, with the assumption that all services in the ingress are replicated in all regions?

It doesn't look like there's any support or, if there is any, it's very well hidden.

[Feature] Annotation for read timeout

I don't know Go, but I would like to help add an annotation for the load balancer's idle timeout setting (defaults to 60s). How could I start on trying to get this implemented?
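One plausible shape for the feature: parse the annotation (the annotation key below is hypothetical) and map it to the ELBv2 load balancer attribute `idle_timeout.timeout_seconds`, which the controller would then apply via ModifyLoadBalancerAttributes. A minimal sketch:

```python
# Sketch: map a hypothetical alb.ingress.kubernetes.io/idle-timeout annotation
# to the ELBv2 attribute idle_timeout.timeout_seconds (ALB default: 60s,
# allowed range 1-4000s).

DEFAULT_IDLE_TIMEOUT = 60  # seconds

def idle_timeout_attribute(annotations):
    raw = annotations.get("alb.ingress.kubernetes.io/idle-timeout")
    seconds = DEFAULT_IDLE_TIMEOUT if raw is None else int(raw)
    if not 1 <= seconds <= 4000:
        raise ValueError(f"idle timeout out of range: {seconds}")
    return {"Key": "idle_timeout.timeout_seconds", "Value": str(seconds)}

print(idle_timeout_attribute({}))  # falls back to the 60s default
print(idle_timeout_attribute({"alb.ingress.kubernetes.io/idle-timeout": "120"}))
```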

Fix cache checks to include expired keys

Issue

We currently check if the key is nil in most of our caching. Caching, however, keeps expired keys around as detailed in #58 (comment).

We should add a function that returns a bool indicating whether the key exists and is unexpired, treating a missing key and an expired key the same way.

Caching is applied to external API calls in certain areas of the controller.
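A sketch of such an existence check, using a hypothetical cache type rather than the controller's actual caching code:

```python
# Sketch: an exists() check that treats expired keys as absent,
# instead of only checking whether the stored value is nil.
import time

class TTLCache:
    def __init__(self):
        self._items = {}  # key -> (value, expiry as unix seconds)

    def set(self, key, value, ttl):
        self._items[key] = (value, time.time() + ttl)

    def exists(self, key, now=None):
        """True only if the key is present AND not expired."""
        if key not in self._items:
            return False
        _, expiry = self._items[key]
        return (now if now is not None else time.time()) < expiry

cache = TTLCache()
cache.set("sg-1f84f776", "cached-response", ttl=30)
print(cache.exists("sg-1f84f776"))                        # True: fresh
print(cache.exists("sg-1f84f776", now=time.time() + 60))  # False: expired
```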

Resource deletions occur when controller can not authenticate

Issue

When the ALB ingress controller fails to authenticate with AWS, it adds all the resources it knows about to the deletable list and deletes them from AWS.

Details

If the ALB ingress controller loses its ability to authenticate with AWS (e.g. credentials are revoked or kube2iam has issues), it will fail to parse annotations, returning an error.

This means a nil ALB ingress resource will be returned up to the controller.

The controller then adds the resource to the 'deletable' list, and it is deleted accordingly.

In this case, we likely need to bubble up the error and stop the OnUpdate event.
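The proposed fix, sketched with hypothetical names (not the controller's API): distinguish "parsing failed" from "resource gone", and abort the whole update on error rather than treating the failure as a deletion:

```python
# Sketch: when annotation parsing fails (e.g. AWS auth is broken), return an
# error and abort the update, instead of yielding a nil ingress that would
# land on the deletable list.

def build_ingress(manifest, parse_annotations):
    try:
        annotations = parse_annotations(manifest)
    except Exception as err:
        return None, err  # bubble the error up; NOT the same as "deleted"
    return {"name": manifest["name"], "annotations": annotations}, None

def on_update(manifests, parse_annotations):
    deletable = []
    for m in manifests:
        ingress, err = build_ingress(m, parse_annotations)
        if err is not None:
            return None, err  # stop the OnUpdate event entirely
        if ingress is None:   # genuinely absent resources only
            deletable.append(m["name"])
    return deletable, None

def broken_auth(_manifest):
    raise RuntimeError("could not assume role: access denied")

deletable, err = on_update([{"name": "echoserver"}], broken_auth)
print(deletable, err)  # nothing is marked deletable; the error surfaces
```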

/cc @mgoodness

Listener creation makes extra API call

Issue

When constructing multiple listeners (2+ ports), an extra API call for create is made. While the ALB is still set up properly, we'll want to fix this at some point.

How to Reproduce

Apply a multi-port ingress manifest such as:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver
  namespace: echoserver
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/port: "8080,9000"
    alb.ingress.kubernetes.io/subnets: subnet-63bf6318,subnet-0b20aa62
    alb.ingress.kubernetes.io/security-groups: sg-1f84f776
    alb.ingress.kubernetes.io/tags: Environment=dev1,ProductCode=PRD999,InventoryCode=echo-app
spec:
  rules:
  - host: aaaaaa.josh-test-dns.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoserver
          servicePort: 80
      - path: /mespecial
        backend:
          serviceName: echoserver2
          servicePort: 80

Watch the logs and note the three instances of Start Listener creation, even though only two ports (8080 and 9000) are configured:

[ALB-INGRESS] [echoserver-echoserver] [INFO]: Start Listener creation.                                                                                                                         
[ALB-INGRESS] [echoserver-echoserver] [INFO]: Located default rule. Rule: {  Actions: [{      Type: "forward"    }],  IsDefault: true,  Priority: "default"}                                   
[ALB-INGRESS] [echoserver-echoserver] [INFO]: Completed Listener creation. ARN: arn:aws:elasticloadbalancing:us-east-2:432733164488:listener/app/dev1-429acf8d129056e/8349536fda798aee/cbe2f185fc248edc | Port: %!s(int64=8080) | Proto: HTTP.                                                                                                                                                
[ALB-INGRESS] [echoserver-echoserver] [INFO]: Start Listener creation.                                                                                                             
[ALB-INGRESS] [echoserver-echoserver] [INFO]: Located default rule. Rule: {  Actions: [{      Type: "forward"    }],  IsDefault: true,  Priority: "default"}                                   
[ALB-INGRESS] [echoserver-echoserver] [INFO]: Completed Listener creation. ARN: arn:aws:elasticloadbalancing:us-east-2:432733164488:listener/app/dev1-429acf8d129056e/8349536fda798aee/894f4e77e045bd46 | Port: %!s(int64=9000) | Proto: HTTP.                                                                                                                                                
[ALB-INGRESS] [echoserver-echoserver] [INFO]: Start Listener creation.                                                                                                                         
[ALB-INGRESS] [echoserver-echoserver] [INFO]: Completed Listener creation. ARN: arn:aws:elasticloadbalancing:us-east-2:432733164488:listener/app/dev1-429acf8d129056e/8349536fda798aee/cbe2f185fc248edc | Port: %!s(int64=8080) | Proto: HTTP.
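In the log above, the third creation call targets port 8080 again and returns the same listener ARN as the first. One way to avoid the redundant call is to diff the desired ports against listeners that already exist before calling CreateListener; a sketch with a hypothetical helper:

```python
# Sketch: issue at most one CreateListener per desired port by diffing the
# desired ports against the ports that already have listeners.

def listeners_to_create(desired_ports, existing_ports):
    """Return the ports that still need a CreateListener call, in order."""
    existing = set(existing_ports)
    out = []
    for p in desired_ports:
        if p not in existing:
            out.append(p)
            existing.add(p)  # also guards against duplicates within desired_ports
    return out

# Port 8080 already has a listener, so only 9000 triggers an API call.
print(listeners_to_create([8080, 9000], [8080]))      # [9000]
# A duplicated desired port no longer causes an extra CreateListener.
print(listeners_to_create([8080, 9000, 8080], []))    # [8080, 9000]
```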
