
haproxy-ingress's Introduction

HAProxy Ingress controller

Ingress controller implementation for the HAProxy load balancer.


HAProxy Ingress is a Kubernetes ingress controller: it configures a HAProxy instance to route incoming requests from an external network to the in-cluster applications. The routing configurations are built by reading specs from the Kubernetes cluster. Updates made to the cluster are applied on the fly to the HAProxy instance.

Use HAProxy Ingress

Documentation:

Supported versions:

HAProxy Ingress     Embedded HAProxy                       Supported Kubernetes   External HAProxy (*)
v0.15 (snapshot)    2.6                                    1.19+                  2.2+
v0.14 (latest)      2.4                                    1.19+                  2.2+
v0.13               2.3 up to v0.13.10, 2.4 on v0.13.11+   1.19+                  2.2+
v0.12               2.2                                    1.18 - 1.21            2.0+
v0.10               2.0                                    1.8 - 1.21             -
  • Beta quality versions (beta / canary tags) have some new but battle-tested features, usually running on some of our production clusters
  • Development versions (alpha / snapshot tags) have major changes with few tests, and are usually not recommended for production
  • (*) Minimum supported HAProxy version when using an external HAProxy instance

Community:

Develop HAProxy Ingress

The instructions below are valid for v0.14 and newer. See v0.13 branch for older versions.

Building and running locally:

mkdir -p $GOPATH/src/github.com/jcmoraisjr
cd $GOPATH/src/github.com/jcmoraisjr
git clone https://github.com/jcmoraisjr/haproxy-ingress.git
cd haproxy-ingress
make run

Dependencies to run locally:

  • Golang
  • HAProxy compiled with USE_OPENSSL=1 and USE_LUA=1
  • golangci-lint, used by the make lint and make test targets
  • Lua with lua-json (luarocks install lua-json) if using Auth External or OAuth
  • Kubernetes network should be reachable from the local machine for a proper e2e test
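
Before running locally, it may help to confirm the HAProxy build and the Lua module are in place; a quick check, assuming the haproxy and lua binaries are on the PATH:

# haproxy -vv prints the build options, including OpenSSL and Lua support
haproxy -vv | grep -iE 'openssl|lua'

# should exit cleanly if lua-json is installed (module name as provided by luarocks' lua-json)
lua -e 'require "json"'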

Building container image:

Fast build - cross-compiles for linux/amd64 (locally) and generates localhost/haproxy-ingress:latest:

make image

Official image - builds in a multi-stage Dockerfile and generates localhost/haproxy-ingress:latest:

make docker-build

Deploy local image using Helm:

helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
helm install haproxy-ingress haproxy-ingress/haproxy-ingress \
  --create-namespace --namespace=ingress-controller \
  --set controller.image.repository=localhost/haproxy-ingress \
  --set controller.image.tag=latest \
  --set controller.image.pullPolicy=Never

make options:

The following make variables are supported:

  • CONTROLLER_TAG (defaults to localhost/haproxy-ingress:latest): tag name for make image and make docker-build.
  • LOCAL_FS_PREFIX (defaults to /tmp/haproxy-ingress): temporary directory for make run.
  • KUBECONFIG (defaults to $KUBECONFIG, or $(HOME)/.kube/config if the former is empty): the Kubernetes cluster from which Ingress configurations are read.
  • CONTROLLER_CONFIGMAP: <namespace>/<name> of the ConfigMap with global configurations.
  • CONTROLLER_ARGS: space separated list of additional command-line arguments.
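
For example, a local run combining the variables above might look like this (the namespace, ConfigMap name, and extra argument are illustrative):

make run \
  KUBECONFIG=$HOME/.kube/config \
  CONTROLLER_CONFIGMAP=ingress-controller/haproxy-ingress \
  CONTROLLER_ARGS="--watch-namespace=default"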

The following make targets are supported:

  • build (default): Compiles HAProxy Ingress using the default OS and arch, and generates an executable at bin/controller.
  • run: Runs HAProxy Ingress locally.
  • lint: Runs golangci-lint.
  • test: Runs unit tests.
  • test-integration: Runs integration tests, needs haproxy 2.2+ in the path.
  • linux-build: Compiles HAProxy Ingress and generates an ELF (Linux) executable, regardless of the source platform, at rootfs/haproxy-ingress-controller. Used by the image step.
  • image: Compiles HAProxy Ingress locally and generates a Docker image.
  • docker-build: Compiles HAProxy Ingress and generates a Docker image using a multi-stage Dockerfile.


haproxy-ingress's Issues

Provide request header with client's certificate fingerprint

What do you think about providing it when client certificates are allowed?
Quick workaround that I have used so far is:

frontend httpsfront-{{ $host }}
...
{{/* Client certificate SHA1 fingerprint */}}
{{ if ne $authSSLCert.CAFileName "" }}
    http-request set-header Cert-Fingerprint-Sha1 %[ssl_c_sha1,hex]
{{ end }}
...
{{ if eq $cfg.Forwardfor "add" }}
    reqidel ^X-Forwarded-For:.*
    option forwardfor

ELB SSL termination with HSTS and HTTPS redirect?

Essentially, the same situation as in kubernetes/ingress-nginx#314 but with HAProxy Ingress controller.

If I'm terminating SSL on the ELB and use HTTP (or PROXY), is there any way to configure this controller to redirect to HTTPS and return the HSTS header?

Seems like it's not directly supported, and the only option I have is to replace the haproxy.tmpl with one that sets up a separate port just for the HTTPS redirect (+ change the http frontend to return HSTS). Am I right?
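
For reference, the separate-port workaround described above would add something along these lines to the template (a sketch only; ports, header value, and backend name are illustrative, not the controller's actual template):

# extra port, targeted by the ELB's HTTP listener: always redirect
frontend httpfront-redirect
    bind *:8000
    redirect scheme https code 301

# port behind the ELB's HTTPS listener (TLS already terminated): add HSTS
frontend httpfront
    bind *:80
    http-response set-header Strict-Transport-Security "max-age=15768000"
    default_backend default-backend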

cannot validate certificate for 10.254.0.1 because it doesn't contain any IP SANs

I'm trying to deploy haproxy-ingress, following the example at https://github.com/kubernetes/ingress/tree/master/examples/deployment/haproxy.
When I execute "kubectl create -f haproxy-ingress.yaml", the haproxy-ingress pod's status is "CrashLoopBackOff" and the container logs show
"I0307 08:28:17.096877 5 launch.go:92] &{HAProxy v0.1 git-676b5a8 https://github.com/jcmoraisjr/haproxy-ingress}

I0307 08:28:17.097206 5 launch.go:221] Creating API server client for https://10.254.0.1:443

F0307 08:28:17.113124 5 launch.go:109] no service with name default/ingress-default-backend found: Get https://10.254.0.1:443/api/v1/namespaces/default/services/ingress-default-backend: x509: cannot validate certificate for 10.254.0.1 because it doesn't contain any IP SANs
"
How can I solve this error?

haproxy-ingress locks up attempting leader election

E0912 17:17:48.511187       6 leaderelection.go:258] Failed to update lock: Operation cannot be fulfilled on configmaps "ingress-controller-leader-haproxy": the object has been modified; please apply your changes to the latest version and try again

haproxy-ingress version haproxy-ingress:v0.4-snapshot.4
Kubernetes version 1.5.5

Works great, other than this lock up. After that happens, no new changes in the cluster get registered.

X-Forwarded-Proto not added?

Going through the haproxy ingress, it appears the X-Forwarded-Proto header isn't added to the request to the backend; is that expected?

$ curl https://httpbin2.empty.us/headers
{
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin2.empty.us",
    "User-Agent": "curl/7.56.1"
  }
}
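
For reference, the usual HAProxy idiom for adding this kind of header is a one-liner in the frontend; a sketch assuming TLS is terminated by HAProxy itself (bind line and certificate path are illustrative):

frontend httpsfront
    bind *:443 ssl crt /etc/haproxy/certs
    # tell the backend the original scheme
    http-request set-header X-Forwarded-Proto https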

Changing SSL/TLS headers

On #46 some SSL/TLS headers were added following the X-SSL-* convention. However, RFC 6648 deprecated the recommendation of the X- prefix. A few words on this subject here.

Because of that I'm changing these headers on beta.3, which should be tagged and deployed later today. I'd rather follow the change route (breaking current code) instead of duplicating/deprecating now (0.4) and only removing the X- on 0.5. Please leave a comment if the first (breaking now) option doesn't sound good.

/cc @Scukerman

Can't rely on ssl_fc_has_crt

I ran into an issue where ssl_fc_has_crt is not always true when the user provides a client certificate, so the user gets an error page telling them that no certificate was received.
The HAProxy docs suggest using ssl_c_used instead of ssl_fc_has_crt.

ssl_fc_has_crt : boolean
Returns true if a client certificate is present in an incoming connection over
SSL/TLS transport layer. Useful if 'verify' statement is set to 'optional'.

***
Note: on SSL session resumption with Session ID or TLS ticket, client
certificate is not present in the current connection but may be retrieved
from the cache or the ticket. So prefer "ssl_c_used" if you want to check if
current SSL session uses a client certificate.
***

So we can't rely on ssl_fc_has_crt.
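
A minimal sketch of the suggested swap, driving the error page selection with ssl_c_used instead (ACL and backend names are illustrative):

# ssl_fc_has_crt is false on SSL session resumption even when a certificate
# was presented; ssl_c_used also covers certificates restored from the
# session cache or a TLS ticket
acl has-client-cert ssl_c_used
use_backend error-no-cert if !has-client-cert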

Testing

I am curious whether there are pre-existing plans to add testing to this project. Go-based unit tests seem like they might be relatively easy to do (though I have limited Golang experience myself, so I could be very wrong there). System testing, on the other hand, should exercise the entire project nicely and be possible in minikube (I'm unaware if there are services that will provide a full cluster for testing purposes).

I ask as I would like to add some test cases, with whatever approach(es) will be used, for at least the rewrite-annotation work that I did that has been merged.

Help with tcp connection

I currently have a PostgreSQL database running in a Pod and would like to be able to access it via psql with something like:

psql -h postgres.domain.whatever -p 443

I'm having some trouble getting the tcp connection to work.
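
For context, the controller exposes non-HTTP services through a separate ConfigMap passed via the --tcp-services-configmap command-line option, where each key is a listening port and each value points to a service; a minimal sketch, assuming a postgres service listening on 5432 in the default namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress-tcp
  namespace: ingress-controller
data:
  # "<listening port>": "<namespace>/<service name>:<service port>"
  "5432": "default/postgres:5432"

The client would then connect to that port directly, e.g. psql -h postgres.domain.whatever -p 5432, instead of 443.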

multibinder goes zombie

We have been noticing that after a while the multibinder process goes zombie, which seems to mean new haproxy processes cannot be started, which means new versions of haproxy.cfg cannot be consumed. This causes config drift that is hard to track, as it only causes problems when pods/services/ingresses change.

It looks like the reason it goes zombie (rather than just terminating) is that the bash script that starts multibinder (start.sh) exits with an exec command to start haproxy-ingress-controller, which does not listen for multibinder's exit code.

Is there maybe a way that haproxy-ingress-controller itself could start the multibinder process instead of the bash script? That way haproxy-ingress-controller could detect when the multibinder process exits and report itself as unhealthy/not live.

Support multiple ingress resources with the same host

Hi,

I noticed that if you create multiple ingress resources which use the same host, but different path routing setups, only one will "win" and end up getting its config created (I would expect all to be created/merged).
My personal reason for splitting this in multiple resources is that I have an application living under a certain path that handles websockets, which need session affinity, however I don't want all my other apps under that hostname to be sticky.
Would there be support for implementing this, or are there technical limitations that would prevent it from working? I tried taking a look at where in the code this would happen, but I don't immediately spot it (I must say I'm not that experienced with Go).

Allow RegExp for hostnames

I need the ability to set a hostname as a regular expression, e.g. [a-z0-9]{10}.domain.com

AFAIK, Ingress rules don't support this because hostnames weren't expected to be used that way:

The Ingress "test-frontend" is invalid: spec.rules[0].host: Invalid value: "[a-z0-9]{10}.domain.com": a wildcard DNS-1123 subdomain must start with '*.', followed by a valid DNS subdomain, which must consist of lower case alphanumeric characters, '-' or '.' and end with an alphanumeric character (e.g. '*.example.com', regex used for validation is '\*\.[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

So I believe that it could be implemented via custom annotations.

Incorrect X-Forwarded-For handling

curl -v http://echoheaders.example.com --resolve echoheaders.example.com:80:127.0.0.1 -H 'X-Forwarded-For: 1.2.3.4'
* Added echoheaders.example.com:80:127.0.0.1 to DNS cache
* Rebuilt URL to: http://echoheaders.example.com/
* Hostname echoheaders.example.com was found in DNS cache
*   Trying 127.0.0.1...
* Connected to echoheaders.example.com (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: echoheaders.example.com
> User-Agent: curl/7.47.0
> Accept: */*
> X-Forwarded-For: 1.2.3.4
>
< HTTP/1.1 200 OK
< Server: nginx/1.10.0
< Date: Wed, 10 May 2017 11:11:11 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
<
CLIENT VALUES:
client_address=10.100.5.0
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://echoheaders.example.com:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=echoheaders.example.com
user-agent=curl/7.47.0
x-forwarded-for=1.2.3.4127.0.0.1
BODY:
* Connection #0 to host echoheaders.example.com left intact
-no body in request-

For some reason the appended x-forwarded-for value is not separated with a comma (note x-forwarded-for=1.2.3.4127.0.0.1 above).

Sticky session on dynamic scaling configuration

I'm implementing sticky sessions (#18) and trying to create a reproducible hash to use on server ... cookie <value>. So I made a base64 of a partial hash of the <ip>:<port> endpoint, which will always be the same regardless of haproxy.cfg updates.

The problem arises with the dynamic scaling configuration. If I create the hash when the slot is disabled, the hash will be the same on all servers because I don't have the endpoint yet. If I create the hash based on the server name, it can change when the number of slots changes. Afaics HAProxy doesn't have a set server cookie <value> to change this on the fly; because of that, the cookie <value> of empty slots must be configured in the template.
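
For reference, the endpoint-based hash described above could look something like this; a minimal sketch where the hash function, truncation length, and encoding are assumptions, not the final implementation:

package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// cookieValue derives a stable cookie value from an endpoint's <ip>:<port>,
// so it survives haproxy.cfg updates for as long as the endpoint itself does.
func cookieValue(ip string, port int) string {
	sum := sha1.Sum([]byte(fmt.Sprintf("%s:%d", ip, port)))
	// partial hash, base64 encoded (the 8-byte truncation is illustrative)
	return base64.RawURLEncoding.EncodeToString(sum[:8])
}

func main() {
	// the same endpoint always yields the same cookie value
	fmt.Println(cookieValue("172.17.0.5", 8080))
}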

I can see one strategy here: when using dynamic scaling, calculate the hash based on the server name and never change its name when scaling down the number of replicas. This sounds hard, but also doable, once we have the old config.

Ideas are very welcome.

cc @aiharos

balance-algorithm not picked up

Seems like the balance-algorithm isn't being set w/ v0.2.1 at least.

My ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: ingress-haproxy-controller
  labels:
    app: ingress-haproxy-controller
data:
  balance-algorithm: hdr(X-Real-IP)

The generated haproxy.cfg still sets the balance algorithm to roundrobin.
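
For reference, when the option is picked up, each generated backend would be expected to carry the configured algorithm instead of the default, roughly (backend name illustrative):

backend default-myapp-8080
    balance hdr(X-Real-IP)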

Unable to override ingress-class(?)

Hi!
With the current implementation, it looks like if you set --ingress-class=to_some_value it will always be overwritten to haproxy:

ingressClass := flags.Lookup("ingress-class")
if ingressClass != nil {
	ingressClass.Value.Set("haproxy")
	ingressClass.DefValue = "haproxy"
}

Is this the correct behaviour? If so, can you please explain why it needs to be fixed to haproxy? I need to have multiple haproxy ingress controllers deployed with different ingress class names.

If this is a bug/missing feature I would be happy to submit a PR.

Annotation support: rewrite-target

Kubernetes ingress controllers support the annotation ingress.kubernetes.io/rewrite-target. This is a valuable annotation to support in microservice architectures.

Example: I have 2 services, an API and a telemetry service, both on myapi.domain.com. The path setup in the kubernetes ingress definition for the API is /api, while /telemetry is used for the telemetry service. We then might re-use that same telemetry service with a different domain but for some reason the path has to be /stats instead of /telemetry. As such, the telemetry service has been coded to expect inbound traffic on the path / and the rewrite-target is used to overwrite /telemetry and /stats to /.

We believe (and are testing) that the resulting haproxy configuration should likely be something like:

reqrep ^([^\ :]*)\ /telemetry/(.*)     \1\ /\2  if { path_beg /telemetry }
use_backend out-telemetry-80 if { path_beg /telemetry }

We have already eliminated regsub from the running for this config as it does not seem to support capture groups (does not allow closing parentheses in the regex). Thoughts?

One frontend to http and https

Since eb9a8f7, HAProxy Ingress uses the same frontend to handle requests from http, https, and ssl offload outside haproxy. This will help a lot to avoid duplicating configuration. There are a few other improvements coming soon.

Several - perhaps almost all - functionalities were impacted by this change, e.g. wildcard hostnames, alias, tls auth validation, ssl offload outside haproxy, rewrites and redirects, just to name a few.

I did several e2e and exploratory tests but cannot cover everything and guarantee that everything is ok, so take special care before updating to the next snapshot - which will be tagged later this week. Otoh your tests and new issues are more than welcome! Update this issue with any doubt and please let me know about regressions in new issues.

This of course won't be included in 0.4, which is going to be released in a few days.

Help exposing haproxy-ingress / resolving DNS

I'm struggling to get this to expose properly. Currently I'm using external-dns + annotations to setup the DNS A-Type records via the external-dns.alpha.kubernetes.io/hostname annotation on the haproxy-ingress service, however this seems to not actually be exposed and instead uses the internal IP. What's the proper way of making haproxy-ingress public-facing? Sorry if this is obvious... all this is a lot to wrap one's mind around.

kind: Service
apiVersion: v1
metadata:
  name: haproxy-ingress
  namespace: production
  labels:
    app: haproxy-ingress
  annotations:
    external-dns.alpha.kubernetes.io/hostname: api.levelstothegame.com
spec:
  ports:
    - port: 80
      targetPort: 80
    - port: 443
      targetPort: 443
    - port: 1936
      targetPort: 1936

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: haproxy-ingress
  namespace: production
  labels:
    app: haproxy-ingress
spec:
  selector:
    matchLabels:
      app: haproxy-ingress
  template:
    metadata:
      labels:
        app: haproxy-ingress
    spec:
      hostNetwork: true
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=$(POD_NAMESPACE)/node-api
        - --default-ssl-certificate=$(POD_NAMESPACE)/haproxy-tls
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: haproxy-ingress
  namespace: production
  labels:
    app: haproxy-ingress
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
  - hosts:
    - api.levelstothegame.com
    secretName: haproxy-tls
  rules:
  - host: api.levelstothegame.com
    http:
      paths:
      - path: /
        backend:
          serviceName: node-api
          servicePort: 3000

kind: Service
apiVersion: v1
metadata:
  name: node-api
  namespace: production
  labels:
    app: node-api
    tier: backend
spec:
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
  selector:
    app: node-api
    tier: backend

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
  namespace: production
spec:
  template:
    metadata:
      labels:
        app: external-dns
    annotations:
      scheduler.alpha.kubernetes.io/critical-pod: ''
      iam.amazonaws.com/role: "external-dns"
    spec:
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:v0.4.5
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=levelstothegame.com
        - --provider=aws
        - --policy=upsert-only
        - --registry=txt
        - --txt-owner-id=external-dns
        - --aws-zone-type=public

Testing timed out with curl -i {Host's IP}:nodeport

Hi @jcmoraisjr

I am new to kubernetes and trying out your ingress controller example. Please bear with me on this question :-)

I was able to deploy haproxy-ingress but am having an issue testing it using the following instructions. It works if I leave out the nodeport in the curl command.

Am I doing something wrong? Thanks in advance for your help. BTW, this is a great example!

Look for nodePort field next to port: 80.
Change below 172.17.4.99 to the host's IP and 30876 to the nodePort:
$ curl -i 172.17.4.99:30876

From my service yaml, my cluster IP (I am assuming this is the host's IP) is 10.0.75.116 and the nodeport is 32204, so I tried

curl -i 10.0.75.116:32204
curl: (7) Failed to connect to 10.0.75.116 port 32204: Connection timed out

curl -i 10.0.75.116
HTTP/1.1 404 Not Found
Date: Fri, 27 Oct 2017 16:33:51 GMT
Content-Length: 21
Content-Type: text/plain; charset=utf-8

default backend - 404

curl -i 10.0.75.116 -H 'Host: foo.bar'
HTTP/1.1 200 OK
Server: nginx/1.9.11
Date: Fri, 27 Oct 2017 16:34:16 GMT
Content-Type: text/plain
Transfer-Encoding: chunked

CLIENT VALUES:
client_address=10.244.2.3
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://foo.bar:8080/

SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=foo.bar
user-agent=curl/7.47.0
x-forwarded-for=10.240.255.5
BODY:

———————————————
Here is my configuration:

kubectl describe svc/haproxy-ingress

Name: haproxy-ingress
Namespace: default
Labels: run=haproxy-ingress
Annotations:
Selector: run=haproxy-ingress
Type: NodePort
IP: 10.0.75.116
Port: port-1 80/TCP
NodePort: port-1 32204/TCP
Endpoints: 10.244.2.3:80
Port: port-2 443/TCP
NodePort: port-2 31926/TCP
Endpoints: 10.244.2.3:443
Port: port-3 1936/TCP
NodePort: port-3 31217/TCP
Endpoints: 10.244.2.3:1936
Session Affinity: None
Events:

NAME READY STATUS RESTARTS AGE
haproxy-ingress-3032474187-g6g3q 1/1 Running 0 23h
http-svc-220007766-b6mg6 1/1 Running 0 23h
ingress-default-backend-4027382565-bnk5h 1/1 Running 0 23h

Name: haproxy-ingress-3032474187-g6g3q
Namespace: default
Node: k8s-agentpool-38459447-1/10.240.0.5
Start Time: Thu, 26 Oct 2017 16:55:09 +0000
Labels: pod-template-hash=3032474187
run=haproxy-ingress
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"haproxy-ingress-3032474187","uid":"6d187a04-ba6e-11e7-b605-000d3...
Status: Running
IP: 10.244.2.3
Created By: ReplicaSet/haproxy-ingress-3032474187
Controlled By: ReplicaSet/haproxy-ingress-3032474187
Containers:
haproxy-ingress:
Container ID: docker://27a7b5ca990958fb87679ef0a7542e733060764b628272f7a19bb6b6fa6a3353
Image: quay.io/jcmoraisjr/haproxy-ingress
Image ID: docker-pullable://quay.io/jcmoraisjr/haproxy-ingress@sha256:bed3daf69d9045cab60a6353c454013df703c7ad850f947657378184bf588783
Ports: 80/TCP, 443/TCP, 1936/TCP
Args:
--default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
--default-ssl-certificate=$(POD_NAMESPACE)/tls-secret
--configmap=$(POD_NAMESPACE)/haproxy-ingress
State: Running
Started: Thu, 26 Oct 2017 16:55:19 +0000
Ready: True
Restart Count: 0
Environment:
POD_NAME: haproxy-ingress-3032474187-g6g3q (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-1qjs6 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-1qjs6:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-1qjs6
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations:
Events:

Add support for cert manager

As per the kube lego team, they are planning on moving away from kube lego to a more standardised way to automatically provision TLS certificates for Kubernetes.

This project is currently located here

It would be great if support for this new library could be added.

Thanks

interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Secret

After running some load for some time, my proxy died with the following exception. It seems to me that k8s was supposed to return something, but the ingress-controller got something it didn't expect.

W1106 15:35:39.450232 7 controller.go:1016] ssl certificate ntg6-coreload-core/ingress-gw-cert-secret does not contain a common name for host ntg6-coreload-haproxy.eastus2.cloudapp.azure.com
W1106 15:36:09.397685 7 controller.go:1016] ssl certificate ntg6-coreload-core/ingress-gw-cert-secret does not contain a common name for host ntg6-coreload-haproxy.eastus2.cloudapp.azure.com
W1106 15:36:09.448310 7 controller.go:1016] ssl certificate ntg6-coreload-core/ingress-gw-cert-secret does not contain a common name for host ntg6-coreload-haproxy.eastus2.cloudapp.azure.com
W1106 15:37:01.749835 7 controller.go:1016] ssl certificate ntg6-coreload-core/ingress-gw-cert-secret does not contain a common name for host ntg6-coreload-haproxy.eastus2.cloudapp.azure.com
W1106 15:38:37.652810 7 controller.go:1016] ssl certificate ntg6-coreload-core/ingress-gw-cert-secret does not contain a common name for host ntg6-coreload-haproxy.eastus2.cloudapp.azure.com
W1106 15:38:42.755545 7 controller.go:1016] ssl certificate ntg6-coreload-core/ingress-gw-cert-secret does not contain a common name for host ntg6-coreload-haproxy.eastus2.cloudapp.azure.com
I1106 15:40:07.222551 7 controller.go:184] ignoring delete for ingress gwing based on annotation kubernetes.io/ingress.class
I1106 15:40:07.228940 7 controller.go:184] ignoring delete for ingress pcsproxying based on annotation kubernetes.io/ingress.class
I1106 15:40:07.234685 7 controller.go:184] ignoring delete for ingress websocketsproxying based on annotation kubernetes.io/ingress.class
W1106 15:40:07.271407 7 controller.go:1016] ssl certificate ntg6-coreload-core/ingress-gw-cert-secret does not contain a common name for host ntg6-coreload-haproxy.eastus2.cloudapp.azure.com
W1106 15:40:07.418308 7 controller.go:1016] ssl certificate ntg6-coreload-core/ingress-gw-cert-secret does not contain a common name for host ntg6-coreload-haproxy.eastus2.cloudapp.azure.com
W1106 15:40:07.460991 7 reflector.go:323] github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:1155: watch of *v1.Secret ended with: too old resource version: 11259959 (39003766)
E1106 15:40:08.561235 7 runtime.go:66] Observed a panic: &runtime.TypeAssertionError{interfaceString:"interface {}", concreteString:"cache.DeletedFinalStateUnknown", assertedString:"*v1.Secret", missingMethod:""} (interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Secret)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/asm_amd64.s:514
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/panic.go:489
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/iface.go:172
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:219
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:206
:56
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:276
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:451
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:147
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:121
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:97
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:98
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:52
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:121
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/asm_amd64.s:2197
E1106 15:40:08.561359 7 runtime.go:66] Observed a panic: &runtime.TypeAssertionError{interfaceString:"interface {}", concreteString:"cache.DeletedFinalStateUnknown", assertedString:"*v1.Secret", missingMethod:""} (interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Secret)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/asm_amd64.s:514
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/panic.go:489
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/asm_amd64.s:514
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/panic.go:489
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/iface.go:172
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:219
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:206
:56
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:276
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:451
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:147
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:121
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:97
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:98
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:52
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:121
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/asm_amd64.s:2197
panic: interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Secret [recovered]
panic: interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Secret [recovered]
panic: interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Secret

goroutine 86 [running]:
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x126
panic(0x1387e40, 0xc420d02340)
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/panic.go:489 +0x2cf
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x126
panic(0x1387e40, 0xc420d02340)
/home/travis/.gimme/versions/go1.8.1.linux.amd64/src/runtime/panic.go:489 +0x2cf
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/ingress/core/pkg/ingress/controller.newIngressController.func5(0x13cd720, 0xc420e01140)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:219 +0x199
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(0x0, 0xc4201adda0, 0xc4201addb0, 0x13cd720, 0xc420e01140)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:206 +0x49
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache.(*ResourceEventHandlerFuncs).OnDelete(0xc420184680, 0x13cd720, 0xc420e01140)
:56 +0x73
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache.NewInformer.func1(0x1397520, 0xc4212ee300, 0x1397520, 0xc4212ee300)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:276 +0x516
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc420586300, 0xc4202b8f00, 0x0, 0x0, 0x0, 0x0)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:451 +0x27e
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache.(*controller).processLoop(0xc4204eb700)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:147 +0x40
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache.(*controller).(github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache.processLoop)-fm()
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:121 +0x2a
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc42057dfb0)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:97 +0x5e
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc420c3bfb0, 0x3b9aca00, 0x0, 0x12efb01, 0xc4204085a0)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:98 +0xbd
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc42057dfb0, 0x3b9aca00, 0xc4204085a0)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:52 +0x4d
github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache.(*controller).Run(0xc4204eb700, 0xc4204085a0)
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/client-go/tools/cache/controller.go:121 +0x237
created by github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/ingress/core/pkg/ingress/controller.GenericController.Start
/home/travis/gopath/src/github.com/jcmoraisjr/haproxy-ingress/vendor/k8s.io/ingress/core/pkg/ingress/controller/controller.go:1155 +0x1c1

Nab deployment instructions

CoreOS has removed the deployment instructions and example YAML from their master branch. Please nab them from their history and include them in this repo.

Confusing log message when configuring auth

with this

      ingress.kubernetes.io/auth-secret: kibana-auth
      ingress.kubernetes.io/auth-type: basic

I'm getting this log message:

W1113 08:50:16.948004       7 backend_ssl.go:41] error obtaining PEM from secret logging/kibana-auth: no keypair or CA cert could be found in logging/kibana-auth

I checked and kibana-auth is only referenced as an auth-secret.

Unable to start leader election, server does not allow access...

Whilst following the steps here to set up an HAProxy ingress, I hit an issue where the pod will not come online due to not being able to post an endpoint.

Currently using ubuntu 16.04 + kubeadm/kubectl 1.6.1 + kubernetes 1.6.0

Logs :

[] ➜  ~ kubectl logs haproxy-ingress-564356010-cl6s5
I0407 12:19:31.909095       6 launch.go:96] &{HAProxy v0.2 git-cd17319 https://github.com/jcmoraisjr/haproxy-ingress}
I0407 12:19:31.910060       6 launch.go:232] Creating API server client for https://10.96.0.1:443
I0407 12:19:31.969977       6 launch.go:115] validated default/ingress-default-backend as the default backend
F0407 12:19:31.985231       6 status.go:188] unexpected error starting leader election: the server does not allow access to the requested resource (post endpoints)

Client Certificate Authentication doesn't work

I'm using the latest commit at the moment -- 2a59950.

And I'm trying to get auth to work.

// AuthSSLCert contains the necessary information to do certificate based
// authentication of an ingress location
type AuthSSLCert struct {
	// Secret contains the name of the secret this was fetched from
	Secret string `json:"secret"`
	// CAFileName contains the path to the secrets 'ca.crt'
	CAFileName string `json:"caFilename"`
	// PemSHA contains the SHA1 hash of the 'tls.crt' value
	PemSHA string `json:"pemSha"`
}

According to this struct, I have the output of this structure in the config as a comment (I have modified the template):

{Secret:default/customer-api-ca CAFileName: PemSHA:07ee9b247ea45236b76f5bf65f885cedb0145370}

It seems like I'm doing something wrong.

No health check problem

Hi,
I followed this guide, created the haproxy-ingress rules, and also set
kubernetes.io/ingress.class: "haproxy"
since otherwise a Google load balancer gets created (as I am using GCE) even after I deploy the haproxy-controller DaemonSet.
My haproxy-controller image is "quay.io/jcmoraisjr/haproxy-ingress:v0.3-beta.2",
where I am passing /etc/haproxy/template from ConfigMaps.
And I made a service for haproxy-controller with type: LoadBalancer.

The problem is I am seeing "This load balancer has no health check, so traffic will be sent to all instances regardless of their status" in red, and also when I open my URLs in a browser they show
default backend - 404

my haproxy.tmpl

global

defaults
  mode    http
  timeout connect 5000
  timeout client  50000
  timeout server  50000

frontend localnodes
  bind *:80
  mode http
  default_backend servers

backend servers
  mode http
  balance roundrobin 
  server server1 mywebservicename:80 check

Could you help with this?

Support for converters through haproxy.tmpl?

I'm trying to do client certificate fingerprint extraction and forwarding by creating my own haproxy.tmpl, and adding a single line: http-request set-header X-SSL-Client-SHA1 %[ssl_c_sha1,hex], which works in the plain haproxy:1.7-alpine image. I found out that the ,hex part is breaking the functionality when using haproxy-ingress. If I just did http-request set-header X-SSL-Client-SHA1 %[ssl_c_sha1], it works as expected -- but I want the string representation of the value, not binary. Any ideas on how to work around this problem?

use TLS secrets from other namespaces

It would be cool if I could specify the namespace in the secretName, so I can store TLS credentials separately from the applications.

It would be handy to hide the certificate keys from developers

example:
secretName: kube-system/foobar-tls

hsts annotations

It'd be nice to be able to configure HSTS in annotations (for example, some ingresses might want to specify subdomains, others not, etc.).
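
Something like the following would cover the request; the key names mirror the controller's HSTS ConfigMap options and are illustrative as per-ingress annotations:

metadata:
  annotations:
    ingress.kubernetes.io/hsts: "true"
    ingress.kubernetes.io/hsts-max-age: "15768000"
    ingress.kubernetes.io/hsts-include-subdomains: "false"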

Cannot connect to API Server

HAProxy Ingress cannot connect to the API server and get the ingress-default-backend service, because it doesn't have a client certificate.

here is the error:
launch.go:118] no service with name kube-system/ingress-default-backend found: the server has asked for the client to provide credentials (get services ingress-default-backend)

I have checked the ConfigMap parameters for setting a client certificate, but I haven't found anything relevant.

kubernetes version: 1.7.4

(By the way, it's my first time installing kubernetes and also my first time creating an issue on GitHub; I'm really a newbie.)

Wrong handling of path

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoheaders
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  tls:
  - hosts:
    - domain.com
    secretName: tls-domain-com
  rules:
  - host: domain.com
    http:
      paths:
      - path: /ws
        backend:
          serviceName: echoheaders
          servicePort: 80

I expect only the exact /ws path to match, not anything else starting with /ws.
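
In HAProxy ACL terms, the difference is prefix versus exact path matching; a sketch (the controller's template defines its own ACL names):

acl ws-prefix path_beg /ws    # matches /ws, /ws/foo, and also /wsanything
acl ws-exact  path     /ws    # matches only exactly /ws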

HTTPS host header support

Technically, host headers should not be used for HTTPS connections; SNI, which has been chosen as the primary way to determine which frontend/backend to use, is the correct approach for this ingress controller.

HOWEVER! We ended up with a business use case where my team had to enable host header backend selection in the haproxy config template for HTTPS connections. Please let me know if you would like a PR opened against upstream with this solution; I will gladly submit it. I'm just unsure it belongs (it is gross...).

secure-verify-ca-secret with secret in other namespace than haproxy-ingress

I was trying to set up an ingress with the annotations secure-backends: "true" and a secure-verify-ca-secret.

My haproxy-ingress is running in the default namespace while my ingress and services are running in a separate namespace.

I created the ca secret as a generic secret with exactly one element called ca.crt and the ca certificate in PEM format.

From the logs of the haproxy-ingress pod I see that it detects the annotation but is unable to fetch the secret from the other namespace.

Any idea what might be wrong? (running kubernetes 1.6.6 through Rancher)

kubectl create secret generic -n other ca-secret --from-file=ca.crt
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: other
  annotations:
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/secure-verify-ca-secret: "ca-secret"
spec:
  rules:
  - host: example.com
  tls:
  - hosts:
    - example.com
    secretName: example-secret
  backend:
    serviceName: app-service
    servicePort: 443
I0726 12:29:24.702877       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"other", Name:"app-ingress", UID:"xxx", APIVersion:"extensions", ResourceVersion:"8277915", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress other/app-ingress
E0726 12:29:24.800131       7 annotations.go:112] error parsing secure upstream: error obtaining certificate: secret other/ca-secret does not exist
I0726 12:29:24.800150       7 leaderelection.go:203] attempting to acquire leader lease...
I0726 12:29:24.800477       7 backend_ssl.go:63] adding secret other/ca-secret to the local store
I0726 12:29:24.800744       7 controller.go:1022] ssl certificate "other/example-secret" does not exist in local store
I0726 12:29:24.801265       7 backend_ssl.go:63] adding secret other/example-secret to the local store
I0726 12:29:24.825600       7 controller.go:421] ingress backend successfully reloaded...

global maxconn should be tunable

HAProxy has several maxconn keywords.

Currently, the Ingress Controller implements max-connections, which sets the maxconn in the defaults section (applying to each proxy). The issue is that the global one can't be modified, leading to a limit (DEFAULT_MAXCONN) that can't be overridden.

By the way, the link in README.md redirects to the wrong maxconn.
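
For clarity, the two settings in haproxy.cfg terms (values illustrative):

global
    # process-wide ceiling; this is the one that currently cannot be set
    # and falls back to the compile-time DEFAULT_MAXCONN
    maxconn 100000

defaults
    # per-proxy limit; this is what the max-connections option sets today
    maxconn 20000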
