
:gorilla: Kong for Kubernetes: The official Ingress Controller for Kubernetes.

Home Page: https://docs.konghq.com/kubernetes-ingress-controller/

License: Apache License 2.0

Makefile 0.81% Go 98.69% Shell 0.35% Dockerfile 0.09% Smarty 0.06%
ingress-controller kubernetes kong kubernetes-ingress ingress kubernetes-ingress-controller apis k8s crds microservices

kubernetes-ingress-controller's Introduction


Kong Ingress Controller for Kubernetes (KIC)

Use Kong for Kubernetes Gateway API or Ingress. Configure plugins, health checking, load balancing and more, all using Custom Resource Definitions (CRDs) and Kubernetes-native tooling.

Features | Get started | Documentation | main branch builds | Seeking help

Features

  • Gateway API support Use Gateway API resources (the official successor to Ingress) to configure Kong. Native support for TCP, UDP, TLS, gRPC, and HTTP/HTTPS traffic, with the ability to reuse the same gateway across multiple protocols and namespaces.
  • Ingress support Use Ingress resources to configure Kong.
  • Declarative configuration for Kong Configure all of Kong's features declaratively, the Kubernetes-native way, with CRDs.
  • Seamlessly operate Kong Scale and manage multiple replicas of Kong Gateway automatically to ensure performance and high availability.
  • Health checking and load balancing Load-balance requests across your pods, with support for active and passive health checks.
  • Enhanced API management using plugins Use a wide array of plugins, for example:
    • authentication
    • request/response transformations
    • rate-limiting
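As an illustration of the plugin mechanism, here is a sketch of a rate-limiting plugin attached to a Service via the `konghq.com/plugins` annotation (resource names and limits are placeholders, not taken from this repository):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-example      # illustrative name
plugin: rate-limiting
config:
  minute: 5                     # allow 5 requests per minute
  policy: local                 # counters kept in memory per node
---
apiVersion: v1
kind: Service
metadata:
  name: my-service              # illustrative name
  annotations:
    konghq.com/plugins: rate-limit-example
spec:
  selector:
    app: my-service
  ports:
  - port: 80
    targetPort: 8080
```

The same annotation can be placed on an Ingress or HTTPRoute to scope the plugin to specific routes instead of the whole service.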

Get started (using Helm)

You can use Minikube or Kind on your local machine or use a hosted Kubernetes service like GKE.

Install the Gateway API CRDs

This command will install all resources that have graduated to GA or beta, including GatewayClass, Gateway, HTTPRoute, and ReferenceGrant.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml

Or, if you want to use experimental resources and fields such as TCPRoutes and UDPRoutes, please run this command.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/experimental-install.yaml

Install the Kong Ingress Controller with Helm

helm install kong --namespace kong --create-namespace --repo https://charts.konghq.com ingress

For more details about the Helm chart, follow the Helm chart documentation.

Once installed, please follow the Getting Started guide to start using Kong in your Kubernetes cluster.
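Once the Gateway API CRDs, a Gateway, and the controller are in place, routes can be declared as ordinary Kubernetes resources. A minimal HTTPRoute sketch (hostname and backend names are placeholders, and a Gateway named `kong` is assumed to exist):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo-route              # placeholder name
spec:
  parentRefs:
  - name: kong                  # the Gateway created for Kong
  hostnames:
  - example.internal            # placeholder hostname
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - name: echo-service        # placeholder backend Service
      port: 80
```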

Note: Kong Enterprise users, please follow along with our enterprise guide to set up the enterprise version.

Get started (using Operator)

As an alternative to Helm, you can also install Kong Ingress Controller using the Kong Gateway Operator by following this quick start guide.

Container images

Release images

Release builds of Kong Ingress Controller can be found on Docker Hub in the kong/kubernetes-ingress-controller repository.

At the moment we're providing images for:

  • Linux amd64
  • Linux arm64

main branch builds

Nightly pre-release builds of the main branch are available from the kong/nightly-ingress-controller repository hosted on Docker Hub:

main contains unreleased new features for upcoming minor and major releases:

docker pull kong/nightly-ingress-controller:nightly

Documentation

All documentation for the Kong Ingress Controller is present in the kong/docs.konghq.com repository. Pull Requests are welcome for additions and corrections.

Guides and Tutorials

Please browse through the guides to get started and to learn specific ingress controller operations.

Contributing

We ❤️ pull requests and we’re continually working hard to make it as easy as possible for developers to contribute. Before beginning development with the Kong Ingress Controller, please familiarize yourself with the following developer resources:

Seeking help

Please search through the FAQs, posts on the discussions page or the Kong Nation Forums as it's likely that another user has run into the same problem. If you don't find an answer, please feel free to post a question.

If you've found a bug, please open an issue.

For a feature request, please open an issue using the feature request template.

You can also talk to the developers behind Kong in the #kong channel on the Kubernetes Slack server.

Community meetings

You can join monthly meetups hosted by Kong to ask questions, provide feedback, or just to listen and hang out. See the Online Meetups Page to sign up and receive meeting invites and Zoom links.

Preview and Experimental Features

At any time the KIC may include features or options that are considered experimental and are not enabled by default, nor available in the Kong Documentation Site.

To try out new features that are behind feature gates, see FEATURE_GATES.md; documentation for these preview features can be found in FEATURE_PREVIEW_DOCUMENTATION.md.
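A sketch of how a gate might be toggled via the Helm chart, assuming the chart maps `ingressController.env` entries to `CONTROLLER_*` environment variables (both the value path and the `GatewayAlpha` gate name are assumptions; check FEATURE_GATES.md and the chart documentation):

```yaml
# values.yaml fragment (assumed structure for the kong/ingress chart)
controller:
  ingressController:
    env:
      feature_gates: "GatewayAlpha=true"   # becomes CONTROLLER_FEATURE_GATES
```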

kubernetes-ingress-controller's People

Contributors

aledbf, andref5, ccfish2, chazdnato, czeslavo, dependabot[bot], fffonion, hbagdi, jeveleth, jrsmroz, llinvillegwre, ludovic-pourrat, mflendrich, mlavacca, mmorel-35, pangruoran, pmalek, programmer04, rainest, randmonkey, renovate[bot], rodman10, sayboras, shaneutt, subicura, svenwal, tao12345666333, tharun208, ukiahsmith, yuchunyu97


kubernetes-ingress-controller's Issues

x-forwarded-for does not set correct ip


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Kong Ingress controller version:

kong:0.13.1-centos

Kubernetes version (use kubectl version):

# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
[root@dfkjyph kong]# uname -a
Linux dfkjyph 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[root@dfkjyph kong]# cat /etc/redhat-release 
CentOS Linux release 7.3.1611 (Core) 
[root@dfkjyph kong]# 

What happened:

Can't get the correct x-forwarded-for ip

What you expected to happen:

The x-forwarded-for header should be set to the correct client IP.

How to reproduce it (as minimally and precisely as possible):

[root@dfkjyph kong]# cat dummy-application.yaml 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: http-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - name: http-svc
        image: chinglinwen/echoserver:1.8
        ports:
        - containerPort: 8080
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP

---

apiVersion: v1
kind: Service
metadata:
  name: http-svc
  labels:
    app: http-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc

---
[root@dfkjyph kong]# cat dummy.ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-svc
spec:
  rules:
  - host: dummy.service
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc 
          servicePort: http
[root@dfkjyph kong]# 

Access is from a browser (Kong has a NodePort-type service, port 80 on every node).

Map the hostname specified in the dummy ingress to the IP of any node.

The following HTML page is returned:



Hostname: http-svc-55dd675888-wbvqx

Pod Information:
	node name:	dfkjyph-46-122
	pod name:	http-svc-55dd675888-wbvqx
	pod namespace:	default
	pod IP:	172.28.235.32

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.28.233.36
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.28.235.32:8080/

Request Headers:
	accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
	accept-encoding=gzip, deflate
	accept-language=en,en-US;q=0.9,zh;q=0.8,zh-CN;q=0.7
	cache-control=no-cache
	connection=keep-alive
	host=172.28.235.32:8080
	pragma=no-cache
	upgrade-insecure-requests=1
	user-agent=Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36
	x-forwarded-for=172.28.46.122
	x-forwarded-host=dummy.service
	x-forwarded-port=8000
	x-forwarded-proto=http
	x-real-ip=172.28.46.122

Request Body:
	-no body in request-

172.28.46.122 is the node IP.

I expect x-forwarded-for to be my desktop IP, 172.28.66.71.
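A common cause of this symptom is kube-proxy source-NAT on NodePort services: traffic forwarded from a node that does not host a Kong pod arrives with the node's IP as the source. A minimal mitigation sketch (service name, selector, and ports are illustrative) sets `externalTrafficPolicy: Local` on the Kong proxy Service so the original client IP is preserved:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy                # illustrative; match your Kong proxy Service
spec:
  type: NodePort
  externalTrafficPolicy: Local    # preserve client source IP; traffic is only
                                  # served by nodes that run a Kong pod
  selector:
    app: kong
  ports:
  - name: proxy
    port: 80
    targetPort: 8000
```

Note the trade-off: with `Local`, nodes without a Kong pod drop the traffic instead of forwarding it.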

Anything else we need to know:

No way to modify the deployment created by the KongIngress CRD without patching it

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature request

Kong Ingress controller version:
I created the resources directly from https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-postgres.yaml

What happened:
I need to add things like pod labels and resources to the kong ingress controller deployment, but can't find docs on the kong CRDs, and don't know if doing that is supported.

What you expected to happen:
I expected adding pod labels and other customizations, such as resource requests/limits, to the kong deployment to be supported.

How to reproduce it (as minimally and precisely as possible):
Try adding a custom label to the kong deployment without patching it.

Problem with strip_path in 0.0.4

BUG REPORT

Kong Ingress controller version:
0.0.4

Kubernetes version

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.5", GitCommit:"cce11c6a185279d037023e02ac5249e14daa22bf", GitTreeState:"clean", BuildDate:"2017-12-07T18:09:00Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} 

Environment:

  • Cloud provider or hardware configuration: OSX 10.12.6
  • OS(e.g. from /etc/os-release): minikube version: v0.26.1
  • Install tools: Helm

What happened:

Adding a mockbin ingress resource like this, but with an added path /mockbin, behaves differently in kong-ingress-controller:0.0.3 and kong-ingress-controller:0.0.4. Trying curl ${PROXY_IP}:${HTTP_PORT}/mockbin in 0.0.4 gives 404 Not Found, whilst in 0.0.3 we reach mockbin, i.e. the path is stripped properly and mockbin.org is called directly (without the path).

In the working example in 0.0.3 we used an Ingress resource and a KongIngress resource with strip_path, like this (from our Helm charts):

Ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-from-k8s-to-mockbin
  namespace: kong-test
  annotations:
    ingress.plugin.konghq.com: proxy-from-k8s-to-mockbin
    request-transformer.plugin.konghq.com: |
      transform-request-to-mockbin
spec:
  rules:
  - host: api.dev.oltd.de
    http:
      paths:
      - path: /mockbin
        backend:
          serviceName: proxy-to-mockbin
          servicePort: 80

KongIngress resource:

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: proxy-from-k8s-to-mockbin
  namespace: kong-test
upstream:
  hash_on: none
  hash_fallback: none
  healthchecks:
    active:
      concurrency: 10
      healthy:
        http_statuses:
        - 200
        - 302
        interval: 0
        successes: 0
      http_path: "/mockbin"
      timeout: 1
      unhealthy:
        http_failures: 0
        http_statuses:
        - 429
        interval: 0
        tcp_failures: 0
        timeouts: 0
    passive:
      healthy:
        http_statuses:
        - 200
        successes: 0
      unhealthy:
        http_failures: 0
        http_statuses:
        - 429
        - 503
        tcp_failures: 0
        timeouts: 0
  slots: 10
proxy:
  path: /mockbin
  connect_timeout: 10000
  retries: 10
  read_timeout: 10000
  write_timeout: 10000
route:
  methods:
  - POST
  - GET
  - DELETE
  - HEAD
  regex_priority: 0
  strip_path: true
  preserve_host: false

What you expected to happen:
The results differ in Kong's routes and services; I would expect them to be the same for 0.0.3 and 0.0.4:

## Routes:
{
  "next": null,
  "data": [
    {
      "created_at": 1527236461,
      "strip_path": true,
      "hosts": [
        "api.dev.oltd.de"
      ],
      "preserve_host": false,
      "regex_priority": 0,
      "updated_at": 1527236461,
      "paths": [
        "/mockbin"
      ],
      "service": {
        "id": "8838cb0f-866e-4fb2-9857-8210706652ab"
      },
      "methods": [
        "POST",
        "GET",
        "DELETE",
        "HEAD"
      ],
      "protocols": [
        "http"
      ],
      "id": "c5bb6ecc-e5f0-4c1a-8067-3948bc520e11"
    }
  ]
}
## Services:
{
  "next": null,
  "data": [
    {
      "host": "kong-test.proxy-to-mockbin.80",
      "created_at": 1527236461,
      "connect_timeout": 10000,
      "id": "8838cb0f-866e-4fb2-9857-8210706652ab",
      "protocol": "http",
      "name": "kong-test.proxy-to-mockbin.80",
      "read_timeout": 10000,
      "port": 80,
      "path": "/mockbin",
      "updated_at": 1527236461,
      "retries": 10,
      "write_timeout": 10000
    }
  ]
}

Previously, the Ingress and KongIngress resulted in a similar service, but the route had a different path: /

Kubernetes ingress rule not working

Is this a request for help?: yes

What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
ingress, kubernetes

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kong Ingress controller version:
kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.0.4

Kubernetes version (use kubectl version):
v1.9.3

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Ubuntu 16.04
  • Kernel (e.g. uname -a): 4.4
  • Install tools: Kubernetes cluster on AWS instances deployed by Kops
  • Others:

What happened:
I deployed Kong with ingress controller using this manifest: https://pastebin.com/raw/fNdceRc5
Kubernetes resource overview: https://pastebin.com/raw/N7VnhHHx

I created simple ingress rule to expose Kong dashboard:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-dashboard-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - path: /kong
        backend:
          serviceName: kong-dashboard
          servicePort: 8080

However, it seems that the dashboard is not exposed:

$ curl XXX.us-east-1.elb.amazonaws.com/kong
{"message":"no route and no API found with those values"}

Last logs from ingress-controller:

I0515 06:04:32.804892       5 controller.go:127] syncing Ingress configuration...
I0515 06:07:47.564574       5 controller.go:127] syncing Ingress configuration...
I0515 06:07:47.565028       5 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kong", Name:"kong-dashboard-ingress", UID:"84e3d91f-578e-11e8-b6a3-12b7dba3ea38", APIVersion:"extensions", ResourceVersion:"640543", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress kong/kong-dashboard-ingress
I0515 06:14:26.138244       5 controller.go:127] syncing Ingress configuration...
I0515 06:14:29.471761       5 controller.go:127] syncing Ingress configuration...
I0515 06:14:32.805099       5 controller.go:127] syncing Ingress configuration...

In Kong logs I see only 404s.

Here is description of ingress resource:

Name:             kong-dashboard-ingress
Namespace:        kong
Address:          XXX.us-east-1.elb.amazonaws.com
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /kong   kong-dashboard:8080 (<none>)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"kong"},"name":"kong-dashboard-ingress","namespace":"kong"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"kong-dashboard","servicePort":8080},"path":"/kong"}]}}]}}

  kubernetes.io/ingress.class:  kong
Events:                         <none>

What you expected to happen:
I expect that Kong dashboard will be available on XXX.us-east-1.elb.amazonaws.com/kong

How to reproduce it (as minimally and precisely as possible):
Try to apply above mentioned manifests in Kubernetes on AWS.

Anything else we need to know:

Bug when creating Consumer

Is this a request for help?: yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kong Ingress controller version: 0.0.5
Kong version: 0.14.0

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:on premise kubernetes
  • OS (e.g. from /etc/os-release): Oracle Linux Server 7.5
  • Kernel (e.g. uname -a): 4.1.12-112.16.7.el7uek.x86_64

What happened:

I'm trying to create a simple Kong configuration based on the custom types tutorial.

But whenever I try to create a new consumer, the controller messes up resource creation/synchronization between Kubernetes and Kong.

I'm using the following configuration for the consumer creation:

apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: consumer-team-x
  namespace: kong
username: team-X

both entities get created:

  • kubernetes: kubectl describe kongconsumer consumer-team-x
Name:         consumer-team-x
Namespace:    kong
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"configuration.konghq.com/v1","kind":"KongConsumer","metadata":{"annotations":{},"name":"consumer-team-x","namespace":"kong"},"username":...
API Version:  configuration.konghq.com/v1
Kind:         KongConsumer
Metadata:
  Cluster Name:        
  Creation Timestamp:  2018-07-20T18:00:31Z
  Generation:          1
  Resource Version:    11580589
  Self Link:           /apis/configuration.konghq.com/v1/namespaces/kong/kongconsumers/consumer-team-x
  UID:                 cb2a35d1-8c46-11e8-a4ba-90b11c1dd55e
Username:              team-X
Events:                <none>
  • admin-api: curl http://<host>:<port>/consumers
{
   "next":null,
   "data":[
      {
         "custom_id":null,
         "created_at":1532109545,
         "username":"team-X",
         "id":"8f81bc04-df4e-4afe-b165-3be99e67dcf6"
      }
   ]
}

Although both entities are there, the ingress controller keeps trying to recreate the consumer object in Kong using the Kubernetes UID cb2a35d1-8c46-11e8-a4ba-90b11c1dd55e, and doesn't complete the setup of the other configuration objects. Controller log:

W0720 18:27:32.851926       7 queue.go:113] requeuing simet/lmap-collector, err creating a Kong consumer: the server reported a conflict (post consumers.meta.k8s.io)
I0720 18:27:36.181051       7 controller.go:127] syncing Ingress configuration...
I0720 18:27:36.181075       7 kong.go:465] checking if Kong consumer consumer-team-x exists
I0720 18:27:36.182976       7 kong.go:469] Creating Kong consumer cb2a35d1-8c46-11e8-a4ba-90b11c1dd55e
E0720 18:27:36.185011       7 controller.go:130] unexpected failure updating Kong configuration: 
creating a Kong consumer: the server reported a conflict (post consumers.meta.k8s.io)
W0720 18:27:36.185035       7 queue.go:113] requeuing kong/kong-ingress-controller, err creating a Kong consumer: the server reported a conflict (post consumers.meta.k8s.io)

and looking at admin-api logs, we can see that the controller is trying to recreate the consumer with the wrong id:

::1 - - [20/Jul/2018:18:49:32 +0000] "POST /consumers HTTP/1.1" 409 144 "-" "Go-http-client/1.1"
::1 - - [20/Jul/2018:18:49:36 +0000] "GET /consumers/cb2a35d1-8c46-11e8-a4ba-90b11c1dd55e HTTP/1.1" 404 24 "-" "Go-http-client/1.1"
2018/07/20 18:49:36 [notice] 88#0: *245958 [lua] init.lua:391: insert(): ERROR: duplicate key value violates unique constraint "consumers_username_key"
Key (username)=(team-X) already exists., client: ::1, server: kong_admin, request: "POST /consumers HTTP/1.1", host: "localhost:8001"
::1 - - [20/Jul/2018:18:49:36 +0000] "POST /consumers HTTP/1.1" 409 144 "-" "Go-http-client/1.1"

note: Kong consumer uuid is: 8f81bc04-df4e-4afe-b165-3be99e67dcf6

I've tried to recreate all pods and configurations a couple of times and with some modifications, but things always end up like that.

What you expected to happen:

I'd expect the Kong object to be created with the Kubernetes object UID so the controller can work properly.

How to reproduce it (as minimally and precisely as possible):

I'm using a namespace similar to the one described in the controller tutorial:
https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-postgres.yaml

Thanks in advance.

protocols in routes should be configurable

Is this a BUG REPORT or FEATURE REQUEST?:

Kong Ingress controller version:
0.0.3

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T21:12:46Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): Linux
  • Kernel (e.g. uname -a):

What happened:
We create a KongIngress YAML resource whose route has a protocols section (https), but the ingress controller disregards this value.

What you expected to happen:
protocols in routes should be configurable.

How to reproduce it (as minimally and precisely as possible):
Deploy and set up a KongIngress resource:

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: deployment-example
route:
  protocols:
  - https
  methods:
  - GET
  preserve_host: false
  regex_priority: 0
  strip_path: true

how to access kong ingress controller admin api from aws kubernetes?

Is this a request for help?:
yes
What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):

kong ingress controller in aws kubernetes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
no

Kong Ingress controller version:
Installed using this command -

curl https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-postgres.yaml \
| kubectl create -f -

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Amazon linux
  • Kernel (e.g. uname -a): Linux ip-172-31-46-44 4.14.33-51.37.amzn1.x86_64 #1 SMP Thu May 3 20:07:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:

What happened:
I have tried https://github.com/Kong/kubernetes-ingress-controller/blob/master/deploy/minikube.md on my laptop and it works well. I managed to create an ingress resource for another API exposed as a service.

Now I am trying the same with an AWS Kubernetes cluster created by kops with a k8s.local gossip private network (not a subdomain of a domain from AWS Route53 with a public network). I am able to deploy the ingress controller, services, and ingress routes for them using the kubectl utility in the cluster, but am now stuck at accessing the Kong admin and proxy APIs. There is an elastic load balancer created in AWS; it is set to listen on port 443, and the Kubernetes API is accessible only through HTTPS.

Is there any documentation that explains how to set up access via an AWS ELB to the Kong admin and proxy APIs running in a Kubernetes cluster?

This is my cluster in AWS - output from kops edit cluster command.

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2018-05-29T13:51:32Z
  name: k8slab.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://k8s-lab-kops-state-store/k8slab.k8s.local
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    name: main
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.9.3
  masterInternalName: api.internal.k8slab.k8s.local
  masterPublicName: api.k8slab.k8s.local
  networkCIDR: 172.31.0.0/16
  networkID: vpc-7d8a061a
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.31.48.0/20
    name: eu-west-1a
    type: Public
    zone: eu-west-1a
  - cidr: 172.31.64.0/20
    name: eu-west-1b
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):
Create a kubernetes cluster using this guide - https://dzone.com/articles/how-to-create-a-kubernetes-cluster-on-aws-in-few-m and deploy kong ingress controller and necessary services and ingress routes using https://github.com/Kong/kubernetes-ingress-controller/blob/master/deploy/minikube.md document.

Anything else we need to know:

GKE ingress not creating upstream

Is this a request for help?:
Yes
What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
ingress, upstream

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG

Kong Ingress controller version:
0.0.4

Kubernetes version (use kubectl version):
1.9.7

Environment:

  • Cloud provider or hardware configuration: Googe Cloud
  • OS (e.g. from /etc/os-release): unknown
  • Kernel (e.g. uname -a): unknown
  • Install tools: kubectl
  • Others:

What happened:
I deployed the Kong ingress controller according to the YAML provided in the manifest. Further, I'm using Konga as a GUI for Kong to see created upstreams, services, etc. I use an auth-nginx deployment, but I think any simple nginx sample application will be enough to replicate the issue.

The ingress i created:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth-ingress
  annotations: 
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
   - http:
      paths:
      - path: /auth
        backend:
          serviceName: auth-nginx
          servicePort: 80

The 'auth-nginx' service is a NodePort-type service and is running. When I create the above ingress with

kubectl create -f ingress.yaml

I see that the ingress is created in Google Cloud Kubernetes -> Services. By specifying the nginx ingress class, Kong should be notified and create an upstream. Is there anything I'm missing as to why Kong Ingress doesn't notice the creation of the ingress?
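One plausible explanation, offered as an assumption rather than a confirmed diagnosis: the Kong ingress controller only processes Ingress resources whose `kubernetes.io/ingress.class` annotation matches its own class (`kong` by default), so an ingress annotated with class `nginx` would be ignored. A sketch of the adjusted annotation on the same resource:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth-ingress
  annotations:
    kubernetes.io/ingress.class: "kong"   # "kong" rather than "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /auth
        backend:
          serviceName: auth-nginx
          servicePort: 80
```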

What you expected to happen:
Upstream created in Kong. And the upstream will be reachable with http://IP_OF_LB/auth

How to reproduce it (as minimally and precisely as possible):
Deploy above script in Google Cloud

Anything else we need to know:
Not that i can think of right now

oauth2 service config disappears after a while

Is this a request for help?: No

What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.): Bug


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug

I want to set up oauth2 for a service. I set the oauth2 plugin using the command below:

curl -X POST http://127.0.0.1:21688/services/nlp.service.satement/plugins \
  --data "name=oauth2" \
  --data "config.scopes=email,phone,address" \
  --data "config.mandatory_scope=true" \
  --data "config.enable_password_grant=true"

I see the config at http://127.0.0.1:21688/services/nlp.service.satement/plugins:

{
  "total": 1,
  "data": [
    {
      "created_at": 1528278429000,
      "config": {
        "refresh_token_ttl": 1209600,
        "enable_client_credentials": false,
        "mandatory_scope": true,
        "token_expiration": 7200,
        "hide_credentials": false,
        "scopes": [
          "email",
          "phone",
          "address"
        ],
        "enable_implicit_grant": false,
        "global_credentials": false,
        "anonymous": "",
        "enable_password_grant": true,
        "accept_http_if_already_terminated": false,
        "enable_authorization_code": false,
        "provision_key": "HEa6OUAwqRONFVGHmMslsoT9yaTDSPgR",
        "auth_header_name": "authorization"
      },
      "id": "d959e8cd-afa2-4797-bd82-ec7a4f297672",
      "name": "oauth2",
      "service_id": "94548d5d-b055-49ff-9d23-ffd32cb80380",
      "enabled": true
    }
  ]
}

After a few minutes, this config disappears. When I visit http://127.0.0.1:21688/services/nlp.service.satement/plugins,
it returns an empty list.

Kong Ingress controller version: 0.0.4

Kubernetes version (use kubectl version): 1.8

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): ubuntu
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
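
One plausible explanation (an assumption, not confirmed here): the ingress controller reconciles Kong's state from Kubernetes resources, so plugins created directly through the Admin API can be removed on the next sync. A hedged sketch of declaring the same plugin as a KongPlugin resource instead, following the annotation convention used elsewhere in these reports (the resource name `statement-oauth2` is hypothetical):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: statement-oauth2   # hypothetical name
config:
  scopes:
  - email
  - phone
  - address
  mandatory_scope: true
  enable_password_grant: true
```

The resource would then be attached to the relevant Ingress with an annotation such as `oauth2.plugin.konghq.com: statement-oauth2`.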

Plugin annotations do not work with multiple hosts

Kong Ingress controller version:
kong:0.13.1-centos

Kubernetes version (use kubectl version):
v1.9.3

What happened:
I have an ingress with multiple host matches (which turn into routes in Kong) and have added a handful of plugin annotations to it. The result was that one route received all but one of the plugins, while another route received only the remaining one.

What you expected to happen:
All hosts/routes would have all the plugins applied.

How to reproduce it (as minimally and precisely as possible):

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-request-id
config:
  header_name: Request-ID

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    correlation-id.plugin.konghq.com: add-request-id
spec:
  rules:
  - host: test1.com
    http: &http_rules
      paths:
      - path: /mypath
        backend:
          serviceName: myservice
          servicePort: 80
  - host: test2.com
    http: *http_rules
  - host: test3.com
    http: *http_rules

Apply the above and observe that /routes shows a unique id for each of the hosts above, while /plugins shows each plugin appearing only once, bound to a single route_id.
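
A toy model of the reported symptom, as a sketch (the binder functions here are illustrative, not the controller's actual code): the Ingress expands to one Kong route per host, and the bug is equivalent to binding the annotated plugin to a single route rather than to every route.

```python
routes = ["test1.com", "test2.com", "test3.com"]

def bind_buggy(routes, plugin):
    # mirrors the reported symptom: the plugin ends up bound to a single route_id
    return {routes[0]: [plugin]}

def bind_expected(routes, plugin):
    # what the reporter expected: the plugin applied to every host/route
    return {host: [plugin] for host in routes}

assert bind_buggy(routes, "add-request-id") == {"test1.com": ["add-request-id"]}
assert all(plugins == ["add-request-id"]
           for plugins in bind_expected(routes, "add-request-id").values())
```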

The response-transformer plugin continuously detects outdated configuration to synchronize

Summary

This has specifically been observed with the response-transformer plugin but may extend to others as well.

In these cases, the KongPlugin resource is created (and attached to an Ingress), and the entry is correctly added to Kong with the plugin associated to the route. The issue is that, watching the ingress-controller logs, you can see the controller repeatedly trying to update the plugin.

The larger risk is that it may trigger issues like Kong/kong#3423 when many plugins are used.

Kong Ingress controller version

0.1.0

Kubernetes version

1.9.3

Environment

RDS Postgresql Kong Database

What happened

The KongPlugin was created and the corresponding Kong configuration was applied, but the ingress-controller logs show that it thinks the two are out of sync and tries to persist the changes every 10 minutes.

Immediately after saving a new configuration I can sometimes get an "up to date" message, but on the next synchronization check (10 minutes?) it's back to being "outdated".

Expected behavior

Updates are not attempted against Kong when the KongPlugin and Kong database appear to be in sync.

Steps To Reproduce

  1. Create a KongPlugin with the following configuration
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-security-headers
config:
  add:
    headers:
    - "X-Xss-Protection:1; mode=block"
  2. Attach this plugin to an Ingress using response-transformer.plugin.konghq.com: add-security-headers

  3. Tail the ingress-controller container logs filtered for add-security-headers

  4. Observe logs similar to the following, which repeat regularly

I0823 01:27:55.081377       7 kong.go:784] configuring plugin stored in KongPlugin CRD add-security-headers
I0823 01:27:55.133977       7 kong.go:824] plugin add-security-headers configuration in kong is outdated. Updating...

Other Information

I spent some time trying to isolate this further, without much success. I tried specifying all components of the response-transformer plugin (remove/append/replace) with empty lists/maps, also without success. I'm having trouble determining what exactly causes the controller to think the values are mismatched.
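
One plausible mechanism for the loop, stated as an assumption rather than a confirmed diagnosis: Kong's Admin API echoes plugin configuration with schema defaults filled in, so a naive deep-equality comparison against the sparse KongPlugin config always reports a difference. A minimal sketch (the `actual` payload is illustrative, not a real Admin API response):

```python
desired = {"add": {"headers": ["X-Xss-Protection:1; mode=block"]}}

# Illustrative echo from the Admin API, with empty defaults populated.
actual = {
    "add": {"headers": ["X-Xss-Protection:1; mode=block"], "json": []},
    "remove": {"headers": [], "json": []},
    "append": {"headers": [], "json": []},
    "replace": {"headers": [], "json": []},
}

def naive_outdated(desired, actual):
    # flags a diff even though the user-specified intent matches
    return desired != actual

assert naive_outdated(desired, actual)
```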

[ help request ] assign kong's pod to a specific worker node

Hello,

Kubernetes version : v1.10.1
kubernetes-ingress-controller version : 0.0.4

I'm trying to assign Kong's pods to a specific worker node. I found a solution in the Kubernetes documentation: nodeSelector (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/). But when I apply this spec to the all-in-one-postgres.yaml file, I get an error like this:

[ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "nodeSelector" in io.k8s.api.core.v1.Container, ValidationError(Deployment.spec.template.spec.containers[1]): unknown field "nodeSelector" in io.k8s.api.core.v1.Container]; if you choose to ignore these errors, turn validation off with --validate=false

Is there another way to do this?
Thanks
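
The validation error suggests nodeSelector was placed inside the container entries, but it is a pod-level field. A minimal sketch of the correct placement in a Deployment (the disktype: ssd label is a placeholder for a label actually present on your node):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd   # placeholder; use a label present on your node
      containers:
      - name: kong
        image: kong
```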

Status of controller?

Great to see the code in the wild.

I was just curious whether this is something we should consider ready for production and/or development purposes.

Docs request: upstream config example

First of all, thank you so much for this project!

I am trying to customize one of the upstreams that the kong ingress controller created. Specifically, I want to add hash_on to one of the upstreams per the admin API docs.

The README notes:

Instead it uses the Endpoints API to bypass kube-proxy to allow Kong features like session affinity and custom load balancing algorithms.

Is there an example of how I can add session affinity to an ingress resource?
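
A hedged sketch of what such an example might look like, assuming the KongIngress CRD exposes the upstream object's fields the same way its proxy and route sections do (worth verifying against the controller version in use; the resource name is hypothetical, and per reports elsewhere in this tracker the KongIngress may need to share its name with the Ingress):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: session-affinity   # hypothetical name
upstream:
  hash_on: ip
```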

KongIngress Service Upstream HTTPS

Is this a request for help?:

No

What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):

  • kongingress
  • ssl

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Bug report
Kong Ingress controller version:

0.0.4

Kubernetes version (use kubectl version):

Server: v1.10.4-gke.0
Client: v1.9.7

Environment:

  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release): I am using the Container Optimized OS provided by google:
BUILD_ID=10452.101.0
NAME="Container-Optimized OS"
PRETTY_NAME="Container-Optimized OS from Google"
VERSION=66
ID=cos
  • Kernel (e.g. uname -a): Linux gke-prod-1-default-pool-560ebb10-xx4d 4.14.22+
  • Install tools: I used the basic getting started provided by your blog
  • Others:

What happened:

I am trying to add an external service (a Google Cloud Function HTTPS trigger) over HTTPS, proxying my serverless function through Kong. This is what I am trying to achieve:

 | Incoming Request | ----- http://foo.baz.local/foo -----> | Kong | ------ https://myproject.cloudfunctions.net/foo-prod -----> | Google Function |

With the following configuration, Kong keeps making requests to the service over HTTP.

kind: Service
apiVersion: v1
metadata:
  name: fooservice
  namespace: foo
spec:
  type: ExternalName
  externalName: myproject.cloudfunctions.net
  ports:
  - name: http
    port: 80
    protocol: TCP
  - name: https
    port: 443
    protocol: TCP

---
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: foo-ingress
  namespace: foo
proxy:
  path: /foo-prod
  protocol: https
route:
  strip_path: true
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: transform-request-to-external
  namespace: foo
config:
  remove:
    headers: host
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
  namespace: foo
  annotations:
    request-transformer.plugin.konghq.com: |
      transform-request-to-external
spec:
  rules:
    - host: foo.baz.local
      http:
        paths:
        - path: /foo
          backend:
            serviceName: fooservice
            servicePort: https

Now if I run something like curl -X GET --url http://localhost:8001/services on the admin-api pods, it responds:

{"next":null,"data":[{"host":"foo.fooservice.https","created_at":1530465759,"connect_timeout":60000,"id":"xxxx-2866-xxxx-92fb-xxxx","protocol":"http","name":"foo.fooservice.https","read_timeout":60000,"port":80,"path":"\/foo-prod","updated_at":1530465759,"retries":5,"write_timeout":60000}]}

The port used by Kong is "443" (the correct target is configured), BUT the protocol is "http". How can I ensure Kong makes these requests over HTTPS? I tried adding ingress.kubernetes.io/secure-backends: "true" (from the NGINX ingress spec), but this does not work. I can see upstream.Secure in the file https://github.com/Kong/kubernetes-ingress-controller/blob/master/internal/ingress/controller/kong.go, but I cannot figure out where ".Secure" comes from.

I also tried to dynamically update the path of my KongIngress, but it looks like the ingress controller does not update the path of my service. It seems KongIngress.proxy does not update the created service in Kong. Should I report another bug for this?
What you expected to happen:

The protocol set to https

Anything else we need to know:

I opened a similar topic here: https://discuss.konghq.com/t/question-secure-service-backend-on-kong-ingress-and-proxy-path/1356

Failed to list *v1.KongPlugin forbidden


BUG REPORT

Failed to list *v1.KongPlugin: kongplugins.configuration.konghq.com is forbidden: User "system:serviceaccount:kong:kong-serviceaccount" cannot list kongplugins.configuration.konghq.com at the cluster scope

Kong Ingress controller version:
kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.0.1

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-09T21:51:54Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS with Kops 1.8.6 + RBAC authorization

What happened:
When deploying the YAML from the README, the ingress-controller doesn't run correctly. https://github.com/Kong/kubernetes-ingress-controller/blob/master/deploy/minikube.md

curl https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-postgres.yaml \
  | kubectl create -f -
kubectl logs -f kong-ingress-controller-65fd74c79d-s6lkj -c ingress-controller
E0409 14:00:52.064514       7 reflector.go:205] github.com/kong/ingress-controller/internal/ingress/controller/store/store.go:150: Failed to list *v1.KongCredential: kongcredentials.configuration.konghq.com is forbidden: User "system:serviceaccount:kong:kong-serviceaccount" cannot list kongcredentials.configuration.konghq.com at the cluster scope
E0409 14:00:52.065179       7 reflector.go:205] github.com/kong/ingress-controller/internal/ingress/controller/store/store.go:149: Failed to list *v1.KongConsumer: kongconsumers.configuration.konghq.com is forbidden: User "system:serviceaccount:kong:kong-serviceaccount" cannot list kongconsumers.configuration.konghq.com at the cluster scope
E0409 14:00:52.066178       7 reflector.go:205] github.com/kong/ingress-controller/internal/ingress/controller/store/store.go:148: Failed to list *v1.KongPlugin: kongplugins.configuration.konghq.com is forbidden: User "system:serviceaccount:kong:kong-serviceaccount" cannotlist kongplugins.configuration.konghq.com at the cluster scope
E0409 14:00:53.066499       7 reflector.go:205] github.com/kong/ingress-controller/internal/ingress/controller/store/store.go:150: Failed to list *v1.KongCredential: kongcredentials.configuration.konghq.com is forbidden: User "system:serviceaccount:kong:kong-serviceaccount" cannot list kongcredentials.configuration.konghq.com at the cluster scope
E0409 14:00:53.067300       7 reflector.go:205] github.com/kong/ingress-controller/internal/ingress/controller/store/store.go:149: Failed to list *v1.KongConsumer: kongconsumers.configuration.konghq.com is forbidden: User "system:serviceaccount:kong:kong-serviceaccount" cannot list kongconsumers.configuration.konghq.com at the cluster scope
E0409 14:00:53.068372       7 reflector.go:205] github.com/kong/ingress-controller/internal/ingress/controller/store/store.go:148: Failed to list *v1.KongPlugin: kongplugins.configuration.konghq.com is forbidden: User "system:serviceaccount:kong:kong-serviceaccount" cannotlist kongplugins.configuration.konghq.com at the cluster scope

What you expected to happen:
Container ingress-controller run correctly without errors.

How to reproduce it (as minimally and precisely as possible):

curl https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-postgres.yaml \
  | kubectl create -f -

KongIngress and Ingress not combined on kong routes when annotation is used to pickup KongIngress

BUG REPORT

Kong Ingress controller version:
0.0.4

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-16T03:15:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: minikube version: v0.26.1
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a): OSX 10.12.6
  • Install tools: helm

What happened:

The KongIngress does not update the routes as it should; strip_path is still false, etc.

## Routes:

{
  "next": null,
  "data": [
    {
      "created_at": 1527255929,
      "strip_path": false,
      "hosts": [
        "foo.bar"
      ],
      "preserve_host": false,
      "regex_priority": 0,
      "updated_at": 1527255929,
      "paths": [
        "/mockbin"
      ],
      "service": {
        "id": "e7b388b4-f64f-4966-9277-3200aa625ae8"
      },
      "methods": null,
      "protocols": [
        "http"
      ],
      "id": "a3e02e31-cd16-498d-bc46-9a121d762513"
    }
  ]
}
## Services:

{
  "next": null,
  "data": [
    {
      "host": "default.proxy-to-mockbin.80",
      "created_at": 1527255929,
      "connect_timeout": 60000,
      "id": "e7b388b4-f64f-4966-9277-3200aa625ae8",
      "protocol": "http",
      "name": "default.proxy-to-mockbin.80",
      "read_timeout": 60000,
      "port": 80,
      "path": "/",
      "updated_at": 1527255929,
      "retries": 5,
      "write_timeout": 60000
    }
  ]
}

What you expected to happen:
strip_path and the other features activated in the KongIngress should be applied to the route/service.

How to reproduce it (as minimally and precisely as possible):
https://github.com/devdavidkarlsson/kubernetes-ingress-controller/blob/master/docs/examples/externalnamestrippath.md

Anything else we need to know:
Naming the KongIngress the same as the Ingress seems to work better.

[ help request ] Setup kong ingress controller into kubernetes cluster deploy by rancher 2.0

Hello,

I'm using a Kubernetes cluster deployed by Rancher 2.0 (it's a beta, but it works well) and I want to set up the Kong ingress controller on it.

Kong itself starts properly (and the other components too); the only problem comes from the Postgres DB, which doesn't start. I think it comes from the StatefulSet. Any idea how to set up the DB with another deployment mode?

I use the Rancher CLI like this :

rancher kubernetes create -f all-in-one-postgres.yaml

Everything appears as created in the terminal, but the Postgres database remains "scheduled" in the Rancher GUI.

How to configure tls certificate

Is this a request for help?: Yes

What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.): tls, https, sni, certificate


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kong Ingress controller version: 0.0.5
Kong version: 0.14.0

Kubernetes version (use kubectl version):

Environment: macos

  • Cloud provider or hardware configuration: AKS
  • OS (e.g. from /etc/os-release): macos
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:
I'm trying to configure https for my kong ingress service (dev environment).

I have a number of ingresses configured with tls.

generate certificate:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=*.my.host/O=my.host"

e.g.
example ingress

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: my-ingress
  namespace: my-namespace
proxy:
  path: /
route:
  protocols:
  - https
  strip_path: false
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
spec:
  tls:
  - hosts:
    - "*.my.host"
    secretName: tls-secret
  rules:  
    - host: "*.my.host"
      http:
        paths:
          - path: /                  #different path per ingress
            backend:
              serviceName: my-service
              servicePort: 80

ingress controller logs
I see many errors like this:

I0813 21:19:27.459918       5 kong.go:1014] creating Kong SSL Certificate for host *.my.host located in Secret my-namespace/tls-secret
E0813 21:19:27.470002       5 kong.go:1019] Unexpected error creating Kong Certificate: [400] {"fields":{"snis":"*.my.host already associated with existing certificate 'dd6686d9-9602-4194-90fb-b678ed4bb502'"},"name":"schema violation","code":2,"message":"schema violation (snis: *.my.host already associated with existing certificate 'dd6686d9-9602-4194-90fb-b678ed4bb502')"}
E0813 21:19:27.470023       5 controller.go:130] unexpected failure updating Kong configuration:
the server rejected our request for an unknown reason (post certificates.meta.k8s.io)
W0813 21:19:27.470034       5 queue.go:113] requeuing my-namespace/my-host-request-transformer, err the server rejected our request for an unknown reason (post certificates.meta.k8s.io)
I0813 21:19:30.790756       5 controller.go:127] syncing Ingress configuration...
I0813 21:19:30.793087       5 kong.go:1014] creating Kong SSL Certificate for host *.my.host located in Secret my-namespace/tls-secret
E0813 21:19:30.794675       5 kong.go:1019] Unexpected error creating Kong Certificate: [400] {"fields":{"snis":"*.my.host already associated with existing certificate 'dd6686d9-9602-4194-90fb-b678ed4bb502'"},"name":"schema violation","code":2,"message":"schema violation (snis: *.my.host already associated with existing certificate 'dd6686d9-9602-4194-90fb-b678ed4bb502')"}
E0813 21:19:30.794713       5 controller.go:130] unexpected failure updating Kong configuration:
the server rejected our request for an unknown reason (post certificates.meta.k8s.io)
W0813 21:19:30.794724       5 queue.go:113] requeuing my-namespace/my-host-ingress, err the server rejected our request for an unknown reason (post certificates.meta.k8s.io)
I0813 21:19:34.125269       5 controller.go:127] syncing Ingress configuration...

Without the tls configuration I can run my app over http, but the tls section breaks it. Not sure what I'm doing wrong.

EDIT:

I found another message in the logs

I0814 07:48:58.875411       5 controller.go:127] syncing Ingress configuration...
I0814 07:48:58.880426       5 kong.go:1014] creating Kong SSL Certificate for host *.my.host located in Secret my-namespace/tls-secret
I0814 07:48:58.961495       5 kong.go:1028] creating Kong SNI for host *.my.host and certificate id c89fe694-b38b-42df-8687-22f4d36857f7
E0814 07:48:58.963907       5 kong.go:1032] Unexpected error creating Kong SNI: [400] {"fields":{"ssl_certificate_id":"unknown field","certificate":"required field missing"},"name":"schema violation","code":2,"message":"2 schema violations (certificate: required field missing; ssl_certificate_id: unknown field)"}
E0814 07:48:58.963996       5 controller.go:130] unexpected failure updating Kong configuration:
the server rejected our request for an unknown reason (post snis.meta.k8s.io)

I believe this is the same issue as: Kong/kong#3610

The ingress controller needs updates to work with kong:0.14.0

What you expected to happen:
app to be served over https

How to reproduce it (as minimally and precisely as possible):
as above

Anything else we need to know:

runtime.go:66 panic, nullpointer exception

Kong Ingress controller version: 0.0.3

Kubernetes version (use kubectl version):

kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-16T03:15:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: OSX 10.12.6, Ubuntu 16.04.1
  • OS (e.g. from /etc/os-release): minikube version: v0.26.1
  • Install tools: Helm
Client: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}

What happened:

kubectl logs kong-ingress-controller-9d45b6677-mlcnl -n kong ingress-controller
-------------------------------------------------------------------------------
Kong Ingress controller
  Release:    0.0.3
  Build:      git-51fa668
  Repository: https://github.com/Kong/kubernetes-ingress-controller.git
-------------------------------------------------------------------------------

W0524 12:56:12.856031       6 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0524 12:56:12.856316       6 main.go:204] Creating API client for https://10.96.0.1:443
I0524 12:56:12.910746       6 main.go:248] Running in Kubernetes Cluster version v1.10 (v1.10.0) - git (clean) commit fc32d2f3698e36b93322a3465f63a14e9f0eaead - platform linux/amd64
I0524 12:56:12.930063       6 main.go:94] validated kong/kong-proxy as the default backend
I0524 12:56:13.569503       6 main.go:151] kong version: 0.13.1
I0524 12:56:13.690679       6 run.go:144] starting Ingress controller
I0524 12:56:14.934969       6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kong-test", Name:"proxy-from-k8s-to-mockbin", UID:"b05c3e75-5f4d-11e8-9c11-080027339b80", APIVersion:"extensions", ResourceVersion:"1792", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress kong-test/proxy-from-k8s-to-mockbin
I0524 12:56:14.935365       6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kong-test", Name:"proxy-from-k8s-jwt-to-mockbin", UID:"b05f3af9-5f4d-11e8-9c11-080027339b80", APIVersion:"extensions", ResourceVersion:"1794", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress kong-test/proxy-from-k8s-jwt-to-mockbin
I0524 12:56:14.994391       6 leaderelection.go:175] attempting to acquire leader lease  kong/ingress-controller-leader-nginx...
I0524 12:56:15.036140       6 leaderelection.go:184] successfully acquired lease kong/ingress-controller-leader-nginx
I0524 12:56:15.036397       6 status.go:217] new leader elected: kong-ingress-controller-9d45b6677-mlcnl
I0524 12:56:18.330493       6 controller.go:127] syncing Ingress configuration...
E0524 12:56:18.334614       6 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/asm_amd64.s:573
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/panic.go:502
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/panic.go:63
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/signal_unix.go:388
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:885
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:77
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/controller.go:128
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/run.go:85
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:112
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/asm_amd64.s:2361
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x10ec7c4]

goroutine 180 [running]:
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x107
panic(0x12a8fa0, 0x1eb6090)
	/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/panic.go:502 +0x229
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).syncUpstreams(0xc4201c43f0, 0xc4207be100, 0x3, 0x4, 0xc42026f9f0, 0x2, 0x2, 0x1ecece0, 0xc400000000)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:885 +0x984
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).OnUpdate(0xc4201c43f0, 0xc4202b9ce0, 0x0, 0x0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:77 +0xc8
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).syncIngress(0xc4201c43f0, 0x1315940, 0xc4203c98c0, 0xc43b610246, 0x88773e8f)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/controller.go:128 +0x2cd
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).(github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.syncIngress)-fm(0x1315940, 0xc4203c98c0, 0xa, 0xc42062bd68)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/run.go:85 +0x3e
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).worker(0xc42025fd70)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:112 +0x34a
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).(github.com/kong/kubernetes-ingress-controller/internal/task.worker)-fm()
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59 +0x2a
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc4204a17a8)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc420365fa8, 0x3b9aca00, 0x0, 0xc42011d601, 0xc4203085a0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc4204a17a8, 0x3b9aca00, 0xc4203085a0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).Run(0xc42025fd70, 0x3b9aca00, 0xc4203085a0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59 +0x55
created by github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).Start
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/run.go:152 +0xe6

What you expected to happen:
The ingress controller should not crash.

Anything else we need to know:

A similar error has been reported previously.

Invalid memory address or nil pointer exception: Pagination error?

BUG REPORT Observed a panic: "assignment to entry in nil map" (assignment to entry in nil map):

Kong Ingress controller version:

-------------------------------------------------------------------------------
Kong Ingress controller
  Release:    0.0.1
  Build:      git-a6168a0
  Repository: https://github.com/Kong/kubernetes-ingress-controller.git
-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:26:04Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:
AWS with kubeadm + RBAC Authorization

What happened:
Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

-------------------------------------------------------------------------------
Kong Ingress controller
  Release:    0.0.1
  Build:      git-a6168a0
  Repository: https://github.com/Kong/kubernetes-ingress-controller.git
-------------------------------------------------------------------------------
W0420 20:00:58.841335       7 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0420 20:00:58.841544       7 main.go:213] Creating API client for https://10.96.0.1:443
I0420 20:00:58.848018       7 main.go:257] Running in Kubernetes Cluster version v1.10 (v1.10.0) - git (clean) commit fc32d2f3698e36b93322a3465f63a14e9f0eaead - platform linux/amd64
I0420 20:00:58.864614       7 main.go:94] validated kong/kong-proxy as the default backend
I0420 20:00:58.994291       7 main.go:160] kong version: 0.13.0
I0420 20:00:59.042074       7 run.go:147] starting Ingress controller
I0420 20:01:00.152814       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1215", Name:"protheus-webservice-prd-ing", UID:"195ccd5b-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"440643", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1215/protheus-webservice-prd-ing
I0420 20:01:00.152854       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1204", Name:"protheus-webservice-homolog-ing", UID:"a09e1abe-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433696", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1204/protheus-webservice-homolog-ing
I0420 20:01:00.152866       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf123", Name:"protheus-webservice-homolog", UID:"dcf125a6-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96363", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf123/protheus-webservice-homolog
I0420 20:01:00.152900       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1201", Name:"protheus-appserver-prod-ing", UID:"1e93cd23-44d4-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"480933", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1201/protheus-appserver-prod-ing
I0420 20:01:00.152916       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf121", Name:"protheus-webservice-prod", UID:"dc9ffdc1-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96345", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf121/protheus-webservice-prod
I0420 20:01:00.152925       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1200", Name:"protheus-appserver-homolog-ing", UID:"a05f8030-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433566", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1200/protheus-appserver-homolog-ing
I0420 20:01:00.152937       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1200", Name:"protheus-appserver-prod-ing", UID:"a060ad19-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433569", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1200/protheus-appserver-prod-ing
I0420 20:01:00.152962       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1201", Name:"protheus-webservice-prd-ing", UID:"1e9674a4-44d4-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"480939", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1201/protheus-webservice-prd-ing
I0420 20:01:00.152988       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf133", Name:"protheus-appserver-prod", UID:"de065d6a-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96416", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf133/protheus-appserver-prod
I0420 20:01:00.153014       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf136", Name:"protheus-appserver-prod", UID:"de59445c-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96434", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf136/protheus-appserver-prod
I0420 20:01:00.153043       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf140", Name:"protheus-webservice-prod", UID:"ded2bc49-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96462", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf140/protheus-webservice-prod
I0420 20:01:00.153066       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1202", Name:"protheus-appserver-prod-ing", UID:"c43e3620-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"435211", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1202/protheus-appserver-prod-ing
I0420 20:01:00.153141       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1216", Name:"protheus-webservice-prd-ing", UID:"31d9bc93-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442077", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1216/protheus-webservice-prd-ing
I0420 20:01:00.153177       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"monitoring", Name:"grafana", UID:"d6d66f05-436b-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"64427", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress monitoring/grafana
I0420 20:01:00.153188       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf132", Name:"protheus-appserver-homolog", UID:"dde6326c-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96408", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf132/protheus-appserver-homolog
I0420 20:01:00.153201       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1211", Name:"protheus-webservice-prd-ing", UID:"f4a899d7-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"438189", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1211/protheus-webservice-prd-ing
I0420 20:01:00.153209       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1214", Name:"protheus-appserver-homolog-ing", UID:"c5175d3a-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"435396", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1214/protheus-appserver-homolog-ing
I0420 20:01:00.153223       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1207", Name:"protheus-webservice-homolog-ing", UID:"a0c04974-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433748", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1207/protheus-webservice-homolog-ing
I0420 20:01:00.153231       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf134", Name:"protheus-webservice-prod", UID:"de2d7db6-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96429", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf134/protheus-webservice-prod
I0420 20:01:00.153250       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1216", Name:"protheus-appserver-prod-ing", UID:"31d709a7-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442068", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1216/protheus-appserver-prod-ing
I0420 20:01:00.153258       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1220", Name:"protheus-webservice-prd-ing", UID:"31c7deac-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442024", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1220/protheus-webservice-prd-ing
I0420 20:01:00.153266       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf137", Name:"protheus-appserver-homolog", UID:"de7c6caa-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96444", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf137/protheus-appserver-homolog
I0420 20:01:00.153275       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf127", Name:"protheus-webservice-homolog", UID:"dd67bdc5-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96387", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf127/protheus-webservice-homolog
I0420 20:01:00.153283       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1208", Name:"protheus-appserver-homolog-ing", UID:"e8d342b2-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436851", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1208/protheus-appserver-homolog-ing
I0420 20:01:00.153295       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1204", Name:"protheus-webservice-prd-ing", UID:"a0a0207a-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433705", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1204/protheus-webservice-prd-ing
I0420 20:01:00.153303       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1215", Name:"protheus-appserver-prod-ing", UID:"195720ab-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"440637", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1215/protheus-appserver-prod-ing
I0420 20:01:00.153341       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1201", Name:"protheus-webservice-homolog-ing", UID:"1e95347d-44d4-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"480934", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1201/protheus-webservice-homolog-ing
I0420 20:01:00.153358       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1210", Name:"protheus-webservice-homolog-ing", UID:"e8e6ce0f-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436888", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1210/protheus-webservice-homolog-ing
I0420 20:01:00.153376       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf127", Name:"protheus-webservice-prod", UID:"dd64a699-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96385", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf127/protheus-webservice-prod
I0420 20:01:00.153384       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf125", Name:"protheus-appserver-prod", UID:"dd192a03-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96365", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf125/protheus-appserver-prod
I0420 20:01:00.153391       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1219", Name:"protheus-appserver-prod-ing", UID:"0d5e937e-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"439805", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1219/protheus-appserver-prod-ing
I0420 20:01:00.153398       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf134", Name:"protheus-appserver-homolog", UID:"de2ae91c-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96427", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf134/protheus-appserver-homolog
I0420 20:01:00.153410       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1206", Name:"protheus-appserver-homolog-ing", UID:"e8ba3f89-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436798", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1206/protheus-appserver-homolog-ing
I0420 20:01:00.153423       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf125", Name:"protheus-appserver-homolog", UID:"dd1c00ee-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96366", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf125/protheus-appserver-homolog
I0420 20:01:00.153435       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf117", Name:"protheus-appserver-prod", UID:"977f561d-4380-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"64426", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf117/protheus-appserver-prod
I0420 20:01:00.153443       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1212", Name:"protheus-appserver-homolog-ing", UID:"01163e1a-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"438923", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1212/protheus-appserver-homolog-ing
I0420 20:01:00.153455       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf140", Name:"protheus-appserver-prod", UID:"decc7d2d-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96458", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf140/protheus-appserver-prod
I0420 20:01:00.153463       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf137", Name:"protheus-webservice-prod", UID:"de7fed89-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96446", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf137/protheus-webservice-prod
I0420 20:01:00.153475       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf137", Name:"protheus-webservice-homolog", UID:"de839c32-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96447", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf137/protheus-webservice-homolog
I0420 20:01:00.153483       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1216", Name:"protheus-webservice-homolog-ing", UID:"31d8cdab-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442076", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1216/protheus-webservice-homolog-ing
I0420 20:01:00.153490       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1220", Name:"protheus-appserver-homolog-ing", UID:"31c3c266-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442009", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1220/protheus-appserver-homolog-ing
I0420 20:01:00.153502       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf139", Name:"protheus-webservice-homolog", UID:"deb9b2a3-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96456", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf139/protheus-webservice-homolog
I0420 20:01:00.153512       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf130", Name:"protheus-webservice-prod", UID:"ddb83b5a-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96402", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf130/protheus-webservice-prod
I0420 20:01:00.153523       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1213", Name:"protheus-webservice-prd-ing", UID:"0d0bd16d-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"439684", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1213/protheus-webservice-prd-ing
I0420 20:01:00.153531       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1202", Name:"protheus-webservice-homolog-ing", UID:"c43f5b97-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"435214", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1202/protheus-webservice-homolog-ing
I0420 20:01:00.153543       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1211", Name:"protheus-appserver-homolog-ing", UID:"f4a07f94-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"438175", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1211/protheus-appserver-homolog-ing
I0420 20:01:00.153563       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf140", Name:"protheus-webservice-homolog", UID:"dedba1fc-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96464", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf140/protheus-webservice-homolog
I0420 20:01:00.153578       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1220", Name:"protheus-appserver-prod-ing", UID:"31c52153-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442017", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1220/protheus-appserver-prod-ing
I0420 20:01:00.153591       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1213", Name:"protheus-appserver-homolog-ing", UID:"0d073a9e-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"439674", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1213/protheus-appserver-homolog-ing
I0420 20:01:00.153614       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf122", Name:"protheus-webservice-prod", UID:"dccd32d9-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96353", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf122/protheus-webservice-prod
I0420 20:01:00.153641       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf121", Name:"protheus-appserver-prod", UID:"dc9630ee-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96341", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf121/protheus-appserver-prod
I0420 20:01:00.153662       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf133", Name:"protheus-appserver-homolog", UID:"de08fb6a-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96418", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf133/protheus-appserver-homolog
I0420 20:01:00.153674       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf126", Name:"protheus-appserver-prod", UID:"dd3c0902-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96373", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf126/protheus-appserver-prod
I0420 20:01:00.153692       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1208", Name:"protheus-webservice-prd-ing", UID:"e8d768db-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436863", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1208/protheus-webservice-prd-ing
I0420 20:01:00.153732       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1218", Name:"protheus-appserver-prod-ing", UID:"31e27afa-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442098", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1218/protheus-appserver-prod-ing
I0420 20:01:00.153751       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf123", Name:"protheus-webservice-prod", UID:"dcee7880-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96361", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf123/protheus-webservice-prod
I0420 20:01:00.153771       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf117", Name:"protheus-webservice-prod", UID:"9786f934-4380-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"64428", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf117/protheus-webservice-prod
I0420 20:01:00.153796       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf134", Name:"protheus-appserver-prod", UID:"de243d98-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96426", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf134/protheus-appserver-prod
I0420 20:01:00.153816       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf128", Name:"protheus-webservice-prod", UID:"dd81fd44-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96393", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf128/protheus-webservice-prod
I0420 20:01:00.153830       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1206", Name:"protheus-webservice-homolog-ing", UID:"e8bd5494-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436810", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1206/protheus-webservice-homolog-ing
I0420 20:01:00.153845       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf136", Name:"protheus-webservice-prod", UID:"de632be3-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96438", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf136/protheus-webservice-prod
I0420 20:01:00.153853       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1203", Name:"protheus-appserver-prod-ing", UID:"a083c804-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433648", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1203/protheus-appserver-prod-ing
I0420 20:01:00.153867       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf122", Name:"protheus-appserver-homolog", UID:"dcc83bf6-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96351", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf122/protheus-appserver-homolog
I0420 20:01:00.153913       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1212", Name:"protheus-webservice-homolog-ing", UID:"0118b0c1-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"438933", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1212/protheus-webservice-homolog-ing
I0420 20:01:00.153921       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf134", Name:"protheus-webservice-homolog", UID:"de31036f-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96432", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf134/protheus-webservice-homolog
I0420 20:01:00.153929       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf133", Name:"protheus-webservice-prod", UID:"de0c7ed0-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96420", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf133/protheus-webservice-prod
I0420 20:01:00.153944       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1208", Name:"protheus-appserver-prod-ing", UID:"e8d4c45f-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436856", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1208/protheus-appserver-prod-ing
I0420 20:01:00.153952       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1209", Name:"protheus-appserver-homolog-ing", UID:"e86ce516-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436627", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1209/protheus-appserver-homolog-ing
I0420 20:01:00.153960       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf127", Name:"protheus-appserver-prod", UID:"dd59adf9-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96381", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf127/protheus-appserver-prod
I0420 20:01:00.153973       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf117", Name:"protheus-appserver-homolog", UID:"978218c7-4380-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"64424", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf117/protheus-appserver-homolog
I0420 20:01:00.153981       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf132", Name:"protheus-webservice-prod", UID:"ddee59f7-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96410", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf132/protheus-webservice-prod
I0420 20:01:00.153988       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf130", Name:"protheus-webservice-homolog", UID:"ddbc005f-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96404", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf130/protheus-webservice-homolog
I0420 20:01:00.153999       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1211", Name:"protheus-webservice-homolog-ing", UID:"f4a77258-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"438184", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1211/protheus-webservice-homolog-ing
I0420 20:01:00.154007       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1210", Name:"protheus-appserver-prod-ing", UID:"e8e597ad-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436884", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1210/protheus-appserver-prod-ing
I0420 20:01:00.154015       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1204", Name:"protheus-appserver-prod-ing", UID:"a09cfb08-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433692", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1204/protheus-appserver-prod-ing
I0420 20:01:00.154026       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1200", Name:"protheus-webservice-homolog-ing", UID:"a0620703-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433581", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1200/protheus-webservice-homolog-ing
I0420 20:01:00.154033       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf122", Name:"protheus-webservice-homolog", UID:"dccfc07e-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96355", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf122/protheus-webservice-homolog
I0420 20:01:00.154042       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1214", Name:"protheus-webservice-prd-ing", UID:"c51b0504-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"435406", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1214/protheus-webservice-prd-ing
I0420 20:01:00.154054       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1210", Name:"protheus-appserver-homolog-ing", UID:"e8e430e5-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436880", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1210/protheus-appserver-homolog-ing
I0420 20:01:00.154062       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1220", Name:"protheus-webservice-homolog-ing", UID:"31c67e27-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442022", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1220/protheus-webservice-homolog-ing
I0420 20:01:00.154082       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1203", Name:"protheus-webservice-prd-ing", UID:"a0866d0a-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433663", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1203/protheus-webservice-prd-ing
I0420 20:01:00.154090       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1207", Name:"protheus-appserver-homolog-ing", UID:"a0bd6723-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433741", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1207/protheus-appserver-homolog-ing
I0420 20:01:00.154097       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf132", Name:"protheus-appserver-prod", UID:"dde39b27-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96407", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf132/protheus-appserver-prod
I0420 20:01:00.154107       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1217", Name:"protheus-webservice-homolog-ing", UID:"e958f95d-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"437027", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1217/protheus-webservice-homolog-ing
I0420 20:01:00.154120       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1206", Name:"protheus-webservice-prd-ing", UID:"e8befefe-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436819", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1206/protheus-webservice-prd-ing
I0420 20:01:00.154132       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf121", Name:"protheus-webservice-homolog", UID:"dca3a061-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96346", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf121/protheus-webservice-homolog
I0420 20:01:00.154140       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1207", Name:"protheus-appserver-prod-ing", UID:"a0beb4b4-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433746", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1207/protheus-appserver-prod-ing
I0420 20:01:00.154151       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1215", Name:"protheus-webservice-homolog-ing", UID:"195c82f2-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"440638", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1215/protheus-webservice-homolog-ing
I0420 20:01:00.154159       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1215", Name:"protheus-appserver-homolog-ing", UID:"1955f8fb-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"440634", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1215/protheus-appserver-homolog-ing
I0420 20:01:00.154177       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1200", Name:"protheus-webservice-prd-ing", UID:"a063668f-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433587", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1200/protheus-webservice-prd-ing
I0420 20:01:00.154195       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1218", Name:"protheus-appserver-homolog-ing", UID:"31dffc16-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442095", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1218/protheus-appserver-homolog-ing
I0420 20:01:00.154202       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1216", Name:"protheus-appserver-homolog-ing", UID:"31cff8f5-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442062", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1216/protheus-appserver-homolog-ing
I0420 20:01:00.154210       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1207", Name:"protheus-webservice-prd-ing", UID:"a0c6a040-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433753", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1207/protheus-webservice-prd-ing
I0420 20:01:00.154223       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf121", Name:"protheus-appserver-homolog", UID:"dc9c0d03-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96343", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf121/protheus-appserver-homolog
I0420 20:01:00.154231       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1213", Name:"protheus-appserver-prod-ing", UID:"0d0894ff-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"439676", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1213/protheus-appserver-prod-ing
I0420 20:01:00.154238       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1214", Name:"protheus-webservice-homolog-ing", UID:"c519b88d-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"435401", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1214/protheus-webservice-homolog-ing
I0420 20:01:00.154251       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf139", Name:"protheus-webservice-prod", UID:"deb5895b-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96454", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf139/protheus-webservice-prod
I0420 20:01:00.154266       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf132", Name:"protheus-webservice-homolog", UID:"ddf172ef-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96412", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf132/protheus-webservice-homolog
I0420 20:01:00.154273       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1204", Name:"protheus-appserver-homolog-ing", UID:"a09b4b82-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433685", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1204/protheus-appserver-homolog-ing
I0420 20:01:00.154280       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf140", Name:"protheus-appserver-homolog", UID:"ded0320f-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96459", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf140/protheus-appserver-homolog
I0420 20:01:00.154288       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf130", Name:"protheus-appserver-prod", UID:"ddadb93f-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96398", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf130/protheus-appserver-prod
I0420 20:01:00.154295       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf128", Name:"protheus-webservice-homolog", UID:"dd859cfe-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96396", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf128/protheus-webservice-homolog
I0420 20:01:00.154307       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1217", Name:"protheus-appserver-prod-ing", UID:"e957bc84-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"437025", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1217/protheus-appserver-prod-ing
I0420 20:01:00.154315       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1214", Name:"protheus-appserver-prod-ing", UID:"c5189d1b-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"435399", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1214/protheus-appserver-prod-ing
I0420 20:01:00.154322       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1208", Name:"protheus-webservice-homolog-ing", UID:"e8d63f05-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436858", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1208/protheus-webservice-homolog-ing
I0420 20:01:00.154333       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf123", Name:"protheus-appserver-homolog", UID:"dce65b3e-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96359", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf123/protheus-appserver-homolog
I0420 20:01:00.154345       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1212", Name:"protheus-appserver-prod-ing", UID:"011768b6-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"438928", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1212/protheus-appserver-prod-ing
I0420 20:01:00.154352       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1206", Name:"protheus-appserver-prod-ing", UID:"e8bc30a9-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436805", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1206/protheus-appserver-prod-ing
I0420 20:01:00.154364       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf128", Name:"protheus-appserver-homolog", UID:"dd7e7303-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96392", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf128/protheus-appserver-homolog
I0420 20:01:00.154372       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1209", Name:"protheus-webservice-homolog-ing", UID:"e86f42fa-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436641", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1209/protheus-webservice-homolog-ing
I0420 20:01:00.154382       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1213", Name:"protheus-webservice-homolog-ing", UID:"0d09efe6-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"439681", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1213/protheus-webservice-homolog-ing
I0420 20:01:00.154393       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1210", Name:"protheus-webservice-prd-ing", UID:"e8e7f085-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436892", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1210/protheus-webservice-prd-ing
I0420 20:01:00.154401       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf126", Name:"protheus-webservice-homolog", UID:"dd45bbe4-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96379", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf126/protheus-webservice-homolog
I0420 20:01:00.154411       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1201", Name:"protheus-appserver-homolog-ing", UID:"1e924f31-44d4-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"480926", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1201/protheus-appserver-homolog-ing
I0420 20:01:00.154423       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf133", Name:"protheus-webservice-homolog", UID:"de100f18-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96423", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf133/protheus-webservice-homolog
I0420 20:01:00.154445       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf126", Name:"protheus-webservice-prod", UID:"dd4228e8-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96376", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf126/protheus-webservice-prod
I0420 20:01:00.154453       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf123", Name:"protheus-appserver-prod", UID:"dce3ac91-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96357", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf123/protheus-appserver-prod
I0420 20:01:00.154464       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1203", Name:"protheus-appserver-homolog-ing", UID:"a0813248-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433639", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1203/protheus-appserver-homolog-ing
I0420 20:01:00.154472       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1217", Name:"protheus-webservice-prd-ing", UID:"e95e162e-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"437032", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1217/protheus-webservice-prd-ing
I0420 20:01:00.154481       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1202", Name:"protheus-appserver-homolog-ing", UID:"c43cdcfa-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"435206", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1202/protheus-appserver-homolog-ing
I0420 20:01:00.154494       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf122", Name:"protheus-appserver-prod", UID:"dcc57e6c-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96349", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf122/protheus-appserver-prod
I0420 20:01:00.154502       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1219", Name:"protheus-webservice-prd-ing", UID:"0d6119b8-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"439810", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1219/protheus-webservice-prd-ing
I0420 20:01:00.154514       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1209", Name:"protheus-webservice-prd-ing", UID:"e876cabc-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436648", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1209/protheus-webservice-prd-ing
I0420 20:01:00.154526       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1209", Name:"protheus-appserver-prod-ing", UID:"e86e011a-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"436634", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1209/protheus-appserver-prod-ing
I0420 20:01:00.154537       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf136", Name:"protheus-webservice-homolog", UID:"de65cdb3-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96440", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf136/protheus-webservice-homolog
I0420 20:01:00.154545       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1219", Name:"protheus-appserver-homolog-ing", UID:"0d5d67a6-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"439802", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1219/protheus-appserver-homolog-ing
I0420 20:01:00.154556       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1212", Name:"protheus-webservice-prd-ing", UID:"011a9452-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"438936", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1212/protheus-webservice-prd-ing
I0420 20:01:00.154564       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf137", Name:"protheus-appserver-prod", UID:"de79c92a-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96442", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf137/protheus-appserver-prod
I0420 20:01:00.154576       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf139", Name:"protheus-appserver-homolog", UID:"deae5b4e-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96452", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf139/protheus-appserver-homolog
I0420 20:01:00.154583       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf130", Name:"protheus-appserver-homolog", UID:"ddb04742-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96399", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf130/protheus-appserver-homolog
I0420 20:01:00.154592       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf125", Name:"protheus-webservice-prod", UID:"dd1fa837-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96369", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf125/protheus-webservice-prod
I0420 20:01:00.154603       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1203", Name:"protheus-webservice-homolog-ing", UID:"a0854912-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"433659", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1203/protheus-webservice-homolog-ing
I0420 20:01:00.154611       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf139", Name:"protheus-appserver-prod", UID:"deabb5e3-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96450", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf139/protheus-appserver-prod
I0420 20:01:00.154622       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1218", Name:"protheus-webservice-prd-ing", UID:"31e4ed64-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442105", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1218/protheus-webservice-prd-ing
I0420 20:01:00.154630       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1218", Name:"protheus-webservice-homolog-ing", UID:"31e3c241-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"442103", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1218/protheus-webservice-homolog-ing
I0420 20:01:00.154639       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf128", Name:"protheus-appserver-prod", UID:"dd7bbcee-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96390", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf128/protheus-appserver-prod
I0420 20:01:00.154647       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf117", Name:"protheus-webservice-homolog", UID:"978acf83-4380-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"64425", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf117/protheus-webservice-homolog
I0420 20:01:00.154658       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1219", Name:"protheus-webservice-homolog-ing", UID:"0d5fbf71-44c9-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"439806", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1219/protheus-webservice-homolog-ing
I0420 20:01:00.154670       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1202", Name:"protheus-webservice-prd-ing", UID:"c4408539-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"435218", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1202/protheus-webservice-prd-ing
I0420 20:01:00.154683       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1217", Name:"protheus-appserver-homolog-ing", UID:"e956827d-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"437021", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1217/protheus-appserver-homolog-ing
I0420 20:01:00.154690       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf125", Name:"protheus-webservice-homolog", UID:"dd29675d-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96371", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf125/protheus-webservice-homolog
I0420 20:01:00.154703       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf127", Name:"protheus-appserver-homolog", UID:"dd5c790d-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96383", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf127/protheus-appserver-homolog
I0420 20:01:00.154711       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf126", Name:"protheus-appserver-homolog", UID:"dd3ea381-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96375", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf126/protheus-appserver-homolog
I0420 20:01:00.154723       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf1211", Name:"protheus-appserver-prod-ing", UID:"f4a6139d-44c8-11e8-bbee-02dd6090813c", APIVersion:"extensions", ResourceVersion:"438180", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf1211/protheus-appserver-prod-ing
I0420 20:01:00.154730       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ctaf136", Name:"protheus-appserver-homolog", UID:"de5be8d5-4393-11e8-95c4-02dd6090813c", APIVersion:"extensions", ResourceVersion:"96435", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ctaf136/protheus-appserver-homolog
I0420 20:01:00.242687       7 store.go:583] running initial sync of secrets
I0420 20:01:00.242824       7 leaderelection.go:175] attempting to acquire leader lease  kong/ingress-controller-leader-nginx...
W0420 20:01:00.243071       7 controller.go:427] service ctaf1203/protheus-appserver-homolog-svc does not have any active endpoints
W0420 20:01:00.243096       7 controller.go:427] service ctaf1203/protheus-appserver-prod-svc does not have any active endpoints
W0420 20:01:00.243107       7 controller.go:427] service ctaf1203/protheus-webservice-homolog-svc does not have any active endpoints
W0420 20:01:00.243123       7 controller.go:427] service ctaf1203/protheus-webservice-prod-svc does not have any active endpoints
W0420 20:01:00.243295       7 controller.go:427] service ctaf1208/protheus-appserver-prod-svc does not have any active endpoints
W0420 20:01:00.243524       7 controller.go:427] service ctaf1216/protheus-appserver-homolog-svc does not have any active endpoints
W0420 20:01:00.243542       7 controller.go:427] service ctaf1216/protheus-appserver-prod-svc does not have any active endpoints
W0420 20:01:00.243556       7 controller.go:427] service ctaf1216/protheus-webservice-homolog-svc does not have any active endpoints
W0420 20:01:00.243570       7 controller.go:427] service ctaf1216/protheus-webservice-prod-svc does not have any active endpoints
W0420 20:01:00.243579       7 controller.go:427] service ctaf1218/protheus-appserver-homolog-svc does not have any active endpoints
W0420 20:01:00.243611       7 controller.go:427] service ctaf1201/protheus-appserver-homolog-svc does not have any active endpoints
W0420 20:01:00.243627       7 controller.go:427] service ctaf1201/protheus-appserver-prod-svc does not have any active endpoints
W0420 20:01:00.243642       7 controller.go:427] service ctaf1201/protheus-webservice-homolog-svc does not have any active endpoints
W0420 20:01:00.243653       7 controller.go:355] error obtaining service endpoints: error getting service ctaf1201/protheus-webservice-prd-svc from the cache: service ctaf1201/protheus-webservice-prd-svc was not found
W0420 20:01:00.243755       7 controller.go:427] service ctaf123/protheus-webservice-prod-svc does not have any active endpoints
W0420 20:01:00.243789       7 controller.go:427] service ctaf125/protheus-webservice-prod-svc does not have any active endpoints
W0420 20:01:00.243803       7 controller.go:427] service ctaf125/protheus-webservice-homolog-svc does not have any active endpoints
W0420 20:01:00.244096       7 controller.go:427] service ctaf139/protheus-webservice-homolog-svc does not have any active endpoints
I0420 20:01:00.244940       7 controller.go:134] syncing Ingress configuration...
I0420 20:01:00.251442       7 status.go:196] new leader elected: kong-ingress-controller-75bcbcc9b7-84xr6
E0420 20:01:00.259647       7 runtime.go:66] Observed a panic: "assignment to entry in nil map" (assignment to entry in nil map)
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/asm_amd64.s:573
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/panic.go:502
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/hashmap_fast.go:696
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/net/url/url.go:819
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/apis/admin/upstream.go:54
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:776
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:70
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/controller.go:136
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/run.go:86
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:112
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/asm_amd64.s:2361
panic: assignment to entry in nil map [recovered]
	panic: assignment to entry in nil map
goroutine 362 [running]:
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x107
panic(0x129f540, 0x156c940)
	/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/panic.go:502 +0x229
net/url.url.Values.Add(...)
	/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/net/url/url.go:819
github.com/kong/kubernetes-ingress-controller/internal/apis/admin.(*upstreamAPI).List(0xc420b8c0d0, 0x0, 0x16, 0x0, 0x0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/apis/admin/upstream.go:54 +0x519
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.syncUpstreams(0xc420786018, 0x1, 0x1, 0xc4202d0000, 0x80, 0x92, 0xc42052ae20, 0x1ebecc0, 0xc400000000)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:776 +0x182
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).OnUpdate(0xc420112580, 0xc420b76060, 0x0, 0x0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:70 +0xcf
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).syncIngress(0xc420112580, 0x130b8c0, 0xc420778d40, 0xc40e79c84f, 0x545397e9)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/controller.go:136 +0x2f2
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).(github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.syncIngress)-fm(0x130b8c0, 0xc420778d40, 0xa, 0xc420409d68)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/run.go:86 +0x3e
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).worker(0xc42001c330)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:112 +0x34a
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).(github.com/kong/kubernetes-ingress-controller/internal/task.worker)-fm()
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59 +0x2a
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc42055a7a8)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc42097dfa8, 0x3b9aca00, 0x0, 0xe9ec01, 0xc4205fe1e0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc42055a7a8, 0x3b9aca00, 0xc4205fe1e0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).Run(0xc42001c330, 0x3b9aca00, 0xc4205fe1e0)
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59 +0x55
created by github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).Start
	/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/run.go:155 +0xef

What you expected to happen:
Sync without errors

How to reproduce it (as minimally and precisely as possible):
The /upstreams endpoint must have more than 100 records; create more than 150 Ingress resources.

Anything else we need to know:
I think it's a pagination error: the /upstreams endpoint has more than 100 records when the controller dies, and the error points to:

/home/aledbf/go/src/github.com/kong/kubernetes-ingress-controller/internal/apis/admin/upstream.go:54

All requests return 404

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kong Ingress controller version: 0.04

Kubernetes version (use kubectl version): 1.9

Environment:

  • Cloud provider or hardware configuration: 2h4g
  • OS (e.g. from /etc/os-release): ubuntu16.04
  • Kernel (e.g. uname -a): Linux kubemaster 4.4.0-124-generic #148-Ubuntu SMP Wed May 2 13:00:18 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: kubernetes

What happened:
I have an Ingress configured but requests return 404.
Here is the Kong Ingress controller log

E0524 05:13:30.385111 7 kong.go:637] Unexpected error creating Kong Route: [400] {"fields":{"paths":"must not have a trailing slash"},"name":"schema violation","code":2,"message":"schema violation (paths: must not have a trailing slash)"}
E0524 05:13:30.385123 7 controller.go:130] unexpected failure updating Kong configuration: the server rejected our request for an unknown reason (post routes.meta.k8s.io)
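Kong rejects route paths that end in a slash, so anything driving the Admin API has to normalize Ingress paths first. A hedged sketch of such normalization (illustrative helper, not the controller's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePath trims a trailing slash so the Kong Admin API accepts
// the route path; the bare root path "/" is left untouched.
func normalizePath(p string) string {
	if p == "/" {
		return p
	}
	return strings.TrimSuffix(p, "/")
}

func main() {
	fmt.Println(normalizePath("/article-summary/")) // /article-summary
	fmt.Println(normalizePath("/"))                 // /
}
```

With that in place, the Ingress path "/article-summary/" below would be submitted to Kong as "/article-summary" and pass schema validation.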

Here is the return result

{"message":"no route and no API found with those values"}

Anything else we need to know:

Here is my config

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nlp-api-kong
  namespace: nlp-dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.xxxx.com
    http:
      paths:
      - path: /article-summary/
        backend:
          serviceName: article-summary-rest-svc
          servicePort: http

Here is admin upstreams return

{
  "created_at": 1527135234334,
  "hash_on": "none",
  "id": "cd95daeb-f900-477b-84a4-3f4f62252fe6",
  "healthchecks": {
    "active": {
      "unhealthy": {
        "http_statuses": [ 429, 404, 500, 501, 502, 503, 504, 505 ],
        "tcp_failures": 0,
        "timeouts": 0,
        "http_failures": 0,
        "interval": 0
      },
      "http_path": "/",
      "healthy": {
        "http_statuses": [ 200, 302 ],
        "interval": 0,
        "successes": 0
      },
      "timeout": 1,
      "concurrency": 10
    },
    "passive": {
      "unhealthy": {
        "http_failures": 0,
        "http_statuses": [ 429, 500, 503 ],
        "tcp_failures": 0,
        "timeouts": 0
      },
      "healthy": {
        "successes": 0,
        "http_statuses": [ 200, 201, 202, 203, 204, 205, 206, 207, 208, 226, 300, 301, 302, 303, 304, 305, 306, 307, 308 ]
      }
    }
  },
  "name": "nlp-dev.article-summary-rest-svc.http",
  "hash_fallback": "none",
  "slots": 1000
}

Multiple SNIs on a TLS certificate are not reflected in Kong certificate object

Is this a request for help?: No

What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.): sni, tls, certificate


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug

Kong Ingress controller version: 0.13

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: kubeadm cluster
  • OS (e.g. from /etc/os-release): Linux
  • Others: A TLS certificate from Let's Encrypt

Given the following ingress definition, and a valid TLS secret that contains a certificate for SNIs foo.example.com and bar.example.com, a Kong certificate will be created with only one of the SNIs shown in spec.tls[0].hosts, not all of them.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
  - secretName: tls-secret
    hosts:
    - foo.example.com
    - bar.example.com
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo
          servicePort: 80
  - host: bar.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: bar
          servicePort: 80

Kong admin API: /certificates: notice that the snis array contains only one of the above tls[0].hosts values, not both.

{
  "data": [
    {
      "created_at": 1533315433000,
      "cert": "<certificate>",
      "id": "4c997a38-973c-11e8-96dc-00155dea8698",
      "key": "<private key>",
      "snis": [
        "foo.example.com"
      ]
    }
  ],
  "total": 1
}

Kong admin API: /snis:

{
  "total": 1,
  "data": [
    {
      "created_at": 1533315433000,
      "name": "foo.example.com",
      "ssl_certificate_id": "4c997a38-973c-11e8-96dc-00155dea8698"
    }
  ]
}

This means that accessing bar.example.com via HTTPS throws a certificate error, when in fact it should work. It is therefore not currently possible to share a valid TLS certificate across multiple Kong backends through the Kubernetes controller, even though this works in "vanilla" Kong when you POST the full list of SNIs to /snis for a given certificate ID.
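The expected behavior would be for the controller to register every host in the tls block as an SNI on the Kong certificate, rather than just the first one. A rough sketch of that collection step (hypothetical types, for illustration only):

```go
package main

import "fmt"

// ingressTLS mirrors the relevant part of an Ingress spec.tls entry.
type ingressTLS struct {
	SecretName string
	Hosts      []string
}

// snisForCertificate gathers every host that should be attached to the
// Kong certificate created from a single tls entry, skipping empty
// strings and duplicates.
func snisForCertificate(tls ingressTLS) []string {
	snis := make([]string, 0, len(tls.Hosts))
	seen := map[string]bool{}
	for _, h := range tls.Hosts {
		if h == "" || seen[h] {
			continue
		}
		seen[h] = true
		snis = append(snis, h)
	}
	return snis
}

func main() {
	tls := ingressTLS{
		SecretName: "tls-secret",
		Hosts:      []string{"foo.example.com", "bar.example.com"},
	}
	fmt.Println(snisForCertificate(tls)) // [foo.example.com bar.example.com]
}
```

Applied to the Ingress above, both foo.example.com and bar.example.com would end up in the certificate's snis array.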

How can I create the Consumer's JWT credential

Is this a request for help?:

What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):


FEATURE REQUEST

Kong Ingress controller version: 0.13.1

Kubernetes version (use kubectl version): 1.9.1

What happened:
I created a KongPlugin named "wechat-jwt", a Service, and an Ingress, and patched the Ingress with '{"metadata":{"annotations":{"jwt.plugin.konghq.com":"wechat-jwt\n"}}}'.

The jwt plugin is configured correctly, but how can I create the consumer's JWT credential? Something like:
curl -i -X POST http://localhost:8001/consumers/{consumer}/jwt

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

How to: same plugin, different policy for different consumer, in the same ingress resource?

Is this a request for help?:

Yes

What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):

annotations


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

FEATURE REQUEST

Kong Ingress controller version:

kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.0.3

Kubernetes version (use kubectl version):

Client: 1.10.1
Server: 1.8.11

What happened:

I'm trying to apply a different rate-limiting policy to each consumer, but it seems that:

  1. There can be only one configuration of a plugin in a CRD.
  2. Only one "rate-limiting.plugin.konghq.com" key is allowed in the annotations field.

Missing SSL certificate roots

The following error can be seen in the ingress-controller container logs every minute when you have an Ingress definition using a TLS secret (in my case a cert from Let's Encrypt):

backend_ssl.go:153] unexpected error generating SSL certificate with full intermediate chain CA certs: x509: failed to load system roots and no roots provided

The line of code in question is this one.

This is because the Debian base image used by the ingress-controller image doesn't ship with root CA certs, so it can't verify the full chain.

We can work around this by starting the ingress-controller container with a command like:

sh -c 'apt-get update && apt-get install -y ca-certificates && /kong-ingress-controller <options...>'

But it would probably be nicer to do something like this in the Dockerfile:

RUN apt-get update && apt-get install -y ca-certificates

I am using the Kong Ingress controller snis-fix branch.

A similar fix might be needed in the kong:0.X-centos7 Dockerfiles, by the way, but not the Alpine ones, because they already include apk add ca-certificates.

Consumers created via Kong Administrative API are deleted

NOTE: GitHub issues are reserved for bug reports only.
For anything else, please join the conversation
in Kong Nation https://discuss.konghq.com/c/kubernetes.


Summary

Consumers that are created using the administrative API are deleted after a period of time.

Kong Ingress controller version 0.1.0

Kubernetes version

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}

Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Debian 8
  • Kernel (e.g. uname -a): 4.4.121-k8s
  • Install tools: kops
  • Others:

What happened

Consumers that were created via a direct integration with the Kong admin interface are removed after a couple of minutes.

Expected behavior

Consumers created via the administrative interface should not be removed.

Steps To Reproduce

  1. Deploy Kong Ingress controller using the documented all-in-one-minikube methodology: https://github.com/Kong/kubernetes-ingress-controller/blob/master/deploy/minikube.md
  2. Create consumers using the administrative api in Kong
  3. Verify consumers exist
  4. After about 5 minutes, check list of consumers and verify that list is empty
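This behavior is consistent with the controller reconciling Kong against the Kubernetes state and pruning anything it did not create itself. A minimal sketch of the declarative alternative, assuming the KongConsumer custom type (the username is illustrative):

```yaml
# Declaring the consumer as a CRD keeps it under the controller's
# management, so a periodic sync should not treat it as stale and delete it.
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: alice
username: alice
```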

Deleting the kong CRDs (KongConsumer, KongCredentials) does not update the kong proxy

BUG REPORT

Kong Ingress controller version:
0.0.4

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-16T03:15:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: minikube version: v0.26.1
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a): OSX 10.12.6
  • Install tools: helm

What happened:

Deleting the kong CRDs (KongConsumer, KongCredentials) does not update the kong proxy.

After deleting the Ingress resources, the routes and services disappear as expected.
After moving on to delete the CRDs, however, /consumers and /consumers/x/jwt still report the old data.

	## Consumers:
	curl ${KONG_ADMIN_IP}:${KONG_ADMIN_PORT}/consumers/ |jq .
	## Consumer anonymous jwt:
	curl ${KONG_ADMIN_IP}:${KONG_ADMIN_PORT}/consumers/anonymous/jwt |jq .

RESULTS:

## Consumers:
{
  "total": 1,
  "data": [
    {
      "created_at": 1527503976000,
      "username": "anonymous",
      "id": "6a24ed65-6263-11e8-8eb0-0800274b9d73"
    }
  ]
}
## Consumer anonymous jwt:
{
  "total": 1,
  "data": [
    {
      "rsa_public_key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnfktZb7T7zeOTeC482AMkBpqAhusGA18xQtWZ2FuWJaHVpHbRi+NklmUgMR5pblCunNFN3h3ZWOrXSO/onI2N+Dkq+ztB+HB4T9IdxF8SWIpqP9IuapvwPGW+w/xJ+SECRETSTUFFmVXZ+7PQnOi9x0Fd4Qjw83CMPMMTWgoVhOpouCUg8rfX5Wrxni/89xvILDA1LMoolab3wjofo7OMGzlkpAfTSFSZqu4vjizgahbtBcKxNr/2NuIMu6NLKetxEjpmWo5JaT0xEVwBMM9EyaPYgXeWMUUOwjDyzwVKTGO8eGwGgf1Sb06tr9osR4oLnln3wRJAGClSokfMwIDAQAB\n-----END PUBLIC KEY-----",
      "created_at": 1527503976000,
      "id": "6a29d318-6263-11e8-8eb0-0800274b9d73",
      "algorithm": "RS256",
      "key": "https://api.mything.com/auth/realms/olt",
      "secret": "L9KS3sinRGLBjQ4Qp9xZ88m6FQYy4KqU",
      "consumer_id": "6a24ed65-6263-11e8-8eb0-0800274b9d73"
    }
  ]
}


What you expected to happen:
Deleting ingress resources should update the kong config, for CRDs as well.

Updates to KongIngress are not reflected in Kong until the Ingress is recreated

Summary

Creating a KongIngress and Ingress resource correctly configures Kong, but subsequent edits to the KongIngress are not immediately reflected in Kong. Deleting and recreating the Ingress forces Kong to be updated.

This may apply to more than the KongIngress resource, but this is where I first noticed the problem.

kong:0.13.1-centos

Kubernetes version

1.9.3

What happened

Changes to the KongIngress resource (strip_path) were not reflected in the Kong configuration.

Expected behavior

Changes to KongIngress are picked up and applied by Kong.

Steps To Reproduce

  1. Create the following (assuming a service myservice exists)
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: mykongingress
route:
  strip_path: true

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    configuration.konghq.com: mykongingress
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /foo
        backend:
          serviceName: myservice
          servicePort: 80
  2. Verify the settings in Kong /routes
  3. Change strip_path to false and apply the change
  4. Observe that the settings in Kong have not changed
  5. Delete and recreate myingress
  6. Observe that the settings in Kong have been updated

Rewrite target is not working

Hi there, the annotation ingress.kubernetes.io/rewrite-target: / doesn't work. I need the path /sample to be removed before Kong calls the upstream service.
Is this a bug?

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.host.local
    http:
      paths:
      - path: /sample
        backend:
          serviceName: my-svc
          servicePort: 80
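For what it's worth, Kong usually expresses this kind of prefix stripping through a KongIngress route override rather than the rewrite-target annotation. A sketch, assuming the custom types documented for this controller (the resource name is illustrative):

```yaml
# Hypothetical KongIngress that strips the matched path (/sample)
# before proxying to the upstream service.
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: strip-sample-path
route:
  strip_path: true
```

It would then be referenced from the Ingress via a configuration.konghq.com: strip-sample-path annotation.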

kong-ingress-controller version: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.0.4

Kong deployment was made with the yml file: https://github.com/Kong/kubernetes-ingress-controller/blob/master/deploy/single/all-in-one-postgres.yaml

kubectl version:

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.4", GitCommit:"eb2e43842aaa21d6f0bb65d6adf5a84bbdc62eaf", GitTreeState:"clean", BuildDate:"2018-06-15T21:48:39Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}

Ingress Controller for ClusterIP service type

Is this a request for help?:
Yes
What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
gke ingress, clusterip

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Kong Ingress controller version:

Kubernetes version (use kubectl version):
1.10.5-gke.4

Environment:

  • Cloud provider or hardware configuration: Google Kubernetes Engine
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:
I am working through the Kubernetes Ingress Controller setup. I created the ingress but see the following error in GKE console:

Error during sync: error while evaluating the ingress spec: service "default/http-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"

Does Kong Ingress Controller not support ClusterIPs?
What you expected to happen:
I should not see this error; the Ingress should sync.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
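A common workaround is to expose the backend as a NodePort service. A sketch, assuming the service name and namespace from the error message (the selector and ports are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: http-svc
  namespace: default
spec:
  type: NodePort  # instead of ClusterIP, which triggers the error above
  selector:
    app: http-svc
  ports:
  - port: 80
    targetPort: 8080
```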

How to set oauth2 for only one domain

Is this a request for help?: yes


Kong Ingress controller version: 0.

Kubernetes version (use kubectl version):

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:
I found that I have to set up an API if I only want to use OAuth2 for one host in Kong,
but after I configure the Ingress, there is no API to use.
Do I have to create the API myself?
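For context, this controller should not require creating an API by hand; a plugin can be limited to one host by giving that host its own Ingress and annotating only that Ingress. A sketch, with illustrative names and config values:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: oauth2-auth
config:
  enable_authorization_code: true
  scopes:
  - email
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: oauth2-host-only
  annotations:
    oauth2.plugin.konghq.com: oauth2-auth
spec:
  rules:
  - host: secure.example.com  # only this host gets the plugin
    http:
      paths:
      - path: /
        backend:
          serviceName: my-svc
          servicePort: 80
```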

KongIngress CRD doesn't work

BUG REPORT Custom KongIngress definition doesn't sync

Ref: https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/custom-types.md#kongingress

Kong Ingress controller version:

-------------------------------------------------------------------------------
Kong Ingress controller
  Release:    dev
W0423 16:09:28.797560       6 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
  Build:      git-69e529a
  Repository: https://github.com/Kong/kubernetes-ingress-controller.git
-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:26:04Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:
AWS with kubeadm + RBAC Authorization

What happened:

Creating the KongIngress resource below does not sync it to Kong:

kubectl create -f - <<EOF
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: protheus-appserver-prod-ing
  namespace: ctaf1202
proxy:
  connect_timeout: 3600000
  read_timeout: 3600000
  write_timeout: 3600000
upstream:
  hash_on: null
  hash_fallback: null
  healthchecks: null
  slots: 0
route:
  methods: []
  regex_priority: 0
  strip_path: false
  preserve_host: true
EOF

What you expected to happen:

The respective /services/<resource> entry should be updated with the proxy timeouts.

How to reproduce it (as minimally and precisely as possible):
Create a Kubernetes Ingress resource and a KongIngress CRD resource (example above).

Anything else we need to know:
I saw that the KongIngress is only used when the service doesn't exist yet. Is that right?
https://github.com/Kong/kubernetes-ingress-controller/blob/master/internal/ingress/controller/kong.go#L237

Another problem (nil pointer) happens when creating the KongIngress resource below:

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: protheus-appserver-prod-ing
  namespace: ctaf1202
proxy:
  connect_timeout: 3600000
  read_timeout: 3600000
  write_timeout: 3600000

https://github.com/Kong/kubernetes-ingress-controller/blob/master/internal/ingress/controller/kong.go#L602

E0423 14:24:32.972171       7 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/Cellar/go/1.9.2/libexec/src/runtime/asm_amd64.s:509
/usr/local/Cellar/go/1.9.2/libexec/src/runtime/panic.go:491
/usr/local/Cellar/go/1.9.2/libexec/src/runtime/panic.go:63
/usr/local/Cellar/go/1.9.2/libexec/src/runtime/signal_unix.go:367
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:602
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:98
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/controller.go:130
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/run.go:86
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:112
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59
/usr/local/Cellar/go/1.9.2/libexec/src/runtime/asm_amd64.s:2337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x11cd88d]
goroutine 342 [running]:
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x111
panic(0x139a0a0, 0x1fe5cb0)
	/usr/local/Cellar/go/1.9.2/libexec/src/runtime/panic.go:491 +0x283
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).syncRoutes(0xc42056a210, 0xc420ac2870, 0x0, 0x0, 0x0)
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:602 +0x90d
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).OnUpdate(0xc42056a210, 0xc420ac2870, 0x0, 0x0)
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/kong.go:98 +0x2bb
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).syncIngress(0xc42056a210, 0x1405a80, 0xc42000a1e0, 0xc431340564, 0x5a6099df)
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/controller.go:130 +0x2f4
github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).(github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.syncIngress)-fm(0x1405a80, 0xc42000a1e0, 0xa, 0xc420bf2d68)
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/run.go:86 +0x3e
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).worker(0xc42045a870)
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:112 +0x35b
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).(github.com/kong/kubernetes-ingress-controller/internal/task.worker)-fm()
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59 +0x2a
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc4200cbfa8)
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x5e
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc420b65fa8, 0x3b9aca00, 0x0, 0x1, 0xc4200c0720)
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc4200cbfa8, 0x3b9aca00, 0xc4200c0720)
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
github.com/kong/kubernetes-ingress-controller/internal/task.(*Queue).Run(0xc42045a870, 0x3b9aca00, 0xc4200c0720)
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/task/queue.go:59 +0x55
created by github.com/kong/kubernetes-ingress-controller/internal/ingress/controller.(*NGINXController).Start
	/Users/sandromello/go/src/github.com/kong/kubernetes-ingress-controller/internal/ingress/controller/run.go:158 +0xf9

Ingress controller crashes trying to create SSL Certificate

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kong Ingress controller version: 0.0.4 with Kong 0.14.0-centos

Kubernetes version (use kubectl version):
Client: v1.11.0
Server: v1.10.4-gke.2

What happened:
There is an issue when the ingress-controller tries to create an SNI after a new certificate is added to the server. Creating the SNI returns the following error:
Unexpected error creating Kong SNI: [400] {"fields":{"ssl_certificate_id":"unknown field","certificate":"required field missing"},"name":"schema violation","code":2,"message":"2 schema violations (certificate: required field missing; ssl_certificate_id: unknown field)"}

After that it keeps trying to update the created certificate for the SNI, leading to the following error:
Unexpected error creating Kong Certificate: [400] {"fields":{"snis":"example.com already associated with existing certificate 'abcd'"},"name":"schema violation","code":2,"message":"schema violation (snis: example.com already associated with existing certificate 'abcd')"}
Although it leads to this error, HTTPS for example.com works properly. However, because of this error, the controller stops updating routes and services when Ingresses are changed.

What you expected to happen:
After a certificate is created, HTTPS works fine and Ingress continues to update properly.

How to reproduce it (as minimally and precisely as possible):
Create a new certificate for a new SNI with a valid secret. I use cert-manager's ingress-shim to issue certificates from Let's Encrypt; here is an example Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example.com
  annotations:
      kubernetes.io/ingress.class: "nginx"
      configuration.konghq.com: ingress-cert-manager
      kubernetes.io/tls-acme: "true"
      certmanager.k8s.io/acme-http01-edit-in-place: "true"
spec:
  tls:
  - hosts:
    -  example.com
    secretName: example-crt
  rules:
  - host:  example.com
    http:
      paths:
      - path: /default
        backend:
          serviceName: default-http-backend
          servicePort: 80
---
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: ingress-cert-manager
route:
  preserve_host: true
  strip_path: false

Anything else we need to know:
I did not have this problem before, when using Kong 0.13.1-centos.

TLS configuration fails with error 409 "name: already exists"

Is this a request for help?:

It isn't

What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Bug report

Kong Ingress controller version:

Kong Ingress controller
Release: 0.0.1
Build: git-bec2acd

Kubernetes version (use kubectl version):

v1.9.4 (minikube version: v0.25.2)

Environment:

  • Cloud provider or hardware configuration: Minikube
  • OS (e.g. from /etc/os-release): OSX
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:

I created the following Ingress object:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: 2018-04-19T10:09:16Z
  generation: 2
  labels:
    created-by: kubeless
  name: hello-ingress
  namespace: default
  ownerReferences:
  - apiVersion: kubeless.io
    kind: HTTPTrigger
    name: hello-ingress
    uid: b7b8cb1a-43b9-11e8-8323-08002760624d
  resourceVersion: "1463"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/hello-ingress
  uid: b7bb5805-43b9-11e8-8323-08002760624d
spec:
  tls:
  - secretName: tls-secret
  rules:
  - host: hello.kubeless.io
    http:
      paths:
      - backend:
          serviceName: hello
          servicePort: 8080
        path: /

The secret tls-secret contains a dummy certificate generated by executing:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=hello.kubeless.io"
kubectl create secret tls tls-secret --key tls.key --cert tls.crt

After that the ingress controller shows the following:

I0417 13:33:30.812604       5 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"hello-ingress", UID:"eb05afbf-4243-11e8-a972-080027e9ef1d", APIVersion:"extensions", ResourceVersion:"2055", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/hello-ingress
I0417 13:33:30.832391       5 kong.go:832] creating Kong SSL Certificate for host hello.kubeless.io
I0417 13:33:30.853575       5 kong.go:845] creating Kong SNI for host hello.kubeless.io and certificate id 355ba20f-9193-4cdc-b216-8bef1852f8ad
I0417 13:33:30.933495       5 kong.go:788] creating Kong Upstream with name default.hello.8080
I0417 13:33:30.952623       5 kong.go:149] creating Kong Target 172.17.0.5:8080 for upstream 23a19415-000b-4383-93af-225d012fd4d1
I0417 13:33:31.066395       5 kong.go:240] creating Kong Service name default.hello.8080
I0417 13:33:31.072983       5 kong.go:270] service default/hello does not contains any pluging. Checking if is required to remove plugins...
I0417 13:33:31.105842       5 kong.go:550] creating Kong Route for host hello.kubeless.io, path / and service 8b81ee8d-6ac7-4f2a-b17b-50abffae33e8
E0417 13:33:31.114764       5 kong.go:595] Route 7ae6fbfd-8587-44d8-a1f7-8efb868d60e3 does not contains any pluging. Checking if is required to remove plugins...
I0417 13:33:34.145273       5 controller.go:138] syncing Ingress configuration...
I0417 13:33:34.148889       5 kong.go:832] creating Kong SSL Certificate for host hello.kubeless.io
I0417 13:33:34.152684       5 kong.go:845] creating Kong SNI for host hello.kubeless.io and certificate id bc0e8114-1b91-489d-83fe-fe5d5883667b
E0417 13:33:34.155621       5 kong.go:849] Unexpected error creating Kong SNI: [409] {"name":"already exists with value 'hello.kubeless.io'"}
E0417 13:33:34.155811       5 controller.go:142] unexpected failure updating Kong configuration:
the server reported a conflict (post snis.meta.k8s.io)
W0417 13:33:34.156051       5 queue.go:113] requeuing dummy/dummy, err the server reported a conflict (post snis.meta.k8s.io)

After that, the controller enters a loop trying to sync the Ingress configuration, showing the same error. The TLS configuration actually works (I am able to access the service through HTTPS), but I am not able to add any plugin to the Ingress object after that, since the controller becomes unresponsive.

What you expected to happen:

I wanted to create an Ingress object with TLS and add the JWT plugin after that. If I do it in reverse order (first add the JWT plugin, then the TLS configuration), it works fine.

How to reproduce it (as minimally and precisely as possible):

(Described in the previous section)

Anything else we need to know:

Observed a panic: "invalid memory address or nil pointer dereference"

BUG REPORT Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

Kong Ingress controller version:
kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.0.1

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-09T21:51:54Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:
AWS with Kops 1.8.6 + RBAC authorization

What happened:
Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

kubectl logs -f kong-ingress-controller-74f498f5c8-ssfj9 -c ingress-controller
-------------------------------------------------------------------------------
Kong Ingress controller
  Release:    0.0.1
  Build:      git-bec2acd
  Repository: https://github.com/Kong/kubernetes-ingress-controller.git
-------------------------------------------------------------------------------

W0411 11:51:33.904666       7 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0411 11:51:33.904826       7 main.go:232] Creating API client for https://100.64.0.1:443
I0411 11:51:33.999124       7 main.go:276] Running in Kubernetes Cluster version v1.8 (v1.8.6) - git (clean) commit 6260bb08c46c31eea6cb538b34a9ceb3e406689c - platform linux/amd64
I0411 11:51:34.000552       7 main.go:97] validated kong/kong-proxy as the default backend
I0411 11:51:39.415448       7 main.go:162] kong version: 0.13.0
W0411 11:51:39.415479       7 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0411 11:51:39.415669       7 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0411 11:51:39.415791       7 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0411 11:51:39.420922       7 run.go:149] starting Ingress controller
I0411 11:51:40.623408       7 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"example", Name:"example-api-countries-svc", UID:"629d4368-3d74-11e8-9f2b-024882f23250", APIVersion:"extensions", ResourceVersion:"9315843", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress example/example-api-countries-svc
I0411 11:51:40.721868       7 store.go:544] running initial sync of secrets
I0411 11:51:40.721998       7 leaderelection.go:175] attempting to acquire leader lease  kong/ingress-controller-leader-nginx...
W0411 11:51:40.722150       7 controller.go:431] service example/example-api-countries-svc does not have any active endpoints
I0411 11:51:40.722178       7 controller.go:138] syncing Ingress configuration...
I0411 11:51:40.723654       7 status.go:196] new leader elected: kong-ingress-controller-6d599988b-xpl8j
W0411 11:51:40.829536       7 kong.go:263] service  does not exists in kong
I0411 11:51:40.831000       7 kong.go:270] service / does not contains any pluging. Checking if is required to remove plugins...
E0411 11:51:40.837659       7 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/asm_amd64.s:573
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/panic.go:502
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/panic.go:63
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/signal_unix.go:388
/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/meta.go:159
/home/aledbf/go/src/github.com/kong/ingress-controller/internal/ingress/controller/kong.go:91
/home/aledbf/go/src/github.com/kong/ingress-controller/internal/ingress/controller/kong.go:91
/home/aledbf/go/src/github.com/kong/ingress-controller/internal/ingress/controller/controller.go:140
/home/aledbf/go/src/github.com/kong/ingress-controller/internal/ingress/controller/run.go:88
/home/aledbf/go/src/github.com/kong/ingress-controller/internal/task/queue.go:112
/home/aledbf/go/src/github.com/kong/ingress-controller/internal/task/queue.go:59
/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/home/aledbf/go/src/github.com/kong/ingress-controller/internal/task/queue.go:59
/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/asm_amd64.s:2361
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x10f4749]

goroutine 135 [running]:
github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x107
panic(0x128cdc0, 0x1e28000)
	/home/aledbf/.gimme/versions/go1.10.1.linux.amd64/src/runtime/panic.go:502 +0x229
github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/apis/meta/v1.(*ObjectMeta).GetAnnotations(...)
	/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/meta.go:159
github.com/kong/ingress-controller/internal/ingress/controller.(*NGINXController).syncRoutes(0xc42010e000, 0xc4206b7800, 0x0, 0x0, 0x0)
	/home/aledbf/go/src/github.com/kong/ingress-controller/internal/ingress/controller/kong.go:592 +0xf09
github.com/kong/ingress-controller/internal/ingress/controller.(*NGINXController).OnUpdate(0xc42010e000, 0xc4206b7800, 0x0, 0x0)
	/home/aledbf/go/src/github.com/kong/ingress-controller/internal/ingress/controller/kong.go:91 +0x221
github.com/kong/ingress-controller/internal/ingress/controller.(*NGINXController).syncIngress(0xc42010e000, 0x12f3640, 0xc420444320, 0xc42b0a3ea4, 0x1992bad2d)
	/home/aledbf/go/src/github.com/kong/ingress-controller/internal/ingress/controller/controller.go:140 +0x2f2
github.com/kong/ingress-controller/internal/ingress/controller.(*NGINXController).(github.com/kong/ingress-controller/internal/ingress/controller.syncIngress)-fm(0x12f3640, 0xc420444320, 0xa, 0xc4204b1d68)
	/home/aledbf/go/src/github.com/kong/ingress-controller/internal/ingress/controller/run.go:88 +0x3e
github.com/kong/ingress-controller/internal/task.(*Queue).worker(0xc4203d4240)
	/home/aledbf/go/src/github.com/kong/ingress-controller/internal/task/queue.go:112 +0x34a
github.com/kong/ingress-controller/internal/task.(*Queue).(github.com/kong/ingress-controller/internal/task.worker)-fm()
	/home/aledbf/go/src/github.com/kong/ingress-controller/internal/task/queue.go:59 +0x2a
github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc42001d7a8)
	/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc420229fa8, 0x3b9aca00, 0x0, 0x1, 0xc420044420)
	/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc42001d7a8, 0x3b9aca00, 0xc420044420)
	/home/aledbf/go/src/github.com/kong/ingress-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
github.com/kong/ingress-controller/internal/task.(*Queue).Run(0xc4203d4240, 0x3b9aca00, 0xc420044420)
	/home/aledbf/go/src/github.com/kong/ingress-controller/internal/task/queue.go:59 +0x55
created by github.com/kong/ingress-controller/internal/ingress/controller.(*NGINXController).Start
	/home/aledbf/go/src/github.com/kong/ingress-controller/internal/ingress/controller/run.go:157 +0xec

What you expected to happen:
No memory error; the controller should sync the Ingress without crashing.

How to reproduce it (as minimally and precisely as possible):
After we add a single Ingress resource, the service starts showing this error and no longer starts correctly. We have tried different resource limits (both larger and none at all) but the error is the same:

  resources:
     limits:
      cpu: 100m
      memory: 356Mi

Anything else we need to know:

<Feature Request> KongPlugin CRDs should be reusable

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature request

Kong Ingress controller version:
0.0.1

Kubernetes version (use kubectl version):
1.9.6

Environment:
GKE

What happened:
We created KongPlugin manifests for the plugins that we are using for multiple routes and annotated those Ingresses to use the plugins per the documentation. However, only one of the routes got the plugin via the Kong API. We were able to work around this by creating one unique CRD per Ingress, referencing the same Kong plugin in the annotation prefix, as specified in #4.

What you expected to happen:
Ideally we could define the plugin once and reuse it in many routes/ services.

Cannot create acl plugin group in the consumer

Kong Ingress controller version
Kong Ingress controller version:
kong:0.13.1-centos

Kubernetes version (use kubectl version):
v1.9.6

I created an ACL group using the resource file below. After it was created, the log shows that the URL the controller requests differs from the one in the plugin documentation.

File information

apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: key-auth-group
consumerRef: key-auth
type: acl
config:
  group: key-auth

log output

I0819 09:13:55.555476       6 round_trippers.go:383] POST http://localhost:8001/consumers/676742e1-a38f-11e8-a520-00163e10b5e3/acl
I0819 09:13:55.555489       6 round_trippers.go:390] Request Headers:
I0819 09:13:55.555497       6 round_trippers.go:393]     Accept: application/json
I0819 09:13:55.555506       6 round_trippers.go:393]     Content-Type: application/json
I0819 09:13:55.555895       6 round_trippers.go:408] Response Status: 404 Not Found in 0 milliseconds
E0819 09:13:55.555985       6 kong.go:564] Unexpected error updating Kong Route: [404] {"message":"Not found"}

plugin document

curl -X POST http://kong:8001/consumers/{consumer}/acls \
    --data "group=group1"
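The 404 appears to come from the controller POSTing to `/consumers/{consumer}/acl` (singular), while the Admin API documented above expects `/acls` (plural). A minimal sketch of the discrepancy, using the consumer ID from the log:

```python
# Sketch: the URL the controller logged vs. the URL the Kong Admin API documents.
consumer_id = "676742e1-a38f-11e8-a520-00163e10b5e3"

logged_url = f"http://localhost:8001/consumers/{consumer_id}/acl"       # what the controller called
documented_url = f"http://localhost:8001/consumers/{consumer_id}/acls"  # what the docs show

# The singular form is not a valid Admin API route, hence the 404 Not Found.
assert logged_url != documented_url
```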

The host attribute should not be required

Kong Ingress controller version:
kong:0.13.1-centos

Kubernetes version (use kubectl version):
v1.9.3

What happened:
I tried to create an ingress resource without a host specified. The ingress resource was created but the route was not created in Kong.

What you expected to happen:
I expected the route to exist in Kong since host is semi-optional in both Kong and Ingress.

How to reproduce it (as minimally and precisely as possible):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: foo
          servicePort: 80

See that the above creates an Ingress resource but not a Kong route.
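In Kong, an HTTP route only needs at least one matching attribute (hosts, paths, or methods), so a path-only route like the one implied by the host-less Ingress above is valid. A rough validity check illustrating that rule (a sketch, not the controller's actual code):

```python
def route_is_valid(route: dict) -> bool:
    """A Kong HTTP route must define at least one of hosts, paths, or methods."""
    return any(route.get(k) for k in ("hosts", "paths", "methods"))

# A path-only route (as produced by the host-less Ingress above) is valid...
assert route_is_valid({"paths": ["/foo"]})
# ...while a route with no matching attributes at all is not.
assert not route_is_valid({})
```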

kubectl patch kongingress does not work

Is this a request for help?:

No

What keywords did you search in Kong Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):

patch. Looks similar to #42


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Bug report
Kong Ingress controller version:

0.0.4

Kubernetes version (use kubectl version):

Server: v1.10.4-gke.0
Client: v1.9.7

Environment:

  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release): I am using the Container Optimized OS provided by google:
BUILD_ID=10452.101.0
NAME="Container-Optimized OS"
PRETTY_NAME="Container-Optimized OS from Google"
VERSION=66
ID=cos
  • Kernel (e.g. uname -a): Linux gke-prod-1-default-pool-560ebb10-xx4d 4.14.22+
  • Install tools: I used the basic getting started provided by your blog
  • Others:

What happened:

I added the following kongingress configuration

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: foo
  namespace:  baz
proxy:
  path: /xorf
route:
  strip_path: true

and then:

kubectl patch kongingress foo --namespace="baz" -p '{"proxy": {"path": "\/nurf"}}'

Or

kubectl patch kongingress foo --namespace="baz" -p '{"proxy": {"path": "/nurf"}}'

The server responds:

Error from server (UnsupportedMediaType): the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json

What you expected to happen:

The Path of the kongingress to be updated from /xorf into /nurf
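`kubectl patch` defaults to a strategic merge patch, which CRDs do not support; passing `--type merge` selects `application/merge-patch+json`, one of the media types the error message lists (e.g. `kubectl patch kongingress foo --namespace baz --type merge -p '{"proxy": {"path": "/nurf"}}'`). The merge-patch semantics (RFC 7386) that the API server then applies can be sketched as:

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON merge patch: dicts merge recursively,
    null deletes a key, anything else replaces the target value."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

# The KongIngress from the report, patched from /xorf to /nurf.
kongingress = {"proxy": {"path": "/xorf"}, "route": {"strip_path": True}}
patched = json_merge_patch(kongingress, {"proxy": {"path": "/nurf"}})
assert patched == {"proxy": {"path": "/nurf"}, "route": {"strip_path": True}}
```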

Ingress with multiple paths does not use plugin

NOTE: GitHub issues are reserved for bug reports only.
For anything else, please join the conversation
in Kong Nation https://discuss.konghq.com/c/kubernetes.



Kong Ingress controller version 0.0.5

Kubernetes version

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment

  • Cloud provider or hardware configuration: AKS

What happened

Is it possible to configure multiple paths on a single ingress while using a plugin?

When I define separate KongPlugins and Ingresses, it works:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-A
  namespace: my-namespace
  annotations:
    jwt.plugin.konghq.com: my-plugin-jwt-A
spec:
  rules:  
    - host: localhost
      http:
        paths:
        - path: /routeA
          backend:
            serviceName: my-service-A
            servicePort: 80

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-B
  namespace: my-namespace
  annotations:
    jwt.plugin.konghq.com: my-plugin-jwt-B
spec:
  rules:  
    - host: localhost
      http:
        paths:
        - path: /routeB
          backend:
            serviceName: my-service-B
            servicePort: 80

However, this is a bit cumbersome; I would like to combine the Ingresses and use one plugin CRD, like so:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    jwt.plugin.konghq.com: my-plugin-jwt
spec:
  rules:  
    - host: localhost
      http:
        paths:
        - path: /routeA
          backend:
            serviceName: my-service-A
            servicePort: 80
        - path: /routeB
          backend:
            serviceName: my-service-B
            servicePort: 80

This issue is related to "KongPlugin CRDs should be reusable", but I think that issue refers to using one plugin across multiple Ingress definitions.

Is what I am attempting possible?
I still see that all of the services and routes are created, just no plugins.

Expected behavior

Each route should get an instance of the plugin
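The expected fan-out can be sketched as: one plugin annotation on the Ingress yields one plugin attachment per generated route (names and data shapes are illustrative, not the controller's internals):

```python
def attach_plugin_to_routes(ingress_paths, plugin_name):
    """Sketch of the expected behavior: every route generated from an
    Ingress path gets its own attachment of the annotated plugin."""
    routes = [{"paths": [p["path"]], "service": p["service"]} for p in ingress_paths]
    attachments = [{"route": r["paths"][0], "plugin": plugin_name} for r in routes]
    return routes, attachments

paths = [
    {"path": "/routeA", "service": "my-service-A"},
    {"path": "/routeB", "service": "my-service-B"},
]
routes, attachments = attach_plugin_to_routes(paths, "my-plugin-jwt")
# Both routes, not just one, should carry the plugin.
assert len(attachments) == len(routes) == 2
```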

Ingress controller does not alter the Service object properties from custom Ingress configuration


Kong Ingress controller version
0.13

Kubernetes version

1.11.1

Environment

  • Azure AKS

What happened

Ingress controller does not alter the Service object properties from custom Ingress configuration

Expected behavior

Once I change, for instance, read_timeout in a KongIngress custom resource, all Ingresses which reference it through the configuration.konghq.com annotation should get their Service records updated with the new read_timeout value.

Steps To Reproduce

  1. Deploy Ingress object referencing KongIngress through configuration.konghq.com annotation
  2. Deploy or change KongIngress object setting proxy section attributes
  3. Check corresponding Service object attributes in Admin API. Attribute values are default
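The steps above amount to a propagation rule: when the KongIngress proxy section changes, every Kong service whose Ingress references it should be updated. A sketch under hypothetical data shapes:

```python
def propagate_proxy_settings(kongingress, services, annotations):
    """Sketch: copy proxy attributes (e.g. read_timeout) from a KongIngress
    onto every Kong service whose Ingress references it via the
    configuration.konghq.com annotation. Data shapes are illustrative."""
    for name, svc in services.items():
        if annotations.get(name) == kongingress["name"]:
            svc.update(kongingress["proxy"])
    return services

services = {"svc-a": {"read_timeout": 60000}, "svc-b": {"read_timeout": 60000}}
annotations = {"svc-a": "my-kongingress", "svc-b": "other-kongingress"}
kongingress = {"name": "my-kongingress", "proxy": {"read_timeout": 5000}}
services = propagate_proxy_settings(kongingress, services, annotations)
# Only the referencing service gets the new timeout; the other keeps its value.
assert services["svc-a"]["read_timeout"] == 5000
assert services["svc-b"]["read_timeout"] == 60000
```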

Updates kong services on each synchronization check when not necessary

Summary

The ingress controller is issuing a PATCH to Kong for each service in the environment on each synchronization check. This doesn't necessarily sound bad, but when you have 200+ services and they all issue a PATCH at the same time, you get contention on the database that can manifest as errors in the kong-proxy logs and 5XX errors for consumers.

Kong Ingress controller version
0.1.0

Kubernetes version

1.9.3

What happened

Users experience service outages using kong-ingress with a large number of services due to unnecessary database writes every 10 minutes.

Expected behavior

Services in k8s with no changes needed in Kong should detect that no change is necessary and avoid sending a PATCH, similar to how plugins function.
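The suggested behavior, diffing desired against current state before writing, can be sketched as (illustrative, not the controller's code):

```python
def needs_patch(current: dict, desired: dict) -> bool:
    """Only fields present in the desired state are compared, so unrelated
    server-side fields (id, timestamps) don't force a write."""
    return any(current.get(k) != v for k, v in desired.items())

current = {"id": "abc", "host": "svc.default.svc", "retries": 5}
desired = {"host": "svc.default.svc", "retries": 5}
# No drift: the sync loop should skip the PATCH entirely.
assert not needs_patch(current, desired)
# Drift in one field: a PATCH is warranted.
assert needs_patch(current, {"host": "svc.default.svc", "retries": 3})
```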

hash_on_header not recognized

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug (maybe feature request?)

Kong Ingress controller version:
0.0.2

Kubernetes version (use kubectl version):
client: 1.10
server: 1.9

What happened:
We attempted to use a hash_on: header load balancing scheme for our KongIngress resource, and configured the upstream as such:

upstream:
  hash_on: header
  hash_on_header: x-my-cool-header

The route was not created in kong, and checking the kong logs, we found:

E0502 19:41:17.068450       6 kong.go:895] Unexpected error creating Kong Upstream: [400] {"message":"Hashing on 'header', but no header name provided"}                                                             
E0502 19:41:17.068471       6 controller.go:130] unexpected failure updating Kong configuration:                                                                                                                                                                                
the server rejected our request for an unknown reason (post upstreams.meta.k8s.io)   

What you expected to happen:
Kong should create a sticky session based on my custom header and, if the header is not present, fall back to the round-robin scheme.
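The expected upstream behavior, hashing on the header when it is present and otherwise falling back (which Kong exposes via the upstream's `hash_fallback` attribute), can be sketched as:

```python
import hashlib
from itertools import cycle

def pick_target(targets, headers, hash_header="x-my-cool-header", fallback=None):
    """Sketch: consistent choice by header value when the header is present,
    otherwise delegate to a round-robin fallback iterator."""
    value = headers.get(hash_header)
    if value is not None:
        digest = int(hashlib.sha256(value.encode()).hexdigest(), 16)
        return targets[digest % len(targets)]
    return next(fallback)

targets = ["pod-1", "pod-2", "pod-3"]
rr = cycle(targets)
# Same header value always lands on the same target (sticky).
a = pick_target(targets, {"x-my-cool-header": "user-42"}, fallback=rr)
b = pick_target(targets, {"x-my-cool-header": "user-42"}, fallback=rr)
assert a == b
# Missing header falls back to round robin.
assert pick_target(targets, {}, fallback=rr) in targets
```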

Thank you!

How can I configure service/path?

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature request

Kong Ingress controller version:
0.0.3

Kubernetes version (use kubectl version):
1.9.6

Environment:
Docker for mac

What happened:
I can't configure the service path. I want requests to "http://localhost/sample" to be proxied to "http://sample-http/external".

What you expected to happen:
Either the Ingress annotation or the KongIngress proxy path should perform the rewrite.

Ingress (ingress.kubernetes.io/rewrite-target is not working)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-http
  annotations:
    ingress.kubernetes.io/rewrite-target: "/external"
spec:
  rules:
  - host: localhost
    http:
      paths:
      - path: /sample
        backend:
          serviceName: sample-http
          servicePort: 80

KongIngress (proxy/path is not working)

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: sample-http
proxy:
  path: /external
route:
  strip_path: true
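The intended mapping is Kong's path composition: with `strip_path: true` the matched route prefix is removed and the service `path` is prepended, so `/sample/x` becomes `/external/x` upstream. A sketch of that rewrite:

```python
def upstream_path(request_path, route_path, service_path, strip_path=True):
    """Sketch of Kong's path composition: strip the matched route prefix,
    then prepend the service path."""
    remainder = request_path
    if strip_path and request_path.startswith(route_path):
        remainder = request_path[len(route_path):]
    return (service_path.rstrip("/") + "/" + remainder.lstrip("/")).rstrip("/") or "/"

# /sample on localhost should be proxied to /external on the service.
assert upstream_path("/sample", "/sample", "/external") == "/external"
assert upstream_path("/sample/users", "/sample", "/external") == "/external/users"
```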

creating a kongplugin resource on minikube does not create a plugin on /plugins

Kong Ingress controller version:
kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.0.3

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-16T03:15:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • OS (e.g. from /etc/os-release): minikube version: v0.26.1

What happened:
I followed the steps at the link below to add a plugin to a consumer:
https://kubeless.io/docs/http-triggers/#enable-kong-security-plugins

The plugin does not show up on the plugins endpoint.

What you expected to happen:

${KONG_ADMIN_IP}:${KONG_ADMIN_PORT}/plugins

would contain the basic-auth plugin.

How to reproduce it (as minimally and precisely as possible):

echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: basic-auth
consumerRef: basic-auth
config:
  hide_credentials: false
" | kubectl create -f -
kongplugin "basic-auth" created
echo "
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: basic-auth
username: user
" | kubectl create -f -
kongconsumer "basic-auth" created
echo "
apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: basic-auth
consumerRef: basic-auth
type: basic-auth
config:
  username: user
  password: pass
" | kubectl create -f -
kongcredential "basic-auth" created
kubectl patch ingress get-python \
 -p '{"metadata":{"annotations":{"basic-auth.plugin.konghq.com":"basic-auth"}}}'

The plugin should have been activated. Verification:

export KONG_ADMIN_PORT=$(minikube service -n kong kong-ingress-controller --url --format "{{ .Port }}")
export KONG_ADMIN_IP=$(minikube service   -n kong kong-ingress-controller --url --format "{{ .IP }}")
export PROXY_IP=$(minikube   service -n kong kong-proxy --url --format "{{ .IP }}" | head -1)
export HTTP_PORT=$(minikube  service -n kong kong-proxy --url --format "{{ .Port }}" | head -1)

curl ${KONG_ADMIN_IP}:${KONG_ADMIN_PORT}/plugins |jq .
