Contour

Overview

Contour is an ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. Contour supports dynamic configuration updates out of the box while maintaining a lightweight profile.

Contour supports multiple configuration APIs in order to meet the needs of as many users as possible:

  • Ingress - A stable upstream API that enables basic ingress use cases.
  • HTTPProxy - Contour's Custom Resource Definition (CRD), which expands upon the functionality of the Ingress API to allow for a richer user experience as well as to solve shortcomings in the original design.
  • Gateway API - A new CRD-based API managed by the Kubernetes SIG-Network community that aims to evolve Kubernetes service networking APIs in a vendor-neutral way.

Prerequisites

See the compatibility matrix for the Kubernetes versions supported by Contour.

RBAC must be enabled on your cluster.

Get started

Getting started with Contour is as simple as one command. See the Getting Started document.

Troubleshooting

If you encounter issues, review the Troubleshooting section of the docs, file an issue, or talk to us on the #contour channel on the Kubernetes Slack server.

Contributing

Thanks for taking the time to join our community and start contributing!

Roadmap

See Contour's roadmap to learn more about where we are headed.

Security

Security Audit

A third-party security audit was performed by Cure53 in December 2020. You can see the full report here.

Reporting security vulnerabilities

If you've found a security-related issue, a vulnerability, or a potential vulnerability in Contour, please let the Contour Security Team know, including the details of the vulnerability. We'll send a confirmation email to acknowledge your report, and an additional email once we've confirmed or ruled out the issue.

For further details please see our security policy.

Changelog

See the list of releases to find out about feature changes.

Issues

Review the k8s watcher resync interval

@ncdc recommends that we raise the current shared informer resync interval from 30 minutes to 12 or 24 hours, or perhaps disable it completely.

sw := cache.NewSharedInformer(lw, objType, 30*time.Minute)
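
For illustration, a minimal sketch of what the change might look like (the newWatcher wrapper is hypothetical; in client-go, a resync period of 0 disables periodic resyncs entirely):

package contour

import (
    "time"

    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/client-go/tools/cache"
)

// newWatcher builds the shared informer with a much longer resync
// interval; passing 0 instead disables periodic resyncs entirely.
func newWatcher(lw cache.ListerWatcher, objType runtime.Object) cache.SharedInformer {
    return cache.NewSharedInformer(lw, objType, 24*time.Hour)
}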

Wildcard host for ingress

Hello,

I have a use case where I need multiple hostnames routed to the same backend service, example:

http://foo.webrelay.io -> relayServiceName:9400
http://bar.webrelay.io -> relayServiceName:9400

https://foobar.webrelay.io -> relayServiceName:9500
https://foobarfoo.webrelay.io -> relayServiceName:9500

This list can expand and contract dynamically, so wildcard hostnames would be ideal. Ingress example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: relay-ingress
spec:
  rules:
  - host: '*.webrelay.io'
    http:
      paths:
      - backend:
          serviceName: whr
          servicePort: 9400
        path: /

Virtual host exceeds allowed maximum length

Hi,
I am trying to use contour in conjunction with ingress objects for host-based routing to backend services. Here is an example of the spec section from the ingress definition:

  "spec": {
    "rules": [
      {
        "host": "my-very-very-long-service-host-name.my.domainname",
        "http": {
          "paths": [
            {
              "backend": {
                "serviceName": "my-service-name",
                "servicePort": 8088
              }
            }
          ]
        }
      }
    ]
  }

Contour picks it up but throws an error because the generated virtual host name turns out to be too long:

[2017-11-01 14:22:54.787][1][warning][upstream] source/common/router/rds_subscription.cc:65] rds: fetch failure: Invalid virtual host name: Length of default/my-service-name/my-very-very-long-service-host-name.my.domainname (73) exceeds allowed maximum length (60)

Unfortunately, I can't quickly shorten the host names to fit the enforced naming convention. Is there a workaround, or a plan to make this configurable? What would the considerations be if the virtual host name could be overridden through an annotation? Thanks!

Possible issue with Ingress rules not routing?

Running on K8s 1.8.4 + RBAC on OpenStack, with LoadBalancer support (I think), I re-deployed Contour just now to make sure I was on a recent version as well. I'm testing out a simple rule.

This Ingress definition works, confirmed with curl -i http://<LB IP>:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello
  labels:
    app: hello
spec:
  backend:
    serviceName: hello
    servicePort: 80

This however does not, giving instead a 404 when I curl -i http://<LB IP>/hello:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello
  labels:
    app: nelson
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: hello
              servicePort: 80

Here are the Deployment and Service objects I'm using:

kind: Deployment
apiVersion: apps/v1beta1
metadata:
  labels:
    app: hello
    version: v1
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
        version: v1
      name: hello
    spec:
      containers:
        - name: hello
          image: gcr.io/hightowerlabs/server:0.0.1
          imagePullPolicy: Always
          args:
            - "-name=istio-test-v1"
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  labels:
    app: hello
spec:
  ports:
  - port: 80
    name: http
    targetPort: 80
    protocol: TCP
  selector:
    app: hello
  type: ClusterIP

Issue with Contour after upgrading Kubernetes to 1.8.4

Hello,

We were recently trying out Contour on our 1.7.7 k8s dev cluster. We needed to upgrade the cluster to 1.8.4 to resolve a bug. The cluster itself was deployed using kops, so it was upgraded using that tool. The upgrade went fine, but we noticed that the contour deployment was trying to run on a node that no longer existed:

node 'ip-172-20-95-73.ec2.internal' not found

I deleted the deployment, service, namespace, service account, cluster role, and cluster role binding. I tried redeploying, and I still keep getting the same error. Any thoughts on how to resolve this? I was using the following to deploy Contour:

kubectl apply -f https://j.hept.io/contour-deployment-rbac

Please let me know if you require further info. TIA!

Customizing Envoy through Ingress resources

Envoy provides a collection of features beyond the host/URI routing formalized in the Ingress resource (advanced load balancing, fault injection, custom filters, etc.). Do you have any plans or thoughts on how to expose those features in the ingress controller?

Support ingress.kubernetes.io/configuration-snippet

Possibly support the ingress.kubernetes.io/configuration-snippet annotation

Most of Envoy's configuration has a JSON or YAML representation, so in theory a snippet could be overlaid onto the configuration passed back to Envoy from the caches. In practice this sounds quite hard to do (merging the snippet, then retaining it in the face of dynamic updates from k8s), but it's still an interesting prospect.

default ingress appears on ingress_https

This ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: kuard
  name: kuard
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: kuard
          servicePort: 80

Will register the route on ingress_https even though it has no hostname or TLS stanza. This should not happen.

Add benchmarks for v1 REST xDS APIs

Add some basic benchmarks for CDS, SDS, and RDS with 1, 10, 100, and 10,000 cache entries.

The key things I want to see are the processing time and the bytes allocated (JSON is expensive).

We need this information to judge the cost of making v1 REST a wrapper around v2 gRPC.Fetch.
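
As a shape for these, a minimal table-driven benchmark sketch; buildCache and serveRDS are hypothetical stand-ins for the real cache and REST handler:

package contour_test

import (
    "fmt"
    "testing"
)

func BenchmarkRDS(b *testing.B) {
    for _, n := range []int{1, 10, 100, 10000} {
        b.Run(fmt.Sprintf("entries-%d", n), func(b *testing.B) {
            c := buildCache(n) // hypothetical: a cache seeded with n entries
            b.ReportAllocs()   // surfaces bytes allocated per op
            b.ResetTimer()
            for i := 0; i < b.N; i++ {
                serveRDS(c) // hypothetical: render the RDS JSON response
            }
        })
    }
}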

README: short links in examples should be HTTPS

From #22

kubectl apply is arbitrary code execution as your user on a cluster. As such, the URLs should not be trivially MITM-able into serving malicious code, e.g. on coffee shop wifi.

We should find a solution for serving the short links over https.

Make bootstrap mode optional

Contour is currently deployed as a daemonset on each node, with the bootstrap configuration provided in a shared volume mount. The configuration is written by an init container, which is Contour run with the -initconfig flag.

There is no fixed requirement for this when deploying Contour: the bootstrap information could instead be provided by a ConfigMap, or perhaps a PV, noting that hot reloading of configuration via the filesystem is not the recommended use case for Contour.

This issue exists mainly to document the use of -initconfig and the options for providing the bootstrap configuration in other ways.

Envoy should bind in IPv4/v6

Hi,

I'm trying Contour (and Envoy) in my Kubernetes cluster, but I'm not able to make it work because Envoy binds to IPv4 only in the pod.

It would be awesome if all services could bind to IPv6 too, by default.

/ # netstat -lptn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:8000          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:8001          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      1/envoy

Looking at the code, it seems a small change is needed in internal/contour/json.go (l147):

		Listeners: []envoy.Listener{{
			Name:    "ingress_http",
-			Address: "tcp://0.0.0.0:8080", // TODO(dfc) should come from pod.hostIP
+			Address: "tcp://[::]:8080", // TODO(dfc) should come from pod.hostIP

			Filters: []envoy.Filter{

What do you think about that?

SSL Support

The README mentions that SSL/TLS support is a work in progress. SNI has landed in Envoy now (envoyproxy/envoy#95).

I'm wondering what the plan/timeline is for using that SNI support in Contour. I have some projects blocked on it, and if it's not coming fairly soon, I was wondering whether it's an open project I could take on.

Create a /healthz style probe

Contour should be defining probes so that traffic isn't sent to an instance until it is ready. Similarly, this should support a more graceful drain mechanism when downscaling.

The way I thought about doing this was to inject a default vhost into Envoy, pointing to Contour, that would provide a /healthz endpoint. The idea is that hitting the node via IP or internal host name (as the ELB does) would not return healthy until both Envoy and Contour were up and running.
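
A rough sketch of the Contour side (port and wiring are illustrative only, not Contour's actual code):

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Report healthy once the process is serving; a real implementation
    // would also gate on the xDS caches being primed before answering 200.
    http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        fmt.Fprintln(w, "ok")
    })
    log.Fatal(http.ListenAndServe(":8000", nil)) // port is illustrative
}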

grpc: support individual http and https routes

Currently, for SNI support, the http and https listeners both point to the same route table, ingress_http.

To support #63 and/or #88, internal/contour/listener.go will need to be able to write different route records for the http and https versions of a vhost.

SSL Passthrough

Hello, it would be good to know whether Contour supports SSL passthrough and, if it doesn't, whether it would be possible/reasonable to add it.

The NGINX ingress controller has an ingress.kubernetes.io/ssl-passthrough: "true" annotation; it might make sense to keep the format the same to make it easier to switch between ingress controllers.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: contour
    ingress.kubernetes.io/ssl-passthrough: "true"
  name: relay-ingress
  namespace: default

Add benchmarks for the watcher caches

Add basic benchmarks for insert, delete, and each for the three watcher caches (maybe one is sufficient).

Benchmark sizes should be 1, 10, 100, and 10,000.

This will inform #5 and possibly inform a move from using a map to something like xtgo's sorted set implementation.
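
For the shape of these, a quick sketch of the insert case across the proposed sizes (the map here is a hypothetical stand-in for a watcher cache; delete and each would follow the same pattern):

package contour_test

import (
    "strconv"
    "testing"
)

func BenchmarkCacheInsert(b *testing.B) {
    for _, n := range []int{1, 10, 100, 10000} {
        b.Run(strconv.Itoa(n), func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                c := make(map[string]int, n) // stand-in for a watcher cache
                for j := 0; j < n; j++ {
                    c[strconv.Itoa(j)] = j
                }
            }
        })
    }
}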

Support vhosts split across ingress objects

The Ingress spec permits the routes for a vhost to be defined across several Ingress objects. These objects could be in the same namespace or spread across namespaces.

Currently contour does not handle this, generating duplicate vhost definitions which envoy will therefore reject.

This is needed for #78 because cert managers like jetstack/kube-lego expect to be able to inject a route onto an ingress from a different namespace.

Watching all endpoints is expensive

@timothysc suggested that watching all endpoints is expensive in high-scale systems.

Evaluate the cost of this: does it put load on the API server, or on contour?

Either way, the result will be that we have to filter the endpoints we watch down to the set referenced by services that are in turn referenced by ingresses.
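
A minimal sketch of that filter, assuming a hypothetical referenced set keyed by "namespace/name" and maintained from ingress-to-service references:

package contour

import v1 "k8s.io/api/core/v1"

// wantEndpoints reports whether an Endpoints object backs a Service that
// some Ingress references, and is therefore worth watching and caching.
func wantEndpoints(referenced map[string]bool, ep *v1.Endpoints) bool {
    return referenced[ep.Namespace+"/"+ep.Name]
}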

Support backend services that talk https

At the moment Contour configures Envoy to accept HTTP on incoming connections (either an ELB, or an NLB in pass-through mode). However, on the backend, traffic from Envoy to pods is always HTTP.

There is no reason this has to be the case (each port on a service has its own Envoy cluster stanza, which can hold the relevant connection parameters), but there is no established mechanism on the Service object to say what protocol to speak on a port.

This issue tracks two things:

  1. Defining a convention to identify the protocol a port speaks. The naive implementation would key off the port's name (see the sketch below), but this seems shortsighted.
  2. Supporting talking to a backend over HTTPS.
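
To illustrate the naive option in point 1, a sketch keyed off the port's name (upstreamScheme is a hypothetical helper, not Contour code):

package contour

import v1 "k8s.io/api/core/v1"

// upstreamScheme picks the protocol Envoy should speak to the backend,
// using the service port's name as the (naive) signal.
func upstreamScheme(port v1.ServicePort) string {
    if port.Name == "https" {
        return "https"
    }
    return "http"
}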

gRPC support GA

This is a high-level tracking issue for promoting gRPC support to GA. Specifically:

  • Update all deployment manifests to use gRPC.
  • Additional testing, preferably e2e testing, for gRPC.

Cannot deploy ClusterRoles on GKE 1.8.1 Cluster

It seems I've run into this issue. After the cluster came up, this was the first thing I tried. Let me know what you need from me to help debug this.

$ kubectl get nodes
NAME                                         STATUS    ROLES     AGE       VERSION
gke-gke-cluster-default-pool-97bc52ee-076l   Ready     <none>    19h       v1.8.1-gke.1
gke-gke-cluster-default-pool-97bc52ee-b0wh   Ready     <none>    19h       v1.8.1-gke.1
gke-gke-cluster-default-pool-97bc52ee-g42r   Ready     <none>    19h       v1.8.1-gke.1
$ kubectl apply -f http://j.hept.io/contour-deployment-rbac
namespace "heptio-contour" created
serviceaccount "contour" created
deployment "contour" created
clusterrolebinding "contour" created
service "contour" created
Error from server (Forbidden): error when creating "http://j.hept.io/contour-deployment-rbac": clusterroles.rbac.authorization.k8s.io "contour" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["watch"]}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]
$ kubectl delete -f http://j.hept.io/contour-deployment-rbac
namespace "heptio-contour" deleted
serviceaccount "contour" deleted
deployment "contour" deleted
clusterrolebinding "contour" deleted
service "contour" deleted
Error from server (NotFound): error when deleting "http://j.hept.io/contour-deployment-rbac": clusterroles.rbac.authorization.k8s.io "contour" not found

SSL Redirect

Hi,

Currently my ELB is configured to handle the SSL stuff, and it's working fine, but I want to enforce HTTPS when hitting the URL. It would be awesome if Contour could support ingress.kubernetes.io/force-ssl-redirect: "true" like NGINX does (see here).

bug: does cmd/contour have to call flag.Parse unconditionally

ERROR: logging before flag.Parse: W1218 11:31:33.776933   73199 reflector.go:341] github.com/heptio/contour/internal/k8s/watcher.go:61: watch of *v1.Endpoints ended with: too old resource version: 8605078 (8606302)

cmd/contour uses kingpin, not flag, but I suspect the glog infection via client-go requires flag.Parse to be called unconditionally.
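
A common workaround, sketched here as an assumption rather than cmd/contour's actual code: parse an empty flag set once at startup, which satisfies glog without disturbing kingpin's handling of os.Args:

import "flag"

// This init would sit in cmd/contour's package main.
func init() {
    // Satisfy glog's "logging before flag.Parse" check without consuming
    // the real os.Args, which kingpin parses itself.
    _ = flag.CommandLine.Parse([]string{})
}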

feature: contourcli

This is mainly intended as a debugging aid.

When running Contour locally, outside the cluster, it would be useful to have a gRPC client to interrogate Contour without having to deploy the pod to k8s and divine its workings via Envoy's debug logs.
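
A bare-bones sketch of the plumbing such a client could start from (the address and port are illustrative; generated xDS client stubs would hang off the connection):

package main

import (
    "log"

    "google.golang.org/grpc"
)

func main() {
    // Dial a locally running Contour's gRPC endpoint.
    conn, err := grpc.Dial("127.0.0.1:8001", grpc.WithInsecure())
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    // Generated xDS client stubs (CDS/RDS/EDS) would wrap conn here,
    // fetching and pretty-printing Contour's current view of the cluster.
}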

Implement Contour ksonnet library

This is a follow-up from our Slack conversation some time ago. I'll outline what would be involved in this work and how people would use it.

Background

A ksonnet library is collected into a set of parts that can be used to configure and deploy an application on Kubernetes for many different scenarios. For example, in redis, we expose parts.networkPolicy, parts.deployment, parts.secret (for storing the password, if we need one), parts.pvc (if we wish to deploy redis backed by a persistent volume), and so on.

We expose a set of pre-fabricated combinations of these parts using prototypes, which can be used with ks generate. For example, the redis library exposes the following prototypes:

io.ksonnet.pkg.redis-all-features
io.ksonnet.pkg.redis-persistent
io.ksonnet.pkg.redis-stateless

Each of these uses a subset of the parts exposed by the library. For example, io.ksonnet.pkg.redis-stateless uses a deployment and a secret, but does not use a PVC. Using the ks tool to generate a fully-formed and functional manifest is as easy as:

ks generate io.ksonnet.pkg.redis-stateless lovely-cache --name redis

This would generate a file that contains (roughly) the following:

{
  kind: "List",
  apiVersion: "v1",
  items: [
    redis.parts.deployment.nonPersistent(namespace, name, name),
    redis.parts.secret(namespace, name, redisPassword),
    redis.parts.svc.metricDisabled(namespace, name),
  ],
}

For Contour

The ultimate goal is that getting started for Contour users should be a couple of commands:

$ PATH=$PATH:$GOPATH/bin
$ go get github.com/ksonnet/ksonnet
$ ks init simple-contour && cd simple-contour
$ ks generate com.heptio.pkg.simple-contour contour --name simple-contour [... other flags here ...]

To accomplish this, I think we can follow basically the same pattern here:

  • Put the "common" API objects in deployment/ (e.g., the DaemonSet, Deployment, NetworkPolicy, and so on) into a contour.libsonnet.
  • Create the rest of the scaffolding a ksonnet library needs, notably the parts.yaml file.
  • Create a set of prototypes that allow users to combine these parts into pre-fabricated "flavors".
    • I'll need your help with deciding which.
  • Publish, either in heptio/contour or in ksonnet/parts.
  • Update documentation.
  • Consider whether to retire the scripts that generate the API objects.

Work for the future

Following this, and out of scope of this issue, is to talk about how we can make it very easy for people to write new apps against the Contour ingress controller. I think it's possible to have a good story there, but I don't think it's purely a question of templating, so we should defer this discussion for later.

Support basic host rewriting

Support, probably via an annotation, an Ingress object which exists to redirect www.example.com to example.com (or vice versa).

In YAML, this might look like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.host-rewrite: "example.com"
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              serviceName: foo  # ignored
              servicePort: http # ignored

Ingress Status Updates

The Roadmap outlined that Ingress status updates were coming; this issue just adds tracking for that work.

In addition to the example given, there are several projects that rely on this information being present. Of particular interest to us would be External-DNS, which utilizes the status to determine the appropriate destination for the relevant DNS entries.

Blocked:

documentation: document that envoy dynamically opens http/https ports as required

Starting in Contour 0.3, Envoy will only open a listening socket if there is something to listen for. For example, if none of the ingress objects visible to Contour use TLS (an ingress class can exclude ingress objects from Contour), Envoy will not open the HTTPS listening socket.

Possibly more surprising: if there are no ingress objects visible to Contour at all (maybe because they are all assigned to a different ingress class, or because this is a fresh cluster), Envoy will not open any listening sockets. This can be confusing for admins who expect Envoy to open a listening socket before there is anything to listen for.

We probably need to document this in a troubleshooting page.

Updates #48

/cc @Bradamant3
