
tyk-helm-chart's Introduction

Tyk Helm Chart

Tyk provides 3 different Helm charts in this repo. Please visit the respective page for each chart to learn how to install it and to find all the information relevant to that chart.

Warning

tyk-hybrid will be deprecated soon. Please use our new Helm chart for the Tyk Hybrid Data Plane at tyk-data-plane instead. We recommend that all users migrate to the new Helm chart. Please review the Configuration section of the new chart and cross-check it with your existing configuration while planning the migration.

Warning

tyk-headless will be deprecated soon. Please use our new Helm chart for Tyk open source at tyk-oss instead. We recommend that all users migrate to the new Helm chart. Please review the Configuration section of the new chart and cross-check it with your existing configuration while planning the migration.

Redis and MongoDB or PostgreSQL

  • Redis is required for all Tyk installations. It must be installed in the cluster or reachable from inside Kubernetes.
  • MongoDB or PostgreSQL is only required for the tyk-pro installation and must be installed in the cluster or reachable from inside Kubernetes. If you are using the MongoDB or SQL pumps in the tyk-headless installation, you will need MongoDB or PostgreSQL installed for that as well.

For Redis you can use these rather excellent charts provided by Bitnami:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
kubectl create namespace tyk

helm install tyk-redis bitnami/redis -n tyk
(follow notes from the installation output to get connection details and update them in `values.yaml` file)
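
For example, a values.yaml fragment pointing Tyk at the Bitnami Redis installed above might look like the snippet below. The service name and password come from the Bitnami installation notes, so treat these values as placeholders to replace with your own.

redis:
  addrs:
    # service name created by the Bitnami Redis release "tyk-redis" in the tyk namespace
    - tyk-redis-master.tyk.svc.cluster.local:6379
  # password printed in the Bitnami installation notes
  pass: "<redis password>"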

For Mongo or PostgreSQL you can use these rather excellent charts provided by Bitnami:

helm install tyk-mongo bitnami/mongodb --version {HELM_CHART_VERSION} --set "replicaSet.enabled=true" -n tyk
(follow notes from the installation output to get connection details and update them in `values.yaml` file)

NOTE: Here is the list of supported MongoDB versions. Please make sure the MongoDB Helm chart you install matches one of these versions.

helm install tyk-postgres bitnami/postgresql --set "auth.database=tyk_analytics" -n tyk
(follow notes from the installation output to get connection details and update them in `values.yaml` file)
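
For example, a values.yaml fragment for the Bitnami MongoDB installed above might look like the snippet below; the host, credentials and database name are placeholders to replace with the values from the installation notes. PostgreSQL connection details go into the corresponding postgres section of values.yaml in the same way.

mongo:
  # root password comes from the Bitnami installation notes; database name must match your dashboard config
  mongoURL: mongodb://root:<password>@tyk-mongo-mongodb.tyk.svc.cluster.local:27017/tyk_analytics?authSource=admin
  useSSL: false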

Important note regarding MongoDB: this Helm chart enables the PodDisruptionBudget for MongoDB with an arbiter replica count of 1. If you intend to perform system maintenance on the node where the MongoDB pod is running and this maintenance requires the node to be drained, the drain will be prevented because the replica count is 1. Increase the replica count in the Helm chart deployment to a minimum of 2 to remedy this issue.
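
As a sketch only: the exact parameter name depends on the Bitnami chart version you install (recent versions use replicaCount, older ones use replicaSet.replicas.secondary), so check that chart's values before relying on this command.

helm install tyk-mongo bitnami/mongodb --version {HELM_CHART_VERSION} --set "replicaSet.enabled=true" --set "replicaCount=2" -n tyk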

Another option for Redis and MongoDB, if you want to get started quickly, is to use our simple charts. Please note that these charts must never be used in production; they are intended only for quick-start evaluation. Use external databases or the official Helm charts for MongoDB and Redis in any other case. We provide these charts so you can get Tyk running quickly, but they are not meant for long-term storage of data.

kubectl create namespace tyk
helm repo add tyk-helm https://helm.tyk.io/public/helm/charts/
helm repo update

Redis

helm install redis tyk-helm/simple-redis -n tyk

MongoDB

helm install mongo tyk-helm/simple-mongodb -n tyk

TLS

You can turn on the tls option under the gateway section in the values.yaml files, which makes the gateways listen on port 443 and load a dummy certificate. It is recommended that you set your own default certificate by replacing the files in the certs/ folder.

Warning The default certificate should not be used in production environments
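
A minimal values.yaml fragment for this (the tls flag sits under the gateway section of each chart's values file):

gateway:
  # listen on port 443 with the bundled dummy certificate; replace certs/ with your own certificate
  tls: true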

Kubernetes Ingress

NB: tyk-k8s has been deprecated. For reference, old documentation may be found here: Tyk K8s

For further detail on how to configure Tyk as an Ingress Gateway, or how to manage APIs in Tyk using the Kubernetes API, please refer to our Tyk Operator documentation. The Tyk Operator can be installed alongside this chart and works with all installation types.
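
For illustration, managing an API through the Operator looks roughly like the manifest below. The field names follow the Operator's published examples, so check the Tyk Operator documentation for the authoritative schema.

apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: httpbin
spec:
  name: httpbin
  use_keyless: true
  protocol: http
  active: true
  proxy:
    # upstream service this API proxies to
    target_url: http://httpbin.org
    # path the gateway listens on for this API
    listen_path: /httpbin
    strip_listen_path: true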

Mounting Files

To mount files to any of the Tyk stack components, add the following to the mounts array in the section of that component:

- name: aws-mongo-ssl-cert
  filename: rds-combined-ca-bundle.pem
  mountPath: /etc/certs

tyk-helm-chart's People

Contributors

andrei-tyk, asoorm, bouwdie, buger, buraksekili, caroltyk, cherrymu, christtyk, clemensk1, davegarvey, deployinbinary, drpebcak, excieve, gernest, gothka, joshblakeley, komalsukhani, letzya, lonelycode, lyndon160, marksou, matiasinsaurralde, nerdydread, olamilekan000, padmaraj85, raman-nbg, rewsmith, sedkis, uddmorningsun, zalbiraw


tyk-helm-chart's Issues

Helm charts for TIB

A Helm chart for deploying the Tyk Identity Broker is not present in the repo. As with the other components (gw, dash and pmp), it would be great to have the TIB deployment files available as well, with a flag to control whether it is installed.

Failed post install for tyk-pro

I am having an issue when installing tyk-pro.

(screenshot omitted)

Listing all the Kubernetes objects inside the namespace: k get all -n tyk-pro

(screenshot omitted)

Investigating the failing pod that was created by the job:

(screenshot omitted)

Do not use 'latest' docker images

Using 'latest' is considered a bad practice because it can cause your applications to be updated at any time, without a proper rollback path. These charts should call out specific versions that can be overridden.
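
For reference, the charts already expose image settings in values.yaml, so pinning a specific version looks roughly like this (the tag shown is only an example):

gateway:
  image:
    repository: tykio/tyk-gateway
    # pin an explicit version instead of relying on latest
    tag: v3.1.2
    pullPolicy: IfNotPresent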

Improve bootstrap process

Currently we require manually running a script that calls dashboard admin APIs in order to bootstrap the deployment. It also writes a Secret manifest with dashboard user API credentials that needs to be applied as a part of another chart (tyk-k8s).

I believe this is not ideal, especially for automation or for installing it as a Rancher catalogue app. In addition, calling the dashboard API remotely as part of an ingress controller setup process is a bit idiosyncratic. It means one either needs another ingress controller/LB to expose it or has to rely on node ports, which might not be desirable if the k8s cluster is running in a private section of a network and the nodes are therefore not publicly exposed.

Perhaps running a job resource or an init container that does initial bootstrap and puts the result somewhere could improve it. Ideally, there also needs to be a way to pass the values to the ingress controller without manual intervention. E.g. it could automatically create a Secret resource that ingress controller would be able to discover.

bootstrap.sh ValueError: Invalid control character at: line 1 column 33 (char 32)

We have installed the dashboard RPMs from Tyk Yum repo onto RHEL 7 x86_64.
We are receiving the error below. The bootstrap.sh script never sets the variable $USER_AUTH_CODE, so I am wondering how this gets set.

Seems similar to this issue: #10

`/opt/tyk-dashboard/install/bootstrap.sh localhost
Found Python interpreter at: /bin/python

Creating Organisation
ORG DATA: {"Status":"OK","Message":"Org created","Meta":"5d3624e60f48ec660c7d9173"}
ORG ID: 5d3624e60f48ec660c7d9173

Adding new user
USER_AUTH_CODE =
USER AUTHENTICATION CODE:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib64/python2.7/json/__init__.py", line 290, in load
    **kw)
  File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Invalid control character at: line 1 column 33 (char 32)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
ERROR: Unable to parse JSON`

More documentation on the injector?

Hi, I'm new to Tyk and I'm considering using it for my project. Is there any more documentation on the service mesh injector other than the README.md in this repository? The readme only mentions the injector.tyk.io/inject and injector.tyk.io/route annotations. Are there any other annotations? Where can I find them?

support multiple ingresses

At the moment, ingress gateways are tagged ingress.

It would be useful to be able to have multiple ingresses, each loading different services.

e.g. ingress-internal, and ingress-external.

Chart is not idiomatic

This is a somewhat subjective issue title, but there are a couple things that stand out to me about the chart, having worked with many of the official charts, just a couple things off the top of my mind:

  1. "Hardcoded" namespace. The chart wants to create its own namespace and install things in there. I can't see a good reason this should be the case, while at the same time it has many downsides: the chart can't be used in environments where helm doesn't have cluster-wide access so namespaces can't be created, it can't be installed in an existing namespace, etc. Unless there is a strong reason a chart should be creating its own namespace I would avoid this.
  2. Hardcoded to run on SSL. While encryption is not a bad default many environments terminate SSL before traffic enters k8s. Keeping SSL/port 443 as a default is ok but I would make running it on port 80 an option.
  3. The default dashboard ingress config has two issues. First, it hardcodes the ingress class "nginx"; the default installation of the nginx ingress controller does use that class, but it is a configurable parameter there and should be configurable here too. Second, the rewrite annotation uses the old ingress.kubernetes.io/rewrite-target key; that key has carried an nginx. prefix for quite a while now, so users of more recent versions of the ingress controller will have it silently ignored (see the annotation keys after this list).
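
For reference, the two annotation keys in question are:

  # old key, silently ignored by recent versions of the nginx ingress controller
  ingress.kubernetes.io/rewrite-target: /
  # current key
  nginx.ingress.kubernetes.io/rewrite-target: /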

These are hard issues that IMO need fixing. The following points are more debatable but still worth mentioning:

  1. Why is the gateway a DaemonSet? If it used a hostPort setup I would understand but this way I see no benefit to it not just being a plain old Deployment. I would make this configurable (see official nginx ingress controller chart for example).
  2. Allowing a LoadBalancer type service and/or Ingresses (for different listen hosts) for the gateway would be a welcome addition.

I will add more points as they come up.

Allow to configure the redis.database value for tyk-hybrid

Currently, when deploying Tyk with the Helm chart, storage.database is hard-coded to 0.

If this database index is already in use, it seems to cause sync issues with the gateway. I could not work out exactly what the issue was, but manually changing the database index to 1 fixed it.

Being able to configure it via the chart's values.yaml would allow for more flexibility and a more stable deployment setup.
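
For reference, the tyk-pro values file already exposes this as redis.storage.database, so the request is to support the same kind of setting in tyk-hybrid, e.g.:

redis:
  storage:
    # use a non-default Redis database index
    database: 1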

Add readiness and liveness probes

Kubernetes should be able to tell whether pods are ready to serve traffic and are still alive, beyond the basic "container has not crashed" check. The gateway, dashboard, pump and possibly the ingress controller itself need readiness and (where available) separate liveness probes configured for this.

Partly depends on TykTechnologies/tyk#2180 (and probably more, e.g. not sure if ingress controller has this at all currently).
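
As a rough sketch of what such probes could look like for the gateway container, assuming the gateway's /hello health endpoint and the default listen port 8080 from values.yaml (timings are arbitrary examples):

readinessProbe:
  httpGet:
    path: /hello
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /hello
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20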

Tyk Developer Portal doesn't work with default Helm Chart deployment Tyk-Pro

After running the Tyk Helm Chart, I don't have a working Portal.

  1. I add my license to the values.yaml
  2. everything else is default

I run the following commands:

	$ kubectl create namespace tyk
	$ kubectl apply -f deploy/dependencies/mongo.yaml -n tyk
	$ kubectl apply -f deploy/dependencies/redis.yaml -n tyk
	$ helm install tyk-pro ./tyk-pro -n tyk --wait

Everything works correctly, but there is no Portal. When I try to access the portal URL (screenshot omitted), it loads indefinitely.

WORKAROUND

In order to get it to work, I have to:

  1. set the portal domain in the UI
  2. restart my dashboard pods
  3. create a default page template

Helm output of tyk pro needs to be updated

This is the current output:

$ helm install tyk-pro -f ./values.yaml ./tyk-pro

1. Bootstrap the dashboard so we can get a username and password to login, this also generates access tokens for the controller to use
export NODE_PORT=$(kubectl get --namespace tyk-ingress -o jsonpath="{.spec.ports[0].nodePort}" services dashboard-svc-tyk-pro)
export NODE_IP=$(kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')

If you're using minikube, run this instead:
export NODE_IP=$(minikube ip)

export DASH_URL="$NODE_IP:$NODE_PORT"
export DASH_HTTPS=""

1a. You may need to open up that port in your firewall so that we can access the dashboard. Example in GCloud

gcloud compute firewall-rules create dashboard --allow tcp:$NODE_PORT

1b. Bootstrap the dashboard

./tyk-pro/scripts/bootstrap_k8s.sh $DASH_URL 12345 tyk-ingress $DASH_HTTPS

At this point, Tyk Pro is fully installed and should be accessible, proceed in case you want to install Tyk ingress controller.

Note that all the bootstrap steps have been replaced by the bootstrap UI. The user simply needs to navigate to the Dashboard now.

Note that step 1A might still be necessary

Unable to parse json when bootstrapping

I have an issue when trying to bootstrap the dashboard. Using the provided shell script, I get a JSON parsing error.

$ ./tyk-pro/scripts/bootstrap_k8s.sh $NODE_IP:$NODE_PORT 12345 tyk

Creating Organisation
ORG DATA: {"Status":"OK","Message":"Org created","Meta":"5cd2ed7bff21b20001ad8d46"}
ORG ID: 5cd2ed7bff21b20001ad8d46

Adding new user
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 290, in load
    **kw)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 382, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Invalid control character at: line 1 column 35 (char 34)
ERROR: Unable to parse JSON

Could not get session" error="redis: nil

Hi All,

Update: The login is working with Firefox but not with Chrome "Version 91.0.4472.114 (Official Build) (x86_64)".

I'm having a problem with the Tyk Pro dashboard getting session details for the logged-in user. The logs of the dashboard are as follows:

time="Jul 03 06:59:07" level=info msg="Tyk Analytics Dashboard v3.2.0"
time="Jul 03 06:59:07" level=info msg="Copyright Tyk Technologies Ltd 2020"
time="Jul 03 06:59:07" level=info msg="https://www.tyk.io"
time="Jul 03 06:59:07" level=info msg="Using /etc/tyk-dashboard/tyk_analytics.conf for configuration"
time="Jul 03 06:59:07" level=info msg="Listening on port: 3000"
time="Jul 03 06:59:07" level=info msg="Connecting to MongoDB: [tyk-mongo-mongodb.tykpoc.svc.cluster.local:27017]"
time="Jul 03 06:59:07" level=info msg="Mongo connection established"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="Licensing: Setting new license"
time="Jul 03 06:59:07" level=info msg="Licensing: Registering nodes..."
time="Jul 03 06:59:07" level=info msg="Adding available nodes..."
time="Jul 03 06:59:07" level=info msg="Licensing: Checking capabilities"
time="Jul 03 06:59:07" level=info msg="Audit log is disabled in config"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="--> Standard listener (http) for dashboard and API"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="Starting zeroconf heartbeat"
time="Jul 03 06:59:07" level=info msg="Starting notification handler for gateway cluster"
time="Jul 03 06:59:07" level=info msg="Loading routes..."
time="Jul 03 06:59:07" level=info msg="Initializing Internal TIB"
time="Jul 03 06:59:07" level=info msg="Initializing Identity Cache" prefix="TIB INITIALIZER"
time="Jul 03 06:59:07" level=info msg="Set DB" prefix="TIB REDIS STORE"
time="Jul 03 06:59:07" level=info msg="Initializing Identity Cache" prefix="TIB INITIALIZER"
time="Jul 03 06:59:07" level=info msg="Set DB" prefix="TIB REDIS STORE"
time="Jul 03 06:59:07" level=info msg="Using internal Identity Broker. Routes are loaded and available."
time="Jul 03 06:59:07" level=info msg="Generating portal on the custom domain: tyk-portal.apps.domain.com"
time="Jul 03 06:59:11" level=info msg="Got configuration for nodeID: 009661fa-3896-40be-4ea3-d42e8a751854|gateway-tykpocwodis-hzk7w" prefix=pub-sub
time="Jul 03 06:59:11" level=info msg="Got configuration for nodeID: e1a569e5-aeb3-4fb8-66ff-ddef2efc7849|gateway-tykpocwodis-rc5lb" prefix=pub-sub
time="Jul 03 06:59:11" level=info msg="Got configuration for nodeID: 2d2a076a-3a4b-4ec4-79e1-2b692471f73d|gateway-tykpocwodis-wcx7d" prefix=pub-sub
time="Jul 03 06:59:12" level=info msg="Got configuration for nodeID: 25aac502-2733-4133-74ea-ff19338baee1|gateway-tykpocwodis-9bbm4" prefix=pub-sub
time="Jul 03 07:00:21" level=error msg="Could not get session in GetCurrentUser" error="redis: nil"
time="Jul 03 07:01:05" level=warning msg="Successful login ([email protected]) from: 10.0.132.1:34600"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="http: named cookie not present"
time="Jul 03 07:01:07" level=error msg="http: named cookie not present"
time="Jul 03 07:01:07" level=error msg="http: named cookie not present"
time="Jul 03 07:01:07" level=error msg="http: named cookie not present"

I think the Redis connections from the dashboard and gateways are configured and working. All checks pass (see below for an env parameter I added manually). In the end, I can log in to the dashboard successfully, but after 1-2 seconds I get logged out. The relevant logs are shown above.

Any ideas where to look at or how to fix it?

** I manually added the following env parameter to the dashboard template to avoid an error message:

- name: TYK_IB_SESSION_SECRET
  valueFrom:
    secretKeyRef:
      name: {{ if .Values.secrets.useSecretName }} {{ .Values.secrets.useSecretName }} {{ else }} secrets-{{ include "tyk-pro.fullname" . }} {{ end}}
      key: APISecret

I'm not sure if this is needed, but it has no effect on the errors above (the Redis errors are the same with and without it).

Thanks for your help and cheers.

Add support for gateway whitelisting

We would like to implement a public-facing gateway with an IP address whitelist.
This would work if Helm Chart supported loadBalancerSourceRanges for the gateway service.
I have a working branch I can submit to support this.
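
A sketch of what this could look like in values.yaml, mapping straight onto the Kubernetes Service field of the same name (the CIDR is just an example range):

gateway:
  service:
    type: LoadBalancer
    loadBalancerSourceRanges:
      # example allowed range; replace with your own CIDRs
      - 203.0.113.0/24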

Example ingress does not work

Linked to #30, the ingress in the documentation will actually crash the controller:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: tyk
spec:
  rules:
    - host: cafe.example.com

It needs a path element:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: tyk
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80

Invalid ingress will crash the ingress listener so controller needs restarting

If you supply an invalid ingress like the one in the example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: tyk
spec:
  rules:
    - host: cafe.example.com

This is perfectly valid, but the controller expects a path, which causes a panic. Unfortunately the controller does not recover from this and needs restarting.

bootstrap script assumes http

We use the ingress resource for the dashboard to expose it publicly, with SSL offloading in a WAF. When running the bootstrap script, it tries to connect to the internal pod, but we would like it to connect to the public dashboard API instead. However, the script assumes http and ours is exposed on https. It works if we change all calls in the script from http to https, so it should be possible to make this an option so we don't have to modify the script.

When using values.redis.addr, stray newline prevents last entry from working

The tyk-*.redis_url templates add a stray \n newline character when redis.addrs is set.

e.g. values containing:

redis:
  addrs:
  - redis-server-1:6379
  - redis-server-2:6379

results in

          - name: TYK_GW_STORAGE_ADDRS
            value: "redis-server-1:6379,redis-server-2:6379\n"

The stray newline breaks that entry. The newline comes from the {{/* Adds support for ... }} comment entries:

{{- define "tyk-pro.redis_url" -}}
{{- if .Values.redis.addrs -}}
{{ join "," .Values.redis.addrs }}
{{/* Adds support for older charts with the host and port options */}}
{{- else if and .Values.redis.host .Values.redis.port -}}
{{ .Values.redis.host }}:{{ .Values.redis.port }}
{{- else -}}
redis.{{ .Release.Namespace }}.svc.cluster.local:6379
{{- end -}}
{{- end -}}

{{- define "tyk-headless.redis_url" -}}
{{- if .Values.redis.addrs -}}
{{ join "," .Values.redis.addrs }}
{{/* Adds support for older charts with the host and port options */}}
{{- else if and .Values.redis.host .Values.redis.port -}}
{{ .Values.redis.host }}:{{ .Values.redis.port }}
{{- else -}}
redis.{{ .Release.Namespace }}.svc.cluster.local:6379
{{- end -}}
{{- end -}}

{{- define "tyk-hybrid.redis_url" -}}
{{- if .Values.redis.addrs -}}
{{ join "," .Values.redis.addrs }}
{{/* Adds support for older charts with the host and port options */}}
{{- else if and .Values.redis.host .Values.redis.port -}}
{{ .Values.redis.host }}:{{ .Values.redis.port }}
{{- else -}}
redis.{{ .Release.Namespace }}.svc.cluster.local:6379
{{- end -}}
{{- end -}}

It can be fixed by replacing {{/* with {{- /*.
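
With that change applied, the first helper would read as follows (the other two are analogous):

{{- define "tyk-pro.redis_url" -}}
{{- if .Values.redis.addrs -}}
{{ join "," .Values.redis.addrs }}
{{- /* Adds support for older charts with the host and port options */}}
{{- else if and .Values.redis.host .Values.redis.port -}}
{{ .Values.redis.host }}:{{ .Values.redis.port }}
{{- else -}}
redis.{{ .Release.Namespace }}.svc.cluster.local:6379
{{- end -}}
{{- end -}}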

(tcp) Enable multi port/protocol support

Tyk Gateway can now proxy TCP(s) traffic, and the gateways can listen on multiple ports.
As such, the ingress controller should be updated to reflect the same.

Reconnecting storage: Redis is either down or ws not configured

We are installing the Tyk Gateway (OSS) through the Helm chart but are seeing errors in the gateway pods. Below are the gateway pod logs:

time="Apr 07 13:41:46" level=info msg="Tyk API Gateway v3.1.2" prefix=main
time="Apr 07 13:41:46" level=warning msg="Insecure configuration allowed" config.allow_insecure_configs=true prefix=checkup
time="Apr 07 13:41:46" level=info msg="Starting Poller" prefix=host-check-mgr
time="Apr 07 13:41:46" level=info msg="PIDFile location set to: /mnt/tyk-gateway/tyk.pid" prefix=main
time="Apr 07 13:41:46" level=info msg="Initialising Tyk REST API Endpoints" prefix=main
time="Apr 07 13:41:46" level=info msg="--> [REDIS] Creating single-node client"
time="Apr 07 13:41:46" level=info msg="--> Standard listener (http)" port=":9696" prefix=main
time="Apr 07 13:41:46" level=warning msg="Starting HTTP server on:0.0.0.0:9696" prefix=main
time="Apr 07 13:41:46" level=info msg="--> Standard listener (http)" port=":8080" prefix=main
time="Apr 07 13:41:46" level=warning msg="Starting HTTP server on:0.0.0.0:8080" prefix=main
time="Apr 07 13:41:46" level=info msg="Initialising distributed rate limiter" prefix=main
time="Apr 07 13:41:46" level=info msg="Tyk Gateway started (v3.1.2)" prefix=main
time="Apr 07 13:41:46" level=info msg="--> Listening on address: (open interface)" prefix=main
time="Apr 07 13:41:46" level=info msg="--> Listening on port: 8080" prefix=main
time="Apr 07 13:41:46" level=info msg="--> PID: 1" prefix=main
time="Apr 07 13:41:46" level=info msg="Loading policies" prefix=main
time="Apr 07 13:41:46" level=info msg="Policies found (1 total):" prefix=main
time="Apr 07 13:41:46" level=info msg="Starting gateway rate limiter notifications..."
time="Apr 07 13:41:46" level=info msg="Detected 0 APIs" prefix=main
time="Apr 07 13:41:46" level=warning msg="No API Definitions found, not reloading" prefix=main
time="Apr 07 13:41:56" level=error msg="Redis health check failed" error="storage: Redis is either down or ws not configured" liveness-check=true prefix=main
time="Apr 07 13:41:56" level=warning msg="Reconnecting storage: Redis is either down or ws not configured" prefix=pub-sub
time="Apr 07 13:42:06" level=error msg="Redis health check failed" error="storage: Redis is either down or ws not configured" liveness-check=true prefix=main
time="Apr 07 13:42:06" level=warning msg="Reconnecting storage: Redis is either down or ws not configured" prefix=pub-sub
time="Apr 07 13:42:16" level=error msg="Redis health check failed" error="storage: Redis is either down or ws not configured" liveness-check=true prefix=main
time="Apr 07 13:42:16" level=warning msg="Reconnecting storage: Redis is either down or ws not configured" prefix=pub-sub
time="Apr 07 13:42:26" level=error msg="Redis health check failed" error="storage: Redis is either down or ws not configured" liveness-check=true prefix=main
time="Apr 07 13:42:26" level=warning msg="Reconnecting storage: Redis is either down or ws not configured" prefix=pub-sub
time="Apr 07 13:42:26" level=info msg="--> [REDIS] Creating single-node client"
time="Apr 07 13:42:36" level=error msg="Redis health check failed" error="storage: Redis is either down or ws not configured" liveness-check=true prefix=main
time="Apr 07 13:42:36" level=warning msg="Reconnecting storage: Redis is either down or ws not configured" prefix=pub-sub

Below is the values.yaml file -

nameOverride: ""
fullnameOverride: "tyk-headless"

secrets:
  APISecret: "CHANGEME"
  OrgID: "1"

redis:
    shardCount: 128
    useSSL: true
    addrs:
      - "tyk-redis-master.tyk.svc.cluster.local:6379"
    pass: "somepassword"

mongo:
  enabled: false

gateway:
  kind: DaemonSet
  replicaCount: 1
  hostName: "gateway.tykbeta.com"
  tls: false
  containerPort: 8080
  tags: ""
  image:
    repository: tykio/tyk-gateway
    tag: v3.1.2
    pullPolicy: IfNotPresent
  service:
    type: NodePort
    port: 443
    externalTrafficPolicy: Local
    annotations: {}
  control:
    enabled: false
    containerPort: 9696
    port: 9696
    type: ClusterIP
    annotations: {}
  ingress:
    enabled: false
    annotations: {}
    path: /
    hosts:
      - tyk-gw.local
    tls: []
  resources: {}
  nodeSelector: {}
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  affinity: {}
  extraEnvs: []


pump:
  enabled: false
  replicaCount: 1
  image:
    repository: tykio/tyk-pump-docker-pub
    tag: v1.2.0
    pullPolicy: IfNotPresent
  annotations: {}
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []

rbac: true

We tried with useSSL set to both true and false but get the same error either way. Are we missing anything here?

[TT-5609] The post-install job needs to be deleted with helm uninstall of tyk-pro

If my install failed and I do helm uninstall tyk-pro, I still can't reinstall since I get:

Error: failed post-install: warning: Hook post-install tyk-pro/templates/bootstrap-post-install.yaml failed: jobs.batch "bootstrap-post-install-tyk-pro" already exists

The post-install job needs to be deleted as well.

Currently it's causing a lot of pain.
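
As a workaround until the chart handles this, the stale job can be removed manually before re-installing (the namespace below is an assumption; use whichever namespace you installed into):

kubectl delete job bootstrap-post-install-tyk-pro -n tyk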

Tyk CE Ingress Definition Doesn't do anything

It looks like there's no "ingress-gw.yaml" template under the tyk-headless directory.

As a result, if I enable "ingress" in the tyk-headless deployment values.yaml, no ingress objects are actually created.

Add configurable (anti-)affinity rules for gateways

Affinity rules present a more expressive and flexible way to select nodes for pod scheduling compared to nodeSelector. This is useful in more heterogeneous environments. The charts should allow configuring both hard and soft affinity.

Anti-affinity rules allow flexible scaling while avoiding scheduling to e.g. nodes that already have a pod that meets a certain rule. This is useful to avoid contention, emulate DaemonSet behaviour (while keeping dynamic scaling), etc.

See k8s docs for more info:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
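
As a rough sketch, a soft anti-affinity rule for the gateway could look like the snippet below; the pod label used in the selector is an assumption, so check the labels the chart actually applies to its pods.

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              # assumed label; verify against the chart's pod template
              app: gateway-tyk-pro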

[TD-629] Allow multiple container ports in hybrid

Currently, there is no way of adding more container ports for the gateway's port whitelisting feature, because the following snippet in tyk-hybrid/templates/deployment-gw-repset.yaml only accepts a single value rather than an array:

ports:
  - containerPort: {{ .Values.gateway.containerPort }}

It would be solved by having:

gateway:
  containerPort:
    - 8443
    - 2223
    - 2224

  extraEnvs:
    - name: "TYK_GW_PORTSWHITELIST_PORTSWHITELIST_TCP_PORTS"
      value: "2223,2224"

MongoDB auth fails when following instructions

I'm following the instructions in order to try out Tyk and got held up for a while with MongoDB authentication failing. After some time I found out it was related to setting replicaSet.enabled=true (see helm/charts#15244).
After removing replicaSet.enabled=true it works and I was able to continue with the trial.
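
For reference, the install that worked in this case simply drops the replicaSet flag from the command in the readme:

helm install tyk-mongo bitnami/mongodb -n tyk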

Tyk-Hybrid installation of tyk-k8s isn't/wasn't obvious enough

I spent a few days trying to get tyk-hybrid to accept ingress definitions before I came to the realization that tyk-k8s wasn't installed and needed to be installed for the ingress to work. While https://github.com/TykTechnologies/tyk-helm-chart/blob/master/tyk-hybrid/templates/NOTES.txt does state that it needs to be installed, that seems to be the only place the requirement is mentioned. As this is fairly important to the operation of this system, I'd like to propose adding it to the readme so it smacks you in the face that it should be installed separately from the tyk-hybrid or tyk-pro installation.

I think I overlooked the helm output primarily because it looked so similar to what I had just run (helm install tyk-hybrid -f ./values_hybrid.yaml ./tyk-hybrid -n tyk-hybrid).

I can make a PR to help make this more clear (and maybe having this issue searchable can save someone else's time in the future as well 😄).

Create installation and usage guides

There's a wide variety of ways this group of charts can be installed:

  • Tyk Pro
  • Tyk Hybrid
  • Everything mentioned with gateways configured by Tyk ingress controller
  • Tyk CE (with headless ingress controller)
  • Service Mesh

These options serve different needs and should be documented separately, as the existing readme causes a lot of confusion. It should also be clear which approaches are stable, which are experimental, etc. There should be full examples of ingress controller usage patterns.

All chart options must be documented.

Remove dependency on namespace "tyk"

If you try to deploy tyk-pro outside of the tyk namespace, it gives you many errors related to namespace "tyk".

You should be able to install it anywhere. If I have multiple namespaces in my team, this is problematic.

Can we please remove the hard-coded "tyk" namespace?

Make secrets better

  1. Secrets are stored in plain text.
  2. A fixed secret is used for bootstrap, which means manual intervention is needed to secure it after release.

tyk-pro: recursive template include call

Running

 helm template pro ./tyk-pro/

Gives the following error

Error: template: tyk-pro/templates/deployment-gw-repset.yaml:44:28: executing "tyk-pro/templates/deployment-gw-repset.yaml" at <include (print $.Template.BasePath "/deployment-gw-repset.yaml") .>: error calling include: template: tyk-pro/templates/deployment-gw-repset.yaml:44:28: executing "tyk-pro/templates/deployment-gw-repset.yaml" at <include (print $.Template.BasePath "/deployment-gw-repset.yaml") .>: error calling include: template: tyk-pro/templates/deployment-gw-repset.yaml:44:28: executing "tyk-pro/templates/deployment-gw-repset.yaml" at <include (print $.Template.BasePath "/deployment-gw-repset.yaml") .>: error calling inc

....

The error output is long and repetitive until helm bails out.

Reading the error, it seems tyk-pro/templates/deployment-gw-repset.yaml keeps including itself recursively. A quick hunt shows the change was introduced in #84.

The offending line is

      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/deployment-gw-repset.yaml") . | sha256sum }}
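
The intent of that annotation is presumably to restart pods when the configuration changes, so the usual Helm pattern would be to checksum the config template the deployment consumes rather than the deployment template itself. The template name below is an assumption, not the chart's actual file name:

      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}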

Run multiple gateways with different tags

Hello there,

I would like to run n-Gateways (Tyk-Pro & On-Premise) with different tags in a k8s cluster.
I could not find out how this is done.
Do you have any tips or ideas?
If I set the replica count to 5, I'm deploying 5 gateways, but they all have the same tags.

Thanks for help guys! :)
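
One possible approach, sketched here with value names taken from this chart's values.yaml (treat the exact flags as assumptions to adapt): enable sharding and install an additional release that only deploys gateways carrying a different tag.

# primary installation with sharding enabled and an untagged (or "ingress"-tagged) gateway
helm install tyk-pro ./tyk-pro -n tyk --set "enableSharding=true"

# second release that only adds gateways carrying a different tag
helm install tyk-gw-internal ./tyk-pro -n tyk \
  --set "enableSharding=true" \
  --set "gateway.tags=internal" \
  --set "dash.enabled=false" \
  --set "pump.enabled=false" \
  --set "bootstrap=false"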

Run containers as non-root by default

Change the templates to run all the Tyk containers as a non-root user by default, but keep an option to change the user for the gateway, since there might be a legitimate use case with a host network (and a privileged port). A minimal sketch of the relevant security context is shown after the list below.

The images should mostly work fine with this, known issues are:

  • tyk-gateway container will need a pid file path configured for a non-root user accessible path
  • tyk-k8s container should specify a non-privileged port for the injector webhook with relevant changes to the service
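
A minimal sketch of the kind of pod-level securityContext this implies; the UID is an arbitrary example, and the gateway would additionally need its pid file path moved to a location writable by that user:

securityContext:
  runAsNonRoot: true
  # example UID; any non-root UID the image supports
  runAsUser: 1000
  fsGroup: 1000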

MDCB issue in installation

Hello, I'm trying to helm upgrade with an MDCB licence but I'm getting a "connection reset by peer" issue in the logs.

helm show values tyk-helm/tyk-pro > values.yaml

After getting the licence, I edited values.yaml to set mdcb.enabled: true and mdcb.license: "valueOfLicence".

helm upgrade -f values.yaml tyk-pro tyk-helm/tyk-pro -n tyk-dev

Helm chart version: tyk-pro-0.9.3
Redis (redis_version:5.0.5) and MongoDB are provided by Apsara RDS on Alibaba Cloud.

Here's the log (screenshot omitted; sorry for providing logs as images).

In values.yaml in mdcb part

  image:
    # Requires credential
    repository: tykio/tyk-mdcb-docker

Is this still relevant? Are credentials still required for the docker images?

Overriding mongoURL Key Not Working

Hello,

I am attempting to deploy Tyk Pro using the Bitnami mongo/redis charts. I've replaced the mongoURL key in the values file with my own generated URL as the README instructs, but my dashboard pod's liveness/readiness checks continue to fail, and checking the logs, the reason seems to be that it's still attempting to connect to the default mongo URL:

time="Jun 09 15:16:54" level=warning msg="toth/tothic: no TYK_IB_SESSION_SECRET environment variable is set. The default cookie store is not available and any calls will fail. Ignore this warning if you are using a different store."
8
7
time="Jun 09 15:16:54" level=info msg="Tyk Analytics Dashboard v3.1.2"
6
time="Jun 09 15:16:54" level=info msg="Copyright Tyk Technologies Ltd 2020"
5
time="Jun 09 15:16:54" level=info msg="https://www.tyk.io"
4
time="Jun 09 15:16:54" level=info msg="Using /etc/tyk-dashboard/tyk_analytics.conf for configuration"
3
time="Jun 09 15:16:54" level=info msg="Listening on port: 3000"
2
time="Jun 09 15:16:54" level=warning msg="Default admin_secret `12345` should be changed for production use."
1
time="Jun 09 15:16:54" level=info msg="Connecting to MongoDB: [mongo.tyk.svc.cluster.local:27017]"

The only way I've gotten that last line's URL to update is by directly updating the mongoURL key in the secrets.yaml file, which I assume the values file key should be overriding.

For reference, here is my values file:

# Default values for tyk-pro chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

nameOverride: ""
fullnameOverride: ""

# Used to shard the gateways. If you enable it make sure you have at least one gateway that is not sharded and also tag the APIs accordingly
enableSharding: false

# Set to true to use Tyk as the Ingress gateway to an Istio Service Mesh
# We apply some exceptions to the Istio IPTables so inbound calls to the
# Gateway and Dashboard are exposed to the outside world - see the deployment templates for implementation
enableIstioIngress: false

# Master switch for enabling/disabling bootstraing job batch, role binding, role, and account service.
bootstrap: true

secrets:
  APISecret: CHANGEME
  AdminSecret: "12345"
  # If you don't want to store plaintext secrets in the Helm value file and would rather provide the k8s Secret externally please populate the value below
  useSecretName: ""

redis:
  shardCount: 128
  # addrs:
  #   - redis.tyk.svc.cluster.local:6379
  useSSL: false
   #If you're using Bitnami Redis chart please input the correct host to your installation in the field below
  addrs:
    - tyk-shared-stg-redis-master-0.tyk-shared-stg-redis-headless.tyk.svc.cluster.local:6379
  #If you're using Bitnami Redis chart please input your password in the field below
  pass: //pass
#   #If you are using Redis cluster, enable it here.
#    enableCluster: false
#   #By default the database index is 0. Setting the database index is not supported with redis cluster. As such, if you have enableCluster: true, then this value should be omitted or explicitly set to 0.
  storage:
    database: 0

mongo:
  # mongoURL: mongodb://mongo.tyk.svc.cluster.local:27017/tyk_analytics
  # If you're using Bitnami MongoDB chart please input your password below
  mongoURL: mongodb://root:<pass>@tyk-shared-stg-mongodb.tyk.svc.cluster.local:27017/tyk-dashboard?authSource=admin
  useSSL: false

mdcb:
  enabled: false
  useSSL: false
  replicaCount: 1
  containerPort: 9090
  healthcheckport: 8181
  license: ""
  forwardAnalyticsToPump: true
  image:
    repository: tykio/tyk-mdcb-docker    # requires credential
    tag: v1.7.7
    pullPolicy: Always
  service:
    type: LoadBalancer
    port: 9090
  resources: {}
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #  cpu: 100m
    #  memory: 128Mi
    # requests:
    #  cpu: 100m
    #  memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []

tib:
  enabled: false
  useSSL: true
  # The REST API secret to configure the Tyk Identity Broker remotely
  secret: ""
  replicaCount: 1
  containerPort: 3010
  image:
    repository: tykio/tyk-identity-broker
    tag: v1.1.0
    pullPolicy: Always
  service:
    type: ClusterIP
    port: 3010
  ingress:
    enabled: false
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: true
    path: /
    hosts:
      - tib.local
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local
  resources: {}
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #  cpu: 100m
    #  memory: 128Mi
    # requests:
    #  cpu: 100m
    #  memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []
  configMap:
    # Create a configMap to store profiles json
    profiles: tyk-tib-profiles-conf

dash:
  enabled: true
  # Dashboard will only bootstrap if the master bootstrap option is set to true
  bootstrap: true
  replicaCount: 1
  hostName: tyk-dashboard.local
  license: //license
  containerPort: 3000
  image:
    repository: tykio/tyk-dashboard
    tag: v3.1.2
    pullPolicy: Always
  service:
    type: NodePort
    port: 3000
  ingress:
    enabled: false
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    path: /
    hosts:
      - tyk-dashboard.local
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local
  resources: {}
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #  cpu: 100m
    #  memory: 128Mi
    # requests:
    #  cpu: 100m
    #  memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []
  # Set these values for dashboard admin user
  adminUser:
    firstName: admin
    lastName: user
    email: [email protected]
    # Set a password or a random one will be assigned
    password: ""
  org:
    name: Default Org
    # Set this value to the domain of your developer portal
    cname: tyk-portal.local

portal:
  # Portal will only bootstrap if both the Master and Dashboard bootstrap options are set to true
  # Only set this to false if you're not planning on using developer portal
  bootstrap: true
  path: /
  ingress:
    enabled: false
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    hosts:
      - tyk-portal.local
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local

gateway:
  enabled: true
  kind: DaemonSet
  replicaCount: 2
  hostName: tyk-gw.local
  # if enableSharding set to true then you must define a tag to load APIs to these gateways i.e. "ingress"
  tags: ""
  tls: false
  containerPort: 8080
  image:
    repository: tykio/tyk-gateway
    tag: v3.1.2
    pullPolicy: Always
  service:
    type: NodePort
    port: 8080
    externalTrafficPolicy: Local
    annotations: {}
  control:
    enabled: false
    containerPort: 9696
    port: 9696
    type: ClusterIP
    annotations: {}
  ingress:
    enabled: false
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    path: /
    hosts:
      - tyk-gw.local
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local
  resources: {}
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #  cpu: 100m
    #  memory: 128Mi
    # requests:
    #  cpu: 100m
    #  memory: 128Mi
  nodeSelector: {}
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  affinity: {}
  extraEnvs: []

pump:
  enabled: true
  replicaCount: 1
  image:
    repository: tykio/tyk-pump-docker-pub
    tag: v1.2.0
    pullPolicy: Always
  annotations: {}
  resources: {}
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #  cpu: 100m
    #  memory: 128Mi
    # requests:
    #  cpu: 100m
    #  memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []

rbac: true

Any help would be appreciated.

Thanks,
Mike

ERROR: After approving csr tyk-k8s-svc.tyk, the signed certificate did not appear on the resource. Giving up after 10 attempts.

I'm just getting started with this helm chart. After helm install, it shows several steps to do. One of them is:

3. Prepare the SSL and CA bundle for webhook

./tyk-k8s/webhook/create-signed-cert.sh -n tyk
cat ./tyk-k8s/webhook/mutatingwebhook.yaml | ./tyk-k8s/webhook/webhook-patch-ca-bundle.sh > ./tyk-k8s/webhook/mutatingwebhook-ca-bundle.yaml

However, when running the first command. I run into an error:

[root@VM_1_16_centos tyk-helm-chart]# ./tyk-k8s/webhook/create-signed-cert.sh -n tyk
creating certs in tmpdir /tmp/tmp.OMHaltlIKA
Generating RSA private key, 2048 bit long modulus
....+++
......................................+++
e is 65537 (0x10001)
certificatesigningrequest.certificates.k8s.io "tyk-k8s-svc.tyk" created
NAME              AGE       REQUESTOR   CONDITION
tyk-k8s-svc.tyk   1s        admin       Pending
certificatesigningrequest.certificates.k8s.io "tyk-k8s-svc.tyk" approved
ERROR: After approving csr tyk-k8s-svc.tyk, the signed certificate did not appear on the resource. Giving up after 10 attempts.

My question is:

  1. What is the webhook and how is it related to the sidecar injector?
  2. What can I do to solve the error?

Thanks in advance.
