tyktechnologies / tyk-helm-chart

A Helm chart repository to install Tyk Pro (with Dashboard), Tyk Hybrid or Tyk Headless chart.

Home Page: https://tyk.io
If you supply an ingress without a path, like the one in this example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: tyk
spec:
  rules:
  - host: cafe.example.com
This is perfectly valid, but the controller expects a path, which causes a panic. Unfortunately the controller does not recover from this and needs restarting.
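For reference, a minimal variant that includes the path the controller expects would add an http.paths section under the rule (a later issue in this repo shows the same fix; the backend service name and port here are placeholders):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: tyk
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cafe-svc   # placeholder backend service
          servicePort: 80
```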
I'm following the instructions to try out Tyk, and I got held up for a while with auth to MongoDB failing. After some time I found out it was related to setting replicaSet.enabled=true
See: helm/charts#15244
After removing replicaSet.enabled=true it works, and I was able to continue with the trial.
It looks like there's no ingress-gw.yaml template under the tyk-headless directory. As a result, if I enable ingress in the tyk-headless deployment values.yaml, no ingress objects are actually created.
If we're talking about real-life installations, external DBs are usually used. But in the case of a PoC it does make sense to have such an option. The Kong Helm chart does a pretty good job here, since everything is configurable and enabled on demand: https://github.com/helm/charts/tree/master/stable/kong
We are installing the Tyk Gateway (OSS) through the Helm chart but are seeing some errors in the gateway pods. Below are the gateway pod logs -
time="Apr 07 13:41:46" level=info msg="Tyk API Gateway v3.1.2" prefix=main
time="Apr 07 13:41:46" level=warning msg="Insecure configuration allowed" config.allow_insecure_configs=true prefix=checkup
time="Apr 07 13:41:46" level=info msg="Starting Poller" prefix=host-check-mgr
time="Apr 07 13:41:46" level=info msg="PIDFile location set to: /mnt/tyk-gateway/tyk.pid" prefix=main
time="Apr 07 13:41:46" level=info msg="Initialising Tyk REST API Endpoints" prefix=main
time="Apr 07 13:41:46" level=info msg="--> [REDIS] Creating single-node client"
time="Apr 07 13:41:46" level=info msg="--> Standard listener (http)" port=":9696" prefix=main
time="Apr 07 13:41:46" level=warning msg="Starting HTTP server on:0.0.0.0:9696" prefix=main
time="Apr 07 13:41:46" level=info msg="--> Standard listener (http)" port=":8080" prefix=main
time="Apr 07 13:41:46" level=warning msg="Starting HTTP server on:0.0.0.0:8080" prefix=main
time="Apr 07 13:41:46" level=info msg="Initialising distributed rate limiter" prefix=main
time="Apr 07 13:41:46" level=info msg="Tyk Gateway started (v3.1.2)" prefix=main
time="Apr 07 13:41:46" level=info msg="--> Listening on address: (open interface)" prefix=main
time="Apr 07 13:41:46" level=info msg="--> Listening on port: 8080" prefix=main
time="Apr 07 13:41:46" level=info msg="--> PID: 1" prefix=main
time="Apr 07 13:41:46" level=info msg="Loading policies" prefix=main
time="Apr 07 13:41:46" level=info msg="Policies found (1 total):" prefix=main
time="Apr 07 13:41:46" level=info msg="Starting gateway rate limiter notifications..."
time="Apr 07 13:41:46" level=info msg="Detected 0 APIs" prefix=main
time="Apr 07 13:41:46" level=warning msg="No API Definitions found, not reloading" prefix=main
time="Apr 07 13:41:56" level=error msg="Redis health check failed" error="storage: Redis is either down or ws not configured" liveness-check=true prefix=main
time="Apr 07 13:41:56" level=warning msg="Reconnecting storage: Redis is either down or ws not configured" prefix=pub-sub
time="Apr 07 13:42:06" level=error msg="Redis health check failed" error="storage: Redis is either down or ws not configured" liveness-check=true prefix=main
time="Apr 07 13:42:06" level=warning msg="Reconnecting storage: Redis is either down or ws not configured" prefix=pub-sub
time="Apr 07 13:42:16" level=error msg="Redis health check failed" error="storage: Redis is either down or ws not configured" liveness-check=true prefix=main
time="Apr 07 13:42:16" level=warning msg="Reconnecting storage: Redis is either down or ws not configured" prefix=pub-sub
time="Apr 07 13:42:26" level=error msg="Redis health check failed" error="storage: Redis is either down or ws not configured" liveness-check=true prefix=main
time="Apr 07 13:42:26" level=warning msg="Reconnecting storage: Redis is either down or ws not configured" prefix=pub-sub
time="Apr 07 13:42:26" level=info msg="--> [REDIS] Creating single-node client"
time="Apr 07 13:42:36" level=error msg="Redis health check failed" error="storage: Redis is either down or ws not configured" liveness-check=true prefix=main
time="Apr 07 13:42:36" level=warning msg="Reconnecting storage: Redis is either down or ws not configured" prefix=pub-sub
Below is the values.yaml file -
nameOverride: ""
fullnameOverride: "tyk-headless"

secrets:
  APISecret: "CHANGEME"
  OrgID: "1"

redis:
  shardCount: 128
  useSSL: true
  addrs:
    - "tyk-redis-master.tyk.svc.cluster.local:6379"
  pass: "somepassword"

mongo:
  enabled: false

gateway:
  kind: DaemonSet
  replicaCount: 1
  hostName: "gateway.tykbeta.com"
  tls: false
  containerPort: 8080
  tags: ""
  image:
    repository: tykio/tyk-gateway
    tag: v3.1.2
    pullPolicy: IfNotPresent
  service:
    type: NodePort
    port: 443
    externalTrafficPolicy: Local
    annotations: {}
  control:
    enabled: false
    containerPort: 9696
    port: 9696
    type: ClusterIP
    annotations: {}
  ingress:
    enabled: false
    annotations: {}
    path: /
    hosts:
      - tyk-gw.local
    tls: []
  resources: {}
  nodeSelector: {}
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  affinity: {}
  extraEnvs: []

pump:
  enabled: false
  replicaCount: 1
  image:
    repository: tykio/tyk-pump-docker-pub
    tag: v1.2.0
    pullPolicy: IfNotPresent
  annotations: {}
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []

rbac: true
We tried with useSSL set to both true and false but get the same error either way. Are we missing anything here?
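One thing worth checking, as an assumption rather than a confirmed diagnosis: the client-side useSSL setting must match whether the Redis server itself terminates TLS. If the tyk-redis release was installed with its default, non-TLS configuration, the gateway side would need:

```yaml
redis:
  # useSSL must match the server: if the Redis release was installed
  # without TLS enabled (a common chart default), the gateway must
  # connect without SSL, otherwise the health check keeps failing.
  useSSL: false
  addrs:
    - "tyk-redis-master.tyk.svc.cluster.local:6379"
  pass: "somepassword"
```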
Currently when deploying Tyk with the Helm chart, storage.database is hard-coded to 0. If this database index is already in use, it seems to cause sync issues with the gateway. I could not pin down exactly what the issue was, but manually changing the database index to 1 fixed it. Being able to configure it via the chart's values.yaml would allow for more flexibility and a more stable deployment setup.
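A sketch of what a configurable index could look like in values.yaml (the key placement is a suggestion, not the chart's current API):

```yaml
redis:
  # Proposed: make the Redis database index configurable
  # instead of hard-coding 0 in the templates.
  storage:
    database: 1
```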
There's a wide variety of ways this group of charts can be installed. These options serve different needs and should be documented separately, as the existing README causes a lot of confusion. It should also be clear which ways are stable, which are experimental, etc. There should be full examples of ingress controller usage patterns. All chart options must be documented.
Hi All,
Update: The login is working with Firefox but not with Chrome "Version 91.0.4472.114 (Official Build) (x86_64)".
I'm having a problem with the Tyk Pro dashboard getting session details for the logged-in user. The logs of the dashboard are as follows:
time="Jul 03 06:59:07" level=info msg="Tyk Analytics Dashboard v3.2.0"
time="Jul 03 06:59:07" level=info msg="Copyright Tyk Technologies Ltd 2020"
time="Jul 03 06:59:07" level=info msg="https://www.tyk.io"
time="Jul 03 06:59:07" level=info msg="Using /etc/tyk-dashboard/tyk_analytics.conf for configuration"
time="Jul 03 06:59:07" level=info msg="Listening on port: 3000"
time="Jul 03 06:59:07" level=info msg="Connecting to MongoDB: [tyk-mongo-mongodb.tykpoc.svc.cluster.local:27017]"
time="Jul 03 06:59:07" level=info msg="Mongo connection established"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="Licensing: Setting new license"
time="Jul 03 06:59:07" level=info msg="Licensing: Registering nodes..."
time="Jul 03 06:59:07" level=info msg="Adding available nodes..."
time="Jul 03 06:59:07" level=info msg="Licensing: Checking capabilities"
time="Jul 03 06:59:07" level=info msg="Audit log is disabled in config"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="--> Standard listener (http) for dashboard and API"
time="Jul 03 06:59:07" level=info msg="Creating new Redis connection pool"
time="Jul 03 06:59:07" level=info msg="--> [REDIS] Creating single-node client"
time="Jul 03 06:59:07" level=info msg="Starting zeroconf heartbeat"
time="Jul 03 06:59:07" level=info msg="Starting notification handler for gateway cluster"
time="Jul 03 06:59:07" level=info msg="Loading routes..."
time="Jul 03 06:59:07" level=info msg="Initializing Internal TIB"
time="Jul 03 06:59:07" level=info msg="Initializing Identity Cache" prefix="TIB INITIALIZER"
time="Jul 03 06:59:07" level=info msg="Set DB" prefix="TIB REDIS STORE"
time="Jul 03 06:59:07" level=info msg="Initializing Identity Cache" prefix="TIB INITIALIZER"
time="Jul 03 06:59:07" level=info msg="Set DB" prefix="TIB REDIS STORE"
time="Jul 03 06:59:07" level=info msg="Using internal Identity Broker. Routes are loaded and available."
time="Jul 03 06:59:07" level=info msg="Generating portal on the custom domain: tyk-portal.apps.domain.com"
time="Jul 03 06:59:11" level=info msg="Got configuration for nodeID: 009661fa-3896-40be-4ea3-d42e8a751854|gateway-tykpocwodis-hzk7w" prefix=pub-sub
time="Jul 03 06:59:11" level=info msg="Got configuration for nodeID: e1a569e5-aeb3-4fb8-66ff-ddef2efc7849|gateway-tykpocwodis-rc5lb" prefix=pub-sub
time="Jul 03 06:59:11" level=info msg="Got configuration for nodeID: 2d2a076a-3a4b-4ec4-79e1-2b692471f73d|gateway-tykpocwodis-wcx7d" prefix=pub-sub
time="Jul 03 06:59:12" level=info msg="Got configuration for nodeID: 25aac502-2733-4133-74ea-ff19338baee1|gateway-tykpocwodis-9bbm4" prefix=pub-sub
time="Jul 03 07:00:21" level=error msg="Could not get session in GetCurrentUser" error="redis: nil"
time="Jul 03 07:01:05" level=warning msg="Successful login ([email protected]) from: 10.0.132.1:34600"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:06" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="Could not get session" error="redis: nil"
time="Jul 03 07:01:07" level=error msg="http: named cookie not present"
time="Jul 03 07:01:07" level=error msg="http: named cookie not present"
time="Jul 03 07:01:07" level=error msg="http: named cookie not present"
time="Jul 03 07:01:07" level=error msg="http: named cookie not present"
I think the Redis connections from the dashboard and gateways are configured and working. All checks are passing (see below for manually adding an env parameter). In the end, I can log in to the dashboard successfully, but after 1-2 seconds I get logged out, with the errors shown above.
Any ideas where to look or how to fix this?
** I manually added the following env parameter to the dashboard template to avoid an error message:
- name: TYK_IB_SESSION_SECRET
  valueFrom:
    secretKeyRef:
      name: {{ if .Values.secrets.useSecretName }} {{ .Values.secrets.useSecretName }} {{ else }} secrets-{{ include "tyk-pro.fullname" . }} {{ end}}
      key: APISecret
Not sure if this is needed, but it has no effect on the errors above (the Redis errors are the same with and without it).
Thanks for your help and cheers.
I spent a few days trying to get tyk-hybrid to accept ingress definitions before I came to the realization that tyk-k8s wasn't installed, and needs to be installed for the ingress to work. While https://github.com/TykTechnologies/tyk-helm-chart/blob/master/tyk-hybrid/templates/NOTES.txt does state that it needs to be installed, that seems to be the only place the requirement is mentioned. As this is fairly important to the operation of this system, I'd like to propose adding it to the README so it smacks you in the face that it should be installed separately from the tyk-hybrid or tyk-pro installation.
I think I overlooked the helm output primarily because it looked so similar to what I had just run (helm install tyk-hybrid -f ./values_hybrid.yaml ./tyk-hybrid -n tyk-hybrid).
I can make a PR to help make this clearer (and maybe having this issue searchable can save someone else's time in the future as well).
Using 'latest' is considered a bad practice because it can cause your applications to be updated at any time, without a proper rollback path. These charts should call out specific versions that can be overridden.
A Helm chart for the deployment of the Tyk Identity Broker is not present in the repo. Similar to the other components (gateway, dashboard and pump), it would be great to have the TIB deployment files available as well, with a flag to control whether it is installed.
Add a horizontal pod autoscaling spec for the gateway deployments (when the Deployment kind is used). It should be optional, as this depends on licensing and environment (it needs a metrics-server). Scaling policies must be configurable.
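A minimal sketch of what such an optional HPA could look like (the deployment name and thresholds are illustrative, not taken from the chart):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: gateway-tyk-pro          # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gateway-tyk-pro        # only applies when gateway.kind is Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # requires a metrics-server in the cluster
```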
We would like to implement a public-facing gateway with an IP address whitelist. This would work if the Helm chart supported loadBalancerSourceRanges for the gateway service.
I have a working branch I can submit to support this.
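A sketch of how this might surface in the chart values (the key placement is a proposal; the CIDRs are documentation ranges):

```yaml
gateway:
  service:
    type: LoadBalancer
    port: 443
    # Proposed: restrict the public-facing gateway to a whitelist of CIDRs
    loadBalancerSourceRanges:
      - 203.0.113.0/24
      - 198.51.100.0/24
```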
I'm just getting started with this Helm chart. After helm install, it shows several steps to do. One of them is:
3. Prepare the SSL and CA bundle for webhook
./tyk-k8s/webhook/create-signed-cert.sh -n tyk
cat ./tyk-k8s/webhook/mutatingwebhook.yaml | ./tyk-k8s/webhook/webhook-patch-ca-bundle.sh > ./tyk-k8s/webhook/mutatingwebhook-ca-bundle.yaml
However, when running the first command, I run into an error:
[root@VM_1_16_centos tyk-helm-chart]# ./tyk-k8s/webhook/create-signed-cert.sh -n tyk
creating certs in tmpdir /tmp/tmp.OMHaltlIKA
Generating RSA private key, 2048 bit long modulus
....+++
......................................+++
e is 65537 (0x10001)
certificatesigningrequest.certificates.k8s.io "tyk-k8s-svc.tyk" created
NAME AGE REQUESTOR CONDITION
tyk-k8s-svc.tyk 1s admin Pending
certificatesigningrequest.certificates.k8s.io "tyk-k8s-svc.tyk" approved
ERROR: After approving csr tyk-k8s-svc.tyk, the signed certificate did not appear on the resource. Giving up after 10 attempts.
My question is:
Thanks in advance.
At the very least this includes things like addrs instead of hosts for Redis. Maybe other things too.
Linked to #30, the ingress in the documentation will actually crash the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: tyk
spec:
  rules:
  - host: cafe.example.com
It needs a path element:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: tyk
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
In case you are using another ingress controller and want to disable this sharding ... Tagged gateways like this will only load APIs that have also been tagged as ingress - this is hidden between the lines.

Hi, I'm new to Tyk and I'm considering using it for my project. Is there any more documentation on the service mesh injector other than the README.md in this repository? The README only mentions the injector.tyk.io/inject and injector.tyk.io/route annotations. Are there any other annotations? Where can I find them?
Current link in readme to Tyk Identity Broker: https://tyk.io/docs/getting-started/key-concepts/tyk-components/identity-broker/
Looks like this has been moved to: https://tyk.io/docs/getting-started/tyk-components/identity-broker/
Hello there,
I would like to run n gateways (Tyk Pro, On-Premise) with different tags in a k8s cluster, but I could not find how this is done. Do you have any tips or ideas?
If I set the replica count to 5, I'm deploying 5 gateways, but they all have the same tags.
Thanks for the help! :)
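One approach, sketched and unverified: install the gateway-bearing chart as several separate Helm releases, each with its own gateway.tags value (the values files shown elsewhere in these issues expose gateway.tags for sharding). For example, the values override for a second release might be just:

```yaml
# values for an additional release, e.g. "tyk-gw-external"
gateway:
  # With enableSharding on, each release's gateways load only APIs
  # tagged to match this value.
  tags: "external"
```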
Hello,
I am attempting to deploy tyk-pro using the Bitnami Mongo/Redis charts. I've replaced the mongoURL key in the values file with my own generated URL as the README instructs, but my dashboard pod liveness/readiness checks continue to fail. Checking the logs, the reason seems to be that it's still attempting to connect to the default Mongo URL:
time="Jun 09 15:16:54" level=warning msg="toth/tothic: no TYK_IB_SESSION_SECRET environment variable is set. The default cookie store is not available and any calls will fail. Ignore this warning if you are using a different store."
time="Jun 09 15:16:54" level=info msg="Tyk Analytics Dashboard v3.1.2"
time="Jun 09 15:16:54" level=info msg="Copyright Tyk Technologies Ltd 2020"
time="Jun 09 15:16:54" level=info msg="https://www.tyk.io"
time="Jun 09 15:16:54" level=info msg="Using /etc/tyk-dashboard/tyk_analytics.conf for configuration"
time="Jun 09 15:16:54" level=info msg="Listening on port: 3000"
time="Jun 09 15:16:54" level=warning msg="Default admin_secret `12345` should be changed for production use."
time="Jun 09 15:16:54" level=info msg="Connecting to MongoDB: [mongo.tyk.svc.cluster.local:27017]"
The only way I've gotten that last line's URL to update is by directly updating the mongoURL key in the secrets.yaml file, which I assume the values file key should be overriding.
For reference, here is my values file:
# Default values for tyk-pro chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
nameOverride: ""
fullnameOverride: ""

# Used to shard the gateways. If you enable it make sure you have at least one gateway that is not sharded and also tag the APIs accordingly
enableSharding: false

# Set to true to use Tyk as the Ingress gateway to an Istio Service Mesh
# We apply some exceptions to the Istio IPTables so inbound calls to the
# Gateway and Dashboard are exposed to the outside world - see the deployment templates for implementation
enableIstioIngress: false

# Master switch for enabling/disabling the bootstrapping job batch, role binding, role, and account service.
bootstrap: true

secrets:
  APISecret: CHANGEME
  AdminSecret: "12345"
  # If you don't want to store plaintext secrets in the Helm value file and would rather provide the k8s Secret externally please populate the value below
  useSecretName: ""

redis:
  shardCount: 128
  # addrs:
  #   - redis.tyk.svc.cluster.local:6379
  useSSL: false
  # If you're using Bitnami Redis chart please input the correct host to your installation in the field below
  addrs:
    - tyk-shared-stg-redis-master-0.tyk-shared-stg-redis-headless.tyk.svc.cluster.local:6379
  # If you're using Bitnami Redis chart please input your password in the field below
  pass: //pass
  # If you are using Redis cluster, enable it here.
  # enableCluster: false
  # By default the database index is 0. Setting the database index is not supported with redis cluster. As such, if you have enableCluster: true, then this value should be omitted or explicitly set to 0.
  storage:
    database: 0

mongo:
  # mongoURL: mongodb://mongo.tyk.svc.cluster.local:27017/tyk_analytics
  # If you're using Bitnami MongoDB chart please input your password below
  mongoURL: mongodb://root:<pass>@tyk-shared-stg-mongodb.tyk.svc.cluster.local:27017/tyk-dashboard?authSource=admin
  useSSL: false

mdcb:
  enabled: false
  useSSL: false
  replicaCount: 1
  containerPort: 9090
  healthcheckport: 8181
  license: ""
  forwardAnalyticsToPump: true
  image:
    repository: tykio/tyk-mdcb-docker # requires credential
    tag: v1.7.7
    pullPolicy: Always
  service:
    type: LoadBalancer
    port: 9090
  resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []

tib:
  enabled: false
  useSSL: true
  # The REST API secret to configure the Tyk Identity Broker remotely
  secret: ""
  replicaCount: 1
  containerPort: 3010
  image:
    repository: tykio/tyk-identity-broker
    tag: v1.1.0
    pullPolicy: Always
  service:
    type: ClusterIP
    port: 3010
  ingress:
    enabled: false
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: true
    path: /
    hosts:
      - tib.local
    tls: []
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local
  resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []
  configMap:
    # Create a configMap to store profiles json
    profiles: tyk-tib-profiles-conf

dash:
  enabled: true
  # Dashboard will only bootstrap if the master bootstrap option is set to true
  bootstrap: true
  replicaCount: 1
  hostName: tyk-dashboard.local
  license: //license
  containerPort: 3000
  image:
    repository: tykio/tyk-dashboard
    tag: v3.1.2
    pullPolicy: Always
  service:
    type: NodePort
    port: 3000
  ingress:
    enabled: false
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    path: /
    hosts:
      - tyk-dashboard.local
    tls: []
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local
  resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []
  # Set these values for dashboard admin user
  adminUser:
    firstName: admin
    lastName: user
    email: [email protected]
    # Set a password or a random one will be assigned
    password: ""
  org:
    name: Default Org
    # Set this value to the domain of your developer portal
    cname: tyk-portal.local

portal:
  # Portal will only bootstrap if both the Master and Dashboard bootstrap options are set to true
  # Only set this to false if you're not planning on using developer portal
  bootstrap: true
  path: /
  ingress:
    enabled: false
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    hosts:
      - tyk-portal.local
    tls: []
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local

gateway:
  enabled: true
  kind: DaemonSet
  replicaCount: 2
  hostName: tyk-gw.local
  # if enableSharding set to true then you must define a tag to load APIs to these gateways i.e. "ingress"
  tags: ""
  tls: false
  containerPort: 8080
  image:
    repository: tykio/tyk-gateway
    tag: v3.1.2
    pullPolicy: Always
  service:
    type: NodePort
    port: 8080
    externalTrafficPolicy: Local
    annotations: {}
  control:
    enabled: false
    containerPort: 9696
    port: 9696
    type: ClusterIP
    annotations: {}
  ingress:
    enabled: false
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    path: /
    hosts:
      - tyk-gw.local
    tls: []
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local
  resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  nodeSelector: {}
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  affinity: {}
  extraEnvs: []

pump:
  enabled: true
  replicaCount: 1
  image:
    repository: tykio/tyk-pump-docker-pub
    tag: v1.2.0
    pullPolicy: Always
  annotations: {}
  resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []

rbac: true
Any help would be appreciated.
Thanks,
Mike
Hello, I'm trying to helm upgrade with an MDCB licence but I get a "connection reset by peer" error in the logs.

helm show values tyk-helm/tyk-pro > values.yaml

After getting the licence, I edited values.yaml to set mdcb.enabled: true and mdcb.license: "valueOfLicence", then ran:

helm upgrade -f values.yaml tyk-pro tyk-helm/tyk-pro -n tyk-dev

Helm chart version: tyk-pro-0.9.3. Redis (redis_version:5.0.5) and MongoDB are Apsara RDS on Alicloud.
Here's the log, sorry for using images as logs.
In values.yaml, in the mdcb part:

image:
  # Requires credential
  repository: tykio/tyk-mdcb-docker

Is this still relevant - are credentials still needed for the Docker images?
Affinity rules provide a more expressive and flexible way to select nodes for pod scheduling than nodeSelector does. This is useful in more heterogeneous environments. The charts should allow configuring both hard and soft affinity.
Anti-affinity rules allow flexible scaling while avoiding scheduling to, e.g., nodes that already have a pod matching a certain rule. This is useful to avoid contention, emulate DaemonSet behaviour (while keeping dynamic scaling), etc.
See k8s docs for more info:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
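For illustration, a soft anti-affinity block of the kind the charts could expose through their affinity values (the pod label is a placeholder):

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: tyk-gateway          # placeholder label for the gateway pods
        topologyKey: kubernetes.io/hostname   # prefer spreading across nodes
```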
This is a somewhat subjective issue title, but there are a couple of things that stand out to me about the chart, having worked with many of the official charts. Just a couple of things off the top of my mind:

- The chart uses the ingress.kubernetes.io/rewrite-target annotation key. That has had an nginx. prefix since quite a while ago, so users of more recent versions of the ingress controller will have it silently ignored.

These are hard issues that IMO need fixing. The following points are more debatable but still worth mentioning:

- The gateway is deployed as a DaemonSet by default. With a hostPort setup I would understand, but this way I see no benefit to it over just being a plain old Deployment. I would make this configurable (see the official nginx ingress controller chart for example).

I will add more points as they come up.
The tyk-hybrid chart is currently missing the Ingress resource for the gateways, despite value options being present:
https://github.com/TykTechnologies/tyk-helm-chart/blob/master/values_hybrid.yaml#L46
Add one similar to the tyk-pro chart's:
https://github.com/TykTechnologies/tyk-helm-chart/blob/master/tyk-pro/templates/ingress-gw.yaml
We can address this in our deployment YAMLs with this trick: https://helm.sh/docs/developing_charts/#automatically-roll-deployments-when-configmaps-or-secrets-change
This will save having to run helm upgrade --recreate-pods each time we want new configuration propagated to the pods.
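The trick from the linked Helm docs, sketched against a hypothetical configmap.yaml template in this chart:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Re-rolls the pods whenever the rendered ConfigMap changes,
        # because the checksum annotation value changes with it.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```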
We use the ingress resource for the dashboard to expose it publicly, with SSL offloading in a WAF. When running the bootstrap script, it tries to connect to the internal pod, but we would like it to connect to the public dashboard API instead. However, the script assumes HTTP and ours is exposed on HTTPS. It works after changing all calls in the script from http to https, so it should be possible to support this with an option so we don't have to modify the script.
The templates for tyk-*.redis_url add a stray \n newline character when redis.addrs is set.
e.g. values containing:
redis:
  addrs:
    - redis-server-1:6379
    - redis-server-2:6379
results in
- name: TYK_GW_STORAGE_ADDRS
  value: "redis-server-1:6379,redis-server-2:6379\n"
The stray newline breaks that entry. The newline comes from the {{/* Adds support for ... */}} comment entries:
tyk-helm-chart/tyk-pro/templates/_helpers.tpl
Lines 58 to 67 in aa9eee1
tyk-helm-chart/tyk-headless/templates/_helpers.tpl
Lines 41 to 50 in aa9eee1
tyk-helm-chart/tyk-hybrid/templates/_helpers.tpl
Lines 42 to 51 in aa9eee1
It can be fixed by replacing {{/* with {{- /*.
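For illustration, the change in the helper templates would look like this (comment text abbreviated as in the original):

```yaml
# before: the whitespace around the comment leaks into the rendered value
{{/* Adds support for ... */}}

# after: the leading dash trims the whitespace, so no stray \n is emitted
{{- /* Adds support for ... */}}
```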
If my install failed and I do helm uninstall tyk-pro, I still can't install again since I get:
Error: failed post-install: warning: Hook post-install tyk-pro/templates/bootstrap-post-install.yaml failed: jobs.batch "bootstrap-post-install-tyk-pro" already exists
The post-install job needs to be deleted as well.
Currently it's causing a lot of pain.
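One way to avoid the stale job, sketched as an assumption about the chart's hook template: add a Helm hook-delete-policy annotation to the bootstrap job so the previous job is removed before a new one is created:

```yaml
metadata:
  name: bootstrap-post-install-tyk-pro
  annotations:
    "helm.sh/hook": post-install
    # Delete any leftover job before re-running the hook, and clean up on success
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
```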
At the moment, ingress gateways are tagged ingress. It would be useful to be able to have multiple ingresses, each loading different services, e.g. ingress-internal and ingress-external.
Currently hardcoded to 7676 - it should be configurable via an annotation. The same goes for the iptables rule in the init container.
I have an issue when trying to bootstrap the dashboard. Using the shell script provided, I get a JSON parsing error.
$ ./tyk-pro/scripts/bootstrap_k8s.sh $NODE_IP:$NODE_PORT 12345 tyk
Creating Organisation
ORG DATA: {"Status":"OK","Message":"Org created","Meta":"5cd2ed7bff21b20001ad8d46"}
ORG ID: 5cd2ed7bff21b20001ad8d46
Adding new user
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 290, in load
**kw)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Invalid control character at: line 1 column 35 (char 34)
ERROR: Unable to parse JSON
Kubernetes should be able to tell whether pods are ready to serve traffic and still alive, beyond the basic "container has not crashed" check. The gateway, dashboard, pump and possibly the ingress controller itself need readiness and (where available) separate liveness probes configured for this.
Partly depends on TykTechnologies/tyk#2180 (and probably more, e.g. not sure if ingress controller has this at all currently).
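For the gateway deployment this could look roughly like the following; a sketch assuming the gateway's `/hello` health endpoint on the default listen port 8080 (adjust to the chart's configured port):

```yaml
readinessProbe:
  httpGet:
    path: /hello
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /hello
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
```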
Currently we require manually running a script that calls dashboard admin APIs in order to bootstrap the deployment. It also writes a Secret manifest with dashboard user API credentials that needs to be applied as part of another chart (`tyk-k8s`).
I believe this is not ideal, especially for automation or installing it as a Rancher catalogue app. In addition to this, calling dashboard API remotely for an ingress controller setup process is a bit idiosyncratic. This means one either needs another ingress controller/LB to expose it or rely on node ports, which might not be a desirable option if k8s cluster is running in a private section of a network and therefore nodes are not publicly exposed.
Perhaps running a job resource or an init container that does initial bootstrap and puts the result somewhere could improve it. Ideally, there also needs to be a way to pass the values to the ingress controller without manual intervention. E.g. it could automatically create a Secret
resource that ingress controller would be able to discover.
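One possible shape for this, sketched very loosely (image, script invocation, secret name and RBAC setup are all assumptions, not the chart's current code):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: tyk-bootstrap
  annotations:
    "helm.sh/hook": post-install
spec:
  template:
    spec:
      serviceAccountName: tyk-bootstrap   # needs RBAC permission to create Secrets
      restartPolicy: Never
      containers:
      - name: bootstrap
        image: bitnami/kubectl:latest     # placeholder image with kubectl
        command: ["/bin/sh", "-c"]
        args:
        - |
          # run the existing bootstrap against the in-cluster dashboard service,
          # then persist the resulting credentials for the ingress controller
          TOKEN=$(/scripts/bootstrap_k8s.sh http://dashboard-svc:3000) && \
          kubectl create secret generic tyk-ingress-conf --from-literal=auth="$TOKEN"
```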
https://github.com/TykTechnologies/tyk-helm-chart/tree/master/tyk-headless/certs
The certificates in the headless chart have expired.
Currently, there is no way of adding more container ports for the gateway's port-whitelisting feature: the `ports: - containerPort: {{ .Values.gateway.containerPort }}` entry in tyk-hybrid/templates/deployment-gw-repset.yaml only accepts a single value rather than an array.
It would be solved by having:
gateway:
containerPort:
- 8443
- 2223
- 2224
extraEnvs:
- name: "TYK_GW_PORTSWHITELIST_PORTSWHITELIST_TCP_PORTS"
value: "2223,2224"
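On the template side, a values list like the one above could be rendered with a range loop; a sketch, not the chart's current code:

```yaml
ports:
{{- range .Values.gateway.containerPort }}
- containerPort: {{ . }}
{{- end }}
```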
If you try to deploy `tyk-pro` outside of the `tyk` namespace, it gives you many errors related to the "tyk" namespace.
You should be able to install it anywhere. If I have multiple namespaces in my team, this is problematic.
Can we please remove the hard-coded "tyk" namespace?
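In the templates, the hard-coded namespace could be replaced with Helm's built-in release namespace; a sketch:

```yaml
metadata:
  name: dashboard-svc-{{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
```

With this, `helm install tyk-pro ./tyk-pro -n <any-namespace>` would render manifests into whatever namespace the release targets.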
We have installed the dashboard RPMs from Tyk Yum repo onto RHEL 7 x86_64.
Receiving the error below. The bootstrap.sh script never sets the variable $USER_AUTH_CODE, so I am wondering how this gets set.
Seems similar to this issue: #10
`/opt/tyk-dashboard/install/bootstrap.sh localhost
Found Python interpreter at: /bin/python
Creating Organisation
ORG DATA: {"Status":"OK","Message":"Org created","Meta":"5d3624e60f48ec660c7d9173"}
ORG ID: 5d3624e60f48ec660c7d9173
Adding new user
USER_AUTH_CODE =
USER AUTHENTICATION CODE:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib64/python2.7/json/__init__.py", line 290, in load
    **kw)
  File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Invalid control character at: line 1 column 33 (char 32)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
ERROR: Unable to parse JSON`
Change the templates to run all the Tyk containers as non-root user by default, but keep an option to change the user for the gateway since there might be a legitimate use case with a host network (and privileged port).
The images should mostly work fine with this, known issues are:
helm repo add tc http://trusted-charts.stackpoint.io
Error: Looks like "http://trusted-charts.stackpoint.io" is not a valid chart repository or cannot be reached: Get http://trusted-charts.stackpoint.io/index.yaml: read tcp 192.168.0.172:51140->216.58.223.240:80: read: connection reset by peer
After running the Tyk Helm Chart, I don't have a working Portal.
values.yaml
I run the following commands:
$ kubectl create namespace tyk
$ kubectl apply -f deploy/dependencies/mongo.yaml -n tyk
$ kubectl apply -f deploy/dependencies/redis.yaml -n tyk
$ helm install tyk-pro ./tyk-pro -n tyk --wait
Everything works correctly, but no Portal. I try to access:
And it loads infinitely.
In order to get it to work, I have to
In the tyk-hybrid chart, redis and gateway passwords are directly passed as environment variables.
These are sensitive and should be stored as Secrets instead.
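A sketch of what this could look like, storing the redis password in a Secret and referencing it from the deployment (the Secret name is an assumption; `TYK_GW_STORAGE_PASSWORD` is the gateway's redis password env var):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tyk-redis-password
type: Opaque
stringData:
  redisPass: {{ .Values.redis.pass | quote }}
---
# in the gateway deployment template, instead of a literal value:
env:
- name: TYK_GW_STORAGE_PASSWORD
  valueFrom:
    secretKeyRef:
      name: tyk-redis-password
      key: redisPass
```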
In order to introduce proper versioning/archiving and to be able to install a chart remotely, we'll need a chart repository, which can be as simple as GH Pages/an S3 bucket, or more advanced with ChartMuseum.
More details here: https://helm.sh/docs/topics/chart_repository/
Needs automation to create chart packages and upload to the repository.
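The packaging flow itself is small enough to automate in CI; a sketch (the GH Pages URL is an assumption):

```shell
helm package ./tyk-pro ./tyk-hybrid ./tyk-headless
helm repo index . --url https://tyktechnologies.github.io/tyk-helm-chart
```

`helm repo index` (re)generates `index.yaml` over the packaged `.tgz` files, which is all a static chart repository needs to serve.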
Tyk Gateway can now proxy TCP(s) traffic, and the gateways can listen on multiple ports.
As such, the ingress controller should be updated to reflect the same.
This is the current output:
$ helm install tyk-pro -f ./values.yaml ./tyk-pro
1. Bootstrap the dashboard so we can get a username and password to log in; this also generates access tokens for the controller to use
export NODE_PORT=$(kubectl get --namespace tyk-ingress -o jsonpath="{.spec.ports[0].nodePort}" services dashboard-svc-tyk-pro)
export NODE_IP=$(kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
If you're using minikube, run this instead:
export NODE_IP=$(minikube ip)
export DASH_URL="$NODE_IP:$NODE_PORT"
export DASH_HTTPS=""
1a. You may need to open up that port in your firewall so that we can access the dashboard. Example in GCloud
gcloud compute firewall-rules create dashboard --allow tcp:$NODE_PORT
1b. Bootstrap the dashboard
./tyk-pro/scripts/bootstrap_k8s.sh $DASH_URL 12345 tyk-ingress $DASH_HTTPS
At this point, Tyk Pro is fully installed and should be accessible. Proceed only if you want to install the Tyk ingress controller.
Note that all the bootstrap steps have been replaced by the bootstrap UI. The user simply needs to navigate to the Dashboard now.
Note that step 1a might still be necessary.
Running
helm template pro ./tyk-pro/
Gives the following error
Error: template: tyk-pro/templates/deployment-gw-repset.yaml:44:28: executing "tyk-pro/templates/deployment-gw-repset.yaml" at <include (print $.Template.BasePath "/deployment-gw-repset.yaml") .>: error calling include: template: tyk-pro/templates/deployment-gw-repset.yaml:44:28: executing "tyk-pro/templates/deployment-gw-repset.yaml" at <include (print $.Template.BasePath "/deployment-gw-repset.yaml") .>: error calling include: template: tyk-pro/templates/deployment-gw-repset.yaml:44:28: executing "tyk-pro/templates/deployment-gw-repset.yaml" at <include (print $.Template.BasePath "/deployment-gw-repset.yaml") .>: error calling inc
....
This error repeats at length until Helm bails out.
Reading the error, it seems tyk-pro/templates/deployment-gw-repset.yaml keeps including itself recursively. A quick hunt shows the change was introduced in #84
The offending line is
annotations:
checksum/config: {{ include (print $.Template.BasePath "/deployment-gw-repset.yaml") . | sha256sum }}
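A likely fix is to hash the config template the deployment actually depends on, rather than the deployment template itself (which is what creates the infinite self-include); a sketch, assuming the gateway's config lives in a `configmap.yaml` template — the template name is an assumption:

```yaml
annotations:
  checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

This pattern exists so pods roll when the ConfigMap changes; the hashed template just must not be the deployment that carries the annotation.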
By default, the dashboard uses plain HTTP. Can I attach an SSL certificate?
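I believe the dashboard can terminate TLS itself via its `http_server_options` block, similar to the gateway; a sketch of the relevant `tyk_analytics.conf` fragment, with the certificate paths and domain as assumptions:

```json
{
  "http_server_options": {
    "use_ssl": true,
    "certificates": [
      {
        "domain_name": "dashboard.example.com",
        "cert_file": "/etc/certs/tls.crt",
        "key_file": "/etc/certs/tls.key"
      }
    ]
  }
}
```

In the chart, the certificate files would need to be mounted into the dashboard pod, e.g. from a TLS Secret.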