rimusz / charts
Helm Charts for Kubernetes
License: MIT License
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug report
Version of Helm and Kubernetes:
Which chart:
gcloud-sqlproxy
What happened:
I'm trying to deploy the chart with two SQL instances that live in different projects but share the same instance name. The deployment failed with the following message:
Service "x" is invalid: spec.ports[5].name: Duplicate value: "x-consultation"
Even with instanceShortName set, the chart still uses the original instance name for naming the port in the Service object.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
cloudsql:
instances:
- project: project-1
region: europe-west1
instance: abc-consultation
instanceShortName: abc-consult-stg
port: 3309
- project: project-2
region: europe-west1
instance: abc-consultation
instanceShortName: abc-consult-prd
port: 3310
Anything else we need to know:
This might be fixable by also using the instanceShortName (when available) in svc.yaml for naming the port.
Also, this doesn't work when using HPA with no value for replicasCount.
The Service name no longer corresponds to the application name, rendering the Helm chart non-functional. This caused a few hours of downtime in two of my production setups.
Please don't make a breaking change like this again.
Caused by commit 867f85b
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
I consider this a BUG as it is contrary to community practice.
Version of Helm and Kubernetes:
any supported versions
Which chart:
stable/contour
What happened:
One must override a values.yaml file default to change the namespace.
What you expected to happen:
helm install --namespace my-namespace rimusz/contour (everything else unchanged)
Feature Request
Add a configuration for ReclaimPolicy in your StorageClass template, something like
storageClass.reclaimPolicy: Retain
Options are Retain and Delete.
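A sketch of what this could look like, assuming the chart exposes a `storageClass.reclaimPolicy` value (the key name is hypothetical; it is not in the chart today):

```yaml
# values.yaml (hypothetical key):
storageClass:
  reclaimPolicy: Retain   # or Delete

# templates/storageclass.yaml (sketch of the corresponding template line):
# reclaimPolicy: {{ .Values.storageClass.reclaimPolicy | default "Delete" }}
```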
For sqlproxy to default to no credentials.json, usingGCPController: "false" should instead be set to usingGCPController: "" in values.yaml. In Go templates, any non-empty string, including the string "false", is truthy, so the hasCredentials helper below still evaluates to true:
{{- define "gcloud-sqlproxy.hasCredentials" -}}
{{ or .Values.serviceAccountKey ( or .Values.existingSecret .Values.usingGCPController ) -}}
{{- end -}}
{{ if $hasCredentials -}}
- -credential_file=/secrets/cloudsql/{{ include "gcloud-sqlproxy.secretKey" . }}
{{ end -}}
https://github.com/rimusz/charts/blob/master/stable/gcloud-sqlproxy/templates/_helpers.tpl#L58-L60
Is this a request for help?:
No
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
feature request
Version of Helm and Kubernetes:
does not apply
Which chart:
hostpath-provisioner
What happened:
All PVs have retain policy "delete"
What you expected to happen:
Retain policy should be possible to set
How to reproduce it (as minimally and precisely as possible):
just create a PVC, check PV's retain policy
Anything else we need to know:
This can easily be achieved by setting the policy for the created storage class: https://medium.com/faun/kubernetes-how-to-set-reclaimpolicy-for-persistentvolumeclaim-7eb7d002bb2e
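Concretely, the StorageClass the chart creates only needs the standard `reclaimPolicy` field set (a sketch; the metadata and provisioner names are illustrative, and `reclaimPolicy` defaults to Delete when omitted):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath            # illustrative name
provisioner: hostpath       # illustrative; use the chart's provisioner name
reclaimPolicy: Retain       # Kubernetes defaults this to Delete when omitted
```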
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG
If this is a BUG REPORT, please:
charts/stable/gcloud-sqlproxy/values.yaml, line 21 in 36e7528
Version of Helm and Kubernetes:
helm: 3.13.0
kubernetes: 1.24
Which chart:
gcloud-sqlproxy:0.25.2
What happened:
The template sets my HPA kind as Deployment even though in my values.yaml file I have useStatefulset: true.
What you expected to happen:
Should set the HPA kind to StatefulSet instead of Deployment.
Possible fix: probably just needs an if/else depending on what useStatefulset is set to in values.yaml.
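A sketch of such a conditional in templates/hpa.yaml (the fullname helper is assumed, not verified against the chart):

```yaml
scaleTargetRef:
  apiVersion: apps/v1
  kind: {{ if .Values.useStatefulset }}StatefulSet{{ else }}Deployment{{ end }}
  name: {{ include "gcloud-sqlproxy.fullname" . }}
```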
Thank you
FEATURE REQUEST:
Add support for PRIVATE IPs.
Which chart:
gcloud-sqlproxy
What happened:
I needed to connect to Cloud SQL via private IP, so I modified the container command to:
/cloud_sql_proxy --dir=/cloudsql \
  -instances=XXXXXXX \
  -ip_address_types=PRIVATE
But it would be nice to set this from the values file.
FEATURE REQUEST
Version of Helm and Kubernetes: All
Which chart: gcloud-sqlproxy
Feature: Add custom labels to the deployments/stateful sets.
It would be great if we could have the possibility of adding additional labels, on top of what is already there. In my case I would need to add custom labels for Datadog, Istio, etc. Technically I know it's possible to use the post-rendering feature helm offers however that introduces a lot of complexity for something so simple.
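A hypothetical values shape for this (the key name `extraLabels` is assumed; it is not in the chart today):

```yaml
# Custom labels merged into the Deployment/StatefulSet metadata and
# pod template labels, on top of the chart's own labels:
extraLabels:
  team: platform
  monitoring: datadog
```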
Let me know what you think.
Thanks and have a great Friday!
Thank you for the work you have done @rimusz !
I noticed that the gcloud-sqlproxy chart was initially made available in helm/charts, but has now transitioned here.
What motivated this move?
Now it is necessary to add the following to your values:
extraArgs:
  private-ip: true
Is this a request for help?: NO
Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report
Version of Helm and Kubernetes:
helm:
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
kubectl:
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T06:59:37Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Using minikube version: v0.32.0
Which chart:
gcloud-sqlproxy
What happened:
$ helm upgrade pg-sqlproxy rimusz/gcloud-sqlproxy --namespace sqlproxy \
--set serviceAccountKey="$(cat service-account.json | base64)" \
--set cloudsql.instances[0].instance=INSTANCE \
--set cloudsql.instances[0].project=PROJECT \
--set cloudsql.instances[0].region=REGION \
--set cloudsql.instances[0].port=5432 -i
Does not work
Error was:
Error: YAML parse error on gcloud-sqlproxy/templates/secrets.yaml: error converting YAML to JSON: yaml: line 14: could not find expected ':'
This was caused by the base64 command appending newlines after a certain number of characters.
What you expected to happen:
The chart to deploy properly
How to reproduce it (as minimally and precisely as possible):
Attempt to deploy the chart using a version of base64 that appends newlines.
Anything else we need to know:
The fix is to use $(cat db-proxy_key.json | base64 | tr -d '\n'), which removes the newlines.
Standard set +x was really useful for figuring this one out.
Edit: the command is tr -d '\n'. I accidentally pasted an early attempt.
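The fix in miniature: GNU base64 wraps its output at 76 columns by default, which injects newlines into the --set value and corrupts the templated YAML. Stripping them yields a single-line value:

```shell
# Strip the wrapping newlines before handing the value to --set:
key=$(printf 'hello' | base64 | tr -d '\n')
echo "$key"   # aGVsbG8=
# GNU coreutils also supports disabling wrapping directly: base64 -w0
```

Either `tr -d '\n'` or `base64 -w0` works; `tr` is more portable (macOS base64 has no `-w` flag).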
Is this a request for help?:
Yes
Version of Helm and Kubernetes:
Helm v3.9.0
Kubernetes: 1.21.10-gke.2000
Which chart:
sqlproxy
What happened:
None of my pods can connect to Cloud SQL (MySQL) anymore. It was working fine for well over a year, then it suddenly broke.
What you expected to happen:
All deployments are up and the sqlproxy service is not outputting any errors, but none of the pods can connect. If I access endpoints on these pods that do not trigger a DB connection, they work; only DB-connection endpoints are broken. I also tried to log in to the DB through phpMyAdmin and it won't log in (because it is unable to connect to the DB).
How to reproduce it (as minimally and precisely as possible):
I am unsure, this was working perfectly fine for well over a year.
Anything else we need to know:
Yes. Thinking I might have had a broken install or perhaps missed an update, I tried upgrading with:
helm upgrade sqlproxy rimusz/gcloud-sqlproxy --namespace sqlproxy --set serviceAccountKey="$(cat sqlproxy-key.json | base64 | tr -d '\n')" --set cloudsql.instances[0].instance=instancename --set cloudsql.instances[0].project=projectid --set cloudsql.instances[0].region=region --set cloudsql.instances[0].port=1234 -i
However I am receiving: zsh: no matches found: cloudsql.instances[0].instance=instancename
Yet the instance is certainly there and the name is correct.
So I created a completely new project from scratch, with a new Cloud SQL instance and a new cluster, reran the setup from scratch, and hit the exact same error. So I am unclear whether something has changed between GKE, Cloud SQL, Helm, or sqlproxy that has left things in a broken state.
I'm in a bit of panic as some services in production are offline as a consequence.
Appreciate the help.
UPDATE: I was actually able to run the upgrade eventually after realizing I just needed to escape the brackets (sorry, first time on macOS). After updating, sqlproxy appears to connect; my apps are still not connecting, but the sqlproxy log suggests it is connected, so I will re-run some more tests and report back soon.
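For anyone else hitting this: unquoted square brackets are glob patterns in zsh, and when no file matches, zsh aborts with "no matches found" before helm ever runs. Quoting the whole --set value (or escaping the brackets) passes it through literally:

```shell
# Quote the --set value so zsh does not treat [0] as a glob pattern:
set_arg='cloudsql.instances[0].instance=instancename'
echo "$set_arg"   # cloudsql.instances[0].instance=instancename
```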
Is this a request for help?: No
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
Helm version: version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}
Kubernetes version: 1.16
Minikube version: v1.5.2, commit: 792dbf92a1de583fcee76f8791cff12e0c9440ad
Which chart:
https://github.com/rimusz/charts/tree/master/stable/gcloud-sqlproxy
What happened:
When running:
helm upgrade pg-sqlproxy rimusz/gcloud-sqlproxy --namespace sqlproxy \
--set serviceAccountKey="$(cat service-account.json | base64 | tr -d '\n')" \
--set cloudsql.instances[0].instance=INSTANCE \
--set cloudsql.instances[0].project=PROJECT \
--set cloudsql.instances[0].region=REGION \
--set cloudsql.instances[0].port=5432 -i
(of course, I replaced the placeholders)
I get:
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
What you expected to happen:
The chart should install itself.
How to reproduce it (as minimally and precisely as possible):
Create a Kubernetes 1.16 cluster with minikube start (it currently creates a 1.16 cluster by default), then run the above command (helm upgrade ...).
Anything else we need to know:
Based on https://stackoverflow.com/questions/58481850/no-matches-for-kind-deployment-in-version-extensions-v1beta1 it seems that the Deployment object no longer works with the extensions/v1beta1 apiVersion; it now only works with apps/v1.
See: https://github.com/rimusz/charts/blob/master/stable/gcloud-sqlproxy/templates/deployment.yaml#L3
Note: apps/v1 seems to work on 1.14.9 as well:
$ minikube kubectl api-resources | grep deployment
deployments   deploy   apps         true   Deployment
deployments   deploy   extensions   true   Deployment
(with a minikube created via minikube start --kubernetes-version v1.14.9)
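The fix is a one-line change to the deployment.yaml header; apps/v1 is the only Deployment API served from Kubernetes 1.16 onwards:

```yaml
apiVersion: apps/v1
kind: Deployment
```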
FEATURE REQUEST:
Ability to pass extraArgs without a value
Version of Helm and Kubernetes:
K8s: 1.20
Helm: 3.7.1
Which chart:
gcloud-sqlproxy
What happened:
The -enable_iam_login extraArg does not take a value: https://github.com/GoogleCloudPlatform/cloudsql-proxy#-enable_iam_login
Thus the chart needs to accommodate extraArgs without a value and with a single-dash prefix.
Is this a request for help?: No
Is this a BUG REPORT or FEATURE REQUEST? Bug report
Which chart: gcloud-sqlproxy
What happened: The CPU limit causes occasional queries to spike in response time.
How to reproduce it (as minimally and precisely as possible):
\timing
SELECT "id" FROM "User" WHERE "id" = 1;
Will result in response times like:
2ms, 2ms, 2ms, 50ms, 2ms, 2ms, 90ms, 2ms, 2ms
After ripping my hair out wondering why some of my API calls were slow, I found an issue which mentions that removing the CPU limit stops the spikes: GoogleCloudPlatform/cloud-sql-proxy#168 (comment)
I tried it and, as expected, my queries are now consistently around 2 ms.
So I propose increasing the CPU limit or removing it entirely, as the default will affect anyone using the provided value.
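One hedged shape for the workaround, assuming the chart exposes a standard `resources` block in values (key names and amounts are illustrative):

```yaml
resources:
  requests:
    cpu: 100m
    memory: 64Mi
  # No cpu limit: avoids CFS quota throttling during bursty query load.
  limits:
    memory: 128Mi
```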
Thanks for maintaining this chart. I notice that the default has replicas of 1 and a PDB of 1. In the ingress-nginx chart this led to node upgrades never completing, which was solved with:
{{- if gt .Values.controller.replicaCount 1.0 }}
Thank you for your work on this @rimusz!
I created an issue for the cloud-sql-proxy that this Helm chart deploys, about reducing its permissions to a minimum. This may be worth keeping an eye on, as this chart's README suggests "Cloud SQL Admin" is required while less may suffice.
Please implement loadBalancerSourceRanges in the Service.
Without it we are unable to use firewall rules to restrict connections.
This is a core Service feature:
https://kubernetes.io/docs/concepts/services-networking/service/
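A hypothetical values shape for this (key names assumed), which the service template would render into the standard spec.loadBalancerSourceRanges field:

```yaml
service:
  loadBalancerSourceRanges:
    - 10.0.0.0/8
    - 192.168.0.0/16
```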
Is this a request for help?:
Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature Request
Version of Helm and Kubernetes:
Kubernetes: 1.17
Helm: 3.2.4
Which chart:
gcloud-sqlproxy 0.19.13
What happened:
Is there a workaround to use a different image that has nc installed?
I enabled livenessProbe and defined the port as 3306
livenessProbe:
  exec:
    command: ["nc", "-z", "127.0.0.1", "3306"]
What you expected to happen:
I read that as of v1.16 and above the cloudsql proxy container is based on distroless, and there are a couple of alternatives mentioned here: https://github.com/GoogleCloudPlatform/cloudsql-proxy#container-images
The version of the chart I'm using is 0.19.13. So if I were to set the image in my values to:
image: gcr.io/cloudsql-docker/gce-proxy-alpine
imageTag: "1.16"
would it have any breaking changes?
How to reproduce it (as minimally and precisely as possible):
Define a livenessProbe with exec:
livenessProbe:
  enabled: true
  port: 3306
  exec:
    command: ["nc", "-z", "127.0.0.1", "3306"]
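An alternative that avoids needing nc in the image at all: Kubernetes tcpSocket probes are performed by the kubelet from the node, not inside the container, so they work with the distroless proxy image (a sketch; the chart would need to support this probe type in its values):

```yaml
livenessProbe:
  tcpSocket:
    port: 3306
  initialDelaySeconds: 5
  periodSeconds: 10
```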
Hello, thank you for maintaining this Helm chart. It is unfortunately not compatible with the cloud-sql-proxy images starting at version 2.0.0.
The entrypoint of the container has changed from cloud_sql_proxy to cloud-sql-proxy, and since it is not configurable via the values, we are stuck with containers <2.0.0.
This is the PR which introduced the change: GoogleCloudPlatform/cloud-sql-proxy#1326
Note that the container registry changed at the same time: gcr.io/cloudsql-docker/gce-proxy -> gcr.io/cloud-sql-connectors/cloud-sql-proxy (cf. GoogleCloudPlatform/cloud-sql-proxy#1607 (comment)).
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Version of Helm and Kubernetes:
3 / 1.14
Which chart:
gcloud-sqlproxy
What happened:
Best practice is that the livenessProbe failure threshold should be higher than the readinessProbe failure threshold, to prevent unwanted restarts.
See: https://github.com/rimusz/charts/blob/master/stable/gcloud-sqlproxy/values.yaml#L108
But maybe a real readiness check via a SQL statement would be much nicer anyway. Would you consider adding another gcloud docker image which contains a mysql client too?
Is this a request for help?: yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BugReport
Version of Helm and Kubernetes:
k8s: 1.21.3
helm: 3.6.3
Which chart: hostpath-provisioner
What happened:
Provisioner doesn't provision PVs:
I0901 10:41:21.403079 1 controller.go:926] provision "local-ns/s3" class "SomeName": started
E0901 10:41:21.410537 1 controller.go:943] provision "local-ns/s3" class "SomeName": unexpected error getting claim reference: selfLink was empty, can't make reference
Googled and found similar issue in nfs provisioner (kubernetes-sigs/nfs-subdir-external-provisioner#25).
Kubernetes changed the default behaviour from v1.20.0 and started using RemoveSelfLink=true.
The workaround was to pass --feature-gates=RemoveSelfLink=false to kube-apiserver, as an argument or via kubectl apply -f. That worked until k8s v1.21.0.
Now I can't make hostpath-provisioner work with a kind cluster.
What you expected to happen:
Provisioner should provision PVs.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
Any help will be appreciated.
I'm using a Pulumi provider to deploy the gcloud-sqlproxy Helm chart, and something's preventing the workload from scaling up. I'm pretty sure it's just my bad configuration, so I'm going through the documentation again.
I'm a bit confused, because the installation instructions tell me that I need to manually create the service account and pass the proxy the credentials. Which is fine, but then I also notice serviceAccount.create in values, which defaults to true...
Are these separate? The same? Is the documentation out of date?
Help! :D
Have you considered submitting this to helm/charts?
Used Tools:
Google Kubernetes Engine Version: v1.27.3-gke.1700
Helm Version: v3.11.3
Helm Terraform Provider Version: 2.11.0
Description:
This bug affects the gcloud-sqlproxy chart; because of it I could neither install the chart as a new release nor upgrade an existing one.
The values.yaml of the release looked like this:
cloudsql:
instances:
- instance: "long-instance-name-foo"
project: "project"
region: "region"
port: 5432
- instance: "long-instance-name-bar"
project: "project"
region: "region"
port: 5433
# other values....
The Helm release install/upgrade failed with "Duplicate value" errors for port names. Snippet from the error message:
* Service "sqlproxy" is invalid: [spec.ports[1].name: Duplicate value: "long-instance-n", spec.ports[2].name: Duplicate value: "long-instance-n"] * Deployment.apps "sqlproxy" is invalid: [spec.template.spec.containers[0].ports[1].name: Duplicate value: "long-instance-n", spec.template.spec.containers[0].ports[2].name: Duplicate value: "long-instance-n"]
My analysis of this issue:
If instanceShortName is not explicitly set, the chart creates it from the first 15 characters of instance, so the resulting instanceShortName is no longer unique when two instances share an identical prefix of 15 characters or more.
In our case we had multiple Cloud SQL instances with the same prefix, and the prefix was longer than 15 characters.
What you expected to happen:
When the sqlproxy is deployed, it should generate a random alphanumeric string of up to 9 characters and append it to the instance name, to create a unique instanceShortName.
instanceShortName in helpers.tpl (gcloud-sqlproxy Chart Version 0.25.2):
{{/*
Create the short instance name
*/}}
{{- define "gcloud-sqlproxy.instanceShortName" -}}
{{ .instanceShortName | default (.instance | trunc 15 | trimSuffix "-") }}
{{- end -}}
Suggested Fix (gcloud-sqlproxy Chart Version 0.25.3):
{{- define "gcloud-sqlproxy.instanceShortName" -}}
{{- $randomString := randAlphaNum 9 | lower -}}
{{ .instanceShortName | default (printf "%s-%s" (.instance | trunc 5 | trimSuffix "-") $randomString) }}
{{- end -}}
As a result, instanceShortName looks like the truncated instance name followed by the random string.
How to reproduce: deploy the chart with Helm (even without the Terraform Helm provider) using the following values; the only condition is that more than one instance shares a prefix of 15 characters or more.
cloudsql:
instances:
- instance: "long-instance-name-foo" # or any other name you prefer
project: "project"
region: "region"
port: 5432
- instance: "long-instance-name-bar" # or any other name you prefer
project: "project"
region: "region"
port: 5433
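The collision can be reproduced outside Helm; this sketch emulates the chart's `trunc 15 | trimSuffix "-"` pipeline using bash substring syntax:

```shell
# Emulate gcloud-sqlproxy.instanceShortName when instanceShortName is unset:
short() { s=${1:0:15}; printf '%s\n' "${s%-}"; }
short "long-instance-name-foo"   # long-instance-n
short "long-instance-name-bar"   # long-instance-n  (same -> duplicate port name)
```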
Is this a request for help?:
Yes
Version of Helm and Kubernetes: helm v3.5, kubectl: v1.21.2
Which chart: rimusz/nfs-client-provisioner
What happened:
Error on installation
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2"
What you expected to happen:
Helm v3 to install the chart.
How to reproduce it (as minimally and precisely as possible):
helm install nfs-us-central1-c rimusz/nfs-client-provisioner --namespace nfs-storage --set nfs.server=${FSADDR} --create-namespace
Anything else we need to know:
No
Is this a request for help?:
Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature Request
Version of Helm and Kubernetes:
helm: 3.9.1
kubernetes: 1.22.11
Which chart:
gcloud-sqlproxy:0.22.8
What happened:
Cannot pass instance values from a secret, and can't define extra env vars for the deployment.
What you expected to happen:
The possibility to have an extraEnv value to set the instance values like project:region:instance or, preferably, to pass the values from a secret.
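A hypothetical shape for such a value (the key name `extraEnv` and the secret name are assumptions; nothing like this exists in the chart today):

```yaml
extraEnv:
  - name: DB_INSTANCE
    valueFrom:
      secretKeyRef:
        name: sqlproxy-config     # illustrative secret name
        key: connectionName       # e.g. project:region:instance
```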
Hello, in version 0.24.0 the Helm chart fails while applying the HPA. With version 0.23.0 everything works well.
I see that between the two versions the apiVersion was changed from autoscaling/v2beta1 to autoscaling/v2, which is fine, because v2beta1 is deprecated since 1.19 and no longer served as of 1.25.
autoscaling/v2 is available since 1.23, but it has some breaking changes that are currently not reflected in the gcloud-sqlproxy chart.
I would like to suggest using autoscaling/v2beta2 to apply the HorizontalPodAutoscaler resource.
K8s: v1.23.14-gke.1800
Error while applying helm chart:
Helm upgrade failed: error validating "": error validating data: [ValidationError(HorizontalPodAutoscaler.spec.metrics[0].resource): unknown field "targetAverageUtilization" in io.k8s.api.autoscaling.v2.ResourceMetricSource, ValidationError(HorizontalPodAutoscaler.spec.metrics[0].resource): missing required field "target" in io.k8s.api.autoscaling.v2.ResourceMetricSource]
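For reference, the breaking change behind the error: autoscaling/v2 replaced the flat targetAverageUtilization field with a nested target block (standard Kubernetes API):

```yaml
metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```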