Add the repository: helm repo add runix https://helm.runix.net
| Name | Description |
|---|---|
| pgadmin4 | pgAdmin 4 is a web-based administration tool for the PostgreSQL database |
Curated applications for Kubernetes
License: Apache License 2.0
Requested in the old repo: rowanruseler/pgadmin4#1
1) Environment
helm version: helm3
kubernetes version: v1.17.6
2) Deploy
helm install pgadmin runix/pgadmin4 --set env.password=admin --namespace test
kubectl get po,svc -n test | grep pgadmin
kubectl logs for pgadmin:
sudo: setrlimit(RLIMIT_CORE): Operation not permitted
[2020-07-01 07:20:22 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2020-07-01 07:20:22 +0000] [1] [ERROR] Retrying in 1 second.
[2020-07-01 07:20:23 +0000] [1] [ERROR] Retrying in 1 second.
[2020-07-01 07:20:24 +0000] [1] [ERROR] Retrying in 1 second.
[2020-07-01 07:20:25 +0000] [1] [ERROR] Retrying in 1 second.
[2020-07-01 07:20:26 +0000] [1] [ERROR] Retrying in 1 second.
[2020-07-01 07:20:27 +0000] [1] [ERROR] Can't connect to ('::', 80)
Can you tell me what's going on?
thanks very much!
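A note on the final log line, for anyone hitting the same wall (this is an interpretation, not confirmed by the maintainers): "Can't connect to ('::', 80)" means gunicorn failed to bind the IPv6 wildcard address on port 80, which can happen on clusters without IPv6 enabled. One hedged workaround, using the chart's env.variables mechanism (the variable name comes from the dpage/pgadmin4 image documentation), is to force an IPv4 listen address:

```yaml
env:
  password: admin
  variables:
    # Bind the IPv4 wildcard instead of the default IPv6 "::"
    - name: PGADMIN_LISTEN_ADDRESS
      value: "0.0.0.0"
```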
Describe the bug
A new error occurs in 1.5.9
template: pgadmin4/templates/ingress.yaml:35:21: executing "pgadmin4/templates/ingress.yaml" at <.path>: can't evaluate field path in type interface {}
Version of Helm and Kubernetes:
Kubernetes: v1.20.5+k3s1
Helm: 2.0.3 (using Terraform)
Which chart:
pgadmin4 1.5.9 (upgrade from 1.5.6)
What happened:
Upgrade fails with error above
What you expected to happen:
The upgrade should succeed.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
I think it's again related to #90
Describe the bug
I'm trying to run pgadmin4 inside a very restricted Kubernetes environment (company internal). The environment prohibits executing anything as root, but the init container has "runAsUser: 0" hardcoded.
All the init container does is chown the runtime directory to the correct user. However, I have found this to be unnecessary, since fsGroup in the pod security context already ensures the runtime directory is writable by the correct GID.
After removing the initContainer block entirely, pgadmin4 was able to start and works as expected.
Version of Helm and Kubernetes:
3.2.0/1.16.x
Which chart:
pgadmin4
What happened:
The pod never started due to the restriction.
What you expected to happen:
The pod to start.
How to reproduce it (as minimally and precisely as possible):
Try to install it on an environment that prohibits running anything as root. Not sure if there's such environment publicly available.
Describe the bug
There is a bug in ingress.yaml, at line 35.
It currently is:
- path: {{ .path }}
It should be:
- path: {{ . }}
Hi,
I'm facing issues when trying to use keycloak gatekeeper in front of pgadmin4. This is my helm values:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: pgadmin4
  namespace: hal-dev
  annotations:
    fluxcd.io/automated: "false"
spec:
  releaseName: pgadmin4
  helmVersion: v3
  chart:
    repository: https://helm.runix.net
    name: pgadmin4
    version: 1.3.6
  values:
    serverDefinitions:
      enabled: true
    VolumePermissions:
      enabled: true
    env:
      email: [email protected]
      pgpassfile: /var/lib/pgadmin/storage/pgadmin/file.pgpass
    image:
      repository: dpage/pgadmin4
      tag: 4.27
    existingSecret: pgadmin4
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: nginx
        cert-manager.io/cluster-issuer: letsencrypt-prod
        nginx.ingress.kubernetes.io/proxy-redirect-from: "~^http://([^/]+)/(.*)$"
        nginx.ingress.kubernetes.io/proxy-redirect-to: "https://pgadmin4-hal-dev.domain.com/$2"
        nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      hosts:
        - host: pgadmin4-hal-dev.domain.com
          paths:
            - /
      tls:
        - hosts:
            - pgadmin4-hal-dev.k8s.domain.com
          secretName: pgadmin4-tls
    persistentVolume:
      enabled: true
      storageClass: nfs
      size: 1Gi
    extraSecretMounts:
      - name: pgpassfile
        secret: pgadmin4
        subPath: file.pgpass
        mountPath: "/var/lib/pgadmin/storage/pgadmin/file.pgpass"
        defaultMode: 0600
    extraConfigmapMounts:
      - name: gatekeeper-config
        configMap: pgadmin4-gatekeeper-config
        readOnly: true
        mountPath: /etc/keycloak-gatekeeper.conf
        subPath: keycloak-gatekeeper.conf
    extraInitContainers: |
      - name: add-folder-for-pgpass
        image: "dpage/pgadmin4:4.27"
        command: ["/bin/mkdir", "-p", "/var/lib/pgadmin/storage/pgadmin"]
        volumeMounts:
          - name: pgadmin-data
            mountPath: /var/lib/pgadmin
        securityContext:
          runAsUser: 5050
    extraContainers: |
      - name: proxy
        image: quay.io/louketo/louketo-proxy:1.0.0
        args:
          - --config=/etc/keycloak-gatekeeper.conf
        ports:
          - name: proxy-web
            containerPort: 3000
        volumeMounts:
          - name: gatekeeper-config
            mountPath: /etc/keycloak-gatekeeper.conf
            subPath: keycloak-gatekeeper.conf
    service:
      targetPort: proxy-web
And my gatekeeper config:
discovery-url: https://keycloak-tools.k8s.domain.com/auth/realms/MyRealm
skip-openid-provider-tls-verify: true
client-id: <redacted>
client-secret: <redacted>
listen: :3000
enable-refresh-tokens: true
redirection-url: https://pgadmin4-hal-dev.k8s.domain.com
secure-cookie: false
encryption-key: <redacted>
upstream-url: http://127.0.0.1:80
resources:
  - uri: /*
    roles:
      - group:admin
enable-default-deny: true
verbose: true
enable-logging: true
cors-origins:
  - '*'
cors-methods:
  - GET
  - POST
groups:
  - group1
Now when the ingress is up and running and I open up https://pgadmin4-hal-dev.k8s.domain.com, it redirects me to keycloak which is fine. I login with some credentials but then keycloak redirects me to https://127.0.0.1/login which is localhost. Shouldn't it be redirecting to https://pgadmin4-hal-dev.k8s.domain.com/login? I know this issue is not directly related to the helm chart but since you've mentioned a way of doing this, I tried but failed to make it work fully. Do I need to specify something more in the gatekeeper config? It would be great if I could get some help on this as I've been stuck for quite a while on this problem. Thanks
Hi,
Trying to install the pgadmin4 Helm chart with the PVC enabled, so that it creates a volume in AWS.
Also, two other questions:
Please include information on how to install pgAgent along with the chart.
Since the image does not include a package manager or tools like gcc, I don't see how it is possible to install pgAgent even after the pod has been deployed.
Please include information on how to do this. Thank you.
Hi @rowanruseler,
Would you accept a PR allowing the pgadmin
entrypoint to be overridden?
I'd like to store OAUTH2_CLIENT_SECRET
in HashiCorp Vault and inject the secret with the Agent Sidecar Injector, so it will be possible to access the secret via an environment variable.
This may also assist others with different approaches they find useful.
Hello, we tried your chart and for some reason are unable to load pre-configured db definitions from servers.json for all users.
If I enable internal mode it works, but only for the initial user.
It would be nice to have the option to load servers.json for oauth2 users.
Is your feature request related to a problem? Please describe.
Hi and thanks a lot for maintaining this chart!
I am provisioning a postgres cluster with the zalando operator. I could already successfully connect to the database using the service name as host, and the password extracted from the secret created by the operator.
As the setup is probably going to be deployed many times for short-lived review instances, I would like to automatically register the server with pgadmin. I saw that this is possible using serverDefinitions
. That helps a lot, because I can set up a server there and the user then only has to supply the password, which they have to manually extract from the secret.
As for the server definition, it seems only possible to pass a reference to a pgpass file there, but not a secret directly.
Describe the solution you'd like
Ideally I would of course like to simply pass the name of the postgres resource and have the rest be figured out by some logic in the chart, or maybe by some operator.
In the shorter term, I would simply like to know whether there is a good way to automatically pass the credentials generated by the postgres operator to pgadmin.
The issue is that after enabling oauth2, pgAdmin crashes with a 500 error and the following in the logs:
File "/pgadmin4/pgadmin/utils/session.py", line 270, in put
dump(
_pickle.PicklingError: Can't pickle <class 'wtforms.form.Meta'>: attribute lookup Meta on wtforms.form failed
Hello,
pgAdmin 5.5 added OAuth2 support.
Is there a way to enable it in the values.yml
?
It would also be nice to allow custom config to be defined in values.yml
Thanks!
The documentation table mentions ingress.path, but this variable is not used in the ingress template.
Best regards
Matthias
Hello @rowanruseler,
I am trying to set up a new postgres database and pgadmin4 with the server definition in minikube, so that pgadmin loads that server definition on startup and I don't have to add it manually on every launch. I have configured the following in values.yaml:
serverDefinitions:
  ## If true, server definitions will be created
  ##
  enabled: true
  servers: |-
    "1": {
      "Name": "My Server",
      "Group": "MyServers",
      "Port": 30066,
      "Username": "[email protected]",
      "Host": 172.17.0.2,
      "SSLMode": "prefer",
      "MaintenanceDB": "postgres"
    }
This does not work for me. I tried deleting the database along with its persistent volume and creating it again, but the definition still does not get picked up. Also, I noticed that the NodePort of the postgres service is dynamic and changes on every startup, e.g. from 5432:30011 to 5432:30066. I tried setting 5432 as a static "Port" value there, but that did not work either.
What am I doing wrong? Please provide your suggestions.
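One thing that stands out (my reading, not an official answer): servers.json must be valid JSON, and the unquoted "Host": 172.17.0.2 above makes the document invalid, so pgAdmin may silently discard it. A quoted variant that also uses the in-cluster service DNS name and container port, so the changing NodePort no longer matters (the host and username below are placeholder values chosen for illustration):

```yaml
serverDefinitions:
  enabled: true
  servers: |-
    "1": {
      "Name": "My Server",
      "Group": "MyServers",
      "Port": 5432,
      "Username": "postgres",
      "Host": "postgres.default.svc.cluster.local",
      "SSLMode": "prefer",
      "MaintenanceDB": "postgres"
    }
```

As far as I know, pgAdmin only imports server definitions the first time a user account is created, so an existing account will not pick up later changes.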
Hello,
is it possible to use LDAP or AD to authenticate users in pgadmin?
Or how can I change the AUTHENTICATION_SOURCES in the config.py inside the container?
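Until there is first-class support, one workaround sketch (assuming the chart's extraConfigmapMounts behaves as in other examples in this thread, and that you create the ConfigMap yourself) is to mount a config_local.py over the image's default:

```yaml
# The ConfigMap "pgadmin4-config-local" (a name chosen here for illustration)
# carries a config_local.py that sets e.g.
#   AUTHENTICATION_SOURCES = ['ldap', 'internal']
extraConfigmapMounts:
  - name: config-local
    configMap: pgadmin4-config-local
    subPath: config_local.py
    mountPath: /pgadmin4/config_local.py
    readOnly: true
```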
Connection refused.
The svc cannot be reached.
Describe the bug
helm upgrade fails because the Persistent Volume cannot be mounted to two pods at once with the default configuration. I was upgrading from chart 1.4.1 / app 4.29.0 to chart 1.6.8 / app 5.3.
Version of Helm and Kubernetes:
$ helm version
version.BuildInfo{Version:"v3.5.3", GitCommit:"041ce5a2c17a58be0fcd5f5e16fb3e7e95fea622", GitTreeState:"dirty", GoVersion:"go1.15.8"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.10-gke.1600", GitCommit:"7b8e568a7fb4c9d199c2ba29a5f7d76f6b4341c2", GitTreeState:"clean", BuildDate:"2021-05-07T09:18:53Z", GoVersion:"go1.15.10b5", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.21) and server (1.19) exceeds the supported minor version skew of +/-1
Which chart:
runix/pgadmin4:1.6.8
What happened:
$ helm -n $NS upgrade pgadmin4 runix/pgadmin4 --version 1.6.8 -f values-pgadmin4-20210624.yaml --description 'upgrade to pgadmin 5.3'
Release "pgadmin4" has been upgraded. Happy Helming!
NAME: pgadmin4
LAST DEPLOYED: Thu Jun 24 09:33:57 2021
NAMESPACE: [REDACTED]
STATUS: deployed
REVISION: 12
NOTES:
1. Get the application URL by running these commands:
http://pgadmin4.local/
Pod failed to start:
$ kubectl -n $NS describe pod pgadmin4-58b6f99684-tw7cx
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 71s default-scheduler Successfully assigned [REDACTED]/pgadmin4-58b6f99684-tw7cx to gke-[REDACTED]-122ed912-9l5g
Warning FailedAttachVolume 71s attachdetach-controller Multi-Attach error for volume "pvc-d1dd43a2-14c1-46e8-853c-8e406fd32ac7" Volume is already used by pod(s) pgadmin4-b9b5988d-g9gnx
What you expected to happen:
I expected the upgraded deployment to mount the same volume.
How to reproduce it (as minimally and precisely as possible):
helm install pgadmin4 runix/pgadmin4 --version 1.4.1 -f values-pgadmin4-20210624.yaml
helm upgrade pgadmin4 runix/pgadmin4 --version 1.6.8 -f values-pgadmin4-20210624.yaml
Anything else we need to know:
Helm values:
$ cat values-pgadmin4-20210624.yaml
env:
  email: [REDACTED]
  enhanced_cookie_protection: "False"
  password: [REDACTED]
  variables:
    - name: PGADMIN_CONFIG_CONSOLE_LOG_LEVEL
      value: "50"
    - name: PGADMIN_CONFIG_FILE_LOG_LEVEL
      value: "50"
    - name: GUNICORN_ACCESS_LOGFILE
      value: /dev/null
ingress:
  annotations:
    kubernetes.io/ingress.class: traefik
  enabled: true
  hosts:
    - host: pgadmin4.local
      paths:
        - path: /
          pathType: Prefix
resources:
  limits:
    cpu: 500m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
I was also surprised to find that Helm uninstall removed the volume.
Describe the bug
Can't upgrade to the newest chart; see the error:
Error: error validating "": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[1].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[1].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]
Version of Helm and Kubernetes:
Kubernetes: v1.20.4+k3s1
Helm: 2.0.3 (using Terraform)
Which chart:
pgadmin4 1.5.7 (upgrade from 1.5.6)
What happened:
Upgrade fails with the error above; I think #89 has broken something.
What you expected to happen:
Upgrade should succeed.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
I am deploying pgadmin4 on OpenShift. OpenShift uses Routes rather than Ingresses, so the default NetworkPolicy doesn't work. I want to remove the NetworkPolicy, but there is currently no way to do so.
When setting the email and password, for example as:
pgadmin4:
  env:
    email: test
    password: test
the pgAdmin UI doesn't let me log in with those details.
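A likely explanation (an assumption about pgAdmin itself, not about the chart): pgAdmin validates PGADMIN_DEFAULT_EMAIL as a syntactically correct e-mail address and enforces a minimum password length, so test/test is rejected. A variant that should pass validation:

```yaml
pgadmin4:
  env:
    # must be a syntactically valid e-mail address
    email: test@example.com
    # avoid trivially short passwords
    password: changeme123
```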
Describe the bug
I want to deploy pgadmin in an OpenShift environment, where it is not allowed to give pods root access.
I deploy the chart with the following config:
securityContext:
  runAsUser: Null
  runAsGroup: Null
  fsGroup: Null
The chart deploys successfully, but the pod runs with the following error:
non root user kubernetes Failed to create the directory /var/log/pgadmin: [Errno 13] Permission denied: '/var/log/pgadmin'
Version of Helm and Kubernetes:
helm_version v3.4.2
Kubernetes v1.19.0+9c69bdc (Openshift 4)
Which chart:
runix/pgadmin4
What happened:
non root user kubernetes Failed to create the directory /var/log/pgadmin: [Errno 13] Permission denied: '/var/log/pgadmin'
What you expected to happen:
The mount point should be created.
How to reproduce it (as minimally and precisely as possible):
Set securityContext:
securityContext:
  runAsUser: Null
  runAsGroup: Null
  fsGroup: Null
Anything else we need to know:
Describe the bug
When I want to connect to a database server the application keeps asking me for a database password even though it is configured.
Version of Helm and Kubernetes:
Helm: v3.2.1
Kubernetes: v1.17.13
Which chart:
runix/pgadmin4
version: pgadmin4-1.4.6
app version: 4.29.0
What happened:
I have a pgpassfile secret:
k describe secret pgpassfile
Name: pgpassfile
Namespace: flexvoucher-dev
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
pgpassfile: 99 bytes
pgadmin % k get secret pgpassfile -o yaml
apiVersion: v1
data:
pgpassfile: <base64 encoded content>
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{},"name":"pgpassfile","namespace":"flexvoucher-dev"},"stringData":{"pgpassfile":"flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com:25060:defaultdb:doadmin:vksx28n65nlt03q5\n"},"type":"Opaque"}
creationTimestamp: "2021-01-19T14:18:47Z"
name: pgpassfile
namespace: flexvoucher-dev
resourceVersion: "40453082"
selfLink: /api/v1/namespaces/flexvoucher-dev/secrets/pgpassfile
uid: a884e2ed-b713-4a2c-a12e-8067aae67889
type: Opaque
pgadmin % echo <base64 encoded content> | base64 -d
flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com:25060:defaultdb:doadmin:MySecretPassword
In values.yaml I defined my server:
serverDefinitions:
  enabled: true
  servers: |-
    "1": {
      "Name": "flexVOUCHER",
      "Group": "Servers",
      "Port": 25060,
      "Username": "doadmin",
      "Host": "flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com",
      "SSLMode": "prefer",
      "MaintenanceDB": "defaultdb"
    }
I have env.pgpassfile defined:
env:
  # can be email or nickname
  email: [email protected]
  password: SuperSecret
  pgpassfile: /var/lib/pgadmin/storage/pgadmin/file.pgpass
VolumePermissions:
  ## If true, enables an InitContainer to set permissions on /var/lib/pgadmin.
  ##
  enabled: true
extraInitContainers: |
  - name: add-folder-for-pgpass
    image: "dpage/pgadmin4:4.23"
    command: ["/bin/mkdir", "-p", "/var/lib/pgadmin/storage/pgadmin"]
    volumeMounts:
      - name: pgadmin-data
        mountPath: /var/lib/pgadmin
    securityContext:
      runAsUser: 5050
The secret above is mounted as PV:
extraSecretMounts:
  - name: pgpassfile
    secret: pgpassfile
    subPath: pgpassfile
    mountPath: "/var/lib/pgadmin/storage/pgadmin/file.pgpass"
    readOnly: true
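One possible cause worth ruling out (an assumption based on libpq's documented behavior, not verified against this deployment): libpq silently ignores a pgpass file whose permissions are looser than 0600, and secret volumes are mounted with mode 0644 by default. The same mount with an explicit defaultMode would look like:

```yaml
extraSecretMounts:
  - name: pgpassfile
    secret: pgpassfile
    subPath: pgpassfile
    mountPath: "/var/lib/pgadmin/storage/pgadmin/file.pgpass"
    defaultMode: 0600   # libpq ignores pgpass files readable by group/others
    readOnly: true
```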
With the above configuration the server is configured and is presented in the left panel, but when I try to connect to the server I must provide the password:
What you expected to happen:
I would expect not to be asked for password.
Anything else we need to know:
Below some data from the deployment. All seems to be ok IMO.
pgadmin % k get pods | grep my-pgadmin
my-pgadmin4-bf544d96f-v68cg 1/1 Running 0 10m
pgadmin % k describe pod my-pgadmin4-bf544d96f-v68cg
Name: my-pgadmin4-bf544d96f-v68cg
[...]
Mounts:
/pgadmin4/servers.json from definitions (rw,path="servers.json")
/var/lib/pgadmin from pgadmin-data (rw)
/var/lib/pgadmin/storage/pgadmin/file.pgpass from pgpassfile (ro,path="pgpassfile")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kmn8x (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
pgadmin-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-pgadmin4
ReadOnly: false
pgpassfile:
Type: Secret (a volume populated by a Secret)
SecretName: pgpassfile
Optional: false
definitions:
Type: Secret (a volume populated by a Secret)
SecretName: my-pgadmin4
Optional: false
default-token-kmn8x:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kmn8x
Optional: false
[...]
The volumes are there. Let's see the details:
pgadmin % k exec -it my-pgadmin4-bf544d96f-v68cg -- cat /pgadmin4/servers.json
{
  "Servers": {
    "1": {
      "Name": "flexVOUCHER",
      "Group": "Servers",
      "Port": 25060,
      "Username": "doadmin",
      "Host": "flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com",
      "SSLMode": "prefer",
      "MaintenanceDB": "defaultdb"
    }
  }
}
pgadmin % k exec -it my-pgadmin4-bf544d96f-v68cg -- cat /var/lib/pgadmin/storage/pgadmin/file.pgpass
flexvoucher-dev-do-user-7158554-0.a.db.ondigitalocean.com:25060:defaultdb:doadmin:MySecretPassword
pgadmin % k exec -it my-pgadmin4-bf544d96f-v68cg -- env | grep PGPASSFILE
PGPASSFILE=/var/lib/pgadmin/storage/pgadmin/file.pgpass
Hi @rowanruseler ,
I am trying to install pgadmin4 into a GKE cluster using Jenkins X. Unfortunately the promotion build already fails. The following command results in an error:
helm lint --set tags.jx-lint=true --set global.jxLint=true --set-string global.jxTypeEnv=lint --values values.yaml
Console log:
Successfully added Helm repository helm.runix.net.
error: failed to lint the chart '.': failed to run 'helm lint --set tags.jx-lint=true --set global.jxLint=true --set-string global.jxTypeEnv=lint --values values.yaml' command in directory '.', output: '2020/05/06 15:16:16 Warning: Merging destination map for chart 'pgadmin4'. The destination item 'tls' is a table and ignoring the source 'tls' as it has a non-table value of: []
2020/05/06 15:16:16 Warning: Merging destination map for chart 'pgadmin4'. The destination item 'tls' is a table and ignoring the source 'tls' as it has a non-table value of: []
2020/05/06 15:16:16 Warning: Merging destination map for chart 'pgadmin4'. The destination item 'tls' is a table and ignoring the source 'tls' as it has a non-table value of: []
2020/05/06 15:16:16 Warning: Merging destination map for chart 'pgadmin4'. The destination item 'tls' is a table and ignoring the source 'tls' as it has a non-table value of: []
==> Linting .
[ERROR] templates/: render error in "env/charts/pgadmin4/templates/ingress.yaml": template: env/charts/pgadmin4/templates/ingress.yaml:18:16: executing "env/charts/pgadmin4/templates/ingress.yaml" at <.hosts>: can't evaluate field hosts in type interface {}
Error: 1 chart(s) linted, 1 chart(s) failed'
make: *** [build] Error 1
Pipeline failed on stage 'pr-checks' : container 'step-lint-env-helm'. The execution of the pipeline has stopped.
I attached my values.yaml file (with .txt appended to the filename so it can be attached):
values.yaml.txt
Can you please show me what I'm doing wrong?
pgadmin4 chart version: 1.2.14
Environment:
jx 2.1.13+cjxd.9
Kubernetes cluster v1.14.10-gke.27
kubectl v1.13.2
helm client 2.12.2
git 2.17.1
Operating System Ubuntu 18.04.4 LTS
Many thanks in advance!
Describe the bug
If you start pgadmin, each liveness/readiness probe creates a session in the /var/lib/pgadmin/sessions folder. After some time the volume is full and you get a "No space left" error.
Version of Helm and Kubernetes:
Helm: v3.6.1
Kubernetes: 1.20
Which chart:
pgadmin4: 1.6.8
image: dpage/pgadmin4:5.4
What happened:
Each liveness probe creates a session in the /var/lib/pgadmin/sessions folder until the volume is full. The liveness probe path is /misc/ping.
What you expected to happen:
The liveness/readiness probes shouldn't create sessions.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
This chart value is not documented in the README: .Values.persistentVolume.existingClaim
Hello,
I encountered an issue when using pgAdmin4 behind a Traefik reverse-proxy with AWS Cognito OAuth2 configured.
Describe the bug
The redirect_uri returned by pgAdmin when trying to connect with Cognito isn't secured with HTTPS, which causes AWS Cognito to refuse the operation with "error=redirect_mismatch".
I can't find which variable I should set in the helm values to let pgAdmin know I use HTTPS in front of the reverse-proxy.
In the values file, I have tried the variables SCHEME, HTTP_X_SCHEME, and wsgi.url_scheme to set the internal wsgi.url_scheme to "https", without success.
Version of Helm and Kubernetes:
Terraform Helm provider v2.2.0
K8s v1.19
Which chart:
pgadmin4
What happened:
On the pgAdmin4 login page, when the Cognito login option is clicked, the HTTP response header (which is a redirection) given by pgAdmin4 contains:
location = https://<obfuscated cognito domain>.auth.<AWS Region>.amazoncognito.com/oauth2/authorize?response_type=code&client_id=<obfuscated client id>&redirect_uri=http%3A%2F%2Fpgadmin.example.org%2Foauth2%2Fauthorize&[...]
What you expected to happen:
The expected value should be:
location = https://<obfuscated cognito domain>.auth.<AWS Region>.amazoncognito.com/oauth2/authorize?response_type=code&client_id=<obfuscated client id>&redirect_uri=https%3A%2F%2Fpgadmin.example.org%2Foauth2%2Fauthorize&[...]
How to reproduce it (as minimally and precisely as possible):
Our OAuth2 config template:
MASTER_PASSWORD_REQUIRED = False
AUTHENTICATION_SOURCES = ['oauth2', 'internal']
OAUTH2_AUTO_CREATE_USER = True
OAUTH2_CONFIG = [
    {
        'OAUTH2_NAME': 'cognito',
        'OAUTH2_DISPLAY_NAME': 'Cognito',
        'OAUTH2_CLIENT_ID': '${COGNITO_CLIENT_ID}',
        'OAUTH2_CLIENT_SECRET': '${COGNITO_CLIENT_SECRET}',
        'OAUTH2_TOKEN_URL': 'https://${COGNITO_USER_POOL_NAME}.auth.${AWS_REGION}.amazoncognito.com/oauth2/token',
        'OAUTH2_AUTHORIZATION_URL': 'https://${COGNITO_USER_POOL_NAME}.auth.${AWS_REGION}.amazoncognito.com/oauth2/authorize',
        'OAUTH2_API_BASE_URL': 'https://${COGNITO_USER_POOL_NAME}.auth.${AWS_REGION}.amazoncognito.com/oauth2/',
        'OAUTH2_USERINFO_ENDPOINT': 'https://${COGNITO_USER_POOL_NAME}.auth.${AWS_REGION}.amazoncognito.com/oauth2/userInfo',
        'OAUTH2_ICON': 'fa-aws',
        'OAUTH2_BUTTON_COLOR': '#ff9900',
        'OAUTH2_SCOPE': 'openid email profile',
    }
]
Hi,
I want to use the pgadmin helm chart, but
I think https://helm.runix.net has an error.
I think it is GitHub Pages, but it is empty.
If not, is this a problem with my connection?
Permit defining the subPath of pgadmin4 on the PVC; this avoids sharing the PVC with other apps in the same namespace.
Hi there,
I am trying to deploy this chart on Azure Kubernetes Service (running Kubernetes v1.17.9). Here is my values.yaml:
env:
  email: admin@domain
  password: <REDACTED>
  variables:
    - name: PGADMIN_LISTEN_ADDRESS
      value: "0.0.0.0"
persistentVolume:
  storageClass: azurefilepgadmin
service:
  type: LoadBalancer
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "pgadmin.azure.domain."
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
VolumePermissions:
  ## If true, enables an InitContainer to set permissions on /var/lib/pgadmin.
  ##
  enabled: true
I get no real errors in the logs other than what appear to be just warnings:
sudo: setrlimit(RLIMIT_CORE): Operation not permitted
[2020-09-11 06:51:17 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2020-09-11 06:51:17 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2020-09-11 06:51:17 +0000] [1] [INFO] Using worker: threads
/usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
return io.open(fd, *args, **kwargs)
[2020-09-11 06:51:17 +0000] [88] [INFO] Booting worker with pid: 88
However, the container never seems to become "available". If I open a shell in the container I can see the gunicorn process has started and is listening on port 80, but it just does not respond, and no error gets emitted in the logs. In the Kubernetes events this is also correlated by the liveness and readiness probes failing:
Warning Unhealthy 45s kubelet, aks-prdakspool1-37452914-vmss000001 Liveness probe failed: Get http://10.181.67.42:80/misc/ping: dial tcp 10.181.67.42:80: connect: connection refused
Warning Unhealthy 40s kubelet, aks-prdakspool1-37452914-vmss000001 Readiness probe failed: Get http://10.181.67.42:80/misc/ping: dial tcp 10.181.67.42:80: connect: connection refused
I am currently at a loss as to how to debug further or what the next steps towards a resolution are.
I am on k8s 1.20 and I see this error message:
networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
According to _helpers, networking.k8s.io/v1beta1
is chosen if supported; otherwise it falls back to an even older version.
Correct would be to test for networking.k8s.io/v1
first and then fall back to networking.k8s.io/v1beta1
.
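A sketch of the corrected helper logic (the define name is an assumption; the chart's actual helper may differ):

```yaml
{{- define "pgadmin.ingress.apiVersion" -}}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" -}}
networking.k8s.io/v1
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1/Ingress" -}}
networking.k8s.io/v1beta1
{{- else -}}
extensions/v1beta1
{{- end -}}
{{- end -}}
```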
Describe the bug
Getting a permission denied error on startup for the pod pgadmin4-xxxx.
Version of Helm and Kubernetes:
Kubernetes: 1.18.3 - Running on Premise
Helm: 3.2.3
Which chart:
runix/pgadmin4
What happened:
Installed the chart with the following command.
helm upgrade --install --namespace xxxx \
  --set env.email= \
  --set env.password= \
  --set persistentVolume.enabled=true \
  --set persistentVolume.storageClass= \
  --set persistentVolume.size=2Gi \
  --set service.type=ClusterIP \
  --set ingress.enabled=false \
  pgadmin runix/pgadmin4
The chart has been installed successfully but the app crashed with the following errors:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/gthread.py", line 104, in init_process
    super(ThreadWorker, self).init_process()
  File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 129, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python3.8/site-packages/gunicorn/util.py", line 350, in import_app
    __import__(module)
  File "/pgadmin4/run_pgadmin.py", line 4, in <module>
    from pgAdmin4 import app
  File "/pgadmin4/pgAdmin4.py", line 92, in <module>
    app = create_app()
  File "/pgadmin4/pgadmin/__init__.py", line 241, in create_app
    create_app_data_directory(config)
  File "/pgadmin4/pgadmin/setup/data_directory.py", line 40, in create_app_data_directory
    _create_directory_if_not_exists(config.SESSION_DB_PATH)
  File "/pgadmin4/pgadmin/setup/data_directory.py", line 16, in _create_directory_if_not_exists
    os.mkdir(_path)
PermissionError: [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
[2020-06-18 13:07:36 +0000] [91] [INFO] Worker exiting (pid: 91)
WARNING: Failed to set ACL on the directory containing the configuration database: [Errno 1] Operation not permitted: '/var/lib/pgadmin'
/usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
return io.open(fd, *args, **kwargs)
[2020-06-18 13:07:36 +0000] [1] [INFO] Shutting down: Master
[2020-06-18 13:07:36 +0000] [1] [INFO] Reason: Worker failed to boot.
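For what it's worth, other reports in this list use the chart's VolumePermissions option, which enables an init container that sets permissions on /var/lib/pgadmin; enabling it may avoid this crash (a suggestion, not a confirmed fix for this storage class):

```yaml
VolumePermissions:
  ## If true, enables an InitContainer to set permissions on /var/lib/pgadmin.
  ##
  enabled: true
```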
When executing:
helm repo add runix https://helm.runix.net/
it returns the error:
Error: looks like "https://helm.runix.net/" is not a valid chart repository or cannot be reached: Get https://helm.runix.net/index.yaml: dial tcp: lookup helm.runix.net on 127.0.0.53:53: no such host
Removing the trailing "/" from the URL makes it work fine.
Suggested:
helm repo add runix https://helm.runix.net
Hi,
It would be nice if we could have better control of env variables. Currently, a few options have explicit configuration, but for each new option we have to create a new Helm value, which is tiresome. Right now, I'd like to set two additional env vars:
- name: PGADMIN_CONFIG_ALLOW_SAVE_PASSWORD
value: 'False'
- name: PGADMIN_CONFIG_UPGRADE_CHECK_ENABLED
value: 'False'
and I'd either have to create two additional Helm options, or extend the usage of "values.env" to generate a block similar to ingress.hosts. See also
Describe the bug
When using versions of the chart above 1.2.14, initContainers
is rendered with an extra - name
when extraInitContainers
is specified.
Version of Helm and Kubernetes:
Helm 3.2.0
Which chart:
pgadmin4
What happened:
Rendered:
spec:
  initContainers:
    - name: chmod-pgpassfile
      image: dpage/pgadmin4:4.21
      imagePullPolicy: IfNotPresent
      command: ["/bin/chmod", "0600", "/etc/pgadmin4/pgpassfile"]
      volumeMounts:
        - name: pgpassfile
          mountPath: /etc/pgadmin4
      securityContext:
        runAsUser: 0
    - name
What you expected to happen:
spec:
  initContainers:
    - name: chmod-pgpassfile
      image: dpage/pgadmin4:4.21
      imagePullPolicy: IfNotPresent
      command: ["/bin/chmod", "0600", "/etc/pgadmin4/pgpassfile"]
      volumeMounts:
        - name: pgpassfile
          mountPath: /etc/pgadmin4
      securityContext:
        runAsUser: 0
How to reproduce it (as minimally and precisely as possible):
helm template pgadmin runix/pgadmin4 -f pgadmin-values.yaml
using the following values:
extraInitContainers: |
  - name: chmod-pgpassfile
    image: dpage/pgadmin4:4.21
    imagePullPolicy: IfNotPresent
    command: ["/bin/chmod", "0600", "/etc/pgadmin4/pgpassfile"]
    volumeMounts:
      - name: pgpassfile
        mountPath: /etc/pgadmin4
    securityContext:
      runAsUser: 0
Anything else we need to know:
Hi Guys,
I am trying to deploy this chart, and I would also need the 9.4.24 postgres binary to be available for connecting to our Greenplum instances.
Is this easy to do? Can someone give me some pointers?
Would be grateful if you guys could help me out. Thanks for all your work so far.
Is your feature request related to a problem? Please describe.
pgAdmin4 supports LDAP authentication. It is enabled by putting some parameters in config_local.py, which can be injected into the pod by a ConfigMap, such as:
key: config_local.py
value:
import os
AUTHENTICATION_SOURCES = ['ldap', 'internal']
LDAP_AUTO_CREATE_USER = False
LDAP_CONNECTION_TIMEOUT = 30
LDAP_SERVER_URI = 'ldap://myldap.domain.com:389'
LDAP_SEARCH_BASE_DN = 'dc=mydomain,dc=com'
LDAP_USERNAME_ATTRIBUTE = 'sAMAccountName'
LDAP_BIND_USER = os.environ['LDAP_BIND_USER']
LDAP_BIND_PASSWORD = os.environ['LDAP_BIND_PASSWORD']
This ConfigMap can be mounted as a volume at /pgadmin4/config_local.py. This is enough for pgAdmin LDAP authentication to work with the current image 4.26.
Describe the solution you'd like
It would be worthwhile if we could put only the content of config_local.py in values.yml and the chart did all the rest (create the ConfigMap and mount the volume at /pgadmin4/config_local.py), such as:
config_local: |
# put here the values you want to be mounted in /pgadmin4/config_local.py
# such as to enable ldap authentication:
# import os
# AUTHENTICATION_SOURCES = ['ldap', 'internal']
# LDAP_AUTO_CREATE_USER = False
# LDAP_CONNECTION_TIMEOUT = 30
# LDAP_SERVER_URI = 'ldap://myldap.domain.com:389'
# LDAP_SEARCH_BASE_DN = 'dc=domain,dc=com'
# LDAP_USERNAME_ATTRIBUTE = 'sAMAccountName'
# LDAP_BIND_USER = os.environ['LDAP_BIND_USER']
# LDAP_BIND_PASSWORD = os.environ['LDAP_BIND_PASSWORD']
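A sketch of the ConfigMap template this would need (the helper name pgadmin4.fullname is an assumption):

```yaml
# configmap.yaml sketch, keyed off a new .Values.config_local value
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "pgadmin4.fullname" . }}-config
data:
  config_local.py: |
{{ .Values.config_local | indent 4 }}
```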
Describe alternatives you've considered
In addition, values.yml could accept an existing secret with the keys user and password for the LDAP bind, and the chart would automatically expose the values as environment variables LDAP_BIND_USER and LDAP_BIND_PASSWORD in the pod, such as:
ldap_bind_secret: ldap-bind
# an existing secret with keys user and password to be
# exposed as environment variables LDAP_BIND_USER and LDAP_BIND_PASSWORD
# in the pod.
Additional context
Add any other context or screenshots about the feature request here.
I was getting error:
Error: template: pgadmin4/templates/ingress.yaml:38:21: executing "pgadmin4/templates/ingress.yaml" at <.path>: can't evaluate field path in type interface {}
helm.go:81: [debug] template: pgadmin4/templates/ingress.yaml:38:21: executing "pgadmin4/templates/ingress.yaml" at <.path>: can't evaluate field path in type interface {}
To fix that I've changed this:
to:
paths:
- path: /
Not sure if that's my issue only.
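For reference, a guess at the values shape the 1.5.9 ingress template expects (an assumption based on the error and the fix above; the host is hypothetical):

```yaml
ingress:
  enabled: true
  hosts:
    - host: pgadmin.example.com
      paths:
        - path: /
```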
When you specify an image.tag different from the one specified in Chart.yaml, the resource label app.kubernetes.io/version shows the wrong version.
Add LDAP support to the chart (currently pgadmin uses default credentials)
Currently not supported
Kubernetes version 1.19 made adjustments to Ingress:
1:
api version: networking.k8s.io/v1
2:
service:
  port:
    number: 80
Currently supported are the api versions networking.k8s.io/v1beta1 and extensions/v1beta1, both of which use servicePort rather than service:port:number.
Solution
Implement networking.k8s.io/v1, but keeping in mind the following versions should also be supported:
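One common way to keep all three API versions working from a single template is a capabilities check; a sketch (assuming Helm's standard .Capabilities object):

```yaml
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1/Ingress" }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
```

The path and backend fields then have to be switched on the same condition, since networking.k8s.io/v1 replaced servicePort with service.port.number.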
Describe the bug
For now there are two different versions of the docker image dpage/pgadmin4 in the values.yaml file.
First location:
image:
registry: docker.io
repository: dpage/pgadmin4
tag: "5.1"
pullPolicy: IfNotPresent
The second, out-of-sync (and disabled by default) section:
extraInitContainers: |
# - name: add-folder-for-pgpass
# image: "dpage/pgadmin4:4.23"
# command: ["/bin/mkdir", "-p", "/var/lib/pgadmin/storage/pgadmin"]
# volumeMounts:
# - name: pgadmin-data
# mountPath: /var/lib/pgadmin
# securityContext:
# runAsUser: 5050
Which chart:
charts/pgadmin4
I'm not sure if this difference is intended or not. I guess it would be nice to use the same version, to avoid downloading two different images and the resulting slower initial pod start after install.
Thanks,
Laszlo
Since dpage/pgadmin4 version 4.21, LDAP authentication support has been added to the project.
It would be useful to add this feature to the Helm chart.
To make this possible, the chart needs to expose the parameters available in the link above.
Describe the bug
In a Rancher cluster with kubernetes 1.19.6, Rancher logs displays this warning:
extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Version of Helm and Kubernetes:
Kubernetes 1.19.6
Which chart: pgAdmin4
What happened:
What you expected to happen:
No warning at all.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
Solution:
Accept apiVersion as a parameter in the ingress configuration. values.yml should accept:
image:
tag: "4.29"
ingress:
enabled: true
apiVersion: networking.k8s.io/v1
...
Hello! The current version of the chart, pgadmin4-1.7.6, uses pgadmin4 5.7.
Is it possible to upgrade to the latest currently available pgadmin4 6.1?
Thanks.
Hi @rowanruseler ,
I deployed pgadmin4 through Bitnami's Kubeapps into my GKE cluster. The deployment was successful, but Kubeapps is now not able to show the deployed pgadmin4.
I opened an issue in the kubeapps repository and it might be interesting for you to have a look at it:
vmware-tanzu/kubeapps#1725
The problem is that the library they use to parse the deployment.yaml is not able to parse line 73 in there: value: !!string {{ .Values.env.enhanced_cookie_protection }}
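One possible workaround (an assumption, not necessarily the chart's eventual fix): drop the non-standard !!string tag (the standard YAML short tag is !!str) and quote the templated value instead, which every parser reads as a string:

```yaml
# deployment.yaml sketch; the env var name is inferred from the chart's
# enhanced_cookie_protection value
- name: PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION
  value: {{ .Values.env.enhanced_cookie_protection | quote }}
```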
Is your feature request related to a problem? Please describe.
The current values.yaml doesn't allow another image registry to be specified. This means the default docker.io is used, which affects our pipelines due to the public pull rate limit recently imposed by Docker. The same applies to helm-charts/charts/pgadmin4/templates/tests/test-connection.yaml, where the busybox image can only be pulled from docker.io.
Describe the solution you'd like
Add registry to the image declaration under helm-charts/charts/pgadmin4/values.yaml; the default value could be docker.io:
image:
registry: docker.io   # <-- added
repository: dpage/pgadmin4
tag: "4.28"
pullPolicy: IfNotPresent
Use registry in all related templates, e.g. in helm-charts/charts/pgadmin4/templates/deployment.yaml:
image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
Describe alternatives you've considered
N/A
Additional context
N/A
Thank you
The variables env.email and env.password are documented twice in the table.
Best regards
Matthias
Describe the bug
FATA: 2020/05/29 13:45:29.801809 add network policy: named ports in network policies is not supported yet. Rejecting network policy: pgadmin-pgadmin4 from further processing. named port http in network policy
Version of Helm and Kubernetes:
Helm 2
Kubernetes 1.14
Weave 2.6
Which chart:
pgadmin4
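Weave rejects named ports in NetworkPolicies, so one workaround is to reference the port by number; a sketch (an assumption about the policy's shape, targeting the container port behind the name http):

```yaml
# NetworkPolicy fragment: use the numeric container port instead of the
# name "http", which Weave 2.6 cannot resolve
spec:
  ingress:
    - ports:
        - port: 80        # was: port: http
          protocol: TCP
```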
Hi,
I deployed this Helm chart using a CI/CD pipeline and it was successful. I am also able to add databases.
I tried to enable LDAP according to [https://github.com//issues/47] by adding the following details to the values.yaml env variables:
env:
variables:
- name: PGADMIN_CONFIG_AUTHENTICATION_SOURCES
value: "['ldap', 'internal']"
- name: PGADMIN_CONFIG_LDAP_AUTO_CREATE_USER
value: "False"
- name: PGADMIN_CONFIG_LDAP_CONNECTION_TIMEOUT
value: 30
- name: PGADMIN_CONFIG_LDAP_SERVER_URI
value: "'ldap://ldap.mydomain.com:389'"
- name: PGADMIN_CONFIG_LDAP_SEARCH_BASE_DN
value: "'dc=mydomain,dc=com'"
- name: PGADMIN_CONFIG_LDAP_USERNAME_ATTRIBUTE
value: "'UserPrincipalName'"
- name: PGADMIN_CONFIG_LDAP_BIND_USER
value: "'myuser'"
- name: PGADMIN_CONFIG_LDAP_BIND_PASSWORD
value: "'mypassword'"
Which generates the following config_distro.py:
HELP_PATH = '../../docs'
DEFAULT_BINARY_PATHS = {
'pg': '/usr/local/pgsql-12'
}
LDAP_SEARCH_BASE_DN = 'dc=mydomain,dc=com'
LDAP_AUTO_CREATE_USER = False
LDAP_SERVER_URI = 'ldap://ldap.mydomain.com:389'
AUTHENTICATION_SOURCES = ['ldap', 'internal']
ENHANCED_COOKIE_PROTECTION = False
LDAP_USERNAME_ATTRIBUTE = 'UserPrincipalName'
LDAP_BIND_PASSWORD = 'mypassword'
LDAP_CONNECTION_TIMEOUT = 30
LDAP_BIND_USER = 'myuser'
But I still cannot log in. Is there anything that I am missing?
Thanks.
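The single-quote wrapping in the values above matters: the value of each PGADMIN_CONFIG_* variable is written into config_distro.py as given, so it must be a valid Python literal. A small sketch of that behavior (render_config is an illustrative helper, not the image's actual entrypoint code):

```python
# Sketch: the pgadmin4 image writes each PGADMIN_CONFIG_FOO env value
# verbatim into config_distro.py as "FOO = <value>", so string values
# need an inner layer of Python quotes.
env = {
    "PGADMIN_CONFIG_LDAP_CONNECTION_TIMEOUT": "30",            # int literal
    "PGADMIN_CONFIG_LDAP_BIND_USER": "'myuser'",               # str literal
    "PGADMIN_CONFIG_AUTHENTICATION_SOURCES": "['ldap', 'internal']",
}

def render_config(environ):
    """Render config_distro.py lines the way the entrypoint would."""
    prefix = "PGADMIN_CONFIG_"
    return "\n".join(
        f"{key[len(prefix):]} = {value}"
        for key, value in environ.items()
        if key.startswith(prefix)
    )

namespace = {}
exec(render_config(env), namespace)                  # evaluated as Python code
print(type(namespace["LDAP_CONNECTION_TIMEOUT"]))    # <class 'int'>
print(namespace["AUTHENTICATION_SOURCES"])           # ['ldap', 'internal']
```

Note also that in the report above, LDAP_CONNECTION_TIMEOUT has the unquoted YAML value 30, which Kubernetes rejects for env vars (env values must be strings); quoting it as "30" in YAML still yields an int in config_distro.py, as sketched here.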