
helm-openldap's Introduction


OpenLDAP Helm Chart

Disclaimer

This version now uses the Bitnami OpenLDAP container image.

More details on the container image can be found here.

The chart now supports Bitnami/OpenLDAP 2.6.6.

Due to #115, the chart does not fully support scaling the OpenLDAP cluster. To scale the cluster, please follow Scaling your cluster.

  • This will be fixed as a priority

Prerequisites Details

  • Kubernetes 1.8+
  • PV support on the underlying infrastructure

Chart Details

This chart will do the following:

  • Instantiate 3 instances of the OpenLDAP server with multi-master replication
  • Deploy phpLDAPadmin to administer the OpenLDAP server
  • Deploy ltb-passwd for self-service password management

TL;DR

To install the chart with the release name my-release:

$ helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
$ helm install my-release helm-openldap/openldap-stack-ha

Configuration

We use the container images provided by https://github.com/bitnami/containers/tree/main/bitnami/openldap. The container image is highly configurable and well documented. Please consult the documentation of the image for more information.

The following tables list the configurable parameters of the openldap chart and their default values.

Global section

Global parameters to configure the deployment of the application.

| Parameter | Description | Default |
|-----------|-------------|---------|
| global.imageRegistry | Global image registry | "" |
| global.imagePullSecrets | Global list of imagePullSecrets | [] |
| global.ldapDomain | LDAP domain; can be explicit (dc=example,dc=org) or domain based (example.org) | example.org |
| global.existingSecret | Use an existing secret for credentials; the expected keys are LDAP_ADMIN_PASSWORD and LDAP_CONFIG_ADMIN_PASSWORD | "" |
| global.adminUser | OpenLDAP database admin user | admin |
| global.adminPassword | Administration password of OpenLDAP | Not@SecurePassw0rd |
| global.configUserEnabled | Whether to create a configuration admin user | true |
| global.configUser | OpenLDAP configuration admin user | admin |
| global.configPassword | Configuration password of OpenLDAP | Not@SecurePassw0rd |
| global.ldapPort | LDAP port | 389 |
| global.sslLdapPort | LDAPS port | 636 |
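For example, a minimal values override using only the global section could look like the sketch below (the domain and passwords are placeholders, not recommendations):

global:
  ldapDomain: example.org
  adminUser: admin
  adminPassword: Not@SecurePassw0rd
  configUserEnabled: true
  configPassword: Not@SecurePassw0rd
  ldapPort: 389
  sslLdapPort: 636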

Application parameters

Parameters related to the configuration of the application.

| Parameter | Description | Default |
|-----------|-------------|---------|
| replicaCount | Number of replicas | 3 |
| users | List of users to create (comma-separated list); can't be used with customLdifFiles | "" |
| userPasswords | User passwords to create (comma-separated list) | "" |
| group | Group to create and add the list of users above to | "" |
| env | List of key/value pairs passed to the container as environment variables. See https://github.com/bitnami/containers/tree/main/bitnami/openldap for the available ones | [see values.yaml] |
| initTLSSecret.tls_enabled | Set to enable TLS/LDAPS with a custom certificate. Please also set initTLSSecret.secret, otherwise it will not take effect | false |
| initTLSSecret.secret | Secret containing the TLS cert and key; must contain the keys tls.key, tls.crt and ca.crt | "" |
| customSchemaFiles | Custom OpenLDAP schema files used in addition to the default schemas | "" |
| customLdifFiles | Custom OpenLDAP configuration files used to override default settings | "" |
| customLdifCm | Existing ConfigMap with custom ldif; can't be used with customLdifFiles | "" |
| customAcls | Custom OpenLDAP ACLs. Overrides the default ones | "" |
| replication.enabled | Enable multi-master replication | true |
| replication.retry | Retry period for replication, in seconds | 60 |
| replication.timeout | Timeout for replication, in seconds | 1 |
| replication.starttls | StartTLS setting for replication | critical |
| replication.tls_reqcert | TLS certificate validation for replication | never |
| replication.interval | Interval for replication | 00:00:00:10 |
| replication.clusterName | Set the cluster name for replication | "cluster.local" |
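As an illustration, the snippet below combines a few of these parameters; the user names, passwords, group name and the secret name openldap-tls are hypothetical examples:

users: user01,user02
userPasswords: password01,password02
group: readers
initTLSSecret:
  tls_enabled: true
  secret: openldap-tls
replication:
  enabled: true
  clusterName: cluster.local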

PhpLdapAdmin configuration

Parameters related to PHPLdapAdmin

| Parameter | Description | Default |
|-----------|-------------|---------|
| phpldapadmin.enabled | Enable the deployment of PhpLdapAdmin | true |
| phpldapadmin.ingress | Ingress of PhpLdapAdmin | {} |
| phpldapadmin.env | Environment variables for PhpLdapAdmin | {PHPLDAPADMIN_LDAP_CLIENT_TLS_REQCERT: "never"} |

For more advanced configuration, see README.md.
For all possible chart parameters, see the chart's README.md.

Self-service password configuration

Parameters related to Self-service password.

| Parameter | Description | Default |
|-----------|-------------|---------|
| ltb-passwd.enabled | Enable the deployment of Ltb-Passwd | true |
| ltb-passwd.ingress | Ingress of the Ltb-Passwd service | {} |

For more advanced configuration, see README.md.
For all possible parameters, see the chart's README.md.

Kubernetes parameters

Parameters related to Kubernetes.

| Parameter | Description | Default |
|-----------|-------------|---------|
| updateStrategy | StatefulSet update strategy | {} |
| kubeVersion | Override Kubernetes version | "" |
| nameOverride | String to partially override common.names.fullname | "" |
| fullnameOverride | String to fully override common.names.fullname | "" |
| commonLabels | Labels to add to all deployed objects | {} |
| clusterDomain | Kubernetes cluster domain name | cluster.local |
| extraDeploy | Array of extra objects to deploy with the release | "" |
| service.annotations | Annotations to add to the service | {} |
| service.externalIPs | Service external IP addresses | [] |
| service.enableLdapPort | Enable the LDAP port on the service and headless service | true |
| service.enableSslLdapPort | Enable the SSL LDAP port on the service and headless service | true |
| service.ldapPortNodePort | NodePort of the external service port for LDAP if service.type is NodePort | nil |
| service.clusterIP | Static cluster IP to assign to the service (if supported) | nil |
| service.loadBalancerIP | IP address to assign to the load balancer (if supported) | "" |
| service.loadBalancerSourceRanges | List of IP CIDRs allowed access to the load balancer (if supported) | [] |
| service.sslLdapPortNodePort | NodePort of the external service port for SSL if service.type is NodePort | nil |
| service.type | Service type; can be ClusterIP, NodePort or LoadBalancer | ClusterIP |
| persistence.enabled | Whether to use PersistentVolumes or not | false |
| persistence.storageClass | Storage class for PersistentVolumes | <unset> |
| persistence.existingClaim | Use an existing PersistentVolumeClaim | <unset> |
| persistence.accessMode | Access mode for PersistentVolumes | ReadWriteOnce |
| persistence.size | PersistentVolumeClaim storage size | 8Gi |
| extraVolumes | Extra volumes that can be mounted into the StatefulSet | None |
| extraVolumeMounts | Extra volume mounts for the StatefulSet | None |
| customReadinessProbe | Readiness probe configuration | [see values.yaml] |
| customLivenessProbe | Liveness probe configuration | [see values.yaml] |
| customStartupProbe | Startup probe configuration | [see values.yaml] |
| resources | Container resource requests and limits, in YAML | {} |
| podSecurityContext | Enable OPENLDAP pods' Security Context | true |
| containerSecurityContext | Set OPENLDAP pod's Security Context fsGroup | true |
| existingConfigmap | The name of an existing ConfigMap with your custom configuration for OPENLDAP | "" |
| podLabels | Extra labels for OPENLDAP pods | {} |
| podAnnotations | Extra annotations for OPENLDAP pods | {} |
| podAffinityPreset | Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | "" |
| podAntiAffinityPreset | Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | soft |
| pdb.enabled | Enable the Pod Disruption Budget | false |
| pdb.minAvailable | Configure the PDB to have at least this many healthy replicas | 1 |
| pdb.maxUnavailable | Configure the PDB to have at most this many unhealthy replicas | <unset> |
| nodeAffinityPreset.type | Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | true |
| affinity | Affinity for OPENLDAP pods assignment | "" |
| nodeSelector | Node labels for OPENLDAP pods assignment | "" |
| sidecars | Add additional sidecar containers to the OPENLDAP pod(s) | "" |
| initContainers | Add additional init containers to the OPENLDAP pod(s) | "" |
| volumePermissions | 'volumePermissions' init container parameters | "" |
| priorityClassName | OPENLDAP pods' priority class name | "" |
| tolerations | Tolerations for pod assignment | [] |
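For instance, exposing the service through a NodePort and enabling persistence could be sketched as follows; the node port numbers and the storage class fast-disks are assumptions for your cluster, and the field names follow the table above:

service:
  type: NodePort
  ldapPortNodePort: 30389
  sslLdapPortNodePort: 30636
persistence:
  enabled: true
  storageClass: "fast-disks"
  accessMode: ReadWriteOnce
  size: 8Gi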

Specify each parameter using the --set key=value[,key=value] argument to helm install.
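For example (a sketch, assuming the repository was added as shown in the TL;DR section):

$ helm install my-release helm-openldap/openldap-stack-ha \
    --set replicaCount=3 \
    --set global.ldapDomain=example.org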

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

$ helm install my-release -f values.yaml helm-openldap/openldap-stack-ha

Tip: You can use the default values.yaml

PhpLdapAdmin

To enable PhpLdapAdmin, set phpldapadmin.enabled to true.

Ingress can be configured if you want to expose the service. Set up the env part of the configuration to access the OpenLDAP server.

Note: the LDAP host should match the following pattern: namespace.Appfullname

Example :

phpldapadmin:
  enabled: true
  ingress:
    enabled: true
    annotations: {}
    # Assuming that ingress-nginx is used
    ingressClassName: nginx
    path: /
    ## Ingress Host
    hosts:
    - phpldapadmin.local
  env:
    PHPLDAPADMIN_LDAP_CLIENT_TLS_REQCERT: "never"

Self-service-password

To enable Self-service-password, set ltb-passwd.enabled to true.

Ingress can be configured if you want to expose the service.

Set up the ldap part with the information of the OpenLDAP server.

Set bindDN according to your LDAP domain.

Note: the LDAP server host should match the following pattern: ldap://namespace.Appfullname

Example :

ltb-passwd:
  enabled: true
  ingress:
    enabled: true
    annotations: {}
    # Assuming that ingress-nginx is used
    ingressClassName: nginx
    host: "ssl-ldap2.local"

Cleanup orphaned Persistent Volumes

Deleting the release will not delete the associated Persistent Volumes if persistence is enabled.

Do the following after deleting the chart release to clean up orphaned Persistent Volumes.

$ kubectl delete pvc -l release=${RELEASE-NAME}

Custom Secret

global.existingSecret can be used to override the default secret.yaml provided by the chart.
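A minimal sketch, assuming a secret named openldap-credentials that carries the two expected keys:

$ kubectl create secret generic openldap-credentials \
    --from-literal=LDAP_ADMIN_PASSWORD='Not@SecurePassw0rd' \
    --from-literal=LDAP_CONFIG_ADMIN_PASSWORD='Not@SecurePassw0rd'

Then reference it in your values:

global:
  existingSecret: openldap-credentials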

Scaling your cluster

In order to scale the cluster, first use Helm to upgrade the number of replicas:

helm upgrade -n openldap-ha --set replicaCount=4 openldap-ha .

Then connect to the <openldap>-0 container and, under /opt/bitnami/openldap/etc/schema/, edit:

  1. serverid.ldif: remove the existing olcServerID entries (only keep the one you added by scaling)
  2. brep.ldif: remove the existing olcServerID entries (only keep the one you added by scaling)
  3. Apply your changes:
ldapmodify -Y EXTERNAL -H ldapi:/// -f /tmp/serverid.ldif
ldapmodify -Y EXTERNAL -H ldapi:/// -f /tmp/brep.ldif

Tip: to edit a file in the container, use:

cat <<EOF > /tmp/serverid.ldif
copy
your 
line
EOF
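For illustration only, an edited /tmp/serverid.ldif could look like the sketch below, assuming the release is named openldap, the namespace is openldap-ha and the new replica is openldap-3; copy the actual dn, changetype and olcServerID lines from the generated serverid.ldif instead of typing them from scratch:

cat <<EOF > /tmp/serverid.ldif
dn: cn=config
changetype: modify
add: olcServerID
olcServerID: 4 ldap://openldap-3.openldap-headless.openldap-ha.svc.cluster.local:1389
EOF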

Troubleshoot

You can increase the log level using env.LDAP_LOGLEVEL.

Valid log levels can be found here
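For example, in your values file (the value 256 corresponds to OpenLDAP "stats" logging; treat it as an illustration and pick the level you need from the linked list):

env:
  LDAP_LOGLEVEL: "256"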

Bootstrap custom ldif

Warning: when using custom ldif in the customLdifFiles or customLdifCm section, you have to create the top-level organization object yourself, for example:

dn: dc=test,dc=example
dc: test
o: Example Inc.
objectclass: top
objectclass: dcObject
objectclass: organization

Note: the admin user is created by the application and should not be added as a custom ldif.

Internal configuration such as cn=config and cn=module{0},cn=config cannot be configured yet.
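Putting it together, a customLdifFiles entry carrying the organization object shown above could be sketched like this (the file name 00-root.ldif is arbitrary):

customLdifFiles:
  00-root.ldif: |-
    dn: dc=test,dc=example
    dc: test
    o: Example Inc.
    objectclass: top
    objectclass: dcObject
    objectclass: organization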

Changelog/Updating

To 4.0.0

This major update switches the base image from Osixia to Bitnami OpenLDAP.

  • Upgrading from 3.x to 4.x may not work smoothly
  • The LDAP and LDAPS ports are now non-privileged ports (1389 and 1636)
  • Replication is now set up purely through configuration
  • Extra schemas cannot be added/modified

A default tree (root organisation, users and group) is created during startup. This can be skipped using LDAP_SKIP_DEFAULT_TREE; however, you then need to use customLdifFiles or customLdifCm to create a root organisation (see the sketch below).

  • This will be improved in a future update.
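A rough sketch of skipping the default tree while providing your own root organisation; the value "yes" and the ConfigMap name are assumptions, so check the Bitnami image documentation for the exact flag value:

env:
  LDAP_SKIP_DEFAULT_TREE: "yes"
customLdifCm: my-root-ldif-configmap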

To 3.0.0

This major update of the chart enables new features for the deployment, such as:

  • support for initContainers
  • support for sidecars
  • global parameters to ease the configuration of the app
  • out-of-the-box integration with phpldapadmin and self-service password in a secure way

helm-openldap's People

Contributors

conneryn, cvalentin-dkt, dplookup, eugenmayer, guillomep, heyanlong, j0nm1, jean-philippegouin, jgielstra, jp-gouin, m4dm4rtig4n, mrachuta, olivierjavaux, opencmit2, pschichtel, qianwch, realkiessla, rkul, roman-aleksejuk-telia, syphr42, thriqon, wkloucek, x-xymos


helm-openldap's Issues

existingClaim ?

Hello,

I want to use longhorn for persistent data, but I can't find where I can set existingClaim.
Am I missing something?

serviceAccount configuration not working.

Describe the bug
serviceAccount.name configuration is not working. It is always set to the default service account.
This is my values.yaml.

serviceAccount:
  create: true
  name: "ldap-sa"

To Reproduce
Steps to reproduce the behavior:

  1. Deploy using helm
  2. See StatefulSet information using kubectl describe statefulset openldap

Additional context
The ServiceAccount configuration feature was added in 7b9c55f. But why is this commented out? Is there a special reason?

{{- /*
serviceAccountName: {{ template "openldap.serviceAccountName" . }}
*/ -}}

first installation - again

k8s 1.6.2:

helm install --name ldap helm-openldap/ --tls

Error: YAML parse error on openldap/charts/ltb-passwd/templates/ingress.yaml: error converting YAML to JSON: yaml: line 5: mapping values are not allowed in this context

I only changed storage-class and size in values.

Also setting both phpldapadmin and ltb-passwd to false:

Error: validation failed: unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2" ( It's apps/v1 in k8s 1.6.2)

Having done this, the pods crash (interestingly enough, only two are visible) with:

read_config: no serverID / URL match found. Check slapd -h arguments

Release tags do not align with chart versions

Hello Jean Philippe,

Release tags in this repo are a bit confusing. Usually they represent helm chart versions.
What do you think about updating them to align with Chart versions?

Btw, the page on ArtifactHub is a bit outdated and confusing. I found that the chart version is wrong and the configuration part belongs to an old release.

Please let me know if I can help!
Regards,
Roman

podAnnotations unused

Describe the bug
The podAnnotations setting in values.yaml does not get applied to the OpenLDAP pods.

To Reproduce
Steps to reproduce the behavior:

  1. Install the helm chart setting values for podAnnotations: --set podAnnotations=name=value
  2. Verify the annotations for the openldap-stack-ha-0 pod
  3. name=value does not appear in the list of annotations

Expected behavior
Annotations in the podAnnotations field are expected to be applied to the openldap-stack-ha-* pods, per the documentation.

Screenshots
NA

Desktop (please complete the following information):

  • OS: NA
  • Browser NA
  • Version NA

Smartphone (please complete the following information):

  • Device: NA
  • OS: NA
  • Browser NA
  • Version NA

Additional context
NA

Allow to extend StatefulSet with initContainers and/or sidecars

Hi,

Problem description

I've configured the OpenLDAP overlay Audit Logging (see the OpenLDAP Software 2.4 Administrator's Guide, 12.2. Audit Logging: https://www.openldap.org/doc/admin24/overlays.html) so that now all my audit events are written into a specific file. Now I would like to export the content of this audit log file to a remote destination (in my case my ELK stack).

Moreover, since the main slapd process is launched with a non root user (which is fine) the process has no permission to write into the /var/log folder.

Expected solution

It would be very convenient if I could add some extra containers to the Pod template of the StatefulSet.
For example, Helm Charts provided by Bitnami always have the ability to declare some (extra) sidecars and initContainers next to the default ones.

With such a feature I could declare an initContainer to set the right permissions to write the auditlog file and also a sidecar to run the necessary logic to export its content at a remote destination.

Release your work on probe ?

Hi,

Can you release the helm chart with all the work you did around the probes?
I'm currently facing some problems with the readinessProbe, which is a little bit too low...

Thank you for the work you do.

How to set openldap server name?

On deploy, servername is set as:

openldap-openldap-stack-ha.my-namespace.svc.cluster.local:389

How can I set servername to:

openldap.my-namespace.svc.cluster.local:389

Adding customLdifFiles in TLS mode results in dhparam.pem error

adding custom ldif results in container error:

chmod: cannot access '/container/service/slapd/assets/certs/dhparam.pem': No such file or directory
***  ERROR  | 2021-05-07 00:22:57 | /container/run/startup/slapd failed with status 1

***  INFO   | 2021-05-07 00:22:57 | Killing all processes...

pod spawns correctly without any custom ldif

yaml is

# Custom openldap configuration files used to override default settings
customLdifFiles:
  app.ldif: |-
    dn: cn=app-user,dc=app,dc=dev,dc=example,dc=net
    userPassword: app-user-secret
    description: user
    objectClass: simpleSecurityObject
    objectClass: organizationalRole
    cn: app

Bitnami/openldap image support

Is your feature request related to a problem? Please describe.
When I try to use the latest docker image docker.io/bitnami/openldap for the installation, the pod fails to come up with an error in the log.

Describe the solution you'd like
Some comments in the README or in the values.yml or an example how the installation needs to be configured to be able to use the bitnami image.

Describe alternatives you've considered
Describe why the bitnami image can't be supported.

Additional context
The bitnami image seems to be based on a more recent openldap version than the osixia image.

Trouble getting phpadmin connection to work

Hi,

thanks for this helm chart and the work which went into it!

We've been running a phpadmin & LDAP containers for a while and would like to switch to helm now.

We've read the instructions on how to get phpldapadmin working, but I cannot get the connection between phpldapadmin and ldap to work.
Here's how we do it currently:

set {
    name  = "phpldapadmin.env.PHPLDAPADMIN_LDAP_HOSTS"
    value = "<namespace>.<subdomain>.example.org"
}

set {
    name  = "phpldapadmin.ingress.hosts[0]"
    value = "<subdomain>.example.org"
}

set {
    name  = "adminPassword"
    value = var.OPENLDAP_PASS
}

set {
    name  = "phpldapadmin.ingress.enabled"
    value = true
}

We can reach the service via https://.example.org.
When logging in using the admin user (is that correct?) and the password which is passed as a secret, we see

Unable to connect to LDAP server openldap
--
Error: Can't contact LDAP server (-1) for user

So it seems something is still borked with the connection between the phpldapadmin pod and the ldap pods. Is there a way in k8s to check which value would be the correct one for PHPLDAPADMIN_LDAP_HOSTS? Or is there something else wrong in our setup?

ldap-ltb-passwd CrashLoopBackOff with default values

Describe the bug
ldap-ltb-passwd keeps crashing because of nginx missing file

To reproduce
Steps to reproduce the behavior:

  1. helm install my-release helm-openldap/openldap-stack-ha

Expected behavior

$ kubectl get pod | grep ldap-ltb-passwd
ldap-ltb-passwd-5494c456c-nkhf4      1/1     Running   0          1m

Screenshots

$ kubectl logs deployment/ldap-ltb-passwd
2022-06-24.09:24:20 [STARTING] ** [nginx] [24] Starting nginx 1.23.0
nginx: [emerg] open() "/etc/nginx/nginx.conf.d/php-fpm.conf" failed (2: No such file or directory) in /etc/nginx/sites.available/ssp.conf:11

I added the ldif file for changing the access for certain attributes as a customFileSets under /container/service/slapd/assets/config/bootstrap/ldif/custom, but it did not take effect

Describe the bug
ldif file placed at /container/service/slapd/assets/config/bootstrap/ldif/custom is not getting applied with ldapmodify

To Reproduce
Steps to reproduce the behavior:
I added the following in my values.yaml, according to the documentation, targeting /container/service/slapd/assets/config/bootstrap/ldif/custom:

customFileSets:
  - name: custom
    targetPath: /container/service/slapd/assets/config/bootstrap/ldif/custom
  - filename: access.ldif
    content: |
      dn: olcDatabase={1}mdb,cn=config
      changetype: modify
      replace: olcAccess
      olcAccess: {0}to attrs=jpegPhoto,userPassword,shadowLastChange,pwmResponseSet by self write by dn="cn=admin,dc=skat-classic,dc=dxc,dc=com" write by anonymous auth by * none
      olcAccess: {1}to * by self write by dn="cn=admin,dc=skat-classic,dc=dxc,dc=com" write by dn="cn=readonly,dc=skat-classic,dc=dxc,dc=com" read by * none

but it's not taking effect.
It's not modifying the LDAP configuration with ldapmodify.

Issues bootstrapping with `customLdifFiles`

I am facing moderate pain trying to start an HA instance with customLdifFiles.

It seems this might be related to #31.
There, the issue is that bootstrapping is skipped if one of the following dirs is not empty:

  • /etc/ldap/slapd.d
  • /var/lib/ldap

In my current instance, I see bootstrapping is skipped:

***  INFO   | 2021-10-30 16:17:44 | Start OpenLDAP...
***  INFO   | 2021-10-30 16:17:44 | Waiting for OpenLDAP to start...
***  INFO   | 2021-10-30 16:17:44 | Add TLS config...
***  INFO   | 2021-10-30 16:17:46 | Add replication config...
***  INFO   | 2021-10-30 16:17:50 | Stop OpenLDAP...
***  INFO   | 2021-10-30 16:17:50 | Configure ldap client TLS configuration...
***  INFO   | 2021-10-30 16:17:50 | Remove config files...
***  INFO   | 2021-10-30 16:17:50 | First start is done...
***  INFO   | 2021-10-30 16:17:50 | Remove file /container/environment/99-default/default.startup.yaml
***  INFO   | 2021-10-30 16:17:50 | Environment files will be proccessed in this order :
Caution: previously defined variables will not be overriden.
/container/environment/99-default/default.yaml

In /etc/ldap/slapd.d I see the following files

cn=config/                                    docker-openldap-was-admin-password-set        docker-openldap-was-started-with-tls
cn=config.ldif                                docker-openldap-was-started-with-replication

/var/lib/ldap contains the database, which might be empty during the first start.
I've deployed fresh, i.e. with no PV and no PVC. Still, the bootstrapping is skipped.
This lets me assume that something writes into this dir before https://github.com/osixia/docker-openldap/blob/v1.5.0/image/service/slapd/startup.sh#L182-L183 is reached.

I also checked with logLevel: debug, however there is no debugging line indicating why Bootstrapping might be skipped, so this action is not really helping.

Maybe @ivan-c can share how he made bootstrapping work?

@jp-gouin Are tests still working with respect to this as you mentioned in #31 (comment)?

Pod "openldap-0" is invalid: [spec.volumes[1].secret.secretName: Required value, spec.initContainers[0].volumeMounts[1].name: Not found: "secret-certs"

Describe the bug
Error in stateful set:

create Pod openldap-0 in StatefulSet openldap failed error: Pod "openldap-0" is invalid: [spec.volumes[1].secret.secretName: Required value, spec.initContainers[0].volumeMounts[1].name: Not found: "secret-certs"]

Any suggestions what I'm doing wrong, or if something in the helm Chart is wrong? :) Thank you!

To Reproduce
Steps to reproduce the behavior:

  1. helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
  2. Create values-openldap.yaml
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: "fast-disks"
  ldapDomain: "test.local"
  3. helm install openldap helm-openldap/openldap-stack-ha -f values-openldap.yaml

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: Ubuntu 20.04
  • Kubernetes version:
$  kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:49:13Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:43:11Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}

Additional context
Add any other context about the problem here.

ldapadmin page broken with path: / and pathType: ImplementationSpecific

Hello @jp-gouin, thanks for all your hard work on this chart. I'm new to Helm and k8s.
I'm trying to set up an LDAP cluster and ldapadmin. The problem is that the ldapadmin page does not load right with path: / and the default type ImplementationSpecific, and if I try to change the path type to Prefix, the setting is ignored and ImplementationSpecific is used anyway. If I manually edit the ingress setting to pathType: Prefix, the page loads correctly.

To Reproduce
Steps to reproduce the behavior:
the ldapadmin part of the config file is:

phpldapadmin.txt

The ingress config of the running pod looks like this:

phpldapadmin-ingress.txt

Expected behavior
PHP ldap admin to load at: https://phpldapadmin.{domain}

Screenshots
see attached screenshot:

MicrosoftTeams-image

Desktop (please complete the following information):

  • OS: CentOS linux

Thanks.

first installation - fails again

Hi @jp-gouin , I see the same issue as last time,
the ldap pods just won't start. I posted logs in the last issue, not sure if it's notifying, as it's "closed"

Interestingly enough, I wasn't able to expand the existing setup to 3 pods; the 3rd pod wouldn't start. But I was able to "clone" the 2-pod setup to my new cluster, and it works.

I'm still not able to install it with helm install (using helm3, openldap 1.3.0 image)

maybe you have a minute to look at this and give me some hint.

ERROR | 2022-05-30 16:25:56 | /container/run/startup/slapd failed with status 80

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:

  1. Use a custom domain name
  2. Deploy in EKS using Helm
  3. Use a free DN certificate
  4. openldap-0 0/1 CrashLoopBackOff 6 12m
  5. ERROR | 2022-05-30 16:25:56 | /container/run/startup/slapd failed with status 80

Configuration

openldap-vaules.yaml

global:
  storageClass: "gp2"
  ldapDomain: onwalk.net
  adminPassword: {{ ExtraVars.password }}
  configPassword: {{ ExtraVars.password }}
replicaCount: 1
customTLS:
  enabled: true
  secret: "openldap-tls" # contains the CA pem and key
ltb-passwd:
  ingress:
    enabled: true
    hosts:
    - "ldap-ltb.onwalk.net"
phpldapadmin:
  enabled: true
  ingress:
    enabled: true
    hosts:
    - ldap-admin.onwalk.net
  env:
    PHPLDAPADMIN_LDAP_CLIENT_TLS_REQCERT: "never"

Option to set slapd log level

I want to set the log level of slapd and I can't do it. The underlying image gets the log level from the LDAP_LOG_LEVEL environment variable but there doesn't seem to be a way to pass its value through the Helm chart.

I would like to be able to set the container LDAP_LOG_LEVEL, either through a dedicated Helm chart variable or through a custom envvar file mounted on the container /container/environment/01-custom dir as the image README suggests.

OpenLDAP does not deploy successfully with default values

Describe the bug
When installing the chart with the default values, an error is shown when describing the openldap stateful set which suggests a secret is missing and no openldap pods are created.

To Reproduce
Steps to reproduce the behavior:

  1. helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
  2. helm install openldap helm-openldap/openldap-stack-ha
  3. kubectl describe statefulset openldap
  4. Warning is shown detailing pods cannot be created as a secret required for a volume is not present. Pods are created for phpldapadmin and ltb-passwd but not for openldap.

Expected behavior
Three ready and functional openldap pods resulting in a working deployment of OpenLDAP.

Screenshots
Warning FailedCreate 6s (x6 over 57s) statefulset-controller create Pod openldap-0 in StatefulSet openldap failed error: Pod "openldap-0" is invalid: [spec.volumes[1].secret.secretName: Required value, spec.initContainers[0].volumeMounts[1].name: Not found: "secret-certs"]

OpenLDAP container stops with error "read_config: no serverID / URL match found. Check slapd -h arguments."

Describe the bug
After deploying with the provided Helm chart, the two OpenLDAP pods (openldap-0 and openldap-1) fail with the stated error.
The values.yaml file used:

global:
  imageRegistry: ""
  imagePullSecrets: []
  storageClass: "longhorn"
  ldapDomain: "{{ traefik_domain }}"
  adminPassword: Not@SecurePassw0rd
  configPassword: Not@SecurePassw0rd
clusterDomain: "{{ traefik_domain }}"
image:
  repository: osixia/openldap
  tag: 1.5.0
  pullPolicy: Always
  pullSecrets: []
logLevel: debug
customTLS:
  enabled: false
service:
  annotations: {}
  ldapPort: 389
  sslLdapPort: 636
  externalIPs: []
  type: ClusterIP
  sessionAffinity: None
env:
 LDAP_LOG_LEVEL: "256"
 LDAP_ORGANISATION: "Moerman"
 LDAP_READONLY_USER: "false"
 LDAP_READONLY_USER_USERNAME: "readonly"
 LDAP_READONLY_USER_PASSWORD: "readonly"
 LDAP_RFC2307BIS_SCHEMA: "false"
 LDAP_BACKEND: "mdb"
 LDAP_TLS: "true"
 LDAP_TLS_CRT_FILENAME: "tls.crt"
 LDAP_TLS_KEY_FILENAME: "tls.key"
 LDAP_TLS_DH_PARAM_FILENAME: "dhparam.pem"
 LDAP_TLS_CA_CRT_FILENAME: "ca.crt"
 LDAP_TLS_ENFORCE: "false"
 LDAP_TLS_REQCERT: "never"
 KEEP_EXISTING_CONFIG: "false"
 LDAP_REMOVE_CONFIG_AFTER_SETUP: "true"
 LDAP_SSL_HELPER_PREFIX: "ldap"
 LDAP_TLS_VERIFY_CLIENT: "never"
 LDAP_TLS_PROTOCOL_MIN: "3.0"
 LDAP_TLS_CIPHER_SUITE: "NORMAL"
pdb:
  enabled: false
  minAvailable: 1
  maxUnavailable: ""
customFileSets: []
replication:
  enabled: true
  clusterName: "{{ traefik_domain }}"
  retry: 60
  timeout: 1
  interval: 00:00:00:10
  starttls: "critical"
  tls_reqcert: "never"
persistence:
  enabled: true
  storageClass: "longhorn"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
podSecurityContext:
  enabled: true
  fsGroup: 1001
containerSecurityContext:
  enabled: false
  runAsUser: 1001
  runAsNonRoot: true

serviceAccount:
  create: true
  name: ""
volumePermissions:
  enabled: false
  image:
    registry: docker.io
    repository: bitnami/bitnami-shell
    tag: 10-debian-10
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  command: {}
  resources:
    limits: {}
    requests: {}
  containerSecurityContext:
    runAsUser: 0

To Reproduce
Steps to reproduce the behavior:

  1. Fresh deploy using helm
  2. Check the log of the pod
  3. See error

Error: olcMirrorMode: value #0: <olcMirrorMode> database is not a shadow

Hello,

I am unable to get OpenLDAP running on a 1.21.1 Kubernetes cluster; I get this error:

2021-06-30T10:25:49.549204376+02:00 60dc2a8d @(#) $OpenLDAP: slapd 2.4.57+dfsg-1~bpo10+1 (Jan 30 2021 06:59:51) $
2021-06-30T10:25:49.549230976+02:00 Debian OpenLDAP Maintainers <[email protected]>
2021-06-30T10:25:49.598702423+02:00 60dc2a8d olcMirrorMode: value #0: <olcMirrorMode> database is not a shadow
2021-06-30T10:25:49.598734095+02:00 60dc2a8d config error processing olcDatabase={0}config,cn=config: <olcMirrorMode> database is not a shadow
2021-06-30T10:25:49.598738830+02:00 60dc2a8d slapd stopped.
2021-06-30T10:25:49.598743525+02:00 60dc2a8d connections_destroy: nothing to destroy.

I am using this helm install:

helm install openldap helm-openldap/openldap-stack-ha \
  --namespace openldap \
  --create-namespace \
  --set replicaCount=1 \
  --set replication.enabled=false \
  --set image.tag=1.5.0 \
  --set-string logLevel="trace" \
  --set-string env.LDAP_ORGANISATION="Test LDAP" \
  --set-string env.LDAP_DOMAIN="ldap.internal.xxxxxxx.com" \
  --set-string env.LDAP_BACKEND="mdb" \
  --set-string env.LDAP_TLS="true" \
  --set-string env.LDAP_TLS_ENFORCE="false" \
  --set-string env.LDAP_REMOVE_CONFIG_AFTER_SETUP="true" \
  --set-string env.LDAP_ADMIN_PASSWORD="admin" \
  --set-string env.LDAP_CONFIG_PASSWORD="config" \
  --set-string env.LDAP_READONLY_USER="true" \
  --set-string env.LDAP_READONLY_USER_USERNAME="readonly" \
  --set-string env.LDAP_READONLY_USER_PASSWORD="password"

Any help would be appreciated. Thanks!

Default Chart not Starting

I was working with a customized values.yaml and was running into issues, so I tried plain defaults and received the same error:

***  INFO   | 2021-05-05 19:43:14 | CONTAINER_LOG_LEVEL = 3 (info)
***  INFO   | 2021-05-05 19:43:14 | Search service in CONTAINER_SERVICE_DIR = /container/service :
***  INFO   | 2021-05-05 19:43:14 | link /container/service/:ssl-tools/startup.sh to /container/run/startup/:ssl-tools
***  INFO   | 2021-05-05 19:43:14 | link /container/service/slapd/startup.sh to /container/run/startup/slapd
***  INFO   | 2021-05-05 19:43:14 | link /container/service/slapd/process.sh to /container/run/process/slapd/run
***  INFO   | 2021-05-05 19:43:14 | Environment files will be proccessed in this order :
Caution: previously defined variables will not be overriden.
/container/environment/99-default/default.startup.yaml
/container/environment/99-default/default.yaml

To see how this files are processed and environment variables values,
run this container with '--loglevel debug'
***  INFO   | 2021-05-05 19:43:14 | Running /container/run/startup/:ssl-tools...
***  INFO   | 2021-05-05 19:43:14 | Running /container/run/startup/slapd...
***  INFO   | 2021-05-05 19:43:14 | openldap user and group adjustments
***  INFO   | 2021-05-05 19:43:14 | get current openldap uid/gid info inside container
***  INFO   | 2021-05-05 19:43:14 | -------------------------------------
***  INFO   | 2021-05-05 19:43:14 | openldap GID/UID
***  INFO   | 2021-05-05 19:43:14 | -------------------------------------
***  INFO   | 2021-05-05 19:43:14 | User uid: 911
***  INFO   | 2021-05-05 19:43:14 | User gid: 911
***  INFO   | 2021-05-05 19:43:14 | uid/gid changed: false
***  INFO   | 2021-05-05 19:43:14 | -------------------------------------
***  INFO   | 2021-05-05 19:43:14 | updating file uid/gid ownership
***  INFO   | 2021-05-05 19:43:14 | No certificate file and certificate key provided, generate:
***  INFO   | 2021-05-05 19:43:14 | /container/run/service/slapd/assets/certs/tls.crt and /container/run/service/slapd/assets/certs/tls.key
2021/05/05 19:43:14 [INFO] generate received request
2021/05/05 19:43:14 [INFO] received CSR
2021/05/05 19:43:14 [INFO] generating key: ecdsa-384
2021/05/05 19:43:14 [INFO] encoded CSR
2021/05/05 19:43:14 [INFO] signed certificate with serial number 116630929021868969892101848881681016104120383985
mv: cannot move '/tmp/cert.pem' to '/container/run/service/slapd/assets/certs/tls.crt': No such file or directory
mv: cannot move '/tmp/cert-key.pem' to '/container/run/service/slapd/assets/certs/tls.key': No such file or directory
***  INFO   | 2021-05-05 19:43:14 | Link /container/service/:ssl-tools/assets/default-ca/default-ca.pem to /container/run/service/slapd/assets/certs/ca.crt
ln: failed to create symbolic link '/container/run/service/slapd/assets/certs/ca.crt': No such file or directory
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time

**** CERT GENERATES ***

*** WARNING | 2021-05-05 19:45:02 | An error occurred. Aborting.
***  INFO   | 2021-05-05 19:45:02 | Shutting down /container/run/startup/slapd (PID 11)...
*** WARNING | 2021-05-05 19:45:02 | Init system aborted.
***  INFO   | 2021-05-05 19:45:02 | Killing all processes...

Can be recreated by

helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
helm install openldap helm-openldap/openldap-stack-ha -n my-namespace

Feat: add cicd in github action

Is your feature request related to a problem? Please describe.
The actual Ci/CD is ran outside Github.
Use Github action to install the chart , perform chaos test and ldap actions
Use selenium to test phpldapadmin and self service password integration with ldap

  • setup KIND
  • Setup Chaos mesh
  • Install the chart
  • Perform chaos test (rewrite Argo wf in GitHub action or install Argo)
  • Perform tests on phpldapadmin and self service password

Trigger on PR

Improve replication configuration

Templatisation of the replication configuration

Edit the config map with this info:

LDAP_REPLICATION_CONFIG_SYNCPROV: "binddn=\"cn=admin,cn=config\" bindmethod=simple credentials=$LDAP_CONFIG_PASSWORD searchbase=\"cn=config\" type=refreshAndPersist retry=\"60 +\" timeout=1 "
  LDAP_REPLICATION_DB_SYNCPROV: "binddn=\"cn=admin,$LDAP_BASE_DN\" bindmethod=simple credentials=$LDAP_ADMIN_PASSWORD searchbase=\"$LDAP_BASE_DN\" type=refreshAndPersist interval=00:00:00:10 retry=\"60 +\" timeout=1 "

Add the interval and timeout to the values.yaml.

Bad default LDAP_PORT

Describe the bug
The default values.yaml doesn't set the correct LDAP port.

To Reproduce
Steps to reproduce the behavior:

  1. helm install my-release helm-openldap/openldap-stack-ha

I disabled ldap-ltb-passwd because of #65

Expected behavior

$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
ldap-phpldapadmin-5cbd44ffc5-xd7xd   1/1     Running   0          1m
ldap-0                               1/1     Running   0          1m
ldap-1                               1/1     Running   0          1m
ldap-2                               1/1     Running   0          1m

Screenshots

$ kubectl get pods
NAME                                 READY   STATUS             RESTARTS      AGE
ldap-phpldapadmin-5cbd44ffc5-c55t6   1/1     Running            0             2m20s
ldap-0                               0/1     CrashLoopBackOff   4 (51s ago)   2m20s
$ kubectl logs ldap-0 | grep LDAP_PORT
...
***  DEBUG  | 2022-06-24 09:44:17 | LDAP_PORT = tcp://10.43.219.179:389
...

Possible Solutions
Add to values.yaml:

env:
  ...
  LDAP_PORT: "389"
  LDAPS_PORT: "686" # I am not sure this one is required.

Ltb-passwd secret usage

Make Ltb-passwd use the secret created with openldap.
Edit the deployment.yaml of ltb-passwd to parameterize BINDDN and BINDPW.

Chore: Improve validation pipeline

  • Write entry on Openldap and check that the data is replicated
  • Make Chaos mesh work on GKE (related to chaos-mesh/chaos-mesh#937)
  • Test reliability of the deployment
  • Test PHPLdapAdmin connection with the LDAP server
  • Create a group
  • Create a user
  • Change user password on ltb-passwd

[Feature/Docs] Create helm repo and fix readme

Thanks for your work on this, basing it on a proven, solid docker container.
However: it would be nice if you could add a git workflow to automatically create a helm repo and push new versions there.

Also: please update the readme, because it still points to the old setup instructions for upstream.

Backup and Restore

I was able to get the OpenLDAP backup to work using the slapd service, but the restore part didn't seem to work. Do we have any workaround for backup and restore? It is a very important feature.

Found an issue, #42, related to this, which seems to have been marked won't fix and closed.

Password with special characters is not supported

To Reproduce
Steps to reproduce the behavior:

  1. wget https://github.com/jp-gouin/helm-openldap/raw/master/values.yaml
  2. edit adminPassword (I believe it is the '/' that is not supported, because of sed)
  3. helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
  4. helm install my-release helm-openldap/openldap-stack-ha

Expected behavior
Everything works

Screenshots

$ kubectl get pods
NAME                                 READY   STATUS             RESTARTS      AGE
ldap-phpldapadmin-5cbd44ffc5-c55t6   1/1     Running            0             2m20s
ldap-0                               0/1     CrashLoopBackOff   4 (51s ago)   2m20s

Additional context
There is this error in the logs:

$ kubectl logs ldap-0
sed: -e expression #1, char 30: unknown option to `s'

ltb-passwd defaults to ldaps; there should be an option to select ldap or ldaps

Is your feature request related to a problem? Please describe.
I am exposing LDAP within the cluster and don't want SSL. The ltb-passwd chart defaults to using ldaps.

Describe the solution you'd like
There should be an option to select the ldap or ldaps port.

Describe alternatives you've considered
Creating a separate chart for ltb-passwd

Additional context
Add any other context or screenshots about the feature request here.

Install fails on latest chart

Describe the bug
Installation fails on K8s v1.19.7

To Reproduce
Steps to reproduce the behavior:

helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
helm --namespace iam upgrade --install openldap helm-openldap/openldap-stack-ha -f values-openldap.yaml

Error message
kubectl describe statefulset ...

Events:
  Type     Reason            Age                From                    Message
  ----     ------            ----               ----                    -------
  Normal   SuccessfulCreate  45s                statefulset-controller  create Claim data-openldap-openldap-stack-ha-0 Pod openldap-openldap-stack-ha-0 in StatefulSet openldap-openldap-stack-ha success
  Warning  FailedCreate      4s (x14 over 45s)  statefulset-controller  create Pod openldap-openldap-stack-ha-0 in StatefulSet openldap-openldap-stack-ha failed error: Pod "openldap-openldap-stack-ha-0" is invalid: [spec.volumes[1].secret.secretName: Required value, spec.initContainers[0].volumeMounts[1].name: Not found: "secret-certs"]

Additional context
values-openldap.yaml:

env:
 LDAP_ORGANISATION: "redacted"
 LDAP_DOMAIN: "redacted"
 LDAP_READONLY_USER_PASSWORD: "redacted"

service:
  annotations: {} #TODO: set dns record in internal dns
  type: LoadBalancer

persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 8Gi  

adminPassword: redacted
configPassword: redacted

ltb-passwd:
  enabled : false

phpldapadmin:
  enabled: false

Affinity missing

Hi,

is there a reason why affinity is missing/is not configurable?

BR

Chore: Update note.txt

Update the note.txt with

  • Phpldapadmin info
  • Self service password info

rename the chart name in the note

Trying to inject an olcAccess rule

Hello,

Is it possible to add an olcAccess rule via the customLdifFiles section?

Here's my customLdifFiles parameter of the Helm values configuration (simplified):

customLdifFiles:
  02-t.example.com.ldif: |-
    version: 1

    dn: dc=t,dc=example,dc=com
    associateddomain: t.example.com
    dc: t
    objectclass: dNSDomain
    objectclass: domainRelatedObject
    objectclass: top

  03-infra.t.example.com.ldif: |-
    version: 1

    dn: dc=infra,dc=t,dc=example,dc=com
    associateddomain: infra.t.example.com
    dc: infra
    objectclass: dNSDomain
    objectclass: domainRelatedObject
    objectclass: top

  99-access_rules.ldif: |-
    version: 1

    dn: olcdatabase={1}mdb,cn=config
    changetype: modify
    add: olcaccess
    olcaccess: to dn.subtree="dc=infra,dc=t,dc=example,dc=com" by dn.exact="uid=admin,dc=infra,dc=t,dc=example,dc=com" manage by dn.exact="uid=odmin,dc=infra,dc=t,dc=example,dc=com" read

As I can see, 02-t.example.com.ldif and 03-infra.t.example.com.ldif get applied without any difficulties, but 99-access_rules.ldif doesn't.

When the container starts, I exec bash (kubectl exec ...) and I see this file in the /container/service/slapd/assets/config/bootstrap/ldif/custom directory. Moreover, I can apply it manually (ldapadd -H ldapi:/// -Y EXTERNAL < /container/service/slapd/assets/config/bootstrap/ldif/custom/99-access_rules.ldif).

Why isn't it applied during the initialization process?

Thanks in advance.

Restore a backup

I just wanted to ask if there is an option to restore an ldif backup from an existing instance.

You can close it if there is already a documentation :)

Add session affinity

Is your feature request related to a problem? Please describe.
When using this helm chart to set up a multi-master replication LDAP setup, data is synced at intervals according to replication.interval. When using this setup in combination with other tooling (e.g. Keycloak), data that is written is expected to be instantly available, which cannot be guaranteed since the write and read queries don't have to go to the same pod.

Describe the solution you'd like
According to this post it seems possible to configure Session Affinity.

Describe alternatives you've considered
Running LDAP with a single replica, defeating the purpose of the replication functionality.

Not able to port-forward port 389

Describe the bug
Not able to port-forward port 389

To Reproduce
Steps to reproduce the behavior:

kubectl port-forward openldap-phpldapadmin-cfcc57847-9d7bm 3890:389 -n tools

Error:

Forwarding from 127.0.0.1:3890 -> 389
Forwarding from [::1]:3890 -> 389
Handling connection for 3890
E0316 18:59:44.217607   65888 portforward.go:400] an error occurred forwarding 3890 -> 389: error forwarding port 389 to pod 180293b77f6fd52a2a7ba76457212268e23f2a612f5670073fda231d373c978a, uid : failed to execute portforward in network namespace "/var/run/netns/cni-e814d0a1-605e-a6c4-9158-9cf87936d975": failed to dial 389: dial tcp4 127.0.0.1:389: connect: connection refused

Expected behavior
Able to port-forward port 389 to be able to connect from other external machines

Context

kind v0.10.0 go1.15.7 darwin/amd64
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

Bootstrap Skipped; ldap files already exist

Hi,
I'm trying to configure customLdifFiles and noticed that the bootstrap section, where they're normally loaded during container startup, was skipped because of the presence of the files below.

Further digging showed the below files were added shortly after container startup, but before startup.sh was executed.

I think these files may be generated by slapadd or slapd but I can't figure out when they would be invoked during container startup

Where are these files coming from and how can I prevent them from being created and causing my customLdifFiles to be ignored?

Thank you!

/var/lib/ldap/DUMMY
/etc/ldap/slapd.d/cn=config/olcDatabase={0}config.ldif
/etc/ldap/slapd.d/cn=config/cn=module{0}.ldif
/etc/ldap/slapd.d/cn=config/cn=schema.ldif
/etc/ldap/slapd.d/cn=config/olcDatabase={-1}frontend.ldif
/etc/ldap/slapd.d/cn=config/cn=schema/cn={3}inetorgperson.ldif
/etc/ldap/slapd.d/cn=config/cn=schema/cn={2}nis.ldif
/etc/ldap/slapd.d/cn=config/cn=schema/cn={1}cosine.ldif
/etc/ldap/slapd.d/cn=config/cn=schema/cn={0}core.ldif
/etc/ldap/slapd.d/cn=config.ldif

values-openldap.yml.txt

TLS and CA certs secrets question

If I am reading the chart correctly, the StatefulSet appears to set the certificate directory described in the docker image to the data volume here.

However, I am having difficulty finding where the TLS and CA secrets described in the values.yml get copied into the data volume.

Where do the TLS and CA secrets get copied into the data volume?

Bump app version to 2.5.0

Currently the app version of the chart is 2.4.57, but the image version is already 2.5.0. We should bump it to reflect that.

When bootstrapping with `customLdifFiles`, the file may not be removed afterwards

When bootstrapping a deployment with an existing Ldif file via customLdifFiles, the entry may not be removed afterwards as otherwise the deployments will fail.

They search for a symlink to this file but can't find it if the value gets removed from values.yml.

One needs to keep at least an empty content:

customLdifFiles:
  01-default-users.ldif: |-

Bootstrapping happens only during the first deployment in a fresh PV, so removing the content afterwards does not do any harm.

Not sure if this qualifies as "bug" but I wanted to have mentioned it here at least :)
