
helm-charts's Introduction

Jenkins Helm Charts

Join the chat at https://app.gitter.im/#/room/#jenkins-ci:matrix.org

Usage

Helm must be installed to use the charts. Please refer to Helm's documentation to get started.

Once Helm is set up properly, add the repository as follows:

helm repo add jenkins https://charts.jenkins.io
helm repo update

You can then run helm search repo jenkins to see the charts or obtain an exhaustive list of releases from GitHub releases.

Chart documentation is available in the jenkins directory.

Building weekly releases

The default charts target Long-Term-Support (LTS) releases of Jenkins. To use other versions, the easiest way is to update the image tag to the version you want. You can also rebuild the chart if you want the appVersion field to match.

Contributing

We'd love to have you contribute! Please refer to our contribution guidelines for details.

License

Apache 2.0 License.

helm-charts's People

Contributors

11000100111000, bergemalm, bmaximuml, dependabot[bot], dominykas, electroma, fatmcgav, flah00, garethjevans, hazzik, holmesb, jenkins-dependency-updater[bot], jlegrone, jordanjennings, kimxogus, kvanzuijlen, lachie83, lemeurherve, maorfr, maxnitze, notmyfault, oofnikj, rmkanda, scottrigby, siwyd, startnow65, timja, torstenwalter, vivian-src, wmcdona89


helm-charts's Issues

Make PrometheusRule Namespace configurable

Is your feature request related to a problem? Please describe.
By default, and without customizations, Prometheus only looks for PrometheusRules and ServiceMonitors in its own Namespace. Currently, the ServiceMonitor Namespace is configurable but the PrometheusRule is not.

Describe the solution you'd like
Add support for configuring the PrometheusRule namespace via .Values.master.prometheus.prometheusRuleNamespace.

Example:

{{- if .Values.master.prometheus.prometheusRuleNamespace }}
  namespace: {{ .Values.master.prometheus.prometheusRuleNamespace }}
{{- else }}
  namespace: {{ template "jenkins.namespace" . }}
{{- end }}

Also, add this value to values.yaml and document it.
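A values.yaml sketch of the proposal (the prometheusRuleNamespace key is the suggested addition, not an existing option; "monitoring" is just an example namespace):

```yaml
master:
  prometheus:
    enabled: true
    # Proposed new key; "monitoring" is an example namespace
    prometheusRuleNamespace: monitoring
```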

Describe alternatives you've considered
Modifying Prometheus to scan other Namespaces

Additional context
Copied from #23306

UPGRADE FAILED: unable to recognize "": no matches for kind "ServiceMonitor"

Original issue from the old stable repo is here. master.prometheus.enabled installs custom Prometheus Operator resources. If Prometheus was installed using the non-operator Helm chart, the following error occurs:

Error: UPGRADE FAILED: unable to recognize "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

We need a way to simply create a Prometheus metrics endpoint. master.prometheus.enabled is too opinionated; it should have been a subkey master.prometheus.operator.enabled.

If master.prometheus.enabled = true, but master.prometheus.operator.enabled = false, just the exporter is added.
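Under the proposed split, a values.yaml might look like this (the operator subkey is the suggested structure from this issue, not an existing chart option):

```yaml
master:
  prometheus:
    enabled: true        # proposed meaning: just expose the metrics endpoint
    operator:
      enabled: false     # proposed subkey: skip ServiceMonitor/PrometheusRule resources
```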

Recommend installing plugins in custom image?

Personally, I never recommend installing plugins at runtime, as an update center outage could prevent your image from starting.

Restarting a pod should always be safe.

Thoughts?

Allow using securityContext fsGroup without using runAsUser

Is your feature request related to a problem? Please describe.

The jenkins-master-deployment.yaml file only allows setting the fsGroup value if the runAsUser value is set to a non-root user (i.e., not user 0).

Describe the solution you'd like

Allow the fsGroup value to be set even when runAsUser is set to 0.


Additional context

I have a use case that requires me to set the fsGroup value, while still setting runAsUser to 0. The current template does not allow for this case.
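For illustration, a sketch of the desired combination, using the chart's master.runAsUser and master.fsGroup values (the commented block shows the pod securityContext this should render to):

```yaml
# Desired values: run as root but still apply fsGroup
master:
  runAsUser: 0
  fsGroup: 1000

# Pod securityContext this should render to:
# securityContext:
#   runAsUser: 0
#   fsGroup: 1000
```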

Remove all XML based config

Use JCasC config instead

Currently the chart has two configuration options:

via XML configuration files, which basically renders XML documents
via Configuration as Code plugins
The second one is preferred as it allows updating the configuration. With XML that's impossible as users could have changed settings via the Jenkins UI.

As the chart supports JCasC configuration, I suggest getting rid of all XML configuration options. This hopefully also makes life easier for users of the chart: they no longer have to worry about whether a configuration is done via XML or JCasC, or be surprised when a setting is not applied.
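For reference, a minimal JCasC-only configuration with this chart looks roughly like this (key layout varies between chart versions; this sketch assumes a version with master.JCasC.configScripts, and the system message is just an example):

```yaml
master:
  JCasC:
    enabled: true
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: This Jenkins is configured and managed as code.
```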

Migrate to plugin-installation-manager-tool for plugin installation

Detect if the plugin-installation-manager-tool CLI is installed and use that instead of install-plugins.sh.

https://github.com/jenkinsci/plugin-installation-manager-tool

It's installed and available in recent versions of the Jenkins docker images since jenkinsci/docker#971 was merged

install-plugins.sh is a hacky shell script that isn't maintained, due to the risks of changing it, and has only limited testing around it.

plugin installation manager tool is the replacement and has much more testing around it.

Invite/confirm listed chart maintainers

  • Invite existing maintainers to be collaborators on this repo
  • Contact the listed maintainers in Chart.yaml to see if they still wish to be involved
  • Remove old maintainers who either have declined or not confirmed

Jenkins Helm chart does not support using an image with a digest

The jenkins-master-deployment.yaml file only allows images in the format <image>:<imageTag>. This prevents us from referencing an image by its SHA digest. Please add support for using a digest with the image, such as jenkins/jenkins@sha256:123456.
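A sketch of how digest support could look in values.yaml (imageDigest is a hypothetical key name introduced here for illustration; the digest value is a placeholder):

```yaml
master:
  image: jenkins/jenkins
  tag: ""                     # would be ignored when a digest is given
  # Hypothetical new key; would render image: jenkins/jenkins@sha256:<digest>
  imageDigest: sha256:123456  # placeholder value from the example above
```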

JCasC 'welcome-message' configuration example does not work

Hi!

I am struggling to make an example of custom JCasC configuration work.

My values.yaml

master:
  serviceType: ClusterIP
  servicePort: 8081
  installPlugins:
    - kubernetes:1.27.0
    - workflow-aggregator:2.6
    - workflow-job:2.39
    - git:4.4.1
    - configuration-as-code:1.41
  JCasC:
    securityRealm: |-
      local:
        allowsSignup: false
    welcome-message: |
      jenkins:
        systemMessage: Welcome to our CI\CD server.  This Jenkins is configured and managed 'as code'.
  # https://github.com/helm/charts/issues/15453
  customInitContainers:
    - name: "volume-mount-permission"
      image: "busybox"
      command: ["/bin/chown", "-R", "1000", "/var/jenkins_home"]
      volumeMounts:
        - name: "jenkins-home"
          mountPath: "/var/jenkins_home"
      securityContext:
        runAsUser: 0
  jenkinsUriPrefix: "/jenkins"
persistence:
  storageClass: jenkins
  size: "4Gi"

securityRealm setting works fine, but systemMessage is not set.

I also tried to add

    jenkins-url: |
      unclassified:
        location:
          url: https://example.com/jenkins
          adminAddress: [email protected]

into the JCasC section, but without any success.

kubectl version

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

helm version

version.BuildInfo{Version:"v3.3.0", GitCommit:"8a4aeec08d67a7b84472007529e8097ec3742105", GitTreeState:"dirty", GoVersion:"go1.14.7"}

(doc) master.sidecars.autoConfigReload.enabled -> master.sidecars.configAutoReload.enabled

Describe the bug

In https://hub.helm.sh/charts/jenkinsci/jenkins, chapter "Config as Code With or Without Auto-Reload", it states master.sidecars.autoConfigReload.enabled, which should be master.sidecars.configAutoReload.enabled.

The code example is correct

Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.13", GitCommit:"39a145ca3413079bcb9c80846488786fed5fe1cb", GitTreeState:"clean", BuildDate:"2020-07-15T16:10:14Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:

jenkins-2.6.4

What happened:

It can lead to misunderstanding.

What you expected to happen:

master.sidecars.configAutoReload.enabled

How to reproduce it (as minimally and precisely as possible):

https://hub.helm.sh/charts/jenkinsci/jenkins


podTemplates are not loaded

Lately it appears that my podTemplate configuration (under the agent section) is not loaded and only the default template is created.

Version of Helm and Kubernetes:

$ helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T23:41:24Z", GoVersion:"go1.14", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.9-eks-4c6976", GitCommit:"4c6976793196d70bc5cd29d56ce5440c9473648e", GitTreeState:"clean", BuildDate:"2020-07-17T18:46:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart: Latest

What happened: podTemplate configuration isn't loaded

The values (only the agent part):

agent:
  enabled: true
  image: jenkins/jnlp-slave
  tag: 3.27-1
  customJenkinsLabels: []
  imagePullSecretName:
  componentName: jenkins-slave
  privileged: false
  resources:
    requests:
      cpu: 512m
      memory: 512Mi
    limits:
      cpu: 512m
      memory: 512Mi
  alwaysPullImage: false
  podRetention: Never
  volumes: []
  envVars: []
  nodeSelector: {}
  command:
  args: ${computer.jnlpmac} ${computer.name}
  sideContainerName: jnlp
  TTYEnabled: false
  containerCap: 10
  podName: default
  idleMinutes: 0
  slaveConnectTimeout: 100
  podTemplates:
    python: |
      - name: python-agent
        label: python
        serviceAccount: build-jenkins-agent
        containers:
          - name: python
            image: 184642232056.dkr.ecr.us-east-1.amazonaws.com/devops/jenkins/slave/python37:latest
            command: "sleep"
            args: "infinity"
    node: |
      - name: node-agent
        label: node
        serviceAccount: access-to-automation-account
        containers:
          - name: node
            image: node:10.16.3
            command: "sleep"
            args: "infinity"

test ci pipeline

  • chart should be linted
  • DCO version should be checked

=> enforce both checks for PRs

organize values.yaml to enable removing VALUES_SUMMARY.md

v3.0.0 release

How does a bigger v3.0.0 release help us

While working on #10 I had some ideas regarding v3.0.0 release of the helm chart.

Basically, removing all XML configuration options is a breaking change and requires a major version increment.
As I'd love to avoid doing too many breaking-change updates in a row, I thought it might be useful to combine several other changes with this one.

  • removal of all XML configuration options
  • upgrading to Helm version 3
    Helm 3 has been out for a while now and support for version 2 will formally end Nov 13, 2020. See https://helm.sh/blog/2019-10-22-helm-2150-released/#helm-2-support-plan for details.
    I think it also helps us worry less about things like whether a feature is supported in a specific version of Helm 2 (as pointed out in #35), as we can then safely assume that people are using Helm version 3.
  • remove values which are deprecated already
  • other items which you think should be done as part of a v3 release.

My hope is that this will help us keep the chart maintainable going forward, make the chart easier for users (fewer config flags to worry about), and easier for contributors (a less complex chart).

How does it align with other plans

Documentation

In my opinion we should have more documentation, with examples of how settings can be used and which features this chart offers. I started to label some issues with documentation. These are items which would be worth documenting so that other people don't have to ask again.

As I see it, this could be done before or after a v3.0.0 release. Having more documentation always helps, so it's totally fine to create PRs for it before a v3.0.0 release, but I think a lack thereof should also not block us from creating a release as outlined above. A less complex chart also makes documentation easier.

Replace deprecated terminology

@timja raised issue #11 to remove deprecated terminology and offensive terms. For me this absolutely makes sense. Most important are these changes:

  • master => controller
  • slave => agent

We are using them in values.yaml and reference them in the templates. Some values are also used as selectors, so this is for sure a breaking change. I would not start doing it before removing the XML configuration options, as that reduces the number of occurrences we have to replace.

We could do that change together with v3.0.0 or as a major release afterwards. Both would be OK for me, but I think that splitting it up makes it easier for users to migrate, and we could have that release sooner.
Hopefully we are able to make the terminology change shortly afterwards, so that users just have to replace the master key in values.yaml with controller and adjust a small set of other values.

How to put this into practice

@jenkinsci/helm-charts-developers I pushed my branch which removes the XML configuration option as https://github.com/jenkinsci/helm-charts/tree/jenkins-3-0-0 to this repository. I also made it protected and enforced PR reviews there. That way more people can contribute to that one and we can discuss PRs.

As a starting point I created a PR to migrate to helm 3 (#38)

@wmcdona89 If this procedure is fine with you I would close #34 in favor of #40.

@jenkinsci/helm-charts-developers I appreciate your feedback.

Expose slave listener port publicly to enable external worker nodes

It is really nice to have dynamic worker nodes managed by k8s! This makes worker management much easier!

We are also trying to add some VM agent nodes as persistent agent nodes to run some Selenium tests.

However, the master's 'slaveListenerPort' is not exposed publicly. It cannot be reached from the internet.

This is blocking us from bringing in our own VM nodes.

Is it possible to expose the JNLP TCP port?
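One way this could be expressed as values, sketched below (slaveListenerServiceType is an assumed option modeled on the chart's other service-type values and may not exist in your chart version):

```yaml
master:
  slaveListenerPort: 50000
  # Assumed option: expose the JNLP port through a cloud load balancer
  # instead of keeping it cluster-internal
  slaveListenerServiceType: LoadBalancer
```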

Error logs from connecting agent

Sep 26, 2020 5:58:20 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using C:\Users\liftrvmuser\Desktop\test-folder\work\remoting as a remoting work directory
Sep 26, 2020 5:58:20 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to C:\Users\liftrvmuser\Desktop\test-folder\work\remoting
Sep 26, 2020 5:58:21 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: win-selenuim-worker-001
Sep 26, 2020 5:58:21 AM hudson.remoting.jnlp.Main$CuiListener
INFO: Jenkins agent is running in headless mode.
Sep 26, 2020 5:58:21 AM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.5
Sep 26, 2020 5:58:21 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using C:\Users\liftrvmuser\Desktop\test-folder\work\remoting as a remoting work directory
Sep 26, 2020 5:58:21 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [https://jenkins.cicd.azliftr-test.io/]
Sep 26, 2020 5:58:21 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve
INFO: Remoting server accepts the following protocols: [JNLP4-connect, Ping]
Sep 26, 2020 5:58:26 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver isPortVisible
WARNING: connect timed out
Sep 26, 2020 5:58:26 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: https://jenkins.cicd.azliftr-test.io/ provided port:50000 is not reachable
java.io.IOException: https://jenkins.cicd.azliftr-test.io/ provided port:50000 is not reachable
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:314)
at hudson.remoting.Engine.innerRun(Engine.java:694)
at hudson.remoting.Engine.run(Engine.java:519)

Replace deprecated terminology

Slave has been deprecated for 4 years
'slave' -> 'agent'

Master recently became a deprecated term and was replaced with controller
'master' -> 'controller'

This can definitely be delivered incrementally.

setup CODEOWNERS and teams per chart

I suggest creating a CODEOWNERS file and requiring that every PR is reviewed by CODEOWNERS.

At the moment it does not make a big difference if we have CODEOWNERS in place or not as there is just a single helm chart in this repository and every PR requires a review anyhow.
In the future however it could make a difference as it would be easier to host other charts e.g. for jenkins-operator also in this repository as we could configure different CODEOWNERS for different charts.

An immediate benefit would be that we could use this repository setting:
(screenshot omitted)

failed to download "jenkinsci/jenkins"

I ran the below commands before I installed the chart.

helm repo add jenkinsci https://charts.jenkins.io
helm repo update

But, still getting failed to download "jenkinsci/jenkins" error.

helm search repo jenkinsci
No results found

Logs:

"jenkinsci" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "jenkinsci" chart repository
Update Complete.

How to add host alias for agent?

I have a minikube cluster and would like to deploy both Jenkins and GitLab. Both web services are of ClusterIP type. However, I'm facing a host-resolution issue when cloning the repo from GitLab. I tried setting hostAliases for the master, but still no luck. Could it be that hostAliases must also be set on the agent?

Add secondary ingress to allow webhooks to be public

Is your feature request related to a problem? Please describe.
We have various Jenkins installations that contain sensitive data and thus are hidden behind firewalls, but we would still like GitHub notifications.

Describe the solution you'd like
Add a secondary ingress that just exposes the webhook URLs (github-webhook/bitbucket-webhook/etc).

Describe alternatives you've considered
We could use a sidecar container that starts up smee, but that's an extra dependency.
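A sketch of what such a values block could look like (every key under secondaryIngress is hypothetical; the host is an example, and the paths follow the webhook endpoints named above):

```yaml
master:
  # Hypothetical structure; the chart does not have these keys yet
  secondaryIngress:
    enabled: true
    hostName: jenkins-webhooks.example.com   # example public host
    paths:
      - /github-webhook
      - /bitbucket-webhook
```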

[improvement] - Add option to add scriptapproval hashes value on helm chart

Migrating from helm/charts#23265 as requested.

Hello, this is a request for an improvement.

We are currently running Jenkins with no persistent storage, with everything configured via JCasC.
We have a few scripts that need to be approved at startup so they can be used right away. The thing is, even though the scripts are declared in the scriptApproval Helm value, after every deployment the scripts ask to be approved again.

Doing some tests, I've found that the approvedSignatures XML tag is not as important as approvedScriptHashes.
After every manual approval I've noticed that the hash is always the same (if the script doesn't change). Implementing this feature would bring more benefits than having just approvedSignatures in place.

The file where it is set:
https://github.com/helm/charts/blob/master/stable/jenkins/templates/config.yaml#L129

Having something like the following would help a lot:

(screenshot omitted)

This way every script approval could be managed directly under the Values file.
Thank you for the help

[BUG] Incorrect indentation, blocks normal usage of javaOps

Describe the bug
Here https://github.com/helm/charts/blob/master/stable/jenkins/templates/jenkins-master-deployment.yaml#L187
there is a space inside the "if" block which introduces extra indentation.

If the values contain:

...
  javaOpts: >
    -server -XX:+AlwaysPreTouch    
    -Xloggc:$JENKINS_HOME/gc-%t.log   
    -XX:NumberOfGCLogFiles=5
...

then it will create something like

...
            - name: JAVA_OPTS
              value: >
                -server -XX:+AlwaysPreTouch    
                -Xloggc:$JENKINS_HOME/gc-%t.log   
                -XX:NumberOfGCLogFiles=5                
                 -Dcasc.reload.token=$(POD_NAME)

which can't be parsed

possible workaround:

...
  # THE LEADING SPACE IS IMPORTANT!
  javaOpts: " -server -XX:+AlwaysPreTouch -Xloggc:$JENKINS_HOME/gc-%t.log -XX:NumberOfGCLogFiles=5"
...

Which makes it ugly and unmanageable with 15+ java opts

Which chart:
stable/jenkins starting from 1.8.0

How to reproduce it (as minimally and precisely as possible):
provide any java opts


Oops! A problem occurred while processing the request.

Describe the bug
More than once, restarting Jenkins after installing a new plugin has triggered this error; it seems to be random:
(screenshot taken 2020-09-10 23:05 omitted)

Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.3.1", GitCommit:"249e5215cde0c3fa72e27eb7a30e8d55c9696144", GitTreeState:"clean", GoVersion:"go1.14.7"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart: jenkinsci/jenkins 2.6.4

What happened: Oops! A problem occurred while processing the request.

Anything else we need to know:

$ kubectl -n jenkins-helm describe pod jenkins-95c5d7b85-jvk8s
...
Events:
  Type     Reason     Age                 From                      Message
  ----     ------     ----                ----                      -------
  Warning  Unhealthy  24m (x3 over 14h)   kubelet, k8s-worker-2252  Readiness probe failed: Get "http://10.244.0.42:8080/login": dial tcp 10.244.0.42:8080: connect: connection refused
  Warning  Unhealthy  24m (x17 over 14h)  kubelet, k8s-worker-2252  Readiness probe failed: HTTP probe failed with statuscode: 503
  Warning  Unhealthy  24m (x16 over 14h)  kubelet, k8s-worker-2252  Liveness probe failed: HTTP probe failed with statuscode: 503

Allow IRSA for jenkins backup cronjob

Is your feature request related to a problem? Please describe.
There is no IRSA support for the jenkins backup cronjob service account.

Describe the solution you'd like
I would like to allow annotations to the service account to specify the IAM Role and specify the security context of the cronjob pod to allow access to that token.
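The kind of values this would enable, sketched below (the annotation key is the standard EKS IRSA annotation; the role ARN is a placeholder, and the backup.serviceAccount/securityContext layout is assumed, not confirmed against the chart):

```yaml
backup:
  enabled: true
  serviceAccount:
    annotations:
      # Standard EKS IRSA annotation; the ARN below is a placeholder
      eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/jenkins-backup
  # Assumed: fsGroup so the pod can read the projected web identity token
  securityContext:
    fsGroup: 65534
```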

Describe alternatives you've considered
Currently I am patching the service account for jenkins backup and cronjob spec to support IRSA.

Additional context
I would like to submit a PR for this feature.

agent merge logic does not properly handle booleans

The merge function does not properly handle booleans, as true overwrites false; see Helm issue 7313.

merge

Merge two or more dictionaries into one, giving precedence to the dest dictionary:

the chart uses merge to merge agent into additionalAgents to ensure they at least have the default values

the following boolean values are impacted:
agent.privileged
agent.alwaysPullImage
agent.TTYEnabled

test template

{{- range $name, $additionalAgent := .Values.additionalAgents }}
  {{- $additionalAgent := merge $additionalAgent $.Values.agent }}
  {{- $name }}:
  {{- toYaml $additionalAgent | nindent 2 }}
{{- end }}

given:

values.yaml

agent:
  alwaysPullImage: true
  privileged: true
  TTYEnabled: true
additionalAgents:
  maven:
    alwaysPullImage: false
    privileged: false
    TTYEnabled: false

expected result:

maven:
  alwaysPullImage: false
  privileged: false
  TTYEnabled: false

actual result:

maven:
  alwaysPullImage: true
  privileged: true
  TTYEnabled: true

Fix

using mergeOverwrite with deepCopy resolves the issue...but requires helm v2.16.0 and above
deepCopy was added in sprig v2.22 which was introduced in helm v2.16.0
mergeOverwrite was added in sprig v2.18 which was introduced in helm v2.13.0

mergeOverwrite

Merge two or more dictionaries into one, giving precedence from right to left, effectively overwriting values in the dest dictionary. This is a deep merge operation but not a deep copy operation. Nested objects that are merged are the same instance on both dicts. If you want a deep copy along with the merge then use the deepCopy function along with merging.

deepCopy

The deepCopy function takes a value and makes a deep copy of the value. This includes dicts and other structures.

test template

{{- range $name, $additionalAgent := .Values.additionalAgents }}
  {{- $additionalAgent := mergeOverwrite (deepCopy $.Values.agent) $additionalAgent }}
  {{- $name }}:
  {{- toYaml $additionalAgent | nindent 2 }}
{{- end }}

given:

values.yaml

agent:
  alwaysPullImage: true
  privileged: true
  TTYEnabled: true
additionalAgents:
  maven:
    alwaysPullImage: false
    privileged: false
    TTYEnabled: false

expected result:

maven:
  alwaysPullImage: false
  privileged: false
  TTYEnabled: false

actual result:

maven:
  alwaysPullImage: false
  privileged: false
  TTYEnabled: false

document how an ingress can be configured

maybe even more important: which values can be used to configure the external URL

jenkins.url template is defined in:

{{/*
Returns the Jenkins URL
*/}}
{{- define "jenkins.url" -}}
{{- if .Values.master.jenkinsUrl }}
{{- .Values.master.jenkinsUrl }}
{{- else }}
{{- if .Values.master.ingress.hostName }}
{{- if .Values.master.ingress.tls }}
{{- default "https" .Values.master.jenkinsUrlProtocol }}://{{ .Values.master.ingress.hostName }}{{ default "" .Values.master.jenkinsUriPrefix }}
{{- else }}
{{- default "http" .Values.master.jenkinsUrlProtocol }}://{{ .Values.master.ingress.hostName }}{{ default "" .Values.master.jenkinsUriPrefix }}
{{- end }}
{{- else }}
{{- default "http" .Values.master.jenkinsUrlProtocol }}://{{ template "jenkins.fullname" . }}:{{.Values.master.servicePort}}{{ default "" .Values.master.jenkinsUriPrefix }}
{{- end}}
{{- end}}
{{- end -}}

That should give you an idea which values you can set to configure it.
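In practice that means either setting the URL directly or letting it be derived from the ingress host, e.g. (hostnames and the TLS secret name are examples):

```yaml
# Option 1: set the URL explicitly
master:
  jenkinsUrl: https://jenkins.example.com
---
# Option 2: let the URL be derived from the ingress host
master:
  ingress:
    hostName: jenkins.example.com
    tls:
      - secretName: jenkins-tls   # example secret name
        hosts:
          - jenkins.example.com
```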

This template is then used to configure it via JCasC:

unclassified:
  location:
    adminAddress: {{ default "" .Values.master.jenkinsAdminEmail }}
    url: {{ template "jenkins.url" . }}

Default empty configScripts field leads to default config not being copied when sidecar is disabled

Describe the bug
If you set master.sidecars.configAutoReload.enabled: false and use the default value for master.JCasC.configScripts ({}), not even the default configuration from the Helm chart's helper template function is loaded.

Version of Helm and Kubernetes:

Helm Version: 2.16.10

$ helm version
Client: &version.Version{SemVer:"v2.16.10", GitCommit:"bceca24a91639f045f22ab0f41e47589a932cf5e", GitTreeState:"clean"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.11", GitCommit:"ea5f00d93211b7c80247bf607cfa422ad6fb5347", GitTreeState:"clean", BuildDate:"2020-08-26T20:27:22Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:
2.7.0

What you expected to happen:
The copy-default-config init container normally copies all the JCasC files into the correct folder for loading.
However, since configScripts is an empty object, the default configuration is also not copied.

$ kube logs jenkins-68ccbd7f84-krnzz -c copy-default-config
applying Jenkins configuration
disable Setup Wizard
copy configuration as code files
finished initialization

After "copy configuration as code files", the copied files should be listed.

How to reproduce it (as minimally and precisely as possible):
values.yaml

master:
  JCasC:
    enabled: true
    defaultConfig: true

  sidecars:
    configAutoReload:
      enabled: false

helm install jenkins -f values.yaml

Do not restart Jenkins if it's not necessary

Is your feature request related to a problem? Please describe.

This chart uses helm.sh/chart labels on generated resources as recommended in Helm chart best practices:

helm.sh/chart REC This should be the chart name and version: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}.

As a result of this with every update Jenkins restarts whenever a newer helm chart is used even if there are no changes to the Deployment or the ConfigMap.

The configAutoReload feature already does a great job of avoiding restarts for configuration changes done via JCasC. It would be great if we could also avoid restarts on upgrades when they are not required.

Describe the solution you'd like
I suggest not rendering the version of the chart in that label. So instead of {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}, just {{ .Chart.Name }}.

You could still tell directly from the resource which chart was used to render the resource. Only the version would be missing.

I think that's not really a problem as one could run helm list to figure out which chart version is currently in use.
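The suggested change, sketched as a minimal label excerpt (not the chart's full helpers template):

```yaml
metadata:
  labels:
    # current: changes with every chart release, forcing a rollout
    # helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    # suggested: stable across chart versions
    helm.sh/chart: {{ .Chart.Name }}
```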

Describe alternatives you've considered

  • Removing the helm.sh/chart label completely
    This would also work, but you could no longer see that the resource was rendered by a Helm chart.

  • Introducing a new flag in values.yaml to conditionally disable rendering the label
    That's possible, but I would prefer to keep it straight forward and not introduce new flags just for this.

Add option to add binaryData to the ConfigMap

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

The config.yaml file only allows adding additional configs under the data key, but does not give an option to add binaryData.

Describe the solution you'd like
A clear and concise description of what you want to happen.

Add a binaryData key to the config.yaml file and allow it to be customizable through values.yaml.
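A sketch of what the ConfigMap template could look like; the `binaryData` values key name is an assumption, and its values would need to be base64-encoded as Kubernetes requires for binaryData:

```yaml
# templates/config.yaml (sketch; the .Values.master.binaryData key is hypothetical)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "jenkins.fullname" . }}
data:
  # ... existing data entries ...
{{- if .Values.master.binaryData }}
binaryData:
  # values must already be base64-encoded
{{ toYaml .Values.master.binaryData | indent 2 }}
{{- end }}
```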

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

An alternative is to create a separate ConfigMap yaml to store the binary data. This is an easy workaround, but results in having one more ConfigMap file to work with.

Additional context
Add any other context or screenshots about the feature request here.

Duplicated kubernetes configuration using JCasC

Describe the bug
After a fresh deploy of this helm chart using JCasC to configure kubernetes plugin, somehow cloud configuration is configured with two kubernetes entries:

[screenshot: cloud configuration showing two identical "kubernetes" entries]

Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.2.3", GitCommit:"8f832046e258e2cb800894579b1b3b50c2d83492", GitTreeState:"clean", GoVersion:"go1.13.12"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", GitCommit:"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725", GitTreeState:"clean", BuildDate:"2020-05-06T05:17:59Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.13-eks-2ba888", GitCommit:"2ba888155c7f8093a1bc06e3336333fbdb27b3da", GitTreeState:"clean", BuildDate:"2020-07-17T18:48:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart: 2.6.4

What happened: Already described above

What you expected to happen: Only one kubernetes configuration should be in place.

How to reproduce it (as minimally and precisely as possible):
Use the below values.yaml to install the chart:

master:
  adminPassword: xxxxxxxxxxxxx

  installPlugins:
    - kubernetes
    - workflow-job
    - workflow-aggregator
    - credentials-binding
    - git
    - cloudbees-bitbucket-branch-source
    - parameterized-scheduler
    - active-directory
    - blueocean
    - configuration-as-code
    - job-dsl
    - terraform
    - aws-credentials
    - pipeline-aws

  JCasC:
    configScripts:
      my-settings: |
        jenkins:
          clouds:
          - kubernetes:
              containerCap: 10
              containerCapStr: "10"
              webSocket: true
              jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
              name: "kubernetes"
              namespace: "jenkins"
              podLabels:
              - key: "jenkins/jenkins-jenkins-slave"
                value: "true"
              serverUrl: "https://kubernetes.default"
              templates:
              - containers:
                - args: "^${computer.jnlpmac} ^${computer.name}"
                  envVars:
                  - containerEnvVar:
                      key: "JENKINS_URL"
                      value: "http://jenkins.jenkins.svc.cluster.local:8080"
                  image: "jenkins/jnlp-slave:3.27-1"
                  name: "jnlp"
                  resourceLimitCpu: "512m"
                  resourceLimitMemory: "512Mi"
                  resourceRequestCpu: "512m"
                  resourceRequestMemory: "512Mi"
                  workingDir: "/home/jenkins"
                label: "jenkins-jenkins-slave "
                name: "default"
                nodeUsageMode: NORMAL
                podRetention: "never"
                serviceAccount: "default"
                yamlMergeStrategy: "override"
          securityRealm:
            activeDirectory:
              domains:
                - name: "mydomain"
                  servers: "xxxxxxx"
                  bindName: 'xxxxxxxxxxx'
                  bindPassword: 'xxxxxxxxxxxx'
                  tlsConfiguration: TRUST_ALL_CERTIFICATES
              groupLookupStrategy: AUTO
              removeIrrelevantGroups: false
              cache:
                size: 250
                ttl: 10
              startTls: false
              internalUsersDatabase:
                jenkinsInternalUser: "jenkins"

persistence:
  enabled: true
  storageClass: "my-storage-class"
  size: "16Gi"

Specify Pod Disruption Budget

Hey folks,

I think it would be great to provide a Pod Disruption Budget for the Jenkins master. The simple reason is that in a cloud environment a node can break or get replaced at any time, and as a result the ingress currently
shows 502 for as long as it takes Jenkins to start up again (which can take some minutes). It sounds like low-hanging fruit to specify a PDB for the master pod, giving the Kubernetes scheduler a hint that the service should be moved before the node is terminated: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
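For illustration, a minimal PodDisruptionBudget the chart could render for the master; the name and label selector are assumptions:

```yaml
# sketch: block voluntary eviction of the single master pod
# (policy/v1beta1 was current at the time of this chart version)
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: jenkins-master-pdb
spec:
  maxUnavailable: 0   # with one replica, drains must wait for a replacement strategy
  selector:
    matchLabels:
      app.kubernetes.io/component: jenkins-master
```

Note that with a single replica, `maxUnavailable: 0` blocks `kubectl drain` entirely rather than merely reordering it, so the exact policy is a design choice.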

Cheers!

Why do we need secrets-dir empty volume?

I am wondering why we actually need the secrets-dir empty volume.

It's used in the deployment like this:

      initContainers:
        - command:
          ...
          volumeMounts:
            ...
            - mountPath: /usr/share/jenkins/ref/secrets/
              name: secrets-dir
      containers:
        - name: jenkins
         ....
          volumeMounts:
            ...
            - mountPath: /usr/share/jenkins/ref/secrets/
              name: secrets-dir
              readOnly: false
      volumes:
      ...
      - name: secrets-dir
        emptyDir: {}

It seems to be only used here:

{{- if .Values.master.secretsFilesSecret }}
    echo "copy secrets"
    mkdir -p {{ .Values.master.jenkinsRef }}/secrets/;
    yes n | cp -i /var/jenkins_secrets/* {{ .Values.master.jenkinsRef }}/secrets/;
{{- end }}

and here:

{{- if .Values.master.enableXmlConfig }}
    echo "apply XML configuration"
    echo "false" > {{ .Values.master.jenkinsRef }}/secrets/slave-to-master-security-kill-switch;

So I would assume that it's ok to only mount the volume if either master.enableXmlConfig or master.secretsFilesSecret is set.
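A sketch of how the volume could be rendered conditionally in the deployment template:

```yaml
# sketch: only define the emptyDir when one of the two features needs it
{{- if or .Values.master.enableXmlConfig .Values.master.secretsFilesSecret }}
- name: secrets-dir
  emptyDir: {}
{{- end }}
```

The corresponding `volumeMounts` entries in the init container and main container would need the same guard.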

securityRealm HudsonPrivateSecurityRealm admin user

Hello

Helm chart version: 2.6.4

When I change the securityRealm to something else,
For example:

  securityRealm: |-
    <securityRealm class="hudson.security.HudsonPrivateSecurityRealm">
      <disableSignup>true</disableSignup>
      <enableCaptcha>false</enableCaptcha>
    </securityRealm>

The admin user is no longer initialized.

Add support for using a SHA digest when referencing the image

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

The jenkins-master-deployment.yaml file only allows for images using the format <image>:<imageTag>. This prevents us from referencing an image that uses a SHA digest.

Describe the solution you'd like
A clear and concise description of what you want to happen.

Please add support for using a digest when referencing the image in the format <image>@<digest>, such as jenkins/jenkins@sha256:123456.
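A sketch of template logic that would support both forms; the `imageDigest` values key is an assumption:

```yaml
# jenkins-master-deployment.yaml (sketch): prefer a pinned digest when provided
{{- if .Values.master.imageDigest }}
image: "{{ .Values.master.image }}@{{ .Values.master.imageDigest }}"
{{- else }}
image: "{{ .Values.master.image }}:{{ .Values.master.imageTag }}"
{{- end }}
```

Here `imageDigest` would hold the full `sha256:...` string, so the tag is simply ignored when a digest is set.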

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Don't use the 'legacy' security realm

I spent a few hours helping a colleague with his Jenkins instance recently that didn't want to let him authenticate using username/API token, complaining about a missing CSRF token/crumb, which shouldn't be needed since Jenkins 2.96.

Turns out this chart configures the legacy security realm which is basically untested, and completely ignored for new development.

<securityRealm class="hudson.security.LegacySecurityRealm"/>

It probably shouldn't declare a security realm whose name already recommends against its use.
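For comparison, the built-in user database (HudsonPrivateSecurityRealm) can be configured via JCasC instead; a minimal sketch, with the password sourced from an environment variable as an assumption:

```yaml
# JCasC sketch: use the local security realm rather than the legacy one
jenkins:
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: admin
          password: ${ADMIN_PASSWORD}
```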

remove XML configuration options

Currently the chart has two configuration options:

  • via XML configuration files, which basically renders XML documents
  • via Configuration as Code plugins

The second one is preferred as it allows updating the configuration. With XML that's impossible as users could have changed settings via the Jenkins UI.

As the chart supports JCasC configuration, I suggest getting rid of all XML configuration options. This should also make life easier for users of the chart, as they no longer have to work out whether a setting is applied via XML or JCasC and be surprised when their setting is not applied.

To expose "activeDeadlineSeconds" for back job in the template

Is your feature request related to a problem? Please describe.
My Jenkins backup job is never marked completed or failed in Kubernetes. This prevents any new jobs from being scheduled; the end result is that the backup cronjob no longer gets kicked off, since there is still an active job. One solution to this problem is to specify "activeDeadlineSeconds" in the cronjob spec; however, the current helm template does not define or expose that parameter. Please add "activeDeadlineSeconds" to the backup job template.

Describe the solution you'd like
Add "activeDeadlineSeconds" to jenkins-backup-cronjob.yaml

Describe alternatives you've considered
N/A
Additional context
N/A

Version 2.6.4: how to configure the istio-ingressgateway to support HTTPS?

My install command

helm install -n jenkins-helm jenkins jenkinsci/jenkins \
  --set master.installPlugins="" \
  --set master.imagePullPolicy="IfNotPresent" \
  --set master.javaOpts="-Dhudson.model.DownloadService.noSignatureCheck=true" \
  --set master.jenkinsUrl="jenkins.example.com" \
  --set persistence.size="10Gi"

Gateway

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: jenkins-gw
  namespace: jenkins-helm
spec:
  selector:
    app: istio-ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - jenkins.example.com
      tls:
        httpsRedirect: true # sends 301 redirect for http requests
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: tls.jenkins.example.com
      hosts:
        - jenkins.example.com

VirtualService

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: jenkins-vs
  namespace: jenkins-helm
spec:
  gateways:
    - jenkins-gw
  hosts:
    - jenkins.example.com
  http:
    - route:
        - destination:
            host: jenkins.jenkins-helm.svc.cluster.local

This works with version 2.5.2, but after upgrading to 2.6.4 it no longer works.

add support for agents connecting via websockets

Is your feature request related to a problem? Please describe.
Issue #64 raised awareness that we do not support websockets in the default JCasC configuration.
People who want to use them at the moment have to disable the default configuration and redo all the other configuration it provides themselves.

@galindro mentioned in #64 (comment) that we would need to do the following things:

The differences are:

jenkinsTunnel should be removed or empty if webSocket: true
jenkinsUrl should be http://jenkins.jenkins.svc.cluster.local:8080 if run on k8s

and the following field should be added:
webSocket: true

This would also solve #70 so that it simplifies connecting external agents to Jenkins, which is running on Kubernetes.

Describe the solution you'd like

It would be nice to have a flag in the values file to enable websocket support, e.g. agent.websocket: true, which would apply the necessary configuration.
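A sketch of how such a flag could translate into the default JCasC cloud configuration (the flag name follows the suggestion above; the surrounding template and tunnel address are illustrative):

```yaml
# sketch of the default JCasC template, switching on .Values.agent.websocket
clouds:
- kubernetes:
    jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
{{- if .Values.agent.websocket }}
    webSocket: true
    # no jenkinsTunnel when agents connect over websockets
{{- else }}
    jenkinsTunnel: "jenkins-agent.jenkins.svc.cluster.local:50000"
{{- end }}
```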
