
camunda-platform-helm's Introduction

Camunda 8 Helm


Overview

This repository contains the Camunda 8 Self-Managed Helm charts. The Camunda 8 Helm chart is an umbrella chart for different components: some are internal (sub-charts) and some are external (third-party). Dependency management is fully automated and handled by Helm itself.

Camunda 8 Self-Managed Helm charts architecture diagram (image available in the repository).

Documentation

Versioning

For more details about the Camunda 8 Helm chart versioning, please read the versioning scheme.

Installation

Find more details about the different installation and deployment options in the Camunda 8 Helm chart README.

Guides

Default values cannot cover every use case, so we provide Camunda 8 deployment guides. The guides include detailed examples for different use cases, such as Ingress setup.

Issues

Please create a new issue if you find any problem with the Camunda 8 Helm charts.

Contributing

We value all feedback and contributions. To start contributing to this project, please:

  • Don't create a PR without opening an issue and discussing it first.
  • Familiarize yourself with the contribution guide.
  • Find more information about configuring and deploying the Camunda 8 Helm chart.

Releasing

Please visit the Camunda 8 release guide to find out how to release the charts.

Deprecation

Old Zeebe charts

With the creation of the Camunda 8 Helm charts (previously known as ccsm-helm), the old zeebe-* charts have been deprecated. That means they are no longer part of the repository and are no longer maintained. However, the packaged charts are still available for download, but they will be removed in upcoming releases.

The following charts are deprecated:

  • zeebe-full-helm
  • zeebe-cluster-helm
  • zeebe-operate-helm
  • zeebe-tasklist-helm

The new camunda-platform chart is a full replacement for zeebe-full-helm and contains all the other charts as sub-charts. All sub-charts in camunda-platform are enabled by default.

For a complete migration guide, visit the migration docs.

License

Camunda 8 Self-Managed Helm charts are licensed under the open-source Apache License 2.0. Please see LICENSE for details.

For Camunda 8 components, please visit the licensing information page.


camunda-platform-helm's Issues

Services don't work without a standalone gateway

Services like Operate don't seem to work without the standalone gateway. If we want to make it possible to switch between the embedded and standalone gateway, then the broker contact point shouldn't be hard-coded as it is here: https://github.com/camunda-community-hub/camunda-cloud-helm/blob/main/charts/zeebe-operate-helm/templates/configmap.yaml#L24

Possible solution in ccsm-helm:

We could create a helper template that defines zeebe.contactPoint. Depending on whether the standalone gateway is enabled, the contact point is set to the gateway or to the broker, as sketched below.
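A minimal sketch of such a helper, assuming hypothetical value keys (gateway.enabled, global.zeebe) and the default gateway/broker port 26500:

{{/* Resolve the Zeebe contact point depending on whether the standalone gateway is enabled */}}
{{- define "zeebe.contactPoint" -}}
{{- if .Values.gateway.enabled -}}
{{ .Values.global.zeebe }}-gateway:26500
{{- else -}}
{{ .Values.global.zeebe }}:26500
{{- end -}}
{{- end -}}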

Retention policy

Add Curator to the single chart and make it configurable; see the sketch after the task list below.

  • #159
  • Add zeebe index config
  • Add operate index config
  • Add tasklist index config
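A rough sketch of what configurable retention values could look like in values.yaml (all key names here are hypothetical, for illustration only):

retentionPolicy:
  enabled: true
  schedule: "0 3 * * *"        # run Curator daily at 03:00
  zeebeIndexTTL: 1             # days to keep Zeebe indices
  operateIndexTTL: 30          # days to keep Operate indices
  tasklistIndexTTL: 30         # days to keep Tasklist indices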

unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend

Installing the latest version produces the following error:

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]
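The error indicates that the template still renders the pre-1.19 serviceName/servicePort fields against a networking.k8s.io/v1 Ingress, where the backend is expressed with a nested service block instead. For reference (the service name and port below are placeholders):

backend:
  service:
    name: zeebe-operate   # placeholder service name
    port:
      number: 80          # placeholder port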

[EPIC]: Add Optimize Helm chart

Add optimize chart to the single chart.

  • Requires: Identity in chart
  • Create a new sub chart in the ccsm-helm chart
    • Add manifest templates for optimize #287
    • Configure optimize to use identity #287
    • #286
  • #289
  • Add possible configurations as values to the values.yaml file
  • Verify default resources with SaaS G3-s #298
  • Document the configurations in values file and readme
  • #299
  • Test optimize sub chart
Update the architecture image; the source can be found in Google Drive under zeebe/ccsm-helm/CCSM Helm Architecture Image.
  • Add documentation about optimize to the camunda docs

Depends on #127

Setting global.zeebe causes startup to fail

I wish to use the value global.zeebe to name the Kubernetes resources zeebe rather than zeebe-zeebe.

Works in old version

In version 0.0.89 of the chart this works fine:

> helm install zeebe zeebe/zeebe-cluster --version 0.0.89 --set global.zeebe=zeebe

> kubectl get pods
NAME                             READY   STATUS
elasticsearch-master-0           1/1     Running
elasticsearch-master-1           1/1     Running
elasticsearch-master-2           1/1     Running
zeebe-0                          1/1     Running
zeebe-1                          1/1     Running
zeebe-2                          1/1     Running
zeebe-gateway-5f89d47657-pl8jw   1/1     Running

Breaks in current version

In the latest versions of the chart this causes the startup to fail:

> helm install zeebe zeebe/zeebe-cluster --version 0.0.99 --set global.zeebe=zeebe

> kubectl get pods
NAME                             READY   STATUS
elasticsearch-master-0           1/1     Running
elasticsearch-master-1           1/1     Running
elasticsearch-master-2           1/1     Running
zeebe-0                          0/1     CrashLoopBackOff
zeebe-1                          0/1     CrashLoopBackOff
zeebe-2                          0/1     CrashLoopBackOff
zeebe-gateway-5657db69c4-bzfvw   1/1     Running

> kubectl logs zeebe-0
[...]
***************************
APPLICATION FAILED TO START
***************************

Description:

Failed to bind properties under 'zeebe.broker.gateway.network.port' to int:

    Property: zeebe.broker.gateway.network.port
    Value: tcp://10.0.96.44:9600
    Origin: "zeebe.broker.gateway.network.port" from property source "systemProperties"
    Reason: failed to convert java.lang.String to int

I am using Helm 3.

can't configure podSecurityContext/securityContext

I tried to configure podSecurityContext/securityContext with zeebe-cluster-helm 1.2.10.

It fails with
Error: YAML parse error on zeebe-cluster-helm/templates/statefulset.yaml: error converting YAML to JSON: yaml: line 104: mapping values are not allowed in this context helm.go:88: [debug] error converting YAML to JSON: yaml: line 104: mapping values are not allowed in this context

The values file contains:

podSecurityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - all

The generated content is:

      securityContext:
        allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all

with bad indentation: capabilities and allowPrivilegeEscalation should be aligned.

I tried {{ toYaml .Values.podSecurityContext | indent 10 | trim }} instead of {{ toYaml .Values.podSecurityContext | indent 12 | trim }} and the generated output is OK (it is actually a container securityContext rather than a podSecurityContext).

The indent should be 12 for the Deployment (which didn't need updating) and 10 for the StatefulSet.
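For reference, a correctly aligned rendering of the same values would look like this:

      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
          - all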

AWS Elasticsearch needs to be configured with Zeebe Operate

Description:
I have deployed the Zeebe cluster and Zeebe Operate separately. In the Zeebe cluster, Elasticsearch is configured with AWS Elasticsearch, but I am facing an issue with Zeebe Operate.

Configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "zeebe-operate.fullname" . }}
data:
  application.yml: |
    # Operate configuration file
    camunda.operate:
      elasticsearch:
        host: {{ .Values.global.elasticsearch.host }}
        port: {{ .Values.global.elasticsearch.port }}
        username: {{ .Values.global.elasticsearch.username }}
        password: {{ .Values.global.elasticsearch.password }}
        prefix: zeebe-record-operate

ERROR:
2020-06-29 05:35:10.397 ERROR 6 --- [ main] o.c.o.e.ElasticsearchConnector : Error occurred while connecting to Elasticsearch: clustername [elasticsearch], https://xx.xx.xx.x.x.x.x.ap-south-1.es.amazonaws.com:443. Will be retried...

java.io.IOException: https://xxxxx-xxxx-xxx-xx-xxx-x..ap-south-1.es.amazonaws.com: Name or service not known
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:964) ~[elasticsearch-rest-client-6.8.7.jar!/:6.8.7]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:233) ~[elasticsearch-rest-client-6.8.7.jar!/:6.8.7]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1764) ~[elasticsearch-rest-high-level-client-6.8.8.jar!/:6.8.7]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1734) ~[elasticsearch-rest-high-level-client-6.8.8.jar!/:6.8.7]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1696) ~[elasticsearch-rest-high-level-client-6.8.8.jar!/:6.8.7]
at org.elasticsearch.client.ClusterClient.health(ClusterClient.java:146) ~[elasticsearch-rest-high-level-client-6.8.8.jar!/:6.8.7]
at org.camunda.operate.es.ElasticsearchConnector.checkHealth(ElasticsearchConnector.java:89) ~[camunda-operate-common-0.23.0.jar!/:?]
at org.camunda.operate.es.ElasticsearchConnector.createEsClient(ElasticsearchConnector.java:75) ~[camunda-operate-common-0.23.0.jar!/:?]
at org.camunda.operate.es.ElasticsearchConnector.esClient(ElasticsearchConnector.java:51) ~[camunda-operate-common-0.23.0.jar!/:?]
at org.camunda.operate.es.ElasticsearchConnector$$EnhancerBySpringCGLIB$$670b527.CGLIB$esClient$0() ~[camunda-operate-common-0.23.0.jar!/:?]
at org.camunda.operate.es.ElasticsearchConnector$$EnhancerBySpringCGLIB$$670b527$$FastClassBySpringCGLIB$$af2d84c1.invoke() ~[camunda-operate-common-0.23.0.jar!/:?]
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244) ~[spring-core-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331) ~[spring-context-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.camunda.operate.es.ElasticsearchConnector$$EnhancerBySpringCGLIB$$670b527.esClient() ~[camunda-o

New java opts defaults

Use the zeebe benchmark defaults

 # JavaOpts:
 # DEFAULTS
 JavaOpts: >-
   -XX:MaxRAMPercentage=25.0
   -XX:+ExitOnOutOfMemoryError
   -XX:+HeapDumpOnOutOfMemoryError
   -XX:HeapDumpPath=/usr/local/zeebe/data
   -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log
   -Xlog:gc*:file=/usr/local/zeebe/data/gc.log:time:filecount=7,filesize=8M

GKE Autopilot Clusters Constraints

When trying to install the zeebe-cluster-helm chart I am getting:

Error: admission webhook "validation.gatekeeper.sh" denied the request: [denied by autogke-no-write-mode-hostpath] hostPath volume proc used in container prometheus-node-exporter uses path /proc which is not allowed in Autopilot. Allowed path prefixes for hostPath volumes are: ["/var/log/"]. Requesting user: <XXXX> and groups: <["system:authenticated"]>
[denied by autogke-no-write-mode-hostpath] hostPath volume sys used in container prometheus-node-exporter uses path /sys which is not allowed in Autopilot. Allowed path prefixes for hostPath volumes are: ["/var/log/"]. Requesting user: <XXXXXX> and groups: <["system:authenticated"]>
[denied by autogke-no-host-port] container prometheus-node-exporter specifies a host port; disallowed in Autopilot. Requesting user: <XXXXXXX> and groups: <["system:authenticated"]>
[denied by autogke-disallow-hostnamespaces] enabling hostPID is not allowed in Autopilot. Requesting user: <XXXXX> and groups: <["system:authenticated"]>
[denied by autogke-disallow-hostnamespaces] enabling hostNetwork is not allowed in Autopilot. Requesting user: <XXXXX> and groups: <["system:authenticated"]>

zeebe installation fails

I am trying to install Zeebe via the Helm charts into a separate namespace, and the installation fails. I have captured the debug logs from the helm command below:

$ helm install zeebe zeebe/zeebe-full-helm -n zeebe --debug
install.go:178: [debug] Original chart version: ""
install.go:199: [debug] CHART PATH: /home/edgeprov/.cache/helm/repository/zeebe-full-helm-1.3.1.tgz

client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD alertmanagerconfigs.monitoring.coreos.com is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD alertmanagers.monitoring.coreos.com is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD podmonitors.monitoring.coreos.com is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD probes.monitoring.coreos.com is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD prometheuses.monitoring.coreos.com is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD prometheusrules.monitoring.coreos.com is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD servicemonitors.monitoring.coreos.com is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD thanosrulers.monitoring.coreos.com is already present. Skipping.
W0117 18:44:32.951595 1150858 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
client.go:299: [debug] Starting delete for "zeebe-ingress-nginx-admission" ServiceAccount
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "zeebe-ingress-nginx-admission" ClusterRole
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "zeebe-ingress-nginx-admission" ClusterRoleBinding
client.go:328: [debug] clusterrolebindings.rbac.authorization.k8s.io "zeebe-ingress-nginx-admission" not found
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "zeebe-ingress-nginx-admission" Role
client.go:328: [debug] roles.rbac.authorization.k8s.io "zeebe-ingress-nginx-admission" not found
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "zeebe-ingress-nginx-admission" RoleBinding
client.go:328: [debug] rolebindings.rbac.authorization.k8s.io "zeebe-ingress-nginx-admission" not found
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "zeebe-ingress-nginx-admission-create" Job
client.go:328: [debug] jobs.batch "zeebe-ingress-nginx-admission-create" not found
client.go:128: [debug] creating 1 resource(s)
client.go:528: [debug] Watching for changes to Job zeebe-ingress-nginx-admission-create with timeout of 5m0s
client.go:556: [debug] Add/Modify event for zeebe-ingress-nginx-admission-create: ADDED
client.go:595: [debug] zeebe-ingress-nginx-admission-create: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
helm.go:88: [debug] failed pre-install: timed out waiting for the condition
INSTALLATION FAILED
main.newInstallCmd.func2
helm.sh/helm/v3/cmd/helm/install.go:127
github.com/spf13/cobra.(*Command).execute
github.com/spf13/[email protected]/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/[email protected]/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/[email protected]/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:87
runtime.main
runtime/proc.go:225
runtime.goexit
runtime/asm_amd64.s:1371

Unclear how to configure zeebe-cluster

My problem was that I wanted to set up Zeebe with Operate for our benchmarks, and I saw that the throughput was significantly lower when using the zeebe-full chart instead of the zeebe-cluster chart. https://github.com/zeebe-io/zeebe-benchmark/issues/23

I used the same configuration for the full chart as for the cluster chart; the only difference was that I put all properties under zeebe-cluster.

Like:

zeebe-cluster:
  image:
    repository: camunda/zeebe
    tag: SNAPSHOT
    pullPolicy: IfNotPresent

  # ZEEBE CFG

  clusterSize: 3
  partitionCount: 3
  replicationFactor: 3
  cpuThreadCount: 4
  ioThreadCount: 4

  ... etc.

I expected that this was the way to go since the README in this repo says the same: https://github.com/zeebe-io/zeebe-full-helm. Furthermore, I expected it to work that way because the chart is named like that.

Luckily I found this conversation, which shows the usage of zeebe instead of zeebe-cluster.

This also fixed my benchmark setup, which now works as expected:

zeebe:
  image:
    repository: camunda/zeebe
    tag: SNAPSHOT
    pullPolicy: IfNotPresent

  # ZEEBE CFG

  clusterSize: 3
  partitionCount: 3
  replicationFactor: 3
  cpuThreadCount: 4
  ioThreadCount: 4

  ... etc.

We should probably add an example configuration or similar documentation showing how to configure this, or change the key name to zeebe-cluster; that would probably make things clearer.

[EPIC]: Create a single helm chart

One helm chart to rule them all.

Create one single chart that contains all applications as sub-charts.

Todo:

Authenticate Zeebe Operate and ElasticSearch

Hello!

I would like to configure the zeebe-operate Helm chart with Elasticsearch authentication. I configured the URL with this schema: https://<es_user>:<es_pass>@<es_host>:<es_port>, but it does not work.

Do you have a solution for using Elasticsearch auth with the zeebe-operate Helm chart?
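For what it's worth, the Operate configuration shown in the AWS Elasticsearch issue above uses explicit host/port/username/password keys rather than credentials embedded in the URL; a sketch with placeholder values:

camunda.operate:
  elasticsearch:
    host: my-es-host.example.com   # placeholder host
    port: 443
    username: es_user              # placeholder user
    password: es_pass              # placeholder password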

Thanks

Gateway healthcheck fails [readiness check]

I tried to set up zeebe-cluster using the operator. Everything seems to be working fine except the gateway health check.

It always returns an OUT_OF_SERVICE response.

{"status":"OUT_OF_SERVICE","components":{"diskSpace":{"status":"UP","details":{"total":21462233088,"free":17460678656,"threshold":10485760,"exists":true}},"gatewayClusterAwareness":{"status":"UP"},"gatewayPartitionLeaderAwareness":{"status":"UP"},"gatewayResponsive":{"status":"UP","details":{"timeOut":"PT0.5S"}},"gatewayStarted":{"status":"UP"},"livenessDiskSpace":{"status":"UP","details":{"total":21462233088,"free":17460678656,"threshold":1048576,"exists":true}},"livenessGatewayClusterAwareness":{"status":"UP","details":{"derivedFrom":"ClusterAwarenessHealthIndicator","wasEverUp":true,"maxDowntime":"PT5M","lastSeenDelegateHealthStatus":{"status":"UP"}}},"livenessGatewayPartitionLeaderAwareness":{"status":"UP","details":{"derivedFrom":"PartitionLeaderAwarenessHealthIndicator","wasEverUp":true,"maxDowntime":"PT5M","lastSeenDelegateHealthStatus":{"status":"UP"}}},"livenessGatewayResponsive":{"status":"UP","details":{"derivedFrom":"ResponsiveHealthIndicator","wasEverUp":true,"maxDowntime":"PT10M","lastSeenDelegateHealthStatus":{"status":"UP","details":{"timeOut":"PT5S"}}}},"livenessMemory":{"status":"UP","details":{"threshold":0.01}},"livenessState":{"status":"UP"},"memory":{"status":"UP","details":{"threshold":0.1}},"readinessState":{"status":"OUT_OF_SERVICE"}},"groups":["liveness","readiness","startup"]}

Please refer to this slack thread

Add elastic exporter as optional dependency

Sometimes in our benchmarks we also want to see metrics from Elasticsearch; for that, the elastic-exporter is normally used.

We normally configure it like this, in order to work with our Prometheus setup:

es:
  uri: "http://elasticsearch-master-headless:9200"
serviceMonitor:
  enabled: true
  labels:
    release: "metrics"

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: Retry-After: unexpected status code 200

This repository currently has no open or pending branches.

Detected dependencies

asdf
.tool-versions
  • helm 3.14.4
  • kubectl 1.27.13
  • kustomize 5.4.1
  • golang 1.22.2
  • yq 4.43.1
github-actions
.github/actions/gke-login/action.yml
  • google-github-actions/auth v2@55bd3a7c6e2ae7cf1877fd1ccb9d54c0503c457c
  • google-github-actions/auth v2@55bd3a7c6e2ae7cf1877fd1ccb9d54c0503c457c
  • google-github-actions/get-gke-credentials v2@c02be8662df01db62234e9b9cff0765d1c1827ae
.github/workflows/add-to-project.yml
  • tibdex/github-app-token v2@3beb63f4bd073e61482598c45c71c1019b59b73a
  • actions/add-to-project v1.0.1@9bfe908f2eaa7ba10340b31e314148fcfe6a2458
.github/workflows/chart-public-files.yaml
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • EndBug/add-and-commit v9.1.4@a94899bca583c204427a224a7af87c02f9b325d5
.github/workflows/chart-release.yaml
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • asdf-vm/actions v3@05e0d2ed97b598bfce82fd30daf324ae0c4570e6
  • actions/cache v4@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • sigstore/cosign-installer v3.5.0@59acb6260d9c0ba8f4a2f9d9b48431a222b68e20
  • helm/chart-releaser-action v1.6.0@a917fd15b20e8b64b94d9158ad54cd6345335584
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • nick-fields/retry v3@7152eba30c6575329ac0576536151aca5a72780e
  • EndBug/add-and-commit v9.1.4@a94899bca583c204427a224a7af87c02f9b325d5
.github/workflows/chart-update-readme.yaml
  • tibdex/github-app-token v2@3beb63f4bd073e61482598c45c71c1019b59b73a
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • EndBug/add-and-commit v9.1.4@a94899bca583c204427a224a7af87c02f9b325d5
.github/workflows/chart-validate.yaml
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • asdf-vm/actions v3@05e0d2ed97b598bfce82fd30daf324ae0c4570e6
  • actions/setup-python v5@82c7e631bb3cdc910f68e0081d67478d79c6982d
  • helm/chart-testing-action v2.6.1@e6669bcd63d7cb57cb4380c33043eebe5d111992
.github/workflows/renovate-config-check.yaml
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
.github/workflows/renovate-post-upgrade.yaml
  • tibdex/github-app-token v2@3beb63f4bd073e61482598c45c71c1019b59b73a
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • actions/setup-go v5@0c52d547c9bc32b1aa3301fd7a9cb496313a4491
  • asdf-vm/actions v3@05e0d2ed97b598bfce82fd30daf324ae0c4570e6
  • actions/cache v4@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
  • EndBug/add-and-commit v9.1.4@a94899bca583c204427a224a7af87c02f9b325d5
.github/workflows/sec-codeql.yml
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • github/codeql-action 82edfe29cebc1a8481d9331c337bcd1e22b9de91
  • github/codeql-action 82edfe29cebc1a8481d9331c337bcd1e22b9de91
  • github/codeql-action 82edfe29cebc1a8481d9331c337bcd1e22b9de91
.github/workflows/sec-scorecard.yml
  • actions/checkout v4.1.2@9bb56186c3b09b4f86b1c65136769dd318469633
  • ossf/scorecard-action v2.3.1@0864cf19026789058feabb7e87baa5f140aac736
  • actions/upload-artifact v4.3.2@1746f4ab65b179e0ea60a494b83293b640dd5bba
  • github/codeql-action v3.25.1@c7f9125735019aa87cfc361530512d50ea439c71
.github/workflows/test-integration-cleanup-template.yaml
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • redhat-actions/oc-login v1@5eb45e848b168b6bf6b8fe7f1561003c12e3c99d
.github/workflows/test-integration-template.yaml
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • tibdex/github-app-token v2@3beb63f4bd073e61482598c45c71c1019b59b73a
  • redhat-actions/oc-login v1@5eb45e848b168b6bf6b8fe7f1561003c12e3c99d
  • asdf-vm/actions v3@05e0d2ed97b598bfce82fd30daf324ae0c4570e6
  • bobheadxi/deployments v1@648679e8e4915b27893bd7dbc35cb504dc915bc8
  • bobheadxi/deployments v1@648679e8e4915b27893bd7dbc35cb504dc915bc8
  • bobheadxi/deployments v1@648679e8e4915b27893bd7dbc35cb504dc915bc8
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • tibdex/github-app-token v2@3beb63f4bd073e61482598c45c71c1019b59b73a
  • bobheadxi/deployments v1@648679e8e4915b27893bd7dbc35cb504dc915bc8
.github/workflows/test-regression.yaml
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • asdf-vm/actions v3@05e0d2ed97b598bfce82fd30daf324ae0c4570e6
.github/workflows/test-unit.yml
  • actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
  • asdf-vm/actions v3@05e0d2ed97b598bfce82fd30daf324ae0c4570e6
  • actions/cache v4@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9
gomod
go.mod
  • go 1.22.2
  • github.com/BurntSushi/toml v1.3.2
  • github.com/gruntwork-io/terratest v0.46.13
  • github.com/stretchr/testify v1.9.0
  • gopkg.in/yaml.v3 v3.0.1
  • k8s.io/api v0.28.4
helm-values
charts/camunda-platform/values.yaml
  • registry.camunda.cloud/console/console-sm 8.5.12
  • camunda/optimize 8.5.0
  • bitnami/postgresql 15.6.0
  • bitnami/keycloak 23.0.7
  • bitnami/postgresql 15.6.0
  • camunda/connectors-bundle 8.5.0
  • bitnami/elasticsearch 8.12.2
charts/camunda-platform/values/values-latest.yaml
  • camunda/connectors-bundle 8.5.0
  • camunda/optimize 8.5.0
  • bitnami/keycloak 23.0.7
  • bitnami/postgresql 15.6.0
  • bitnami/elasticsearch 8.12.2
charts/camunda-platform/values/values-v8.0-eol.yaml
  • bitnami/keycloak 16.1.1
  • camunda/optimize 3.9.5
charts/camunda-platform/values/values-v8.1.yaml
  • camunda/connectors-bundle 0.16.1
  • bitnami/keycloak 16.1.1
  • bitnami/postgresql 14.5.0
  • camunda/optimize 3.9.5
  • bitnami/elasticsearch-curator-archived 5.8.4
charts/camunda-platform/values/values-v8.2.yaml
  • camunda/connectors-bundle 0.23.2
  • bitnami/keycloak 19.0.3
  • bitnami/postgresql 15.4.0
  • camunda/optimize 3.10.9
  • bitnami/elasticsearch-curator-archived 5.8.4
charts/camunda-platform/values/values-v8.3.yaml
  • camunda/connectors-bundle 8.3.10
  • bitnami/keycloak 22.0.5
  • bitnami/postgresql 15.5.0
  • camunda/optimize 8.3.8
  • bitnami/elasticsearch 8.8.2
charts/camunda-platform/values/values-v8.4.yaml
  • camunda/connectors-bundle 8.4.6
  • bitnami/keycloak 22.0.5
  • bitnami/postgresql 15.5.0
  • camunda/optimize 8.4.3
  • bitnami/elasticsearch 8.9.2
charts/web-modeler-postgresql/values.yaml
  • docker.io/bitnami/postgresql 14.5.0-debian-11-r35
  • docker.io/bitnami/bitnami-shell 11-debian-11-r45
  • docker.io/bitnami/postgres-exporter 0.15.0-debian-11-r22
helmv3
charts/camunda-platform/Chart.yaml
  • keycloak 19.4.1
  • postgresql 12.x.x
  • common 2.x.x
charts/web-modeler-postgresql/Chart.yaml
  • common 2.x.x
regex
charts/camunda-platform/values.yaml
  • camunda/camunda-platform 8.5.0
charts/camunda-platform/values/values-latest.yaml
  • camunda/camunda-platform 8.5.0
charts/camunda-platform/values/values-v8.0-eol.yaml
  • camunda/camunda-platform 8.0.21
charts/camunda-platform/values/values-v8.1.yaml
  • camunda/camunda-platform 8.1.27
charts/camunda-platform/values/values-v8.2.yaml
  • camunda/camunda-platform 8.2.26
charts/camunda-platform/values/values-v8.3.yaml
  • camunda/camunda-platform 8.3.10
charts/camunda-platform/values/values-v8.4.yaml
  • camunda/camunda-platform 8.4.6
charts/camunda-platform/values.yaml
  • camunda/console 8.5.12
  • camunda/optimize 8.5.0
  • camunda/web-modeler 8.5.0
charts/camunda-platform/values/values-latest.yaml
  • camunda/console 8.5.12
  • camunda/optimize 8.5.0
  • camunda/web-modeler 8.5.0
charts/camunda-platform/values/values-v8.1.yaml
  • camunda/web-modeler 0.8.0-beta
  • elasticsearch/elasticsearch 7.17.20
charts/camunda-platform/values/values-v8.2.yaml
  • camunda/web-modeler 8.2.13
  • elasticsearch/elasticsearch 7.17.20
charts/camunda-platform/values/values-v8.3.yaml
  • camunda/optimize 8.3.8
  • camunda/web-modeler 8.3.6
charts/camunda-platform/values/values-v8.4.yaml
  • camunda/console 8.4.60
  • camunda/optimize 8.4.3
  • camunda/web-modeler 8.4.4

  • Check this box to trigger a request for Renovate to run again on this repository

Wrong tasklist default config

Hello, I had a problem with the default config for Tasklist. The service did not work in the default installation; I had to change the top-level config object from zeebe to camunda.

From:

# Tasklist configuration file

zeebe.tasklist:
  # Set Tasklist username and password.
  # If user with <username> does not exists it will be created.
  # Default: demo/demo
---TRUNC---

To:

# Tasklist configuration file

camunda.tasklist:
  # Set Tasklist username and password.
  # If user with <username> does not exists it will be created.
  # Default: demo/demo
---TRUNC---

zeebe-operate-helm and zeebe-tasklist-helm do not allow toleration and nodeSelector values in values.yaml

Hi,
I wanted to deploy all pods related to Zeebe in a different node pool in AKS. I figured out that I can run all the pods in one node pool except the Tasklist and Operate pods.
Is there any way to set these values for the Tasklist and Operate pods?

I have used the values.yaml file below for installing the zeebe-full-helm chart.

helm upgrade --namespace zeebe --install --wait --timeout 10m0s zb zeebe/zeebe-full-helm -f values.yaml

global:
  zeebe: "{{ .Release.Name }}-zeebe"

zeebe-cluster-helm:
  elasticsearch:
    tolerations:
      - key: "zeebe"
        operator: "Equal"
        value: "gpu"
        effect: "NoSchedule"
    nodeSelector:
      zeebe: isolatenodepool
  gateway:
    tolerations:
      - key: "zeebe"
        operator: "Equal"
        value: "gpu"
        effect: "NoSchedule"
    nodeSelector:
      zeebe: isolatenodepool
  tolerations:
    - key: "zeebe"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"
  nodeSelector:
    zeebe: isolatenodepool
  cloudevents:
    enabled: false
  tasklist:
    enabled: true
  zeeqs:
    enabled: false

ingress-nginx:
  controller:
    tolerations:
      - key: "zeebe"
        operator: "Equal"
        value: "gpu"
        effect: "NoSchedule"
    nodeSelector:
      zeebe: isolatenodepool
    service:
      ports:
        http: 80
        https: 443
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"

zeebe-tasklist-helm:
  ingress:
    enabled: true
    annotations:
      ingress.kubernetes.io/ingress.class: nginx
      ingress.kubernetes.io/rewrite-target: "/"
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
    path: /
    host: tasklist.zeebe.weud
    tls:
      enabled: false
      secretName:
  nodeSelector:
    zeebe: isolatenodepool
  tolerations:
    - key: "zeebe"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"

zeebe-operate-helm:
  ingress:
    enabled: true
    annotations:
      ingress.kubernetes.io/ingress.class: nginx
      ingress.kubernetes.io/rewrite-target: "/"
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
    path: /
    host: operate.zeebe.weud
    tls:
      enabled: false
      secretName:
  nodeSelector:
    zeebe: isolatenodepool
  tolerations:
    - key: "zeebe"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"

Result in the AKS cluster (screenshot included in the original issue).

Add extra container and init container values

When using the Helm charts to demo or test something, it's sometimes useful to be able to inject sidecar containers or extra init containers. Some operators (e.g. Linkerd) do this for you through annotations; others require you to do it yourself (e.g. Jaeger supports annotations as well, but only on Deployments for whatever reason).

AT:

  • add extraAnnotations value to allow specifying annotations for pods
  • add extraContainers value to allow specifying sidecar containers for pods
  • add extraInitContainers value to allow specifying additional init containers for pods (e.g. a test migration script)
  • add extraVolumes and extraVolumeMounts to allow injecting/overwriting files during deployment

At the moment I really only care about scoping all of these to Zeebe; I'm not sure whether it makes sense to have them for other services, and some of them might even have partial support already.
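A minimal sketch of what such values could look like under the Zeebe section of values.yaml (the key names and images below are illustrative, not the chart's actual API):

zeebe:
  extraInitContainers:
    - name: run-migration            # hypothetical init container
      image: busybox:1.36
      command: ["sh", "-c", "echo running migration script"]
  extraContainers:
    - name: jaeger-agent             # hypothetical sidecar container
      image: jaegertracing/jaeger-agent:1.35
      ports:
        - containerPort: 6831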

NOTE: see some examples here and here, and some discussion here

Support non-root deployments

Description

It's not currently possible to support non-root deployments with the Helm charts. In order to do so, we need to:

  1. Allow users to configure the StatefulSet's .spec.template.securityContext so users can add runAsUser, runAsGroup, and fsGroup (with 1000 as the recommended value)
  2. Update the volumes section and set the defaultMode of the config volume to 0777 instead of 0744 (see the sketch below).
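A minimal sketch of the suggested settings on the StatefulSet's pod template (the ConfigMap name is a placeholder):

spec:
  template:
    spec:
      securityContext:            # pod-level security context
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      volumes:
        - name: config
          configMap:
            name: zeebe-config    # placeholder ConfigMap name
            defaultMode: 0777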

[EPIC]: Add chart tests

We need to test our charts to verify and ensure functionality and to guarantee that we do not break them when adding more features or fixing bugs.

Expose Prometheus via Ingress with Zeebe Full Helm Chart

Right now Prometheus can be deployed with a Zeebe cluster, but in order to access Grafana you need to port-forward to it. This can be simplified by enabling the Ingress in the Prometheus chart and making sure that our Ingress controller exposes the right paths. This has proved to be challenging and time-consuming, but I am sure that it can be done.

Standalone Gateway pod 9600 endpoint returns 503 Service Unavailable

Hi

I recently deployed the zeebe-cluster-helm chart, version 0.0.88, and as part of my testing noticed that the gateway service exposes an endpoint on port 9600, which seems to always return:

503 Service Unavailable
upstream connect error or disconnect/reset before headers. reset reason: connection failure

I know this endpoint is exposed by the Zeebe pods and used by Kubernetes to check when the pod is ready, but in the gateway deployment I don't think the 9600 endpoint is used and the gateway pod seems to be ready and working, despite the 503 error.

So, is the 9600 endpoint required on the gateway pod and if yes, what might be causing the 503 error?

I tested this endpoint by calling :9600/ready; perhaps this endpoint is invalid for the gateway?

I have checked the logs, which I can provide if it helps, but they simply report that a 503 Service Unavailable error was returned.

Worth mentioning: the reason I say the gateway pod is working is that when I request the topology I get the following response:

{
  "brokers": [
    {
      "partitions": [],
      "nodeId": 0,
      "host": "workflow-engine-zeebe-0.workflow-engine-zeebe.ah-playground.svc.cluster.local",
      "port": 26501
    }
  ],
  "clusterSize": 1,
  "partitionsCount": 1,
  "replicationFactor": 1
}

So I have assumed the standalone gateway is talking to the Zeebe Cluster.

config:
Zeebe version 0.22.1

Zeebe Cluster:
clusterSize: 1
partitionCount: 1
replicationFactor: 1

ES
replicas: 1

Thanks
Andy

Multiple gateway replicas may not work with the gossip algorithm

Description

Nodes in a Zeebe cluster use SWIM (i.e. gossip) to propagate information about other nodes. This is how topology is propagated, as well as which node is subscribed to which notification event. Within SWIM, each node is identified by its memberId; if two nodes share the same memberId, problems can arise because, to the other nodes, they are essentially the same. As you can imagine, that has some pretty bad consequences, such as breaking topology (e.g. incomplete topologies) or missing long-polling notifications.

To fix this, we simply need to ensure every gateway replica gets a different memberId. This can be done by setting the env var ZEEBE_GATEWAY_CLUSTER_MEMBERID to the hostname or similar, as sketched below. I think at the moment it is unfortunately hardcoded to zeebe-gateway, which could lead to the issues I've described above if someone uses more than one gateway replica.
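A minimal sketch of how a unique member ID per replica could be set via the Kubernetes Downward API (the env var name comes from this issue; whether the chart wires it this way is an assumption):

env:
  - name: ZEEBE_GATEWAY_CLUSTER_MEMBERID
    valueFrom:
      fieldRef:
        fieldPath: metadata.name   # each pod name is unique, e.g. zeebe-gateway-<hash>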

Note that this does not affect embedded gateways.

/cc @salaboy @Zelldon

CCSM: Operate Deployment is using a Zeebe Image

values.yml:

global:
  image:
    repository: camunda/zeebe
    tag: SNAPSHOT
    pullPolicy: Always
operate:
  enabled: true
  global:
    image:
      repository: camunda/operate
zeebe:
  clusterSize: "3"
  partitionCount: "3"
  replicationFactor: "3"
....

generates this deployment for Operate:

# Source: ccsm-helm/charts/operate/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: os-benchmark-1-operate
  labels:
    app: camunda-cloud-self-managed
    app.kubernetes.io/name: operate
    helm.sh/chart: operate-0.0.9
    app.kubernetes.io/instance: os-benchmark-1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: 1.3.1
    app.kubernetes.io/part-of: camunda-cloud-self-managed
    app.kubernetes.io/component: operate
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels: ...
  template:
    metadata:
      labels: ...
    spec:
      containers:
      - name: operate
        image: "camunda/zeebe:SNAPSHOT"
...

I'd expect the global.image.repository override for Operate to work, but I'm not too familiar with Helm.

I've found the values.yml in the Zeebe benchmarks: https://github.com/camunda-cloud/zeebe/blob/main/benchmarks/setup/default/zeebe-values.yaml#L112-L116
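For what it's worth, Helm global values are shared between the parent chart and all sub-charts, so nesting global: under operate: does not scope the override to Operate. A per-component override would more typically look like the sketch below (assuming the operate sub-chart exposes its own image values):

global:
  image:
    repository: camunda/zeebe
    tag: SNAPSHOT
    pullPolicy: Always
operate:
  enabled: true
  image:
    repository: camunda/operate   # chart-level override instead of a nested global block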

Zeebe Operate unable to import data after upgrade 1.0.1->1.3.3

We are using Zeebe with the Elasticsearch exporter and the Operate dashboard. After upgrading from 1.0.1 to 1.3.3, Operate does not show any processes from after the update and prints migration errors.
I tried to start versions 1.1 and 1.2 of both the Zeebe broker and the Operate dashboard to migrate the data, but it didn't help. It seems Operate is unable to use any of the new data saved by the new broker version.
Neither Zeebe nor Elasticsearch shows any errors, and I found data in the index zeebe-record_process-instance_1.3.3_2022-02-03, so the data should be stored correctly in Elasticsearch.
operate-logs.csv

Zeebe 1.3.3
Operate 1.3.3
Elasticsearch v7.14.1

Add curator as optional deployment

The Helm chart should also be able to deploy a Curator job, for example:

CFG Map

[zell zeebe-benchmark/ ns:zell-helm]$ cat setup/curator-configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: curator-config
  labels:
    app: curator
data:
  action_file.yml: |-
    ---
    # Remember, leave a key empty if there is no value.  None will be a string,
    # not a Python "NoneType"
    #
    # Also remember that all examples have 'disable_action' set to True.  If you
    # want to use this action as a template, be sure to set this to False after
    # copying it.
    actions:
      1:
        action: delete_indices
        description: "Clean up ES by deleting old indices"
        options:
          timeout_override:
          continue_if_exception: False
          disable_action: False
          ignore_empty_list: True
        filters:
        - filtertype: age
          source: name
          direction: older
          timestring: '%Y-%m-%d'
          unit: days
          unit_count: 1
          field:
          stats_result:
          epoch:
          exclude: False
  config.yml: |-
    ---
    # Remember, leave a key empty if there is no value.  None will be a string,
    # not a Python "NoneType"
    client:
      hosts:
        - elasticsearch-master-headless
      port: 9200
      url_prefix:
      use_ssl: False
      certificate:
      client_cert:
      client_key:
      ssl_no_validate: False
      http_auth:
      timeout: 30
      master_only: False
    logging:
      loglevel: INFO
      logfile:
      logformat: default
      blacklist: ['elasticsearch', 'urllib3']

Cronjob

[zell zeebe-benchmark/ ns:zell-helm]$ cat setup/curator-cronjob.yaml 
# https://medium.com/@hagaibarel/running-curator-as-a-kubernetes-cronjob-19eaab9afd3b
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: curator
  labels:
    app: curator
spec:
  schedule: "0 3 * * *" # run every 03:00 AM daily
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 3
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 120
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: bobrik/curator:5.7.6
            name: curator
            args: ["--config", "/etc/config/config.yml", "/etc/config/action_file.yml"]
            volumeMounts:
            - name: config
              mountPath: /etc/config
          volumes:
          - name: config
            configMap:
              name: curator-config
          restartPolicy: OnFailure

Support imagePullSecrets

Looks like imagePullSecrets support is missing from the Helm chart.
It does work if you set the secrets on the ServiceAccount, but it would be great if it were added for the pods as well.

Two use cases could be the following:

  1. Creating custom Zeebe images in a private image registry (e.g. images with custom exporters added).
  2. Using initContainers from a private image registry.
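A minimal sketch of what pod-level support could look like once rendered into a pod spec (the secret and image names are illustrative):

spec:
  imagePullSecrets:
    - name: my-registry-credentials                  # hypothetical pull secret
  containers:
    - name: zeebe
      image: registry.example.com/zeebe-custom:1.0   # hypothetical private image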

[EPIC]: Add Identity

Add Identity Application to the single chart. Dependency of #126.

`zeebe-operate-helm`: Ingress apiVersion `networking.k8s.io/v1` is not available in K8S < 1.19.0

It's not possible to deploy the zeebe-operate-helm chart to Kubernetes clusters with a version < 1.19.0 due to an incompatibility with the Ingress's apiVersion.

Is this by design, because you don't want to support Kubernetes versions which are EOL with this chart?
Otherwise, I'd like to provide a "fix" for that by adjusting the ingress.yaml as it is created by default by $ helm create chart:

# [...]
{{- if .Values.ingress.enabled -}}
# [...]
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
  {{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
  {{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
  {{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
# [...]
spec:
  {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
# [...]
$ k version
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.17", GitCommit:"68b4e26caf6ede7af577db4af62fb405b4dd47e6", GitTreeState:"clean", BuildDate:"2021-03-18T00:54:02Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

$ helm upgrade --install --namespace ${K8S_NAMESPACE} camundacloud/zeebe-operate-helm .
Error: UPGRADE FAILED: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1"
