
helm-charts's People

Contributors

adamdang, allex1, arukiidou, asherf, caarlos0, calvinbui, davidkarlsen, dependabot[bot], desaintmartin, dongjiang1989, dotdc, drfaust92, gianrubio, gkarthiks, invidian, jkroepke, k8s-ci-robot, lawliet89, monotek, mrueg, naseemkullah, nlamirault, okgolove, quentinbisson, scottrigby, t3mi, torstenwalter, vsliouniaev, zanac1986, zeritti


helm-charts's Issues

Invite current stable prometheus chart maintainers as collaborators with write access

  • 1. We can start by adding a single GitHub CODEOWNERS file to reproduce what the prow OWNERS files contain now (see the sketch after this list). This can help with enabling required reviews on pull requests
  • 2. Then all collaborators can be invited with write access, without changing existing functionality from stable
    - [ ] 3. Cleanup: remove the prow OWNERS files. Edit: this will be done as part of #14 so the initial helm index will be clean
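As a rough illustration, a CODEOWNERS file reproducing per-chart ownership could look like the sketch below (paths and usernames are placeholders, not the actual mapping from the OWNERS files):

    # Each chart directory is owned by its chart maintainers (entries illustrative)
    /charts/prometheus/                @maintainer-a @maintainer-b
    /charts/prometheus-node-exporter/  @maintainer-c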

📊 Maintainers poll

Do we want to keep this level of granularity, or would it be better to give all collaborators write access to all charts in this repo? Let's discuss in comments below…

Transfer ownership and rename scottrigby/prometheus-helm-charts to prometheus-community/helm-charts

Background

Fixes: prometheus-community/community#28 (comment)

Status

  • Maintainers poll about whether #1 is a blocker for transferring the repo. From feedback so far, it appears it is not. See "Is renaming prometheus-operator to kube-prometheus a blocker?" below
  • @brancz final approval as sponsoring prometheus developer

📊 Maintainers poll

Are we good to transfer ownership now?

Note: Final decision is for @brancz, as he's sponsoring the repo transfer for the prometheus-community.

This temporary repo location is working. You can test it now:

$ helm repo add temp-prometheus-community https://scottrigby.github.io/prometheus-helm-charts
"temp-prometheus-community" has been added to your repositories
$ helm repo update
$ helm search repo prometheus | grep temp
temp-prometheus-community/prometheus              	11.12.1      	2.20.1     	Prometheus is a monitoring system and time seri...
temp-prometheus-community/prometheus-adapter      	2.5.1        	v0.7.0     	A Helm chart for k8s prometheus adapter           
temp-prometheus-community/prometheus-blackbox-e...	4.3.1        	0.16.0     	Prometheus Blackbox Exporter                      
temp-prometheus-community/prometheus-cloudwatch...	0.8.4        	0.8.0      	A Helm chart for prometheus cloudwatch-exporter   
temp-prometheus-community/prometheus-consul-exp...	0.1.6        	0.4.0      	A Helm chart for the Prometheus Consul Exporter   
temp-prometheus-community/prometheus-couchdb-ex...	0.1.2        	1.0        	A Helm chart to export the metrics from couchdb...
temp-prometheus-community/prometheus-mongodb-ex...	2.8.1        	v0.10.0    	A Prometheus exporter for MongoDB metrics         
temp-prometheus-community/prometheus-mysql-expo...	0.7.1        	v0.11.0    	A Helm chart for prometheus mysql exporter with...
temp-prometheus-community/prometheus-nats-exporter	2.5.1        	0.6.2      	A Helm chart for prometheus-nats-exporter         
temp-prometheus-community/prometheus-node-exporter	1.11.2       	1.0.1      	A Helm chart for prometheus node-exporter         
temp-prometheus-community/prometheus-operator     	9.3.2        	0.38.1     	DEPRECATED - This chart will be renamed. See ht...
temp-prometheus-community/prometheus-postgres-e...	1.3.1        	0.8.0      	A Helm chart for prometheus postgres-exporter     
temp-prometheus-community/prometheus-pushgateway  	1.4.2        	1.2.0      	A Helm chart for prometheus pushgateway           
temp-prometheus-community/prometheus-rabbitmq-e...	0.5.6        	v0.29.0    	Rabbitmq metrics exporter for prometheus          
temp-prometheus-community/prometheus-redis-expo...	3.5.1        	1.3.4      	Prometheus exporter for Redis metrics             
temp-prometheus-community/prometheus-snmp-exporter	0.0.6        	0.14.0     	Prometheus SNMP Exporter                          
temp-prometheus-community/prometheus-to-sd        	0.3.1        	0.5.2      	Scrape metrics stored in prometheus format and ...

After the transfer – as documented in this repo's README and all of the charts' READMEs – you should be able to do the same with:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Is renaming prometheus-operator to kube-prometheus a blocker?

@brancz see how the temp-prometheus-community/prometheus-operator chart is marked DEPRECATED in the output? Because it's also flagged in Chart.yaml, it will be hidden in the hubs like Helm Hub (and Artifact Hub, its successor). I also made sure to VERY CLEARLY mark the chart README with a huge deprecation warning sign (as well as CLI output if someone tries to install it after missing all that), so no one should mistake it for an active chart.

Regarding the order of this repo transfer and finishing fixing the renamed chart (ongoing in PR #1), I see two options:

  1. Wait to complete that until after transferring this repo to prometheus-community
    • Pro: it will allow us to deprecate the prometheus charts in the stable repo immediately, and allow users to begin contributing directly to them
    • Con: users of stable/prometheus-operator will have to wait until the newly named prometheus-community/kube-prometheus chart is finished to use it. Then again, they will have to wait until it's ready either way
  2. Complete PR #1 before transferring repo ownership to prometheus-community
    • Pro: when we announce the move, we can also announce the upgrade path from stable/prometheus-operator chart to prometheus-community/kube-prometheus
    • Con: it will delay moving the other charts. Since there are far fewer active maintainers of the stable/prometheus-operator chart, it's difficult to predict a completion timeline. Also, if we transferred first we could still announce the planned change (and link to the PR), which may encourage more participation in review/testing and get it done sooner than it would have otherwise

I'm clearly in favor of option 1. Does anyone disagree?

Any other issues need to be resolved first?

All identified issues so far in the GitHub project "prepare repo for transfer to prometheus-community/helm-charts" are complete. The other open issues (apart from #13, which I'm keeping open to remind us to continue holding PRs in stable until this is done, so we can then close them) either must or can wait until after the transfer.

[prometheus-kube-stack] impossible to install chart

Describe the bug
I can't manage to install prometheus-kube-stack, and I don't understand why.
I am on an AWS EC2 instance with kOps and kubectl.

(Screenshot of the failing install attached to the original issue.)

Version of Helm and Kubernetes:

Helm version:
version.BuildInfo{Version:"v3.3.1", GitCommit:"249e5215cde0c3fa72e27eb7a30e8d55c9696144", GitTreeState:"clean", GoVersion:"go1.14.7"}

Which chart:
prometheus-kube-stack

Which version of the chart:
latest version

What happened:
impossible to install prometheus-kube-stack

What you expected to happen:
prometheus-kube-stack to install smoothly

How to reproduce it (as minimally and precisely as possible):
Run this:
helm install monitoring prometheus-community/kube-prometheus-stack
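For reference, the chart repository needs to be added before the install; the full sequence (repo URL as documented in this repo) would be:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install monitoring prometheus-community/kube-prometheus-stack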

[prometheus] prometheus-server continually restarting after SIGTERM

Describe the bug
Prometheus server container continually restarts after receiving SIGTERM. Container will be up for a few minutes, receive SIGTERM, go down for a few minutes, restart, repeat.

Version of Helm and Kubernetes:
Helm:

helm version
version.BuildInfo{Version:"v3.3.0", GitCommit:"8a4aeec08d67a7b84472007529e8097ec3742105", GitTreeState:"dirty", GoVersion:"go1.14.6"}

Kubernetes:

kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.8", GitCommit:"fdba62c353cc548995bbe730321f64176e4f6e4b", GitTreeState:"clean", BuildDate:"2020-04-08T18:15:19Z", GoVersion:"go1.13.8 BoringCrypto", Compiler:"gc", Platform:"linux/amd64"}

Which chart:
stable/prometheus

What happened:
The prometheus-server container regularly restarts after receiving SIGTERM and prometheus is unavailable for several minutes at a time.

What you expected to happen:
Prometheus to continue running while my workflows are running.

How to reproduce it (as minimally and precisely as possible):
Happened after installing helm using homebrew (brew install helm), then prometheus (helm install prometheus stable/prometheus). Installation and initial setup works fine, but restarting problems occur several minutes after installation.

Anything else we need to know:
Moving original issue here as instructed.

I've tried other suggestions mentioned in other GitHub issues (increasing memory allocated to prometheus-server, increasing initial delay for liveness/readiness checks) and so far nothing has helped.
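If it helps triage, two generic diagnostics that usually reveal why the container keeps receiving SIGTERM (for example a failing liveness probe or an OOM kill):

    # Look at the last termination reason and recent events for the pod
    kubectl describe pod <prometheus-server-pod>
    # Logs from the previously terminated container instance (container name may differ)
    kubectl logs <prometheus-server-pod> -c prometheus-server --previous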

My apologies if this issue is with my own cluster configuration rather than the helm chart, but any suggestions would be greatly appreciated.

Thanks!

[prometheus-operator] additionalScrapeConfigs query

I am trying to add a new label called "teamname" to the metrics scraped from kube-state-metrics. My objective is to add teamname as an additional label to those metrics, so that in the PrometheusRule we can set the specific team name on the label.
I tried the config below, but I don't see this new label being added to the kube-state-metrics job on the Prometheus Service Discovery page. Please let me know what the correct method is to do this.
Objective: add a new label named "teamname" so that, based on the metrics, the team name can be set.

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: kube-state-metrics
        relabel_configs:
          - separator: ;
            regex: (.*)
            target_label: teamname
            replacement: "myteam"
            action: replace
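For what it's worth, my understanding is that each entry under additionalScrapeConfigs is a complete, standalone Prometheus scrape config; it does not modify the kube-state-metrics job the operator already generates from the ServiceMonitor. A minimal sketch of a separate job that attaches a static teamname label to everything it scrapes (the target address is illustrative and needs adapting):

    prometheus:
      prometheusSpec:
        additionalScrapeConfigs:
          - job_name: kube-state-metrics-teamname
            static_configs:
              - targets: ["kube-state-metrics.monitoring.svc:8080"]  # illustrative address
                labels:
                  teamname: myteam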

Prometheus Operator chart version: 9.3.1

Image tag or Git SHA:

quay.io/coreos/prometheus-operator:v0.38.1

Kubernetes version information:

kubectl version 1.15

Kubernetes cluster kind: EKS

Clean up prometheus charts issues and PRs in stable repo

After #11 and #28

Note we're already holding PRs in stable (#13)

Issue options:

  1. Close them all with a note to re-open if relevant in the new location (easiest on us, maybe not as nice for end users)
  2. Automate this for users by transferring the issues
    1. temporarily transfer repo to the helm org (because you can only move between repos within the same org)
    2. move issues from helm/charts to helm/prometheus-helm-charts
    3. immediately transfer repo from helm/prometheus-helm-charts to prometheus-community/helm-charts

Since the repo has already been transferred we can only do option 1 above.

PR options:

AFAIK there's no nice way for us to automate transferring open PRs from one repo to another while still allowing the original PR author to own the PR. So I think our best option is to close open PRs in helm/charts with a friendly note on how they can re-open the PR themselves (1. add the new remote locally, 2. open a PR from the same branch they initially used).

Unable to attach or mount volumes: unmounted volumes=[storage-volume], unattached volumes=[storage-volume prometheus-alertmanager-token-ht648 config-volume]: failed to get Plugin from volumeSpec

Hi,

First of all thanks to the team for making such a wonderful helm charts for prometheus.

This issue occurs when I try to do PVC for my existing PV.

Though I see the PV is getting Bound, my Pod is stuck in "ContainerCreating" and I see the error below.

Unable to attach or mount volumes: unmounted volumes=[storage-volume], unattached volumes=[storage-volume prometheus-alertmanager-token-ht648 config-volume]: failed to get Plugin from volumeSpec for volume "storek8s" err=no volume plugin matched

I see a lot of online suggestions for adding a FlexVolume plugin, but I am not sure how to implement that in the existing helm chart's values.yml.

Please help me with the same.

Below is the part of the PersistentVolume for alertmanager in values.yml

persistentVolume:
  ## If true, alertmanager will create/use a Persistent Volume Claim
  ## If false, use emptyDir
  ##
  enabled: true

  ## alertmanager data Persistent Volume access modes
  ## Must match those of existing PV or dynamic provisioner
  ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  accessModes:
    - ReadWriteMany

  ## alertmanager data Persistent Volume Claim annotations
  ##
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"storek8s","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"2Gi"}}}}

  ## alertmanager data Persistent Volume existing claim name
  ## Requires alertmanager.persistentVolume.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  existingClaim: ""

  ## alertmanager data Persistent Volume mount root path
  ##
  mountPath: /data

  volumes:
    - flexVolume:
        driver: fstab/cifs
        fsType: cifs
        options:
          mountOptions: dir_mode=0755,file_mode=0644,noperm
          networkPath: //store/ThinkBigLogs/k8s
        secretRef:
          name: cifs-secret-test
      name: storek8s2

  ## alertmanager data Persistent Volume size
  ##
  size: 2Gi

  ## alertmanager data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClassName: ""

  ## alertmanager data Persistent Volume Binding Mode
  ## If defined, volumeBindingMode: <volumeBindingMode>
  ## If undefined (the default) or set to null, no volumeBindingMode spec is
  ##   set, choosing the default mode.
  ##
  volumeBindingMode: ""
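For comparison, a way that may work without changing the chart: create the PVC (bound to the flexVolume-backed PV) manually outside the chart, then point the chart at it via the existingClaim key shown above. A minimal sketch of the values change:

    alertmanager:
      persistentVolume:
        enabled: true
        existingClaim: storek8s   # PVC created manually beforehand, bound to the flexVolume-backed PV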

Give chart maintainer collaborators correct GitHub repo access level

After #11

See related #12 and #15

Context

Until #11 this repo is under my individual user account, therefore there is only one "collaborator" access level. But after this transfer, we will have options. One suggestion #12 (comment) was that all chart maintainers be given repo "Maintain" access level.

Example options list:
(Screenshot of the repository access-level options attached to the original issue.)

Open questions

  1. The GitHub docs for the Maintain access level say it allows members to push to protected branches. Does this mean bypassing the rules in CODEOWNERS? I don't currently have a repo set up for testing this

Clean up initial repo URL from gh-pages index.yaml

There are still references to https://github.com/scottrigby/prometheus-helm-charts in the gh-pages helm repo index.yaml file. These should now be changed to https://prometheus-community.github.io/helm-charts.

This is not a functional change, since GitHub forwards transferred repo URLs:

All links to the previous repository location are automatically redirected to the new location

But it will look nicer 😄
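One possible way to do the cleanup, sketched with sed (the exact old and new URL forms should be double-checked against the actual index.yaml contents before committing):

    git checkout gh-pages
    sed -i 's#https://github.com/scottrigby/prometheus-helm-charts#https://prometheus-community.github.io/helm-charts#g' index.yaml
    git diff index.yaml   # review the result before committing and pushing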

[kube-prometheus-stack] pull functionality in hack scripts out of main into separate functions

We would like to add our own dashboards, rules, and alerts, but they are private, and we do not want them pushed upstream. We also want to be able to modify those made by kube-prometheus via jsonnet. To do these things, we are vendoring these helm charts as-is, then generating the json for these things via jsonnet with kube-prometheus, then finally piping them into a modified version of the hack scripts to create the templates for the helm charts.

If some of the heavy lifting (particularly the loops) were pulled into their own functions, it would be easier to write slightly modified versions of the hack script, and I wouldn't need to merge updates pulled from upstream as often (that hasn't really been an issue yet, but if all the main functionality is available in specific functions, I have more confidence that it won't become one). It would be nice to just copy these hack scripts into my own python module and use the functions they provide. I assume there would still be some churn as there are no API guarantees in these hack scripts, but it seems more manageable at the function level.

This is the "easy" feature request, though I guess this leads to what I'm ultimately trying to achieve: the ability to add rules, dashboards, and alerts (as well as modify the ones from kube-prometheus), then easily be able to deploy them as part of the helm charts (with all the helm templating, etc.) without having to first push these somewhere public.

Is your feature request related to a problem? Please describe.
Having a "static" set of dashboards, rules, and alerts won't work for us. We need to be able to modify things in the jsonnet as well as add our own stuff. However, using helm is still a huge benefit, in terms of switching them on/off with the values.yaml and using helm templating for variables not known until deployment time (i.e. not known when the charts are generated with these hack scripts).

Describe the solution you'd like
For the easy solution, I would just like any loops or non-trivial functionality to be pulled into functions, so the main function is simplified and I don't need to worry about having to merge things when pulling upstream changes as often.

Ultimately, I'm picturing these hack scripts basically being a helm chart "generator" (since helm natively doesn't support jsonnet or workflows like this), and I'd like to use functions from these hack.py scripts to help with that.
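To make the request concrete, a hypothetical sketch of the structure I have in mind (function names and the dashboard handling are illustrative, not the actual hack.py code):

    import json
    import os

    def render_dashboard_templates(dashboards, out_dir):
        # The loop that currently lives in main(), pulled out so other tooling can reuse it.
        os.makedirs(out_dir, exist_ok=True)
        for name, dashboard in dashboards.items():
            path = os.path.join(out_dir, f"{name}.yaml")
            with open(path, "w") as f:
                # The real scripts would wrap this in a ConfigMap/helm template;
                # plain JSON is used here only to keep the sketch self-contained.
                f.write(json.dumps(dashboard, indent=2))

    def main():
        dashboards = {"example-dashboard": {"title": "Example"}}  # placeholder input
        render_dashboard_templates(dashboards, "templates/grafana/")

    if __name__ == "__main__":
        main()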

Describe alternatives you've considered
We've considered using just kube-prometheus without helm, but given that kubectl apply --prune is still alpha, helm seems to be best for packaging things. Plus the helm templating is a huge plus as well.

Happy to help by making a PR for this.

Import prometheus chart source code from stable repo with git history

Quick how-to, to increase the bus factor:

  1. Bring down the stable charts code

    git clone git@github.com:helm/charts.git prometheus-helm-charts/
    cd prometheus-helm-charts/
    
  2. Filter only prometheus charts and rename stable dir to charts while retaining direct commit history all in one go, with git-filter-repo

    brew install git-filter-repo
    git filter-repo --path-glob 'stable/prometheus*' --path-rename stable/:charts/
    
  3. If you want to adopt main instead of master branch naming, do it now. See #6

    git checkout -b main
    git branch -D master
    
  4. Set the remote and push. See #11 for context

    git remote add origin git@github.com:scottrigby/prometheus-helm-charts.git
    git push -u origin main
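Optionally, a quick sanity check that the filter kept only the prometheus chart paths and their history:

    git ls-files | grep -v '^charts/prometheus'    # expect no output
    git log --oneline -- charts/ | head            # commit history retained for the charts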
    

[prometheus-blackbox-exporter] Linting fails due to use of deprecated apiVersion rbac.authorization.k8s.io/v1beta1

Describe the bug

When running helm lint for the prometheus-blackbox-exporter chart 4.3.1 it fails with the following errors:

  ==> Linting /tmp/750049035/monitoring/prometheus-blackbox-exporter/helm-charts-stable/prometheus-blackbox-exporter/4.3.0/prometheus-blackbox-exporter
  [INFO] Chart.yaml: icon is recommended
  [ERROR] templates/role.yaml: the kind "rbac.authorization.k8s.io/v1beta1 Role" is deprecated in favor of "rbac.authorization.k8s.io/v1 Role"
  [ERROR] templates/rolebinding.yaml: the kind "rbac.authorization.k8s.io/v1beta1 RoleBinding" is deprecated in favor of "rbac.authorization.k8s.io/v1 RoleBinding"
  Error: 1 chart(s) linted, 1 chart(s) failed

Version of Helm and Kubernetes:

Helm v3.3.1
Kubernetes v1.19.0

Which chart:

prometheus-blackbox-exporter

What happened:

Fails linting due to a deprecated api version in role.yaml and rolebinding.yaml
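The fix is presumably just bumping the apiVersion in those two templates, along these lines (a sketch; maintainers may prefer a Capabilities-based conditional instead):

    # templates/role.yaml (and similarly templates/rolebinding.yaml)
    apiVersion: rbac.authorization.k8s.io/v1   # previously rbac.authorization.k8s.io/v1beta1
    kind: Role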

What you expected to happen:

Linting to succeed

How to reproduce it (as minimally and precisely as possible):

  • Clone the repository
  • Navigate to directory charts/prometheus-blackbox-exporter
  • Run Helm (3+) linting with helm lint

Anything else we need to know:

Nope

[prometheus-adapter] Remove colons from the ClusterRoleBinding resource name

Describe the bug

helm lint (with Helm 3.3) fails for the prometheus-adapter Helm chart because of a resource name that is not compliant with Kubernetes naming requirements. The error is the following:

templates/custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml: object name does not conform to Kubernetes naming requirements: "prometheus-adapter:system:auth-delegator"

Particularly, it does not like the colons in the resource name.
Helm uses the following regular expression to check the validity of a resource name:

^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$

I believe the name should become: prometheus-adapter-system-auth-delegator
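For reference, the failing check can be reproduced outside of helm lint by testing the name against that regular expression:

    echo "prometheus-adapter:system:auth-delegator" \
      | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$' \
      && echo valid || echo invalid    # prints "invalid" because of the colons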

Version of Helm and Kubernetes:

Helm: 3.3.0
Kubernetes: 1.18.7

Which chart:

stable/prometheus-adapter

What happened:

helm lint command fails

What you expected to happen:

helm lint command passes

How to reproduce it (as minimally and precisely as possible):

Using Helm 3.3 run the following commands:

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm pull stable/prometheus-adapter --untar
helm lint stable/prometheus-adapter

The output is:

==> Linting stable/prometheus-adapter
[INFO] Chart.yaml: icon is recommended
[ERROR] templates/custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml: object name does not conform to Kubernetes naming requirements: "prometheus-adapter:system:auth-delegator"

Error: 1 chart(s) linted, 1 chart(s) failed

What is the use of adding stable and incubator in releaser

In the GH Actions workflow, we have the following in L30-L33. What is the purpose of adding those repos?

      - name: Add dependency chart repos
        run: |
          helm repo add stable https://kubernetes-charts.storage.googleapis.com/
          helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/

[prometheus] linting fails due to deprecated rbac api version

Describe the bug

Running helm lint fails with the following error:

$ helm lint
==> Linting .
[ERROR] templates/rbac/alertmanager-clusterrole.yaml: the kind "rbac.authorization.k8s.io/v1beta1 ClusterRole" is deprecated in favor of "rbac.authorization.k8s.io/v1 ClusterRole"
[ERROR] templates/rbac/alertmanager-clusterrolebinding.yaml: the kind "rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding" is deprecated in favor of "rbac.authorization.k8s.io/v1 ClusterRoleBinding"
[ERROR] templates/rbac/pushgateway-clusterrole.yaml: the kind "rbac.authorization.k8s.io/v1beta1 ClusterRole" is deprecated in favor of "rbac.authorization.k8s.io/v1 ClusterRole"
[ERROR] templates/rbac/pushgateway-clusterrolebinding.yaml: the kind "rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding" is deprecated in favor of "rbac.authorization.k8s.io/v1 ClusterRoleBinding"
[ERROR] templates/rbac/server-clusterrole.yaml: the kind "rbac.authorization.k8s.io/v1beta1 ClusterRole" is deprecated in favor of "rbac.authorization.k8s.io/v1 ClusterRole"
[ERROR] templates/rbac/server-clusterrolebinding.yaml: the kind "rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding" is deprecated in favor of "rbac.authorization.k8s.io/v1 ClusterRoleBinding"

Error: 1 chart(s) linted, 1 chart(s) failed
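A quick way to locate every affected manifest in the chart (run from the repository root):

    grep -rln 'rbac.authorization.k8s.io/v1beta1' charts/prometheus/templates/rbac/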

Version of Helm and Kubernetes:

$ helm version
version.BuildInfo{Version:"v3.3.1", GitCommit:"249e5215cde0c3fa72e27eb7a30e8d55c9696144", GitTreeState:"dirty", GoVersion:"go1.15"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T21:54:15Z", GoVersion:"go1.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.6-1+64f53401f200a7", GitCommit:"64f53401f200a7a5977955a74ad3f997302774ca", GitTreeState:"clean", BuildDate:"2020-07-15T19:59:02Z", GoVersion:"go1.14.5", Compiler:"gc", Platform:"linux/amd64"}

Which chart:

prometheus

Which version of the chart:

11.13.1

What happened:

Linting fails

What you expected to happen:

Linting to succeed

How to reproduce it (as minimally and precisely as possible):

  • Clone the repository
  • Navigate to directory charts/prometheus
  • Run Helm (3+) linting with helm lint

Anything else we need to know:

Nope

Additional chart maintainers

Problem

Good summary from @torstenwalter #21 (comment):

When introducing CODEOWNERS we will have an issue with charts which only have one maintainer. If that maintainer makes a change then they are not able to approve it. Repository admins would need to use their "superpower" to override the required review and merge it. That might be ok for the start, but I would suggest that we try to add at least a second maintainer there.

Solution

Also from #21 (comment):

Would be great if we could find volunteers here or even better if the chart maintainers could try to motivate people who already contributed to the chart to become maintainer.

We have arrived at a process:

  1. Open a PR proposing yourself as a co-maintainer of one of the charts in need
  2. Use the PR to discuss with current maintainer(s)
  3. We prefer to see a history of contributing to the prometheus charts or other prometheus projects (not only typo PRs etc), to ensure the maintainers of charts have a good understanding of using and supporting these projects. This will make successful co-maintainership more likely, which will in turn help the end user community

Additional maintainer processes should be discussed in a separate issue, and documented in the PROCESSES file

Current status

Affected charts since issue was opened:

Original issue

📊 Maintainers poll
Could any of your charts use additional maintainers?

Let's use this issue to discuss. If it gets unwieldy, after we move to the prometheus-community GH org, we could consider enabling team discussions (but let's cross that bridge when we get there).

Hold all PRs in stable repo for prometheus* charts

Existing prometheus PRs in stable are on hold for now (see linked PRs on prometheus-community/community#28), but we don't have any automated way to hold new PRs that may open between now and when that issue is resolved (this new helm repo ready to be transferred to prometheus-community, stable prometheus charts deprecated, new helm repo with charts listed on the hubs).

Maintainers: Please help with this if you see new PRs for prometheus charts open in the meantime.

Let's keep this open to track the effort, until the above is complete.

  • Existing prometheus PRs in stable
  • New prometheus PRs in stable

Add DCO check

Status

  • Add DCO app to scottrigby/prometheus-helm-charts
  • Authorize and re-add app after transferring repo to prometheus-community (#11)

Original issue

A simple approach is to install https://github.com/apps/dco
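For contributors, the DCO check only requires that commits carry a sign-off, e.g.:

    git commit -s -m "Fix chart lint errors"   # -s adds the Signed-off-by trailer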

Git Repo Admins

Per #20 we have CODEOWNERS so people who maintain the 17 individual prometheus charts can still do so safely while sharing a common GitHub/helm repo with common CI/CD, related issues where appropriate, etc. This will allow the prometheus community charts to grow with new charts, and new maintainers as needed (existing maintainers of a chart can approve new maintainers, and repo admins can add them to the CODEOWNERS file).

Now that the repo has transferred, we are moving users from collaborators to GitHub teams.

This is the team structure:

  • helm-charts: top level team for all members (Read access to this git repo)
    • helm-charts-admins: nested team (Admin access to this git repo)
    • helm-charts-maintainers: nested team (Write access to this git repo)

Currently @brancz @SuperQ and I are in the helm-charts-admins team. It wouldn't make sense for all chart maintainers to also do this (that would remove the CODEOWNERS benefit mentioned above). I don't know if there is criteria within the prometheus-community org for who can administer the git repo – I have asked, and we'll update this issue with an answer on that.

In the meantime, and apart from that, would it make sense for chart maintainers to have perhaps 2 repo maintainers, and to define the duties for that team (what they are expected to do and what they will not do, like merge PRs for charts they don't maintain), as well as a process for it (perhaps starting with people who have helped with the repo side of things, and then some kind of rotation)?

I don't need to continue as an admin on this repo now that it's set up, however I'd be ok with staying on that team if people would like me to continue to help as backup.

Let's discuss this here. This repo is co-owned by the charts maintainers and ultimately governed by the prometheus-community org. Just wanted to start the conversation, but I think it's up to all of you. We can only assign 10 people to an issue, so CCing you all below instead:

CC @brancz @SuperQ

CC CODEOWNERS:

[kube-prometheus-stack] Compatibility matrix with k8s versions

Is your feature request related to a problem? Please describe.
kube-prometheus has a compatibility matrix at https://github.com/prometheus-operator/kube-prometheus#kubernetes-compatibility-matrix . Since this helm chart is based on kube-prometheus, I was wondering if this chart would also have compatibility matrix.

Describe the solution you'd like
If the latest version of this chart is not compatible with all versions of k8s (1.10+ as mentioned in the docs), a compatibility chart would help a lot in figuring out which version of this chart should be used with a given k8s version.

Describe alternatives you've considered
This feature request assumes that like kube-prometheus, this helm chart is not compatible with all major k8s versions. If this helm chart is compatible with all major k8s versions, please feel free to close it.

Additional context
None

Add all/custom namespaces to monitoring with kube-prometheus-stack

Hi team,
I wasn't able to find how to add metrics from all namespaces to monitoring. It looks like only metrics in the deployment namespace, kube-system, and default were added.
All services in custom namespaces were added successfully with the additionalServiceMonitors section, but how can I add non-service metrics (volumes, ingress, etc.)? I tried the values.yaml section "## Namespaces to scope the interaction of the Prometheus Operator and the apiserver (allow list)" but without success.
I found only this workaround https://github.com/prometheus-operator/kube-prometheus#adding-additional-namespaces-to-monitor and this solution prometheus-operator/prometheus-operator#2890 (comment), but they don't help. Should I create a Role and RoleBinding in every necessary namespace, or something else?

eks 1.17
prometheus-operator-9.3.0
app version 0.38.1
Thank you in advance.
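In case it helps, the chart exposes namespace selectors on the Prometheus spec. A hedged sketch of values that let Prometheus pick up ServiceMonitors/PodMonitors from all namespaces (key names should be verified against the values.yaml of the chart version in use):

    prometheus:
      prometheusSpec:
        serviceMonitorSelectorNilUsesHelmValues: false  # also select ServiceMonitors not created by this release
        serviceMonitorNamespaceSelector: {}             # empty selector = all namespaces
        podMonitorNamespaceSelector: {}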

[prometheus] Make a mount which is defined in extraConfigmapMounts optional

Is your feature request related to a problem? Please describe.

We want to mount multiple configmaps as alerting rules via extraConfigmapMounts, as described in this comment: helm/charts#9254 (comment)
However, one configmap prometheus-default-alerts is available by default (managed by the team that sets up prometheus, and is deployed before the helm chart), but another one, prometheus-alerts, should only be mounted if it is available (managed by another team).

Currently, when I deploy prometheus, it cannot start, since the second configmap is not available. (MountVolume.SetUp failed for volume "server-prometheus-alerts" : configmap "prometheus-alerts" not found)

Since Kubernetes seems to support marking configmap mounts as optional, so that they are mounted only if they are available (at least as stated in https://stackoverflow.com/a/48228418/2286108), it would be nice if the helm chart provided this functionality for these mounts too.

Describe the solution you'd like
It would be great if there were a flag like the one shown in the code piece below, which would allow mounting configmaps only if they are available:

extraConfigmapMounts:
    - name: prometheus-default-alerts
      mountPath: /etc/config/alerts
      configMap: prometheus-default-alerts
      readOnly: true
    - name: prometheus-alerts
      mountPath: /etc/config/alerts
      configMap: prometheus-alerts
      readOnly: true
      optional: true                # <-- this one is new
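For context, the underlying Kubernetes volume spec already supports this; the chart template would essentially need to pass the flag through to something like:

    volumes:
      - name: prometheus-alerts
        configMap:
          name: prometheus-alerts
          optional: true   # the pod starts even if the configmap does not exist yet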

If a similar functionality is already available, please let me know, but I didn't find a hint in values.yaml or in the documentation.

Additionally, this would for example also be a good idea for alertmanager.configFromSecret and other places where configmaps/secrets are mounted.

If additional information is required (e.g. current values.yaml), please let me know!

Merging Process

I noticed that several people approved #51 but no one merged it. What should our process for this look like?

  1. The one who created the PR is in charge of merging
    I think that's good for PRs like #1, where it's good that multiple people have a look. The downside is that it only works if the PR creator has write permission for this repository. As we want outside contributions, that option can be excluded as a general process.
  2. The one who reviews the PR also merges it
    That's simple and straightforward. The downside is that people have to get back to the PR if they approve it before status checks have finished.
  3. Implement auto-merge based on criteria, e.g.
    • PR has at least one approved review
    • all required status checks are passing (chart linting, superlinter, DCO)
    That's similar to what we had in the stable repository. We would need to investigate how to implement it.

For option 3 we could look into https://github.com/pascalgn/automerge-action
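A rough sketch of what that could look like as a workflow, based on that action's README as far as I recall it (triggers, env vars, and the release tag should all be verified before adopting):

    name: automerge
    on:
      pull_request:
        types: [opened, synchronize, labeled, ready_for_review]
      check_suite:
        types: [completed]
    jobs:
      automerge:
        runs-on: ubuntu-latest
        steps:
          - uses: pascalgn/automerge-action@<release-tag>   # pin to a released tag
            env:
              GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
              MERGE_LABELS: "automerge"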

Which option would you prefer?

[prometheus-blackbox-exporter] Wrong configuration default for config.modules.http_2xx.http.valid_http_versions

(Repost of helm/charts#23535)

Describe the bug

Wrong configuration default for config.modules.http_2xx.http.valid_http_versions.

Version of Helm and Kubernetes:

version.BuildInfo{Version:"v3.2", GitCommit:"", GitTreeState:"", GoVersion:"go1.14.3"}
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"archive", BuildDate:"2020-07-20T23:19:14Z", GoVersion:"go1.14.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-e16311", GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

Which chart:

stable/prometheus-blackbox-exporter

What happened:

Probing HTTP/2.0 sites results in failures.

What you expected to happen:

Probing HTTP/2.0 sites should work.

How to reproduce it (as minimally and precisely as possible):

Run the default setup against any HTTP/2.0 site.

Anything else we need to know:

This can be fixed by replacing valid_http_versions: ["HTTP/1.1", "HTTP/2"] with valid_http_versions: ["HTTP/1.1", "HTTP/2.0"].
See prometheus/blackbox_exporter#658

prometheus-node-exporter ignores taints

helm/charts#23542 recreated here

Describe the bug
prometheus-node-exporter ignores taints

Version of Helm and Kubernetes:
helm 3, k8s 1.16.3

Which chart:
kube-prometheus-stack

What happened:

Taints: node.kubernetes.io/os=windows:NoSchedule
Taints: search=true:NoSchedule

the above taints are being ignored and node-exporter is scheduled on those nodes regardless; it gets stuck in ContainerCreating on the Windows nodes

monitoring kube-prometheus-stack-prometheus-node-exporter-2l5wz 1/1 Running 0 5m28s
monitoring kube-prometheus-stack-prometheus-node-exporter-4vmt4 1/1 Running 0 5m28s
monitoring kube-prometheus-stack-prometheus-node-exporter-5lkwh 0/1 ContainerCreating 0 5m28s
monitoring kube-prometheus-stack-prometheus-node-exporter-6l7bc 1/1 Running 0 5m28s
monitoring kube-prometheus-stack-prometheus-node-exporter-89926 1/1 Running 0 5m28s
monitoring kube-prometheus-stack-prometheus-node-exporter-bbpcg 0/1 ContainerCreating 0 5m28s
monitoring kube-prometheus-stack-prometheus-node-exporter-l9p4r 1/1 Running 0 5m27s

What you expected to happen:
node-exporter should not be scheduled on windows nodes

How to reproduce it (as minimally and precisely as possible):

setup a cluster with taints
helm install [RELEASE_NAME] prometheus-community/kube-prometheus-stack

workaround:

--set prometheus-node-exporter.nodeSelector."beta\.kubernetes\.io/os"=linux
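The same workaround expressed as a values override (the subchart key follows the dependency name in kube-prometheus-stack):

    prometheus-node-exporter:
      nodeSelector:
        beta.kubernetes.io/os: linux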

[kube-prometheus-stack] Can't find upgrade procedure from stable/prometheus-operator

Hi, I've been looking everywhere but I can't find how to upgrade stable/prometheus-operator to the renamed chart. I looked in open issues but found nothing.

I've never upgraded a renamed chart, I tried to look it up with no luck. Can anyone point me to that if it's anywhere?

I guess it's worth mentioning it in https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md ?

Dependent charts of prometheus-operator?

I see that we are moving the prometheus charts from stable to this repo. What about the other charts that prometheus-operator deploys, like grafana and kube-state-metrics?

Markdown: should we require one sentence per line?

📊 Maintainers poll: should we require one sentence per line in markdown files?

I initially removed these from the READMEs where they existed, but am rethinking that now. I removed them because they were not consistent. But maybe it's better to make them consistent the other way, by adding them back, and then requiring that moving ahead. https://sembr.org/ makes a compelling case for this, and I'm inclined to agree.

This issue is only about whether maintainers also agree, and want this to be part of this repo style guide. Thoughts?

Also see this related issue about whether to enforce markdownlint in general: #19

For options, see:
