Comments (22)
Did you use that one?
https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus
and in my chart I'm using this:

```yaml
- name: prometheus
  version: 13.3.1
  repository: "@prometheus-community"
```

So far that's what I used, and I saw those metrics (OpenEBS 2.5.0 at that time).
from charts.
Sure @survivant. Thanks for your comments on these. The docs certainly need modification; I will incorporate your comments as soon as possible. The updated docs will also go a long way toward helping users avoid these situations.
@survivant For now, the parameter can be changed using the command mentioned at the end of this link: https://github.com/openebs/charts/tree/master/charts/openebs.
Moving forward we will remove the dependency on different service account names (i.e., generalize this).
I think all the HTML errors are because I didn't pass the variables in the URL. I think we should have a dropdown with the information instead of having to pass it in the URL.
The people who will mostly use those dashboards will be devops, but in my case the first level of support won't have access to the cluster. They have no idea what a pod is or where to get the information, so it's not really possible to pass a URL with that information. The information should be available in dropdowns; they will be able to select the right values and the board will update itself.
One dashboard should represent that information:

```
root@test-pcl109:~# kubectl -n openebs get cspc,cspi
NAME                                                        HEALTHYINSTANCES   PROVISIONEDINSTANCES   DESIREDINSTANCES   AGE
cstorpoolcluster.cstor.openebs.io/cspc-iep-localpv          3                  3                      3                  14h
cstorpoolcluster.cstor.openebs.io/cspc-iep-mirror           1                  1                      1                  14h
cstorpoolcluster.cstor.openebs.io/cspc-iep-mirror-metrics   1                  1                      1                  14h

NAME                                                              HOSTNAME      FREE    CAPACITY      READONLY   PROVISIONEDREPLICAS   HEALTHYREPLICAS   STATUS   AGE
cstorpoolinstance.cstor.openebs.io/cspc-iep-localpv-ffzv          test-pcl112   7T      7000004220k   false      18                    10                ONLINE   14h
cstorpoolinstance.cstor.openebs.io/cspc-iep-localpv-m7qc          test-pcl110   6600G   6600003290k   false      11                    6                 ONLINE   14h
cstorpoolinstance.cstor.openebs.io/cspc-iep-localpv-tlks          test-pcl113   7390G   7390002970k   false      6                     5                 ONLINE   14h
cstorpoolinstance.cstor.openebs.io/cspc-iep-mirror-jlqv           test-pcl110   7450G   7450029300k   false      11                    7                 ONLINE   14h
cstorpoolinstance.cstor.openebs.io/cspc-iep-mirror-metrics-tgpt   test-pcl113   240G    240050200k    false      4                     3                 ONLINE   14h
```
With that information in one dashboard, the user could see if there is an issue with a pool and whether the pool is near full capacity.
The information could look like that (but instead of PV/PVC it could be cspc/cspi).
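For what it's worth, a pool-capacity panel on such a dashboard could be driven by queries along these lines. This is only a sketch: the metric names (`openebs_used_pool_capacity_percent`, `openebs_used_pool_size`, `openebs_total_pool_size`) are assumptions about what the cStor pool exporter exposes — check your exporter's `/metrics` output for the exact names in your version.

```promql
# Per-pool usage percentage, if the exporter exposes it directly
openebs_used_pool_capacity_percent

# Or derive it from the size metrics (names assumed, see above)
100 * (openebs_used_pool_size / openebs_total_pool_size)
```

Either expression, combined with a threshold (e.g. `> 80`), would also work as an alert rule for pools approaching full capacity.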
cc: @Ab-hishek
I would need a few clarifications first, @survivant:
- How did you set up Prometheus and Grafana in your cluster? And how did you define your Prometheus configs for pool metrics?
- Which version of Grafana are you running?
> I think all the HTML errors are because I didn't pass the variables in the URL. I think we should have a dropdown with the information instead of having to pass it in the URL.
Answering your concern about this: no, the errors are not w.r.t. the URL (except for the localpv dashboard and the storage pool dashboard). I think it is because of the Grafana version being used. Also, I suspect the null values are due to the Prometheus configs being applied differently than expected; that is why I wanted to know how you defined the Prometheus configs.
HTML is not supported in Grafana version 6. Here is the issue:
grafana/grafana#15647
The required Grafana version is mentioned in the dashboards.
Regarding the storage class dashboard issue, I will have to investigate it.
> The people who will mostly use those dashboards will be devops, but in my case the first level of support won't have access to the cluster. They have no idea what a pod is or where to get the information, so it's not really possible to pass a URL with that information. The information should be available in dropdowns; they will be able to select the right values and the board will update itself.
Will work on this. The change needs to be made for only 2 dashboards (localpv and storage pool); all other dashboards already have the dropdown functionality.
I will also try to come up with the dashboard you requested.
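For reference, a dropdown in Grafana is a dashboard template variable. A sketch of one that lists pool instances might look like the following fragment of a dashboard JSON; the metric name inside `label_values(...)` is an assumption, so substitute one your exporter actually exposes:

```json
{
  "name": "cstor_pool",
  "label": "cStor pool",
  "type": "query",
  "datasource": "Prometheus",
  "query": "label_values(openebs_total_pool_size, cstor_pool)",
  "refresh": 2,
  "includeAll": true,
  "multi": true
}
```

Panels then reference the selection as `$cstor_pool` in their queries, so the board updates itself when the user picks a value from the dropdown instead of passing it in the URL.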
@Ab-hishek I created a project here: https://github.com/survivant/openebs-grafana

My setup: on-premise.

```
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
```

cStor - OpenEBS 2.6.0 (installed with Helm)

I'm using the default Prometheus, AlertManager, and Grafana. (In the dependencies you will see kube-prometheus-stack; I'm not using it anymore, we are switching to the defaults.)

I added the OpenEBS Grafana dashboards to my chart, and I also added dashboards that came from Prometheus-operator.

The next step will be to import the rules from Prometheus-operator (from here: https://github.com/prometheus-operator/kube-prometheus/blob/master/manifests/kubernetes-prometheusRule.yaml).
What I meant is: how did you apply the configs to let Prometheus know where to scrape the OpenEBS pool and volume related metrics from? In openebs-monitoring-pg.yaml, which is referenced in the README.md inside the grafana-charts folder, Prometheus scrape configs are defined for OpenEBS pools and volumes. There, labels like openebs_io_cstor_pool_cluster are replaced with other labels, which are then used to build the dashboards. But in your case the labels are not getting replaced (as is visible in the storage pool claim dashboard, if you look at any of the metrics returned in the Used pool capacity panel), and hence the values come out as null in the Text panel where the HTML code is written.
I let Prometheus scrape using the defaults. I think it will scrape all pods that have the annotations:

```yaml
prometheus.io/path: /metrics
prometheus.io/port: "9500"
prometheus.io/scrape: "true"
```
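For context, the conventional scrape job that honors those annotations looks roughly like this. This is a sketch of the well-known annotation-based pattern, not anything OpenEBS-specific; the job name is arbitrary:

```yaml
- job_name: 'annotated-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Keep only pods that opt in via prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Honor a custom metrics path from prometheus.io/path
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Honor a custom port from prometheus.io/port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
```

Note that this job only rewrites the scrape target; it does not rename pod labels such as openebs_io_cstor_pool_cluster, which is why dashboards built on the renamed labels come up empty under a vanilla setup.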
In that case I think you will need to replace the queries in the dashboard templates with the original field names. For example:
- replace `storage_pool_claim` with `openebs_io_cstor_pool_cluster`
- replace `cstor_pool` with `openebs_io_cstor_pool_instance`
- replace `openebs_pv` with `openebs_io_persistent_volume`
- replace `openebs_pvc` with `openebs_io_persistent_volume_claim`
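Those substitutions could be scripted over an exported dashboard JSON rather than edited by hand. A sketch (assumes GNU sed for `\b`; the file name and the sample query are hypothetical stand-ins for a real dashboard export):

```shell
# Hypothetical one-line stand-in for an exported dashboard's query
printf '%s\n' 'sum(x{storage_pool_claim="p"}) by (cstor_pool)' > dashboard.json

# \b word boundaries keep cstor_pool from also matching inside the
# just-substituted openebs_io_cstor_pool_cluster (underscore counts
# as a word character, so no boundary exists there)
sed -i.bak \
  -e 's/\bstorage_pool_claim\b/openebs_io_cstor_pool_cluster/g' \
  -e 's/\bcstor_pool\b/openebs_io_cstor_pool_instance/g' \
  -e 's/\bopenebs_pvc\b/openebs_io_persistent_volume_claim/g' \
  -e 's/\bopenebs_pv\b/openebs_io_persistent_volume/g' \
  dashboard.json

cat dashboard.json
# -> sum(x{openebs_io_cstor_pool_cluster="p"}) by (openebs_io_cstor_pool_instance)
```

Running a script like this against each release of the dashboards would avoid repeating the manual edits, though it still has to be re-run whenever the dashboards change.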
I see. There are rules that were added in your Prometheus config:

```yaml
- job_name: 'openebs-pools'
  scheme: http
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_openebs_io_monitoring]
      regex: pool_exporter_prometheus
      action: keep
    # Adding comma-separated source_labels below in order to fetch the metrics for pool claim instances of SPC and CSPC kind
    - source_labels: [__meta_kubernetes_pod_label_openebs_io_storage_pool_claim, __meta_kubernetes_pod_label_openebs_io_cstor_pool_cluster]
      action: replace
      # separator: Separator placed between concatenated source label values, default -> ;
      separator: ''
      target_label: storage_pool_claim
    # Adding comma-separated source_labels below in order to fetch the metrics for pool instances of CSP and CSPI kind
    - source_labels: [__meta_kubernetes_pod_label_openebs_io_cstor_pool, __meta_kubernetes_pod_label_openebs_io_cstor_pool_instance]
      action: replace
      # separator: Separator placed between concatenated source label values, default -> ;
      separator: ''
      target_label: cstor_pool
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: ${1}:${2}
      target_label: __address__
```

In my case I didn't use any new rules; I'm using a vanilla Prometheus.
Should the dashboards support the default values (vanilla setup), or require openebs-monitoring-pg.yaml? I'm thinking about people who are using different setups. If they have to manually modify the OpenEBS Grafana dashboards at each release (whenever there are modifications or new ones), it will be error-prone.
> I let Prometheus scrape using the defaults. I think it will scrape all pods that have the annotations:
> prometheus.io/path: /metrics
> prometheus.io/port: 9500
> prometheus.io/scrape: true

It doesn't happen that way, I guess. I tried to get OpenEBS metrics using the default rules; it doesn't give me any metrics.
Did you try my chart? I have OpenEBS metrics in Prometheus. The metrics are probably scraped with these settings:

```yaml
kubeStateMetrics:
  enabled: true
nodeExporter:
  enabled: true
```
I tried via prometheus-operator through a Helm installation. That didn't work.
> Should the dashboards support the default values (vanilla setup), or require openebs-monitoring-pg.yaml?

In my opinion, using openebs-monitoring-pg.yaml would be better, as it supports SPC as well as CSPC, and CSP as well as CSPI. Otherwise, if we use the defaults, the user would have to change the Prometheus queries according to the OpenEBS version they are using. All of this is taken care of by the openebs-monitoring-pg.yaml file.
If that's the case, it should be written in the docs that the dashboards are made for openebs-monitoring-pg.yaml and that they are not guaranteed to work with other Prometheus setups. My next step is to add rules to my setup: I'll structure it so I can import the Prometheus-operator rules, add another folder for custom rules, and put the OpenEBS ones there.
To help others do the same, the docs should have a section telling users what they must include in their setup if they don't want to use openebs-monitoring-pg.yaml, because in production each company will have its own rules/dashboards/alerts.
@Ab-hishek hello. I found an issue with the latest version of https://raw.githubusercontent.com/Ab-hishek/openebs-monitoring/master/openebs-monitoring-pg.yaml
When using the openebs-cstor chart since 2.6.0, the ServiceAccount changed:

```
Type     Reason        Age                     From                    Message
----     ------        ----                    ----                    -------
Warning  FailedCreate  2m41s (x16 over 5m25s)  replicaset-controller   Error creating: pods "openebs-prometheus-6f5fcdbcd4-" is forbidden: error looking up service account openebs/openebs-maya-operator: serviceaccount "openebs-maya-operator" not found
```

Here is the list of ServiceAccounts:

```
root@test-pcl114:~# kubectl -n openebs get sa
NAME                              SECRETS   AGE
default                           1         12d
openebs-cstor-csi-controller-sa   1         12d
openebs-cstor-csi-node-sa         1         12d
openebs-cstor-operator            1         12d
openebs-ndm                       1         12d
root@test-pcl114:~#
```
It's present for me, @survivant, and works fine; the openebs-maya-operator SA is there.
https://github.com/openebs/charts/blob/gh-pages/2.6.0/cstor-operator.yaml
@Ab-hishek It looks like the Helm cstor chart doesn't generate the same artifacts as cstor-operator.yaml. Here is an example:

```
$ kubectl --context kind-kind create ns openebs
namespace/openebs created
$ kubectl --context kind-kind get sa -n openebs
NAME      SECRETS   AGE
default   1         11s
$ helm install openebs-cstor openebs-cstor/cstor -n openebs --kube-context kind-kind
W0322 15:43:56.037013    1440 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0322 15:43:56.178350    1440 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0322 15:43:58.326777    1440 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0322 15:43:58.335545    1440 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0322 15:43:59.165731    1440 warnings.go:70] storage.k8s.io/v1beta1 CSIDriver is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 CSIDriver
W0322 15:43:59.769799    1440 warnings.go:70] storage.k8s.io/v1beta1 CSIDriver is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 CSIDriver
NAME: openebs-cstor
LAST DEPLOYED: Mon Mar 22 15:43:58 2021
NAMESPACE: openebs
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The OpenEBS cstor has been installed check its status by running:
$ kubectl get pods -n openebs

Use `kubectl get bd -n openebs` to see the list of
blockdevices attached to the Kubernetes cluster nodes.

For more information, visit our Slack at https://openebs.io/community or view
the documentation online at http://docs.openebs.io/.
For more information related to cstor pool and volume provisioning, visit
https://github.com/openebs/cstor-operators/tree/master/docs .
$ kubectl --context kind-kind get sa -n openebs
NAME                              SECRETS   AGE
default                           1         44s
openebs-cstor-csi-controller-sa   1         8s
openebs-cstor-csi-node-sa         1         8s
openebs-cstor-operator            1         8s
openebs-ndm                       1         8s
```
@kmova @akhilerm is there a reason why the service account is named "openebs-cstor-operator" in the Helm chart but "openebs-maya-operator" in the operator YAML file?
All the dashboards have been moved to the new monitoring repo, https://github.com/openebs/monitoring, keeping in mind the comments from this issue. Redundant dashboards have been removed in the new repo and only the relevant ones are stored there. The migration of two dashboards (localpv and cstor volume replicas) is still pending; otherwise the new OpenEBS monitoring stack is good to work with.
I will be closing this issue. Any other issues regarding dashboards can be taken up in the monitoring repository.