
Couchbase Prometheus Exporter

DEPRECATED DO NOT USE

This project has been deprecated.

Please use the CMOS Exporter

There is also a role available for this exporter: couchbaselabs.couchbase-exporter

DEPRECATED DO NOT USE

To Run Locally

Install requirements

# pip install -r requirements.txt

Set the environment variables:

export CB_DATABASE='<>,<>'
export CB_USERNAME='<>'
export CB_PASSWORD='<>'

List more than one node, separated by commas. The order of the nodes, and the services running on them, do not matter.
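As an illustration of the comma-separated format (this is a sketch, not the exporter's actual parsing code), the node list can be split like so:

```python
import os

# Illustrative sketch: split the comma-separated CB_DATABASE value into a
# list of node addresses, dropping stray whitespace and empty entries.
def parse_nodes(value):
    return [node.strip() for node in value.split(",") if node.strip()]

# Default value here is only an example.
nodes = parse_nodes(os.environ.get("CB_DATABASE", "10.0.0.1,10.0.0.2"))
```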

By default the exporter runs in a "cluster" configuration: when scraped, it returns the relevant metrics for a particular service for every node in the cluster, so only a single exporter has to be configured per cluster. This may be undesirable, or you may wish to install the exporter on each node in the cluster to reduce the overall payload size of the metrics returned. To do so, set the variable CB_EXPORTER_MODE to local; all requests will then be made only to the localhost, and only metrics relevant to that single node will be returned.

export CB_EXPORTER_MODE="local"

If you are working with very large clusters, or clusters with many indexes, it may be more performant to stream your results to Prometheus instead of trying to load the full dataset at one time. To do that, export the following variable:

export CB_STREAMING=true

Another way to lower the payload size is to reduce the number of samples per poll. You can do this by specifying how many samples you want from the last minute. Valid entries are: 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60. Other values are accepted, but the system will get as close as possible to your number.

export CB_RESULTSET=1
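One plausible reading of "get as close as possible" is snapping the requested count to the nearest valid value (the divisors of 60). This is a hypothetical sketch, not the exporter's actual logic:

```python
# Valid sample counts per minute, from the README above.
VALID_SAMPLES = [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]

# Hypothetical sketch: snap an arbitrary CB_RESULTSET value to the
# closest valid sample count.
def snap_resultset(requested):
    return min(VALID_SAMPLES, key=lambda v: abs(v - requested))
```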

If you would like to run cbstats from the exporter in cluster mode (to load the results into Prometheus and Grafana), you need to set up passwordless SSH using an SSH key. If you are running the exporter in local mode, SSH setup is not required, as the local path will be used. Once the public key is loaded on each of the Couchbase nodes and the private key is loaded on the exporter, you can configure the exporter to use the key. The SSH user will need permission to run cbstats from whatever directory you have installed it in. By default that will be /opt/couchbase/bin/cbstats.

export CB_KEY=/path/to/private/key
export CB_CBSTAT_PATH=/opt/couchbase/bin/cbstats
export CB_SSH_UN=<username associated with key>

If you are not using Docker to run this, it may be beneficial to add these variables to /etc/profile.d/exporter.sh:

sudo su
{
    echo 'export CB_DATABASE="<>,<>"'
    echo 'export CB_USERNAME="<>"'
    echo 'export CB_PASSWORD="<>"'
    echo 'export CB_KEY=/path/to/private/key'
    echo 'export CB_CBSTAT_PATH=/opt/couchbase/bin/cbstats'
    echo 'export CB_SSH_UN=<username associated with key>'
} > /etc/profile.d/exporter.sh
sudo chmod +x /etc/profile.d/exporter.sh
source /etc/profile.d/exporter.sh

Run with uWSGI:

uwsgi --http :5000 --processes 5 --pidfile /tmp/cbstats.pid --master --wsgi-file wsgi.py

Node Exporter and Process Exporter are valuable exporters that extract information which is not gathered by Couchbase or the Couchbase Exporter, and there is plenty of documentation on how to get them set up and running. However, to correlate their metrics with the Couchbase metrics, there need to be common labels. The Couchbase Exporter exposes two additional endpoints, /metrics/node_exporter and /metrics/process_exporter. These endpoints act as a proxy, calling the Node / Process Exporter directly, but before the stats are returned, cluster and node name labels are added to the metrics. By default Node Exporter runs on port 9100; since that port is used by Couchbase Server, it should be changed to 9200 or some other port. Process Exporter runs on port 9256. These ports can be changed in the exporter by setting:

export CB_NODE_EXPORTER_PORT=9200
export CB_PROCESS_EXPORTER_PORT=9256
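The label-injection proxy described above can be sketched against Prometheus's text exposition format. The label names `cluster` and `node` and the function name are illustrative assumptions, not the exporter's actual code:

```python
# Illustrative sketch: add cluster/node labels to each sample line of
# Prometheus text-format output proxied from Node/Process Exporter.
# Label names are assumptions, not necessarily what the exporter emits.
def add_labels(text, cluster, node):
    out = []
    extra = 'cluster="{0}",node="{1}"'.format(cluster, node)
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            out.append(line)  # keep HELP/TYPE comments and blanks unchanged
            continue
        if "{" in line:
            # Metric already has labels: prepend ours inside the braces.
            line = line.replace("{", "{" + extra + ",", 1)
        else:
            # Bare metric: add a label set between name and value.
            name, rest = line.split(" ", 1)
            line = "{0}{{{1}}} {2}".format(name, extra, rest)
        out.append(line)
    return "\n".join(out)
```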

To Run with Docker:

# docker network create -d macvlan --subnet=<>/<> --gateway=<> -o parent=<> --ip-range=<>/<> pub_net
# cd <gitRepo>
# docker build --tag=cbstats .
# docker run --name <container name> --env CB_DATABASE='<cluster address>' --env CB_USERNAME='<username>' --env CB_PASSWORD='<password>' --env CB_STREAMING=true --network pub_net cbstats
# docker start <container name>
# docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container name>

Prometheus Configuration

To configure Prometheus, a config file has been added to this repository utilizing the different endpoints available in this exporter.

To Run Prometheus with Docker

# cd <gitRepo>/prometheus
# docker build -t my-prometheus .
# docker run --name <container name> -p 9090:9090 --network pub_net my-prometheus
ctrl+c
# docker start <container name>
# docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container name>

To run Grafana with Docker

# docker run -d -p 3000:3000 --name <container name> --network pub_net -e "GF_INSTALL_PLUGINS=grafana-clock-panel,camptocamp-prometheus-alertmanager-datasource" grafana/grafana
# docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container name>

Testing

With the exporter running, an easy way to test connectivity is to use your browser or curl against the endpoints:

curl http://<ipaddress>:5000/metrics/system

To test that the metrics are being returned in the way Prometheus expects to read them, you can use promtool. The following command must be run from the Prometheus installation directory:

curl -s http://<ipaddress>:5000/metrics/system | ./promtool check metrics

Contributors

balaji-it, bentonam, davischapmancouchbase, jadtalbert, maskayman, tdenton8772

cbprometheus_python's Issues

Add support for Slow Queries

Need to add support for slow queries. It should behave similarly to the slow-queries.sh script:

  1. Loop over each query node
  2. Call that specific query node, get its system:completed_requests values, and aggregate them appropriately.
  3. Generate a hash signature of the query so that if someone wants to drop the statement due to text size, they still have something that can be grouped appropriately in prometheus.
  4. Will need to have logic in place to only pull statements from 1 minute prior
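Step 3 above can be sketched as follows. This is an illustrative approach (normalization scheme and signature length are assumptions), not an implementation from the exporter:

```python
import hashlib

# Sketch of step 3: derive a stable signature for a query statement so
# that, if the statement text is dropped for size reasons, results can
# still be grouped in Prometheus. The 16-hex-char truncation is arbitrary.
def query_signature(statement):
    # Normalize whitespace and case so trivially different texts group together.
    normalized = " ".join(statement.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]
```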

cbstats information for GET and Store requests

This script will be executed every 24 hours and will reset the stats. It will capture cbstats info on STORE and GET commands.
The values will be grouped into the following latency bands:

Get(<1ms)
Get(1-32ms)
Get(32ms-512ms)
Get(>512ms)
Store(<1ms)
Store(1-32ms)
Store(32ms-512ms)
Store(>512ms)
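The grouping above can be sketched as a simple bucketing function. Boundary handling (whether 32ms falls in the lower or upper band) is an assumption:

```python
# Sketch of the grouping above: map an operation's latency (in ms) to one
# of the four bands used for GET and STORE commands.
def latency_bucket(op, ms):
    if ms < 1:
        band = "<1ms"
    elif ms <= 32:          # assumption: 32ms belongs to the lower band
        band = "1-32ms"
    elif ms <= 512:
        band = "32ms-512ms"
    else:
        band = ">512ms"
    return "{0}({1})".format(op, band)
```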

System level Metrics

System level metrics are exposed through the Prometheus Project Node Exporter. Once the Prometheus Node Exporter binaries are installed on the Couchbase nodes, we would like to get system level metrics from the Couchbase Exporter, such as top metrics, data volume directory checks, and OS level metrics.

Allow exporter to be run in standalone or single node configuration

As the responses can be fairly large, it would be useful to allow the exporter to be run locally on each Couchbase node. In this mode the exporter would only look at stats/services/metrics on that single node and would not query the cluster manager to determine all available nodes.

Specify # of Samples

Assuming that Prometheus calls the exporter once per minute, and the default Couchbase zoom parameter of minute is being used, there are 60 samples for each stat returned from the exporter. It has been requested that this be a configurable value.

It should be able to be one of the following values: 60, 30, 20, 15, 10, 6, 5, 4, 3, 2, 1

This will reduce the payload size of what prometheus has to ingest.

Possibility to query a couchbase cluster through HTTPS on port 18091

Hello,

I am trying to collect cluster statistics through HTTPS over port 18091. Could you let me know if it is possible? I am using the standard CentOS 7 Python, i.e. version 2.7.5.
Curl queries from my collection server to the cluster nodes from a terminal are OK.

Thanks very much

Add Node Status

Currently we aren't outputting the node status from /pools/nodes

Whitelist/Blacklist Metrics

While metrics can be dropped via the prometheus.yml file they still have to be processed and transmitted across the wire. It would be beneficial if we supported some sort of inclusion/exclusion syntax to automatically only include or exclude certain stats. This again would reduce the payload size and not require prometheus to do as much work.

Add endpoint /metrics/exporter

This endpoint should contain information about the exporter itself. This needs to include at least the exporter version. Can be the latest git commit id.

Add Service specific Endpoints

/metrics
/metrics/buckets
/metrics/query
/metrics/indexes
/metrics/eventing
/metrics/xdcr
/metrics/analytics
/metrics/fts
/metrics/system

Allow Configuration via JSON file

It would be useful to have a configuration file for the exporter that is a JSON file. This will allow for more detailed/complex arguments. Environment variables could still be used (optionally) for certain scenarios.

Add Prepared Stats to Query Endpoint

Need to add the system:prepareds information to the output; this should behave just like calls to system:completed_requests, as it is per-node stats.

Turn loops into generators

It should be possible to turn the loops into generators and yield results as they come back. This may be problematic since we have to parse the entire JSON, but it is something to try instead of only using the generator at the end.

Node Exporter enhancement request

The number of database users should be something the exporter scrapes as well, especially if Enterprise starts going the HashiCorp Vault way for users and credentials.

Most of the panels show "No Data"

Most of the panels show "No Data". Is there any additional configuration that needs to be done?

Below is a screenshot of the "Couchbase Performance Dashboard". The "XDCR" dashboard also has panels with "No Data". For all the "No Data" panels, the PromQL metrics are not found in Prometheus.

[screenshot: Couchbase Performance Dashboard]

Allow for Multiple Clusters to be specified

Since the exporter is stateless and just performs REST API calls, there is really no reason to have a separate instance per Couchbase installation. We can still support the ENV variables and treat it as a single cluster, but it would be useful to have a JSON file of clusters.

Prometheus provides a way to add query string parameters, this could be used to specify the cluster that you want to use.
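The query-string idea above could look something like this sketch; the parameter name `cluster` and the config shape are assumptions, not part of the exporter:

```python
from urllib.parse import parse_qs, urlparse

# Illustrative sketch: select one of several configured clusters based on a
# "cluster" query-string parameter in the scrape URL. The parameter name and
# the clusters-dict shape are assumptions for this example.
def cluster_from_url(url, clusters, default=None):
    params = parse_qs(urlparse(url).query)
    name = params.get("cluster", [default])[0]
    return clusters.get(name, clusters.get(default))
```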
