
redis_exporter's Introduction

Prometheus ValKey & Redis Metrics Exporter


Prometheus exporter for ValKey metrics (Redis-compatible).
Supports ValKey and Redis 2.x, 3.x, 4.x, 5.x, 6.x, and 7.x

Ukraine is still suffering from Russian aggression, please consider supporting Ukraine with a donation.


Building and running the exporter

Build and run locally

git clone https://github.com/oliver006/redis_exporter.git
cd redis_exporter
go build .
./redis_exporter --version

Pre-built binaries

For pre-built binaries please take a look at the releases.

Basic Prometheus Configuration

Add a block to the scrape_configs of your prometheus.yml config file:

scrape_configs:
  - job_name: redis_exporter
    static_configs:
    - targets: ['<<REDIS-EXPORTER-HOSTNAME>>:9121']

and adjust the host name accordingly.

Kubernetes SD configurations

To show instances in the drop-down as human-readable names rather than IPs, it is suggested to use instance relabelling.

For example, if the metrics are being scraped via the pod role, one could add:

          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: instance
            regex: (.*redis.*)

as a relabel config to the corresponding scrape config. As per the regex value, only pods with "redis" in their name will be relabelled as such.

Similar approaches can be taken with other role types depending on how scrape targets are retrieved.
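
For context, here is a minimal sketch of a complete scrape config using the pod role together with the relabel rule above (the job name and discovery details are illustrative and will depend on your cluster):

scrape_configs:
  - job_name: 'redis_exporter_k8s_pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: instance
        regex: (.*redis.*)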

Prometheus Configuration to Scrape Multiple Redis Hosts

The Prometheus docs have a very informative article on how multi-target exporters are intended to work.

Run the exporter with the command line flag --redis.addr= (i.e. an empty address) so it won't try to access the local instance every time the /metrics endpoint is scraped. With the configuration shown below, Prometheus will use the /scrape endpoint instead of /metrics. As an example, the first target will be queried with this web request: http://exporterhost:9121/scrape?target=first-redis-host:6379
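
A minimal sketch of starting the exporter in this multi-target mode (the listen address shown is the default and only included for illustration):

./redis_exporter --redis.addr= --web.listen-address=0.0.0.0:9121

The corresponding Prometheus scrape configuration then looks like this: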

scrape_configs:
  ## config for the multiple Redis targets that the exporter will scrape
  - job_name: 'redis_exporter_targets'
    static_configs:
      - targets:
        - redis://first-redis-host:6379
        - redis://second-redis-host:6379
        - redis://second-redis-host:6380
        - redis://second-redis-host:6381
    metrics_path: /scrape
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: <<REDIS-EXPORTER-HOSTNAME>>:9121

  ## config for scraping the exporter itself
  - job_name: 'redis_exporter'
    static_configs:
      - targets:
        - <<REDIS-EXPORTER-HOSTNAME>>:9121

The Redis instances are listed under targets, the Redis exporter hostname is configured via the last relabel_config rule.
If authentication is needed for the Redis instances then you can set the password via the --redis.password command line option of the exporter (note that this means you can currently only use one password across all the instances scraped this way; run several exporters if this is a problem).
You can also use a json file to supply multiple targets by using file_sd_configs like so:

scrape_configs:
  - job_name: 'redis_exporter_targets'
    file_sd_configs:
      - files:
        - targets-redis-instances.json
    metrics_path: /scrape
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: <<REDIS-EXPORTER-HOSTNAME>>:9121

  ## config for scraping the exporter itself
  - job_name: 'redis_exporter'
    static_configs:
      - targets:
        - <<REDIS-EXPORTER-HOSTNAME>>:9121

The targets-redis-instances.json should look something like this:

[
  {
    "targets": [ "redis://redis-host-01:6379", "redis://redis-host-02:6379"],
    "labels": { }
  }
]

Prometheus uses file watches and all changes to the json file are applied immediately.

Command line flags

Name Environment Variable Name Description
redis.addr REDIS_ADDR Address of the Redis instance, defaults to redis://localhost:6379. If TLS is enabled, the address must use the rediss:// scheme, e.g. rediss://localhost:6379.
redis.user REDIS_USER User name to use for authentication (Redis ACL for Redis 6.0 and newer).
redis.password REDIS_PASSWORD Password of the Redis instance, defaults to "" (no password).
redis.password-file REDIS_PASSWORD_FILE Password file of the Redis instance to scrape, defaults to "" (no password file).
check-keys REDIS_EXPORTER_CHECK_KEYS Comma separated list of key patterns to export value and length/size, eg: db3=user_count will export key user_count from db 3. db defaults to 0 if omitted. The key patterns specified with this flag will be found using SCAN. Use this option if you need glob pattern matching; check-single-keys is faster for non-pattern keys. Warning: using --check-keys to match a very large number of keys can slow down the exporter to the point where it doesn't finish scraping the redis instance.
check-single-keys REDIS_EXPORTER_CHECK_SINGLE_KEYS Comma separated list of keys to export value and length/size, eg: db3=user_count will export key user_count from db 3. db defaults to 0 if omitted. The keys specified with this flag will be looked up directly without any glob pattern matching. Use this option if you don't need glob pattern matching; it is faster than check-keys.
check-streams REDIS_EXPORTER_CHECK_STREAMS Comma separated list of stream-patterns to export info about streams, groups and consumers. Syntax is the same as check-keys.
check-single-streams REDIS_EXPORTER_CHECK_SINGLE_STREAMS Comma separated list of streams to export info about streams, groups and consumers. The streams specified with this flag will be looked up directly without any glob pattern matching. Use this option if you don't need glob pattern matching; it is faster than check-streams.
check-keys-batch-size REDIS_EXPORTER_CHECK_KEYS_BATCH_SIZE Approximate number of keys to process in each execution. This is basically the COUNT option that will be passed into the SCAN command as part of the execution of the key or key group metrics, see COUNT option. A larger value speeds up scanning, but since Redis is single-threaded, a very large COUNT can impact a production environment.
count-keys REDIS_EXPORTER_COUNT_KEYS Comma separated list of patterns to count, eg: db3=sessions:* will count all keys with prefix sessions: from db 3. db defaults to 0 if omitted. Warning: The exporter runs SCAN to count the keys. This might not perform well on large databases.
script REDIS_EXPORTER_SCRIPT Comma separated list of path(s) to Redis Lua script(s) for gathering extra metrics.
debug REDIS_EXPORTER_DEBUG Verbose debug output
log-format REDIS_EXPORTER_LOG_FORMAT Log format, valid options are txt (default) and json.
namespace REDIS_EXPORTER_NAMESPACE Namespace for the metrics, defaults to redis.
connection-timeout REDIS_EXPORTER_CONNECTION_TIMEOUT Timeout for connection to Redis instance, defaults to "15s" (in Golang duration format)
web.listen-address REDIS_EXPORTER_WEB_LISTEN_ADDRESS Address to listen on for web interface and telemetry, defaults to 0.0.0.0:9121.
web.telemetry-path REDIS_EXPORTER_WEB_TELEMETRY_PATH Path under which to expose metrics, defaults to /metrics.
redis-only-metrics REDIS_EXPORTER_REDIS_ONLY_METRICS Whether to export only Redis metrics and skip the exporter's own Go runtime metrics, defaults to false (i.e. Go runtime metrics are exported by default).
include-config-metrics REDIS_EXPORTER_INCL_CONFIG_METRICS Whether to include all config settings as metrics, defaults to false.
include-system-metrics REDIS_EXPORTER_INCL_SYSTEM_METRICS Whether to include system metrics like total_system_memory_bytes, defaults to false.
redact-config-metrics REDIS_EXPORTER_REDACT_CONFIG_METRICS Whether to redact config settings that include potentially sensitive information like passwords.
ping-on-connect REDIS_EXPORTER_PING_ON_CONNECT Whether to ping the redis instance after connecting and record the duration as a metric, defaults to false.
is-tile38 REDIS_EXPORTER_IS_TILE38 Whether to scrape Tile38 specific metrics, defaults to false.
is-cluster REDIS_EXPORTER_IS_CLUSTER Whether this is a redis cluster (Enable this if you need to fetch key level data on a Redis Cluster).
export-client-list REDIS_EXPORTER_EXPORT_CLIENT_LIST Whether to scrape Client List specific metrics, defaults to false.
export-client-port REDIS_EXPORTER_EXPORT_CLIENT_PORT Whether to include the client's port when exporting the client list. Warning: including the port increases the number of metrics generated and will make your Prometheus server take up more memory
skip-tls-verification REDIS_EXPORTER_SKIP_TLS_VERIFICATION Whether to skip TLS verification when the exporter connects to a Redis instance
tls-client-key-file REDIS_EXPORTER_TLS_CLIENT_KEY_FILE Name of the client key file (including full path) if the server requires TLS client authentication
tls-client-cert-file REDIS_EXPORTER_TLS_CLIENT_CERT_FILE Name of the client cert file (including full path) if the server requires TLS client authentication
tls-server-key-file REDIS_EXPORTER_TLS_SERVER_KEY_FILE Name of the server key file (including full path) if the web interface and telemetry should use TLS
tls-server-cert-file REDIS_EXPORTER_TLS_SERVER_CERT_FILE Name of the server certificate file (including full path) if the web interface and telemetry should use TLS
tls-server-ca-cert-file REDIS_EXPORTER_TLS_SERVER_CA_CERT_FILE Name of the CA certificate file (including full path) if the web interface and telemetry should use TLS
tls-server-min-version REDIS_EXPORTER_TLS_SERVER_MIN_VERSION Minimum TLS version that is acceptable by the web interface and telemetry when using TLS, defaults to TLS1.2 (supports TLS1.0,TLS1.1,TLS1.2,TLS1.3).
tls-ca-cert-file REDIS_EXPORTER_TLS_CA_CERT_FILE Name of the CA certificate file (including full path) if the server requires TLS client authentication
set-client-name REDIS_EXPORTER_SET_CLIENT_NAME Whether to set client name to redis_exporter, defaults to true.
check-key-groups REDIS_EXPORTER_CHECK_KEY_GROUPS Comma separated list of LUA regexes for classifying keys into groups. The regexes are applied in specified order to individual keys, and the group name is generated by concatenating all capture groups of the first regex that matches a key. A key will be tracked under the unclassified group if none of the specified regexes matches it.
max-distinct-key-groups REDIS_EXPORTER_MAX_DISTINCT_KEY_GROUPS Maximum number of distinct key groups that can be tracked independently per Redis database. If exceeded, only key groups with the highest memory consumption within the limit will be tracked separately, all remaining key groups will be tracked under a single overflow key group.
config-command REDIS_EXPORTER_CONFIG_COMMAND What to use for the CONFIG command, defaults to CONFIG.

Redis instance addresses can be TCP addresses (redis://localhost:6379, redis.example.com:6379) or unix sockets (unix:///tmp/redis.sock).
SSL is supported by using the rediss:// scheme, for example: rediss://azure-ssl-enabled-host.redis.cache.windows.net:6380 (note that the port is required when connecting to a port other than the default 6379, e.g. with Azure Redis instances).

Command line settings take precedence over any configurations provided by the environment variables.
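
For example, the following two invocations are equivalent (the Redis address is a placeholder), and if both a flag and its environment variable are set, the flag wins:

## configure via command line flags
./redis_exporter --redis.addr=redis://my-redis-host:6379 --web.listen-address=0.0.0.0:9121

## or via the corresponding environment variables
REDIS_ADDR=redis://my-redis-host:6379 REDIS_EXPORTER_WEB_LISTEN_ADDRESS=0.0.0.0:9121 ./redis_exporter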

Authenticating with Redis

If your Redis instance requires authentication then there are several ways to supply a username (new in Redis 6.x with ACLs) and a password.

You can provide the username and password as part of the address, see here for the official documentation of the redis:// scheme. You can also set -redis.password-file=sample-pwd-file.json to specify a password file. It is used whenever the exporter connects to a Redis instance, no matter whether you're using the /scrape endpoint for multiple instances or the normal /metrics endpoint when scraping just one instance, and it only takes effect when redis.password == "". See contrib/sample-pwd-file.json for a working example, and make sure to always include the redis:// in your password file entries.
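
A minimal sketch of what such a password file might look like (host names and passwords below are placeholders; see contrib/sample-pwd-file.json in the repository for the working example):

{
  "redis://redis-host-01:6379": "password-for-host-01",
  "redis://redis-host-02:6379": "password-for-host-02"
}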

An example for a URI including a password is: redis://<<username (optional)>>:<<PASSWORD>>@<<HOSTNAME>>:<<PORT>>

Alternatively, you can pass the username and/or password directly to the redis_exporter using the --redis.user and --redis.password flags.

If you want to use a dedicated Redis user for the redis_exporter (instead of the default user) then you need to enable a list of commands for that user. You can use the following Redis command to set up the user, just replace <<<USERNAME>>> and <<<PASSWORD>>> with your desired values.

ACL SETUSER <<<USERNAME>>> +client +ping +info +config|get +cluster|info +slowlog +latency +memory +select +get +scan +xinfo +type +pfcount +strlen +llen +scard +zcard +hlen +xlen +eval allkeys on ><<<PASSWORD>>>
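
The exporter can then authenticate with that dedicated user, for example (the address is a placeholder):

./redis_exporter --redis.addr=redis://my-redis-host:6379 --redis.user=<<<USERNAME>>> --redis.password=<<<PASSWORD>>>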

Run via Docker

The latest release is automatically published to the Docker registry.

You can run it like this:

docker run -d --name redis_exporter -p 9121:9121 oliver006/redis_exporter

Docker images are also published to the quay.io docker repo so you can pull them from there if, for instance, you run into rate limiting issues with Docker Hub.

docker run -d --name redis_exporter -p 9121:9121 quay.io/oliver006/redis_exporter

The latest docker image contains only the exporter binary. If, e.g. for debugging purposes, you need the exporter running in an image that has a shell, then you can run the alpine image:

docker run -d --name redis_exporter -p 9121:9121 oliver006/redis_exporter:alpine

If you try to access a Redis instance running on the host node, you'll need to add --network host so the redis_exporter container can access it:

docker run -d --name redis_exporter --network host oliver006/redis_exporter
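
Command line flags can be appended after the image name (the image's entrypoint is the exporter binary) and environment variables can be passed with -e. A sketch, with a placeholder Redis address:

docker run -d --name redis_exporter -p 9121:9121 \
  -e REDIS_ADDR=redis://my-redis-host:6379 \
  oliver006/redis_exporter --include-system-metrics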

Run on Kubernetes

Here is an example Kubernetes deployment configuration for how to deploy the redis_exporter as a sidecar to a Redis instance.
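
A minimal sketch of such a sidecar setup is shown below; image tags, names, and the prometheus.io annotations are illustrative and depend on how your Prometheus discovers targets:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9121"
    spec:
      containers:
        ## the Redis instance itself
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
        ## the exporter sidecar; it reaches Redis via localhost inside the pod,
        ## which is the exporter's default address
        - name: redis-exporter
          image: oliver006/redis_exporter:latest
          ports:
            - containerPort: 9121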

Tile38

Tile38 now has native Prometheus support for exporting server metrics and basic stats about number of objects, strings, etc. You can also use redis_exporter to export Tile38 metrics, especially more advanced metrics by using Lua scripts or the -check-keys flag.
To enable Tile38 support, run the exporter with --is-tile38=true.
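
For example (the address is a placeholder; 9851 is Tile38's default port):

./redis_exporter --is-tile38=true --redis.addr=redis://my-tile38-host:9851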

What's exported

Most items from the INFO command are exported, see Redis documentation for details.
In addition, for every database there are metrics for total keys, expiring keys and the average TTL for keys in the database.
You can also export values of keys by using the -check-keys (or related) flag. The exporter will also export the size (or, depending on the data type, the length) of the key. This can be used to export the number of elements in (sorted) sets, hashes, lists, streams, etc. If a key is in string format and matches with --check-keys (or related) then its string value will be exported as a label in the key_value_as_string metric.
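
A sketch of exporting a single counter key plus a glob pattern (key names and database numbers are illustrative):

./redis_exporter --check-single-keys=db0=user_count --check-keys=db3=sessions:*

The matched keys then typically show up as per-key series such as redis_key_size (and, for string keys, their values); check your /metrics output for the exact names.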

If you require custom metric collection, you can provide a comma-separated list of path(s) to Redis Lua script(s) using the -script flag. If you pass only one script, you can omit the comma. An example can be found in the contrib folder.

The redis_memory_max_bytes metric

The metric redis_memory_max_bytes will show the maximum number of bytes Redis can use.
It is zero if no memory limit is set for the Redis instance you're scraping (this is the default setting for Redis).
You can confirm that's the case by checking if the metric redis_config_maxmemory is zero or by connecting to the Redis instance via redis-cli and running the command CONFIG GET MAXMEMORY.
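
For example, checking directly on the instance (the hostname is a placeholder; a value of 0 means no limit is configured):

redis-cli -h my-redis-host CONFIG GET maxmemory
1) "maxmemory"
2) "0"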

What it looks like

Example Grafana screenshots: redis_exporter_screen_01, redis_exporter_screen_02

Grafana dashboard is available on grafana.com and/or github.com.

Viewing multiple Redis simultaneously

If running Redis Sentinel, it may be desirable to view the metrics of the various cluster members simultaneously. For this reason the dashboard's drop-down is of the multi-value type, allowing for the selection of multiple Redis instances. Please note one caveat: the single-stat panels at the top (uptime, total memory use, and clients) do not work when multiple Redis instances are selected.

Using the mixin

There is a set of sample rules, alerts and dashboards available in redis-mixin.

Upgrading from 0.x to 1.x

PR #256 introduced breaking changes which were released as version v1.0.0.

If you only scrape one Redis instance and use the command line flags --redis.addr and --redis.password then you're most probably not affected. Otherwise, please see PR #256 and this README for more information.

Memory Usage Aggregation by Key Groups

When a single Redis instance is used for multiple purposes, it is useful to be able to see how Redis memory is consumed among the different usage scenarios. This is particularly important when a Redis instance with no eviction policy is running low on memory, as we want to identify whether certain applications are misbehaving (e.g. not deleting keys that are no longer in use) or whether the Redis instance needs to be scaled up to handle the increased resource demand.

Fortunately, most applications using Redis will employ some sort of naming convention for keys tied to their specific purpose, such as (hierarchical) namespace prefixes, which can be exploited by the check-keys, check-single-keys, and count-keys parameters of redis_exporter to surface the memory usage metrics of specific scenarios. Memory usage aggregation by key groups takes this one step further by harnessing the flexibility of Redis LUA scripting support to classify all keys on a Redis instance into groups through a list of user-defined LUA regular expressions, so that memory usage metrics can be aggregated into readily identifiable groups.

To enable memory usage aggregation by key groups, simply specify a non-empty comma-separated list of LUA regular expressions through the check-key-groups redis_exporter parameter. On each aggregation of memory metrics by key groups, redis_exporter will set up a SCAN cursor through all keys for each Redis database to be processed in batches via a LUA script. Each key batch is then processed by the same LUA script on a key-by-key basis as follows:

  1. The MEMORY USAGE command is called to gather memory usage for each key
  2. The specified LUA regexes are applied to each key in the specified order, and the group name that a given key belongs to will be derived from concatenating the capture groups of the first regex that matches the key. For example, applying the regex ^(.*)_[^_]+$ to the key key_exp_Nick would yield a group name of key_exp. If none of the specified regexes matches a key, the key will be assigned to the unclassified group

Once a key has been classified, the memory usage and key counter for the corresponding group will be incremented in a local LUA table. This aggregated metrics table will then be returned alongside the next SCAN cursor position to redis_exporter when all keys in a batch have been processed, and redis_exporter can aggregate the data from all batches into a single table of grouped memory usage metrics for the Prometheus metrics scraper.

Besides making the full flexibility of LUA regex available for classifying keys into groups, the LUA script also has the benefit of reducing network traffic by executing all MEMORY USAGE commands on the Redis server and returning aggregated data to redis_exporter in a far more compact format than key-level data. The use of SCAN cursor over batches of keys processed by a server-side LUA script also helps prevent unbounded latency bubble in Redis's single processing thread, and the batch size can be tailored to specific environments via the check-keys-batch-size parameter.

Scanning the entire key space of a Redis instance may sound a little extravagant, but it takes only a single scan to classify all keys into groups, and on a moderately sized system with ~780K keys and a rather complex list of 17 regexes, it takes an average of ~5s to perform a full aggregation of memory usage by key groups. Of course, the actual performance for specific systems will vary widely depending on the total number of keys, the number and complexity of regexes used for classification, and the configured batch size.

To protect Prometheus from being overwhelmed by a large number of time series resulting from a misconfigured group classification regular expression (e.g. ^(.*)$, where each key would be classified into its own distinct group), a limit on the number of distinct key groups per Redis database can be configured via the max-distinct-key-groups parameter. If the max-distinct-key-groups limit is exceeded, only the key groups with the highest memory usage within the limit will be tracked separately; all remaining key groups will be reported under a single overflow key group.
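
Putting it together, a sketch of enabling the feature (the regexes, batch size, and group limit are illustrative):

./redis_exporter \
  --check-key-groups='^(metrics):,^(sessions):' \
  --check-keys-batch-size=1000 \
  --max-distinct-key-groups=100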

Here is a list of additional metrics that will be exposed when memory usage aggregation by key groups is enabled:

Name Labels Description
redis_key_group_count db,key_group Number of keys in a key group
redis_key_group_memory_usage_bytes db,key_group Memory usage by key group
redis_number_of_distinct_key_groups db Number of distinct key groups in a Redis database when the overflow group is fully expanded
redis_last_key_groups_scrape_duration_milliseconds Duration of the last memory usage aggregation by key groups in milliseconds

Script to collect Redis lists and respective sizes.

If you are using a Redis version < 4.0, many of the useful metrics based on key length or memory usage cannot be gathered by the default redis_exporter. With the help of Lua scripts, these metrics can still be collected. One such script, contrib/collect_lists_length_growing.lua, collects the length of Redis lists; with this count you can, for example, create alerts or dashboards in Grafana or similar tools based on these Prometheus metrics.
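
A sketch of wiring that script into the exporter (the Redis address is a placeholder):

./redis_exporter --redis.addr=redis://my-redis-host:6379 --script ./contrib/collect_lists_length_growing.lua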

Development

The tests require a variety of real Redis instances to not only verify correctness of the exporter but also compatibility with older versions of Redis and with Redis-like systems like KeyDB or Tile38.
The contrib/docker-compose-for-tests.yml file has service definitions for everything that's needed.
You can bring up the Redis test instances first by running make docker-env-up and then, every time you want to run the tests, you can run make docker-test. This will mount the current directory (with the .go source files) into a docker container and kick off the tests.
Once you're done testing you can bring down the stack by running make docker-env-down.
Or you can bring up the stack, run the tests, and then tear down the stack, all in one shot, by running make docker-all.

Note: test initialization can lead to unexpected results when using a persistent testing environment. When make docker-env-up is executed once and make docker-test is repeatedly run or stopped during execution, the number of keys in the database changes, which can lead to unexpected test failures. As a workaround, run make docker-env-down periodically to clean up.

Communal effort

Open an issue or PR if you have more suggestions, questions or ideas about what to add.

redis_exporter's People

Contributors

4admin2root, bjosv, cgroschupp, cmur2, dafydd-t, dependabot-preview[bot], dependabot[bot], ericonyu, eumel8, filipecosta90, garrygerber, houstonheat, ihrigb, kpachhai, larssonoliver, laubstein, mdawar, nantiferov, naveensrinivasan, nipuntalukdar, oliver-nyt, oliver006, opan, renovate[bot], sashgorokhov, siavashs, stanhu, superq, vin01, xginn8


redis_exporter's Issues

PrintLn in non-debug mode.

Is it normal that we have a PrintLn on the parseDBKeyspaceString method?

https://github.com/oliver006/redis_exporter/blob/master/exporter/redis.go#L205

Jul 27 08:21:45 r213-srv18 rkt[47236]: [82783.876068] redis_exporter[260]: 2016/07/27 08:21:45 740 740 <nil>
Jul 27 08:21:45 r213-srv18 rkt[47236]: [82783.876428] redis_exporter[260]: 2016/07/27 08:21:45 740 740 <nil>
Jul 27 08:21:45 r213-srv18 rkt[47236]: [82783.876589] redis_exporter[260]: 2016/07/27 08:21:45 157212478 1.57212478e+08 <nil>

Add support for Latency monitor

Latency monitoring is a relatively new feature introduced in Redis 2.8.13 that helps you troubleshoot latency problems. This tool logs latency spikes on your server, and the events that cause them. You must enable latency monitoring before you can use it, but I think these would be nice metrics to have.
More details here: https://redis.io/topics/latency-monitor

Let me know if this is at all possible. I would love to work on it and see what we can come up with. Just getting my hands dirty to this whole open source thing, no big deal.

Add redis_up gauge

Hi!
thanks for redis_exporter!
In line with other exporters (e.g. memcached) it'd be nice to provide a gauge redis_up 0/1 whenever redis scraping fails or succeeds.

thanks!

Redis exporter hangs on dead redis target

If the -redis.addr parameter lists multiple redis hosts and one of them is inactive/dead, the exporter hangs even if the other redis hosts are up and running.

Way to reproduce:
$./redis_exporter -redis.addr working_server1_IP:6379, working_server2_IP:6379,dead_server1_IP:6379

$wget http://localhost:9121/metrics

Makefile

Hi @oliver006,
I would like make a kind of feature request. Could it be possible having a Makefile here, as well as Build Information handle correctly like the most of Prometheus-Exporters? If you are interested about it, I would be making a PR soon. Thanks!

Where will its log be output?

Where does redis_exporter write its log output? When running it via Docker, can volumes be configured for its logs?

run redis_exporter but get error in prometheus: [ getsockopt: no route to host ]

[root@Redis-master redis_exporter]# pwd
/root/redis_exporter

[root@Redis-master redis_exporter]# go get
go install: no install location for directory /root/redis_exporter outside GOPATH
For more details see: 'go help gopath'
[root@Redis-master redis_exporter]# ./redis_exporter -web.listen-address=10.21.20.241:9121 -redis.addr=redis://127.0.0.1:6379 -redis.password=xxx &
[1] 87963
INFO[0000] Redis Metrics Exporter <<< filled in by build >>> build date: <<< filled in by build >>> sha1: <<< filled in by build >>> Go: go1.8.3

INFO[0000] Providing metrics at 10.21.20.241:9121/metrics
INFO[0000] Connecting to redis hosts: []string{"redis://127.0.0.1:6379"}
INFO[0000] Using alias: []string{""}
[root@Redis-master redis_exporter]#

Am I starting the redis_exporter correctly? And when I add a block to the scrape_configs of my prometheus.yml config file, I get this error on the Prometheus targets page:
http://10.21.20.241:9121/metrics
DOWN group="redis" instance="10.21.20.241:9121" 4.219s ago Get http://10.21.20.241:9121/metrics: dial tcp 10.21.20.241:9121: getsockopt: no route to host

I don't know how to fix it, can you help me? Thanks much.
[root@Redis-master redis_exporter]# ps -ef|grep redis
devdepl+ 31096 1 3 Aug03 ? 06:33:11 /usr/local/redis/bin/redis-server *:6379
root 62764 8869 0 Aug10 pts/2 00:00:00 ./redis_exporter -web.listen-address=10.21.20.241:9121 -redis.addr=redis://127.0.0.1:6379 -redis.password=DevCoRe657x
root 87431 99325 0 08:57 pts/1 00:00:00 grep --color=auto redis

Where to specify flags?

The readme doesn't explain where flags should be specified. I'm using docker, how can I specify flags?

redis err: dial tcp 127.0.0.1:6379: getsockopt: connection refused

Connecting to redis through the command redis-cli 127.0.0.1 -p zzzaa001 works fine. But the redis_exporter error is as follows:

[root@acserver ~]# docker run -e "REDIS_ADDR=127.0.0.1:6379" -e "REDIS_PASSWORD=zzzaa001" --name redis_exporter -p 9121:9121 oliver006/redis_exporter
time="2017-06-21T09:12:59Z" level=info msg="Redis Metrics Exporter v0.11.1    build date: 2017-05-24-14:10:15    sha1: d3af2b49709a2cab654d7c876497e791c6d9a082    Go: go1.8.1\n" 
time="2017-06-21T09:12:59Z" level=info msg="Providing metrics at :9121/metrics" 
time="2017-06-21T09:12:59Z" level=info msg="Connecting to redis hosts: []string{\"127.0.0.1:6379\"}" 
time="2017-06-21T09:12:59Z" level=info msg="Using alias: []string{\"\"}" 
time="2017-06-21T09:13:03Z" level=info msg="redis err: dial tcp 127.0.0.1:6379: getsockopt: connection refused" 
time="2017-06-21T09:13:08Z" level=info msg="redis err: dial tcp 127.0.0.1:6379: getsockopt: connection refused" 
time="2017-06-21T09:13:13Z" level=info msg="redis err: dial tcp 127.0.0.1:6379: getsockopt: connection refused" 

And my prometheus config is:

  - job_name: redis_exporter_dev

    scrape_interval: 5s

    static_configs:
    - targets: ['192.168.1.2:9121']

  - job_name: redis_exporter_test

    scrape_interval: 5s

    static_configs:
    - targets: ['192.168.1.3:9121']

Using the default redis addr, the error is as below:

[root@acserver ~]# docker run -e "REDIS_PASSWORD=zzzaa001" --name redis_exporter -p 9121:9121 oliver006/redis_exporter
time="2017-06-21T09:32:22Z" level=info msg="Redis Metrics Exporter v0.11.1    build date: 2017-05-24-14:10:15    sha1: d3af2b49709a2cab654d7c876497e791c6d9a082    Go: go1.8.1\n" 
time="2017-06-21T09:32:22Z" level=info msg="Providing metrics at :9121/metrics" 
time="2017-06-21T09:32:22Z" level=info msg="Connecting to redis hosts: []string{\"redis://localhost:6379\"}" 
time="2017-06-21T09:32:22Z" level=info msg="Using alias: []string{\"\"}" 
time="2017-06-21T09:32:22Z" level=info msg="redis err: dial redis: unknown network redis"

Could you provide statically linked binaries?

I want to create a docker image for redis_exporter based on vanilla alpine:3.5 that by default lacks the libraries required to run dynamically linked binaries such as the one you provide on the Releases tab. E.g. the https://github.com/prometheus/blackbox_exporter/ project does this and it would be very nice if you could, too :) That would allow creating docker images without having to locally compile the go program again.

Maybe http://blog.wrouesnel.com/articles/Totally%20static%20Go%20builds/ helps.

Collect metrics for config options

Collecting metrics for config options like max_memory could be useful and should be easy to implement by parsing the output of CONFIG GET *.

promtool output

Hi @oliver006,

I checked today the latest 0.11.2 version with promtool (Pass Prometheus metrics over stdin to lint them for consistency and correctness), and I think this could be fixed.

You can get promtool tool with,

GO15VENDOREXPERIMENT=1 go get github.com/prometheus/prometheus/cmd/promtool
# curl -s http://localhost.localdomain:9121/metrics | promtool check-metrics
redis_aof_current_rewrite_duration_sec: no help text
redis_aof_enabled: no help text
redis_aof_last_rewrite_duration_sec: no help text
redis_aof_rewrite_in_progress: no help text
redis_aof_rewrite_scheduled: no help text
redis_blocked_clients: no help text
redis_command_call_duration_seconds_count: non-histogram and non-summary metrics should not have "_count" suffix
http_request_duration_microseconds: use base unit "seconds" instead of "microseconds"
redis_client_longest_output_list: no help text
redis_command_call_duration_seconds_sum: non-histogram and non-summary metrics should not have "_sum" suffix
redis_commands_processed_total: no help text
redis_connected_clients: no help text
redis_connected_slaves: no help text
redis_connections_received_total: no help text
redis_expired_keys_total: no help text
redis_commands_processed_total: non-counter metrics should not have "_total" suffix
redis_connections_received_total: non-counter metrics should not have "_total" suffix
redis_evicted_keys_total: no help text
redis_expired_keys_total: non-counter metrics should not have "_total" suffix
redis_instantaneous_ops_per_sec: no help text
redis_keyspace_hits_total: no help text
redis_latest_fork_usec: no help text
redis_loading_dump_file: no help text
redis_memory_fragmentation_ratio: no help text
redis_memory_used_bytes: no help text
redis_rdb_changes_since_last_save: no help text
redis_evicted_keys_total: non-counter metrics should not have "_total" suffix
redis_keyspace_hits_total: non-counter metrics should not have "_total" suffix
redis_keyspace_misses_total: no help text
redis_master_repl_offset: no help text
redis_memory_used_lua_bytes: no help text
redis_memory_used_peak_bytes: no help text
redis_memory_used_rss_bytes: no help text
redis_net_input_bytes_total: no help text
redis_net_output_bytes_total: no help text
redis_process_id: no help text
redis_pubsub_channels: no help text
redis_pubsub_patterns: no help text
redis_rdb_current_bgsave_duration_sec: no help text
redis_rdb_last_bgsave_duration_sec: no help text
redis_rejected_connections_total: no help text
redis_uptime_in_seconds: no help text
redis_rejected_connections_total: non-counter metrics should not have "_total" suffix
redis_replication_backlog_bytes: no help text
redis_up: no help text
redis_used_cpu_sys: no help text
redis_used_cpu_sys_children: no help text
redis_used_cpu_user: no help text
redis_used_cpu_user_children: no help text
redis_keyspace_misses_total: non-counter metrics should not have "_total" suffix
redis_net_input_bytes_total: non-counter metrics should not have "_total" suffix
redis_net_output_bytes_total: non-counter metrics should not have "_total" suffix

Redis 4

Redis 4 is stable, but this exporter only supports 2.x and 3.x according to the docs.

Place configuration flags in a configuration file

Rather than passing the configuration flags as command line options could you add reading these from a configuration file? The primary concern is that when securing the RedisDB with a password, the password must then be passed as command line argument to the redis_exporter. Anyone who gains access to the machine could then do a simple ps command and observe the password. This violates various security requirements.

All configs with env variables

Hi!

Nice work, we are using it every day!
For our docker setup I would like to know if it is possible to overwrite all flags with env variables. I could provide a pull request if you like.

With kind regards,
Nighthawk

Multiple Series Error

The uptime & clients & memory usage panels: cannot get their values

The stack trace:

Error: Multiple Series Error
at b.setValues (http://192.168.1.2:4001/public/app/boot.c8e57b1c.js:29:8355)
at b.onDataReceived (http://192.168.1.2:4001/public/app/boot.c8e57b1c.js:29:6107)
at f.emit (http://192.168.1.2:4001/public/app/boot.c8e57b1c.js:61:19286)
at a.emit (http://192.168.1.2:4001/public/app/boot.c8e57b1c.js:61:21604)
at b.handleQueryResult (http://192.168.1.2:4001/public/app/boot.c8e57b1c.js:55:21101)
at i (http://192.168.1.2:4001/public/app/boot.c8e57b1c.js:51:24759)
at http://192.168.1.2:4001/public/app/boot.c8e57b1c.js:51:25181
at o.$eval (http://192.168.1.2:4001/public/app/boot.c8e57b1c.js:52:293)
at o.$digest (http://192.168.1.2:4001/public/app/boot.c8e57b1c.js:51:30809)
at o.$apply (http://192.168.1.2:4001/public/app/boot.c8e57b1c.js:52:576)


I found a bug with uptime: when my time range was chosen to show 24 hours but my redis had started just 2 minutes earlier, Uptime was in the N/A state and reported the error above. Maybe the clients & memory usage panels have the same issue.

Add binaries to the releases

Figure out if we can do this automatically via CircleCI, if not just do it manually but we should provide binaries.

Old version of last release

I tried to install the new release of the exporter on my server, but the metrics show a value with an older version.
Your changelog mentions adding the metric "up" in the new release, but I don't have this value in my redis exporter.
Is that really the new release of the exporter?

redis_exporter_build_info{build_date="2016-12-02-21:09:17",commit_sha="a410b50d53985b76f117a6d9ee81ab052c8c1c91",golang_version="go1.7.3",version="v0.10.3"} 1

Thank you :)

Use ConstMetrics

This exporter is currently using direct instrumentation, which is strongly discouraged for custom collectors. It should be switched to use ConstMetrics, and all the mutexes can then be removed.

Can not connect to redis_exporter

[root@Redis-master redis_exporter]# pwd
/root/go/src/github.com/oliver006/redis_exporter
[root@Redis-master redis_exporter]# ll
total 7572
-rwxr-xr-x 1 root root 2540 Aug 8 05:42 build.sh
-rw-r----- 1 root root 2223 Aug 8 05:42 circle.yml
drwxr-x--- 2 root root 4096 Aug 8 05:42 contrib
-rw-r----- 1 root root 160 Aug 8 05:42 Dockerfile
drwxr-x--- 2 root root 4096 Aug 8 05:42 exporter
-rw-r----- 1 root root 1308 Aug 8 05:42 glide.lock
-rw-r----- 1 root root 366 Aug 8 05:42 glide.yaml
-rw-r----- 1 root root 1063 Aug 8 05:42 LICENSE
-rw-r----- 1 root root 5900 Aug 8 05:42 main.go
-rw-r----- 1 root root 4174 Aug 8 05:42 README.md
-rwxr-x--- 1 root root 7684197 Aug 11 10:14 redis_exporter
drwxr-x--- 3 root root 4096 Aug 11 10:11 src
[root@Redis-master redis_exporter]# ./redis_exporter -web.listen-address=10.21.20.241:9121 -redis.addr=redis://127.0.0.1:6379 -redis.password=password
INFO[0000] Redis Metrics Exporter <<< filled in by build >>> build date: <<< filled in by build >>> sha1: <<< filled in by build >>> Go: go1.8.3

INFO[0000] Providing metrics at 10.21.20.241:9121/metrics
INFO[0000] Connecting to redis hosts: []string{"redis://127.0.0.1:6379"}
INFO[0000] Using alias: []string{""}

At the same machine:

[root@Redis-master redis_exporter]# telnet 10.21.20.241 9121
Trying 10.21.20.241...
Connected to 10.21.20.241.
Escape character is '^]'.

HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close

400 Bad RequestConnection closed by foreign host.
[root@Redis-master redis_exporter]#

At other machine:
[root@SZX1000347429 ~]# telnet 10.21.20.241 9121
Trying 10.21.20.241...
telnet: connect to address 10.21.20.241: No route to host

Is the way that I start the redis_exporter not correct? how should i do? thanks much!
And prometheus error message is:
Get http://10.21.20.241:9121/metrics: dial tcp 10.21.20.241:9121: getsockopt: no route to host

keys-checks with ':'

Hello,

It seems that the key-check feature doesn't work with keys that have ':' in the name, for example "cntr:flood".

Max.

Multiple Redis servers and -check-key option.

Hello, @oliver006 !

Thank you for you exporter! We are currently relying on it in our installation.

I've found out a strange behavior. Not sure if it is a feature or a bug.

We have multiple Redis servers and one redis-exporter that scrapes them all.
I want to make a designated key from each Redis visible to Prometheus. I've added the -check-keys option, but it only shows the value from the one server it scrapes last.

Here is how to quickly reproduce the problem:

docker-compose.yml

services:
  redis1:
    image: redis:3.2.7
    container_name: redis1
    expose:
      - 6379
  redis2:
    image: redis:3.2.7
    container_name: redis2
    expose:
      - 6379
  redisexporter:
    container_name: redis_exporter
    image: oliver006/redis_exporter
    command:
      - '-check-keys=testkey'
      - '-redis.addr=redis://redis1:6379,redis://redis2:6379'
    expose:
      - 9121
    ports:
      - 127.0.0.1:9121:9121

Commands:

docker-compose -f docker-compose.yml up -d
docker exec -it redis1 redis-cli set testkey 1111
docker exec -it redis2 redis-cli set testkey 2222
curl -s localhost:9121/metrics | grep testkey

Thank you!

Is it too long to scrape every 15 seconds ?

Hi, oliver006. Something puzzles me: redis_exporter does its scraping every 15 seconds (the default). Isn't that too long? There are probably many metric changes within 15 seconds, and we may miss most of them. If these redis metrics obtained via the INFO command only ever went up (never down), 15 seconds would be workable.

However, as far as I know, there are some metrics that can go either up or down, so you may not capture the major changes if the interval is set too long.

listen tcp: missing port in address 9121

Running redis_exporter in alpine:edge with

/go/bin/redis_exporter -web.listen-address=9121 -redis.alias=control -debug

it gives the following logs:

INFO[0000] Redis Metrics Exporter <<< filled in by build >>>    build date: <<< filled in by build >>>    sha1: <<< filled in by build >>>
 
DEBU[0000] Enabling debug output                        
INFO[0000] Providing metrics at 9121/metrics            
INFO[0000] Connecting to redis hosts: []string{"redis://localhost:6379"} 
INFO[0000] Using alias: []string{"control"}             
FATA[0000] listen tcp: missing port in address 9121 

I'm running it inside a docker container alongside a redis installation (the one used in the test above), with go 1.7.4 and docker 17.03.1-ce-rc1.

Improve redis addr validation and handling

Maybe I'm missing something but I don't see any Redis metrics around databases/keys/hit/miss

testing on various Redis versions from 2.4.x to 3.0.x and the only redis_ prefixed metrics I see are:

# HELP redis_exporter_last_scrape_duration_seconds The last scrape duration.
# TYPE redis_exporter_last_scrape_duration_seconds gauge
redis_exporter_last_scrape_duration_seconds 0.001755366
# HELP redis_exporter_last_scrape_error The last scrape error status.
# TYPE redis_exporter_last_scrape_error gauge
redis_exporter_last_scrape_error 1
# HELP redis_exporter_scrapes_total Current total redis scrapes.
# TYPE redis_exporter_scrapes_total counter
redis_exporter_scrapes_total 1

Build info in docker image is not set

The latest docker image does not seem to be built correctly, the build_info series is not populated:

$ docker run -d -P oliver006/redis_exporter:v0.9.1
Unable to find image 'oliver006/redis_exporter:v0.9.1' locally
v0.9.1: Pulling from oliver006/redis_exporter
[...]
Digest: sha256:8c9585f48075dcf6888f4bca63813052c147d13a0fb9d7b0677c38b08ac13423
Status: Downloaded newer image for oliver006/redis_exporter:v0.9.1
$ curl localhost:32771/metrics | grep version
redis_exporter_build_info{build_date="<<< filled in by build >>>",commit_sha="<<< filled in by build >>>",golang_version="go1.6.3",version="<<< filled in by build >>>"} 1

Not connecting to Azure Redis on SSL port

I am trying to get this exported working against an Azure Redis Cache instance.
By default it uses SSL on port 6380.

I can get it working if I disable 'Allow access only via SSL' and connect to port 6379.
Is SSL support something you could add to the exporter or should I be running something else to handle the SSL connection?

need redis 2.8 maxmemory support

Hi, I'm using the old redis version 2.8 and there is no maxmemory in the INFO result. What about using "CONFIG GET maxmemory" on older redis versions?

Use sync.Map

Once Go 1.9 is GA we should use sync.Map for e.metrics to simplify all the
lock -> set value -> unlock sequences.
