
pure-fa-openmetrics-exporter's Introduction


Pure Storage FlashArray OpenMetrics exporter

OpenMetrics exporter for Pure Storage FlashArray.

Support Statement

This exporter is provided under Best Efforts support by the Pure Portfolio Solutions Group, Open Source Integrations team. For feature requests and bugs please use GitHub Issues. We will address these as soon as we can, but there are no specific SLAs.

Overview

This application helps monitor Pure Storage FlashArrays by providing an "exporter": it extracts data from the Purity API and converts it to the OpenMetrics format, which can be consumed by Prometheus and other observability platforms.

The stateless design of the exporter allows for easy configuration management as well as scalability across a whole fleet of Pure Storage arrays. The design closely follows the multi-target-exporter pattern described in the Prometheus documentation, so the tool can scrape multiple FlashArrays from a single instance or act as the front-end for a single FlashArray.

To monitor your Pure Storage appliances, create a new dedicated user on your array, assign it read-only permissions, and then create a new API key for it.
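For example, with the Purity//FA CLI this could look like the following sketch; the username svc-openmetrics is an arbitrary example, and the exact commands may differ between Purity versions, so consult the Purity documentation for your release:

# hypothetical example: create a dedicated read-only user, then generate its API key
pureadmin create --role readonly svc-openmetrics
pureadmin create --api-token svc-openmetrics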

Building and Deploying

The exporter is a Go application based on the Prometheus Go client library and Resty, a simple but reliable HTTP and REST client library for Go. It is preferably built and launched via Docker. Thanks to the stateless nature of the application, the exporter deployment can also be scaled to multiple containers on Kubernetes.

Detailed examples of how to deploy several configurations, either using Docker or an executable binary, can be found in docs/deployment-examples.md.


The official Docker images are available at Quay.io:

docker pull quay.io/purestorage/pure-fa-om-exporter:<release>

where the release tag follows semantic versioning.


Binaries

Binary downloads of the exporter can be found on the Releases page.


Local development

The following commands describe how to run a typical build:

# clone the repository
git clone git@github.com:PureStorage-OpenConnect/pure-fa-openmetrics-exporter.git

# modify the code and build the package
cd pure-fa-openmetrics-exporter
...
make build

The newly built exporter binary can be found in the ./out/bin directory.
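For example, the freshly built binary can then be launched directly; the flags are described in the Usage section below, and the token file path is a placeholder:

# run the locally built exporter on the default port with a local token map file
./out/bin/pure-fa-om-exporter --port 9490 --tokens tokens.yaml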

Optionally, to build the binary with the vendor cache, you may use

make build-with-vendor

Docker Image

The provided Dockerfile can be used to generate a Docker image of the exporter. The image can be built using docker as follows:

VERSION=<version>
docker build -t pure-fa-ome:$VERSION .

# You can also use the Makefile to build the Docker image

cd pure-fa-openmetrics-exporter
...
make docker-build

Authentication

Authentication is used by the exporter as the mechanism to cross-authenticate to the scraped appliance; therefore, for each array it is required to provide the REST API token for an account that has a 'readonly' role. The API token can be provided in two ways:

  • using the HTTP Authorization header of type 'Bearer', or
  • via a configuration map in a specific configuration file.

The first option requires specifying the API token value as the authorization parameter of the specific job in the Prometheus configuration file. The second option provides the FlashArray/API-token key pairs for a list of arrays in a simple YAML configuration file that is passed as a parameter to the exporter. This makes it possible to write more concise Prometheus configuration files and also to configure other scrapers that cannot use the HTTP Authorization header.
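For the first option, a minimal Prometheus job could look like the following sketch; the job name, hostnames, and token value are placeholders:

scrape_configs:
- job_name: pure-fa
  metrics_path: /metrics
  authorization:
    type: Bearer
    credentials: 11111111-2222-3333-4444-555555555555  # the array API token
  params:
    endpoint: [arrayname.your.domain]                   # the FlashArray to scrape
  static_configs:
  - targets: [pure-fa-exporter.your.domain:9490]        # the exporter address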

TLS Support

The exporter can be started in TLS mode (HTTPS, mutually exclusive with the HTTP mode) by providing the X.509 certificate and key files in the command parameters. Self-signed certificates are also accepted.
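For example, a self-signed certificate for testing can be generated with OpenSSL and passed to the exporter; the file names and the CN are placeholders:

# generate a throwaway self-signed certificate and key for testing
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout cert.key -out cert.crt -subj "/CN=pure-fa-exporter.your.domain"

# start the exporter in TLS mode
./pure-fa-om-exporter -c cert.crt -k cert.key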

Supported Headers

X-Request-ID (Optional)

The X-Request-ID header, as used in the Purity API, may be supplied when calling the OpenMetrics exporter; it is then passed through to the Purity API when requesting metrics.
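For example, a request ID can be attached to a scrape with curl; the ID value, token, and hostnames are placeholders:

curl --header 'X-Request-ID: 6c0b4fa7-8a9e-4e3c-b51f-0123456789ab' \
     --header 'Authorization: Bearer <api-token>' \
     'http://pure-fa-exporter.your.domain:9490/metrics/array?endpoint=arrayname.your.domain'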

Usage

usage: pure-fa-om-exporter [-h|--help] [-a|--address "<value>"] [-p|--port <integer>] [-d|--debug] [-t|--tokens <file>] [-k|--key <file>] [-c|--cert <file>]

                           Pure Storage FA OpenMetrics exporter

Arguments:

  -h  --help     Print help information
  -a  --address  IP address for this exporter to bind to. Default: 0.0.0.0
  -p  --port     Port for this exporter to listen on. Default: 9490
  -d  --debug    Enable debug. Default: false
  -t  --tokens   API token(s) map file
  -c  --cert     SSL/TLS certificate file. Required only for TLS
  -k  --key      SSL/TLS private key file. Required only for TLS

The array token configuration file must have the following syntax:

<array_id1>:
  address: <ip-address1>|<hostname1>
  api_token: <api-token1>
<array_id2>:
  address: <ip-address2>|<hostname2>
  api_token: <api-token2>
...
<array_idN>:
  address: <ip-addressN>|<hostnameN>
  api_token: <api-tokenN>

When the array token configuration file is used, the array_id key must be used as the endpoint argument for the scraped URL.
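For example, with the token map file above loaded via --tokens, an array is scraped by its map key instead of its address, and no Authorization header is needed; the exporter hostname is a placeholder:

curl 'http://pure-fa-exporter.your.domain:9490/metrics/array?endpoint=array_id1'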

For a usage example of how to use this feature with a Docker container, see Docker Usage Examples below.

Scraping endpoints

The exporter uses a RESTful API schema to provide Prometheus scraping endpoints.

URL                                                GET parameters  Description
http://<exporter-host>:<port>/metrics              endpoint        Full array metrics
http://<exporter-host>:<port>/metrics/array        endpoint        Array only metrics
http://<exporter-host>:<port>/metrics/volumes      endpoint        Volumes only metrics
http://<exporter-host>:<port>/metrics/hosts        endpoint        Hosts only metrics
http://<exporter-host>:<port>/metrics/pods         endpoint        Pods only metrics
http://<exporter-host>:<port>/metrics/directories  endpoint        Directories only metrics

Depending on the target array, scraping the whole set of metrics could result in timeouts. In that case, it is suggested either to increase the scrape timeout or to scrape each individual endpoint instead.
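For example, a job dedicated to a single endpoint with a raised timeout could look like the following sketch; the job name, interval, and addresses are placeholders:

scrape_configs:
- job_name: pure-fa-volumes        # one job per endpoint instead of the full /metrics
  metrics_path: /metrics/volumes
  scrape_interval: 60s
  scrape_timeout: 50s              # raised from the Prometheus default of 10s
  params:
    endpoint: [arrayname.your.domain]
  static_configs:
  - targets: [pure-fa-exporter.your.domain:9490]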

Prometheus configuration

A sample of a basic configuration file for Prometheus is as follows.

global:
  scrape_interval: 30s
  scrape_timeout: 10s
  evaluation_interval: 30s
scrape_configs:
- job_name: monitoring/pure-fa-probe
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics/pods
  scheme: http
  follow_redirects: true
  enable_http2: true
  relabel_configs:
  - source_labels: [job]
    separator: ;
    regex: (.*)
    target_label: __tmp_prometheus_job_name
    replacement: $1
    action: replace
  - separator: ;
    regex: (.*)
    target_label: job
    replacement: pure-fa-probe
    action: replace
  - source_labels: [__address__]
    separator: ;
    regex: (.*)
    target_label: __param_endpoint
    replacement: $1
    action: replace
  - source_labels: [__param_endpoint]
    separator: ;
    regex: (.*)
    target_label: instance
    replacement: $1
    action: replace
  - separator: ;
    regex: (.*)
    target_label: __address__
    replacement: pure-fa-exporter.your.domain:9490  #  <== your exporter address and port goes here
    action: replace
  static_configs:
  - targets:           #  <== the list of your FlashArrays goes here
    - 10.11.12.80
    - 10.11.12.82
    - 10.11.12.90

See the Kubernetes examples for a similar configuration that uses additional configuration items for a simple Prometheus+Kubernetes deployment, or for the more interesting Prometheus operator.

Usage Examples

Docker

In a typical production scenario, it is recommended to use a visual front-end for your metrics, such as Grafana. Grafana allows you to use your Prometheus instance as a data source and create graphs and other visualizations from PromQL queries. Grafana and Prometheus are both easy to run as Docker containers.

To spin up a very basic set of those containers, use the following commands:

# Pure Storage OpenMetrics Exporter
docker run -d -p 9490:9490 --name pure-fa-om-exporter quay.io/purestorage/pure-fa-om-exporter:<version>

# Prometheus with config via bind-volume (create config first!)
docker run -d -p 9090:9090 --name=prometheus -v /tmp/prometheus-pure.yml:/etc/prometheus/prometheus.yml -v /tmp/prometheus-data:/prometheus prom/prometheus:latest

# Grafana
docker run -d -p 3000:3000 --name=grafana -v /tmp/grafana-data:/var/lib/grafana grafana/grafana

Docker: Passing tokens.yaml file to the container

On starting the container, the Dockerfile creates an empty /etc/pure-fa-om-exporter/tokens.yaml file whether the user requires it or not. If the file is blank, the container will still start successfully. If the container has a volume attached to the /etc/pure-fa-om-exporter/ directory containing a valid tokens.yaml file, the container will use its contents.

# Pure Storage OpenMetrics Exporter container with authentication tokens
docker run -d -p 9490:9490 --name pure-fa-om-exporter --volume /hostpathtofile/tokens.yaml:/etc/pure-fa-om-exporter/tokens.yaml  quay.io/purestorage/pure-fa-om-exporter:<version>

Changes to the tokens.yaml file can be reloaded by restarting the Docker container.

docker restart pure-fa-om-exporter

Please have a look at the documentation of each image/application for adequate configuration examples.

Kubernetes

A simple but complete example of deploying a full monitoring stack on Kubernetes can be found in the examples directory.

Docker Compose

A complete example monitoring stack implemented in Docker Compose can be found in the examples directory.

TLS HTTPS Support

Below is a usage example of how to deploy the exporter with TLS using the pure-fa-om-exporter binary.

Deployment:

$ ./pure-fa-om-exporter -c cert.crt -k cert.key
2023/08/01 12:00:00 Start Pure FlashArray exporter v1.0.9 on 0.0.0.0:9490

Testing:

$ curl --header 'Authorization: Bearer xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' --request GET 'https://pure-fa-exporter.your.domain:9490/metrics/array?endpoint=arrayname.your.domain' --insecure --silent | grep purefa_info
# HELP purefa_info FlashArray system information
# TYPE purefa_info gauge
purefa_info{array_name="arrayname",os="Purity//FA",system_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",version="6.4.5"} 1

Metrics Collected

Please refer to the purefa metrics specification for full details about all metrics.

Metric Name                                           Description
purefa_info                                           FlashArray system information
purefa_alerts_open                                    FlashArray open alert events
purefa_array_performance_average_bytes                FlashArray array average operations size in bytes
purefa_array_performance_bandwidth_bytes              FlashArray array throughput in bytes per second
purefa_array_performance_latency_usec                 FlashArray array latency in microseconds
purefa_array_performance_queue_depth_ops              FlashArray array queue depth size
purefa_array_performance_throughput_iops              FlashArray array throughput in iops
purefa_array_space_bytes                              FlashArray array space in bytes
purefa_array_space_data_reduction_ratio               FlashArray array space data reduction
purefa_array_space_utilization                        FlashArray array space utilization in percent
purefa_directory_performance_average_bytes            FlashArray directory average operations size in bytes
purefa_directory_performance_bandwidth_bytes          FlashArray directory throughput in bytes per second
purefa_directory_performance_latency_usec             FlashArray directory latency in microseconds
purefa_directory_performance_throughput_iops          FlashArray directory throughput in iops
purefa_directory_space_bytes                          FlashArray directory space in bytes
purefa_directory_space_data_reduction_ratio           FlashArray directory space data reduction
purefa_host_connections_info                          FlashArray host volumes connections
purefa_host_performance_average_bytes                 FlashArray host average operations size in bytes
purefa_host_performance_bandwidth_bytes               FlashArray host bandwidth in bytes per second
purefa_host_performance_latency_usec                  FlashArray host latency in microseconds
purefa_host_performance_throughput_iops               FlashArray host throughput in iops
purefa_host_space_bytes                               FlashArray host space in bytes
purefa_host_space_data_reduction_ratio                FlashArray host space data reduction
purefa_host_space_size_bytes                          FlashArray host volumes size
purefa_hw_component_status                            FlashArray hardware component status
purefa_hw_component_temperature_celsius               FlashArray hardware component temperature in Celsius
purefa_hw_component_voltage_volt                      FlashArray hardware component voltage
purefa_network_interface_performance_bandwidth_bytes  FlashArray network interfaces bandwidth in bytes per second
purefa_network_interface_performance_throughput_pkts  FlashArray network interfaces throughput in packets per second
purefa_network_interface_performance_errors           FlashArray network interfaces errors per second
purefa_pod_performance_average_bytes                  FlashArray pod average operations size
purefa_pod_performance_bandwidth_bytes                FlashArray pod throughput in bytes per second
purefa_pod_performance_latency_usec                   FlashArray pod latency in microseconds
purefa_pod_performance_throughput_iops                FlashArray pod throughput in iops
purefa_pod_space_bytes                                FlashArray pod space in bytes
purefa_pod_space_data_reduction_ratio                 FlashArray pod space data reduction
purefa_pod_performance_replication_bandwidth_bytes    FlashArray pod replication bandwidth in bytes per second
purefa_pod_replica_links_performance_bandwidth_bytes  FlashArray pod replica links throughput in bytes per second
purefa_pod_replica_links_lag_average_sec              FlashArray pod replica links average lag in milliseconds (deprecated, please use purefa_pod_replica_links_lag_average_msec)
purefa_pod_replica_links_lag_max_sec                  FlashArray pod replica links maximum lag in milliseconds (deprecated, please use purefa_pod_replica_links_lag_max_msec)
purefa_pod_replica_links_lag_average_msec             FlashArray pod replica links average lag in milliseconds
purefa_pod_replica_links_lag_max_msec                 FlashArray pod replica links maximum lag in milliseconds
purefa_volume_performance_average_bytes               FlashArray volume average operations size in bytes
purefa_volume_performance_bandwidth_bytes             FlashArray volume throughput in bytes per second
purefa_volume_performance_latency_usec                FlashArray volume latency in microseconds
purefa_volume_performance_throughput_iops             FlashArray volume throughput in iops
purefa_volume_space_bytes                             FlashArray volume space in bytes
purefa_volume_space_data_reduction_ratio              FlashArray volume space data reduction

Integrating with Observability Platforms

While most monitoring and observability platforms should be able to consume the OpenMetrics standard, some platforms may require minor configuration or development work to integrate. This is normal for any device providing metrics, not just Pure Storage.

Pure Storage is working with observability platform vendors to ensure products work out of the box, including a fleet-wide overview dashboard.

For more information on developed observability platform integrations such as Datadog, Dynatrace, Grafana & Prometheus, take a look at extra/o11y_platforms.md.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

pure-fa-openmetrics-exporter's People

Contributors

chrroberts-pure, drizton, genegr, james-laing, nocentino, sdodsley


pure-fa-openmetrics-exporter's Issues

Update useragent string to reflect calling platform

The current user-agent string reports only that the OpenMetrics exporter is hitting the backend array, which is reported to Pure for telemetry purposes.
It would be good if we could also/otherwise expose the platform using the OME to get observability data, e.g. Datadog, New Relic, OTel, etc.

grafana dashboard

Where can we get a Grafana dashboard for this exporter? Are there dedicated dashboards for the replication features Active Cluster, ActiveDR, etc.?

Array Load Statistics metric

We recently deployed the pure-fa-openmetrics-exporter into our Prometheus/Grafana Mimir cluster. Thank you for offering this option to your customers! One of the metrics that we closely track in the Pure1 Manage UI and would like to consume is load. Can you please provide information on how the Array Load statistics are calculated? If the metrics needed aren't currently exposed by the pure-fa-openmetrics-exporter, could we please request that this be added?

Thank you!

inversion in label

Hi,
I think there is an inversion between the labels dimension and direction:

purefa_pod_performance_replication_bandwidth_bytes{dimension="from_remote", direction="continuos"}

[ERROR] Exception occurred while handling uri:

Hello - I've been able to get the Docker image up and running on a RHEL 8 VM. When I configure the Prometheus yml file for the FA, I start getting these messages in the error log of the collector. I can see the user account logging into the array with the API token, but no data is being handled. Thoughts?

I'm using the 0.1.0 version and I've tried 0.2.0-dev. Also, whenever I try to use the --workers option I get errors; the container only seems to start successfully with one worker.

[2022-09-07 20:54:06 +0000] [1] [ERROR] Exception occurred while handling uri: 'http://boop:9491/metrics?endpoint=10.32.33.34'
Traceback (most recent call last):
File "handle_request", line 83, in handle_request
FutureStatic,
File "/app/pure_fa_exporter.py", line 51, in decorated_function
response = await f(request, *args, **kwargs)
File "/app/pure_fa_exporter.py", line 113, in flasharray_handler
resp = generate_latest(registry)
File "/app/.local/lib/python3.10/site-packages/prometheus_client/exposition.py", line 197, in generate_latest
for metric in registry.collect():
File "/app/.local/lib/python3.10/site-packages/prometheus_client/registry.py", line 97, in collect
yield from collector.collect()
File "/app/.local/lib/python3.10/site-packages/pure_fa_openmetrics_exporter/flasharray_collector/collector.py", line 34, in collect
yield from ArrayInfoMetrics(self.fa).get_metrics()
File "/app/.local/lib/python3.10/site-packages/pure_fa_openmetrics_exporter/flasharray_collector/flasharray_metrics/array_info_metrics.py", line 8, in init
self.array = fa_client.arrays()[0]
File "/app/.local/lib/python3.10/site-packages/pure_fa_openmetrics_exporter/flasharray_client/client.py", line 66, in arrays
return list(self._arrays.values())
AttributeError: 'NoneType' object has no attribute 'values'

purefa_alerts_open report errors with Purity 6.4.3

The /metrics/array endpoint returns errors when multiple instances of the same alert are discovered in purefa_alerts_open, since upgrading from Purity 6.3.3 to 6.4.3. All other metrics endpoints work as expected and return results.

Versions

  • Purity //FA 6.3.4
  • OpenMetrics Exporter quay.io/purestorage/pure-fa-om-exporter:v1.0.5.hotfix1
An error has occurred while serving metrics:

17 error(s) occurred:
* collected metric "purefa_alerts_open" { label:<name:"component_name" value:"Service: active_directory_domain_catalog_ldap" > label:<name:"component_type" value:" IpAddress: 192.168.0.20" > label:<name:"severity" value:" Port: 389" > gauge:<value:1 > } was collected before with the same name and label values
* collected metric "purefa_alerts_open" { label:<name:"component_name" value:"Service: active_directory_domain_catalog_ldap" > label:<name:"component_type" value:" IpAddress: 192.168.0.20" > label:<name:"severity" value:" Port: 389" > gauge:<value:1 > } was collected before with the same name and label values
<truncated - message repeats>

The impact of this is that, due to purefa_info failing to collect from /metrics/array, any observability dashboards are unable to correlate data and therefore don't work.

Example of REST API JSON output:

    {
      "description": "(directory_service:Service: active_directory_domain_catalog_ldap, IpAddress: 192.168.0.1, Port: 389, Filter: (&(uidNumber=1000)(objectClass=user))): Directory service lookup failed. Expected: , Actual: ",
      "created": 1678460265500,
      "state": "open",
      "component_type": "directory_service",
      "name": "10999254",
      "id": "1cb99f01d2754294a1d7eb3b6e61abdf",
      "code": 231,
      "category": "array",
      "severity": "info",
      "flagged": false,
      "updated": 1678460265500,
      "closed": null,
      "notified": null,
      "component_name": "Service: active_directory_domain_catalog_ldap, IpAddress: 192.168.0.1, Port: 389, Filter: (&(uidNumber=1000)(objectClass=user))",
      "expected": "",
      "actual": "",
      "issue": "Directory service lookup failed.",
      "knowledge_base_url": "https://support.purestorage.com/?cid=Alert_0231",
      "summary": "(directory_service:Service: active_directory_domain_catalog_ldap, IpAddress: 192.168.0.1, Port: 389, Filter: (&(uidNumber=1000)(objectClass=user))): Directory service lookup failed."
    }

[new metric proposal] - Power consumption per power supply - purefa_hw_component_power_watts

Metric name: purefa_hw_component_power_watts
Description: FlashArray hardware component power consumption in watts
Dimensions: component_name, component_type

Example output:
purefa_hw_component_power_watts{component_name="CH0.PWR0",component_type="power_supply"} 1121
purefa_hw_component_power_watts{component_name="CH0.PWR1",component_type="power_supply"} 1120

Is the metric currently available in the Purity API? : No
Recommended Status in the metric spec : Accepted

Purefa Exporter Status Metric

It would be important for the exporter to have a status metric, which shows whether the exporter is successfully collecting the metrics or not.

This would be very useful with an alerting engine, for notification in case of loss of visibility.

Example: purefa_exporter_status (0 = down, 1 = up)

Target authorization token is missing

I set up the exporter with the local setup process.

I confirmed that the curl command gives the following output.

curl -H 'Authorization: Bearer 4b84cabd-***' -X GET 'http://10.10.10.10:9490/metrics/array?endpoint=bel2-pure1'

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 17
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.18.1"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.97776e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 3.97776e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 4303
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 702
....

But when referencing the token.yaml with the following command, the page only shows "Target authorization token is missing".

/opt/pure-fa-om-exporter/out/bin/pure-fa-om-exporter -t /opt/pure-fa-om-exporter/token.yaml

Here is my token.yaml

array_id1:
  address: 'bel2-pure1'
  api_token: '4b84cabd-***'
array_id2:
  address: '172.31.146.80'
  api_token: 'b8605f85-***'

The OpenMetrics exporter is incorrectly calling the /hardware endpoint

Hello,

It seems that the OpenMetrics exporter reports the wrong status when we perform a GET request against the /hardware endpoint, but returns the proper drive status when using the /drives endpoint.

// More details are in the linked internal issue.

The Pure API version in use for this test is 1.9.
Swagger reference: Pure1 Public REST APIs: https://static.pure1.purestorage.com/api-swagger/index.html

Could you please have a look at this?
Thanks!

docker compose prometheus yml adoption

Hi Eugenio, could you please change the prometheus.yml file for the docker-compose setup to match the required pattern for the "endpoint" specification? The file currently contains:

endpoint: YOUR_FLASHARRAY_IP

but if you specify those endpoint params without any brackets you will receive a Go unmarshal string error.

Defining the endpoint params like:

endpoint: [10.220.113.14]

would solve the problem.

purefa_alert_open - severity and component_name are reversed

Hi,

With pure-fa-om-exporter v1.0.2, for a host alert, the severity and component_name fields are reversed.

Here is an example:
purefa_alerts_open{component_name="critical", component_type="host", instance="<flasharray name>", job="<job_name>", severity="<host name>"}

Image used:
"Image": "quay.io/purestorage/pure-fa-om-exporter:v1.0.2"

NB: I don't know if the same issue is present for other component types.

The error seems to be introduced in commit f3c66dc, where the order of the parameters seems to be wrong.

purefb_hardware_health

@genegr Hi, is there documentation on what the health alert values for the purefb_hardware_health metric mean?

The possible values I currently see are 0, 1, and 2.

Smaller container images

Current container images are fairly large (441 MB) because they contain the Go build stage.

Furthermore, the binary (5.54 MB) is (almost) statically compiled, so the base image isn't needed.

Consider using a Docker multi-stage build.

$ docker history quay.io/purestorage/pure-fa-om-exporter:latest 
IMAGE          CREATED        CREATED BY                                      SIZE      COMMENT
bdb74f431b02   7 weeks ago    /bin/sh -c #(nop)  CMD ["--host" "0.0.0.0" "…   0B        
<missing>      7 weeks ago    /bin/sh -c #(nop)  ENTRYPOINT ["/usr/local/b…   0B        
<missing>      7 weeks ago    /bin/sh -c #(nop)  EXPOSE 9490                  0B        
<missing>      7 weeks ago    /bin/sh -c go build -v -o /usr/local/bin/pur…   47.7MB    
<missing>      7 weeks ago    /bin/sh -c #(nop) COPY dir:2be547e94338b94e0…   1.28MB    
<missing>      7 weeks ago    /bin/sh -c go mod download && go mod verify     38.9MB    
<missing>      7 weeks ago    /bin/sh -c #(nop) COPY multi:fb0a91644ecf963…   48.9kB    
<missing>      7 weeks ago    /bin/sh -c #(nop) WORKDIR /usr/src/app          0B        
<missing>      8 weeks ago    /bin/sh -c #(nop) WORKDIR /go                   0B        
<missing>      8 weeks ago    /bin/sh -c mkdir -p "$GOPATH/src" "$GOPATH/b…   0B        
<missing>      8 weeks ago    /bin/sh -c #(nop)  ENV PATH=/go/bin:/usr/loc…   0B        
<missing>      8 weeks ago    /bin/sh -c #(nop)  ENV GOPATH=/go               0B        
<missing>      8 weeks ago    /bin/sh -c set -eux;  apk add --no-cache --v…   347MB     
<missing>      8 weeks ago    /bin/sh -c #(nop)  ENV GOLANG_VERSION=1.19.3    0B        
<missing>      8 weeks ago    /bin/sh -c #(nop)  ENV PATH=/usr/local/go/bi…   0B        
<missing>      8 weeks ago    /bin/sh -c set -eux;  if [ -e /etc/nsswitch.…   0B        
<missing>      2 months ago   /bin/sh -c apk add --no-cache ca-certificates   519kB     
<missing>      2 months ago   /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B        
<missing>      2 months ago   /bin/sh -c #(nop) ADD file:ceeb6e8632fafc657…   5.54MB    

API key as url parameter

The old pure_exporter had the option to set the API token as a URL parameter, which allowed the value to be set via label mapping. As a result, there was only one Prometheus scrape job for all arrays, and not one job per array as this new version enforces. That's because one can set the parameter in the targets list (see below). Any chance to add the URL parameter back again? Scaling with more arrays is otherwise tricky and breaks many workflows (like silencing an entire job because of maintenance, etc.). Here's an example of how we currently use it with the old exporter, which unfortunately no longer works.

prometheus.yml

scrape_configs:
  - job_name: 'pure'
    honor_labels: true
    metrics_path: /metrics/flasharray
    scrape_interval: 10s
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_endpoint
      - source_labels: [__pure_apitoken]
        target_label: __param_apitoken
      - source_labels: [__address__]
        target_label: instance
        # trim fqdn to short name
        regex: '^(.+)\.example\.com$'
      - target_label: __address__
        replacement: localhost:9491
    file_sd_configs:
      - files:
        - /etc/prometheus/targets/pure.yml

/etc/prometheus/targets/pure.yml

- targets: [ pure1.example.com ]
  labels:
    __pure_apitoken: aaaa-bbb-ccc-ddd-eee
- targets: [ pure2.example.com ]
  labels:
    __pure_apitoken: aaaa-bbb-ccc-eee-ddd

Pure block exporter does not work

Note: Re-creating Issue on this Repo:

I ran the exporter on docker. Disabled SSL check.

As we discussed this in our 1:1 meeting:

The logs say:

[2022-03-30 00:38:32 +0000] [1] [INFO] packages: sanic-routing==0.7.2
[2022-03-30 00:38:32 +0000] [1] [INFO] Starting worker [1]

When I try to access the /metrics endpoint, it does not return anything. It says the site cannot be reached. I double-checked the port numbers and everything.

http://fqdn:9491/metrics

curl -G 'http://localhost:9498'
curl: (7) Failed connect to localhost:9498; Connection refused

Inquiry regarding the “Purestorage” version supported by the current “exporter” and the previous “exporter”


Previously, we imported data from Pure Storage with the exporter below.
https://github.com/PureStorage-OpenConnect/pure-exporter

Currently, I am proceeding with the exporter from this repository.

Data could not be retrieved from Pure Storage version 4.7.10.

So what I'm curious about is whether the previous exporter and the current exporter each support version 4.7.10.

I would like to know which versions are supported in full, or to what extent they are covered.

[new metric proposal] - Drive capacity metrics - purefa_drive_capacity

Metric name: purefa_drive_capacity
Description: FlashArray Drives capacity metric
Dimensions: component_name, component_type, component_protocol, component_status

Example output:
purefa_drive{component_name="SH0.BAY10",component_type="SSD", component_protocol="SAS", component_status="healthy"} 6492782592
purefa_drive{component_name="SH0.BAY23",component_type="NVRAM", component_protocol="SAS", component_status="failed"} 25165824

Is the metric currently available in the Purity API? : Yes
Recommended Status in the metric spec : Accepted

Collect Frontend WWPN Information

For our internal PerformanceWarehouse we need the WWPNs from the front end ports. Would be great to implement that.
We want to have a full view from Host to Storage in Grafana.

BR

Lukas

ER: Include array_name in ALL metrics

array_name is only available in purefa_info, which we use to correlate data in table widgets on dashboards.

With Datadog and Prometheus we recommend tagging all metrics with the array name/instance. Dynatrace extensions (and possibly other platforms) rely heavily on scraping the data required from the metrics.

While we could recommend using tagging, I think an easier approach for the user would be to include at least the array_name in every available metric. This would remove the reliance on additional tagging configuration by the user and still allow full metric correlation.

Connection path state

I would really appreciate your assistance in finding a solution for monitoring the connection path state of hosts. Specifically, I'm looking to gather the host connectivity state, which may display path states such as Redundant, Unused Port, Unknown, etc. (Health > Connections > Host Connections), in one metric.

Is it possible to obtain these states in a specific metric?


purefa_alerts_open not reporting correctly

We have pure-fa-openmetrics-exporter 1.0.12 (the same issue might exist in 1.0.13) and some alerts are not reporting the matching fields correctly. For example, the last alert with code 5177 reports the severity value in the summary field; it looks like the field values are shifted.

You can see below the difference in output between a correctly scraped alert and one with wrongly matched values.

Thanks

Grafana not showing data as expected with multiple arrays

Hi!

Trying to use the grafana dashboard to view multiple Flash Arrays.
The prometheus configuration uses the flash array exporter with token configuration, meaning that I have multiple targets and the relabel config for each job.
This made most sense rather than adding one job per array per metric source.

To accommodate this there are two exporters running, one which runs generically and one which runs with the token config.

Earlier this has been done with config towards one FA, and one job per metric source on that FA, which works fine in the grafana dash.

I get the data into Prometheus as expected, and I can't see any metric difference between the metrics retrieved by the old config and the new config. The data is differentiated with the instance label, which I understood the config to use.
The only difference between the old and the new config would be the job names, but this should have little effect on Grafana.

In Grafana I can see both arrays in the FlashArray dropdown menu, or rather three, since the new config targets both and the old only one, and both are live at the same time. But this means that it has picked up all the instance labels.

But the dashboard only works if all instances or the old instance is selected, and the list at the top only shows the old instance; for a while it showed the IP of the Prometheus exporter, but it no longer does that either.

In my mind, if I have all the FAs in the dropdown, this should be propagated to the entire dash. Selecting an instance from the new config returns no data at all, even though Prometheus is collecting data. It should be possible to use any number of Prometheus exporters for FAs depending on the environment, just like you can input any number of FAs in the Prometheus config as per the last example below.

The dash used is this one:
https://github.com/PureStorage-OpenConnect/pure-fa-openmetrics-exporter/blob/master/extra/grafana/grafana-purefa-flasharray-overview.json

The prometheus config used is the one in the readme, repeated for each metric source:
https://github.com/PureStorage-OpenConnect/pure-fa-openmetrics-exporter/blob/master/extra/grafana/grafana-purefa-flasharray-overview.json

The old prometheus config which works is based on this one:
https://github.com/PureStorage-OpenConnect/pure-fa-openmetrics-exporter/blob/master/extra/prometheus/prometheus.yml

Let me know if you need any additional information to help troubleshoot this.

Thanks!

Replication monitoring

Hi,
Do you have plans to include some metrics about "Active Cluster", distant snapshots (protection groups), safe mode, network activity, ...?

Plugin restarting and collections stopped in Purity 6.4.4 and later

Hello everyone,

I use the pure-fa-openmetrics-exporter for FlashArray and FlashBlade at the biggest Pure Storage customer in LATAM.
After upgrading Purity to version 6.4.5 on two FlashArray//C systems, the collection stopped.
Below is the output of the CLI command "docker ps":

[root@~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d5d03b63270 grafana/grafana "/run.sh" 2 months ago Up 7 days 0.0.0.0:443->3000/tcp pure-fa-openmetrics-exporter-docker-grafana-1
3b549b787bcb quay.io/purestorage/pure-fb-om-exporter:latest "/pure-fb-om-exporte…" 3 months ago Up 7 days 0.0.0.0:9491->9491/tcp pure-fa-openmetrics-exporter-docker-pure-fb-om-exporter-1
fa57c4d7b452 quay.io/purestorage/pure-fa-om-exporter:latest "/usr/local/bin/pure…" 3 months ago Up 31 minutes 0.0.0.0:9490->9490/tcp pure-fa-openmetrics-exporter-docker-pure-fa-om-exporter-1
44db9ab12a38 prom/prometheus "/bin/prometheus --s…" 3 months ago Up 7 days 0.0.0.0:9090->9090/tcp pure-fa-openmetrics-exporter-docker-prometheus-1

I would like help with re-establishing collections on devices running Purity 6.4.4 and later.
NOTE: These devices use file services.
On equipment with version 6.4.3, there was no interruption in collection.

Add vendor directory to manage deps

Some of our customers who are able to access this repo may not have access to the repos that we list in our go.mod as dependencies. This will more or less enable 'offline' builds of our binary.

Enhancements to purefa_alerts_open, collect additional fields.

We would like to request that additional data be collected from the Alerts_Open endpoint. We would like to use the data to send alerts to Prometheus's Alertmanager, but we need the additional fields to manage and silence the alerts, and to have additional documentation when an alert is sent to a pager or an MS Teams channel. I reviewed the API for the data available and would like to have at least the data that is shown in the GUI (green), plus the "id" field so that the alert can be managed.

This is what I found available in the REST API:
The "green" highlighted fields are what is shown in the GUI; the Prometheus fields are the yellow ones, plus the "severity" field.

Following Instructions but no Config File

I'm new to Docker and containers, but I have the basic configuration running and I have gained access to the page, so I know the container is running. Which example is the best configuration file for pulling basic metrics, and how do I tell the Docker build, or what do I need to modify, to use the config file I created?

Error in token file: [-t|--tokens] stat /opt/pure-fa-exporter/fa.yaml: no such file or directory

Hi all

When launching the fa exporter in docker, and using the -t switch to add the .yaml file for the tokens, I receive:

Error in token file: [-t|--tokens] stat /opt/pure-fa-exporter/fa.yaml: no such file or directory

My docker run command is:

docker run -d -p 9490:9490 --name pure-fa-om-exporter quay.io/purestorage/pure-fa-om-exporter -t /opt/pure-fa-exporter/fa.yaml

The file exists, and the contents are:

array_id1:
address: PUREARRAYFQDN
api_token: 7**********************************8

Any help is appreciated

Thanks

Rich

Add better error handling when the API token is incorrect.

Currently, if the Purity API is available but the token is incorrect, we reply with a 200 and some internal Go metrics, but the array metrics are missing.

The Purity API currently replies with a 400 and the error "Unable to list user for API token." We should forward the errors and HTTP codes from the Purity API when Purity replies with HTTP codes not equal to 2xx.

Also, if the Purity API is not available, we currently reply with a 400 "Error connecting to FlashArray. Check your management endpoint and/or api token are correct." The docs should clarify that this error can also indicate connection issues, e.g. a firewall.

[new metric label proposal] - Add subscription info

In /api/2.30/subscriptions it now lists any subscriptions. It would be helpful to add this to purefa_info if possible, otherwise potentially as purefa_info_subscription.

From the API

{
    "continuation_token": null,
    "items": [
        {
            "service": "FlashArray",
            "id": "<guid>"
        }
    ],
    "more_items_remaining": false,
    "total_item_count": null
}

fatal error: concurrent map writes

The exporter exits with a fatal "concurrent map writes" error.

While a workaround has been suggested:

Binary
Running as a service on Linux, we can include Restart=always and RestartSec=1.

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/bin/PureStorage-OpenConnect/pure-fa-openmetrics-exporter/out/bin/pure-fa-om-exporter --port 9490

Docker
https://github.com/PureStorage-OpenConnect/pure-fa-openmetrics-exporter/blob/master/extra/grafana/README.md#check-docker-container-is-running
docker run --restart unless-stopped

We should really focus on finding the root cause of this issue to improve stability.

[new metric proposal] - Controller uptime in seconds - purefa_hw_controller_uptime_sec

Metric name: purefa_hw_controller_uptime_sec
Description: FlashArray hardware controller uptime in seconds
Dimensions: component_name, component_type

Example output:
purefa_hw_controller_uptime_sec{component_name="CT0",component_type="controller"} 1827364
purefa_hw_controller_uptime_sec{component_name="CT1",component_type="controller"} 1827129

Is the metric currently available in the Purity API? : Yes /api/2.30/controllers
Recommended Status in the metric spec : Accepted

Include metrics from /network-interfaces/performance for interface errors

I've had a customer request to add the following network error counters metrics.

Please assign to me, thanks.

Sample from the REST API Swagger:

  "more_items_remaining": false,
  "total_item_count": 0,
  "continuation_token": "string",
  "items": [
    {
      "name": "string",
      "interface_type": "string",
      "time": 0,
      "eth": {
        "other_errors_per_sec": 0,
        "received_bytes_per_sec": 0,
        "received_crc_errors_per_sec": 0,
        "received_frame_errors_per_sec": 0,
        "received_packets_per_sec": 0,
        "total_errors_per_sec": 0,
        "transmitted_bytes_per_sec": 0,
        "transmitted_carrier_errors_per_sec": 0,
        "transmitted_dropped_errors_per_sec": 0,
        "transmitted_packets_per_sec": 0
      },
      "fc": {
        "received_bytes_per_sec": 0,
        "received_crc_errors_per_sec": 0,
        "received_frames_per_sec": 0,
        "received_link_failures_per_sec": 0,
        "received_loss_of_signal_per_sec": 0,
        "received_loss_of_sync_per_sec": 0,
        "total_errors_per_sec": 0,
        "transmitted_bytes_per_sec": 0,
        "transmitted_frames_per_sec": 0,
        "transmitted_invalid_words_per_sec": 0
      }
    }
  ],
  "total": [
    {
      "name": "string",
      "interface_type": "string",
      "time": 0,
      "eth": {
        "other_errors_per_sec": 0,
        "received_bytes_per_sec": 0,
        "received_crc_errors_per_sec": 0,
        "received_frame_errors_per_sec": 0,
        "received_packets_per_sec": 0,
        "total_errors_per_sec": 0,
        "transmitted_bytes_per_sec": 0,
        "transmitted_carrier_errors_per_sec": 0,
        "transmitted_dropped_errors_per_sec": 0,
        "transmitted_packets_per_sec": 0
      },
      "fc": {
        "received_bytes_per_sec": 0,
        "received_crc_errors_per_sec": 0,
        "received_frames_per_sec": 0,
        "received_link_failures_per_sec": 0,
        "received_loss_of_signal_per_sec": 0,
        "received_loss_of_sync_per_sec": 0,
        "total_errors_per_sec": 0,
        "transmitted_bytes_per_sec": 0,
        "transmitted_frames_per_sec": 0,
        "transmitted_invalid_words_per_sec": 0
      }
    }
  ]
}

pure-fa-openmetrics-exporter ERROR :Target authorization token is missing and Endpoint parameter is missing

Hello

I want to run the project, but it gives me an error on the web site.


ERROR: Target authorization token is missing

This is how I run it:
./out/bin/pure-fa-om-exporter --tokens ./token.yaml
The token.yaml file is:

array_id1:
address: “10.100.100.100”
api_token: “44basdasdasdadse4c6-3453123123127"

Is this a valid YAML file?
Can anyone help me?

And after that I see this screen, but it is not running.


docker prometheus fails to start

I issue: sudo docker run -d -p 9090:9090 --name=prometheus -v /tmp/prometheus-pure.yml:/etc/prometheus/prometheus.yml -v /tmp/prometheus-data:/prometheus prom/prometheus:latest
The Display:
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dcd500712644 prom/prometheus:latest "/bin/prometheus --c…" 22 seconds ago Exited (2) 21 seconds ago prometheus
c83486d8df90 quay.io/purestorage/pure-fa-om-exporter:latest "/usr/local/bin/pure…" 16 minutes ago Up 16 minutes 0.0.0.0:9490->9490/tcp pure-fa-om-exporter

Logs for prometheus:
sudo docker logs prometheus
ts=2023-02-03T16:42:25.144Z caller=main.go:512 level=info msg="No time or size retention was set so using the default time retention" duration=15d
ts=2023-02-03T16:42:25.144Z caller=main.go:556 level=info msg="Starting Prometheus Server" mode=server version="(version=2.42.0, branch=HEAD, revision=225c61122d88b01d1f0eaaee0e05b6f3e0567ac0)"
ts=2023-02-03T16:42:25.145Z caller=main.go:561 level=info build_context="(go=go1.19.5, platform=linux/amd64, user=root@c67d48967507, date=20230201-07:53:32)"
ts=2023-02-03T16:42:25.145Z caller=main.go:562 level=info host_details="(Linux 4.18.0-408.el8.x86_64 #1 SMP Mon Jul 18 17:42:52 UTC 2022 x86_64 dcd500712644 (none))"
ts=2023-02-03T16:42:25.145Z caller=main.go:563 level=info fd_limits="(soft=1048576, hard=1048576)"
ts=2023-02-03T16:42:25.145Z caller=main.go:564 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2023-02-03T16:42:25.145Z caller=query_logger.go:91 level=error component=activeQueryTracker msg="Error opening query log file" file=/prometheus/queries.active err="open /prometheus/queries.active: permission denied"
panic: Unable to create mmap-ed active query log

goroutine 1 [running]:
github.com/prometheus/prometheus/promql.NewActiveQueryTracker({0x7ffed10bef02, 0xb}, 0x14, {0x3d8ba20, 0xc000051040})
/app/promql/query_logger.go:121 +0x3cd
main.main()
/app/cmd/prometheus/main.go:618 +0x69d3

Seems to be a permission issue. Does anyone know how to fix this error?

[new metric proposal] - Object Limit per array - purefa_array_object_limit

For highly automated environments, the automation engine needs to know the maximum number of objects (e.g. volumes, snapshots, pgroups, pods, etc.) supported, so it will not run into errors when the limit has been reached.

Metric name: purefa_array_object_limit
Description: FlashArray object limit per object type
Dimensions: object_type
Dimension Example Values: object_type: volume, pgroup, snapshot, pods

Example output:
purefa_array_object_limit{object_type="pgroup"} 12345
purefa_array_object_limit{object_type="volume"} 54321

Is the metric currently available in the Purity API? : No
Recommended Status in the metric spec : Accepted
