
firehose_exporter's Introduction

Cloud Foundry Firehose Exporter

A Prometheus exporter proxy for Cloud Foundry Firehose metrics. Please refer to the FAQ for general questions about this exporter.

Architecture overview

Installation

Binaries

Download the prebuilt binaries for your platform and run:

$ ./firehose_exporter <flags>

From source

Using the standard go install (you must have Go already installed on your local machine):

$ go install github.com/bosh-prometheus/firehose_exporter
$ firehose_exporter <flags>

Docker

To run the firehose exporter as a Docker container, run:

$ docker run -p 9186:9186 boshprometheus/firehose-exporter <flags>
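
Configuration can be provided as command line flags or as the environment variables listed under Usage below. A minimal sketch, assuming the required settings from the flags table (the log stream URL, certificate paths, and environment label are placeholders, and the certificates are mounted into the container):

$ docker run -p 9186:9186 \
    -v /path/to/certs:/certs \
    -e FIREHOSE_EXPORTER_LOGGING_URL="<cloud foundry log stream url>" \
    -e FIREHOSE_EXPORTER_LOGGING_TLS_CERT="/certs/rlp.crt" \
    -e FIREHOSE_EXPORTER_LOGGING_TLS_KEY="/certs/rlp.key" \
    -e FIREHOSE_EXPORTER_METRICS_ENVIRONMENT="production" \
    boshprometheus/firehose-exporter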

Cloud Foundry

The exporter can be deployed to an existing Cloud Foundry environment:

$ git clone https://github.com/bosh-prometheus/firehose_exporter.git
$ cd firehose_exporter

Modify the included application manifest file to include your Cloud Foundry Firehose properties. Then you can push the exporter to your Cloud Foundry environment:

$ cf push

BOSH

This exporter can be deployed using the Prometheus BOSH Release.

Usage

Flags

| Flag | Environment Variable | Required | Default | Description |
|------|----------------------|----------|---------|-------------|
| retro_compat.disable | FIREHOSE_EXPORTER_RETRO_COMPAT_DISABLE | No | false | Disable retro compatibility |
| retro_compat.enable_delta | FIREHOSE_EXPORTER_RETRO_COMPAT_ENABLE_DELTA | No | false | Enable retro compatibility delta in counter |
| metrics.shard_id | FIREHOSE_EXPORTER_DOPPLER_SUBSCRIPTION_ID | No | prometheus | Cloud Foundry Nozzle Subscription ID |
| metrics.expiration | FIREHOSE_EXPORTER_DOPPLER_METRIC_EXPIRATION | No | 10 minutes | How long Cloud Foundry metrics received from the Firehose are valid |
| metrics.batch_size | FIREHOSE_EXPORTER_METRICS_BATCH_SIZE | No | infinite buffer | Batch size for the nozzle envelope buffer |
| metrics.node_index | FIREHOSE_EXPORTER_NODE_INDEX | No | 0 | Node index to use |
| metrics.timer_rollup_buffer_size | FIREHOSE_EXPORTER_TIMER_ROLLUP_BUFFER_SIZE | No | 0 | The number of envelopes that will be allowed to be buffered while timer HTTP metric aggregations are running |
| filter.deployments | FIREHOSE_EXPORTER_FILTER_DEPLOYMENTS | No | | Comma-separated deployments to filter |
| filter.events | FIREHOSE_EXPORTER_FILTER_EVENTS | No | | Comma-separated events to filter. If not set, all events will be enabled (ContainerMetric, CounterEvent, HttpStartStop, ValueMetric) |
| logging.url | FIREHOSE_EXPORTER_LOGGING_URL | Yes | | Cloud Foundry Log Stream URL |
| logging.tls.ca | FIREHOSE_EXPORTER_LOGGING_TLS_CA | No | | Path to the CA certificate used to connect to the RLP |
| logging.tls.cert | FIREHOSE_EXPORTER_LOGGING_TLS_CERT | Yes | | Path to the certificate used to connect to the RLP via mTLS |
| logging.tls.key | FIREHOSE_EXPORTER_LOGGING_TLS_KEY | Yes | | Path to the key used to connect to the RLP via mTLS |
| metrics.namespace | FIREHOSE_EXPORTER_METRICS_NAMESPACE | No | firehose | Metrics namespace |
| metrics.environment | FIREHOSE_EXPORTER_METRICS_ENVIRONMENT | Yes | | Environment label to be attached to metrics |
| skip-ssl-verify | FIREHOSE_EXPORTER_SKIP_SSL_VERIFY | No | false | Disable SSL verification |
| web.listen-address | FIREHOSE_EXPORTER_WEB_LISTEN_ADDRESS | No | :9186 | Address to listen on for web interface and telemetry |
| web.telemetry-path | FIREHOSE_EXPORTER_WEB_TELEMETRY_PATH | No | /metrics | Path under which to expose Prometheus metrics |
| web.auth.username | FIREHOSE_EXPORTER_WEB_AUTH_USERNAME | No | | Username for web interface basic auth |
| web.auth.password | FIREHOSE_EXPORTER_WEB_AUTH_PASSWORD | No | | Password for web interface basic auth |
| web.tls.cert_file | FIREHOSE_EXPORTER_WEB_TLS_CERTFILE | No | | Path to a file that contains the TLS certificate (PEM format). If the certificate is signed by a certificate authority, the file should be the concatenation of the server's certificate, any intermediates, and the CA's certificate |
| web.tls.key_file | FIREHOSE_EXPORTER_WEB_TLS_KEYFILE | No | | Path to a file that contains the TLS private key (PEM format) |
| profiler.enable | FIREHOSE_EXPORTER_ENABLE_PROFILER | No | false | Enable pprof profiling on the app at /debug/pprof |
| log.level | FIREHOSE_EXPORTER_LOG_LEVEL | No | info | Only log messages with the given severity or above. Valid levels: [debug, info, warn, error, fatal] |
| log.in_json | FIREHOSE_EXPORTER_LOG_IN_JSON | No | false | Log in JSON |
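
Putting the required settings together, a minimal invocation might look like the following sketch (all values are placeholders):

$ ./firehose_exporter \
    --logging.url="<cloud foundry log stream url>" \
    --logging.tls.cert="/path/to/rlp.crt" \
    --logging.tls.key="/path/to/rlp.key" \
    --metrics.environment="production"

Prometheus can then scrape the exporter at the configured web.listen-address and web.telemetry-path, for example (a sketch; the target host is a placeholder):

scrape_configs:
  - job_name: firehose_exporter
    static_configs:
      - targets: ['<firehose-exporter-host>:9186']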

Metrics

For a list of Cloud Foundry Firehose metrics check the Cloud Foundry Component Metrics documentation.

The exporter additionally exposes the following internal metrics:

| Metric | Description |
|--------|-------------|
| metrics.namespace_total_envelopes_received | Total number of envelopes received from the Cloud Foundry Firehose |
| metrics.namespace_last_envelope_received_timestamp | Unix timestamp (seconds since epoch) of the last envelope received from the Cloud Foundry Firehose |
| metrics.namespace_total_metrics_received | Total number of metrics received from the Cloud Foundry Firehose |
| metrics.namespace_last_metric_received_timestamp | Unix timestamp (seconds since epoch) of the last metric received from the Cloud Foundry Firehose |
| metrics.namespace_total_container_metrics_received | Total number of container metrics received from the Cloud Foundry Firehose |
| metrics.namespace_last_container_metric_received_timestamp | Unix timestamp (seconds since epoch) of the last container metric received from the Cloud Foundry Firehose |
| metrics.namespace_total_counter_events_received | Total number of counter events received from the Cloud Foundry Firehose |
| metrics.namespace_last_counter_event_received_timestamp | Unix timestamp (seconds since epoch) of the last counter event received from the Cloud Foundry Firehose |
| metrics.namespace_total_http_received | Total number of HttpStartStop events received from the Cloud Foundry Firehose |
| metrics.namespace_last_http_received_timestamp | Unix timestamp (seconds since epoch) of the last HttpStartStop event received from the Cloud Foundry Firehose |
| metrics.namespace_total_value_metrics_received | Total number of value metrics received from the Cloud Foundry Firehose |
| metrics.namespace_last_value_metric_received_timestamp | Unix timestamp (seconds since epoch) of the last value metric received from the Cloud Foundry Firehose |
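
These internal metrics can be used to monitor the exporter itself. For example, a rough envelope ingestion rate (a sketch; the concrete metric name assumes the default firehose namespace):

rate(firehose_total_envelopes_received[5m])

A sustained value of zero usually means the exporter is no longer receiving anything from the Firehose.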

Contributing

Refer to CONTRIBUTING.md.

License

Apache License 2.0, see LICENSE.

firehose_exporter's People

Contributors

aeijdenberg, arthurhlt, benjaminguttmann-avtq, dependabot[bot], frodenas, kkellner, mathias-ewald, mdimiceli, mevansam, mkuratczyk, mrbuk, psycofdj


firehose_exporter's Issues

firehose_exporter internal server error 500

Hi, we are using the prometheus bosh release and we've found an issue with the firehose exporter related to issue #13. When a user has two queues in one vhost with the same name, but one using "-" and one using "_", the firehose exporter returns a 500 internal server error upon scraping, similar to #13.

This is the error message we get (anonymized):

6 error(s) occurred:

  • collected metric firehose_value_metric_p_rabbitmq_p_rabbitmq_rabbitmq_queues_GUID_example_queue_consumers label:<name:"bosh_deployment" value:"cf-rabbitmq" > label:<name:"bosh_job_id" value:”xxxxxx” > label:<name:"bosh_job_ip" value:”xxxxx” > label:<name:"bosh_job_name" value:"rabbitmq-server" > label:<name:"environment" value:"TEST" > label:<name:"origin" value:"p-rabbitmq" > label:<name:"unit" value:"count" > gauge:<value:3 >  has help "Cloud Foundry Firehose '/p-rabbitmq/rabbitmq/queues/GUID/test-queue/consumers' value metric from 'p-rabbitmq'." but should have "Cloud Foundry Firehose '/p-rabbitmq/rabbitmq/queues/GUID/test_queue/consumers' value metric from 'p-rabbitmq'."

From the other ticket I gathered that this might be hard to fix, but this is an issue that's not really preventable from an operator's point of view on a platform with a large number of users and vhosts.

For reference we are using version 4.2.3 of the firehose exporter.

Can you please take a look at the issue?

Regards,
Daniel

Invalid metric type

Dear all,

With prometheus-boshrelease v23.3.0, we have an error on
invalid metric type "_kyu_9_jt_tug_3_l_369_i_92_jjg_consumers gauge"
and /metrics return
# TYPE firehose_value_metric_p_rabbitmq_p_rabbitmq_rabbitmq_queues_7_dd_8_ba_19_229_d_4_a_4_f_9_dce_4416_fbb_728_ff_spring_cloud_hystrix_stream_anonymous_._kyu_9_jt_tug_3_l_369_i_92_jjg_consumers gauge

server returned HTTP status 500 Internal Server Error

The exporter throws an error when RMQ ODB instances are created. This prevents scraping of the other metrics and shows the state as "DOWN".

$ curl http://10.193.177.96:9186/metrics
An error has occurred during metrics collection:

67 error(s) occurred:

  • collected metric firehose_value_metric_p_rabbitmq_p_rabbitmq_rabbitmq_messages_pending label:<name:"bosh_deployment" value:"cf-rabbitmq" > label:<name:"bosh_job_id" value:"4e1f863c-236d-4bb2-ae44-48e221aacd7f" > label:<name:"bosh_job_ip" value:"10.193.177.118" > label:<name:"bosh_job_name" value:"rabbitmq-server" > label:<name:"environment" value:"cf" > label:<name:"origin" value:"p-rabbitmq" > label:<name:"unit" value:"count" > gauge:<value:0 > has help "Cloud Foundry Firehose '/p-rabbitmq/rabbitmq/messages/pending' value metric from 'p-rabbitmq'." but should have "Cloud Foundry Firehose '/p.rabbitmq/rabbitmq/messages/pending' value metric from 'p-rabbitmq'."

In the RabbitMQ ODB service, the valueMetric.Origin is presented as "p.rabbitmq", while the shared service instance is presented as "p-rabbitmq". Somewhere in the exporter this is being compared and an exception is thrown, since it expects the metric path to match the deployment "p-rabbitmq".

firehose-exporter is missing in-depth heap data (Survivor Space, Eden Space, Metaspace, etc.)

Hi,
We want to use firehose-exporter to get some customized heap data, but it exposes only Tenured Gen data and drops the rest, such as:

  1. File Descriptors
  2. Heap:- Eden Space, Survivor Space
  3. Non-Heap:- Metaspace, Compressed Class Space, Code Cache
  4. Garbage Collection:- Collections, Pause Durations, Allocated/Promoted

Firehose-exporter output:-
# TYPE firehose_value_metric_9629_bd_8_a_d_499_47_c_6_9063_1190_b_4_e_27472_jvm_memory_used_bytes gauge firehose_value_metric_9629_bd_8_a_d_499_47_c_6_9063_1190_b_4_e_27472_jvm_memory_used_bytes{application_guid="9629bd8a-d499-47c6-9063-1190b4e27472",application_instance="1",area="heap",bosh_deployment="cf-831082f4cc611c010e5c",bosh_job_id="d11159c1-aefd-4ca7-a4fd-3e552828bfe0",bosh_job_ip="172.21.220.56",bosh_job_name="doppler",environment="AMS-NONPROD",id="Tenured Gen",instance_id="1",origin="9629bd8a-d499-47c6-9063-1190b4e27472",product="Pivotal Application Service",source_id="9629bd8a-d499-47c6-9063-1190b4e27472",system_domain="sys-ta-ams.af-klm.com",unit=""} 3.8100416e+07
We deployed Promregator, and all of the missing data above is already there.
# TYPE jvm_memory_used_bytes gauge jvm_memory_used_bytes{area="nonheap",id="Code Cache",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:0",cf_instance_number="0",} 3.029344E7 jvm_memory_used_bytes{area="nonheap",id="Metaspace",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:0",cf_instance_number="0",} 5.7728192E7 jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:0",cf_instance_number="0",} 6894560.0 jvm_memory_used_bytes{area="heap",id="Eden Space",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:0",cf_instance_number="0",} 1.4957776E7 jvm_memory_used_bytes{area="heap",id="Survivor Space",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:0",cf_instance_number="0",} 746672.0 jvm_memory_used_bytes{area="heap",id="Tenured Gen",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:0",cf_instance_number="0",} 3.7933536E7 jvm_memory_used_bytes{area="nonheap",id="Code Cache",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:1",cf_instance_number="1",} 3.0955456E7 jvm_memory_used_bytes{area="nonheap",id="Metaspace",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:1",cf_instance_number="1",} 5.7782856E7 jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:1",cf_instance_number="1",} 6912616.0 jvm_memory_used_bytes{area="heap",id="Eden Space",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:1",cf_instance_number="1",} 1.9109648E7 jvm_memory_used_bytes{area="heap",id="Survivor Space",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:1",cf_instance_number="1",} 708144.0 jvm_memory_used_bytes{area="heap",id="Tenured Gen",org_name="PTOPT",space_name="default",app_name="afterburner",cf_instance_id="9629bd8a-d499-47c6-9063-1190b4e27472:1",cf_instance_number="1",} 3.8100416E7

Could you please help us to fix this issue?

Thanks.

Checking health of exporter

Hi,

I notice that the exporter always returns a 200 OK response, regardless of whether it can successfully connect to the Firehose socket or errors are displayed in the console. Is there a way to detect exporter errors from the HTTP response?
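
One workaround, rather than relying on the HTTP status code, is to alert on the exporter's internal metrics listed in the Metrics section above. A sketch of such a rule, assuming the default firehose namespace and a 10 minute staleness threshold:

groups:
  - name: firehose_exporter
    rules:
      - alert: FirehoseExporterStale
        # no envelope received from the Firehose for more than 10 minutes
        expr: time() - firehose_last_envelope_received_timestamp > 600
        for: 5m
        labels:
          severity: warning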

The process fails to export all Firehose tag keys

We are customizing the metrics sent to the Firehose using tag keys. We can see those tags in the Firehose (with cf nozzle), but they are lost at the firehose exporter endpoint.
Those tags contain the ApplicationId and AppInstance ID, so that custom metrics can be retrieved per instance in Prometheus.
Those tags are standard according to the documentation (https://github.com/cloudfoundry/dropsonde-protocol/blob/master/events/README.md), so it's important that we can get them.
Thanks

No RabbitMQ metrics after PCF upgrade

BOSH Director 2.2-build.372
Pivotal Application Service 2.2.9
RabbitMQ 1.13.11
Spring Cloud Services 1.5.6
MySQL for PCF 1.10.14

was updated, and now the firehose_value_metric_p_rabbitmq_* metrics are gone.
firehose_exporter works OK, but the RabbitMQ metrics specifically are missing.
The logs are empty.

Error while reading from the Firehose: Error dialing trafficcontroller server

I am trying to run the firehose exporter for Prometheus and ran into this issue, as seen below:

Error while reading from the Firehose: Error dialing trafficcontroller server: malformed ws or wss URL.
Please ask your Cloud Foundry Operator to check the platform configuration (trafficcontroller is https ://doppler.system.**.org). source="firehose_nozzle.go:104"

bug: hangs after disconnect and does not attempt to reconnect

We had some of our vmware infrastructure unexpectedly reboot, causing the firehose_exporter to disconnect. It looks as though the firehose_exporter was hung in a state which would not attempt to reconnect, thus causing our metrics to fall behind.

time="2019-10-08T17:00:43Z" level=error msg="Error while reading from the Firehose: websocket: close 1006 (abnormal closure): unexpected EOF" source="firehose_nozzle.go:121"
time="2019-10-08T17:30:09Z" level=error msg="Error while reading from the Firehose: websocket: close 1006 (abnormal closure): unexpected EOF" source="firehose_nozzle.go:121"
time="2019-10-09T01:21:31Z" level=error msg="Error while reading from the Firehose: websocket: close 1008 (policy violation): Client did not respond to ping before keep-alive timeout expired." source="firehose_nozzle.go:121"
time="2019-10-09T01:21:31Z" level=error msg="Nozzle couldn't keep up. Please try scaling up the Nozzle." source="firehose_nozzle.go:129"

I feel this should be caught, and either the exporter should exit so that whatever scheduler is running firehose_exporter (BOSH in this case) can correct it, or the firehose_exporter should keep attempting to reconnect (more ideal and standard for Prometheus).

We are using the prometheus bosh release 25.0.0, so firehose_exporter 6.0.0

Error logs - invalid metric name

Dear @frodenas,

Our Firehose Exporter has been stable for a while; the upgrade to the latest version fixed all the problems we had in the past. However, starting this morning, the exporter started producing weird errors, see below:

[ERR] time="2018-04-18T12:40:36Z" level=error msg="[error gathering metrics: 2 error(s) occurred:\n* collected metric firehose_value_metric_p_rabbitmq_p_rabbitmq_rabbitmq_queues_0_d_8748_f_6_8_c_33_4_c_51_99_ed_1_f_78_c_077761_e_deal_renewal_exchange_insurable_interest_service_depth label:<name:\"bosh_deployment\" value:\"cf-rabbitmq\" > label:<name:\"bosh_job_id\" value:\"0763273a-49d1-4615-8ce0-0e9cca4dbdcb\" > label:<name:\"bosh_job_ip\" value:\"10.16.190.138\" > label:<name:\"bosh_job_name\" value:\"rabbitmq-server\" > label:<name:\"environment\" value:\"adp\" > label:<name:\"origin\" value:\"p-rabbitmq\" > label:<name:\"unit\" value:\"count\" > gauge:<value:7 > has help \"Cloud Foundry Firehose '/p-rabbitmq/rabbitmq/queues/0d8748f6-8c33-4c51-99ed-1f78c077761e/DealRenewalExchange.insurableInterestService/depth' value metric from 'p-rabbitmq'.\" but should have \"Cloud Foundry Firehose '/p-rabbitmq/rabbitmq/queues/0d8748f6-8c33-4c51-99ed-1f78c077761e/DealRenewalExchange.insurable-interest-service/depth' value metric from 'p-rabbitmq'.\"\n* collected metric 

firehose_value_metric_p_rabbitmq_p_rabbitmq_rabbitmq_queues_0_d_8748_f_6_8_c_33_4_c_51_99_ed_1_f_78_c_077761_e_deal_renewal_exchange_coverage_service_consumers label:<name:\"bosh_deployment\" value:\"cf-rabbitmq\" > label:<name:\"bosh_job_id\" value:\"0763273a-49d1-4615-8ce0-0e9cca4dbdcb\" > label:<name:\"bosh_job_ip\" value:\"10.16.190.138\" > label:<name:\"bosh_job_name\" value:\"rabbitmq-server\" > label:<name:\"environment\" value:\"adp\" > label:<name:\"origin\" value:\"p-rabbitmq\" > label:<name:\"unit\" value:\"count\" > gauge:<value:0 > has help \"Cloud Foundry Firehose '/p-rabbitmq/rabbitmq/queues/0d8748f6-8c33-4c51-99ed-1f78c077761e/DealRenewalExchange.coverageService/consumers' value metric from 'p-rabbitmq'.\" but should have \"Cloud Foundry Firehose '/p-rabbitmq/rabbitmq/queues/0d8748f6-8c33-4c51-99ed-1f78c077761e/DealRenewalExchange.coverage-service/consumers' value metric from 'p-rabbitmq'.\"]" source="firehose_exporter.go:118"

The /metrics endpoint itself works fine and shows all the metrics. But when we pull this endpoint from Prometheus, we get the following error:
invalid metric name "firehose_value_metric_p_rabbitmq_p_rabbitmq_rabbitmq_queues_e_5_d_35_ba_5_1360_4514_8_a_04_6_b_850_fcd_1192_deal_renewal_exchange_anonymous_._p_qp_jrx_txe_gx_a_0_n_4_zyo_g_consumers"

Could you please advise how to resolve this issue? Thanks!

Regards,
Robert

"discarded: duplicate label names" errors in the log

When I run logs -f firehose-exporter-0, I see the following entries for every Prometheus pull, resulting in a large volume of logs.

Can I suppress them (see the note after the log excerpt below)? Should this be logged at warning level? Also, what causes the duplicate labels?

Any insights would be greatly appreciated.

time="2020-08-26T15:17:18Z" level=error msg="Value Metric http_request_seconds_sum from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric retention_seconds from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric shard_count from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric ingress_connection_count from log_store.router discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric egress_connection_count from log_store.nozzle discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric ingress_connection_count from log_store.nozzle discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric http_request_seconds from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric ingress_connection_count from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric egress_connection_count from log_store.nozzle discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric average_envelopes from loggregator.metron discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric ingress_connection_count from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric queue_disk_usage from log_store.router discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric ingress_connection_count from log_store.nozzle discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric http_request_seconds_sum from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric egress_connection_count from log_store.nozzle discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric http_request_seconds from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric ingress_connection_count from log_store.router discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric queue_disk_usage from log_store.nozzle discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric ingress_connection_count from log_store.router discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric retention_seconds from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric average_envelopes from loggregator.metron discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric ingress_connection_count from log_store.nozzle discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:18Z" level=error msg="Value Metric egress_connection_count from log_store.router discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric shard_count from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric retention_seconds from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric egress_connection_count from log_store.router discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric queue_disk_usage from log_store.nozzle discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric egress_connection_count from log_store.router discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric ingress_connection_count from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric queue_disk_usage from log_store.router discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric shard_count from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric http_request_seconds_sum from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric average_envelopes from loggregator.metron discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric http_request_seconds from log_store.log_store discarded: duplicate label names" source="value_metrics_collector.go:65"
time="2020-08-26T15:17:19Z" level=error msg="Value Metric queue_disk_usage from log_store.router discarded: duplicate label names" source="value_metrics_collector.go:65"

PCF - application metrics

We managed to get this running on a Docker instance, which then connects to Doppler and receives the metrics. Good job implementing the exporter! :-)

However, we are not interested in all the metrics, only in metrics for a specific application. Would it be possible to expose an endpoint only for a specific PCF app, and how could we do it?

Bad handshake error when connecting to doppler wss endpoint

I am running into an issue accessing the PCF Doppler wss endpoint on Azure. The error I received is: "error dialing trafficcontroller server: websocket: bad handshake." cf logs works fine for that environment. Questions: 1. Any tips on how to troubleshoot and fix this error? 2. Is there a way to run firehose_exporter in debug mode?

The shutdown process does not proceed when a UAA token issuance failure occurs.

I am using prometheus v19.0.0, so firehose_exporter v4.2.5 is installed.

I have not been getting any CF-related metrics recently, so I checked the firehose_exporter log.

time="2018-03-29T07:04:25Z" level=info msg="Starting firehose_exporter (version=4.2.5, branch=master, revision=0d2a1ad41b0e47195aa39846ea3741592f8ac57d)" source="firehose_exporter.go:250"
time="2018-03-29T07:04:25Z" level=info msg="Build context (go=go1.9.2, user=root@092c0c1307bc, date=20171027-16:10:13)" source="firehose_exporter.go:251"
time="2018-03-29T07:04:25Z" level=info msg="Listening on :9186" source="firehose_exporter.go:328"
time="2018-03-29T07:04:25Z" level=info msg="Starting Firehose Nozzle..." source="firehose_nozzle.go:58"
time="2018-03-29T07:04:32Z" level=error msg="Error getting oauth token: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:04:32Z" level=error msg="Error while reading from the Firehose: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused" source="firehose_nozzle.go:111"
time="2018-03-29T07:04:34Z" level=error msg="Error getting oauth token: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:04:34Z" level=error msg="Error while reading from the Firehose: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused" source="firehose_nozzle.go:111"
time="2018-03-29T07:04:36Z" level=error msg="Error getting oauth token: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:04:36Z" level=error msg="Error while reading from the Firehose: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused" source="firehose_nozzle.go:111"
time="2018-03-29T07:04:40Z" level=error msg="Error getting oauth token: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:04:40Z" level=error msg="Error while reading from the Firehose: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused" source="firehose_nozzle.go:111"
time="2018-03-29T07:04:48Z" level=error msg="Error getting oauth token: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:04:48Z" level=error msg="Error while reading from the Firehose: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused" source="firehose_nozzle.go:111"
time="2018-03-29T07:05:04Z" level=error msg="Error getting oauth token: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:05:04Z" level=error msg="Error while reading from the Firehose: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused" source="firehose_nozzle.go:111"
time="2018-03-29T07:05:36Z" level=error msg="Error getting oauth token: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:05:36Z" level=error msg="Error while reading from the Firehose: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused" source="firehose_nozzle.go:111"
time="2018-03-29T07:06:36Z" level=error msg="Error getting oauth token: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:06:36Z" level=error msg="Error while reading from the Firehose: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused" source="firehose_nozzle.go:111"
time="2018-03-29T07:07:36Z" level=error msg="Error getting oauth token: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:07:36Z" level=error msg="Error while reading from the Firehose: Post https://uaa.mydomain.com/oauth/token: dial tcp <my.uaa.ip>:443: getsockopt: connection refused" source="firehose_nozzle.go:111"
time="2018-03-29T07:08:36Z" level=error msg="Error getting oauth token: Received a status code 503 Service Unavailable. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:08:36Z" level=error msg="Error while reading from the Firehose: Received a status code 503 Service Unavailable" source="firehose_nozzle.go:111"
time="2018-03-29T07:09:36Z" level=error msg="Error getting oauth token: Received a status code 503 Service Unavailable. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:09:36Z" level=error msg="Error while reading from the Firehose: Received a status code 503 Service Unavailable" source="firehose_nozzle.go:111"
time="2018-03-29T07:10:36Z" level=error msg="Error getting oauth token: Received a status code 503 Service Unavailable. Please check your Client ID and Secret." source="uaa_token_refresher.go:39"
time="2018-03-29T07:10:36Z" level=error msg="Error while reading from the Firehose: Received a status code 503 Service Unavailable" source="firehose_nozzle.go:111"

When I read the source code, it seems that the process shutdown should proceed after a few retries, but the shutdown did not actually proceed after three retries.

After restarting firehose_exporter with monit, the token issue has been resolved and is no longer reproducible.

Apart from the root cause of the UAA token issuance failure, I suspect that the shutdown process is not executed after the number of retries is exceeded. What do you think?

An error has occurred during metrics collection

Dear @frodenas,

when scraping data from our PCF platform, we are getting the following error:

An error has occurred during metrics collection:

16 error(s) occurred:
* collected metric firehose_value_metric_p_rabbitmq_p_rabbitmq_rabbitmq_queues_0_d_8748_f_6_8_c_33_4_c_51_99_ed_1_f_78_c_077761_e_deal_duplication_exchange_insurable_interest_service_depth label:<name:"bosh_deployment" value:"cf-rabbitmq" > label:<name:"bosh_job_id" value:"0763273a-49d1-4615-8ce0-0e9cca4dbdcb" > label:<name:"bosh_job_ip" value:"IP_ADDRESS" > label:<name:"bosh_job_name" value:"rabbitmq-server" > label:<name:"environment" value:"" > label:<name:"origin" value:"p-rabbitmq" > label:<name:"unit" value:"count" > gauge:<value:0 >  has help "Cloud Foundry Firehose '/p-rabbitmq/rabbitmq/queues/0d8748f6-8c33-4c51-99ed-1f78c077761e/DealDuplicationExchange.insurable-interest-service/depth' value metric from 'p-rabbitmq'." but should have "Cloud Foundry Firehose '/p-rabbitmq/rabbitmq/queues/0d8748f6-8c33-4c51-99ed-1f78c077761e/DealDuplicationExchange.insurableInterestService/depth' value metric from 'p-rabbitmq'."

...

Pivotal Elastic Runtime version: v1.11.12
Firehose Exporter version: v4.2.7

The scraping was working fine before; the errors started appearing all of a sudden. Do you have any idea why this is happening and how to resolve it?

Thanks!

With kind regards,
Robert

Multiple instances of firehose exporter on different foundations suddenly stopped exposing metrics

I have this exporter deployed on several PCF foundations, and all but one of them suddenly stopped displaying firehose metrics at the same time - exactly 30 days after first deploying the app. There were no errors in the app's logs. The exporters were still running and showing the Go metrics, just not the firehose metrics. Upon restart, all the metrics started working again on all of the exporters.

My gut feeling is that it was some kind of token expiration issue.

quota metrics always expose zero value

In our deployment we found that quota metrics (e.g. firehose_exporter_container_metric_disk_bytes_quota) always seem to yield a value of zero. All other metrics seem to work fine.
In Cloud Foundry we see reasonable data for quota usage, so could it be that something gets lost in the firehose exporter?

Some metrics are missing

Hi,
I am trying to monitor CF with prometheus-boshrelease.
Currently I have set up the cf_exporter and the firehose_exporter.

Most metrics are collected by this exporter, but some metrics do not show up in the Prometheus DB.

Ex)
firehose_counter_event_gorouter_bad_gateways_delta
firehose_counter_event_gorouter_bad_gateways_total
firehose_counter_event_gorouter_rejected_requests_delta
firehose_counter_event_gorouter_rejected_requests_total
firehose_counter_event_bbs_* metrics are empty, except for two: firehose_counter_event_bbs_request_count_delta and firehose_counter_event_bbs_request_count_total

ENV)
cf : 238
diego : 0.1476.0

cf_exporter, version 0.4.3 (branch: master, revision: 9e37d9069bbb87d739d2c326981fed917ec016e4)
build user: root@1d21624
build date: 20170216-02:38:17
go version: go1.7.5

firehose_exporter, version 4.1.0 (branch: master, revision: 95333ea)
build user: root@387dfcd
build date: 20170216-01:18:51
go version: go1.7.5

HttpStartStop filter

The currently supported filters do not yet include HttpStartStop. While I understand from #1 (comment) that this exporter is designed for a platform operator to collect platform metrics, I believe exporting a subset of the HttpStartStop events would be useful:

An operator can currently use the cf apps metrics dashboard to check the metrics for a given app, e.g. to identify a "greedy app" that is using more than its fair share of CPU cycles.

It seems useful to enrich such per app dashboard for a cf operator platform with data from the HttpStartStop event:

  • routing traffic request rate (nb requests per second)
  • routing traffic latency (stopTimestamp - startTimestamp )
  • routing traffic bandwidth (only contentLength available, i.e. size of response, request size not yet available)
  • routing traffic distribution on app instances (instanceIndex)
  • routing traffic error rate (http statusCode)

The associated use-cases for a CF platform operators would be:

  • drilldown on some aggregated routing metrics when the routing layer is overloaded:

    latency Time in milliseconds that the Gorouter took to handle requests to its application endpoints. Emitted per router request.
    routed_app_requests The collector sums up requests for all dea-{index} components for its output metrics. Emitted every 5 seconds.

  • correlate Diego CPU overprovisioning with application-reported impacts, such as slow app performance (typically measured via the latency of responses to incoming HTTP requests)
  • troubleshoot unbalanced gorouter load balancing among app instances of a given app (when routing tables are stale/not fresh)
  • correlate application-reported errors (502 bad gateway status) with diego health management metrics/events

A last use-case I'm studying is whether exporting HttpStartStop data could be used within the cloudfoundry-community/autosleep service to help get a more reliable measure of the date of the last HTTP traffic for a given application. This date is used as an indicator of application inactivity and triggers automatic stopping of the enrolled applications. Autosleep is currently using the recentlogs doppler endpoint. However, this has some precision/performance limitations (cloudfoundry-community/autosleep#187) that Prometheus could help solve (cloudfoundry-community/autosleep#264 (comment)) if part of the HttpStartStop events were exported.

Delay in metrics appearing in exporter

Hi,

I have noticed that there is sometimes a delay between when metrics appear on the Firehose (running "cf nozzle") and when they are returned by the firehose exporter. Is there a reason for this delay? It's most noticeable when the exporter is first started, because the metrics are totally missing until a while afterwards.

For example, the rep CapacityTotalMemory metric is emitted every 60 seconds by the Firehose, but there is usually a delay of more than 60 seconds until the metric is shown by the exporter.

What is the reason for this?

Prometheus throws NaN with firehose_exporter

Hi There,

I have deployed firehose_exporter (4 instances) in PCF and am using a standalone Prometheus to scrape them.
The majority of the metrics are good.
However, I noticed that some of the metrics (approx. 10%) are showing 'NaN' in Prometheus, as seen below:
[ 1538988656.294, "NaN" ] [ 1538988656.548, "1" ] [ 1538988657.559, "NaN" ]
Kindly assist.

Regards,
Vish.

Invalid label value

Every couple of days, firehose exporter stops responding due to a weird error, see below:

invalid label value "\x00\x00\x00\x01\xa8\n\xa5\x03\n\x03re"

This started happening after the last PCF upgrade. I assume the Pivotal team made some changes to the way metrics are formatted within the Loggregator system. Or is this due to something else?

The current version of PCF is v1.10.6.0

Thanks!

Firehose dies during a Loadbalancer failover

Hey *

Apparently firehose_exporter silently dies when there is a LoadBalancer failover.

We did a LoadBalancer failover test (using F5 BigIP) and instantly all our firehose_exporter instances died silently (all of them; there are 4 or 5 running on various systems).

-> The F5 is in front of the gorouter, which is used to reach Doppler.

What happened:

The MAC Address of the LoadBalancer changed, the VIP stayed the same and the ARP change was properly announced.

The processes were still reported as running by monit. We’ve restarted all of them without debugging any further (sorry :) ).

It would be cool if

  1. The process would simply die if something like this happens and monit will then restart it.
  2. The firehose_exporter is able to survive the failover.

cheers

Firehose exporter stopped exporting metrics

We deployed firehose-exporter v5.0.3 on 7 foundations, and suddenly it stopped exporting metrics even though the app itself was up and running. We faced this issue on all foundations, but not in any fixed pattern.
While debugging I found huge CPU and memory consumption before it stopped exporting.
If I restart the firehose app, it starts exporting metrics again.

[Screenshot of CPU/memory usage.]

Could you please help me in troubleshooting?

firehose_exporter internal server error 500 after update to Pivotal RabbitMQ Service 1.8.*

Reported by @Martin612 at cloudfoundry/prometheus-boshrelease#66:

Hello,

I'm using the prometheus bosh release in an Pivotal Cloud Foundry environment.

After updating the rabbitmq service tile to 1.8.* I am experiencing an issue with the firehose_exporter.

The update added an additional service broker for dedicated RabbitMQ services.
This additional service broker produces the following issue in the firehose_exporter:

  • collected metric firehose_value_metric_p_rabbitmq_log_sender_total_messages_read label:<name:"bosh_deployment" value:"cf-rabbitmq" > label:<name:"bosh_job_id" value:"8daeceac-05cf-4094-a1ab-d9355dac1584" > label:<name:"bosh_job_ip" value:"192.168.17.15" > label:<name:"bosh_job_name" value:"rabbitmq-broker" > label:<name:"environment" value:"P" > label:<name:"origin" value:"p-rabbitmq" > label:<name:"unit" value:"count" > gauge:<value:0 > has help "Cloud Foundry Firehose 'logSenderTotalMessagesRead' value metric from 'p-rabbitmq'." but should have "Cloud Foundry Firehose 'logSenderTotalMessagesRead' value metric from 'p.rabbitmq'."

Turning off the logs from the dedicated RabbitMQ service broker fixes this issue temporarily.

Can you please update the firehose_exporter to consume logs from both the shared and the dedicated RabbitMQ service brokers?

Counter Event Labels Have Double Underscore

I upgraded to 6.2.0 in my sandbox environment and found that, with the recent change to counter events, my counter event metric names have a double underscore between "counter_event" and the counter name. I am seeing this on both _total and _delta counters. See the example below:

firehose_counter_event__failed_scrapes_total{bosh_deployment="cf-xxxxxxxxxx",bosh_job_id="xxxxxxx",bosh_job_ip="xx.xx.xx.xx",bosh_job_name="diego_cell",environment="pcf",instance_id="xxxxxxxx",origin="",product="VMware Tanzu Application Service",scrape_source_id="syslog_agent",source_id="metrics-agent",system_domain="system.pcf.domain.local"} 0

This is on VMware TAS 2.10.9

issues while pushing firehose_exporter

Hi
I've cloned the app & updated the manifest.yml as below

  - name: firehose-exporter
    buildpack: go_buildpack
    env:
      FIREHOSE_EXPORTER_UAA_URL: "<URL>"
      FIREHOSE_EXPORTER_UAA_CLIENT_ID: "<CLIENT_ID>"
      FIREHOSE_EXPORTER_UAA_CLIENT_SECRET: "<CLIENT_SEC>"
      FIREHOSE_EXPORTER_DOPPLER_URL: "<url>"

and tried to push the app with cf push. I get this error:

       **ERROR** To use go native vendoring set the $GOPACKAGENAME
       environment variable to your app's package name
       **ERROR** Unable to determine import path: GOPACKAGENAME unset

I've updated the manifest file with the env variable entry GOPACKAGENAME: firehose and tried pushing again. This time I got a different error.

**WARNING** Installing package '.' (default)
-----> Running: go install -tags cloudfoundry -buildmode pie .
firehose_exporter.go:13:2: cannot find package "github.com/bosh-prometheus/firehose_exporter/collectors" in any of:
        /tmp/gobuildpack.gopath011202497/.go/src/firehose/vendor/github.com/bosh-prometheus/firehose_exporter/collectors (vendor tree)
        /tmp/contents418976151/deps/0/go1.8.3/go/src/github.com/bosh-prometheus/firehose_exporter/collectors (from $GOROOT)
        /tmp/gobuildpack.gopath011202497/.go/src/github.com/bosh-prometheus/firehose_exporter/collectors (from $GOPATH)
firehose_exporter.go:14:2: cannot find package "github.com/bosh-prometheus/firehose_exporter/filters" in any of:
        /tmp/gobuildpack.gopath011202497/.go/src/firehose/vendor/github.com/bosh-prometheus/firehose_exporter/filters (vendor tree)
        /tmp/contents418976151/deps/0/go1.8.3/go/src/github.com/bosh-prometheus/firehose_exporter/filters (from $GOROOT)
        /tmp/gobuildpack.gopath011202497/.go/src/github.com/bosh-prometheus/firehose_exporter/filters (from $GOPATH)
firehose_exporter.go:15:2: cannot find package "github.com/bosh-prometheus/firehose_exporter/firehosenozzle" in any of:
        /tmp/gobuildpack.gopath011202497/.go/src/firehose/vendor/github.com/bosh-prometheus/firehose_exporter/firehosenozzle (vendor tree)
        /tmp/contents418976151/deps/0/go1.8.3/go/src/github.com/bosh-prometheus/firehose_exporter/firehosenozzle (from $GOROOT)
        /tmp/gobuildpack.gopath011202497/.go/src/github.com/bosh-prometheus/firehose_exporter/firehosenozzle (from $GOPATH)
firehose_exporter.go:16:2: cannot find package "github.com/bosh-prometheus/firehose_exporter/metrics" in any of:
        /tmp/gobuildpack.gopath011202497/.go/src/firehose/vendor/github.com/bosh-prometheus/firehose_exporter/metrics (vendor tree)
        /tmp/contents418976151/deps/0/go1.8.3/go/src/github.com/bosh-prometheus/firehose_exporter/metrics (from $GOROOT)
        /tmp/gobuildpack.gopath011202497/.go/src/github.com/bosh-prometheus/firehose_exporter/metrics (from $GOPATH)
firehose_exporter.go:17:2: cannot find package "github.com/bosh-prometheus/firehose_exporter/uaatokenrefresher" in any of:
        /tmp/gobuildpack.gopath011202497/.go/src/firehose/vendor/github.com/bosh-prometheus/firehose_exporter/uaatokenrefresher (vendor tree)
        /tmp/contents418976151/deps/0/go1.8.3/go/src/github.com/bosh-prometheus/firehose_exporter/uaatokenrefresher (from $GOROOT)
        /tmp/gobuildpack.gopath011202497/.go/src/github.com/bosh-prometheus/firehose_exporter/uaatokenrefresher (from $GOPATH)
       **ERROR** Unable to compile application: exit status 1
Failed to compile droplet: Failed to run finalize script: exit status 12
Exit status 223
Staging failed: STG: Exited with status 223
Stopping instance f410bee9-d644-408c-adf5-117702d74412
Destroying container
Successfully destroyed container

Could someone please let us know what's going on and how we can fix it?

Thanks in advance.

index field from firehose exported as bosh_index instead of bosh_job_id

Hi,

For an event like that:
origin:"gorouter" eventType:ValueMetric job:"router" index:"b68ad2dc-1755-4588-a2bb-e261ffa251dd"

this exporter exports the index field as bosh_index, while bosh_exporter would export this value as bosh_job_id. This discrepancy prevents joins in Prometheus queries later. Relabelling would be a workaround, but I don't think that's necessary - we could just change this exporter to export this field as bosh_job_id instead. Do you agree? I can prepare a PR if you are OK with this change. Thanks.
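
For reference, the relabelling workaround would look roughly like this under the firehose exporter scrape job in Prometheus (a sketch; it copies a non-empty bosh_index into bosh_job_id):

metric_relabel_configs:
  # copy the bosh_index value into bosh_job_id when it is present
  - source_labels: [bosh_index]
    regex: '(.+)'
    target_label: bosh_job_id
    replacement: '$1'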

p-mysql Scrape Bug

Not sure if this is a duplicate of the already open issue (#20), but I attempted to install this exporter on PCF v1.11. When I navigate to http://:9186/metrics, I get this message:

87 error(s) occurred: (I just included the first message)

  • collected metric firehose_value_metric_p_mysql_p_mysql_performance_table_locks_waited label:<name:"bosh_deployment" value:"service-instance_effb449c-c681-48c8-b816-e7dd4dec7796" > label:<name:"bosh_job_id" value:"452326b5-3899-4445-96dd-7f537f5fdac8" > label:<name:"bosh_job_ip" value:"" > label:<name:"bosh_job_name" value:"" > label:<name:"environment" value:"cf" > label:<name:"origin" value:"p.mysql" > label:<name:"unit" value:"number" > gauge:<value:0 > has help "Cloud Foundry Firehose '/p.mysql/performance/table_locks_waited' value metric from 'p-mysql'." but should have "Cloud Foundry Firehose '/p-mysql/performance/table_locks_waited' value metric from 'p-mysql'."

I believe this prevents the scrape from happening. Asking around on the Prometheus IRC got this response:

"for the Go client all the ConstMetrics need to have the same help string", if that is of any help.

The process fails to export the Firehose tag key "id"

This issue is part of issue #42.
It has been partially solved, but we are still missing one tag: the id tag.
For instance, below is a sample of Firehose output (obtained with cf nozzle):
origin:"a8f13731-bf3c-425a-8973-3c5d64d1da12" eventType:ValueMetric timestamp:1567780197799835512 deployment:"cf-831082f4cc611c010e5c" job:"doppler" index:"28bb8002-0c1b-40ae-a82e-4b302950611f" ip:"172.21.220.57" tags:<key:"applicationGuid" value:"a8f13731-bf3c-425a-8973-3c5d64d1da12" > tags:<key:"applicationInstance" value:"0" > tags:<key:"area" value:"nonheap" > tags:<key:"id" value:"Metaspace" > tags:<key:"instance_id" value:"0" > tags:<key:"product" value:"Pivotal Application Service" > tags:<key:"source_id" value:"a8f13731-bf3c-425a-8973-3c5d64d1da12" > tags:<key:"system_domain" value:"sys-ta-ams.af-klm.com" > valueMetric:<name:"jvm_memory_used_bytes" value:8.2426976e+07 unit:"" >
and the corresponding firehose-exporter line is:
firehose_value_metric_a_8_f_13731_bf_3_c_425_a_8973_3_c_5_d_64_d_1_da_12_jvm_memory_used_bytes{ application_guid="a8f13731-bf3c-425a-8973-3c5d64d1da12", application_instance="1", area="nonheap", bosh_deployment="cf-831082f4cc611c010e5c", bosh_job_id="d11159c1-aefd-4ca7-a4fd-3e552828bfe0", bosh_job_ip="172.21.220.56", bosh_job_name="doppler", environment="AMS-NONPROD", instance_id="1", origin="a8f13731-bf3c-425a-8973-3c5d64d1da12", product="Pivotal Application Service", source_id="a8f13731-bf3c-425a-8973-3c5d64d1da12", system_domain="sys-ta-ams.af-klm.com",unit=""} 1.046204e+07

We have all the tags except the id one (with the value Metaspace), which is missing.
This tag is needed to differentiate the parts of memory for Java applications (Metaspace, Survivor, Tenured Gen).
Thanks for taking a look.

Why is firehose_http_start_stop_requests not a counter?

Hi,

As you've seen in the prometheus-boshrelease issues, I'm currently trying to get the apps' requests-per-second metrics right.

In short, I propose to change this prometheus query (used in a Grafana dashboard) from this:

avg(rate(firehose_http_start_stop_requests{environment=~\"$environment\",bosh_deployment=~\"$bosh_deployment\",application_id=~\"$cf_application_id\"}[5m]))

to this:

sum(firehose_http_start_stop_requests{environment=~\"$environment\",bosh_deployment=~\"$bosh_deployment\",application_id=~\"$cf_application_id\"})/300

This already brings the graph closer to reality. However, I kept thinking about this firehose_http_start_stop_requests metric coming from firehose_exporter. There is just something I don't understand.

The FAQ says this:
The exporter summarizes application related HTTP requests from a sliding window (doppler.metric-expiration command flag) from the Cloud Foundry Firehose and emits ....

I suppose that with more knowledge about the CF Firehose behaviour this sentence could make sense, but for people like myself (and I'm sure others) it's hard to understand what it means. I can explain what I see happening, though: when I send 1 request to an app, I see the firehose_http_start_stop_requests metric increase by 1 for this particular app. So far so good. But then, after the doppler.metric-expiration time, the metric decreases by 1.

What I'd like to understand is why this behaviour is implemented like this, because in my mind these http_start_stop_requests events would be a typical use case for a counter metric. That would make querying the metric in Prometheus a lot more straightforward.

Because, to be honest, I see the above sum(firehose_http_start_stop_requests{label=value})/300 solution more as a workaround.

Would you care to explain?

firehose_value_metric_metrics_forwarder_http_server* spams a huge number of different metric names, which kills my Prometheus

How can I filter those metrics so that they are not received by Prometheus? (See the sketch after the metric list below.)

My firehose_exporter config:

    exec firehose_exporter \
      --uaa.url="https://uaa.com" \
      --uaa.client-id="prometheus" \
      --uaa.client-secret="key" \
      --doppler.subscription-id="prometheus" \
      --logging.url="wss://doppler.com" \
      --logging.use-legacy-firehose \
      --metrics.environment="prod" \
      --skip-ssl-verify \
      --web.listen-address=":9186" \
      >>  ${LOG_DIR}/firehose_exporter.stdout.log \
      2>> ${LOG_DIR}/firehose_exporter.stderr.log
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499944544 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499944601 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499945233 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499945498 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499945667 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499946338 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499946608 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499948612 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499949104 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499949511 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499949709 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499950117 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499950127 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499951301 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499951488 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499952307 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499952861 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499954156 
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_499954482 
...
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482966810_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482966811_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482966812_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482966813_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482966814_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482966817_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482966818_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482966820_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482966821_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482967528_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482967531_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482967533_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482967535_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482967536_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482967537_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482967538_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482967539_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482967540_false
firehose_value_metric_metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment_attachment_1482967541_false
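A rough way to gauge how many of these per-request series the exporter is currently exposing is to count them directly on the exporter endpoint (a quick diagnostic sketch, assuming the default listen address and telemetry path):

$ curl -s http://localhost:9186/metrics | grep -c 'metrics_forwarder_http_server_request_count_get_200_api_v_1_attachment'

The numeric and attachment_..._false suffixes suggest the request path is being folded into the metric name, so every new request produces a brand-new series that lingers until it expires per metrics.expiration.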

Firehose Exporter returns inconsistent number of values.

Hi, I am using the firehose exporter to gather information about capacity metrics in a PCF foundation. However, I am getting an inconsistent number of values returned in queries, and I am not sure whether this is something with the exporter or with how I have set it up.
[Screenshot (2018-06-13): Prometheus query results showing the varying number of returned values]
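One way to narrow down whether the fluctuation comes from the exporter or from the query side is to hit the exporter endpoint directly a few times and count the series it exposes (a rough sketch, assuming the default listen address and telemetry path):

$ curl -s http://localhost:9186/metrics | grep -c '^firehose_'

If this count itself swings between runs, the variation originates in the exporter (for example, metrics expiring per metrics.expiration); if it stays stable, the inconsistency is more likely in how the data is queried.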

Application CounterEvent metrics are not split across application instances

We are using firehose-exporter as a single endpoint to scrape all application and system metrics from the firehose in one place. Unfortunately, for application metrics we can only see the ValueMetric series for individual instances. For CounterEvent we get a single metric, which does not even represent the aggregation of all the instances of a particular application.
In the firehose we do see them (cf nozzle -f CounterEvent), so as a firehose proxy I would expect firehose-exporter to expose the same metrics.
Please let us know whether this should be treated as an improvement or as an issue.
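For reference, this is how we check what the exporter currently exposes for counter events (assuming counter events are published under a firehose_counter_event prefix and the default listen address; adjust the pattern to whatever your deployment actually shows):

$ curl -s http://localhost:9186/metrics | grep 'firehose_counter_event'

In our output the ValueMetric series appear per application instance, while the CounterEvent series appear only once.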

getting org-level and space-level metrics

Is it possible to also get high-level metrics at the org or space level, e.g. the number of spaces per org, the number of applications per org/space, and the number of running/stopped/crashed applications - basically what one gets in the PCF UI, but available as metrics in Prometheus?

We had a look but could not find such metrics. If they are not there yet, is this on the roadmap somewhere, and would others be interested in this?

BTW, if this looks interesting, we can look into adding this ...

with upgrade to 6.0.0 no firehose metrics are shown

I recently updated firehose-exporter from 5.4.0 to 6.0.0.
I replaced doppler.url with logging.url.
I could not see any error message in the logs:

INFO[0000] Starting firehose_exporter (version=6.0.0, branch=master, revision=6da09b4b4c415cbd09d227efc0e0182ad77dbdb1)  source="firehose_exporter.go:176"
INFO[0000] Build context (go=go1.12.4, user=root@0362776c0d4a, date=20190422-04:28:13)  source="firehose_exporter.go:177"
INFO[0000] Listening on :9186                            source="firehose_exporter.go:234"
INFO[0000] Starting Firehose Nozzle...                   source="logstream.go:47"

but all the metrics are stuck at value 0, e.g.:

firehose_last_container_metric_received_timestamp{environment="trial"} 0
firehose_last_counter_event_received_timestamp{environment="trial"} 0
firehose_last_envelope_received_timestamp{environment="trial"} 0
firehose_last_http_start_stop_received_timestamp{environment="trial"} 0
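
For completeness, this is roughly how I am starting the exporter after the upgrade, using the documented environment variables (the log stream URL and certificate paths below are placeholders for my real values):

$ export FIREHOSE_EXPORTER_LOGGING_URL="<cloud foundry log stream url>"    # placeholder
$ export FIREHOSE_EXPORTER_LOGGING_TLS_CERT="/path/to/rlp.crt"             # placeholder
$ export FIREHOSE_EXPORTER_LOGGING_TLS_KEY="/path/to/rlp.key"              # placeholder
$ export FIREHOSE_EXPORTER_METRICS_ENVIRONMENT="trial"
$ ./firehose_exporter

The exporter starts and listens on :9186 as shown in the log above, but nothing ever seems to arrive from the log stream.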

value metric origin difficult to use

I've found that value metrics like firehose_value_metric_ded_d_e_e_5_d_deb_4_4_f_92_85_f_2_2_fe_3_ae_4_b_6_e_23_jvm_memory_committed_bytes
are difficult to query in Prometheus. I was wondering if there is a setting to omit the origin GUID, or if anyone has suggestions on how to make querying easier without needing to know the whole application GUID.
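
Right now the only workaround I have is to locate the full metric name by grepping the exporter endpoint first (assuming the default listen address and telemetry path):

$ curl -s http://localhost:9186/metrics | grep 'jvm_memory_committed_bytes'

and then pasting the resulting name, GUID and all, into the Prometheus query.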

Thanks,

Zac

with 6.1.0, firehose_http_start_stop_server_request_duration_seconds is missing

Our dashboard relies on the following metric:

firehose_http_start_stop_server_request_duration_seconds{..host="route.abc.com",..,method="GET",quantile="0.5",scheme="http"} 0.004982986

This metric is missing in 6.1.0, but still available in 6.0.0.

BTW, firehose_http_start_stop_server_request_duration_seconds_count is still present in 6.1.0.
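
A quick way to confirm what each version actually exposes is to grep the exporter endpoint directly (a sketch assuming the default listen address; run it against both a 6.0.0 and a 6.1.0 deployment):

$ curl -s http://localhost:9186/metrics | grep 'firehose_http_start_stop_server_request_duration_seconds'

On 6.0.0 this returns the quantile series shown above; on 6.1.0 the quantile series are gone, while the _count series is still there.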

firehose_exporter crashes with fatal error: concurrent map iteration and map write

We are running 3 PCF deployments, each of them monitored by the Prometheus BOSH release (19.0.0).
On each of these environments we see, every few days, that the firehose_exporter stops providing data. Checking the stderr log of the firehose_exporter job then shows stack traces and a
"fatal error: concurrent map iteration and map write."
I cannot really find any correlation with things happening in these environments; the crashes look completely random.

The problem is very similar to #17 .

I will attach the complete stderr log (the stdout log is empty).
Hopefully you can help find the cause and a solution.

thanks,
Harry
firehose_exporter.stderr.log

Firehose exporter crashes due to `concurrent map iteration and map write`

I've deployed this against my Cloud Foundry instance using the latest cfcommunity/firehose-exporter Docker image. I can see data coming through in Grafana; however, after a few hours it tends to crash with the following stack trace:

time="2017-07-11T08:32:03Z" level=info msg="Starting firehose_exporter (version=4.2.1, branch=HEAD, revision=97f51ecad6d51ecd5279aa263ac462c0ab0c78f1)" source="firehose_exporter.go:250"
time="2017-07-11T08:32:03Z" level=info msg="Build context (go=go1.8.3, user=root@9c6c91fc16d7, date=20170707-09:27:39)" source="firehose_exporter.go:251" 
time="2017-07-11T08:32:03Z" level=info msg="Starting Firehose Nozzle..." source="firehose_nozzle.go:58" 
time="2017-07-11T08:32:03Z" level=info msg="Listening on :9186" source="firehose_exporter.go:327" 
fatal error: concurrent map iteration and map write

goroutine 12120 [running]:
runtime.throw(0x8f7a6a, 0x26)
        /usr/local/go/src/runtime/panic.go:596 +0x95 fp=0xc4204e6ca0 sp=0xc4204e6c80
runtime.mapiternext(0xc4204e6dc8)
        /usr/local/go/src/runtime/hashmap.go:737 +0x7ee fp=0xc4204e6d50 sp=0xc4204e6ca0
github.com/cloudfoundry-community/firehose_exporter/metrics.(*Store).GetValueMetrics(0xc42009eb90, 0x404300, 0xc420415cc8, 0xc420415cd0)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/metrics/store.go:214 +0x130 fp=0xc4204e6e38 sp=0xc4204e6d50
github.com/cloudfoundry-community/firehose_exporter/collectors.ValueMetricsCollector.Collect(0x8e82c2, 0x8, 0x0, 0x0, 0xc42009eb90, 0xc420143110, 0xc420789e60)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/collectors/value_metrics_collector.go:40 +0x43 fp=0xc4204e6f50 sp=0xc4204e6e38
github.com/cloudfoundry-community/firehose_exporter/collectors.(*ValueMetricsCollector).Collect(0xc420164a50, 0xc420789e60)
        <autogenerated>:9 +0x66 fp=0xc4204e6f98 sp=0xc4204e6f50
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func2(0xc4202df330, 0xc420789e60, 0xb0c240, 0xc420164a50)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:382 +0x61 fp=0xc4204e6fc0 sp=0xc4204e6f98
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc4204e6fc8 sp=0xc4204e6fc0
created by github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:383 +0x2ec

goroutine 1 [IO wait]:
net.runtime_pollWait(0x7f0de74cc168, 0x72, 0x0)
        /usr/local/go/src/runtime/netpoll.go:164 +0x59
net.(*pollDesc).wait(0xc420143418, 0x72, 0x0, 0xc420201a00)
        /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
net.(*pollDesc).waitRead(0xc420143418, 0xffffffffffffffff, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
net.(*netFD).accept(0xc4201433b0, 0x0, 0xb09ec0, 0xc420201a00)
        /usr/local/go/src/net/fd_unix.go:430 +0x1e5
net.(*TCPListener).accept(0xc42007e300, 0xc42017bac0, 0x859d40, 0xffffffffffffffff)
        /usr/local/go/src/net/tcpsock_posix.go:136 +0x2e
net.(*TCPListener).AcceptTCP(0xc42007e300, 0xc420049be8, 0xc420049bf0, 0xc420049be0)
        /usr/local/go/src/net/tcpsock.go:215 +0x49
net/http.tcpKeepAliveListener.Accept(0xc42007e300, 0x904640, 0xc42017ba40, 0xb0fd80, 0xc4201655f0)
        /usr/local/go/src/net/http/server.go:3044 +0x2f
net/http.(*Server).Serve(0xc4200bb4a0, 0xb0f700, 0xc42007e300, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:2643 +0x228
net/http.(*Server).ListenAndServe(0xc4200bb4a0, 0xc4200bb4a0, 0x2)
        /usr/local/go/src/net/http/server.go:2585 +0xb0
net/http.ListenAndServe(0x8e707f, 0x5, 0x0, 0x0, 0xc42009eb90, 0xc420164a50)
        /usr/local/go/src/net/http/server.go:2787 +0x7f
main.main()
        /go/src/github.com/cloudfoundry-community/firehose_exporter/firehose_exporter.go:328 +0xae8

goroutine 21 [select, 1 minutes]:
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache.(*janitor).Run(0xc42013e2f0, 0xc42013c7c0)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache/cache.go:1037 +0x171
created by github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache.runJanitor
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache/cache.go:1056 +0x8d

goroutine 22 [select, 1 minutes]:
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache.(*janitor).Run(0xc42013e300, 0xc42013c800)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache/cache.go:1037 +0x171
created by github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache.runJanitor
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache/cache.go:1056 +0x8d

goroutine 23 [select, 1 minutes]:
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache.(*janitor).Run(0xc42013e310, 0xc42013c840)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache/cache.go:1037 +0x171
created by github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache.runJanitor
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache/cache.go:1056 +0x8d

goroutine 24 [select, 1 minutes]:
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache.(*janitor).Run(0xc42013e320, 0xc42013c880)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache/cache.go:1037 +0x171
created by github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache.runJanitor
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache/cache.go:1056 +0x8d

goroutine 25 [select, 1 minutes]:
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache.(*janitor).Run(0xc42013e330, 0xc42013c8c0)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache/cache.go:1037 +0x171
created by github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache.runJanitor
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/patrickmn/go-cache/cache.go:1056 +0x8d

goroutine 26 [select]:
github.com/cloudfoundry-community/firehose_exporter/firehosenozzle.(*FirehoseNozzle).parseEnvelopes(0xc420084900, 0x1, 0x1)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/firehosenozzle/firehose_nozzle.go:90 +0x1ad
github.com/cloudfoundry-community/firehose_exporter/firehosenozzle.(*FirehoseNozzle).Start(0xc420084900, 0x0, 0x0)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/firehosenozzle/firehose_nozzle.go:60 +0xb8
main.main.func1(0xc420084900)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/firehose_exporter.go:294 +0x2b
created by main.main
        /go/src/github.com/cloudfoundry-community/firehose_exporter/firehose_exporter.go:295 +0x5f0

goroutine 33 [runnable]:
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer.(*Consumer).firehose.func1(0xc42041b7a0)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer/async.go:232 +0x4a
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer.(*Consumer).listenForMessages(0xc420172240, 0xc420148120, 0xc420150130, 0x0, 0x0)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer/async.go:280 +0x247
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer.(*Consumer).listenAction.func1(0xc420172240, 0xc420148160, 0xc420172240)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer/async.go:294 +0x11d
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer.(*Consumer).retryAction(0xc420172240, 0xc42014a140, 0xc4201442a0)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer/async.go:316 +0x141
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer.(*Consumer).firehose.func2(0xc4201442a0, 0xc420144240, 0xc4201700c0, 0xc420172240, 0xc420148120, 0xc420150130)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer/async.go:240 +0x177
created by github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer.(*Consumer).firehose
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/cloudfoundry/noaa/consumer/async.go:245 +0x10a
goroutine 12107 [chan receive]:
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather(0xc4200a8480, 0x0, 0x0, 0x0, 0x0, 0x0)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:404 +0x4fe
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.UninstrumentedHandler.func1(0xb0edc0, 0xc42007e660, 0xc4200ec300)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus/http.go:76 +0x47
net/http.HandlerFunc.ServeHTTP(0x904418, 0xb0edc0, 0xc42007e660, 0xc4200ec300)
        /usr/local/go/src/net/http/server.go:1942 +0x44
net/http.(Handler).ServeHTTP-fm(0xb0edc0, 0xc42007e660, 0xc4200ec300)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/firehose_exporter.go:232 +0x4d
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.InstrumentHandlerFuncWithOpts.func1(0xb0f300, 0xc420102000, 0xc4200ec300)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus/http.go:307 +0x224
net/http.HandlerFunc.ServeHTTP(0xc42009f2c0, 0xb0f300, 0xc420102000, 0xc4200ec300)
        /usr/local/go/src/net/http/server.go:1942 +0x44
net/http.(*ServeMux).ServeHTTP(0xb423e0, 0xb0f300, 0xc420102000, 0xc4200ec300)
        /usr/local/go/src/net/http/server.go:2238 +0x130
net/http.serverHandler.ServeHTTP(0xc4200bb4a0, 0xb0f300, 0xc420102000, 0xc4200ec300)
        /usr/local/go/src/net/http/server.go:2568 +0x92
net/http.(*conn).serve(0xc42017ba40, 0xb0fcc0, 0xc420272300)
        /usr/local/go/src/net/http/server.go:1825 +0x612
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:2668 +0x2ce

goroutine 12108 [IO wait]:
net.runtime_pollWait(0x7f0de74cbf28, 0x72, 0x7)
        /usr/local/go/src/runtime/netpoll.go:164 +0x59
net.(*pollDesc).wait(0xc4205d3798, 0x72, 0xb0b380, 0xb073b0)
        /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
net.(*pollDesc).waitRead(0xc4205d3798, 0xc420272351, 0x1)
        /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
net.(*netFD).Read(0xc4205d3730, 0xc420272351, 0x1, 0x1, 0x0, 0xb0b380, 0xb073b0)
        /usr/local/go/src/net/fd_unix.go:250 +0x1b7
net.(*conn).Read(0xc42007e658, 0xc420272351, 0x1, 0x1, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/net.go:181 +0x70
net/http.(*connReader).backgroundRead(0xc420272340)
        /usr/local/go/src/net/http/server.go:656 +0x58
created by net/http.(*connReader).startBackgroundRead
        /usr/local/go/src/net/http/server.go:652 +0xdf
goroutine 12111 [runnable]:
os.(*File).Read(0xc420604008, 0xc4207d9000, 0x1000, 0x1000, 0x2000, 0x2000, 0x0)
        /usr/local/go/src/os/file.go:97 +0x318
bufio.(*Scanner).Scan(0xc4205b1a00, 0x5)
        /usr/local/go/src/bufio/scan.go:207 +0x294
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/procfs.FS.NewStat(0x8e7070, 0x5, 0x0, 0x0, 0x0)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/procfs/stat.go:35 +0x131
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/procfs.ProcStat.StartTime(0x1, 0xc420662010, 0xf, 0xc420662020, 0x1, 0x0, 0x1, 0x1, 0x0, 0xffffffffffffffff, ...)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/procfs/proc_stat.go:165 +0x3f
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*processCollector).processCollect(0xc42009e320, 0xc420789e60)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus/process_collector.go:128 +0x55e
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*processCollector).(github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.processCollect)-fm(0xc420789e60)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus/process_collector.go:90 +0x34
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*processCollector).Collect(0xc42009e320, 0xc420789e60)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus/process_collector.go:108 +0x34
github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func2(0xc4202df330, 0xc420789e60, 0xb0c500, 0xc42009e320)
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:382 +0x61
created by github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather
        /go/src/github.com/cloudfoundry-community/firehose_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:383 +0x2ec
