lablabs / cloudflare-exporter
Prometheus CloudFlare Exporter
License: Apache License 2.0
Hi!
I am trying to run the image with Docker and I am getting the following error:
time="2021-03-05 21:28:22" level=fatal msg="error from makeRequest: HTTP status 400: content \"{\\\"success\\\":false,\\\"errors\\\":[{\\\"code\\\":6003,\\\"message\\\":\\\"Invalid request headers\\\",\\\"error_chain\\\":[{\\\"code\\\":6103,\\\"message\\\":\\\"Invalid format for X-Auth-Key header\\\"}]}],\\\"messages\\\":[],\\\"result\\\":null}\""
Any idea why this is happening?
The API key seems to be working fine according to Cloudflare:
curl -X GET "https://api.cloudflare.com/client/v4/user/tokens/verify" \
-H "Authorization: Bearer *****************************" \
-H "Content-Type:application/json"
And the output :
{"result":{"id":"*****************************","status":"active"},"success":true,"errors":[],"messages":[{"code":10000,"message":"This API Token is valid and active","type":null}]}
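For reference, Cloudflare's two auth schemes use different headers, and error 6103 typically appears when the value sent in the X-Auth-Key header isn't a legacy API key (for instance when an API token is passed as CF_API_KEY). A minimal illustrative sketch (not the exporter's actual code) of the two header sets:

```python
def cloudflare_auth_headers(token=None, api_key=None, email=None):
    """Build request headers for the two Cloudflare auth schemes.

    API tokens go in an Authorization: Bearer header; legacy API keys
    go in X-Auth-Key together with X-Auth-Email. Sending a token in
    X-Auth-Key is what triggers error 6103 (invalid format).
    """
    if token:
        return {"Authorization": f"Bearer {token}"}
    if api_key and email:
        return {"X-Auth-Key": api_key, "X-Auth-Email": email}
    raise ValueError("need either a token, or an api_key and email")
```

So if a token verifies fine via /user/tokens/verify, it should be passed as CF_API_TOKEN rather than CF_API_KEY.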
Thanks in advance!
Cheers!
Hello,
so far, the exporter is only using httpRequests1mGroups and httpRequests1dByColoGroups, the latter of which is being sunset on March 1, 2021.
I'd like to suggest extending the available metrics by also making use of the following datasets:
firewallEventsAdaptiveByTimeGroups
ipFlows1mAttacksGroups
synAvgPps1mGroups
I would also be happy to implement these myself and open a PR, but wanted to check interest with you first.
Hello team, just sharing one idea in case it is of your interest.
Cloudflare is the front ingress to our services. However, when there are issues between Cloudflare and our ingress (i.e. traffic doesn't even arrive at our services), Cloudflare is the only source of truth for these errors.
So it would be very nice to have the count of requests by status code for 400s and also for 500s per zone, so we can have alerts for these cases.
I hope you find this feature useful. Thanks for the amazing work here.
HTTP status 429: More than 1200 requests per 300 seconds reached. Please wait and consider throttling your request speed
Maybe it would be a good idea to add the possibility of limiting the number of requests per second?
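The 429 above reflects the limit stated in the error message itself: 1200 requests per 300 seconds. A client-side limiter (a hypothetical helper sketched in Python, not an existing exporter option) could use a sliding window over recent request timestamps:

```python
import time

class SlidingWindowThrottle:
    """Allow at most max_calls requests per window seconds.

    Hypothetical sketch: call wait() before each API request; it sleeps
    just long enough for the oldest timestamp to age out of the window.
    """
    def __init__(self, max_calls, window, clock=time.monotonic, sleep=time.sleep):
        self.max_calls, self.window = max_calls, window
        self.clock, self.sleep = clock, sleep
        self.calls = []  # timestamps of recent requests

    def wait(self):
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.max_calls:
            self.sleep(self.window - (now - self.calls[0]))
        self.calls.append(self.clock())
```

For Cloudflare's stated limit, the instance would be `SlidingWindowThrottle(1200, 300)`.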
Hi team!
Thanks for useful exporter.
It would be great to add firewall statistics to the metrics.
Minimally, I would like to see counters of negative events, such as blocks and all types of challenges. And ideally all event types, of course.
Unfortunately, I can't make a pull request.
Thanks.
Well, I would give more detail if there were any to give.
CloudFlare has support for load balancing. Would be great to have support for load-balancer related metrics (pools, origins, health status, etc.).
I use Docker to deploy cloudflare-exporter, Prometheus and Grafana, and it works fine so far. But there are large differences between the data obtained and the Cloudflare Dashboard Analytics data. For example, for 24 hours of traffic bandwidth usage, Cloudflare Dashboard Analytics shows 153.17 TB, but Grafana can only query 120 TB.
It would be great if you could support an official multi-arch image, e.g. arm64.
Use case: Use this image in K8s on Raspberry PI 4
Hello
I currently have 20 domains with CF pro-plan and ~80 with free.
The exporter shows statistics only for domains with the Pro plan and higher.
The only metric I see when FREE_TIER is set to TRUE is cloudflare_worker_cpu_time.
Is this the expected behaviour?
First of all many thanks for this great exporter. Secondly, if possible adding the host label to all the applicable metrics would be very helpful, as we have several hosts under each of our zones and the ability to see the metrics of a specific one is very important to us.
edit: Upon further investigation of Prometheus and its limitations and best practices I see how that can be problematic.
I have installed the exporter via helmfile.
helmfile apply
With content:
repositories:
  - name: "cloudflare-exporter"
    url: "https://lablabs.github.io/cloudflare-exporter"

releases:
  - name: "cloudflare-exporter"
    namespace: "monitoring"
    version: "0.0.3"
    chart: "cloudflare-exporter/cloudflare-exporter"
    wait: true
    values:
      - env:
          - name: CF_API_KEY
            value: R**
          - name: CF_API_EMAIL
            value: ma*
          # Optionally select zones
          # - name: CF_ZONES
          #   value: "<zone_id1>,<zone_id2>,..."
          # DEPRECATED: Optionally, you can filter zones by adding their IDs following the example below.
          # - name: ZONE_XYZ
          #   value: <zone_id>
The pods are crashing due to:
time="2021-10-16 15:39:37" level=info msg="Beginning to serve on port :8080"
time="2021-10-16 15:39:38" level=fatal msg="error from makeRequest: HTTP status 400: content \"{\\\"success\\\":false,\\\"errors\\\":[{\\\"code\\\":6003,\\\"message\\\":\\\"Invalid request headers\\\",\\\"error_chain\\\":[{\\\"code\\\":6103,\\\"message\\\":\\\"Invalid format for X-Auth-Key header\\\"}]}],\\\"messages\\\":[],\\\"result\\\":null}\""
The API key has read access to any resource in my CF account. This is where I got it from:
Do you know anything about this particular issue?
Hey there, great project!
Just wondering if there are any plans or thoughts on adding the Cloudflare Worker metrics that are available as per: https://developers.cloudflare.com/analytics/graphql-api/tutorials/querying-workers-metrics ?
This is something we would find useful for monitoring the workers directly...
If this is of interest I'm happy to take a look and see if I can figure out a PR if you had any guidance on what/where to change?
Thanks!
Hi, after I execute docker run, I try to add a Prometheus data source in Grafana with the URL http://localhost:8080.
However, when I press "save and test", the error "HTTP Error Not Found" shows up.
I also can't open http://localhost:8080 in a web browser.
Please help.
We have cloudflare-exporter running with several restarts due to this error:
panic: sync: WaitGroup is reused before previous Wait has returned
goroutine 2972 [running]:
sync.(*WaitGroup).Wait(0x400010c070)
	/usr/local/go/src/sync/waitgroup.go:141 +0xb8
main.fetchMetrics()
	/app/main.go:129 +0x278
Any thoughts on that?
Thank you
Is it possible to return country codes with 3 letters (instead of 2)?
I would like to use the Grafana Worldmap Panel but I can't, because it expects 3-letter country codes.
By the way, I use Prometheus.
Thanks
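For context, the conversion the Worldmap Panel needs is the ISO 3166-1 alpha-2 to alpha-3 mapping. A tiny illustrative sketch (only a hand-picked subset of the table; a real deployment would need the full mapping):

```python
# Excerpt of the ISO 3166-1 alpha-2 -> alpha-3 table (subset only).
ALPHA2_TO_ALPHA3 = {
    "US": "USA", "DE": "DEU", "FR": "FRA", "GB": "GBR", "JP": "JPN",
}

def to_alpha3(code):
    """Map a 2-letter country code to its 3-letter form.

    Falls back to the input unchanged when the code is unknown, so
    unmapped labels still pass through rather than being dropped.
    """
    return ALPHA2_TO_ALPHA3.get(code.upper(), code)
```

Alternatively, this kind of mapping could be done outside the exporter with Prometheus relabeling or in Grafana itself.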
Hi,
I installed cloudflare-exporter from the Helm repo, added CF_API_TOKEN, and received only Worker metrics and the following error:
level=error msg="graphql: zones [***] are not authorized"
We have a paid CF account; I also tried setting FREE_TIER=false but it is still not working properly.
Any ideas how to fix it?
This exporter does not do anything at all.
Started via Docker Compose:
cloudflare_sv_exporter:
  image: lablabs/cloudflare_exporter
  environment:
    CF_API_TOKEN: "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcd"
  restart: always
Even after hours this is the only log output:
time="2023-04-10 14:34:09" level=info msg="Beginning to serve on port:8080, metrics path /metrics"
And this is the useless metrics output:
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 5.6611e-05
go_gc_duration_seconds{quantile="0.25"} 0.000141785
go_gc_duration_seconds{quantile="0.5"} 0.000198598
go_gc_duration_seconds{quantile="0.75"} 0.000257929
go_gc_duration_seconds{quantile="1"} 0.000517299
go_gc_duration_seconds_sum 0.0304871
go_gc_duration_seconds_count 144
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 9
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.19.4"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.603384e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.54412384e+08
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 4298
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 640201
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 9.33136e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 3.603384e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 6.529024e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 5.464064e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 5183
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 5.742592e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.1993088e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.681148067506347e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 645384
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 4800
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15600
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 90864
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 113904
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 5.627056e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 741062
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 589824
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 589824
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 2.2789136e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 7
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 4.99
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 11
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.6646144e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.68113724862e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.3869312e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 709
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
As can clearly be seen, this entire exporter is completely functionless. So either fix the broken software, or fix the documentation if it is wrong and there are additional undocumented steps required to make this work. Also, log output needs to be added urgently. Debugging without any logs is useless and hopeless.
The Go linter detected these issues:
zoneRequestUncached is unused (deadcode)
zoneRequestSSLUnencrypted is unused (deadcode)
zoneBandwidthUncached is unused (deadcode)
zoneBandwidthSSLUnencrypted is unused (deadcode)
zonePageviewsSearchEngines is unused (deadcode)
Fix these errors to pass the pipeline.
Currently the only way to set a capability for pre-rendering/templating is via the helm --api-versions argument. That isn't supported by all tooling (including kustomize 4.x.x and argocd). Being able to set the capabilities via the values.yaml would greatly help in those cases. This is only used for the service-monitor.
I have metrics for all enterprise zones, but non-enterprise (free) zones don't collect metrics.
My docker config is:
docker_image_name: "lablabs/cloudflare_exporter"
# docker_image_tag: "latest"
docker_image_tag: "0.0.14"
docker_container_name: "cloudflare_exporter"
docker_container_env: {
CF_API_TOKEN: "{{ cloudflare_token }}",
SCRAPE_DELAY: "300s",
}
docker_container_published_ports: [ 8883:8080 ]
This issue looks related.
Hello,
Since the 21st of June 2023, the exporter no longer returns data points for the following metrics:
cloudflare_zone_requests_cached
cloudflare_zone_requests_content_type
cloudflare_zone_requests_ssl_encrypted
cloudflare_zone_requests_status
cloudflare_zone_requests_browser_map_page_views_count
cloudflare_zone_requests_total
Other metrics such as cloudflare_zone_requests_origin_status_country_host or cloudflare_zone_requests_status_country_host are working fine. I am using the Enterprise plan in Cloudflare.
I see no logs in the exporter that could give more details. I tried restarting the exporter, using a new token, but it hasn't fixed the issue.
Do you have any idea on how to fix this issue?
Hi, I have a problem with request status detection. I have created an alert for the 521 status code, and when I receive the alert, I cannot detect which host it was for. For example, an alert triggered for zone="example.com", but under example.com I have many hosts and LBs. Can we have a label for the hosts?
Wanted to check and see if there were any plans around the support of this project? There are a number of PRs and issues open with no activity since 12+ months ago.
Hello there!
Thank you very much for the wonderful exporter!
Can we add core web vitals metrics to the exporter?
Under 0.0.14, with CF_ZONES set, the logs would show something like this (IDs and names changed):
time="2024-05-15 16:31:08" level=info msg="Filtering zone: 0a3747db5274bf3b097c27abc54912f3 test-staging.com"
time="2024-05-15 16:32:08" level=info msg="Filtering zone: 00e8366440871a9ab90e587eb049df88 site-staging.com"
With 0.15, there are no zone filtering logs and metrics for all zones show up.
I tested a theory based on the code: setting CFG_ZONES instead restored the previous behavior.
I believe I have the fix, tested locally, and will make a PR for it.
Hey guys! Can you suggest best practices for using this exporter in terms of CPU consumption, and which metrics are the most significant? We face CPU overload when we use all included metrics. Any advice or suggestions would be of great help.
Hi,
I want to observe error/request counts broken down by status code.
But the only available dimension is script_name.
It looks like it's missing from being passed to labels at the moment:
cloudflare-exporter/prometheus.go
Lines 195 to 200 in 8e97133
Meanwhile, it seems to be supported:
cloudflare-exporter/cloudflare.go
Lines 40 to 47 in 8e97133
Am I missing anything?
Hey guys, thanks for your product; it has been a great help to our monitoring system. I have a question regarding what metrics you might personally suggest using. Currently, we are actively using cloudflare_zone_requests_status to monitor 5xx errors. Any advice or personal views would be greatly appreciated.
Hi team, thanks for the great project, it's very useful.
The following metrics are exposed with a host label:
cloudflare_zone_requests_status_country_host
cloudflare_zone_firewall_events_count
cloudflare_zone_requests_origin_status_country_host
This host label has too much cardinality: every request with a unique subdomain ends up as a unique time series under the host label, which can't be controlled on our end. We presently use relabel_configs to remove those metrics before ingestion. But I would very much appreciate it if we could get the same metrics aggregated per zone, without the host label. Thanks again.
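For anyone else hitting this before an aggregated variant exists, the drop-before-ingestion workaround described above might look roughly like this in a Prometheus scrape config (the metric names come from the list above; the exact regex is an assumption, adjust to taste):

```yaml
metric_relabel_configs:
  # Drop the high-cardinality per-host series before ingestion.
  - source_labels: [__name__]
    regex: "cloudflare_zone_requests_status_country_host|cloudflare_zone_requests_origin_status_country_host|cloudflare_zone_firewall_events_count"
    action: drop
```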
I have over 50 zones that are a mix of Free and Enterprise plans. While running the exporter cloudflare_zone_XXX data was not populated, but cloudflare_worker_XXX was working. When I specified a short list of zones with CF_ZONES, all the data was populated.
Looks like the documented behaviour "CF_ZONES: If not set, all zones from account are exported" is not working. It would be good to know if there is a maximum number of zones that can be scraped.
Greetings,
despite trying different configurations, I am missing the actual Cloudflare metrics.
I have only:
promhttp_*
go_*
process_*
Tested with all versions from 0.0.6 to latest, with CF_ZONE set and without.
What is wrong?
Metrics are exposed on the /metrics endpoint and everything seems to work. But in the logs I get a bunch of errors which are not verbose enough. It would be great to have more verbose output, either by default or via some env variable, e.g. DEBUG=1.
..
time="2023-09-19 08:00:24" level=info msg="Beginning to serve on port:8080, metrics path /metrics"
time="2023-09-19 08:02:27" level=error msg="graphql: Internal server error"
time="2023-09-19 08:02:28" level=error msg="graphql: Internal server error"
time="2023-09-19 08:02:29" level=error msg="graphql: Internal server error"
time="2023-09-19 08:11:28" level=error msg="graphql: Internal server error"
time="2023-09-19 08:19:28" level=error msg="graphql: Internal server error"
time="2023-09-19 08:25:28" level=error msg="graphql: Internal server error"
time="2023-09-19 08:29:29" level=error msg="graphql: Internal server error"
or
time="2023-09-19 11:18:54" level=info msg="Beginning to serve on port:8080, metrics path /metrics"
time="2023-09-19 11:25:26" level=error msg="graphql: Internal authentication error: internal server error"
time="2023-09-19 11:25:27" level=error msg="graphql: Internal authentication error: internal server error"
time="2023-09-19 11:25:27" level=error msg="graphql: Internal authentication error: internal server error"
time="2023-09-19 11:25:27" level=error msg="graphql: Internal authentication error: internal server error"
time="2023-09-19 11:25:27" level=error msg="graphql: Internal authentication error: internal server error"
ghcr.io/lablabs/cloudflare_exporter with Docker:

version: '3.8'
services:
  cloudflare_exporter:
    image: ghcr.io/lablabs/cloudflare_exporter
    read_only: true
    environment:
      - CF_API_TOKEN=<token_generated_in_cloudflare>
      - SCRAPE_DELAY=30
    ports:
      - 8080:8080
    restart: always
The errors should have more details (which call failed, status code, stacktrace, ...); only a generic error message is shown. This behaviour is seen when cloudflare-exporter runs locally and also in a GCP Kubernetes cluster.
Hi team, thanks for a much needed project to improve our cloudflare observability. Really excited about this project. Well done!
I'm struggling, however, to find the right scraping delay for my project. I see that there's a scrape delay env var. When I check the code, I see that it subtracts this value from time.Now(), and then that value minus 1 minute is the time range for which metrics are fetched?
Our Sysdig monitor scrapes our metrics every 10s. I initially had the idea to just set the Cloudflare exporter's scrape delay to 10s, but I'm not entirely sure if this is the correct approach. Specifically, I'm doing sum(rate(cloudflare_zone_requests_origin_status_country_host[$__interval])) by (host). I set the minInterval in the UI to 60s, since you are looking back 1m as I describe above. Again, not sure. I'm comparing the numbers I get from this to our nginx dashboard, which I tend to trust more I guess, but the numbers don't seem to match.
In general, I guess it would be good if the docs could be updated with some examples and some guidance on scrape intervals etc. Thanks again for an amazing project!
Hi team,
First of all thanks a lot for the amazing exporter.
There is an issue we discovered recently: we noticed that the values we are getting, for example for cloudflare_zone_requests_total, are almost half of those we see in the Cloudflare analytics dashboard.
After some manual GraphQL querying, it turned out that Cloudflare returns partial data if you query for data from 3 minutes ago.
We made a change to move the now value to 5 minutes ago:
now := time.Now().Add(-300 * time.Second).UTC()
After this the metrics from the exporter are equal to those from cloudflare dashboard.
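The lookback logic described above can be sketched as follows (illustrative Python rather than the exporter's Go; the function name and 300-second default are assumptions based on the fix the poster describes):

```python
from datetime import datetime, timedelta, timezone

def query_window(scrape_delay_s, settle_offset_s=300):
    """Compute a one-minute [start, end) window for the analytics query.

    'now' is shifted back by the scrape delay plus an extra settle
    offset, because Cloudflare returns partial data for the most
    recent few minutes.
    """
    end = datetime.now(timezone.utc) - timedelta(seconds=scrape_delay_s + settle_offset_s)
    return end - timedelta(minutes=1), end
```

The trade-off is that metrics lag real time by the offset, but they match what the dashboard eventually reports.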
Hi,
is it possible to use this exporter in conjunction with OpenTelemetry, or to update it to produce OTLP signals? I believe there would be a high level of interest in the community, and Prometheus supports native OTLP ingestion today.
Hi, thank you for this tool but I am having a problem
Free Plan only
The CF token is read-only for Zone Analytics; I also tried one that is read-only for all resources.
I am using docker-compose to run the image; Prometheus and Grafana are on docker-compose too.
cloudflare-exporter:
  image: lablabs/cloudflare_exporter
  container_name: cloudflare-exporter
  environment:
    - CF_API_TOKEN=*****
    - CF_ZONES=***.com,***.org
    - listen=:9102
    - free_tier=true
    - CF_API_KEY=*****
    - CF_API_EMAIL=***
  restart: unless-stopped
  ports:
    - "9102:8080"
Everything starts up fine and there are metrics showing at dockerhost:9102/metrics, and Prometheus is successfully scraping the job. However, the metrics shown are only some system metrics and NO Cloudflare metrics at all. Please see below:
...
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 57
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
That's it.
I've tried with both token only and with API key and email. Again, everything seems to work, but I am just not getting any Cloudflare metrics to show up. Any idea?
We currently run this in our K8s cluster, and we had issues the last time Cloudflare was down: pods would fail their probes, be killed by K8s, only to be restarted and die again. It ended up causing us some issues.
Logs just showed the following message over and over for every restart:
time="2021-11-26 16:13:40" level=fatal msg="HTTP status 502: service failure"
Is there a way if cloudflare is down for the pods to fail gracefully and log that fact?
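One possible graceful-degradation pattern (a hedged sketch in Python rather than the exporter's Go, with illustrative names) is to retry with backoff and keep serving /metrics with stale data, instead of exiting fatally and triggering a crash loop:

```python
import time

def fetch_with_backoff(fetch, attempts=5, base_delay=1.0, sleep=time.sleep, log=print):
    """Retry a failing fetch with exponential backoff.

    'fetch' stands in for the Cloudflare API call. On repeated failure
    the function gives up for this cycle instead of crashing, so the
    process stays alive and liveness probes keep passing.
    """
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception as exc:
            log(f"cloudflare fetch failed (attempt {attempt + 1}/{attempts}): {exc}")
            sleep(base_delay * 2 ** attempt)
    return None  # give up; serve stale metrics until the next cycle
```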
Greetings,
I tried using all sorts of configurations, setting CF_ZONES, FREE_TIER, CF_API_KEY and CF_API_TOKEN but the Prometheus exporter only exports the cloudflare_worker_* metrics. I'm missing the cloudflare_zone_* metrics.
What is going wrong?
Deployed using helm chart version 0.1.8
I've deployed the exporter to AWS ECS. I'm using CF_API_TOKEN to authenticate, and I'm unable to resolve this error, which occurs when the container is being launched:
http Status 400 invalid request headers (6003)
I’ve used tokens that were scoped for logging/analytics as well as global with no success. Any suggestions are appreciated. This is only an issue within ECS, running the container locally works without errors.
Good Day.
I have a problem when I run the Docker container without CF_ZONE.
After running the command:
docker run --rm -p 8080:8081 -e CF_API_TOKEN=******** -e LISTEN=:8081 lablabs/cloudflare_exporter
I got this error:
level=error msg="graphql: zone '0aa6db791bbfb2a9cbf5bcc340f78739' does not have access to the path"
Please, help me.
Hi !
It seems that the binary builds for the 0.0.13 release are missing.
Could you add them @martinhaus ?
Thank you !
Hi,
I am trying to use this exporter to get the Cloudflare metrics as shown in the repo. I used the Docker image lablabs/cloudflare_exporter and set the env vars CF_API_EMAIL, CF_API_KEY and CF_ZONES, but I do not see any Cloudflare-related metrics. I only see metrics like these:
go_memstats_lookups_total 0
go_memstats_mallocs_total 144710
go_memstats_mcache_inuse_bytes 9600
go_memstats_mcache_sys_bytes 16384
I am not sure what I am doing wrong. Any suggestions will be appreciated.
Thank you.
Thank you for the great project. It is really useful!
Cloudflare's Analytics page has a section with requested paths.
It is a very important parameter for me and I would like to have it in my monitoring system.
Not all paths, of course. Top 5 or Top 10, for instance.
Could you please add it in the cloudflare-exporter?
While installing Cloudflare-Exporter via Helm-Chart and helmfile:
repositories:
  - name: "cloudflare-exporter"
    url: "https://lablabs.github.io/cloudflare-exporter"

environments:
  default:
    values:
      - cloudflareexporter:
          key: ref+sops://secrets.yaml?format=yaml#/secrets/cloudflareexporter/key

releases:
  - name: "cloudflare-exporter"
    namespace: "monitoring"
    version: "0.1.8"
    chart: "cloudflare-exporter/cloudflare-exporter"
    wait: true
    values:
      - env:
          - name: CF_API_TOKEN
            value: {{ .Values.cloudflareexporter.key | fetchSecretValue }}
          - name: CF_API_EMAIL
            value: <mail>
          - name: CF_ZONES
            value: <zone>
          - name: securityContext.allowPrivilegeEscalation
            value: false
          - name: securityContext.runAsNonRoot
            value: true
          - name: securityContext.runAsUser
            value: 1000
          - name: securityContext.readOnlyRootFilesystem
            value: true
          - name: securityContext.capabilities.drop[0]
            value: ALL
it seems that the securityContext values are not populated:
monitoring, cloudflare-exporter, Deployment (apps) has been added:
-
+ # Source: cloudflare-exporter/templates/deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: cloudflare-exporter
+ labels:
+ helm.sh/chart: cloudflare-exporter-0.1.8
+ app.kubernetes.io/name: cloudflare-exporter
+ app.kubernetes.io/instance: cloudflare-exporter
+ app.kubernetes.io/version: "0.0.9"
+ app.kubernetes.io/managed-by: Helm
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: cloudflare-exporter
+ app.kubernetes.io/instance: cloudflare-exporter
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: cloudflare-exporter
+ app.kubernetes.io/instance: cloudflare-exporter
+ spec:
+ securityContext:
+ {}
+ serviceAccountName: default
+ containers:
+ - name: cloudflare-exporter
+ securityContext:
+ {}
+ image: "ghcr.io/lablabs/cloudflare_exporter:0.0.9"
+ imagePullPolicy: Always
+ ports:
+ - name: http
+ containerPort: 8080
+ protocol: TCP
+ resources:
+ {}
+ env:
+ - name: CF_API_TOKEN
+ value: <token>
+ - name: CF_API_EMAIL
+ value: <mail>
+ - name: CF_ZONES
+ value: <zone>
monitoring, cloudflare-exporter, Service (v1) has been added:
-
+ # Source: cloudflare-exporter/templates/service.yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: cloudflare-exporter
+ labels:
+ helm.sh/chart: cloudflare-exporter-0.1.8
+ app.kubernetes.io/name: cloudflare-exporter
+ app.kubernetes.io/instance: cloudflare-exporter
+ app.kubernetes.io/version: "0.0.9"
+ app.kubernetes.io/managed-by: Helm
+ annotations:
+ prometheus.io/scrape: "true"
+ spec:
+ type: ClusterIP
+ ports:
+ - port: 8080
+ targetPort: http
+ protocol: TCP
+ name: http
+ selector:
+ app.kubernetes.io/name: cloudflare-exporter
+ app.kubernetes.io/instance: cloudflare-exporter
According to the values.yaml file, it should be possible to populate these values.
Hi guys! Is it possible to have client request path metrics?
Something like a top 10, or configurable.
I use this query to get that information:
query GetZoneTopNs {
  viewer {
    zones(filter: {zoneTag: $zoneTag}) {
      total: httpRequestsAdaptiveGroups(filter: $filter, limit: 1) {
        count
        sum {
          edgeResponseBytes
          visits
          __typename
        }
        __typename
      }
      topPaths: httpRequestsAdaptiveGroups(filter: $filter, limit: 15, orderBy: [$order]) {
        count
        avg {
          sampleInterval
          __typename
        }
        sum {
          edgeResponseBytes
          visits
          __typename
        }
        dimensions {
          metric: clientRequestPath
          __typename
        }
        __typename
      }
      __typename
    }
    __typename
  }
}

With variables:

{
  "zoneTag": "ZONEID",
  "filter": {
    "AND": [
      {
        "datetime_geq": "2022-10-06T19:41:43Z",
        "datetime_leq": "2022-10-06T20:11:43Z"
      },
      {
        "requestSource": "eyeball"
      },
      {
        "clientRequestPath": "/api/"
      }
    ]
  },
  "order": "count_DESC"
}
Getting this error message when running the exporter
level=error msg="graphql: zone '...' does not have access to the path"
How can I solve this problem?
Looking at the code, I do not see any healthcheck endpoint.
Should we use /metrics for it? It would probably be better to add one (or two, for both liveness and readiness probes, if that makes sense for this).
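Until a dedicated endpoint exists, probing /metrics is a workable stopgap (it only proves the HTTP server is up, not that the Cloudflare API calls succeed). A sketch for a Kubernetes container spec, assuming the default port 8080:

```yaml
# Stopgap probes against the metrics endpoint (assumed port 8080).
livenessProbe:
  httpGet:
    path: /metrics
    port: 8080
readinessProbe:
  httpGet:
    path: /metrics
    port: 8080
```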
I've got the cloudflare_exporter working here, but for some reason I'm having issues with Prometheus grabbing all the metrics.
This is all I get when looking in the Prometheus query UI.
When I do curl localhost:9199/metrics I get TONS of metrics, to the point that it scrolls my screen completely. Is it possible that Prometheus can't scrape so many metrics at a time?
Hi there,
When I look at the metrics coming from the load balancer, any time I get a result it seems to be divisible by 10.
Have you got any idea why this is happening?
These metrics showing 500 messages during this period don't match the events in the logs at all either.
Payments.site.com has 4 occurrences of 500 messages in the last 48 hours, but in the metrics it's showing as 10.
api.site.com received 7 occurrences of 500 messages, and none are showing up in the metrics.