prometheus / influxdb_exporter
A server that accepts InfluxDB metrics via the HTTP API and exports them via HTTP for Prometheus consumption.
License: Apache License 2.0
Hello,
I am creating this issue for discussion. When Telegraf sent the standard Go runtime metrics, influxdb_exporter broke for all metrics with this error:
An error has occurred while serving metrics:
2 error(s) occurred:
* collected metric named "go_gc_duration_seconds_count" collides with previously collected summary named "go_gc_duration_seconds"
There are a few ways to fix this problem; I can open a PR with a fix.
I use curl to write data to the influxdb_exporter and Prometheus to monitor the export.
curl -i -XPOST 'http://server2:9122/write?db=my_test' --data-binary 'seconde_test,host=server_0,src_addr=192.168.33.93,dst_addr=192.168.33.88,src_port=4242,dst_port=9096,protocol=ICMP value=1'
The child Prometheus node got an error when I sent the data in bulk. The host ending in .212 is the slave and the one ending in .85 is the master. The error is as follows:
caller=federate.go:166 component=web msg="federation failed" err="write tcp 192.168.33.212:9090-192.168.33.85:47094: write: broken pipe"
This error only appears when data is generated in large quantities. Is there a problem with the way I am using it, or do I need to set some parameters?
MAINTAINER should be replaced by LABEL maintainer="..."
See https://docs.docker.com/engine/reference/builder/#maintainer-deprecated
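As a hedged sketch of the change (the exact maintainer string in this repo's Dockerfile may differ):

```dockerfile
# Deprecated form:
# MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>

# Replacement using the LABEL instruction:
LABEL maintainer="The Prometheus Authors <prometheus-developers@googlegroups.com>"
```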
Hi ALL,
I configure heapster to send metrics to influxdb_exporter via
- /heapster
- --source=kubernetes
- --sink=influxdb:http://localhost:9122
When I tried to fetch these metrics via curl http://localhost:9122/metrics, the following errors were displayed:
An error has occurred during metrics gathering:
2268 error(s) occurred:
* collected metric cpu_usage_rate label:<name:"cluster_name" value:"default" > label:<name:"container_name" value:"container.slice/container-kubepods.slice/container-kubepods-besteffort.slice/container-kubepods-besteffort-poda5d379ad_9148_11e7_83ac_30e1715dbd38.slice" > label:<name:"nodename" value:"cnpvgl56588418" > label:<name:"type" value:"sys_container" > untyped:<value:0 > has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric memory_major_page_faults_rate label:<name:"cluster_name" value:"default" > label:<name:"container_name" value:"system.slice/systemd-update-utmp.service" > label:<name:"nodename" value:"cnpvgl56588418" > label:<name:"type" value:"sys_container" > untyped:<value:0 > has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric memory_major_page_faults label:<name:"cluster_name" value:"default" > label:<name:"container_name" value:"container.slice/container-kubepods.slice/container-kubepods-burstable.slice/container-kubepods-burstable-pod88cc3016_914d_11e7_83ac_30e1715dbd38.slice" > label:<name:"nodename" value:"cnpvgl56588418" > label:<name:"type" value:"sys_container" > untyped:<value:0 > has label dimensions inconsistent with previously collected metrics in the same metric family
....
Can you help check whether this is a configuration issue or a bug in influxdb_exporter?
InfluxDB 2.0 is now available. It comes with new APIs. Find out which of the endpoints we support have 2.x equivalents, and add them. We should support both endpoints at the same time.
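As a rough sketch of the mapping (the target host and the org, bucket, and token values here are placeholders, not actual configuration):

```shell
# InfluxDB 1.x write endpoint (what the exporter accepts today):
curl -XPOST 'http://localhost:9122/write?db=mydb' --data-binary 'cpu,host=a value=1'

# InfluxDB 2.x equivalent: /api/v2/write, addressed by org/bucket with token auth:
curl -XPOST 'http://localhost:9122/api/v2/write?org=myorg&bucket=mydb' \
  -H 'Authorization: Token my-token' --data-binary 'cpu,host=a value=1'
```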
I'm getting the following errors when I try localhost:9122/metrics. I guess this is because of empty or conflicting label values?
* collected metric processes_unknown label:<name:"dc" value:"ap" > label:<name:"host" value:"ip-172-16-4-63" > untyped:<value:0 > has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric net_err_in label:<name:"dc" value:"ap" > label:<name:"host" value:"ip-172-16-33-127" > label:<name:"interface" value:"eth0" > label:<name:"rack" value:"logs" > untyped:<value:0 > has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric kernel_interrupts label:<name:"dc" value:"ap" > label:<name:"host" value:"ip-172-16-3-243" > untyped:<value:2.2162444e+07 > has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric procstat_cpu_time_system label:<name:"dc" value:"ap" > label:<name:"host" value:"ip-172-16-34-34" > label:<name:"pidfile" value:"/var/run/tomcat7.pid" > label:<name:"process_name" value:"java" > label:<name:"rack" value:"logs" > untyped:<value:23071.55 > has label dimensions inconsistent with previously collected metrics in the same metric family
>> vetting code
main.go:122: arg sample for printf verb %q of wrong type: *main.influxDBSample
main.go:169: Collect passes Lock by value: main.influxDBCollector contains sync.Mutex
main.go:193: Describe passes Lock by value: main.influxDBCollector contains sync.Mutex
exit status 1
I would like to add HTTPS and basic auth here, but I have some questions that Influx experts might help answer:
Would this exporter still work with HTTPS?
Should we disable UDP when we enable HTTPS?
Do you have further concerns about Influx compatibility if we add TLS here?
This exporter is used to collect monitoring data points in the InfluxDB format. I have searched a lot of documentation, but I haven't found how to set the address for InfluxDB. Could you please tell me how to set it up?
Hi,
I've tried to understand how this exporter works based on the description, but it is not clear to me.
Could you describe briefly whether I could use it, or whether another approach is possible for my scope?
I need to have everything in Prometheus, or else find a way with InfluxDB to get those metrics.
Kindly take a look at this URL: https://community.influxdata.com/t/input-data-formats-json-to-influxdb-issue/4654
Unfortunately there are few details on the internet, and also in the Google Groups community.
Kind Regards,
I am trying to use the exporter instead of InfluxDB to store data in Prometheus; my application pings InfluxDB before writing. Is it possible to add a /ping endpoint to the exporter?
https://archive.docs.influxdata.com/influxdb/v0.10/concepts/api/
By default the InfluxDB HTTP API listens on port 8086. The /ping, /write, and /query endpoints are all part of the HTTP API.
If multiple sources post to the influx collector and the operator doesn't control the series names, it would be useful to optionally prefix sample names with the database name from each source. If you all would be open to this feature, I'd be happy to submit a patch.
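The idea could be sketched like this (the function name and the enabling flag are hypothetical, not part of the exporter):

```go
package main

import "fmt"

// prefixWithDB optionally prepends the source database name to a sample
// name, so metrics from sources with uncontrolled series names don't collide.
func prefixWithDB(db, name string, enabled bool) string {
	if !enabled || db == "" {
		return name
	}
	return db + "_" + name
}

func main() {
	fmt.Println(prefixWithDB("telegraf", "cpu_usage_idle", true))  // telegraf_cpu_usage_idle
	fmt.Println(prefixWithDB("telegraf", "cpu_usage_idle", false)) // cpu_usage_idle
}
```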
Sorry guys, this is not an issue but rather a question.
How do you configure influxdb_exporter with Prometheus after installing it?
I have not yet installed the InfluxDB exporter, but I am thinking about what comes after that.
I searched for this information everywhere but didn't find anything.
I would appreciate it if you could point me to some info or guide me a bit.
Thanks
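For reference, a minimal Prometheus scrape configuration for this exporter might look like the following sketch (the job name and target are placeholders; 9122 is the port used elsewhere in these reports):

```yaml
scrape_configs:
  - job_name: influxdb_exporter
    static_configs:
      - targets: ['localhost:9122']
```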
The README says
An exporter for metrics in the InfluxDB format used since 0.9.0. It collects metrics in the line protocol via a HTTP API...
However, I don't see where or how that happens. Could you provide an example usage of how to point this exporter to an instance of InfluxDB via HTTP? I'm not an expert on Go code, but it appears everything is set up to work over UDP only (for InfluxDB communication).
Perhaps I am misinterpreting the purpose of this exporter, but it seems to be quite different from the rest of the ones I've worked with. I essentially want to point this exporter at an InfluxDB instance, have the exporter gather metrics, and then set up Prometheus to scrape from the /metrics endpoint it provides. How do I go about doing this?
E.g., consider that data was being published in this format:
cpu_usage_guest{cpu="cpu1",host="host1"} 0
If you add a new tag, say app, like this:
cpu_usage_guest{app="xxx",cpu="cpu1",host="host1"} 0
the endpoint starts throwing errors, and the application has to be stopped and restarted to get correct data again.
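In line-protocol terms, the two writes that trigger the inconsistency could look like this (measurement and tag values are illustrative):

```
cpu_usage_guest,cpu=cpu1,host=host1 value=0
cpu_usage_guest,app=xxx,cpu=cpu1,host=host1 value=0
```

Both land in the same metric family but carry different label sets, which the Prometheus client rejects as inconsistent label dimensions.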
If the remote system's clock drifts from the local system's clock, a potential memory leak can occur, or no metrics are scraped at all.
At line 174, the local system's time should additionally be stored and used for the expiry of sampled data.
Once there is an error (a bad packet) in influxdb_exporter's UDP path, all subsequent packets are dropped. I verified this by checking the metrics URL.
Error log when sending the bad packet:
level=error ts=2022-12-09T10:12:07.254Z caller=main.go:94 msg="Error parsing UDP packet" err="unable to parse 'www_app,event_,random_type=cat value=1': missing tag value"
Context:
We are using this exporter to push metrics in InfluxDB format to Prometheus. We use a dedicated script library from a specific software vendor on the metric source.
Issue:
The scripts pushing the metrics expect error messages as JSON and fail if the response is not JSON.
Currently, influxdb_exporter returns plain text:
error parsing request: unable to parse 'my_metric': missing fields
To be more compatible with a normal InfluxDB, the error messages should look like this:
{"error":"error parsing request: unable to parse 'my_metric': missing fields"}
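A minimal sketch of producing that JSON-shaped error body in Go (the helper name is illustrative, not the exporter's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonError renders a parse error the way InfluxDB does: a JSON object
// with a single "error" field, with any special characters escaped.
func jsonError(msg string) string {
	b, _ := json.Marshal(map[string]string{"error": msg})
	return string(b)
}

func main() {
	fmt.Println(jsonError("error parsing request: unable to parse 'my_metric': missing fields"))
}
```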
Hi,
I'm new to Prometheus and don't know how to collect metrics from InfluxDB and inject them into Prometheus.
If someone has a beginner tutorial :)
Regards
Hi Brian,
Would it be possible to get some clarification on how to use influxdb_exporter? I need to use this exporter until Go 1.8.0 and Telegraf 1.3.0 are released, as I am trying to connect Telegraf to Prometheus.
However, Telegraf currently has a bug (influxdata/telegraf#2282) which is causing issues within Prometheus and Grafana.
Until then I would like to use this if possible.
I have already looked through the code to see if there is any mention of a database, but so far I haven't come across anything.
Could I clarify with you: is Telegraf meant to write to the influxdb_exporter, or is the influxdb_exporter meant to collect from an InfluxDB instance? Have I misunderstood the docs, or am I on the right track?
When configuring Telegraf to point to the influxdb_exporter like so, as per your docs:
[[outputs.influxdb]]
urls = ["http://localhost:9122"]
database = "influxdb" # this says it is required in the telegraf docs
I get the following from Telegraf's Docker log output:
Database creation failed: Post http://localhost:9122/query?db=&q=CREATE+DATABASE+%22influxdb%22: dial tcp [::1]:9122: getsockopt: connection refused
Error writing to output [influxdb]: Could not write to any InfluxDB server in cluster
Is this repo still being maintained? Or have you dropped it because Telegraf already supports exposing Prometheus-style endpoints?
Notes: Everything is running in a container.
Thank you in advance for your help.
Kind Regards
K
# docker pull prom/influxdb-exporter
Using default tag: latest
Error response from daemon: manifest for prom/influxdb-exporter:latest not found
About two years ago we decided to move from logrus to go-kit for logging, this is one of the repos that still needs switching. prometheus/snmp_exporter#447 is an example PR.
CircleCI 1.0 will be shut down on August 31.