
influxdb_exporter's Introduction

Prometheus

Visit prometheus.io for the full documentation, examples and guides.


Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.

The features that distinguish Prometheus from other metrics and monitoring systems are:

  • A multi-dimensional data model (time series defined by metric name and set of key/value dimensions)
  • PromQL, a powerful and flexible query language to leverage this dimensionality
  • No dependency on distributed storage; single server nodes are autonomous
  • An HTTP pull model for time series collection
  • Pushing time series is supported via an intermediary gateway for batch jobs
  • Targets are discovered via service discovery or static configuration
  • Multiple modes of graphing and dashboarding support
  • Support for hierarchical and horizontal federation
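
As an illustration of how PromQL leverages the dimensional model, a query like the following aggregates per-second request rates across all series that share a job label (the metric and label names here are illustrative, not taken from this repository):

```promql
# Per-job HTTP request rate over the last 5 minutes,
# summed across all instances, paths, and other dimensions.
sum by (job) (rate(http_requests_total[5m]))
```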

Architecture overview


Install

There are various ways of installing Prometheus.

Precompiled binaries

Precompiled binaries for released versions are available in the download section on prometheus.io. Using the latest production release binary is the recommended way of installing Prometheus. See the Installing chapter in the documentation for all the details.

Docker images

Docker images are available on Quay.io or Docker Hub.

You can launch a Prometheus container for trying it out with:

docker run --name prometheus -d -p 127.0.0.1:9090:9090 prom/prometheus

Prometheus will now be reachable at http://localhost:9090/.

Building from source

To build Prometheus from source code, you need a working Go development environment; building the React UI assets additionally requires Node.js and npm.

Start by cloning the repository:

git clone https://github.com/prometheus/prometheus.git
cd prometheus

You can use the go tool to build and install the prometheus and promtool binaries into your GOPATH:

GO111MODULE=on go install github.com/prometheus/prometheus/cmd/...
prometheus --config.file=your_config.yml

However, when using go install to build Prometheus, Prometheus will expect to be able to read its web assets from local filesystem directories under web/ui/static and web/ui/templates. In order for these assets to be found, you will have to run Prometheus from the root of the cloned repository. Note also that these directories do not include the React UI unless it has been built explicitly using make assets or make build.

An example of the above configuration file can be found here.
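
For reference, a minimal configuration file of the shape expected by --config.file might look like this (the job name and target below are placeholder values; see the official configuration documentation for the full option set):

```yaml
# prometheus.yml — minimal example configuration (placeholder values)
global:
  scrape_interval: 15s   # how often to scrape targets

scrape_configs:
  # Scrape Prometheus itself on its default port.
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]
```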

You can also build using make build, which will compile in the web assets so that Prometheus can be run from anywhere:

make build
./prometheus --config.file=your_config.yml

The Makefile provides several targets:

  • build: build the prometheus and promtool binaries (includes building and compiling in web assets)
  • test: run the tests
  • test-short: run the short tests
  • format: format the source code
  • vet: check the source code for common errors
  • assets: build the React UI

Service discovery plugins

Prometheus is bundled with many service discovery plugins. When building Prometheus from source, you can edit the plugins.yml file to disable some service discovery mechanisms. The file is a YAML-formatted list of Go import paths that will be built into the Prometheus binary.
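
For example, a trimmed-down plugins.yml keeping only a couple of discovery mechanisms could look like the sketch below (the import paths are illustrative; check the plugins.yml shipped in the repository for the authoritative list):

```yaml
# plugins.yml — Go import paths of service discovery plugins
# to compile into the Prometheus binary.
- github.com/prometheus/prometheus/discovery/kubernetes
- github.com/prometheus/prometheus/discovery/file
```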

After you have changed the file, you need to run make build again.

If you are using another method to compile Prometheus, make plugins will generate the plugins file accordingly.

If you add out-of-tree plugins, which we do not endorse at the moment, additional steps might be needed to adjust the go.mod and go.sum files. As always, be extra careful when loading third party code.

Building the Docker image

The make docker target is designed for use in our CI system. You can build a Docker image locally with the following commands:

make promu
promu crossbuild -p linux/amd64
make npm_licenses
make common-docker-amd64

Using Prometheus as a Go Library

Remote Write

We are publishing our Remote Write protobuf independently at buf.build.

You can use that as a library:

go get buf.build/gen/go/prometheus/prometheus/protocolbuffers/go@latest

This is experimental.

Prometheus code base

In order to comply with go mod rules, Prometheus release numbers do not exactly match Go module releases. For the Prometheus v2.y.z releases, we are publishing equivalent v0.y.z tags.

Therefore, a user who wants to use Prometheus v2.35.0 as a library can run:

go get github.com/prometheus/prometheus@v0.35.0

This solution makes it clear that we might break our internal Go APIs between minor user-facing releases, as breaking changes are allowed in major version zero.
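
After such a go get, the resulting require directive in go.mod would look roughly like this (using the v0.35.0 tag corresponding to the v2.35.0 release discussed above):

```text
// go.mod fragment — the v0.y.z module tag maps to Prometheus release v2.y.z
require github.com/prometheus/prometheus v0.35.0
```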

React UI Development

For more information on building, running, and developing on the React-based UI, see the React app's README.md.

More information

  • Godoc documentation is available via pkg.go.dev. Due to peculiarities of Go Modules, v2.x.y will be displayed as v0.x.y.
  • See the Community page for how to reach the Prometheus developers and users on various communication channels.

Contributing

Refer to CONTRIBUTING.md

License

Apache License 2.0, see LICENSE.

influxdb_exporter's People

Contributors

beorn7, beryju, bharaththiruveedula-zz, brian-brazil, carlpett, csthomas1, danquack, dependabot[bot], grobie, ing8ar, lionralfs, matthiasr, nexucis, prombot, roidelapluie, sdurrheimer, simonpasquier, superq, vidister


influxdb_exporter's Issues

when a new tag is added it throws an error

E.g.: consider data that was being published in this format:

cpu_usage_guest{cpu="cpu1",host="host1"} 0

If you add a new tag, say app, like this:

cpu_usage_guest{app="xxx",cpu="cpu1",host="host1"} 0

the endpoint starts throwing errors, and the application has to be stopped and restarted to serve correct data again.

Tutorial

Hi,

I'm new to Prometheus; I don't know how to collect metrics from InfluxDB and ingest them into Prometheus.
If someone has a beginner tutorial :)

Regards

go vet errors

>> vetting code
main.go:122: arg sample for printf verb %q of wrong type: *main.influxDBSample
main.go:169: Collect passes Lock by value: main.influxDBCollector contains sync.Mutex
main.go:193: Describe passes Lock by value: main.influxDBCollector contains sync.Mutex
exit status 1

Clarify purpose of the exporter + point at InfluxDB's native integrations

The README says

An exporter for metrics in the InfluxDB format used since 0.9.0. It collects metrics in the line protocol via a HTTP API...

However, I don't see where or how that happens. Could you provide an example usage of how to point this exporter to an instance of InfluxDB via HTTP? I'm not an expert on Go code, but it appears everything is set up to work over UDP only (for InfluxDB communication).

Perhaps I am misinterpreting the purpose of this exporter, but it seems to be quite different from the rest of the ones I've worked with. I essentially want to point this exporter at an InfluxDB instance, have the exporter gather metrics, and then set up Prometheus to scrape from the /metrics endpoint it provides. How do I go about doing this?

label dimensions inconsistent with previously collected metrics

I'm getting the following error when I try localhost:9122/metrics

Guess this is because of empty or conflicting values?

* collected metric processes_unknown label:<name:"dc" value:"ap" > label:<name:"host" value:"ip-172-16-4-63" > untyped:<value:0 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric net_err_in label:<name:"dc" value:"ap" > label:<name:"host" value:"ip-172-16-33-127" > label:<name:"interface" value:"eth0" > label:<name:"rack" value:"logs" > untyped:<value:0 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric kernel_interrupts label:<name:"dc" value:"ap" > label:<name:"host" value:"ip-172-16-3-243" > untyped:<value:2.2162444e+07 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric procstat_cpu_time_system label:<name:"dc" value:"ap" > label:<name:"host" value:"ip-172-16-34-34" > label:<name:"pidfile" value:"/var/run/tomcat7.pid" > label:<name:"process_name" value:"java" > label:<name:"rack" value:"logs" > untyped:<value:23071.55 >  has label dimensions inconsistent with previously collected metrics in the same metric family

influxdb exporter send data to prometheus error

I use curl to send data to the influxdb_exporter, and Prometheus to monitor the influxdb_exporter.


curl -i -XPOST 'http://server2:9122/write?db=my_test' --data-binary 'seconde_test,host=server_0,src_addr=192.168.33.93,dst_addr=192.168.33.88,src_port=4242,dst_port=9096,protocol=ICMP value=1'

The child Prometheus node got an error when I sent the data in bulk. The host at .212 is the slave and the host at .85 is the master. The error is as follows:

caller=federate.go:166 component=web msg="federation failed" err="write tcp 192.168.33.212:9090-192.168.33.85:47094: write: broken pipe"

This error will only appear when the data is generated in large quantities. Is there a problem with the way I use it, or do I need to set some parameters?

Add test for gzip'ed ingestion

Follow-up to #78: add tests for this functionality, ideally as close as possible to the way an official client would do it. Also relates to #80 – are there any differences with InfluxDB 2.0 and compression?

Clarification on DB connection

@brian-brazil

Hi Brian,

Would it be possible to get some clarification on how to use influxdb_exporter? I need to use this exporter until Go 1.8.0 and Telegraf 1.3.0 are released, as I am trying to connect Telegraf to Prometheus.
However, Telegraf currently has a bug (influxdata/telegraf#2282) which is causing issues within Prometheus and Grafana.

Until then I would like to use this if possible.
I have already looked through the code to see if there is any mention of a database, but so far I haven't come across anything.

Could I clarify with you: is Telegraf meant to write to the influxdb_exporter, or is the influxdb_exporter meant to collect from an InfluxDB instance? Have I misunderstood the docs, or am I on the right track?

When configuring Telegraf to point to the influxdb_exporter as per your docs:

 [[outputs.influxdb]] 
   urls = ["http://localhost:9122"]
   database = "influxdb"  # this says it is required in the telegraf docs

I get the following in Telegraf's Docker logs output:

Database creation failed: Post http://localhost:9122/query?db=&q=CREATE+DATABASE+%22influxdb%22: dial tcp [::1]:9122: getsockopt: connection refused

Error writing to output [influxdb]: Could not write to any InfluxDB server in cluster

Is this repo still being maintained? Or have you dropped it because Telegraf already supports exposing Prometheus-style endpoints?

Notes: Everything is running in a container.

Thank you in advance for your help.

Kind Regards
K

https://community.influxdata.com/t/input-data-formats-json-to-influxdb-issue/4654

Hi,

I've tried to understand how this exporter works based on the description, but it is not clear.
Could you kindly describe whether I could use it, or whether another approach is possible for my use case?

I need to have everything in Prometheus, or else find a way with InfluxDB to get those metrics.
Kindly take a look at the URL: https://community.influxdata.com/t/input-data-formats-json-to-influxdb-issue/4654

Unfortunately there are few details on the internet, and also on the Google Groups community.

Kind Regards,

Configuration of Influxdb exporter

Sorry guys, this is not an issue but rather a question.
How do you configure the influxdb_exporter with Prometheus after installing it?
I have not yet installed the InfluxDB exporter, but I am thinking about what comes after that.
I searched for this information everywhere but didn't find anything.
I would appreciate it if you could point me to some info or guide me a bit.
Thanks

Add HTTPS and Basic auth

I would like to add HTTPS and Basic auth here, but I have some questions that InfluxDB experts might help answer:

Would this exporter still work with HTTPS?
Should we disable UDP when we enable HTTPS?
Do you have further concerns about InfluxDB compatibility if we add TLS here?

Errors when pushing standard Go client metrics to influxdb_exporter

Hello,
I am creating this issue to discuss: when sending standard Go client metrics exported via Telegraf, influxdb_exporter broke for all metrics with this error:

An error has occurred while serving metrics:

2 error(s) occurred:
* collected metric named "go_gc_duration_seconds_count" collides with previously collected summary named "go_gc_duration_seconds"

There are some ways to fix this problem:

  • Add an option to skip the internal Go metrics
  • Move the internal metrics to a new endpoint (best in my opinion)

I can make a PR with this fix.

UDP listener not responding after a parse error

Once an error occurs (a bad packet is sent) in influxdb-exporter's UDP listener, all subsequent packets are dropped; I verified this by checking the metrics URL.
Error log when sending the bad packet:
level=error ts=2022-12-09T10:12:07.254Z caller=main.go:94 msg="Error parsing UDP packet" err="unable to parse 'www_app,event_,random_type=cat value=1': missing tag value"

What happened to the docker image?

# docker pull prom/influxdb-exporter
Using default tag: latest
Error response from daemon: manifest for prom/influxdb-exporter:latest not found

Optionally prefix sample name with database name

If multiple sources post to the influx collector and the operator doesn't control the series names, it would be useful to optionally prefix sample names with the database name from each source. If you all would be open to this feature, I'd be happy to submit a patch.

Error messages should have JSON format

Context:
We are using this exporter to push metrics in InfluxDB format to Prometheus. We use a dedicated script library from a specific software vendor on the metric source.

Issue:
The scripts pushing the metrics expect error messages as JSON and run into errors if the response is not JSON.

Currently the influxdb_exporter returns plain text:

error parsing request: unable to parse 'my_metric': missing fields

To be more compatible with a real InfluxDB, the error messages should look like this:

{"error":"error parsing request: unable to parse 'my_metric': missing fields"}

error occurs when configured as heapster --sink

Hi ALL,

I configured Heapster to send metrics to influxdb_exporter via:

        - /heapster
        - --source=kubernetes
        - --sink=influxdb:http://localhost:9122

When I tried to fetch these metrics via curl http://localhost:9122/metrics, the following errors were displayed:

An error has occurred during metrics gathering:

2268 error(s) occurred:
* collected metric cpu_usage_rate label:<name:"cluster_name" value:"default" > label:<name:"container_name" value:"container.slice/container-kubepods.slice/container-kubepods-besteffort.slice/container-kubepods-besteffort-poda5d379ad_9148_11e7_83ac_30e1715dbd38.slice" > label:<name:"nodename" value:"cnpvgl56588418" > label:<name:"type" value:"sys_container" > untyped:<value:0 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric memory_major_page_faults_rate label:<name:"cluster_name" value:"default" > label:<name:"container_name" value:"system.slice/systemd-update-utmp.service" > label:<name:"nodename" value:"cnpvgl56588418" > label:<name:"type" value:"sys_container" > untyped:<value:0 >  has label dimensions inconsistent with previously collected metrics in the same metric family
* collected metric memory_major_page_faults label:<name:"cluster_name" value:"default" > label:<name:"container_name" value:"container.slice/container-kubepods.slice/container-kubepods-burstable.slice/container-kubepods-burstable-pod88cc3016_914d_11e7_83ac_30e1715dbd38.slice" > label:<name:"nodename" value:"cnpvgl56588418" > label:<name:"type" value:"sys_container" > untyped:<value:0 >  has label dimensions inconsistent with previously collected metrics in the same metric family

....

Can you help check whether this is a configuration issue or a bug in influxdb_exporter?
