
metrics's Introduction


metrics - lightweight package for exporting metrics in Prometheus format

Features

  • Lightweight. Has a minimal number of third-party dependencies, and all of these deps are small. See this article for details.
  • Easy to use. See the API docs.
  • Fast.
  • Allows exporting distinct metric sets via distinct endpoints. See Set.
  • Supports easy-to-use histograms, which just work without any tuning. Read more about VictoriaMetrics histograms in this article.
  • Can push metrics to VictoriaMetrics or to any other remote storage that accepts metrics in the Prometheus text exposition format. See these docs.

Limitations

Usage

import "github.com/VictoriaMetrics/metrics"

// Register various metrics.
// Metric name may contain labels in Prometheus format - see below.
var (
	// Register counter without labels.
	requestsTotal = metrics.NewCounter("requests_total")

	// Register summary with a single label.
	requestDuration = metrics.NewSummary(`requests_duration_seconds{path="/foobar/baz"}`)

	// Register gauge with two labels.
	queueSize = metrics.NewGauge(`queue_size{queue="foobar",topic="baz"}`, func() float64 {
		return float64(foobarQueue.Len())
	})

	// Register histogram with a single label.
	responseSize = metrics.NewHistogram(`response_size{path="/foo/bar"}`)
)

// ...
func requestHandler() {
	// Increment the requestsTotal counter.
	requestsTotal.Inc()

	startTime := time.Now()
	// processRequest is assumed to return the response body.
	response := processRequest()
	// Update requestDuration summary.
	requestDuration.UpdateDuration(startTime)

	// Update responseSize histogram with the response size in bytes.
	responseSize.Update(float64(len(response)))
}

// Expose the registered metrics at `/metrics` path.
http.HandleFunc("/metrics", func(w http.ResponseWriter, req *http.Request) {
	metrics.WritePrometheus(w, true)
})

// ... or push registered metrics every 10 seconds to http://victoria-metrics:8428/api/v1/import/prometheus
// with the added `instance="foobar"` label to all the pushed metrics.
metrics.InitPush("http://victoria-metrics:8428/api/v1/import/prometheus", 10*time.Second, `instance="foobar"`, true)

By default, exposed metrics do not have TYPE or HELP meta information. Call ExposeMetadata(true) in order to generate TYPE and HELP meta information for each metric.
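For example, a minimal sketch (the TYPE line shown assumes the requests_total counter registered above):

metrics.ExposeMetadata(true)

// The /metrics output then contains metadata lines such as:
// # TYPE requests_total counter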

See docs for more info.

Users

FAQ

Why isn't the metrics API compatible with github.com/prometheus/client_golang?

Because github.com/prometheus/client_golang is too complex and hard to use.

Why doesn't metrics.WritePrometheus expose documentation for each metric?

Because this documentation is ignored by Prometheus. The documentation is for users, so just give meaningful names to the exported metrics, or add comments in the source code or in another suitable place explaining each metric exposed by your application.

How to implement CounterVec in metrics?

Just use GetOrCreateCounter instead of CounterVec.With. See this example for details.
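A minimal sketch, assuming the fmt and metrics packages are imported (the trackRequest function and path label are illustrative):

func trackRequest(path string) {
	// Build the full metric name with labels; GetOrCreateCounter registers
	// the counter on first use and returns the existing one afterwards.
	metrics.GetOrCreateCounter(fmt.Sprintf(`requests_total{path=%q}`, path)).Inc()
}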

Why Histogram buckets contain vmrange labels instead of le labels like in Prometheus histograms?

Buckets with vmrange labels occupy less disk space than Prometheus-style buckets with le labels, because vmrange buckets don't include counters for the previous ranges. VictoriaMetrics provides the prometheus_buckets function, which converts vmrange buckets to Prometheus-style buckets with le labels. This is useful for building heatmaps in Grafana. Additionally, its histogram_quantile function transparently handles histogram buckets with vmrange labels.
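For example, a hedged sketch of a Grafana heatmap query over a vmrange histogram (the metric name comes from the usage example above):

prometheus_buckets(sum(rate(response_size_bucket[5m])) by (vmrange))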

metrics's People

Contributors

aierui, alicebob, andrewchubatiuk, dmitryk-dk, ernado, f41gh7, hagen1778, imorph, lammel, sequix, tenmozes, valyala, vtolstov, xsteadfastx, zekker6, zhengtianbao


metrics's Issues

Support additional golang runtime metrics.

It'd be great to have the following metrics:

https://pkg.go.dev/runtime/metrics

/sched/latencies:seconds
	Distribution of the time goroutines have spent in the scheduler
	in a runnable state before actually running. Bucket counts
	increase monotonically.

/sync/mutex/wait/total:seconds
	Approximate cumulative time goroutines have spent blocked
	on a sync.Mutex or sync.RWMutex. This metric is useful for
	identifying global changes in lock contention. Collect a mutex
	or block profile using the runtime/pprof package for more
	detailed contention data.
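For reference, a minimal sketch of reading these samples directly via the standard runtime/metrics package (/sync/mutex/wait/total:seconds requires go >= 1.20):

import runtimemetrics "runtime/metrics"

func readSchedulerAndMutexMetrics() {
	samples := []runtimemetrics.Sample{
		{Name: "/sched/latencies:seconds"},
		{Name: "/sync/mutex/wait/total:seconds"},
	}
	runtimemetrics.Read(samples)
	// samples[0].Value holds a float64 histogram; samples[1].Value a float64.
}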

how to replace prometheus Histogram and Summary counter

I want to switch to this package, but I don't understand from the docs how to migrate what I already have in Prometheus:

timer := prometheus.NewTimer(prometheus.ObserverFunc(func(v float64) {       
  us := v * 1000000 // make microseconds                                     
  timeCounterSummary.WithLabelValues(name).Observe(us)                       
  timeCounterHistogram.WithLabelValues(name).Observe(v)                      
}))                                                                          
defer timer.ObserveDuration()                                                

How can I do that in this package?
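A hedged sketch of an equivalent with this package, assuming the time, fmt and metrics packages are imported and name comes from the snippet above (the metric names are placeholders; both Summary and Histogram provide Update):

start := time.Now()
defer func() {
	elapsed := time.Since(start)
	// Microseconds go into the summary, seconds into the histogram.
	metrics.GetOrCreateSummary(fmt.Sprintf(`time_summary{name=%q}`, name)).Update(float64(elapsed.Microseconds()))
	metrics.GetOrCreateHistogram(fmt.Sprintf(`time_histogram{name=%q}`, name)).Update(elapsed.Seconds())
}()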

Wrong histogram buckets?

example:

package main

import (
	"bytes"
	"fmt"

	"github.com/VictoriaMetrics/metrics"
)

func main() {
	var bb bytes.Buffer
	test := metrics.NewHistogram(`test`)
	test.Update(1000)
	test.Update(1001)
	test.Update(1002)
	metrics.WritePrometheus(&bb, false)
	fmt.Printf("%s", bb.String())
}

result:

test_bucket{vmrange="9.5e2...1.0e3"} 3
test_sum 3003
test_count 3

expected:

test_bucket{vmrange="9.5e2...1.0e3"} 1
test_bucket{vmrange="1.0e3...1.5e3"} 2
test_sum 3003
test_count 3

vmagent loses some metrics because it doesn't push them on shutdown

Description

We noticed that some metrics are randomly not pushed. After some debugging, we found out that this only happens when vmagent runs for a short period of time and cannot push all the metrics, because some of them were created between the last scrape and shutdown.
The metrics appear in the input file, but they are not sent to the -remoteWrite.url endpoint.

A possible solution might be to change the code here

metrics/push.go

Lines 236 to 242 in fdfd428

	case <-stopCh:
		if wg != nil {
			wg.Done()
		}
		return
	}
}

to push the metrics on shutdown

To reproduce

Use vmagent in an environment with a short life cycle.

Version

vmagent-20230313-021802-tags-v1.89.1-0-g388d6ee16
But it doesn't really matter, since the same problem exists even in the latest version of vmagent.

What's the best way to purge a metrics set?

Hi,

I'm trying to implement some clean-up logic on metrics.Set

func generateMetricSet() *metrics.Set {
	ms := metrics.NewSet()
	total_metrics := 1000
	for total_metrics > 0 {
		ms.GetOrCreateCounter(RandStringBytesRmndr(50)).Set(uint64(rand.Int63()))
		total_metrics--
	}
	return ms
}

const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
func RandStringBytesRmndr(n int) string {
	b := make([]byte, n)
	for i := range b {
		b[i] = letterBytes[rand.Int63() % int64(len(letterBytes))]
	}
	return string(b)
}

func BenchmarkMetricParser_Purge(b *testing.B) {
	for i := 0; i < b.N; i++ {
		ms := generateMetricSet()
		for _, m := range ms.ListMetricNames() {
			ms.UnregisterMetric(m)
		}
	}
}
func BenchmarkMetricParser_Purge2(b *testing.B) {
	for i := 0; i < b.N; i++ {
		ms := generateMetricSet()
		_ = ms
		ms = nil
	}
}
BenchmarkMetricParser_Purge-4                458           5385638 ns/op          366105 B/op       4066 allocs/op
BenchmarkMetricParser_Purge2-4               618           3673018 ns/op          315736 B/op       4054 allocs/op

What is the best approach for doing this periodic cleanup?

VM as a metrics solution: the metrics API needs more features

I want to use the metrics API to replace the Prometheus API.
But from reading the source code, I think adding the following features would be wonderful:

  1. keep it fast and low on memory use, and support profile reporting.
  2. let the user limit the metric count (so one server can't ruin robustness)
  3. let the user limit the memory use of the metrics API
  4. remove short-lived metrics that haven't changed for hours
  5. provide a custom binary marshal format and binary protocol, for low CPU use and low bandwidth.

thanks.

vmrange histograms in stock Prometheus?

Plenty of services have come to expect the Prometheus style of histograms.

Considering how hard it is to pull off a single-day migration, what can orgs do to make vmrange histograms work in existing dashboards?

Is there a way to show the data without VictoriaMetrics extensions?

Summary Unregister is partial

Calling the Unregister method for a summary metric does only a partial removal, so re-registering the metric is impossible:

s := NewSet()
s.NewSummary("summary")
s.UnregisterMetric("summary")
s.NewSummary("summary") // this fails 

Program will OOM due to massive metrics in memory without an expiring mechanism

There is a program that consumes a massive number of messages from Kafka, transforms the messages into metrics format, and uses the VM metrics SDK to push them to VictoriaMetrics.
Because the VM metrics SDK holds a global map that stores all the metrics, and there is no expiring mechanism, the map expands infinitely and the program finally OOMs. The longer the program runs, the more obvious this becomes. Any solution for this?
It seems a similar problem exists in https://github.com/prometheus/client_golang:
prometheus/client_golang#920

// Set is a set of metrics.
//
// Metrics belonging to a set are exported separately from global metrics.
//
// Set.WritePrometheus must be called for exporting metrics from the set.
type Set struct {
	mu        sync.Mutex
	a         []*namedMetric
	m         map[string]*namedMetric
	summaries []*Summary
}

Introduce a `Close` method that pushes metrics one last time.

When you use the InitPush method to push metrics from a job, not all metrics get pushed.

This specifically affects metrics that are updated just before the job terminates. An example of such a metric is how long the job took to process: since there is never another push after this metric is updated, it never gets sent to the cluster.

A solution could be to introduce a Close method, perhaps compatible with io.Closer, that pushes all metrics one last time before the job is killed.
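Until such a method exists, a hedged workaround sketch (the push URL is a placeholder; bytes, net/http and metrics are assumed imported) is to write all registered metrics and POST them once right before exit:

func pushOnce(pushURL string) error {
	var buf bytes.Buffer
	metrics.WritePrometheus(&buf, true)
	resp, err := http.Post(pushURL, "text/plain", &buf)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}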

Tag 1.25.0+ is not compatible with go 1.20

New runtime metrics were added in this PR.

Specifically, the metric "/gc/gomemlimit:bytes" ("go_memlimit_bytes") is not available in go 1.20. It was added in this commit.

Using go 1.20 causes the following panic:

panic: BUG: unexpected runtimemetrics.KindBad for sample.Name="/gc/gomemlimit:bytes"

goroutine 140 [running]:
github.com/VictoriaMetrics/metrics.writeRuntimeMetric({0xe0f8c0, 0xc0002fe7e0}, {0xd23363?, 0x0?}, 0x0?)
	/home/runner/go/pkg/mod/github.com/!victoria!metrics/[email protected]/go_metrics.go:97 +0x1d7
github.com/VictoriaMetrics/metrics.writeRuntimeMetrics({0xe0f8c0, 0xc0002fe7e0})
	/home/runner/go/pkg/mod/github.com/!victoria!metrics/[email protected]/go_metrics.go:90 +0x139
github.com/VictoriaMetrics/metrics.writeGoMetrics({0xe0f8c0, 0xc0002fe7e0})
	/home/runner/go/pkg/mod/github.com/!victoria!metrics/[email protected]/go_metrics.go:26 +0x46
github.com/VictoriaMetrics/metrics.WriteProcessMetrics({0xe0f8c0, 0xc0002fe7e0})
	/home/runner/go/pkg/mod/github.com/!victoria!metrics/[email protected]/metrics.go:213 +0x25
github.com/VictoriaMetrics/metrics.WritePrometheus({0xe0f8c0, 0xc0002fe7e0}, 0x1)
	/home/runner/go/pkg/mod/github.com/!victoria!metrics/[email protected]/metrics.go:88 +0x299
github.com/VictoriaMetrics/metrics.InitPush.func1({0xe0f8c0?, 0xc0002fe7e0?})
	/home/runner/go/pkg/mod/github.com/!victoria!metrics/[email protected]/push.go:53 +0x27
github.com/VictoriaMetrics/metrics.InitPushExt.func1()
	/home/runner/go/pkg/mod/github.com/!victoria!metrics/[email protected]/push.go:128 +0x1b4
created by github.com/VictoriaMetrics/metrics.InitPushExt
	/home/runner/go/pkg/mod/github.com/!victoria!metrics/[email protected]/push.go:121 +0x68e

Steps to reproduce:

Using go 1.20

func TestMetrics(t *testing.T) {
	var bb bytes.Buffer
	writeRuntimeMetrics(&bb)
	fmt.Println(bb.String())
}

This same test passes using go 1.21

High metric churn when using histogram

We are using statsd_exporter, and I'm looking to plug VM histograms in there. One issue I see is high metric churn, because each vmrange label is its own series. And we use a TTL, so there will be a large number of short-duration metrics. I suppose it depends on the cardinality of values for the target applications. For a webapp they should ideally be within 10s (which means 18 series). But as we start measuring parts of an application, e.g. external dependencies like Redis, MySQL, etc., this cardinality can explode.

So the question is: in practice, what is the cost of VM histograms vs. Prometheus histograms? Is it advisable to continue using Prometheus-style histograms, and use VM histograms only for special cases?

Another approach I'm looking at to improve on the Prometheus histogram is to also publish summary metrics, i.e. every timer produces a histogram + summary. So if a histogram caps at 10s, the summary can still provide quantiles (1.0, 0.0, p95, etc.); albeit not across series, it's still useful. This seems less "costly" to me compared to moving to VM histograms.

Histograms compatible with Prometheus

Documentation makes a statement on Prometheus histograms support.

Why Histogram buckets contain vmrange labels instead of le labels like in Prometheus histograms?

Buckets with vmrange labels occupy less disk space than Prometheus-style buckets with le labels, because vmrange buckets don't include counters for the previous ranges. VictoriaMetrics provides the prometheus_buckets function, which converts vmrange buckets to Prometheus-style buckets with le labels. This is useful for building heatmaps in Grafana. Additionally, its histogram_quantile function transparently handles histogram buckets with vmrange labels.

While I appreciate the new way of exposing and storing histogram data with vmrange, the lack of support for le histograms reduces the applicability of this library.

As a developer I'm not entirely happy with the standard prometheus client (for a variety of reasons, from the clumsy API to protobuf dependencies) and I would like to use metrics instead, but as a developer I don't control what kind of technology is provided by the infrastructure (Prometheus, VictoriaMetrics, etc.).

Unfortunately, the lack of support for le histograms makes this client incompatible with the Prometheus scraper/db.

Hopefully the decision about additionally supporting le histograms can be reconsidered, so that metrics works nicely with a variety of scrapers/dbs.

Maybe that would also help adoption of VictoriaMetrics, as applications could have a smoother transition by integrating the metrics client first and switching scraper/db technology later.

Question: Implement Set on Gauge

I stumbled across a common pitfall in Go with regard to closures. Consider the code:

func (s *stats) Set(label string, val float64) {
	val2 := val
	s.metrics.GetOrCreateGauge(label, func() float64 {
		return val2
	})
}
  • When s.Set() is called for the first time, the values of val and val2 are the same.
  • When s.Set() is invoked the next time, the value of val updates but val2 is still the old one.

This strange behavior happens because Gauge stores the function closure inside its struct. Future invocations don't replace the function, so the user sees the old value.

The workaround was to put the value in a map first and, inside the function closure, look it up in the map by key.

func (s *stats) Set(label string, val float64) {
	s.Lock()
	defer s.Unlock()
	s.gauges[label] = val
	s.metrics.GetOrCreateGauge(label, func() float64 {
		return s.gauges[label]
	})
}

I am not sure, but storing a closure inside Gauge does not seem like a proper API to me. Is there a reason why a .Set(float64) method can't be provided, and a func() float64 has to be passed instead? The Gauge API could be similar to the Counter.Set method, which takes a uint64.

This took a lot of time to figure out, so I'm documenting it in case anyone else comes across this behaviour.
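A hedged workaround sketch that avoids the map: store the value in an atomic that the registered closure reads (requires go >= 1.19 for atomic.Uint64; math, sync/atomic and metrics are assumed imported):

type settableGauge struct {
	bits atomic.Uint64 // float64 bits of the current value
}

func newSettableGauge(set *metrics.Set, name string) *settableGauge {
	g := &settableGauge{}
	// The closure is registered once and always reads the latest value.
	set.GetOrCreateGauge(name, func() float64 {
		return math.Float64frombits(g.bits.Load())
	})
	return g
}

func (g *settableGauge) Set(v float64) {
	g.bits.Store(math.Float64bits(v))
}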

ability to transparent replace original

Nice package, thanks.
Is it possible to transparently replace github.com/prometheus/client_golang/prometheus
with github.com/VictoriaMetrics/metrics via go.mod?
I don't want to change import paths in code; I'd prefer to override it via go.mod.

How to display a histogram with the vmrange label in Grafana?

Several cases, all failed:

histogram_quantile(0.9, rate(pair_cache_seconds_bucket[5m]))

prometheus_buckets(sum(rate(pair_cache_seconds_bucket)) by (vmrange))

histogram_quantile(0.99, sum(increase(pair_cache_seconds_bucket[5m])) by (vmrange))

How does VM solve "incompatible bucket ranges" problem?

Hi there,

I came across the blog post titled Improving Histogram Usability for Prometheus and Grafana where the author claims to have resolved "Issue #3: incompatible bucket ranges." I am curious about the specific approach taken to solve this problem. Could you please provide more details on how this issue was addressed? Additionally, if possible, point me to the relevant source code where this solution has been implemented.

Moreover, I have encountered a problem in our production environment related to this topic. When the range of buckets varies, the calculated percentile data seems to be completely inaccurate. Below, I am providing the relevant data for your reference.

When using the following expression to calculate the final percentile value, the result returned is 30000, which is incorrect:

(histogram_quantile(0.99, sum (rate(http_client_requests_seconds_bucket{env="staging",project_name="xx"}[2m])) by (le, uri, method, project_name)) * 1000)

Data

[{"metric":{"le":"+Inf","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"+Inf","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9166"]},{"metric":{"le":"30.0","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"28.633115306","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"22.906492245","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"17.179869184","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"15.748213416","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"14.316557651","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"12.884901886","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"11.453246121","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"10.021590356","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"10.0","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9166"]},{"metric":{"le":"8.589934591","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"7.158278826","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"6.0","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9166"]},{"metric":{"le":"5.726623061","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"4.294967296","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"4.0","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9166"]},{"metric":{"le":"3.937053352","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"3.579139411","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"3.22122547","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"3.0","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9166"]},{"metric":{"le":"2.863311529","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"2.505397588","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"2.147483647","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"2.0","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9166"]},{"metric":{"le":"1.789569706","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"1.5","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9166"]},{"metric":{"le":"1.431655765","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"1.073741824","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"1.0","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9166"]},{"metric":{"le":"0.984263336","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.894784851","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.805306366","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.768","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9166"]},{"metric":{"le":"0.715827881","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.64","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9166"]},{"metric":{"le":"0.626349396","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.536870911","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.512","instance":"10.90.42.35:8081"},"v
alue":[1716366921.957,"9165"]},{"metric":{"le":"0.447392426","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.384","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9165"]},{"metric":{"le":"0.357913941","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.268435456","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.256","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9164"]},{"metric":{"le":"0.246065832","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.223696211","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.20132659","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.192","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9161"]},{"metric":{"le":"0.178956969","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.156587348","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.134217727","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.128","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9157"]},{"metric":{"le":"0.111848106","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.089478485","instance":"10.90.39.2:8081"},"value":[1716366921.957,"26"]},{"metric":{"le":"0.067108864","instance":"10.90.39.2:8081"},"value":[1716366921.957,"25"]},{"metric":{"le":"0.064","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9140"]},{"metric":{"le":"0.061516456","instance":"10.90.39.2:8081"},"value":[1716366921.957,"25"]},{"metric":{"le":"0.055924051","instance":"10.90.39.2:8081"},"value":[1716366921.957,"25"]},{"metric":{"le":"0.050331646","instance":"10.90.39.2:8081"},"value":[1716366921.957,"24"]},{"metric":{"le":"0.044739241","instance":"10.90.39.2:8081"},"value":[1716366921.957,"23"]},{"metric":{"le":"0.04","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9126"]},{"metric":{"le":"0.039146836","instance":"10.90.39.2:8081"},"value":[1716366921.957,"23"]},{"metric":{"le":"0.033554431","instance":"10.90.39.2:8081"},"value":[1716366921.957,"23"]},{"metric":{"le":"0.032","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9113"]},{"metric":{"le":"0.027962026","instance":"10.90.39.2:8081"},"value":[1716366921.957,"23"]},{"metric":{"le":"0.024","instance":"10.90.42.35:8081"},"value":[1716366921.957,"9097"]},{"metric":{"le":"0.022369621","instance":"10.90.39.2:8081"},"value":[1716366921.957,"21"]},{"metric":{"le":"0.016777216","instance":"10.90.39.2:8081"},"value":[1716366921.957,"4"]},{"metric":{"le":"0.015379112","instance":"10.90.39.2:8081"},"value":[1716366921.957,"2"]},{"metric":{"le":"0.013981011","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.01258291","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.012","instance":"10.90.42.35:8081"},"value":[1716366921.957,"7339"]},{"metric":{"le":"0.011184809","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.009786708","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.008388607","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.008","instance":"10.90.42.35:8081"},"value":[1716366921.957,"1"]},{"metric":{"le":"0.006990506","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.006","instance":"10.90.42.35:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.005592
405","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.004194304","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.004","instance":"10.90.42.35:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.003844776","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.003495251","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.003145726","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.003","instance":"10.90.42.35:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.002796201","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.002446676","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.002097151","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.002","instance":"10.90.42.35:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.001747626","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.001398101","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.001048576","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.001","instance":"10.90.39.2:8081"},"value":[1716366921.957,"0"]},{"metric":{"le":"0.001","instance":"10.90.42.35:8081"},"value":[1716366921.957,"0"]}]

I would appreciate it if you could analyze this data and help identify the root cause of the issue.

Sampling support

Metrics updates in an application's hot path may be expensive. Some heavily loaded web servers or processing pipelines serve hundreds of thousands of events per second, and for visibility they also increment a bunch of metrics for each event. It would be cool if the metrics package had native sampling support. For example:

requestsTotal = metrics.NewCounterSampled("requests_total", 0.1)

Every call to requestsTotal.Inc() will have a 10% chance of actually incrementing the counter and a 90% chance of doing nothing. But when the increment actually happens, it will be += 10 instead of += 1.

Workaround: batch metrics updates in the hot path and apply them periodically (e.g. every 100ms) at a fixed rate.
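A hedged userland sketch of the proposed behavior on top of the existing Counter API (NewCounterSampled itself is not part of the package; math, math/rand and metrics are assumed imported):

type sampledCounter struct {
	c    *metrics.Counter
	rate float64 // fraction of Inc calls that actually count
	step int     // amount added per sampled Inc, roughly 1/rate
}

func newCounterSampled(name string, rate float64) *sampledCounter {
	return &sampledCounter{
		c:    metrics.GetOrCreateCounter(name),
		rate: rate,
		step: int(math.Round(1 / rate)),
	}
}

func (sc *sampledCounter) Inc() {
	// With rate=0.1, roughly 1 in 10 calls adds 10 to the counter.
	if rand.Float64() < sc.rate {
		sc.c.Add(sc.step)
	}
}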

Add function to Unregister metrics from default set

While it is possible to register metrics into the default set, there is no option to unregister them from it.
As a workaround the user can use their own set, but this may be complicated if metrics are used in multiple packages.

Export CPU cores metrics

Was going through the 1.73.0 release notes and came across this issue, where process_cpu_cores_available was made available in all VictoriaMetrics components.

I am wondering, shouldn't this be available inside the metrics library as well, over here: https://github.com/VictoriaMetrics/metrics/blob/master/process_metrics_linux.go

This library already exports process-specific CPU/memory/IO metrics, so it might be a good place to add process_cpu_cores_available so that all users of this library get access to it as well?

Thanks

[FR] Possible package push improvements

Hello everyone!
I have some questions/proposals to improve the user experience when pushing metrics using this library. Some of them are not backward-compatible, so it's up to you to decide on those :)

  • Changing the data type of extraLabels from a string (which is expected to be properly formatted) to map[string]string would allow preparing the extra labels in the expected way without exposing these details, and would allow checking the validity of the data
  • Instead of adding each extra label to each metric on each push, it is possible to pass these extra labels as extra_label query args once, at initialization time, something like

args := url.Values{}
for label, value := range extraLabels {
	args.Add("extra_label", fmt.Sprintf("%s=%s", label, value))
}
parsed.RawQuery = args.Encode()
url := parsed.String()
  • Adding some metrics for visibility could be useful for debugging, something like
metrics_push_interval_seconds
metrics_push_total
metrics_push_errors_total
metrics_push_bytes_pushed_total
metrics_push_duration_seconds
metrics_push_block_size_bytes

Helper function to build metric's name

Hi, thank you for the lib, it's great.

I have a proposal to add a helper function that builds a metric's name from its base name and labels. The current API for Counter (which applies to the other types too) expects a final name, like foo{bar="baz",aaa="b"}.

Building such names is not that hard, but it's repetitive, pointless work (I had to copy a buildName helper four or so times between projects).

Proposal: add func BuildName(name string, labels ...string) string that creates a valid Prometheus-compatible name. Notably, there is no error result; we can just panic if the number of labels isn't even (they are key-value pairs, huh). However, it could return an error instead; that's not hard either. A hedged sketch is shown below.

Happy to make a PR for that. Thanks.
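A hedged sketch of such a helper (not part of the package; it panics on an odd number of labels, per the proposal; fmt and strings are assumed imported):

// BuildName("foo", "bar", "baz", "aaa", "b") returns `foo{bar="baz",aaa="b"}`.
func BuildName(name string, labels ...string) string {
	if len(labels)%2 != 0 {
		panic("BuildName: labels must be key-value pairs")
	}
	if len(labels) == 0 {
		return name
	}
	var sb strings.Builder
	sb.WriteString(name)
	sb.WriteByte('{')
	for i := 0; i < len(labels); i += 2 {
		if i > 0 {
			sb.WriteByte(',')
		}
		fmt.Fprintf(&sb, "%s=%q", labels[i], labels[i+1])
	}
	sb.WriteByte('}')
	return sb.String()
}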

Add optional support for data type and description for metrics

Prometheus metrics generally list a data type and a description of the metric; some agents even look for this data. It generally looks like the sample below.

# HELP http_requests_total Total number of http api requests
# TYPE http_requests_total counter
http_requests_total{api="add_product"} 4633433

I believe this library adds neither the data type nor the description, which makes it difficult for some agents to scrape the data points.

Would it be possible to add the ability to denote a description and data type for each metric?

push with authorization

It would be great if the metrics push allowed adding an authorization header like "Bearer xxxxx".
This would allow pushing data to a managed instance on victoriametrics.com.
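A hedged workaround sketch in the meantime, using a manual push built on WritePrometheus (the URL and token are placeholders; bytes, net/http and metrics are assumed imported):

func pushWithAuth(pushURL, token string) error {
	var buf bytes.Buffer
	metrics.WritePrometheus(&buf, true)
	req, err := http.NewRequest("POST", pushURL, &buf)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}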

go_* metrics do not use VictoriaMetrics histogram

I'm exporting process metrics as described in the example.

metrics.WritePrometheus(w, true)

and the timing metrics use a Prometheus-type histogram with le instead of a VM-type histogram with vmrange.

go_sched_latencies_seconds_bucket{le="0"} 0

Why is that? Do you plan to support providing go_* histogram metrics the VM way?

Integrate library into statsd_exporter like service

I'm using the Prometheus statsd_exporter to send metrics to VictoriaMetrics. I would like to use the metrics library, primarily to support VictoriaMetrics histograms. Looking into statsd_exporter, it's not easy to add this feature there, as it's coupled tightly with the Prometheus Go client library.

In addition, my future plan is to move to a statsd-to-VM push-based pipeline; however, this means no histogram support (except for gostatsd by Atlassian, which has experimental support for Prometheus-style histograms). It would be great if VM-type histograms were added there, or to other notable statsd servers.

Support request method

The current request method is hardcoded (the default is GET); we hope to make it configurable through options.

metrics/push.go

Line 370 in da211e5

req, err := http.NewRequestWithContext(ctx, "GET", pc.pushURL.String(), reqBody)

Repeating "/proc/self/io: no such file or directory" error on kernel without CONFIG_TASK_IO_ACCOUNTING

Describe the bug

When running a kernel config which does not include CONFIG_TASK_IO_ACCOUNTING, the logs are filled with the repeating error

"ERROR: metrics: cannot open "/proc/self/io": open /proc/self/io: no such file or directory"

To Reproduce

Run VM on any kernel which does not have CONFIG_TASK_IO_ACCOUNTING in its config. You will find that there is no /proc/*/io path.

Version

victoria-metrics-20221117-195124-heads-public-single-node-0-g353396aa2

Logs

56b3e8fe5e17_victoria-metrics_victoriametrics_1 | 2023/01/08 01:42:18 ERROR: metrics: cannot open "/proc/self/io": open /proc/self/io: no such file or directory
56b3e8fe5e17_victoria-metrics_victoriametrics_1 | 2023/01/08 01:42:28 ERROR: metrics: cannot open "/proc/self/io": open /proc/self/io: no such file or directory
56b3e8fe5e17_victoria-metrics_victoriametrics_1 | 2023/01/08 01:42:38 ERROR: metrics: cannot open "/proc/self/io": open /proc/self/io: no such file or directory
[... the same error repeats every 10 seconds for as long as the process runs ...]

Used command-line flags

-promscrape.config=/srv/prometheus.yml -selfScrapeInterval=10s

Additional information

On a kernel without CONFIG_TASK_IO_ACCOUNTING, /proc/*/io will never be populated.

As such, if VM cannot find /proc/self/io, it should stop querying it and stop logging the error.

Specify timestamp when pushing metrics

Hello,

I haven't been able to get the client_golang package to push anything into my VictoriaMetrics instance, so I tried this package instead. It was a lot simpler, and I could push data into my VictoriaMetrics instance.

However, in my particular case I want to set the timestamp for each metric I push into VictoriaMetrics. The client_golang client supports this via NewMetricWithTimestamp, but I wasn't able to find any information about pushing a metric with a set timestamp in this package.

Is this something that this package supports?

Thank you
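For what it's worth, the Prometheus text exposition format accepts an optional millisecond timestamp after the value, so a hedged workaround sketch is to format such lines manually and push them yourself (the metric name, value and timestamp are placeholders; bytes, fmt and time are assumed imported):

ts := time.Now().Add(-time.Hour) // the timestamp to attach
var buf bytes.Buffer
fmt.Fprintf(&buf, "my_metric{job=%q} %g %d\n", "backfill", 42.0, ts.UnixMilli())
// POST the buffer to /api/v1/import/prometheus, as in the push examples above.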
