
promxy's Introduction


pronounced "promski" or präm-sē

High-level overview

Promxy is a prometheus proxy that makes many shards of prometheus appear as a single API endpoint to the user. This significantly simplifies operations and use of prometheus at scale (when you have more than one prometheus host). Promxy delivers this unified access endpoint without requiring any sidecars, custom-builds, or other changes to your prometheus infrastructure.

Why promxy?

Detailed version

Short version: Prometheus itself provides no real HA/clustering support. As such, the best practice is to run multiple (e.g. N) hosts with the same config. Similarly, prometheus has no real built-in query federation, which means you end up with N sources in grafana, which (1) is confusing to grafana users and (2) offers no support for aggregation across the sources. Promxy enables an HA prometheus setup by "merging" the data from the duplicate hosts (so if there is a gap in one, promxy will fill it with the other). In addition, promxy provides a single datasource for all promql queries -- meaning your grafana can have a single source and you can have globally aggregated promql queries.

Quickstart

Release binaries are available on the releases page.

If you are interested in hacking on promxy (or just running your own build), you can clone and build:

git clone git@github.com:jacksontj/promxy.git
cd promxy/cmd/promxy && go build

An example configuration file is available in the repo.

With that configuration modified and ready, all that is left is to run promxy:

./promxy --config=config.yaml
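The heart of the config is a list of server_groups. A minimal sketch, mirroring the example configs shown later on this page (hostnames are placeholders, assuming two identically-configured prometheus hosts):

global:
  evaluation_interval: 10s

promxy:
  server_groups:
    - static_configs:
        - targets:
          - prom-1:9090
          - prom-2:9090
      # labels added to all metrics retrieved from this server_group
      labels:
        sg: example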

FAQ

What is a "ServerGroup"?

A ServerGroup is a set of prometheus hosts configured the same. This is a common best practice for prometheus infrastructure, as prometheus itself doesn't support any HA/clustering. This lets promxy merge data from the multiple hosts in the ServerGroup (currently all hosts in the group are queried; reducing that is a known enhancement that hasn't been a priority). This allows promxy to "fill in" the holes in timeseries, such as the ones created when upgrading prometheus or rebooting the host.
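For illustration, a ServerGroup covering an HA pair might look like this sketch (hostnames are placeholders; per the comments in the example config further down this page, anti_affinity controls the merging of values in timeseries between hosts in the server_group):

promxy:
  server_groups:
    - static_configs:
        - targets:
          - prom-a:9090
          - prom-b:9090
      # anti-affinity window used when merging/filling timeseries across hosts
      anti_affinity: 10s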

What versions of prometheus does promxy support?

Promxy uses the /v1 API of prometheus under the hood, meaning that promxy simply requires that API to be present. Promxy has been used with versions as early as prom 1.7 and as recent as 2.13. If you run into issues with any prometheus version that has the /v1 API, please open up an issue.

What version of prometheus does promxy use? And what does that mean?

Promxy is currently using a fork based on prometheus 2.24. This version isn't supremely important, but it is relevant for promql features (e.g. subqueries) and service discovery (sd) config options.

What changes are required to my prometheus infra for promxy?

None. Promxy is simply an aggregating proxy that sends requests to prometheus -- meaning it requires no changes to your existing prometheus install.

Can I have promxy as a downstream of promxy?

Yes! Promxy simply aggregates other prometheus API endpoints together so you can definitely layer promxy. Similarly you can mix prometheus API endpoints, for example you could have prometheus, promxy, and VictoriaMetrics all as downstreams of a promxy host -- since they all have prometheus compatible APIs.
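A sketch of such a mixed setup (hostnames and the VictoriaMetrics port are placeholders; promxy's own default listen port is :8082 per the --bind-addr default shown later on this page):

promxy:
  server_groups:
    # another promxy layered underneath
    - static_configs:
        - targets:
          - promxy-downstream:8082
    # VictoriaMetrics exposing a prometheus-compatible API
    - static_configs:
        - targets:
          - victoriametrics:8428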

What is query performance like with promxy?

Promxy's goal is to match the performance of the slowest prometheus server it has to talk to. If you have a query that is significantly slower through promxy than against prometheus directly, please open up an issue so we can get that taken care of.

Note: if you are running prometheus <2.2 you may notice "slow" performance when running queries that access large amounts of data. This is due to inefficient json marshaling in prometheus. You can workaround this by configuring promxy to use the remote_read API.
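A sketch of enabling that workaround on a server_group (the remote_read flag also appears in the connection-leak issue config further down this page; the hostname is a placeholder):

promxy:
  server_groups:
    - static_configs:
        - targets:
          - old-prom:9090
      # fetch data via the remote_read API instead of the query API's json
      remote_read: true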

How does Promxy know what prometheus server to route to?

Promxy currently does a complete scatter-gather to all configured server groups. There are plans to reduce scatter-gather queries but in practice the current "scatter-gather always" implementation hasn't been a bottleneck.

How do I use alerting/recording rules in promxy?

Promxy is simply an aggregating proxy in front of your prometheus infrastructure. As such, you can use promxy to create alerting/recording rules which will execute across your entire prometheus infrastructure. For example, if you wanted to know that the global error rate was <10% this would be impossible on the individual prometheus hosts (without federation, or re-scraping) but trivial in promxy.
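For example, a globally-aggregated alerting rule along these lines could be loaded into promxy. A sketch in standard prometheus rule-file format; the metric name and the code label are assumptions for illustration:

groups:
  - name: global-errors
    rules:
      - alert: HighGlobalErrorRate
        # ratio of 5xx responses aggregated across ALL shards behind promxy
        expr: sum(rate(http_requests_total{code=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.1
        for: 10m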

Note: recording rules in regular prometheus write to their local tsdb. Promxy has no local tsdb, so if you wish to use recording rules (or see the metrics from alerting rules) a remote_write endpoint must be defined in the promxy config (which is where it will send those metrics).
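A sketch of such a remote_write block in the promxy config (the URL is a placeholder, and the stanza is assumed to follow prometheus' standard remote_write format):

# where promxy sends samples produced by recording/alerting rules
remote_write:
  - url: http://your-tsdb:1234/receive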

What happens when an entire ServerGroup is unavailable?

The default behavior when a servergroup is unavailable is to return an error. If all nodes in a servergroup are down, the resulting data can be inaccurate (missing data, etc.) -- so by default we'd rather return an error than an inaccurate value (alerting etc. might rely on it, and we don't want to hide a problem).

Now, with that said, if you'd like to make some or all servergroups "optional" (meaning errors will be ignored and we'll serve the response anyway), you can do this using the ignore_error option on the servergroup.
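A sketch (hostname is a placeholder):

promxy:
  server_groups:
    - static_configs:
        - targets:
          - flaky-prom:9090
      # errors from this group are ignored and the response served anyway
      ignore_error: true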

Questions/Bugs/etc.

Feedback is greatly appreciated. If you find a bug, have a feature request, or just have a general question feel free to open up an issue!

promxy's People

Contributors

alessandroniciforo, arramos84, berthartm, chaets, dependabot[bot], frebib, garyclee, grzesuav, hatemosphere, hayk96, hienvanhuynh, jacksontj, jerrybelmonte, jmcarp, kailunwang-houzz, kcboyle, mortenmj, msambol, obitech, quentinbisson, robertjsullivan, sundy-li, urgerestraint, vgalda, vsliouniaev, warshawd, wing924, yogeek, zeeshen


promxy's Issues

Add config option to optionally not hit all nodes in a server_group

Separately from the scatter-gather, promxy hits each node in a given server group and merges their results together. In the case where the first result has no "holes" (as defined by the anti-affinity config), it doesn't look at the second response. Right now the query is sent to both for consistent load/performance, but if there were more hosts in the server_group (>2 -- something like 10) then hitting all nodes would be excessive. It seems prudent to add some configs (see the sketch below):

  1. parallel server fetch count -- how many servers to send the initial request to
  2. max server fetch count -- the upper bound on how many servers we'll keep sending requests to

With these a user could control (1) how many servers to hit and (2) whether promxy should hold the request while trying to get the data from more servers.
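A hypothetical sketch of what those knobs might look like (the option names are invented for illustration, not actual promxy config keys):

promxy:
  server_groups:
    - static_configs:
        - targets:
          - prom-1:9090
          - prom-2:9090
          # ... up to ~10 hosts
      # hypothetical: how many servers get the initial request
      parallel_fetch_count: 2
      # hypothetical: upper bound on servers to try while filling holes
      max_fetch_count: 4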

Slow query response with multiple clients

The promxy proxy service seems to be considerably slower when multiple clients are running queries. When multiple end-users (3-4) opened grafana dashboards with promxy as the datasource, response times degraded drastically.

From looking at the log output to stdout, it seems to run the queries one at a time (queued). This may be why the delay increases.

1 client: 2-4 second dashboard load/refresh time
3 clients: 5-30 second dashboard load/refresh time

Panic upon accessing `/`

This is a new issue since disabling forwarding of other URLs.

2018/04/30 20:43:28 http: panic serving 10.42.16.53:61716: runtime error: invalid memory address or nil pointer dereference
goroutine 2259 [running]:
net/http.(*conn).serve.func1(0xc42022b360)
	/usr/local/go/src/net/http/server.go:1726 +0xd0
panic(0x19c81c0, 0x2af26b0)
	/usr/local/go/src/runtime/panic.go:505 +0x229
github.com/jacksontj/promxy/vendor/github.com/prometheus/prometheus/web.New.func3(0x1e3d360, 0xc420025180, 0xc4201dd900)
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/vendor/github.com/prometheus/prometheus/web/web.go:210 +0x48
github.com/jacksontj/promxy/vendor/github.com/prometheus/common/route.(*Router).handle.func1(0x1e3d360, 0xc420025180, 0xc4201dd800, 0x0, 0x0, 0x0)
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/vendor/github.com/prometheus/common/route/route.go:60 +0x222
github.com/jacksontj/promxy/vendor/github.com/julienschmidt/httprouter.(*Router).ServeHTTP(0xc4204ab890, 0x1e3d360, 0xc420025180, 0xc4201dd800)
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/vendor/github.com/julienschmidt/httprouter/router.go:299 +0x6c1
github.com/jacksontj/promxy/vendor/github.com/prometheus/common/route.(*Router).ServeHTTP(0xc420365160, 0x1e3d360, 0xc420025180, 0xc4201dd800)
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/vendor/github.com/prometheus/common/route/route.go:98 +0x4c
main.main.func2(0x1e3d360, 0xc420025180, 0xc4201dd800)
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/cmd/promxy/main.go:241 +0xc4
github.com/jacksontj/promxy/vendor/github.com/julienschmidt/httprouter.(*Router).ServeHTTP(0xc420bd51d0, 0x1e3d360, 0xc420025180, 0xc4201dd800)
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/vendor/github.com/julienschmidt/httprouter/router.go:359 +0x29d
github.com/jacksontj/promxy/logging.(*ApacheLoggingHandler).ServeHTTP(0xc42065f3c0, 0x1e46d20, 0xc4202062a0, 0xc4201dd800)
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/logging/logging.go:72 +0x1dc
net/http.serverHandler.ServeHTTP(0xc4202109c0, 0x1e46d20, 0xc4202062a0, 0xc4201dd800)
	/usr/local/go/src/net/http/server.go:2694 +0xbc
net/http.(*conn).serve(0xc42022b360, 0x1e489a0, 0xc420ac6180)
	/usr/local/go/src/net/http/server.go:1830 +0x651
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2795 +0x27b

Limit queries to server groups by using labels?

Hi,

First off - great project. Looking forward to making use of it.

I had a question regarding a (potential) improvement to the scatter-gather logic for queries. I know there is currently an issue (#2) open regarding reducing the querying to only those server groups that have a given set of metrics. However, I was curious whether you'd given any thought to reducing the queries by the labels provided in the promql statement prior to issuing a query to all server groups.

That is, if I had a promql statement that looked like this:
http_server_200{sg="localhost_9090"}

Only send that query to server groups with the matching label. Maybe this is naive and ignores the existing implementation, but I'm just curious if this is something you'd considered.

Thanks!

Promxy exposed metrics question

Hi there,

I'm trying to set up some monitoring based on the metrics exposed by promxy. Two of the exposed metrics are server_group_request_sum and server_group_request_count. My understanding is that the former represents the sum of the request times while the latter represents the total count of requests.

If this is correct, it's not clear what the unit (seconds, milliseconds, microseconds?) of the server_group_request_sum metric is. The results I have so far don't make sense to me if I assume the unit is any of seconds/milliseconds/microseconds.

Properly handle timeout responses from downstream

Right now all errors from downstream prometheus hosts are treated as regular errors and passed back up. The promql API will wrap certain errors (timeout, cancel, etc.) on its own, based on the status code (format: https://prometheus.io/docs/prometheus/latest/querying/api/#format-overview ; wrapping done here: https://github.com/prometheus/prometheus/blob/master/web/api/v1/api.go#L386).

The action here is to make the prometheus client in promxy check the status code, and wrap with the appropriate error type based on the downstream error.

http dial timeout can be too short

I have some prometheus instances that are about 200ms away. Since the dial timeout is hard-coded to 200ms, I intermittently get connection failures. I understand that the httpclient config options are from the prometheus library, so they're difficult to alter, but would it perhaps be possible to add a global-level config setting to override the dial timeout?
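For reference, the example config in the "Getting changing data" issue later on this page shows a dial_timeout option under http_client. A sketch raising it for distant hosts (hostname is a placeholder):

promxy:
  server_groups:
    - static_configs:
        - targets:
          - remote-prom:9090
      http_client:
        # raise the dial timeout above the 200ms default for far-away hosts
        dial_timeout: 1s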

Add support for recording rules?

As originally brought up in #74, recording rules are a bit confusing. To summarize the issue: promxy can execute the rule but has no place to store the result (prom doesn't expose a remote appender API).

It would be good to add support, but it is unclear what to do with them. As it stands today, promxy will error out if you ask it to run recording rules.

Some options I see:

  • configurable remote_write endpoint to store the points to
  • create our own scrape endpoint to publish metrics from (meaning something would have to scrape it)
  • ?

Update: in addition to recording rules, regular rule evaluations also append metrics using this same mechanism (not sure why they don't use a scrape endpoint, but they do).

Notification queue overflowing

level=warn ts=2018-04-30T19:32:51.762010083Z caller=notifier.go:362 component=notifier msg="Alert batch larger than queue capacity, dropping alerts" num_dropped=1

Whole group is down if one of the targets is down

I have a group with two static targets, and if one of them is down promxy gets a 502 from it (a response from the proxy). It worked fine, but since 0.0.34 it doesn't any more -- if one of the targets is down, promxy also returns an error.

Question: Combining promxy with promxy possible?

Hello,

Is it possible to use promxy just as a proxy for one prometheus instance?
I want to get the federation endpoint of a prometheus that I can't access directly from my main prometheus, but I can run a proxy to access it. Would you recommend using promxy, or an approach with something like nginx?

And would it also be possible to access promxy from another promxy?

Possible setups that I am thinking of, to make it more clear:

  1. main prometheus -> promxy -> promxy -> prometheus without direct access
  2. main prometheus -> promxy -> nginx -> prometheus without direct access

Question: can I route queries based on *existing* labels

When you add labels to the labels section, this does two things:

  1. Add a label to the metrics
  2. Restrict queries to the group that has that label

We've got multiple datacenters, and each one already has its own datacenter label, so I don't want to add another. For one thing, they would not apply to historical metrics. But for another, I don't want to have to update a bunch of dashboards with a new label in the queries.

Question: can I get Promxy to filter on an existing label without trying to add it? (We have tried, and it didn't work)

Getting changing data, with 2 nodes in Server Group

I look at the data directly on the 2 servers and they seem to basically match, but in Grafana through promxy the data jumps randomly, sometimes into invalid ranges (free memory goes negative).

(Screenshots: normal data, deviation 1, and deviation 2 -- Grafana graphs, images not included.)

Config:

promxy:
  server_groups:
    - static_configs:
        - targets:
          - prometheus-infra-1.XXXXX.net:9090
          - prometheus-infra-2.XXXXX.net:9090
      # labels to be added to metrics retrieved from this server_group
      labels:
        sg: _prometheus_infra_ops_mode_net9090
      # anti-affinity for merging values in timeseries between hosts in the server_group
      anti_affinity: 10s
      # options for promxy's HTTP client when talking to hosts in server_groups
      http_client:
        # dial_timeout controls how long promxy will wait for a connection to the downstream
        # the default is 200ms.
        dial_timeout: 1s
        tls_config:
          insecure_skip_verify: true

(Screenshots: direct prom data from server 1 and server 2 -- images not included.)

Any ideas on why it would change like this? Also, I don't see where anti_affinity is documented, so I don't know if it's worth changing that.

Add timeouts/healthchecking to downstream prometheus hosts in servergroups

The current workaround for the issue described here ran into problems with #70. From the link:

// as of now the service_discovery mechanisms in prometheus have no mechanism of
// removing unhealthy hosts (through relabeling or otherwise). So for now we simply
// set a dial timeout, assuming that if we can't TCP connect in 200ms it is probably
// dead. Our options for doing this better in the future are (1) configurable
// dial timeout (2) healthchecks (3) track "healthiness" of downstream based on our
// requests to it -- not through other healthchecks

This means the workaround doesn't work for those who use auth to talk to their prometheus hosts. As for our options: (1) isn't really viable without changes to the upstream common packages, as they don't expose the RT or an option to set a dial timeout; (2/3) are both viable without upstream changes, but require additional configuration etc.

For now it seems like creating upstream issues for option 1 is the easiest path, and if that doesn't go anywhere then we can pursue 2/3.

Error message occurring with --help option

We see this message when running ./promxy --help or ./promxy -h:

FATA[0000] Error parsing flags: Usage:
promxy [OPTIONS]

And the help text is repeated.
See below:

Usage:
  promxy [OPTIONS]

Application Options:
      --bind-addr= address for promxy to listen on (default: :8082)
      --config=    path to the config file
      --log-level= Log level (default: info)

Help Options:
  -h, --help       Show this help message

FATA[0000] Error parsing flags: Usage:
  promxy [OPTIONS]

Application Options:
      --bind-addr= address for promxy to listen on (default: :8082)
      --config=    path to the config file
      --log-level= Log level (default: info)

Help Options:
  -h, --help       Show this help message

promxy compiled with go 1.10 on SuSE linux

relabel_configs should be metric_relabel_configs

Hello!

https://github.com/jacksontj/promxy/blob/master/servergroup/config.go#L50-L55

// RelabelConfigs are similar in function and identical in configuration as prometheus'
// relabel config for scrape jobs. The difference here being that the source labels
// you can pull from are from the downstream servergroup target and the labels you are
// relabeling are that of the timeseries being returned. This allows you to mutate the
// labelsets returned by that target at runtime.
RelabelConfigs []*config.RelabelConfig `yaml:"relabel_configs,omitempty"`

Please correct me if I'm wrong. According to this comment, it seems that relabel_configs applies not to each discovered target but to each sample.

Prometheus uses relabel_configs to relabel discovered targets and metric_relabel_configs to relabel samples.

In addition, we need to be able to relabel discovered targets, or we can't use many service discovery methods other than static_sd and file_sd. (See the sketch below for the current relabel_configs behavior.)
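A minimal sketch of a server_group using relabel_configs as currently implemented (per the code comment above, source labels come from the downstream target while the relabeled labels are those of the returned timeseries; the field names are prometheus' standard relabel config fields, and the hostname is a placeholder):

promxy:
  server_groups:
    - static_configs:
        - targets:
          - prom-1:9090
      relabel_configs:
        # copy the downstream target's address onto each returned timeseries
        - source_labels: [__address__]
          target_label: instance
          action: replace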

question: log levels

I am not able to control the log level.

I've tried running promxy v0.0.18 with different --log-level values (debug, info, warn). For example, I always see requests that shouldn't appear with --log-level=warn:

[28/Jul/2018 07:51:59] "GET /-/healthy HTTP/1.1 200 23" 0.000015
[28/Jul/2018 07:52:00] "GET /metrics HTTP/1.1 200 5156" 0.083476
[28/Jul/2018 07:52:00] "GET /-/healthy HTTP/1.1 200 23" 0.000015

What are the supported --log-level values?
Thank you!

Configurations with LB using promxy

Hello ,

I am trying to use promxy. Currently I have 3 nodes of prometheus servers with the same configuration, to achieve replication of the data.

On top of that I have configured Nginx for LB. So how can I use promxy with this setup? What configuration (endpoints) do I need to modify in Grafana?

Confused about alerting and rules configurations

Hi,

I'm just getting started with Prometheus and I've just started looking at this project so please pardon any ignorance.

I'm mainly unsure how to configure recording rules and alerting in Promxy. Do these settings replace the equivalent settings in Prometheus itself? Are they in addition to them?

Also, what kind of system specs does Promxy need compared to my Prometheus servers? Does it need to run on equivalent hardware, or is it easy to scale out horizontally with multiple Promxy nodes behind a single load balancer?

Thanks!

Add more detailed debug logs for query routing

With the change to the upstream prom client we lost most of the debug log details. Before, it printed out the exact urls queried for a request. Given that everything is based around the API interface, I think it will make more sense to print the name/args of the method being called, along with some identifier of "what" this API layer is. This way it can be layered in for both the individual nodes as well as the aggregation/relabels/etc.

Cannot install promxy

[root@promxy-poc ~]# go get -u github.com/jacksontj/promxy/cmd/promxy
package github.com/prometheus/prometheus/storage/metric: cannot find package "github.com/prometheus/prometheus/storage/metric" in any of:
	/usr/local/go/src/github.com/prometheus/prometheus/storage/metric (from $GOROOT)
	/root/go/src/github.com/prometheus/prometheus/storage/metric (from $GOPATH)
package github.com/prometheus/prometheus/storage/local: cannot find package "github.com/prometheus/prometheus/storage/local" in any of:
	/usr/local/go/src/github.com/prometheus/prometheus/storage/local (from $GOROOT)
	/root/go/src/github.com/prometheus/prometheus/storage/local (from $GOPATH)

Investigate mechanisms to reduce scatter gather

Right now each query must be sent to each server_group to see if it has data that matches. We could (for example) maintain a bloom filter of metric names that exist on the remote server_groups -- then only send queries there when a name match exists (this is only helpful if the metric isn't in all of them -- since doing this with labels is probably impractical).

Add support for subqueries

Since v2.8, prometheus supports subqueries. It would be nice if promxy also supported them.

Add support for various timeranges associated with ServerGroups

The goal here is to support sending queries to separate stacks of metrics storage based on their "staleness". The prom API unfortunately doesn't expose the StartTime() method from the Storage -- so we'd need to add config options to set a time.Duration offset and duration for each servergroup. Then promxy can add this dimension to its filtering for where to send queries (see the sketch below).
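A hypothetical sketch of such per-servergroup options (the option names are invented for illustration, not actual promxy config keys):

promxy:
  server_groups:
    - static_configs:
        - targets:
          - longterm-storage:9090
      # hypothetical: only route queries whose time range falls within
      # [now - offset - duration, now - offset] to this group
      time_offset: 168h
      time_duration: 720h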

Add Docker image

It would be great to have a Dockerfile and a ready-to-use docker image.
Are you open to contributions?

Comparing to remote_read

Hello!
Currently, I use prometheus remote_read to achieve:

  • no more "holes" in metrics
  • single source in grafana
  • no need for aggregation layers of prometheus anymore!

I know this function is not designed for this purpose, but it works.

My config is like:

remote_read:
- url: http://prom-us1:9090/api/v1/read
- url: http://prom-us2:9090/api/v1/read
- url: http://prom-jp1:9090/api/v1/read
- url: http://prom-jp2:9090/api/v1/read

I found remote_read is faster than promxy for some queries.
For example, for a simple query like cpu_usage{hostname="myserver"}, remote_read is faster than promxy. But for an aggregated query like count(up{}), promxy is faster than remote_read.

I think it would be useful to add a document comparing and benchmarking remote_read and promxy.

Performance and Volume metrics

Hi Jackson,
Do you have any performance numbers on promxy, i.e. what is the max load that you have seen promxy support?
Thanks,
Prabha

Query result may be incomplete

It's not clear from the README, but I read the code and found that a server_group means an HA pair of prometheus servers, which run with the same config. And the current implementation reads from only one server in the server group.
If I'm wrong, please correct me and ignore the below.

Because prometheus doesn't support clustering or bulk import, it may lack some time series data due to a server being temporarily down or reloading config. The best practice is running 2 prometheus instances with the same config to provide HA. A single prometheus may lack some data in a query result, but we can get all the data if we combine the HA pair's results.

Because the current implementation reads from only one server in the server group, the result may be incomplete. I suggest making it read from at least 2 servers in the server group and combining the results.

recording rules don't work

Set a recording rule:

groups:
  - name: example
    rules:
      - record: query_qps
        expr: irate(http_requests_total{handler="query"}[5m])

Querying irate(http_requests_total{handler="query"}[5m]) finds data.

Querying query_qps returns no data.

Connection leak when using remote_read

Thank you very much for your useful tool!

With the "remote_read" option enabled, each request via Promxy leaves an unused ESTABLISHED connection to the Prometheus backend.
The number of connections keeps increasing, and when it reaches approximately 500 simultaneous connections the service freezes. Keepalive closes these connections after 5 minutes, but it doesn't help much.

How to check, config.yaml:

global:
  evaluation_interval: 10s

promxy:
  server_groups:
    - static_configs:
      - targets:
        - prometheus:9090
      remote_read: true
      anti_affinity: 10s
      http_client:
        dial_timeout: 200ms
        tls_config:
          insecure_skip_verify: true

run:
promxy --config=config.yaml --log-level=debug

watch lsof on every query:
lsof -r 2 -p `pidof promxy`

for example:
curl 'http://localhost:8082/api/v1/query_range?query=up%7B%7D&start=1545912434.588&end=1545912734.588&step=1&_=1545912714581'

Tested on v0.0.30 (with recent patch) and current master.

How to compile promxy from source?

Hi,
I tried to do "go get -u github.com/jacksontj/promxy/cmd/promxy" and that failed due to SSL/proxy issues. Then I cloned the repo and couldn't find a Makefile, so I tried various combinations of "go build", but there were several errors, pasted below:

../../../../prometheus/common/log/log.go:28:2: case-insensitive import collision: "github.com/Sirupsen/logrus" and "github.com/sirupsen/logrus"
../../servergroup/servergroup.go:17:2: cannot find package "github.com/prometheus/common/promlog"
../../../../sirupsen/logrus/terminal_check_notappengine.go:9:2: cannot find package "golang.org/x/crypto/ssh/terminal" in any of:
 /go/src/golang.org/x/crypto/ssh/terminal (from $GOROOT)
/prometheus/src/golang.org/x/crypto/ssh/terminal (from $GOPATH)
../../../../prometheus/common/route/route.go:7:2: cannot find package "golang.org/x/net/context" in any of:
/golang/go/src/golang.org/x/net/context (from $GOROOT)
/prometheus/src/golang.org/x/net/context (from $GOPATH)
../../../../sirupsen/logrus/terminal_linux.go:10:8: cannot find package "golang.org/x/sys/unix" in any of:
/golang/go/src/golang.org/x/sys/unix (from $GOROOT)
/prometheus/src/golang.org/x/sys/unix (from $GOPATH)

Could we have some instructions on how to compile from source?
Thanks!!

Basic auth is not working

A simple config with basic auth leads to a panic. There is no documentation regarding basic auth, but from the source it seems that it should work.

global:
  evaluation_interval: 5s

promxy:
  server_groups:
    - static_configs:
      - targets:
        - localhost/prometheus1
        - localhost/prometheus2
      anti_affinity: 10s
      http_client:
        basic_auth:
          username: 'user'
          password: 'password'
        tls_config:
          insecure_skip_verify: true
./promxy --config=promxy.yml
panic: interface conversion: http.RoundTripper is *config.basicAuthRoundTripper, not *http.Transport

goroutine 1 [running]:
github.com/jacksontj/promxy/servergroup.(*ServerGroup).ApplyConfig(0xc42029ad20, 0xc420362200, 0x1, 0xc4202048c8)
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/servergroup/servergroup.go:133 +0x40e
github.com/jacksontj/promxy/proxystorage.(*ProxyStorage).ApplyConfig(0xc420539b70, 0xc4204601c0, 0xc4204601c0, 0x0)
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/proxystorage/proxy.go:70 +0x146
main.reloadConfig(0xc420540180, 0x5, 0x8, 0x1d36266, 0x8)
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/cmd/promxy/main.go:84 +0xc4
main.main()
	/home/tjackson/workspace/golang/src/github.com/jacksontj/promxy/cmd/promxy/main.go:303 +0x13a7

Support other service discovery methods

Correct me if I'm wrong, but it looks like promxy doesn't support any service discovery other than static configs.
It would be better to support the other methods that prometheus provides.
