Comments (11)
It looks like a bug in Coroot; I'll try to reproduce it.
from coroot.
Your screenshot shows no connections between Prometheus and coroot-pg-agent (there should be an inbound connection to pg-agent). Are you sure that Prometheus has discovered the pg-agent's endpoint?
It seems that the Postgres metrics in your Prometheus are being gathered by pg_exporter, not coroot-pg-agent. Please check this out.
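One quick way to check which exporter a job's targets actually belong to is Prometheus's targets API (`GET /api/v1/targets`). Below is a small sketch that filters a targets response by job name; the response excerpt is a hypothetical example with made-up IPs and ports, not output from this setup:

```python
import json

# Hypothetical excerpt of a Prometheus /api/v1/targets response
# (the real response contains more fields per target).
targets_response = json.loads("""
{
  "status": "success",
  "data": {
    "activeTargets": [
      {"labels": {"job": "PG-agent_coroot", "instance": "10.100.10.111:9886"}, "health": "up"},
      {"labels": {"job": "Databases", "instance": "10.100.10.111:9187"}, "health": "up"}
    ]
  }
}
""")

def discovered(targets, job_substring):
    """Return (instance, health) pairs for targets whose job name matches."""
    return [
        (t["labels"]["instance"], t["health"])
        for t in targets["data"]["activeTargets"]
        if job_substring.lower() in t["labels"]["job"].lower()
    ]

# Is there a healthy pg-agent scrape target?
print(discovered(targets_response, "pg-agent"))
```

If the pg-agent endpoint is missing from the list, Prometheus never discovered it and Coroot has nothing to read.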
Thank you for the answer!
Yes, my Prometheus scrapes the metrics from pg-agent; I can see it in its interface and also see the graphs.
But the Coroot UI still doesn't show these metrics from Prometheus.
To understand the case better, I need to see the following metrics with their labels, including the original IP addresses (Coroot maps Postgres metrics to the corresponding containers by matching IP:PORT pairs):
container_net_tcp_listen_info{container_id=~".*pg-agent.*"}
container_net_tcp_active_connections{container_id=~".*pg-agent.*"}
pg_up
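The IP:PORT matching mentioned above can be illustrated roughly like this. This is a simplified sketch, not Coroot's actual implementation; the label sets are trimmed-down examples:

```python
# Simplified container_net_tcp_listen_info series: which container
# listens on which IP:PORT (labels trimmed for illustration).
listen_info = [
    {"container_id": "/docker/coroot-pg-agent", "listen_addr": "10.100.10.111:9886"},
    {"container_id": "/docker/coroot-pg-agent", "listen_addr": "127.0.0.1:9886"},
    {"container_id": "/docker/postgres", "listen_addr": "10.100.10.111:5432"},
]

def container_for(instance, listens):
    """Map an exporter's instance (IP:PORT) to the container listening there."""
    for series in listens:
        if series["listen_addr"] == instance:
            return series["container_id"]
    return None

# pg_up{instance="10.100.10.111:9886"} should resolve to the pg-agent container:
print(container_for("10.100.10.111:9886", listen_info))
```

If no listen socket matches the `pg_up` instance, the Postgres metrics cannot be attached to any container.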
container_net_tcp_listen_info{container_id="/docker/coroot-pg-agent", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="10.100.10.111:9886", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/coroot-pg-agent", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="127.0.0.1:9886", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/coroot-pg-agent", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.17.0.1:9886", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/coroot-pg-agent", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.17.0.2:80", machine_id="4150b6543c254de9bf577a"} 1
container_net_tcp_listen_info{container_id="/docker/coroot-pg-agent", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.18.0.1:9886", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/coroot-pg-agent", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.19.0.1:9886", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/coroot-pg-agent", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.20.0.1:9886", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/coroot-pg-agent", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.21.0.1:9886", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_active_connections{actual_destination="10.100.10.111:5432", container_id="/docker/coroot-pg-agent", destination="10.100.10.111:5432", instance="10.100.10.111:9887", job="Node-agent_coroot", machine_id="4150b6543c254de9bf577a"} 1
pg_up{instance="10.100.10.111:9187", job="Databases"} 1
pg_up{instance="10.100.10.111:9886", job="PG-agent_coroot"} 1
Here, port 9886 is used by pg-agent, port 9887 by node-agent, and port 9187 by postgres_exporter.
At this level, everything looks correct.
Can you please show:
container_net_tcp_listen_info{listen_addr=~".+:5432"}
What does the Postgres instance look like in the Coroot UI?
container_net_tcp_listen_info{container_id="/docker/postgres", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="10.100.10.111:5432", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/postgres", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="127.0.0.1:5432", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/postgres", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.17.0.1:5432", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/postgres", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.18.0.1:5432", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/postgres", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.19.0.1:5432", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/postgres", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.20.0.1:5432", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/docker/postgres", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.20.0.28:5432", machine_id="4150b6543c254de9bf577a"} 1
container_net_tcp_listen_info{container_id="/docker/postgres", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.21.0.1:5432", machine_id="4150b6543c254de9bf577a", proxy="dockerd"} 1
container_net_tcp_listen_info{container_id="/system.slice/docker.service", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="10.100.10.111:5432", machine_id="4150b6543c254de9bf577a"} 1
container_net_tcp_listen_info{container_id="/system.slice/docker.service", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="127.0.0.1:5432", machine_id="4150b6543c254de9bf577a"} 1
container_net_tcp_listen_info{container_id="/system.slice/docker.service", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.17.0.1:5432", machine_id="4150b6543c254de9bf577a"} 1
container_net_tcp_listen_info{container_id="/system.slice/docker.service", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.18.0.1:5432", machine_id="4150b6543c254de9bf577a"} 1
container_net_tcp_listen_info{container_id="/system.slice/docker.service", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.19.0.1:5432", machine_id="4150b6543c254de9bf577a"} 1
container_net_tcp_listen_info{container_id="/system.slice/docker.service", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.20.0.1:5432", machine_id="4150b6543c254de9bf577a"} 1
container_net_tcp_listen_info{container_id="/system.slice/docker.service", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="172.21.0.1:5432", machine_id="4150b6543c254de9bf577a"} 1
container_net_tcp_listen_info{container_id="/system.slice/docker.service", instance="10.100.10.111:9887", job="Node-agent_coroot", listen_addr="[::1]:5432", machine_id="4150b6543c254de9bf577a"} 1
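Note that the same 5432 listen sockets appear twice above: once under /docker/postgres and once under /system.slice/docker.service (the docker-proxy sockets), which makes the IP:PORT-to-container mapping ambiguous. One hypothetical tie-breaking rule, shown purely for illustration and not necessarily what Coroot does, is to prefer a concrete /docker/<name> container over the generic docker.service cgroup:

```python
# Two series claim the same listen_addr, as in the output above.
series = [
    {"container_id": "/docker/postgres", "listen_addr": "10.100.10.111:5432"},
    {"container_id": "/system.slice/docker.service", "listen_addr": "10.100.10.111:5432"},
]

def resolve(addr, all_series):
    """Hypothetical rule: prefer /docker/<name> over /system.slice/docker.service."""
    candidates = [s["container_id"] for s in all_series if s["listen_addr"] == addr]
    concrete = [c for c in candidates if c.startswith("/docker/")]
    if concrete:
        return concrete[0]
    return candidates[0] if candidates else None

print(resolve("10.100.10.111:5432", series))
```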
@doonydoo, can you please update Coroot to version 0.10.1 and verify if this issue has been fixed?
Thank you, Nikolay, now it works!
But I still have no information from pg-agent about the lock metrics: pg_lock_awaiting_queries, pg_wal_receiver_status, pg_wal_replay_paused, pg_wal_receive_lsn, pg_wal_replay_lsn (I use PostgreSQL 11).
Is it possible to get the graph like this in my UI?
The pg_lock_awaiting_queries metric is reported only if the agent has detected blocked queries, so it seems there have not been any such situations in your case yet.
The pg_wal_receiver_status, pg_wal_replay_paused, pg_wal_receive_lsn, pg_wal_replay_lsn metrics are gathered only if the Postgres server is a replica. Is that true in your case?
If the instance is the master, only pg_wal_current_lsn is gathered.
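The rules above boil down to a role check: receiver/replay metrics on a replica, pg_wal_current_lsn on the master. A rough sketch of that selection logic (not the agent's actual code; in Postgres, `SELECT pg_is_in_recovery()` is the standard way to detect a replica):

```python
# WAL metric names as described in the comment above.
REPLICA_ONLY = [
    "pg_wal_receiver_status",
    "pg_wal_replay_paused",
    "pg_wal_receive_lsn",
    "pg_wal_replay_lsn",
]
MASTER_ONLY = ["pg_wal_current_lsn"]

def wal_metrics(is_in_recovery: bool):
    """Which WAL metrics an agent would report, per the rules described above."""
    return REPLICA_ONLY if is_in_recovery else MASTER_ONLY

print(wal_metrics(False))  # master: ['pg_wal_current_lsn']
print(wal_metrics(True))
```

So on a standalone master, the absence of the receiver/replay metrics is expected, not a bug.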
@def Thank you for your help and for the explanation!