
zabbix-extensions's Introduction

Zabbix Extensions that extend Zabbix monitoring facilities.

Unfortunately, most of these scripts are not supported. If you discover an issue with any of them or want to propose an improvement, please create a pull request.

Stephan Knauss is actively maintaining the PostgreSQL and iostat sub-sections. Please file issues if you detect a problem.

-- Features:

  • written in Bash (requires minimal dependencies);
  • in some cases, utilities for working directly with the monitored applications may be needed (redis-cli, hpacucli, psql, etc.);
  • extensions are tested, but correct operation is not guaranteed when software versions do not match.

--

Disclaimer. The information contained in this repository is for general information purposes only. The information is provided by me and other contributors, and while we endeavour to keep the information up to date and correct, we make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability or availability with respect to the repository or the information, products, services, or related graphics contained in it, for any purpose. Any reliance you place on such information is therefore strictly at your own risk.

In no event will we be liable for any loss or damage including without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this repository.

Through this repository you are able to link to other websites or repos which are not under our control. We have no control over the nature, content and availability of those sites. The inclusion of any links does not necessarily imply a recommendation or an endorsement of the views expressed within them.

Every effort is made to keep the repo up and running smoothly. However, I take no responsibility for, and will not be liable for, the repository being temporarily unavailable due to technical issues beyond our control.

zabbix-extensions's People

Contributors

3oleg, 4orbit, aekondratiev, andrewmcgilvray, anlide, bakfietz, bmax77-1977, casperr0, cryptomaniac, dan-aksenov, dant4z, dockrize, eugenekkh, lenucksi, lesovsky, maxxer, nvrmindu, patsevanton, rodo, sedych, stephankn, vikingunet


zabbix-extensions's Issues

monitoring postgresql

I have a problem with a UserParameter:

Item preprocessing step #1 failed: cannot extract value from json by path "$.idle_in_transaction": cannot parse as a valid JSON object: invalid object format, expected opening character '{' or '[' at: 'ERROR: column "wait_event" does not exist
LINE 1: ...x_connections')::int) AS total_pct, sum(CASE WHEN wait_event...

postgresql-extended-template.xml unexpected tag "preprocessing".

Hi.
Ubuntu 16.04, zabbix 3.4.14 pgsql (upgraded from 3.2.9)
Importing postgresql-extended-template.xml fails with the error:
Invalid tag "/zabbix_export/templates/template(1)/items/item(1)": unexpected tag "preprocessing".
What could cause this problem and how to solve it correctly?

postgresql monitoring error

I get some errors during execution with pg_buffercache and pg_stat_statements.
Both extensions have been created properly.

template 3.4
zabbix 3.4.15
postgresql 9.5

EXECUTE_STR() command:'psql -qAtX -h 127.0.0.1 -p 5432 -U postgres -d rgu_pre -c "SELECT row_to_json(j) FROM (SELECT current_setting('block_size')::int*count(*) AS total, current_setting('block_size')::int*sum(CASE WHEN isdirty THEN 1 ELSE 0 END) AS dirty, current_setting('block_size')::int*sum(CASE WHEN isdirty THEN 0 ELSE 1 END) AS clear, current_setting('block_size')::int*sum(CASE WHEN reldatabase IS NOT NULL THEN 1 ELSE 0 END) AS used, current_setting('block_size')::int*sum(CASE WHEN usagecount>=3 THEN 1 ELSE 0 END) AS popular FROM pg_buffercache) AS j"' len:186 cmd_result:'ERROR: relation "pg'
48984:20190329:131129.261 Sending back [ERROR: relation "pg_buffercache" does not exist
LINE 1: ... usagecount>=3 THEN 1 ELSE 0 END) AS popular FROM pg_bufferc...

EXECUTE_STR() command:'psql -qAtX -h 127.0.0.1 -p 5432 -U postgres -d rgu_pre -c "select round((sum(total_time) / sum(calls))::numeric,2) from pg_stat_statements"' len:190 cmd_result:'ERROR: relation "pg'
48984:20190329:131154.517 Sending back [ERROR: relation "pg_stat_statements" does not exist
LINE 1: ...d((sum(total_time) / sum(calls))::numeric,2) from pg_stat_st...
^]

EXECUTE_STR() command:'psql -qAtX -h 127.0.0.1 -p 5432 -U postgres -d rgu_pre -c "select coalesce(extract(epoch from max(age(now(), query_start))), 0) from pg_stat_activity where wait_event is not null"' len:180 cmd_result:'ERROR: column "wait'
48984:20190329:131102.065 Sending back [ERROR: column "wait_event" does not exist
LINE 1: ...), query_start))), 0) from pg_stat_activity where wait_event...
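
Both errors indicate that the extensions are not installed in the monitored database. A minimal sketch of how to create them, assuming superuser access and that pg_stat_statements is already listed in shared_preload_libraries (changing that setting requires a server restart):

# contrib modules used by the buffer and statement-timing items; must exist in the database the UserParameter connects to (rgu_pre here)
psql -h 127.0.0.1 -p 5432 -U postgres -d rgu_pre -c "CREATE EXTENSION IF NOT EXISTS pg_buffercache;"
psql -h 127.0.0.1 -p 5432 -U postgres -d rgu_pre -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"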

A problem about your postgresql config file

Hi,
Just now I used your PostgreSQL template and found that pg.ping has no data. I found an error in your config files. In your file you wrote "UserParameter=pgsql.ping[*],/bin/echo -e "\timing \n select 1" | psql -qAtX $1 |grep Time |cut -d' ' -f2". If you run this UserParameter you will find that you can't get a time, because "psql -qAtX" and "psql" behave differently, so you should delete "-qAtX" here to make sure a run-time value is returned.

If you have any other problems, email me.
Email: [email protected]

thanks.

ZBX_NOTSUPPORTED: Unsupported item key.

in zabbix agent log I have:
... Requested [pgsql.bgwriter["127.0.0.1","5432","censored","postgres"]]
... Sending back [ZBX_NOTSUPPORTED: Unsupported item key.]

PostgreSQL 14.2 from the PGDG repo.
Is this PostgreSQL version supported/tested,
or am I missing something?
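
One way to narrow this down is to test the key locally on the agent host; a minimal sketch, assuming the postgresql .conf file from this repository is meant to be included by the agent configuration:

# check that the UserParameter file is actually picked up by the agent
grep -R "pgsql.bgwriter" /etc/zabbix/zabbix_agentd.d/
# ask the agent binary to evaluate the key itself, bypassing the server
zabbix_agentd -t 'pgsql.bgwriter["127.0.0.1","5432","censored","postgres"]'

If the second command also reports the key as unsupported, the UserParameter is not loaded (wrong Include path, or the agent was not restarted) rather than a PostgreSQL 14 incompatibility.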

postgresql: rework settings items

Some specific settings are individually fetched as items. Other settings, like installed extensions, are not fetched at all.

see UserParameter=pgsql.setting and the associated items.

Rework needs to be done to fetch settings in a more efficient way and also enable checking for diffs.

I think it should be reworked to fetch both settings and extensions in a single call, either as CSV or as a JSON object, maybe sorted alphabetically for robustness (see the sketch below).

That would then allow triggering on missing or wrong config options (or extensions), and also using the diff() function to monitor for general config changes, potentially happening during some update.
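
A minimal sketch of such a combined fetch (the key name pgsql.settings.all is hypothetical, not part of the shipped postgresql.conf), returning one JSON object with both settings and installed extensions, sorted by name:

# hypothetical UserParameter; one call returns settings and extensions as a single JSON object
UserParameter=pgsql.settings.all[*],psql -qAtX $1 -c "SELECT row_to_json(j) FROM (SELECT (SELECT json_object_agg(name, setting ORDER BY name) FROM pg_settings) AS settings, (SELECT json_agg(extname ORDER BY extname) FROM pg_extension) AS extensions) AS j"

Dependent items could then pick individual values with JSONPath, and a text item over the whole object could feed the diff()-style change detection.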

Last iostat data timestamp too old

I am using the iostat template in Zabbix.
I connected zabbix agents according to the instructions. The test request from the server to the agent is successful. However, I get a "Last iostat data timestamp too old" notification from the server side.
If I make a request:
zabbix_get -s 0.0.0.0 -k iostat.summary
I get: "timestamp": "2024-02-14T11:54:00+0100" (time is correct)

However, in the Zabbix web interface, in the "Latest data" section, I see the last value dated 2024-02-10 10:00:00.

Why is that? A 4-day difference.

monitoring skytools

Hello Alexey,

I couldn't find a ready-made skytools template with discovery scripts anywhere except here.
It hasn't been updated for more than 2 years, and, by and large, it has everything needed.
What do you think could still be added to it? Has it changed in your local copy at all?

Thanks for your reply.

PostgreSQL version 10+

Hi!
The template for monitoring PostgreSQL does not work on PostgreSQL 10, since the naming convention was changed.

xlog was renamed to wal, location was renamed to lsn, etc.
A monitoring role (pg_monitor) was added, so a superuser is no longer needed (the pg_ls_dir function used for the WAL check should be changed to pg_ls_waldir).

I fixed it in my environment and I would like to share it.

postgres@$ diff postgresql_lt_10.conf postgresql_gt_10.conf
69,70c69,70
< UserParameter=pgsql.streaming.lag.bytes[*],psql -qAtX $1 -c "select greatest(0,pg_xlog_location_diff(pg_current_xlog_location(), replay_location)) from pg_stat_replication where client_addr = '$2'"
< UserParameter=pgsql.streaming.lag.seconds[*],psql -qAtX -h $2 $1 -c "SELECT CASE WHEN pg_last_xlog_receive_location() = pg_last_xlog_replay_location() THEN 0 ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp()) END"
---
> UserParameter=pgsql.streaming.lag.bytes[*],psql -qAtX $1 -c "select greatest(0,pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) from pg_stat_replication where client_addr = '$2'"
> UserParameter=pgsql.streaming.lag.seconds[*],psql -qAtX -h $2 $1 -c "SELECT CASE WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() THEN 0 ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp()) END"



85,86c85,86
< UserParameter=pgsql.wal.write[*],psql -qAtX $1 -c "select pg_xlog_location_diff(pg_current_xlog_location(),'0/00000000')"
< UserParameter=pgsql.wal.count[*],psql -qAtX $1 -c "select count(*) from pg_ls_dir('pg_xlog')"
---
> UserParameter=pgsql.wal.write[*],psql -qAtX $1 -c "select pg_wal_lsn_diff(pg_current_wal_lsn(),'0/00000000')"
> UserParameter=pgsql.wal.count[*],psql -qAtX $1 -c "select count(*) from pg_ls_waldir()"
92d91
<
postgres@$

Received value [ERROR: column "waiting" does not existLINE 1: select count(*) from pg_stat_activity where waiting ^] is not suitable for value type [Numeric (unsigned)] and data type [Decimal]

Error in zabbix:
Received value [ERROR: column "waiting" does not existLINE 1: select count(*) from pg_stat_activity where waiting ^] is not suitable for value type [Numeric (unsigned)] and data type [Decimal]

As can be read on the pg_activity GitHub page, there has been a change to the pg_stat_activity waiting column. For PostgreSQL 9.6, postgresql.conf needs to be changed to

UserParameter=pgsql.connections.waiting[*],psql -qAtX $1 -c "select count(*) from pg_stat_activity where wait_event"
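
For clarity, a sketched variant that is explicit about what counts as a waiting session on PostgreSQL 9.6+ (an assumption on my part, not the shipped config; releases before 9.6 keep the boolean waiting column):

# assumed variant for PostgreSQL 9.6+
UserParameter=pgsql.connections.waiting[*],psql -qAtX $1 -c "select count(*) from pg_stat_activity where wait_event is not null"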

cgroup.memory[/,tasks] does not work

It outputs the list of all processes. I fixed it like this:

#!/bin/sh
# Author: A.V. Lesovsky
# Description: collecting data about CGROUPS

CG_NAME="$1"
CG_PARAM="$2"
STAT_PARAM="$3"
NUMA_NODE="$4"
STAT_FILE="memory.stat"
NUMA_STAT="memory.numa_stat"

if [[ "$CG_PARAM" =~ "${STAT_FILE}" ]]
then
if [ $# -lt 3 ]; then echo "ZBX_NOTSUPPORTED"; exit; fi
grep -w $STAT_PARAM /sys/fs/cgroup/memory/$CG_NAME/memory.stat |cut -d' ' -f2 2> /dev/null || echo "ZBX_NOTSUPPORTED"
elif [[ "$CG_PARAM" =~ "${NUMA_STAT}" ]]
then
if [ $# -lt 4 ]; then echo "ZBX_NOTSUPPORTED"; exit; fi
grep -w $STAT_PARAM /sys/fs/cgroup/memory/$CG_NAME/memory.numa_stat |grep -oE N"$NUMA_NODE"=[0-9]+ |cut -d= -f2 2> /dev/null || echo "ZBX_NOTSUP$
else
if [ $# -lt 2 ]; then echo "ZBX_NOTSUPPORTED"; exit; fi
if [[ "$CG_PARAM" = "memory.oom_control" ]]
then
grep under_oom /sys/fs/cgroup/memory/$CG_NAME/memory.oom_control |cut -d" " -f2 2> /dev/null || echo "ZBX_NOTSUPPORTED"
else
if [[ "$CG_PARAM" = "tasks" ]]
then
cat /sys/fs/cgroup/memory/$CG_NAME/$CG_PARAM | wc -l 2> /dev/null || echo "ZBX_NOTSUPPORTED"
else
cat /sys/fs/cgroup/memory/$CG_NAME/$CG_PARAM 2> /dev/null || echo "ZBX_NOTSUPPORTED"
fi
fi
fi

it seems like an iostat bug

iostat -d 1 1 

If the second iostat parameter (the report count) is 1, the result is always the same.
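
This is documented iostat behaviour rather than a bug in the scripts: the first report always shows averages since boot, so at least two reports are needed to see current activity. A minimal sketch that keeps only the second (current) report:

# take two one-second reports and print only the second one
iostat -d 1 2 | awk '/^Device/{n++} n==2'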

databases discovery strange behavior

Hi, I see strange behavior with the databases discovery rule.
This is my {$PG_CONNINFO}: -h 192.168.xxx.xxx -p 5432 -U zabbix

The rule discovers only the zabbix database, but from the command line, with the following command, I get all databases:

zabbix_get -s 192.168.xxx.xxx -k pgsql.db.discovery[-h 192.168.xxx.xxx -p 5432 -U zabbix]

So why do I get this difference between the command line and the dashboard?
Thanks a lot

Error agent info

PostgreSQL 13
Zabbix 5.4

database zabbixdb
user zbx_monitor

pointed out in pg_hba.conf
host zabbixdb zbx_monitor 127.0.0.1/32 trust

Agent info
Preprocessing failed for: Password zbx_monitor: .psql: error: fe_sendauth: no password supplied

  1. Failed: cannot extract value from json by path "$.deadlocks": cannot parse as a valid JSON object: invalid object format, expected opening character '{' or '[' at: 'password zbx_monitor:
    psql: error: fe_sendauth: no password supplied'

When entering the command manually, it asks for a password:
sudo -u zabbix psql -h 127.0.0.1 -p 5432 -U zbx_monitor -d zabbixdb -l
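
The fe_sendauth error means libpq found no password for that connection (so the trust line in pg_hba.conf is apparently not the one being matched). One sketch of a fix is a .pgpass file for the account the agent runs under; the /var/lib/zabbix home directory and the "secret" password below are assumptions, adjust them to your setup:

# run as the zabbix user; format is host:port:database:user:password
echo "127.0.0.1:5432:zabbixdb:zbx_monitor:secret" > /var/lib/zabbix/.pgpass
chmod 600 /var/lib/zabbix/.pgpass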

Iostat parse issue in iostat.conf?

Hello,

I've set up the iostat script/conf/template to monitor I/O utilization with Zabbix, but it looks like something is wrong with the parse command defined in iostat.conf. The utilization field shows the wrong number:

# cat /tmp/iostat-cron.out | grep -i sdd
sdd               0.37     5.81 5843.25 3000.61 155660.68 60781.83    48.95    25.60    2.89    1.50    5.61   0.09  83.71

# grep -w sdd /tmp/iostat-cron.out | awk 'BEGIN {n=split("rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm util", arr);}{print "{"}{for(i=1;i<=n;++i){printf("\t\"%s\":\"%.2f\"", arr[i], $i); if(i<=n){printf(",\n");}}}{print "\n}"}'
{
        "rrqm/s":"0.00",
        "wrqm/s":"0.37",
        "r/s":"5.81",
        "w/s":"5843.25",
        "rkB/s":"3000.61",
        "wkB/s":"155660.68",
        "avgrq-sz":"60781.83",
        "avgqu-sz":"48.95",
        "await":"25.60",
        "r_await":"2.89",
        "w_await":"1.50",
        "svctm":"5.61",
        "util":"0.09",

}

Can you please have a look? Maybe it's my mistake?
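
Judging from the output above, the values are shifted by one column: the awk loop starts at field 1, which on a grep-ed iostat line is the device name itself, so every statistic lands under the previous header (and a trailing comma is emitted after the last field). A sketch of a fix under that assumption:

# start at field 2 so the device name is not treated as the first statistic, and avoid the trailing comma
grep -w sdd /tmp/iostat-cron.out | awk 'BEGIN {n=split("rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm util", arr);} {printf("{"); for(i=1;i<=n;++i){printf("\n\t\"%s\":\"%.2f\"", arr[i], $(i+1)); if(i<n){printf(",");}} printf("\n}\n")}'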

Import postgresql-extended-template.xml failed

When I try to import the template, I get this error:
Cannot implode expression "{pgCayenne-Template:pgsql.db.size[{$PG_CONNINFO},{#DBNAME}].last()}>{$PG_DATABASE_SIZE_THRESHOLD}". Incorrect trigger function "last" provided in expression. Parameter sec or #num or user macro expected, "" given.

Trouble with parsing iostat v12

In iostat version >= 12 the ":" sign was removed from the "Device" header, therefore parsing does not work and the script cannot get the list of devices. I propose a solution:
[iostat.conf]
iostat -d | awk 'BEGIN {check = 0; count = 0; array[0] = 0;} {if (check == 1 && $1 != "") {array[count] = $1; count = count + 1;} if ($1 == "Device:" || $1 == "Device") {check = 1;}} END {printf("{\n\t\"data\":[\n"); for (i = 0; i < count; ++i) {printf("\t\t{\n\t\t\t\"{#HARDDISK}\":\"%s\"}", array[i]); if (i + 1 < count) {printf(",\n");}} printf("]}\n")}'

[iostat-collect.sh]
DISK=$($IOSTAT -x 1 "$SECONDS" | awk 'BEGIN {check = 0;} {if (check == 1 && $1 == "avg-cpu:") {check = 0} if (check == 1 && $1 != "") {print $0} if ($1 == "Device:" || $1 == "Device") {check = 1}}' | tr '\n' '|')

outdated files/pgbouncer/scripts/pgbouncer.stat.sh

Either I'm very wrong or this script is out of date. Fields like avg_req no longer exist and the field order is wrong. For example "maxwait": the script cuts field 10, but it is field 14.
The PgBouncer version in question is 1.18.0.
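
One way to make the parsing robust against columns being added or reordered between PgBouncer releases is to look the column up by name in the header row instead of hard-coding a cut position. A sketch (the connection variables are placeholders, and the maxwait column comes from SHOW POOLS):

# pick the "maxwait" column by header name; $hostname/$port/$username are placeholders
psql -h "$hostname" -p "$port" -U "$username" -d pgbouncer -A -F'|' -c "SHOW POOLS" \
  | awk -F'|' 'NR==1 {for (i=1; i<=NF; i++) if ($i=="maxwait") col=i; next} NF>1 {print $col}'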

Error restarting zabbix agent after adding Postgresql

I followed the instructions for adding Postgresql from the readme.md file here: https://github.com/lesovsky/zabbix-extensions/blob/master/files/postgresql/README.md

Then when I tried to restart the Zabbix agent (I'm using Zabbix 5.2 and PostgreSQL 13), I got this error:

zabbix_agentd [3555]: ERROR: cannot add user parameter "pgsql.bgwriter[*],psql -qAtX $1 -c "SELECT row_to_json(j) FROM (SELECT checkpoints_timed, checkpoints_req, checkpoint_write_time, ch>
Nov 01 09:09:23 michaelv2 systemd[1]: zabbix-agent.service: Control process exited, code=exited status=1
Nov 01 09:09:23 michaelv2 systemd[1]: zabbix-agent.service: Failed with result 'exit-code'.
Nov 01 09:09:23 michaelv2 systemd[1]: Failed to start Zabbix Agent.
-- Subject: Unit zabbix-agent.service has failed

I have trust turned on for everything in pg_hba.conf.

I tested using zabbix_get and here is the output:

zabbix_get -s 127.0.0.1 -k pgsql.ping['-h 127.0.0.1 -p 5432 -U postgres -d postgres']
-->
-h 127.0.0.1 -p 5432 -U postgres -d postgres: - no response

Which version of zabbix?

Hi, I was trying to get the glusterfs client working with Zabbix 2.2.1. I get this error in the zabbix server log:

31227:20140215:184042.724 item [host1.example.com:glusterfs.discovery] became not supported: Not supported by Zabbix Agent

I am able to run /usr/libexec/zabbix-extensions/scripts/glusterfs.discovery.sh on host1.example.com and it returns:
{
"data":[

{
    "{#MOUNT}":"/mnt/storage"
},

]

}

I have also placed glusterfs.conf in /etc/zabbix/zabbix_agentd.d/

UserParameter=glusterfs.discovery,/usr/libexec/zabbix-extensions/scripts/glusterfs.discovery.sh
UserParameter=glusterfs.check.endpoint[*],ls $1 &> /dev/null && echo 0 || echo 1

Not an issue, but a request for help: extract data from a JSON array

Hi M,

Can you help me, please?
I have:
1. Created a user parameter: UserParameter=pgsql.transactions1.active[*],psql -qAtX $1 -c "SELECT json_agg(row_to_json(j)) FROM ( select coalesce(extract(epoch from max(age(now(), query_start))), 0),query from pg_stat_activity where state <> 'idle in transaction' and state <> 'idle' and query not ilike '%coalesc%' group by query) AS j"

2. The result is JSON in array format, like this: [{"coalesce":-0.003137,"query":"select 1"},{"coalesce":-0.003137,"query":"select 1"}]
3. A master item with the key pgsql.transactions1.active[{$PG_CONNINFO}]; it is gathering data (works fine).
4. Created a dependent item to extract data from the master item.

I want to know how a dependent item can extract the coalesce and query values from the master item using the preprocessing tab --> JSONPath.

Thank you in advance.
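
A sketch of the preprocessing side (my assumption based on the array shown above, not something shipped with the template): give each dependent item one preprocessing step of type JSONPath, for example

$[0].coalesce    (duration value of the first array element)
$[0].query       (its query text)
$..query         (all query values, returned as a JSON array)

Note that the extended JSONPath syntax (including $.. and multi-match results) needs a reasonably recent Zabbix; on old 3.x releases only simple dotted paths work.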

pgbouncer discovery not working

Hello.
The script pgbouncer.pool.discovery.sh gives the following error:

ERROR:  invalid command 'SELECT d.datname as "Name",
       pg_catalog.pg_get_userbyid(d.datdba) as "Owner",
       pg_catalog.pg_encoding_to_char(d.encoding) as "Encoding",
       pg_catalog.array_to_string(d.datacl, '\n') AS "Access privileges"
FROM pg_catalog.pg_database d
ORDER BY 1;', use SHOW HELP;

I think the problem is using psql with the -l option on line 14, which tries to execute a SELECT statement against the bouncer admin console.

poollist=$(psql -h $hostname -p $port -U $username -tAF: --dbname=$dbname -c "show pools" |cut -d: -f1,2 |grep -v ^pgbouncer)

After removing the -l option the script works fine for me.
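
If a database list is still needed for discovery, the bouncer's admin console has its own command for that; a sketch using the same connection variables as the script:

# list databases through pgbouncer's admin console instead of psql -l
psql -h $hostname -p $port -U $username -tA -d pgbouncer -c "SHOW DATABASES" | cut -d'|' -f1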

Template DB PostgreSQL: PostgreSQL: number of running processes postgres

If you use a remote Zabbix server to get information about PostgreSQL processes, it can't retrieve the data, because it doesn't see the postgresql processes, only those of the zabbix user (since it connects through the zabbix_agent on the remote server).

[root@zabbix ~]# zabbix_get -s REMOTE_SERVER_IP -k proc.num[postgres]
0
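
A workaround that does not depend on the agent being able to see other users' processes is to ask PostgreSQL itself how many backends are running; a sketch of a hypothetical UserParameter, not part of the shipped config:

# hypothetical key: counts server backends via pg_stat_activity instead of proc.num[postgres]
UserParameter=pgsql.backends.count[*],psql -qAtX $1 -c "select count(*) from pg_stat_activity"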

keepalived.addr.discovery.sh lists broadcast ip when defined in keepalived.conf

When keepalived.conf contains virtual_ipaddresses in the form of
<IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>
as described in the keepalived.conf man page, your discovery script will return both the virtual IP addresses and the broadcast addresses.

Changing

ADDRESSES=$(sed -n -e '/virtual_ipaddress {/,/}/p' $KEEPALIVED_CONF |grep -v ^# |grep -oE '([0-9]{1,3}[\.]){3}[0-9]{1,3}*')

into

ADDRESSES=$(sed -n -e '/virtual_ipaddress {/,/}/p' $KEEPALIVED_CONF |grep -v ^# |grep -oE '^ *(([0-9]{1,3}[\.]){3}[0-9]{1,3}*)' | tr -d '[:space:]')

could solve this. Not sure if that is the best method.

Time in active transaction too long on trigger

Hi,

First, I want to thank you for this plugin, it's very helpful.

Can you please help me understand the time displayed by this trigger:
PostgreSQL active transaction too long on {HOSTNAME} (time={ITEM.LASTVALUE})

The query used to collect the data is:
select coalesce(extract(epoch from max(age(now(), query_start))), 0) from pg_stat_activity
where state <> 'idle in transaction' and state <> 'idle' and query NOT LIKE 'autovacuum: %'

For example, the coalesce returns 0.0845 s, yet the trigger sends alerts like:
PostgreSQL active transaction too long on BDD (time=5m 16s 84ms)

Can you please help me find out where Zabbix gets this time=5m 16s 84ms from?

Thank you

postgresql template pgsql.streaming.lag.seconds

Since PostgreSQL v10 we don't need to connect to the standby server to calculate pgsql.streaming.lag.seconds.

in postgresql.conf
UserParameter=pgsql.streaming.lag.seconds[*],psql -qAtX $1 -c "select coalesce(extract(epoch from replay_lag), 0) from pg_stat_replication where client_addr = '$2'"

in zabbix template
<key>pgsql.streaming.lag.seconds[{$PG_CONNINFO},{#HOTSTANDBY}]</key>

What's the benefit: if you use any HA software for PostgreSQL (Patroni), you can't say in advance which server will be the standby, so you need a template where there is no difference between monitoring a master and a slave.

The next problem is pgsql.wal.write on a slave PostgreSQL (ERROR: recovery is in progress for function pg_current_wal_lsn); I rewrote it as follows:
UserParameter=pgsql.wal.write[*],if [ "$(psql -qAtX $1 -c 'select pg_is_in_recovery()')" = "f" ]; then psql -qAtX $1 -c "select pg_wal_lsn_diff(pg_current_wal_lsn(),'0/00000000')"; else echo "0"; fi

redis template seems incomplete

hi,

I'm testing the redis template and it seems incomplete.

In the userparameters I see "redis.discovery", apparently to autodiscover some items with LLD, but it does not appear in the template.

On the other hand, items use the key "redis", but that key does not exist in redis.conf; it seems the correct key is "redis.stat".

Is the exported template obsolete?

Kind regards

pgsql.ping

I found one issue with the pgsql.ping command.

echo -e "\\\timing \n select 1" | psql -qAtX $1 |grep Time |cut -d' ' -f2

Ubuntu, Zabbix 2.4.3: the command is executed by Zabbix in sh, not bash, so it has a problem with the "-e" parameter; updated command:

echo "\\\timing \n select 1" | psql -qAtX $1 |grep Time |cut -d' ' -f2

Please test/update. Thx.

hwraid-smartarray

In the event of a failed drive, a temperature is not reported, so you get a "'Key value' required" error.

00:13:22 # zabbix_sender -z 10.17.5.44 -i /tmp/zabbix-sender-hp-raid-data.in -vv
zabbix_sender [26874]: ERROR: [line 21] 'Key value' required
Sending failed.

server1 hpraid.pd.status[0:2I:2:8] OK
server1 hpraid.pd.temperature[0:2I:2:8] 27
server1 hpraid.pd.status[0:1I:2:1] Failed
server1 hpraid.pd.temperature[0:1I:2:1]
server1 hpraid.pd.status[0:1I:2:2] OK
server1 hpraid.pd.temperature[0:1I:2:2] 25
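
A possible workaround (a sketch, not part of the shipped script) is to drop lines without a value before handing the file to zabbix_sender, so a failed drive's missing temperature does not abort the whole batch:

# keep only lines that actually contain host, key and value (3+ fields)
awk 'NF >= 3' /tmp/zabbix-sender-hp-raid-data.in | zabbix_sender -z 10.17.5.44 -i - -vv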

locale ru_RU

When using the ru_RU locale, data is collected from iostat with "," as the decimal separator, but the Zabbix server cannot recognize this as numbers. The Zabbix server is also set to the ru_RU locale, but it expects numbers with "." as the separator.
I'm using Zabbix 2.2.

Workaround: adding LANG=en_EN.UTF-8 to iostat-collect.sh.
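
A locale-independent variant would force the C locale just for the iostat invocation instead of the whole script; a sketch (the output redirection and variable names are placeholders, not necessarily those used in iostat-collect.sh):

# force "." as the decimal separator regardless of the system locale
LC_ALL=C iostat -x 1 "$SECONDS" >> "$IOSTAT_DATA_FILE"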

UserParameter for streaming replication byte lag can produce negative values

The user parameter to calculate byte lag can produce negative values, causing the item to fall into "Not Supported" status in Zabbix.

These negative values are due to sent_location being lower than replay_location, making it appear that the slave is "ahead" of the master. http://pgsql.privatepaste.com/db98e7efb8 . I ran into this on my environment and determined that this was caused by sent_location being lower than the master's pg_current_xlog_location, concluding that the negative values are not a real indicator of streaming replication byte lag.

The fix is simply to have the byte-lag UserParameter return 0 in the event of negative values:

UserParameter=pgsql.streaming.lag.bytes[*],psql -qAtX $1 -c "select GREATEST(0,pg_xlog_location_diff(sent_location, replay_location)) AS pg_xlog_location_diff from pg_stat_replication where client_addr = '$2'"

P.S. Thank you for sharing this template, it's by far the best solution for monitoring PostgreSQL with Zabbix IMO

iostat data does not get parsed into the Zabbix server


root@zabbix:~# zabbix_get -s server_ip -k iostat.discovery
{
"data":[
{
"{#HARDDISK}":"sda"},
{
"{#HARDDISK}":"sdb"},
{
"{#HARDDISK}":"md0"},
{
"{#HARDDISK}":"md1"},
{
"{#HARDDISK}":"dm-0"},
{
"{#HARDDISK}":"dm-1"},
{
"{#HARDDISK}":"dm-2"}]}

Naming of template and items

I installed your template and am glad it was easy to install.

What I don't understand is the naming you use. Everywhere I see "Zabbix", I would use "Postgres" in the XML template. In fact, I did so, and that makes things a lot clearer. I didn't change the XML tags, just the word "Zabbix" (with a capital Z). See the txt attachment (XML is not accepted for uploads).

Postgres-Server-Template.txt

'formulaid' tag is missing in keepalived template

When importing the keepalived template into Zabbix, it fails.
It says:
Invalid tag "/zabbix_export/templates/template(1)/discovery_rules/discovery_rule(1)/filter/conditions/condition(1)": the tag "formulaid" is missing.
