zabbix-proxmox's Introduction

Monitor a Proxmox cluster with Zabbix

Get cluster and node details from the Proxmox API and report them to Zabbix using zabbix_sender.

Features

  • Low Level Discovery of cluster nodes (the discovery JSON format is shown after this list)
  • Collects cluster quorum and node status; overall cluster and per-node RAM/CPU usage and KSM sharing; vRAM allocation and usage; vCPU and vHDD allocations; and the number of running and stopped VMs and LXC containers.
  • Low Level Discovery of storage and storage utilization
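
Node discovery uses the standard Zabbix low level discovery JSON format. For a three-node cluster, the discovery payload looks like this (node names are illustrative):

{"data": [{"{#NODE}": "pve01"}, {"{#NODE}": "pve02"}, {"{#NODE}": "pve03"}]}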

Installation

The script can run on any host with Python, a functional zabbix_sender and access to the Proxmox API. A Zabbix server or Zabbix proxy would be logical candidates.

  • Install Python proxmoxer: pip install proxmoxer
  • Install Python requests: pip install requests
  • Copy script scripts/proxmox_cluster.py and make it executable. The script is executed from cron or systemd timers and can be placed anywhere logical.
  • Import the valuemap templates/snmp_boolean_type_valuemap.xml into your Zabbix server. This valuemap is used to display quorum and nodes online status.
  • Import the template templates/proxmox_cluster_template.xml into your Zabbix server.
  • Create a Proxmox host in Zabbix. This is not an actual server but represents the whole cluster.
  • Attach the template Template Proxmox cluster to the host.
  • Create a zabbix user in Proxmox: pveum useradd zabbix@pve -comment "Zabbix monitoring user"
  • Set a password for the zabbix user in Proxmox: pveum passwd zabbix@pve
  • Grant read-only permissions to the zabbix user. The built-in PVEAuditor role is a good choice: pveum aclmod / -user zabbix@pve -role PVEAuditor (a connectivity check sketch follows this list)
  • Set up scheduled tasks executing the script. The following two examples use cron: crontab -e -u zabbix
    • Send discovery data: 0 */4 * * * /usr/lib/zabbix/bin/proxmox_cluster.py -a pmx01.your.tld -u zabbix@pve -p password -s -t proxmox.tokyo.prod -d
    • Send item data: */10 * * * * /usr/lib/zabbix/bin/proxmox_cluster.py -a pmx01.your.tld -u zabbix@pve -p password -s -t proxmox.tokyo.prod
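
Once the zabbix user and ACL are in place, API access can be verified independently of Zabbix with a short proxmoxer script. The following is a minimal sketch, reusing the placeholder host and credentials from the examples above; verify_ssl=False assumes the default self-signed Proxmox certificate:

#!/usr/bin/env python3
# Minimal Proxmox API connectivity check (sketch; adjust host and credentials).
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pmx01.your.tld", user="zabbix@pve",
                     password="password", verify_ssl=False)

# List the cluster nodes the same way a discovery run would see them.
for node in proxmox.nodes.get():
    print(node["node"], node["status"])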

Configuration

The script accepts the following parameters (an example invocation follows the list):

  • -a : Proxmox API hostname or IP address (can include port if the API listens on a non default port, e.g. pmx01.your.tld:8443)
  • -c : Zabbix agent configuration file that is passed as a parameter to zabbix_sender (defaults to: /etc/zabbix/zabbix_agentd.conf)
  • -d : Send discovery data instead of item data
  • -e : Get extended VM configuration details in order to collect vHDD allocations (see notes)
  • -i : Ignore zabbix_sender non-zero exit codes (see Discovery errors)
  • -o : Output the zabbix_sender response summary
  • -p : Proxmox API password
  • -s : Enable storage discovery and monitoring
  • -t : Zabbix target host name (the host in Zabbix with the Template Proxmox cluster template attached)
  • -u : Proxmox API username (defaults to: zabbix@pve)
  • -v : Verbose; prints data and zabbix_sender results to stdout
  • -z : Full path to zabbix_sender (defaults to /usr/bin/zabbix_sender)
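
For example, a one-off run that also collects extended VM details and prints the zabbix_sender summary could look as follows (host and target names are placeholders, as in the cron examples above):

/usr/lib/zabbix/bin/proxmox_cluster.py -a pmx01.your.tld -u zabbix@pve -p password -e -o -t proxmox.tokyo.prod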

Notes

Collecting all vHDD information requires parsing the full VM configuration, which costs one additional API call per VM. The subsequent processing relies heavily on regular expressions. Because this is an expensive process, it is optional and can be enabled by specifying -e on the command line.
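
As an illustration only (not the script's actual code), extracting the size attribute from a VM disk definition could look like this in Python; the sample disk line follows the usual Proxmox configuration format:

import re

# Hypothetical sketch: convert the size attribute of a disk line such as
# "scsi0: local-lvm:vm-101-disk-0,size=32G" into bytes.
SIZE_RE = re.compile(r"size=(\d+)([KMGT]?)")
UNITS = {"": 1, "K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def disk_bytes(disk_line):
    match = SIZE_RE.search(disk_line)
    return int(match.group(1)) * UNITS[match.group(2)] if match else 0

print(disk_bytes("scsi0: local-lvm:vm-101-disk-0,size=32G"))  # 34359738368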

Resources allocated to templates are not included in the total vCPU, vHDD and vRAM numbers reported to Zabbix.

If there is no load balancer in front of the API, it makes sense to set up multiple scheduled tasks using different Proxmox servers. This distributes the load and ensures Zabbix keeps receiving updates during maintenance or downtime of a single host. An example using cron would look as follows:

# Item updates every 10 minutes
0,20,40 * * * * /usr/lib/zabbix/bin/proxmox_cluster.py -a pmx01.your.tld -u zabbix@pve -p password -t proxmox.tokyo.prod
10,30,50 * * * * /usr/lib/zabbix/bin/proxmox_cluster.py -a pmx02.your.tld -u zabbix@pve -p password -t proxmox.tokyo.prod
# LLD updates every 4 hours
23 0,8,16 * * * /usr/lib/zabbix/bin/proxmox_cluster.py -a pmx01.your.tld -u zabbix@pve -p password -t proxmox.tokyo.prod -d
38 4,12,20 * * * /usr/lib/zabbix/bin/proxmox_cluster.py -a pmx02.your.tld -u zabbix@pve -p password -t proxmox.tokyo.prod -d 

One of the Zabbix item keys in the script and template is prefixed promox. That is obviously a typo, but changing it would break compatibility with existing installations, and changing the key in Zabbix would mean losing historical data, which is also undesirable. This is a purely cosmetic issue; if desired, you can of course change the prefix for those items. In that case, also make sure that the keys in the template are updated accordingly.

If you define the zabbix monitoring user in Linux instead of Proxmox, the -u parameter has to reflect that by using the pam realm: zabbix@pam.

Storage monitoring was added to the script later and is not enabled by default, to maintain compatibility. Use the -s parameter to enable it; this needs to be done for both the discovery and the metric collection invocations. For existing installations, proxmox_cluster_template.xml needs to be imported again, as it contains new discovery rules. Alternatively, you can import proxmox_cluster_storage_addon_template.xml and attach it to your Proxmox cluster host as an additional template. This can be useful if the cluster template was renamed after the original import.

Minimum requirements: Proxmox 5, Python 3.7 and Zabbix 3.0.

Verified with Proxmox 6, Python 3.9 and Zabbix 5.0.

Issues

The first step when diagnosing issues is to ensure that zabbix_sender works and that the target host in Zabbix is configured correctly. Try the following command on the host where the script is going to run; it should return "processed: 1; failed: 0":

[user@zabbix ~]# /usr/bin/zabbix_sender -v -c /etc/zabbix/zabbix_agentd.conf -s proxmox.tokyo.prod -k promox.cluster.quorate -o 1
Response from "127.0.0.1:10051": "processed: 1; failed: 0; total: 1; seconds spent: 0.000036"
sent: 1; skipped: 0; total: 1

The value of the -s parameter is the host you configured in the Zabbix GUI to receive the data and attached the template to. That is also the value you should use for the -t parameter of the script. (Note that the key value of the -k parameter is currently indeed promox.cluster.quorate, the unfortunate typo mentioned under Notes.)

Discovery errors

There have been reports of zabbix_sender returning a partial-fail exit status (2) when sending discovery data. While this causes the script to report an error, the discovery data is actually processed by the Zabbix server.

You can test sending the discovery data manually as follows:

[user@zabbix ~]# /usr/bin/zabbix_sender -v -c /etc/zabbix/zabbix_agentd.conf -s proxmox.tokyo.prod -k proxmox.nodes.discovery -o '{"data": [{"{#NODE}": "pve01"}, {"{#NODE}": "pve02"}, {"{#NODE}": "pve03"}]}'

We have been unable to replicate the issue, and the error does not affect overall functionality: nodes are discovered and populate in Zabbix, but the script exits with a non-zero value. If that causes issues in cron, you can use the -i parameter to ignore non-zero zabbix_sender return codes when sending the discovery data.
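
For example, a discovery entry in cron that tolerates the partial-fail exit status (host names are placeholders):

0 */4 * * * /usr/lib/zabbix/bin/proxmox_cluster.py -a pmx01.your.tld -u zabbix@pve -p password -t proxmox.tokyo.prod -d -i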

License

This software is licensed under the GNU General Public License v3.0.

zabbix-proxmox's People

Contributors

peterlackner, takala-jp


zabbix-proxmox's Issues

CPU, RAM, disk usage, net in and out for VM and CT needed

I have a lot of CTs and VMs and I need to monitor CPU, RAM, disk space used, and network in and out.
How can I use the API to get this information for each one?
Is it possible to have a discovery rule so these hosts appear, or how should they be declared in Zabbix?

I can see the data in the JSON from the API:
https://proxmox.mynode.com:8006/api2/json/nodes/mynode/lxc

Help needed with installation

Dear @takala-jp,

while trying to get your script running, I'm not getting anywhere.

My tests run fine:

root@proxmox-1:/etc/zabbix# /usr/bin/zabbix_sender -v -c /etc/zabbix/zabbix_agentd.conf -s PVECluster -k promox.cluster.quorate -o 1
Response from "192.168.150.20:10051": "processed: 1; failed: 0; total: 1; seconds spent: 0.000090"
sent: 1; skipped: 0; total: 1
root@proxmox-1:/etc/zabbix# /usr/bin/zabbix_sender -v -c /etc/zabbix/zabbix_agentd.conf -s PVECluster -k proxmox.nodes.discovery -o '{"data": [{"{#NODE}": "proxmox-1"}, {"{#NODE}": "proxmox-2"}, {"{#NODE}": "proxmox-3"}]}'
Response from "192.168.150.20:10051": "processed: 1; failed: 0; total: 1; seconds spent: 0.000130"
sent: 1; skipped: 0; total: 1

However, when running my cron:

*/4 * * * * 	root	/etc/zabbix/scripts/proxmox_cluster.py -a proxmox-1.lan.xxxxxx.de -u zabbix@pve -p XXXXXXXX -s -t PVECluster -v

I receive this error message:

{
    "status": {
        "quorate": 1,
        "cpu_total": 12,
        "cpu_usage": 6.687758518756563,
        "ram_total": 11933937664,
        "ram_used": 10407415808,
        "ram_free": 1526521856,
        "ram_usage": 87.20856519466398,
        "ksm_sharing": 0,
        "vcpu_allocated": 17,
        "vram_allocated": 9663676416,
        "vhdd_allocated": 0,
        "vram_used": 3754319872,
        "vram_usage": 38.84980943467882,
        "vms_running": 3,
        "vms_stopped": 0,
        "vms_total": 3,
        "lxc_running": 12,
        "lxc_stopped": 0,
        "lxc_total": 12,
        "vm_templates": 0,
        "nodes_total": 3,
        "nodes_online": 3
    },
    "nodes": {
        "proxmox-1": {
            "online": 1,
            "vms_total": 1,
            "vms_running": 1,
            "lxc_total": 4,
            "lxc_running": 4,
            "vcpu_allocated": 5,
            "vram_allocated": 2684354560,
            "vhdd_allocated": 0,
            "vram_used": 471769088,
            "ksm_sharing": 0,
            "cpu_total": 4,
            "cpu_usage": 4.05476566614007,
            "ram_total": 3977629696,
            "ram_used": 3402174464,
            "ram_free": 575455232,
            "ram_usage": 85.5327097799302
        },
        "proxmox-2": {
            "online": 1,
            "vms_total": 1,
            "vms_running": 1,
            "lxc_total": 5,
            "lxc_running": 5,
            "vcpu_allocated": 6,
            "vram_allocated": 3221225472,
            "vhdd_allocated": 0,
            "vram_used": 1108971520,
            "ksm_sharing": 0,
            "cpu_total": 4,
            "cpu_usage": 10.939139559286499,
            "ram_total": 3977629696,
            "ram_used": 3438993408,
            "ram_free": 538636288,
            "ram_usage": 86.4583601499741
        },
        "proxmox-3": {
            "online": 1,
            "vms_total": 1,
            "vms_running": 1,
            "lxc_total": 3,
            "lxc_running": 3,
            "vcpu_allocated": 6,
            "vram_allocated": 3758096384,
            "vhdd_allocated": 0,
            "vram_used": 2173579264,
            "ksm_sharing": 0,
            "cpu_total": 4,
            "cpu_usage": 5.069370330843119,
            "ram_total": 3978678272,
            "ram_used": 3566247936,
            "ram_free": 412430336,
            "ram_usage": 89.63398626869422
        }
    },
    "storage": {
        "storage/proxmox-1/ZFSPool-1": {
            "disk_use": 17804296192,
            "disk_max": 719943892992,
            "disk_use_p": 2.4730116284489188
        },
        "storage/proxmox-3/ZFSPool-3": {
            "disk_use": 19384537088,
            "disk_max": 719943892992,
            "disk_use_p": 2.692506635126829
        },
        "storage/proxmox-2/local": {
            "disk_use": 7899578368,
            "disk_max": 67605540864,
            "disk_use_p": 11.684809066007386
        },
        "storage/proxmox-3/local": {
            "disk_use": 8080879616,
            "disk_max": 67605540864,
            "disk_use_p": 11.95298419733977
        },
        "storage/proxmox-1/local": {
            "disk_use": 9029054464,
            "disk_max": 67605540864,
            "disk_use_p": 13.355494754732417
        },
        "storage/proxmox-2/ZFSPool-2": {
            "disk_use": 24378494976,
            "disk_max": 719943892992,
            "disk_use_p": 3.3861659517224205
        },
        "storage/proxmox-2/NFS-NAS": {
            "disk_use": 3476255408128,
            "disk_max": 3932981428224,
            "disk_use_p": 88.38728256333918
        },
        "storage/proxmox-3/NFS-NAS": {
            "disk_use": 3476255408128,
            "disk_max": 3932981428224,
            "disk_use_p": 88.38728256333918
        },
        "storage/proxmox-1/NFS-NAS": {
            "disk_use": 3476255408128,
            "disk_max": 3932981428224,
            "disk_use_p": 88.38728256333918
        }
    }
}
PVECluster promox.cluster.quorate 1669544282 1
PVECluster promox.cluster.cpu_total 1669544282 12
PVECluster promox.cluster.cpu_usage 1669544282 6.687758518756563
PVECluster promox.cluster.ram_total 1669544282 11933937664
PVECluster promox.cluster.ram_used 1669544282 10407415808
PVECluster promox.cluster.ram_free 1669544282 1526521856
PVECluster promox.cluster.ram_usage 1669544282 87.20856519466398
PVECluster promox.cluster.ksm_sharing 1669544282 0
PVECluster promox.cluster.vcpu_allocated 1669544282 17
PVECluster promox.cluster.vram_allocated 1669544282 9663676416
PVECluster promox.cluster.vhdd_allocated 1669544282 0
PVECluster promox.cluster.vram_used 1669544282 3754319872
PVECluster promox.cluster.vram_usage 1669544282 38.84980943467882
PVECluster promox.cluster.vms_running 1669544282 3
PVECluster promox.cluster.vms_stopped 1669544282 0
PVECluster promox.cluster.vms_total 1669544282 3
PVECluster promox.cluster.lxc_running 1669544282 12
PVECluster promox.cluster.lxc_stopped 1669544282 0
PVECluster promox.cluster.lxc_total 1669544282 12
PVECluster promox.cluster.vm_templates 1669544282 0
PVECluster promox.cluster.nodes_total 1669544282 3
PVECluster promox.cluster.nodes_online 1669544282 3
PVECluster proxmox.node.online.[proxmox-1] 1669544282 1
PVECluster proxmox.node.vms_total.[proxmox-1] 1669544282 1
PVECluster proxmox.node.vms_running.[proxmox-1] 1669544282 1
PVECluster proxmox.node.lxc_total.[proxmox-1] 1669544282 4
PVECluster proxmox.node.lxc_running.[proxmox-1] 1669544282 4
PVECluster proxmox.node.vcpu_allocated.[proxmox-1] 1669544282 5
PVECluster proxmox.node.vram_allocated.[proxmox-1] 1669544282 2684354560
PVECluster proxmox.node.vhdd_allocated.[proxmox-1] 1669544282 0
PVECluster proxmox.node.vram_used.[proxmox-1] 1669544282 471769088
PVECluster proxmox.node.ksm_sharing.[proxmox-1] 1669544282 0
PVECluster proxmox.node.cpu_total.[proxmox-1] 1669544282 4
PVECluster proxmox.node.cpu_usage.[proxmox-1] 1669544282 4.05476566614007
PVECluster proxmox.node.ram_total.[proxmox-1] 1669544282 3977629696
PVECluster proxmox.node.ram_used.[proxmox-1] 1669544282 3402174464
PVECluster proxmox.node.ram_free.[proxmox-1] 1669544282 575455232
PVECluster proxmox.node.ram_usage.[proxmox-1] 1669544282 85.5327097799302
PVECluster proxmox.node.online.[proxmox-2] 1669544282 1
PVECluster proxmox.node.vms_total.[proxmox-2] 1669544282 1
PVECluster proxmox.node.vms_running.[proxmox-2] 1669544282 1
PVECluster proxmox.node.lxc_total.[proxmox-2] 1669544282 5
PVECluster proxmox.node.lxc_running.[proxmox-2] 1669544282 5
PVECluster proxmox.node.vcpu_allocated.[proxmox-2] 1669544282 6
PVECluster proxmox.node.vram_allocated.[proxmox-2] 1669544282 3221225472
PVECluster proxmox.node.vhdd_allocated.[proxmox-2] 1669544282 0
PVECluster proxmox.node.vram_used.[proxmox-2] 1669544282 1108971520
PVECluster proxmox.node.ksm_sharing.[proxmox-2] 1669544282 0
PVECluster proxmox.node.cpu_total.[proxmox-2] 1669544282 4
PVECluster proxmox.node.cpu_usage.[proxmox-2] 1669544282 10.939139559286499
PVECluster proxmox.node.ram_total.[proxmox-2] 1669544282 3977629696
PVECluster proxmox.node.ram_used.[proxmox-2] 1669544282 3438993408
PVECluster proxmox.node.ram_free.[proxmox-2] 1669544282 538636288
PVECluster proxmox.node.ram_usage.[proxmox-2] 1669544282 86.4583601499741
PVECluster proxmox.node.online.[proxmox-3] 1669544282 1
PVECluster proxmox.node.vms_total.[proxmox-3] 1669544282 1
PVECluster proxmox.node.vms_running.[proxmox-3] 1669544282 1
PVECluster proxmox.node.lxc_total.[proxmox-3] 1669544282 3
PVECluster proxmox.node.lxc_running.[proxmox-3] 1669544282 3
PVECluster proxmox.node.vcpu_allocated.[proxmox-3] 1669544282 6
PVECluster proxmox.node.vram_allocated.[proxmox-3] 1669544282 3758096384
PVECluster proxmox.node.vhdd_allocated.[proxmox-3] 1669544282 0
PVECluster proxmox.node.vram_used.[proxmox-3] 1669544282 2173579264
PVECluster proxmox.node.ksm_sharing.[proxmox-3] 1669544282 0
PVECluster proxmox.node.cpu_total.[proxmox-3] 1669544282 4
PVECluster proxmox.node.cpu_usage.[proxmox-3] 1669544282 5.069370330843119
PVECluster proxmox.node.ram_total.[proxmox-3] 1669544282 3978678272
PVECluster proxmox.node.ram_used.[proxmox-3] 1669544282 3566247936
PVECluster proxmox.node.ram_free.[proxmox-3] 1669544282 412430336
PVECluster proxmox.node.ram_usage.[proxmox-3] 1669544282 89.63398626869422
PVECluster proxmox.storage.disk_use.[storage/proxmox-1/ZFSPool-1] 1669544282 17804296192
PVECluster proxmox.storage.disk_max.[storage/proxmox-1/ZFSPool-1] 1669544282 719943892992
PVECluster proxmox.storage.disk_use_p.[storage/proxmox-1/ZFSPool-1] 1669544282 2.4730116284489188
PVECluster proxmox.storage.disk_use.[storage/proxmox-3/ZFSPool-3] 1669544282 19384537088
PVECluster proxmox.storage.disk_max.[storage/proxmox-3/ZFSPool-3] 1669544282 719943892992
PVECluster proxmox.storage.disk_use_p.[storage/proxmox-3/ZFSPool-3] 1669544282 2.692506635126829
PVECluster proxmox.storage.disk_use.[storage/proxmox-2/local] 1669544282 7899578368
PVECluster proxmox.storage.disk_max.[storage/proxmox-2/local] 1669544282 67605540864
PVECluster proxmox.storage.disk_use_p.[storage/proxmox-2/local] 1669544282 11.684809066007386
PVECluster proxmox.storage.disk_use.[storage/proxmox-3/local] 1669544282 8080879616
PVECluster proxmox.storage.disk_max.[storage/proxmox-3/local] 1669544282 67605540864
PVECluster proxmox.storage.disk_use_p.[storage/proxmox-3/local] 1669544282 11.95298419733977
PVECluster proxmox.storage.disk_use.[storage/proxmox-1/local] 1669544282 9029054464
PVECluster proxmox.storage.disk_max.[storage/proxmox-1/local] 1669544282 67605540864
PVECluster proxmox.storage.disk_use_p.[storage/proxmox-1/local] 1669544282 13.355494754732417
PVECluster proxmox.storage.disk_use.[storage/proxmox-2/ZFSPool-2] 1669544282 24378494976
PVECluster proxmox.storage.disk_max.[storage/proxmox-2/ZFSPool-2] 1669544282 719943892992
PVECluster proxmox.storage.disk_use_p.[storage/proxmox-2/ZFSPool-2] 1669544282 3.3861659517224205
PVECluster proxmox.storage.disk_use.[storage/proxmox-2/NFS-NAS] 1669544282 3476255408128
PVECluster proxmox.storage.disk_max.[storage/proxmox-2/NFS-NAS] 1669544282 3932981428224
PVECluster proxmox.storage.disk_use_p.[storage/proxmox-2/NFS-NAS] 1669544282 88.38728256333918
PVECluster proxmox.storage.disk_use.[storage/proxmox-3/NFS-NAS] 1669544282 3476255408128
PVECluster proxmox.storage.disk_max.[storage/proxmox-3/NFS-NAS] 1669544282 3932981428224
PVECluster proxmox.storage.disk_use_p.[storage/proxmox-3/NFS-NAS] 1669544282 88.38728256333918
PVECluster proxmox.storage.disk_use.[storage/proxmox-1/NFS-NAS] 1669544282 3476255408128
PVECluster proxmox.storage.disk_max.[storage/proxmox-1/NFS-NAS] 1669544282 3932981428224
PVECluster proxmox.storage.disk_use_p.[storage/proxmox-1/NFS-NAS] 1669544282 88.38728256333918

Error while sending items:  Command '['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf', '-T', '-i', '-']' returned non-zero exit status 2.

my zabbix_agentd.conf is:

PidFile=/run/zabbix/zabbix_agentd.pid
LogFile=/var/log/zabbix-agent/zabbix_agentd.log
LogFileSize=0
Server=192.168.150.20
ServerActive=192.168.150.20
Hostname=proxmox-1.lan.xxxxxxxx.de
AllowRoot=1
Include=/etc/zabbix/zabbix_agentd.conf.d/*.conf

What am I missing?

Kind regards,

Error when trying to install proxmoxer

root@CT-ZABBIX-LOCAL:~# pip install proxmoxer
Collecting proxmoxer
Using cached https://files.pythonhosted.org/packages/00/dd/629ec9dfdab26a75e3120403231bf3dc3ecda3ebe36db72c829ae30cbfca/proxmoxer-2.0.1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-build-EhsH6Y/proxmoxer/setup.py", line 9, in <module>
    from proxmoxer import __version__ as proxmoxer_version
  File "proxmoxer/__init__.py", line 6, in <module>
    from .core import *  # noqa
  File "proxmoxer/core.py", line 75
    content += f" - {errors}"
                            ^
SyntaxError: invalid syntax

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-EhsH6Y/proxmoxer/
root@CT-ZABBIX-LOCAL:~#

Proxmox 5x works fine but not 6.3

Hi, I have an issue on 6.3 where discovery works fine, but the main script does not :(

Traceback (most recent call last):
  File "/usr/local/bin/proxmox_cluster.py", line 192, in <module>
    node_status = proxmox.nodes(n).status.get()
  File "/usr/local/lib/python3.6/site-packages/proxmoxer/core.py", line 105, in get
    return self(args)._request("GET", params=params)
  File "/usr/local/lib/python3.6/site-packages/proxmoxer/core.py", line 94, in _request
    resp.reason, resp.content))
proxmoxer.core.ResourceException: 595 Errors during connection establishment, proxy handshake: Connection refused - b''

Any ideas why discovery works fine (I get the list of nodes, no problem) but the main script fails?

The same setup with Proxmox 5 works like a charm.

Trying discovery storage getting error

When trying storage discovery in the Zabbix 6.0 web UI I get the error: Cannot send request: wrong discovery rule type. [host_discovery.php:794 → CApiWrapper->__call() → CFrontendApiWrapper->callMethod() → CApiWrapper->callMethod() → CFrontendApiWrapper->callClientMethod() → CLocalApiClient->callMethod() → CTask->create() → CTask->validateCreate() → CTask->checkEditableItems() → CApiService::exception() in include/classes/api/services/CTask.php:444]

proxmoxer.core.ResourceException: 401 Unauthorized: permission denied - invalid PVE ticket - b''

Hey,
After executing the command below, we get a "401 Unauthorized: permission denied" error.

/usr/lib/zabbix/bin/proxmox_cluster.py -a [pmx01.your.tld] -u zabbix@pve -p [password] -t [proxmox.tokyo.prod]

Error:
Traceback (most recent call last):
  File "/usr/lib/zabbix/bin/proxmox_cluster.py", line 192, in <module>
    node_status = proxmox.nodes(n).status.get()
  File "/usr/local/lib/python3.7/dist-packages/proxmoxer/core.py", line 123, in get
    return self(args)._request("GET", params=params)
  File "/usr/local/lib/python3.7/dist-packages/proxmoxer/core.py", line 110, in _request
    (self._store["serializer"].loads(resp) or {}).get('errors')
proxmoxer.core.ResourceException: 401 Unauthorized: permission denied - invalid PVE ticket - b''

Unable to get data

Hello, I have an issue with the script:

python3.9 /usr/lib/zabbix/bin/proxmox_cluster.py -a IPOFPROXMOX -u zabbix@pve -p PASSWORDMAGIC -t IPOFZABBIX -i -o -d

Response from "IPOFZABBIX:10051": "processed: 0; failed: 1; total: 1; seconds spent: 0.000030"

sent: 1; skipped: 0; total: 1

python3.9 /usr/lib/zabbix/bin/proxmox_cluster.py -a IPOFPROXMOX -u zabbix@pve -p PASSWORDMAGIC -t IPOFZABBIX -i -o

Response from "IPOFZABBIX:10051": "processed: 0; failed: 54; total: 54; seconds spent: 0.000754"

sent: 54; skipped: 0; total: 54


Anybody have an idea how to solve that ?

Thanks

Error after Update PVE

After updating to the latest "7.1-10" I got the following errors.

root@zabbix:/opt# /opt/proxmox_cluster.py -a ck6.fritz.box -u zabbix@pve -p 'PASSWORD' -t Proxmox_Cluster
Traceback (most recent call last):
  File "/opt/proxmox_cluster.py", line 200, in <module>
    node_status = proxmox.nodes(n).status.get()
  File "/usr/local/lib/python3.8/dist-packages/proxmoxer/core.py", line 105, in get
    return self(args)._request("GET", params=params)
  File "/usr/local/lib/python3.8/dist-packages/proxmoxer/core.py", line 90, in _request
    raise ResourceException("{0} {1}: {2} - {3}".format(
proxmoxer.core.ResourceException: 596 Errors during TLS negotiation, request sending and header processing: Connection timed out - b''

Connection refused zabbix_agent

Hello, I'm getting an error on Proxmox 6.3.

I'm new to Zabbix.

root@pve01:/etc/zabbix# /opt/zabbix/proxmox_cluster.py -a 127.0.0.1 -u zabbix@pve -p SUp3viS1On*Z@bbiX! -t pve01.smjed.net -d
Error while sending discovery data: Command '['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf', '-spve01.smjed.net', '-kproxmox.nodes.discovery', '-o{"data": [{"{#NODE}": "pve01"}]}']' returned non-zero exit status 1.
root@pve01:/etc/zabbix# /usr/bin/zabbix_sender -vv -c /etc/zabbix/zabbix_agentd.conf -s 172.0.0.1 -k promox.cluster.quorate -o 1
zabbix_sender [39593]: DEBUG: send value error: cannot connect to [[127.0.0.1]:10051]: [111] Connection refused
Sending failed.


The agent is running on 10050, but when I change it to 10051 (per the error) I get:

root@pve01:/etc/zabbix# /usr/bin/zabbix_sender -vv -c /etc/zabbix/zabbix_agentd.conf -s 172.0.0.1 -k promox.cluster.quorate -o 1
zabbix_sender [40722]: DEBUG: answer [ZBX_NOTSUPPORTED]
zabbix_sender [40722]: Warning: incorrect answer from server [ZBX_NOTSUPPORTED]
Sending failed.

Spaces and other characters appearing and breaking the code

Hi takala-jp,

Good work on the code so far!

I seem to be having a very strange issue with the script: it appears to strip spaces and insert other formatting characters into the command it is trying to run. I've started looking at the Python code, but I'm no expert.

Is this something specific to my versions? I'm on Python 3.6.8. The output is below:

[root@appliance ~]# /usr/bin/python3 /usr/lib/zabbix/bin/proxmox_cluster.py -a 192.168.0.173 -u zabbix@pve -p XXX -t proxmox.local -d -v
{"data": [{"{#NODE}": "proxmox"}]}
Unable to open zabbix_sender: Command '['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf', '-sproxmox.local', '-kproxmox.nodes.discovery', '-o{"data": [{"{#NODE}": "proxmox"}]}']' returned non-zero exit status 2.
[root@appliance ~]#


[root@appliance ~]# /usr/bin/python3 /usr/lib/zabbix/bin/proxmox_cluster.py -a 192.168.0.173 -u zabbix@pve -p XXX -t proxmox.local -e -v
{
    "status": {
        "quorate": 0,
        "cpu_total": 6,
        "ram_total": 16807809024,
        "ram_used": 10454061056,
        "ram_free": 6353747968,
        "ram_usage": 62.19764301862644,
        "ksm_sharing": 2002599936,
        "vcpu_allocated": 8,
        "vram_allocated": 24574427136,
        "vhdd_allocated": 34363932672,
        "vram_used": 8687667747,
        "vram_usage": 35.35247311736154,
        "vms_running": 5,
        "vms_stopped": 0,
        "vms_total": 5,
        "lxc_running": 0,
        "lxc_stopped": 0,
        "lxc_total": 0,
        "vm_templates": 0,
        "nodes_total": 0,
        "nodes_online": 1,
        "cpu_usage": 2.5157753984468902
    },
    "nodes": {
        "proxmox": {
            "online": 1,
            "vms_total": 5,
            "vms_running": 5,
            "lxc_total": 0,
            "lxc_running": 0,
            "vcpu_allocated": 8,
            "vram_allocated": 24574427136,
            "vhdd_allocated": 34363932672,
            "vram_used": 8687667747,
            "ksm_sharing": 2002599936,
            "cpu_total": 6,
            "cpu_usage": 2.5157753984468902,
            "ram_total": 16807809024,
            "ram_used": 10454061056,
            "ram_free": 6353747968,
            "ram_usage": 62.19764301862644
        }
    }
}
proxmox.local promox.cluster.quorate 1593449419 0
proxmox.local promox.cluster.cpu_total 1593449419 6
proxmox.local promox.cluster.ram_total 1593449419 16807809024
proxmox.local promox.cluster.ram_used 1593449419 10454061056
proxmox.local promox.cluster.ram_free 1593449419 6353747968
proxmox.local promox.cluster.ram_usage 1593449419 62.19764301862644
proxmox.local promox.cluster.ksm_sharing 1593449419 2002599936
proxmox.local promox.cluster.vcpu_allocated 1593449419 8
proxmox.local promox.cluster.vram_allocated 1593449419 24574427136
proxmox.local promox.cluster.vhdd_allocated 1593449419 34363932672
proxmox.local promox.cluster.vram_used 1593449419 8687667747
proxmox.local promox.cluster.vram_usage 1593449419 35.35247311736154
proxmox.local promox.cluster.vms_running 1593449419 5
proxmox.local promox.cluster.vms_stopped 1593449419 0
proxmox.local promox.cluster.vms_total 1593449419 5
proxmox.local promox.cluster.lxc_running 1593449419 0
proxmox.local promox.cluster.lxc_stopped 1593449419 0
proxmox.local promox.cluster.lxc_total 1593449419 0
proxmox.local promox.cluster.vm_templates 1593449419 0
proxmox.local promox.cluster.nodes_total 1593449419 0
proxmox.local promox.cluster.nodes_online 1593449419 1
proxmox.local promox.cluster.cpu_usage 1593449419 2.5157753984468902
proxmox.local proxmox.node.online.[proxmox] 1593449419 1
proxmox.local proxmox.node.vms_total.[proxmox] 1593449419 5
proxmox.local proxmox.node.vms_running.[proxmox] 1593449419 5
proxmox.local proxmox.node.lxc_total.[proxmox] 1593449419 0
proxmox.local proxmox.node.lxc_running.[proxmox] 1593449419 0
proxmox.local proxmox.node.vcpu_allocated.[proxmox] 1593449419 8
proxmox.local proxmox.node.vram_allocated.[proxmox] 1593449419 24574427136
proxmox.local proxmox.node.vhdd_allocated.[proxmox] 1593449419 34363932672
proxmox.local proxmox.node.vram_used.[proxmox] 1593449419 8687667747
proxmox.local proxmox.node.ksm_sharing.[proxmox] 1593449419 2002599936
proxmox.local proxmox.node.cpu_total.[proxmox] 1593449419 6
proxmox.local proxmox.node.cpu_usage.[proxmox] 1593449419 2.5157753984468902
proxmox.local proxmox.node.ram_total.[proxmox] 1593449419 16807809024
proxmox.local proxmox.node.ram_used.[proxmox] 1593449419 10454061056
proxmox.local proxmox.node.ram_free.[proxmox] 1593449419 6353747968
proxmox.local proxmox.node.ram_usage.[proxmox] 1593449419 62.19764301862644

(b'Response from "127.0.0.1:10051": "processed: 0; failed: 38; total: 38; seconds spent: 0.000165"\nsent: 38; skipped: 0; total: 38\n', None)
[root@appliance ~]#


I can't get data

root@pve1:/etc/zabbix# ./proxmox_cluster.py -a 192.168.0.201 -u zabbix@pve -p 15901590 -t mon.testlab.com -d
Error while sending discovery data: Command '['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf', '-smon.testlab.com', '-kproxmox.nodes.discovery', '-o{"data": [{"{#NODE}": "pve1"}, {"{#NODE}": "pve2"}]}']' returned non-zero exit status 2.

root@pve1:/etc/zabbix# zabbix_sender -v -c /etc/zabbix/zabbix_agentd.conf -s pve1.testlab.com -k promox.cluster.quorate -o 1
Response from "192.168.0.164:10051": "processed: 0; failed: 1; total: 1; seconds spent: 0.000057"
sent: 1; skipped: 0; total: 1

returned non-zero exit status 2

Hi,

Installed per your instructions, but running:
./proxmox_cluster.py -a 10.200.246.129 -u zabbix@pve -p dQxlDJVxCJds9wOocqI0 -t clpprx01.mngnet -d -v

Results in:
{"data": [{"{#NODE}": "lpavmh06"}, {"{#NODE}": "lpavmh11"}, {"{#NODE}": "lpavmh04"}, {"{#NODE}": "lpavmh05"}]}
Error while sending discovery data: Command '['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf', '-sclpprx01.mngnet', '-kproxmox.nodes.discovery', '-o{"data": [{"{#NODE}": "lpavmh06"}, {"{#NODE}": "lpavmh11"}, {"{#NODE}": "lpavmh04"}, {"{#NODE}": "lpavmh05"}]}']' returned non-zero exit status 2.

This command was run on the Zabbix server itself. What am I doing wrong?

Cheers

Problem to authenticate

Hello, first of all, thanks for the module!

I don't know if you can help me with this: I'm trying to configure the plugin, but I receive the error "Promox API call failed: Couldn't authenticate user: root@pam to https://localhost:8006/api2/json/access/ticket". I already tried using the zabbix@pve user, without success.

Thanks

Unable to open zabbix_sender

I'm getting the following error when executing this command using Zabbix on a Raspberry Pi.
I've installed zabbix-sender through apt-get and am running Python 3.
Any advice on where to look? Thanks in advance!

Command:
/etc/proxmox_cluster.py -a 192.168.178.2 -u zabbix@pve -p password -t proxmox.proxmox.prod -d
Error:
Unable to open zabbix_sender: Command '['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf', '-sproxmox.proxmox.prod', '-kproxmox.nodes.discovery', '-o{"data": [{"{#NODE}": "pve"}]}']' returned non-zero exit status 2.

error while sending values

Hey there,
first of all: thank you for this template!

I am, however, running into a slight problem.

For testing purposes I tried manually running the script as follows:
/root/sys/proxmox_cluster.py -v -a adelie.domain.tld -u zabbix@pve -p zabbix -t CLUSTER.domain.tld

The last lines of the output are the following:

CLUSTER.domain.tld proxmox.node.vram_used.[emperor] 1582624195 3453960192
CLUSTER.domain.tld proxmox.node.vcpu_allocated.[emperor] 1582624195 15

('Error while sending values:', 'str() takes at most 1 argument (2 given)')

I believe there is a problem in the proxmox_cluster.py script somewhere in the block between lines 302 and 313.

Any help would be greatly appreciated!
Thanks in advance.

Unable to open zabbix_sender on discovery

Hey,
we noticed this problem after our upgrade to Proxmox 7.0-13: the cron job for sending the discovery data /root/sys/proxmox_cluster.py -a adelie.domain.de -u zabbix@pve -p zabbix -t CLUSTER.domain.de -d fails with:

Unable to open zabbix_sender: Command '['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf', '-sCLUSTER.domain.de', '-kproxmox.nodes.discovery', '-o{"data": [{"{#NODE}": "chinstrap"}, {"{#NODE}": "adelie"}, {"{#NODE}": "emperor"}]}']' returned non-zero exit status 2.

  • The -s option is set correctly and works as expected when not using -d.
  • /usr/bin/zabbix_sender is the correct path

The output with -v and -d:

{"data": [{"{#NODE}": "chinstrap"}, {"{#NODE}": "adelie"}, {"{#NODE}": "emperor"}]}
Unable to open zabbix_sender: Command '['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf', '-sCLUSTER.domain.de', '-kproxmox.nodes.discovery', '-o{"data": [{"{#NODE}": "chinstrap"}, {"{#NODE}": "adelie"}, {"{#NODE}": "emperor"}]}']' returned non-zero exit status 2.

Have there been changes to the Proxmox API so that the values were moved or can no longer be parsed correctly?

Ignore wearout

How can I ignore wearout on all PVE nodes? It is not supported for my SSD.

ModuleNotFoundError: No module named 'proxmoxer'

I am running Ubuntu 18.04. I walked through the whole process and all seemed well, but when I try to run the script I get the following:

rminor@u2000:~/scripts$ pip install proxmoxer
Collecting proxmoxer
Installing collected packages: proxmoxer
Successfully installed proxmoxer-1.2.0

It's installed:

rminor@u2000:~/scripts$ ./proxmox_cluster.py -a 10.102.1.2 -u zabbix@pve -p mypass -t ProxmoxCluster -d
Traceback (most recent call last):
  File "./proxmox_cluster.py", line 29, in <module>
    from proxmoxer import ProxmoxAPI
ModuleNotFoundError: No module named 'proxmoxer'

Tried running as sudo and still no dice!

flag to set the zabbix host missing

I tried to get this running inside a zabbix-proxy container.

For running in a container (zabbix host or zabbix proxy) you would have to pass the "-z" parameter to zabbix_sender to configure the Zabbix host.

Also, there is no Zabbix agent running in such a container, so the config file does not really make sense there.

I have put a Zabbix config file into my zabbix-proxy container, adapted proxmox_cluster.py in all the necessary places, and added my Zabbix proxy's internal Docker container IP there.

Perhaps a flag to pass the Zabbix host would be great (-z is already used for the zabbix_sender binary).

Errors during collection when Proxmox has no SSL

root@CT-ZABBIX-LOCAL:~# /usr/lib/zabbix/externalscripts/proxmox_cluster.py -a 10.200.4.2 -u zabbix@pve -p chrisinfo -s -t SERVIDOR-BL08 -d
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host '10.200.4.2'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host '10.200.4.2'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py:1063: InsecureRequestWarning: Unverified HTTPS request is being made to host '10.200.4.2'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
Error while sending discovery data: __init__() got an unexpected keyword argument 'capture_output'
root@CT-ZABBIX-LOCAL:~#

Incorrect command output

Good afternoon! Thank you very much for the code. However, there are some issues with the value output.
The first problem is that there is no RAM or CPU information for the cluster.
[root@hektor sbin]# /usr/bin/python3 /usr/local/sbin/proxmox_cluster.py -vv -a 192.168.0.5 -u zabbix@pve -p xxxxx -t hektor
{
    "status": {
        "quorate": 1,
        "cpu_total": 0,
        "cpu_usage": 0.0,
        "ram_total": 0,
        "ram_used": 0,
        "ram_free": 0,
        "ram_usage": 0,
        "ksm_sharing": 6547615744,
        "vcpu_allocated": 0,
        "vram_allocated": 0,
        "vhdd_allocated": 0,
        "vram_used": 0,
        "vram_usage": 0,
        "vms_running": 0,
        "vms_stopped": 0,
        "vms_total": 0,
        "lxc_running": 0,
        "lxc_stopped": 0,
        "lxc_total": 0,
        "vm_templates": 0,
        "nodes_total": 1,
        "nodes_online": 1
    },
    "nodes": {
        "hektor": {
            "online": 1,
            "vms_total": 0,
            "vms_running": 0,
            "lxc_total": 0,
            "lxc_running": 0,
            "vcpu_allocated": 0,
            "vram_allocated": 0,
            "vhdd_allocated": 0,
            "vram_used": 0,
            "ksm_sharing": 6547615744,
            "cpu_total": 0,
            "cpu_usage": 0,
            "ram_total": 0,
            "ram_used": 0,
            "ram_free": 0,
            "ram_usage": 0.0
        }
    }
}
hektor promox.cluster.quorate 1638268539 1
hektor promox.cluster.cpu_total 1638268539 0
hektor promox.cluster.cpu_usage 1638268539 0.0
hektor promox.cluster.ram_total 1638268539 0
hektor promox.cluster.ram_used 1638268539 0
hektor promox.cluster.ram_free 1638268539 0
hektor promox.cluster.ram_usage 1638268539 0
hektor promox.cluster.ksm_sharing 1638268539 6547615744
hektor promox.cluster.vcpu_allocated 1638268539 0
hektor promox.cluster.vram_allocated 1638268539 0
hektor promox.cluster.vhdd_allocated 1638268539 0
hektor promox.cluster.vram_used 1638268539 0
hektor promox.cluster.vram_usage 1638268539 0
hektor promox.cluster.vms_running 1638268539 0
hektor promox.cluster.vms_stopped 1638268539 0
hektor promox.cluster.vms_total 1638268539 0
hektor promox.cluster.lxc_running 1638268539 0
hektor promox.cluster.lxc_stopped 1638268539 0
hektor promox.cluster.lxc_total 1638268539 0
hektor promox.cluster.vm_templates 1638268539 0
hektor promox.cluster.nodes_total 1638268539 1
hektor promox.cluster.nodes_online 1638268539 1
hektor proxmox.node.online.[hektor] 1638268539 1
hektor proxmox.node.vms_total.[hektor] 1638268539 0
hektor proxmox.node.vms_running.[hektor] 1638268539 0
hektor proxmox.node.lxc_total.[hektor] 1638268539 0
hektor proxmox.node.lxc_running.[hektor] 1638268539 0
hektor proxmox.node.vcpu_allocated.[hektor] 1638268539 0
hektor proxmox.node.vram_allocated.[hektor] 1638268539 0
hektor proxmox.node.vhdd_allocated.[hektor] 1638268539 0
hektor proxmox.node.vram_used.[hektor] 1638268539 0
hektor proxmox.node.ksm_sharing.[hektor] 1638268539 6547615744
hektor proxmox.node.cpu_total.[hektor] 1638268539 0
hektor proxmox.node.cpu_usage.[hektor] 1638268539 0
hektor proxmox.node.ram_total.[hektor] 1638268539 0
hektor proxmox.node.ram_used.[hektor] 1638268539 0
hektor proxmox.node.ram_free.[hektor] 1638268539 0
hektor proxmox.node.ram_usage.[hektor] 1638268539 0.0
(b'Response from "zabbix.ru:10051": "processed: 38; failed: 0; total: 38; seconds spent: 0.000270"\nsent: 38; skipped: 0; total: 38\n', None)

-t is the host name in Zabbix; -a is the IP where Proxmox is running.
The problem repeats every 10 minutes, most likely because the wrong cron is configured.

Could you help what this could be related to?

No status updates from {HOSTNAME}, Ceph via trapper works from this host

root@pve1:~# /usr/share/zabbix-agent/proxmox_cluster.py -a 10.0.0.11 -u zabbix@pve -p password -t 10.0.0.241 -d
Unable to open zabbix_sender: Command '['/usr/bin/zabbix_sender', '-c/etc/zabbix/zabbix_agentd.conf', '-s10.0.0.241', '-kproxmox.nodes.discovery', '-o{"data": [{"{#NODE}": "pve3"}, {"{#NODE}": "pve2"}, {"{#NODE}": "pve1"}]}']' returned non-zero exit status 2.

Same in bash:
root@pve1:~# zabbix_sender -c/etc/zabbix/zabbix_agentd.conf -s10.0.0.241 -kproxmox.nodes.discovery -o{"data": [{"{#NODE}": "pve3"}, {"{#NODE}": "pve2"}, {"{#NODE}": "pve1"}]}
zabbix_sender [1068585]: invalid parameter "[{{#NODE}:"
zabbix_sender [1068585]: invalid parameter "pve3},"
zabbix_sender [1068585]: invalid parameter "{{#NODE}:"
zabbix_sender [1068585]: invalid parameter "pve2},"
zabbix_sender [1068585]: invalid parameter "{{#NODE}:"
zabbix_sender [1068585]: invalid parameter "pve1}]}"

Same in bash with single quotes around the -o value:
root@pve1:~# zabbix_sender -c/etc/zabbix/zabbix_agentd.conf -s10.0.0.241 -kproxmox.nodes.discovery -o'{"data": [{"{#NODE}": "pve3"}, {"{#NODE}": "pve2"}, {"{#NODE}": "pve1"}]}'
info from server: "processed: 0; failed: 1; total: 1; seconds spent: 0.000027"
sent: 1; skipped: 0; total: 1

root@pve1:~ # pveversion --verbose
proxmox-ve: 6.2-1 (running kernel: 5.4.44-1-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-3
pve-kernel-helper: 6.2-3
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-7
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

VM discovery ?

Hello, and thanks for this very interesting and promising project! Not really an issue but a small question: I have a few Proxmox nodes (stand-alone, not in a cluster), and I'm looking for a template with automatic discovery of all the running VMs; your project seems to be the closest match. Maybe you have already tried something like this? Otherwise I will try sometime later. The goal would be to have VM name, KVM id, status (on/off), CPU load, RAM usage, network usage, disk IO, agent information (IP), etc. for each VM.

Do you think it would be possible? If you have any suggestions, please feel free to share them, thanks in advance! (I only have basic Zabbix knowledge, just configuration, no template coding yet.)

Best regards, Olivier

How to upgrade Python in Proxmox 6.x

Hi,

I am currently running Proxmox VE 6.1-7 (Linux proxmox1 5.3.18-2-pve #1 SMP PVE 5.3.18-2), which includes Python 2.7.16.

As I also run Zabbix 4.x as a monitoring tool in my environment (in a VM), I would also like to monitor my Proxmox hosts.

Can you suggest the correct procedure for upgrading Python without breaking any existing Proxmox functionality?

Thanks

David
