

mq-metric-samples

This repository contains a collection of IBM MQ monitoring agents that utilize the IBM MQ Go metric packages to provide programs that can be used with existing monitoring technologies such as Prometheus, AWS CloudWatch, etc. It can also send data to an OpenTelemetry environment. Statistics and status information can be collected from queue managers and made available in databases to enable dashboard and historic reporting.

The dspmqrtj program

The repository also includes a program which traces the route a message can take through the MQ network. It is similar to the dspmqrte program that is part of the MQ product, but writes the output in JSON format. See the dspmqrtj subdirectory for more information.

Health Warning

This package is provided as-is with no guarantees of support or updates. There are also no guarantees of compatibility with any future versions of the package; interfaces and functions are subject to change based on any feedback. You cannot use IBM formal support channels (Cases/PMRs) for assistance with material in this repository.

These programs use a specific version of the mqmetric and ibmmq golang packages. Those packages are in the mq-golang repository and are also included in the vendor tree of this repository. They are referenced in the go.mod file if you wish to reload all of the dependencies by running go mod vendor.

Getting started

Requirements

You will require the following programs:

  • Go compiler - version 1.20 is the minimum defined here
  • C compiler

MQ Client SDK

The MQ Client SDK for C programs is required in order to compile and run these Go programs. You may have this from an MQ Client installation image (e.g. rpm or deb formats for Linux, msi for Windows).

For Linux x64 and Windows systems, you may also choose to use the MQ Redistributable Client package, which is a simple zip/tar file that does not need any privileges to install.

See the README file in the mq-golang repository for more information about any environment variables that may be required to point at non-default directories for the MQ C SDK.
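As a minimal sketch of pointing the build at a non-default MQ C SDK location: the unpack path below is hypothetical, and the authoritative list of variables is in the mq-golang README.

```shell
# Hypothetical SDK location; adjust to wherever you unpacked the client.
export MQ_INSTALLATION_PATH=$HOME/mqclient
# Tell cgo where the MQ C headers and libraries live.
export CGO_CFLAGS="-I$MQ_INSTALLATION_PATH/inc"
export CGO_LDFLAGS="-L$MQ_INSTALLATION_PATH/lib64 -Wl,-rpath,$MQ_INSTALLATION_PATH/lib64"
# Permit the rpath flag, which Go's cgo disallows by default.
export CGO_LDFLAGS_ALLOW='-Wl,-rpath.*'
```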

Building a component on your system directly

  • You need to have the MQ client libraries installed first.
  • Create a directory where you want to work with the programs.
  • Change to that directory.
  • Use git to get a copy of this repository into a new directory in the workspace. For example:
git clone https://github.com/ibm-messaging/mq-metric-samples.git src/github.com/ibm-messaging/mq-metric-samples
  • Navigate to the mq-metric-samples root directory (./src/github.com/ibm-messaging/mq-metric-samples)

  • All the prerequisite packages are already available in the vendor directory, but you can run go mod vendor to reload them.

  • From this root directory of the repository you can then compile the code. For example,

  cd ./src/github.com/ibm-messaging/mq-metric-samples
  export CGO_LDFLAGS_ALLOW='-Wl,-rpath.*'
  mkdir -p /tmp/go/bin
  go build -mod=vendor -o /tmp/go/bin/mq_prometheus ./cmd/mq_prometheus/*.go

At this point, you should have a compiled copy of the code in /tmp/go/bin. Each monitor agent directory also has sample scripts, configuration files etc to help with getting the agent running in your specific environment. The -mod=vendor option is important so that the build process does not need to download additional files from external repositories.
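Once built, the collector can be started with a YAML configuration file named by the -f flag (described later in this document). A sketch, assuming the build output location used above and an illustrative configuration path:

```shell
BIN=/tmp/go/bin/mq_prometheus
if [ -x "$BIN" ]; then
  # The -f flag names the YAML configuration file; the path is illustrative.
  echo "starting collector"
  # "$BIN" -f /opt/config/mq_prometheus.yaml
else
  echo "collector not built yet - run the go build step first"
fi
```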

Using containers to build the programs

The Dockerfile in the root directory gives a simple way to both build and run a collector program through containers. You still need to provide the configuration file at runtime, perhaps as a mounted volume. For example:

  docker build -t mqprom:1.0 .
  docker run   -p 9157:9157 -v <directory>/mq_prometheus.yaml:/opt/config/mq_prometheus.yaml mqprom:1.0

Platform support

This Dockerfile should work for a variety of platforms. For those with a Redistributable Client, it uses curl to automatically download and unpack the required MQ files. For other platforms, it assumes that you have created an MQINST subdirectory under this root and copied into it the .deb files (or the .tar.gz file for Linux/arm64 systems) from your real MQ installation tree.

Additional container scripts

As a more flexible example, you can use the buildMonitors.sh script in the scripts subdirectory to build a Docker container that in turn will build all the binary programs and copy them to a local directory. That script also sets some extra version-related flags that will be shown when the program starts. The container will automatically download and install the MQ client runtime files needed for compilation. This might be a preferred approach when you want to run a collector program alongside a queue manager (perhaps as an MQ SERVICE) and you need to copy the binaries to the target system.

Building to run on Windows

There is a buildMonitors.bat file to help with building on Windows. It assumes you have the tdm-gcc-64 64-bit compiler suite installed. It builds all the collectors and corresponding YAML configuration files into %GOPATH%/bin.

Queue manager configuration

When metrics are being collected from the publish/subscribe interface (all platforms except z/OS), there are some considerations:

  • MAXHANDS on queue manager: The default configuration of these collectors uses non-durable subscriptions to get information about queue metrics. Each subscription uses an object handle. If many queues are being monitored the default MAXHANDS may need to be increased. A warning is printed if the monitor thinks this attribute appears too low. See below for an alternative option.
  • MAXDEPTH on model queues: The model queue used as the basis for publication and reply queues in the monitor must have a MAXDEPTH suitable for the expected amount of data. For published metrics, this is estimated based on holding one minute's amount of publications; the number of monitored channels is also used as an estimate, although that does not need to be time-based as the data is requested directly by the monitor.
  • USEDLQ on the admin topic: The USEDLQ attribute on the topic object associated with the metrics publications (usually SYSTEM.ADMIN.TOPIC) determines what happens if the subscriber's queue is full. You might prefer to set this to NO to avoid filling the system DLQ if the collection program does not read the publications frequently enough.
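The considerations above can be captured as a short MQSC script. This is a sketch only: the MAXHANDS and MAXDEPTH values and the model queue name are illustrative, and should be sized for the number of monitored queues and the expected data volume in your environment.

```shell
# Write the tuning commands to a file; apply later with runmqsc.
cat > /tmp/monitor-tuning.mqsc <<'EOF'
* Raise MAXHANDS if many queues are monitored via non-durable subscriptions
ALTER QMGR MAXHANDS(10000)
* Give the model queue used for publications enough depth for the data volume
ALTER QMODEL('SYSTEM.DEFAULT.MODEL.QUEUE') MAXDEPTH(20000)
* Stop full subscriber queues from pushing publications to the system DLQ
ALTER TOPIC('SYSTEM.ADMIN.TOPIC') USEDLQ(NO)
EOF
# Apply with, for example: runmqsc QM1 < /tmp/monitor-tuning.mqsc
```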

Local and client connections

Connections to the queue manager can be made with either local or client bindings. Running the collector "alongside" the queue manager is usually preferred, with the collector configured to run as a service. Sample scripts in this repository show how to define an appropriate MQ SERVICE. Client connections can be made by specifying the channel and connName information in the basic configuration; in this mode, only plaintext communication is available (similar to the MQSERVER environment variable). For secure communication using TLS, you must provide connection information via a CCDT. Use the ccdtUrl configuration option or environment variables to point at a CCDT, which can be in either binary or JSON format. The runMonitorTLS.sh script gives a simple example of setting up a container to use TLS.

Using durable subscriptions

An alternative collection mechanism uses durable subscriptions for the queue metric data. This may avoid needing to increase the MAXHANDS attribute on a queue manager. (Queue manager-level metrics are still collected using non-durable subscriptions.)

To set it up, you must provide suitable configuration options. In the YAML configuration, these are the attributes (command line or environment variable equivalents also exist):

  • replyQueue must refer to a local queue (not a model queue)
  • replyQueue2 must also be set, referring to a different local queue
  • durableSubPrefix is a string that is unique across any collectors that might be connected to this queue manager

If you use durable subscriptions, then the named reply queues may continue to receive publications even when the collector is not running, so that may induce queue-full reports in the error log or events. The subscriptions can be manually removed using the "DELETE SUB()" MQSC command for all subscriptions where the subscription ids begin with the durableSubPrefix value. The scripts/cleanDur.sh program can be used for this deletion. You should also clean the subscriptions when the configuration of which data to collect has changed, particularly the queueSubscriptionSelector option.
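The three attributes listed above might look like this in the YAML file. A sketch: the queue names and prefix are examples, and the section placement shown follows the sample templates in this repository, so verify it against your copy.

```shell
cat > /tmp/durable-sub-snippet.yaml <<'EOF'
connection:
  queueManager: QM1
  replyQueue: MONITOR.REPLY.A       # a local queue, not a model queue
  replyQueue2: MONITOR.REPLY.B      # a different local queue
  durableSubPrefix: COLLECTOR1      # unique across collectors on this qmgr
EOF
```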

Monitor configuration

The monitors always collect all of the available queue manager-wide metrics. They can also be configured to collect statistics for specific sets of queues where metrics are published by the queue manager. Object status queries can be used to extract more information about other objects such as channels and subscriptions.

The exporters can have their configuration given on the command line via flags, via environment variables, or in a YAML file described below.

Wildcard Patterns for Queues

The sets of queues to be monitored can be given directly on the command line with the -ibmmq.monitoredQueues flag, in a separate file named on the command line with the -ibmmq.monitoredQueuesFile flag, or in the equivalent YAML configuration.

The parameter can include both positive and negative wildcards. For example, -ibmmq.monitoredQueues="A*,!AB*" will collect data on queues whose names begin with "A" (such as "AC" or "AD") but not those beginning with "AB". The full rules for expansion can be seen near the bottom of the discover.go module in the mqmetric package.

The queue patterns are expanded at startup of the program and at regular intervals thereafter. So newly-defined queues will eventually be monitored if they match the pattern. The rediscovery interval is 1h by default, but can be modified by the rediscoverInterval parameter.
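Putting the pattern and rediscovery options together, a sketch of the command line flags (the queue manager name and the patterns are illustrative):

```shell
# Monitor APP.* and DEV.* queues, excluding DEV.INTERNAL.*, and rescan
# for newly-defined queues every 10 minutes instead of the 1h default.
ARGS="-ibmmq.queueManager=QM1"
ARGS="$ARGS -ibmmq.monitoredQueues=APP.*,DEV.*,!DEV.INTERNAL.*"
ARGS="$ARGS -rediscoverInterval=10m"
echo "$ARGS"
# ./mq_prometheus $ARGS
```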

Channel Status

The monitor programs can process channel status, reporting that back into the database.

The channels to be monitored are set on the command line, similarly to the queue patterns, with -ibmmq.monitoredChannels or -ibmmq.monitoredChannelFile. Unlike the queue monitoring, wildcards are handled automatically by the channel status API. So you do not need to restart this monitor in order to pick up newly-defined channels that match an existing pattern. Only positive wildcards are allowed here; you cannot explicitly exclude channels.

Another parameter is pollInterval. This determines how frequently the channel status is collected. You may want to have it collected at a different rate to the queue data, as it may be more expensive to extract the channel status. The default pollInterval is 0, which means that the channel status is collected every time the exporter processes the queue and queue manager resource publications. Setting it to 1m means that a minimum time of one minute will elapse between asking for channel status even if the queue statistics are gathered more frequently.

A short-lived channel that connects and then disconnects in between collection intervals will leave no trace in the status or metrics.

Channel Metrics

Some of the responses from the DISPLAY CHSTATUS command have been selected as metrics. The key values returned include the status and the number of messages processed.

The message count for SVRCONN channels is the number of MQI calls made by the client program.

There are actually two versions of the channel status returned. The channel_status metric has the value corresponding to one of the MQCHS_* values. There are about 15 of these possible values. There is also a channel_status_squash metric which returns one of only three values, compressing the full set into a simpler value that is easier to put colours against in Grafana. From this squashed set, you can readily see if a channel is stopped, running, or somewhere in between.

Channel Instances and Labels

Channel metrics are given labels to assist in distinguishing them. These can be displayed in Grafana or used as part of the filtering. When there is more than one instance of an active channel, the combination of channel name, connection name and job name will be unique (though see the z/OS section below for caveats on that platform).

The channel type (SENDER, SVRCONN etc) and the name of the remote queue manager are also given as labels on the metric.

Channel Dashboard Panels

An example Grafana dashboard shows how these labels and metrics can be combined to show some channel status from Prometheus. The Channel Status table panel demonstrates a couple of features. It uses the labels to select unique instances of channels. It also uses a simple number-to-text map to show the channel status as a word (and colour the cell) instead of a raw number.

The metrics for the table are selected and have '0' added to them. This may be a workaround for a Grafana bug, or it may really be how Grafana is designed to work. But without that '+0' on the metric line, the table showed multiple versions of the status for each channel. With it, the table combines multiple metrics on the same line.

Information about channels comes from the DISPLAY CHSTATUS CURRENT command. That only shows channels with a known state and does not report on inactive channels. To also see the inactive channels, then set the showInactiveChannels configuration attribute to true.

NativeHA

When NativeHA is used, the queue manager publishes some metrics on its status. These are automatically collected whenever available, and can be seen in the metric lists. The metrics are given the prefix "nha". For example, ibmmq_nha_synchronous_log_sent_bytes is one metric shown in Prometheus. The NativeHA "instance" - the name given to each replica - is added as the nhainstance tag on the metrics.

Depending on configuration, the collector may be able to automatically reconnect to the new instance after a failover. If that is not possible, you will need to have a process to restart the collector once the new replica has taken over.

z/OS Support

Because the DIS QSTATUS and DIS CHSTATUS commands can be used on z/OS, the monitors support showing some information from a z/OS queue manager. There is nothing special needed to configure it, beyond the client connectivity that allows an application to connect to the z/OS system.

The -ibmmq.useStatus (command line) or useObjectStatus (YAML) parameter must be set to true to use the DIS QSTATUS command.

There is also support for using the RESET QSTATS command on z/OS. This needs to be explicitly enabled by setting the -ibmmq.resetQStats (command line) or useResetQStats (YAML) flag to true. While this option allows tracking of the number of messages put/got to a queue (which is otherwise unavailable from z/OS queue manager status queries), it should not be used if there are any other active monitoring solutions that are already using that command.

Only one monitor program can reliably use RESET QSTATS on a particular queue manager, to avoid the information being split between them.
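As a sketch, the z/OS-related switches described above would look like this in their YAML form; the "global" section placement follows the sample templates and should be checked against your copy.

```shell
cat > /tmp/zos-snippet.yaml <<'EOF'
global:
  useObjectStatus: true    # issue DIS QSTATUS / DIS CHSTATUS queries
  useResetQStats: true     # only if nothing else is using RESET QSTATS
EOF
```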

Statistics are available for pagesets and bufferpools, similar to the DISPLAY USAGE command.

Channel Metrics on z/OS

On z/OS, there is no guaranteed way to distinguish between multiple instances of the same channel name. For example, multiple users of the same SVRCONN definition. On Distributed platforms, the JOBNAME attribute does that job; for z/OS, the channel start date/time is used in this package as a discriminator, and used as the jobname label in the metrics. That may cause the stats to be slightly wrong if two instances of the same channel are started simultaneously from the same remote address. The sample dashboard showing z/OS status includes counts of the unique channels seen over the monitoring period.

Authentication

Monitors can be configured to authenticate to the queue manager, sending a userid and password.

The userid is configured using the -ibmmq.userid flag. The password can be set either by using the -ibmmq.password flag, or by passing it via stdin. That allows it to be piped from an external stash file or some other mechanism. Using the command line flags for controlling passwords is not recommended for security-sensitive environments.

Where authentication is needed for access to a database, passwords for those can also be passed via stdin.
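The stdin mechanism can be sketched as follows. The stash file name is hypothetical, and in practice the file would be populated by a vault or stash tool rather than a literal password.

```shell
# Keep the password out of argv (and hence "ps" output) by piping it on stdin.
PW_FILE=/tmp/mq_collector.passwd
umask 077                          # restrict the file to the current user
printf 'examplePassword\n' > "$PW_FILE"
# ./mq_prometheus -f mq_prometheus.yaml -ibmmq.userid=monuser < "$PW_FILE"
```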

YAML configuration for all exporters

Instead of providing all of the configuration for the exporters via command-line flags, you can also provide the configuration in a YAML file. Then only the -f command-line option is required for the exporter to point at the file.

All of the exporters support the same configuration options for how to connect to MQ and which objects are monitored. There is then an exporter-specific section for additional configuration such as how to contact the back-end database. The common options are shown in a template in this directory; the exporter-specific options are in individual files in each directory. Combine the two pieces into a single file to get a complete deployable configuration.

Unlike the command line flags, lists are provided in a more natural format instead of comma-separated values in a single string. If an option is provided on both the command line and in the file, it is the file that takes precedence. Not all strings need to be surrounded by quote characters in the file, but some (eg "!SYSTEM*") seem to need it. The example files have used quotes where they have been found to be necessary.

The field names are slightly different in the YAML file to try to make them a bit more consistent and structured. The command flags are not being changed to preserve compatibility with previous versions.

User passwords can be provided in the file, but it is not recommended that you do that. Instead provide the password either on the command line or piped via stdin to the program.
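An abbreviated, illustrative configuration combining the common section with object selections might look like the following. This is a sketch of the overall shape only; build a real file from the templates shipped in this repository, whose exact attribute names take precedence over anything shown here.

```shell
cat > /tmp/mq_prometheus.yaml <<'EOF'
global:
  logLevel: INFO
  useObjectStatus: true
connection:
  queueManager: QM1
  clientConnection: true
objects:
  queues:
    - APP.*
    - "!SYSTEM.*"
  channels:
    - "*"
EOF
# ./mq_prometheus -f /tmp/mq_prometheus.yaml
```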

Environment variable configuration for all exporters

As a further alternative for configuration, parameters can be set by environment variables. This may be more convenient when running collectors in a container as the variables may be easier to modify for each container than setting up different YAML files. The names of the variables follow the YAML naming pattern with an IBMMQ prefix, underscore separators, and in uppercase.

For example, the queue manager name can be set with IBMMQ_CONNECTION_QUEUEMANAGER. You can use the "-h" parameter to the collector to see the complete set of options.
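As a sketch: only IBMMQ_CONNECTION_QUEUEMANAGER is quoted above; the other variable names here are assumptions derived by applying the same section-plus-key uppercase pattern, so confirm them with the collector's "-h" output.

```shell
export IBMMQ_CONNECTION_QUEUEMANAGER=QM1            # name confirmed above
export IBMMQ_CONNECTION_CLIENTCONNECTION=true       # assumed name
export IBMMQ_OBJECTS_QUEUES='APP.*,!SYSTEM.*'       # assumed name
# ./mq_prometheus
```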

Precedence of configuration options

The command line flags have the highest precedence. Environment variables override settings in the YAML file, and the YAML file overrides the hardcoded default values.

Metadata tags

For all the collectors, you can configure additional metadata that is replicated into the tags or labels produced on each metric. These might indicate, for example, whether a queue manager is DEV or PROD level. These are defined using the -ibmmq.metadataTags and -ibmmq.metadataValues command line flags (comma-separated), corresponding IBMMQ_CONNECTION-level environment variables, or as arrays within the YAML configuration file.
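As a sketch, tagging every metric from one collector with an environment and a region (the tag names and values are illustrative; the two comma-separated lists must line up one-to-one):

```shell
ARGS="-ibmmq.metadataTags=environment,region"
ARGS="$ARGS -ibmmq.metadataValues=DEV,eu-west"
echo "$ARGS"
# ./mq_prometheus $ARGS
```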

More information

Each of the sample monitor programs has its own README file describing any particular considerations. The metrics.txt file in this directory has a summary of the available metrics for each object type.

History

See CHANGELOG in this directory.

Issues and Contributions

For feedback and issues relating specifically to this package, please use the GitHub issue tracker.

Contributions to this package can be accepted under the terms of the Developer's Certificate of Origin, found in the DCO file of this repository. When submitting a pull request, you must include a statement that you accept the terms in the DCO.

Copyright

© Copyright IBM Corporation 2016, 2024

mq-metric-samples's People

Contributors

dependabot[bot], ibmmqmet, matrober-uk, paologallinaharbur, parrobe, rvm-cicd-machine-user, sdmarshall79, thorhs


mq-metric-samples's Issues

When I try to build the package, I get the error below.

I followed all the steps specified in the README but am still getting this error.

export GOPATH=/tmp/MQ_Monitor/go
export GOROOT=/XXX/XXX/XXX/XXX/XXX/go/go1.14.7.linux_amd64-1.14.7/go1.14.7.linux_amd64/go
export GOPROXY=http://goproxyXXX.XXXX.XXX.XXX.com
export CGO_ENABLED=0

bash-4.2$ go build -o $GOPATH/bin/mq_prometheus ./cmd/mq_prometheus/*.go
go: downloading github.com/prometheus/client_golang v1.6.0
go: downloading github.com/ibm-messaging/mq-golang/v5 v5.1.1
go: downloading github.com/sirupsen/logrus v1.5.0
go: downloading gopkg.in/yaml.v2 v2.2.5
go: downloading golang.org/x/sys v0.0.0-20200420163511-1957bb5e6d1f
go: downloading golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2
go: downloading github.com/prometheus/client_model v0.2.0
go: downloading github.com/prometheus/common v0.9.1
go: downloading github.com/beorn7/perks v1.0.1
go: downloading github.com/prometheus/procfs v0.0.11
go: downloading github.com/cespare/xxhash/v2 v2.1.1
go: downloading github.com/golang/protobuf v1.4.0
go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.1
go: downloading google.golang.org/protobuf v1.21.0

github.com/ibm-messaging/mq-golang/v5/mqmetric

../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/channel.go:305:44: undefined: ibmmq.MQCFH
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/channel.go:544:32: undefined: ibmmq.MQCFH
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/discover.go:65:26: undefined: ibmmq.MQObject
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/discover.go:1079:39: undefined: ibmmq.PCFParameter
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/mqif.go:93:12: undefined: ibmmq.MQReturn
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/mqif.go:325:41: undefined: ibmmq.MQObject
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/mqif.go:353:39: undefined: ibmmq.MQObject
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/mqif.go:361:46: undefined: ibmmq.MQObject
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/mqif.go:365:50: undefined: ibmmq.MQObject
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/qmgr.go:175:45: undefined: ibmmq.MQCFH
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/v5@v5.1.1/mqmetric/qmgr.go:175:45: too many errors

MQ Prometheus Exporter - ibmmq_queue_depth not found for queue names with /

Hello,
Here's an issue I am currently experiencing when working with MQ Prometheus Exporter.

Brief Description

ibmmq_queue_depth metric is not published by the MQ Prometheus Exporter for the queues having a queue name containing "/" character.

Reproducer

  • mq-metrics-samples version(s) that are affected by this issue.
    9ce9d17

  • Dockerfile used to run IBM-MQ 9.1.2.0 DEV version and compile the MQ Prometheus Exporter
    Dockerfile.txt

  • Docker run command used to run the container
    docker run --entrypoint "/bin/bash" --env LICENSE=accept --env MQ_DEV=true --env MQ_QMGR_NAME=QM1 --env MQ_APP_PASSWORD=app --env MQ_ADMIN_PASSWORD=admin --publish 1414:1414 --publish 9157:9157 --publish 9443:9443 --ipc host -it ibm_mq_prometheus:9.1.2.0-dev-build

  • Copy the compiled MQ Prometheus, shell wrapper and configuration into /usr/local/bin/mqgo/
    cd /root/gowork/bin/; mkdir /usr/local/bin/mqgo; cp mq* /usr/local/bin/mqgo/; cd /usr/local/bin/mqgo/

  • Update MQ Prometheus Exporter Queue Pattern
    sed -i 's/APP\.\*,MYQ\.\*/\*,\!SYSTEM\.\*,\!AMQ\.\*/g' /usr/local/bin/mqgo/mq_prometheus.sh
    Queue pattern should look like:
    queues="*,!SYSTEM.*,!AMQ.*"

  • Update the MQ configuration to include a queue with a name containing a '/' character
    Modify e.g. /etc/mqm/10-dev.mqsc.tpl by adding
    DEFINE QLOCAL('DEV/QUEUE/3') REPLACE

  • Start the Queue Manager
    su mqm; runmqdevserver;

    If asked, re-run it as root (probably for password setup), then remove /run/runmqserver/tls/ and re-run it as mqm.

  • Start MQ Prometheus Exporter
    ./mq_prometheus.sh QM1 &

  • Look for the published metrics :

# HELP ibmmq_queue_attribute_max_depth Queue Max Depth
# TYPE ibmmq_queue_attribute_max_depth gauge
ibmmq_queue_attribute_max_depth{platform="UNIX",qmgr="QM1",queue="DEV.DEAD.LETTER.QUEUE",usage="NORMAL"} 5000
ibmmq_queue_attribute_max_depth{platform="UNIX",qmgr="QM1",queue="DEV.QUEUE.1",usage="NORMAL"} 5000
ibmmq_queue_attribute_max_depth{platform="UNIX",qmgr="QM1",queue="DEV.QUEUE.2",usage="NORMAL"} 5000
ibmmq_queue_attribute_max_depth{platform="UNIX",qmgr="QM1",queue="DEV.QUEUE.3",usage="NORMAL"} 5000
ibmmq_queue_attribute_max_depth{platform="UNIX",qmgr="QM1",queue="DEV/QUEUE/3",usage="NORMAL"} 5000
# HELP ibmmq_queue_depth Queue depth
# TYPE ibmmq_queue_depth gauge
ibmmq_queue_depth{platform="UNIX",qmgr="QM1",queue="DEV.DEAD.LETTER.QUEUE",usage="NORMAL"} 0
ibmmq_queue_depth{platform="UNIX",qmgr="QM1",queue="DEV.QUEUE.1",usage="NORMAL"} 0
ibmmq_queue_depth{platform="UNIX",qmgr="QM1",queue="DEV.QUEUE.2",usage="NORMAL"} 0
ibmmq_queue_depth{platform="UNIX",qmgr="QM1",queue="DEV.QUEUE.3",usage="NORMAL"} 0

As you can see ibmmq_queue_depth metric is not available for DEV/QUEUE/3 queue.

To me it seems as if there is an issue during the queue discovery; I'm not really experienced in Go, so I did not try my chance with the debugger.

Thanks a lot in advance

Integrate Prometheus with MQ from a client computer

Hi all,

We are just trying to set up the integration for monitoring, but we have a question: is it possible to run the scripts that collect the metrics on a client computer, pointing at:

  • The IP/host of the MQ server
  • The queue manager name
  • The port
  • The channel

We don't want to install or run anything on the queue manager server.
Is it possible to carry out this configuration, and which file should we install on the client to serve as a pivot?

We would appreciate your help with this implementation.

Thank you very much

Please include the following information in your ticket.

  • mq-metrics-samples version(s) that are affected by this issue.
  • A small code sample or description that demonstrates the issue.

Build Test Container

I am building the ibm-messaging/mq-container from the Dockerfile and I can enable Prometheus metrics, but all I get is the qmgr metrics, and I am not even sure whether that is accomplished with mq-metric-samples.
Do we have a sample MQ container that has all the Prometheus metrics enabled (queues, channels, etc.)? Or is there an easy way to build ibm-messaging/mq-container with the latest version of mq-metric-samples?

Connection to Distributed MQ's

Hi There,

We are having an issue when connecting to a distributed MQ

./mq_prometheus.sh
IBM MQ metrics exporter for Prometheus monitoring

Warning: Data from 'RESET QSTATS' has been requested.
Ensure no other monitoring applications are also using that command.

INFO[0000] Connected to queue manager MQMBUAT01
FATA[0013] Error subscribing to $SYS/MQ/INFO/QMGR/MQMBUAT01/Monitor/STATQ/%s/OPENCLOSE: Error subscribing to topic '$SYS/MQ/INFO/QMGR/MQMBUAT01/Monitor/STATQ/SYSTEM.BROKER.AGGR.REQUEST/OPENCLOSE': MQSUB: MQCC = MQCC_FAILED [2] MQRC = MQRC_HANDLE_NOT_AVAILABLE [2017]

There are no topics on the queue manager so I would like to know if the code supports connecting to distributed MQs?

metricset not loading

Hello,

Do you know where the qmgr metricset is loaded? The field names I am getting are different:

prometheus.metrics.ibmmq_qmgr_non_persistent_message_browse_count as opposed to
prometheus.metrics.ibmmq_qmgr_non_persistent_message_browse_total

Thanks,

Where is the queue manager name configured? Is it auto lookup or profile?

I successfully built the project. Because the import package paths seem to have changed, I changed those paths but nothing else; the following error then appeared during startup:
IBM MQ metrics exporter for Prometheus monitoring

ERRO[0000] Must provide a queue manager name to connect to.

Can't connect to Queue Manager

I can't seem to connect to our queue manager. I am using a passwordless user account but it keeps returning the message "MQCC = MQCC_FAILED [2] MQRC = MQRC_HOST_NOT_AVAILABLE [2538]". Is there a way to specify the queue manager port you are connecting to? I'm able to connect to the queue manager using a different application. Any ideas would help.

Subscription queue growing when metrics are not collected

Hello,

I've begun playing with the MQ Prometheus exporter and it works fine. However, I've noticed that the queue created by the managed subscription (AMQ.....) is only read when the metrics are collected, and as the dynamic queue depth is only 5000, it quickly becomes full when Prometheus is stopped.
Is there a way to change this behaviour?

Thanks!

./mq_prometheus.sh does not honour queues="!SYSTEM.*"

Please include the following information in your ticket.

  • mq-metrics-samples version(s) that are affected by this issue.
  • A small code sample or description that demonstrates the issue.

I can connect to my remote queue manager successfully, as I get the debug output below.

INFO[0000] IBMMQ Describe started
INFO[0000] Platform is ZOS
INFO[0000] Listening on port 9157

My mq_prometheus.sh is as below.

#!/bin/bash

# This is used to start the IBM MQ monitoring service for Prometheus

# The queue manager name comes in from the service definition as the
# only command line parameter
qMgr=$1

# Set the environment to ensure we pick up libmqm.so etc
# Try to run it for a local qmgr; if that fails fallback to a
# default
# If this is a client connection, then deal with no known qmgr of the given name.
. /opt/mqm/bin/setmqenv -m $qMgr -k >/dev/null 2>&1
if [ $? -ne 0 ]
then
  . /opt/mqm/bin/setmqenv -s -k
fi

# A list of queues to be monitored is given here.
# It is a set of names or patterns ('*' only at the end, to match how MQ works),
# separated by commas. When no queues match a pattern, it is reported but
# is not fatal.
# The set can also include negative patterns such as "!SYSTEM.*".
queues="!SYSTEM.*"

# An alternative is to have a file containing the patterns, and named
# via the ibmmq.monitoredQueuesFile option.

# Do similar for channels
channels="*"

# See config.go for all recognised flags
ARGS="-ibmmq.queueManager=$qMgr"
ARGS="$ARGS -ibmmq.monitoredQueues=$queues"
ARGS="$ARGS -ibmmq.monitoredChannels=$channels"
ARGS="$ARGS -ibmmq.monitoredTopics=#"
ARGS="$ARGS -ibmmq.monitoredSubscriptions=*"
ARGS="$ARGS -rediscoverInterval=1h"
ARGS="$ARGS -ibmmq.useStatus=true"
ARGS="$ARGS -ibmmq.client=true"
ARGS="$ARGS -ibmmq.userid=<username>"
ARGS="$ARGS -ibmmq.password=<password>"
ARGS="$ARGS -log.level=info"

# This may help with some issues if the program has a SEGV. It
# allows Go to do a better stack trace.
export MQS_NO_SYNC_SIGNAL_HANDLING=true


# Start via "exec" so the pid remains the same. The queue manager can
# then check the existence of the service and use the MQ_SERVER_PID value
# to kill it on shutdown.
exec ./mq_prometheus $ARGS

However, I get the following exception when I navigate to <ip_address>:9157/metrics in a browser

goroutine 25 [running]:
runtime/debug.Stack(0x2920c, 0x0, 0x0)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
<STUFF DELETED BY USER REQUEST AFTER CLOSURE>

I would like to know why the exporter is still parsing messages from SYSTEM.BROKER.ADMIN.STREAM even though I have excluded it in my mq_prometheus.sh by adding queues="!SYSTEM.*".
Or, if that is not the offending line, what could be the possible causes of the stack dump?
I have compiled v4.13 (the latest) of the code.

Is there a binary version available?

From what I saw on GitHub, I don't know how to integrate this into a complete monitoring plugin. Although you have given some introductions, there is no prebuilt binary version provided.

DESCR field in metrics

Hi,
Is there any chance that the metrics could return the DESCR and USAGE attributes (for queues)?

BR
Piotrek

mq_aws not gathering statistics due to missing config "usePublications"

Using master with the latest commit, 042f1ef, on 2 Apr.

I found that when using mq_aws to export from IBM MQ v9.1 to CloudWatch, it was not gathering any statistics. This was due to a missing config option, "usePublications".

In mq-golang/mqmetric/discover.go, the function discoverStats() was exiting early at the test 'if metaPrefix == "" && !usePublications {'

In mq-metric-samples/cmd/mq_aws/config.go I needed to add this code change to set that config parameter:

diff --git a/cmd/mq_aws/config.go b/cmd/mq_aws/config.go
index dfe2e86..7892788 100644
--- a/cmd/mq_aws/config.go
+++ b/cmd/mq_aws/config.go
@@ -62,6 +62,7 @@ func initConfig() {
        flag.StringVar(&config.namespace, "ibmmq.namespace", "IBM/MQ", "Namespace for metrics")
 
        flag.BoolVar(&config.cc.ClientMode, "ibmmq.client", false, "Connect as MQ client")
+       flag.BoolVar(&config.cc.UsePublications, "ibmmq.usePublications", true, "Use resource publications. Set to false to monitor older Distributed platforms")
 
        flag.StringVar(&config.interval, "ibmmq.interval", "60", "How many seconds between each collection")
        flag.IntVar(&config.maxErrors, "ibmmq.maxErrors", 10000, "Maximum number of errors communicating with server before considered fatal")

After this fix it would gather statistics for QueueManager and Queues and export them into CloudWatch.

Channel statistics metrics availability

Hi,

We use the latest mq-golang version, 4.0.5, where we can only see the metrics below for channels. Even when we enable channel statistics, we don't get enough metrics to pick up in Prometheus. I hope this will be added in an upcoming version. It is worth keeping this issue open until these channel statistics metrics are added.

ibmmq_channel_instance_type
ibmmq_channel_messages
ibmmq_channel_status
ibmmq_channel_status_squash
ibmmq_channel_substate
ibmmq_channel_time_since_msg
ibmmq_channel_type

Please include the following information in your ticket.

  • mq-metrics-samples version(s) that are affected by this issue.
  • A small code sample or description that demonstrates the issue.

Error when I try to run multiple agents on the same server

Hello,
I have 10 queue managers running on the same server.
I'm using mq-metric-samples version 5.1.2 and have also tried version 5.1.3,
and when I try to run the metrics agents, a lot of them fail with the error below:

goroutine 0 [idle]:
runtime: unknown pc 0x7fa33737d387
stack: frame={sp:0x7f9fd80fd918, fp:0x0} stack=[0x7f9fc01fe268,0x7f9fd80fde68)
00007f9fd80fd818: ffffffffffffff00 000000c000332300
00007f9fd80fd828: 000000c00035d3e0 000000c0003826c0
00007f9fd80fd838: 000000c00036fc00 000000c0003834a0
00007f9fd80fd848: 000000c000383560 000000c0003834d0
00007f9fd80fd858: 000000c000383590 00007f9fd80fd940
00007f9fd80fd868: 0000000000000007 0000000000000001
00007f9fd80fd878: 0000000000000000 0000000000000007
00007f9fd80fd888: 00007fa337378976 0000000000000000
00007f9fd80fd898: 00007f9fd80fda30 00007f9fbc000b00
00007f9fd80fd8a8: 00007f9f00000001 0000000000000001
00007f9fd80fd8b8: 000000c0001d3fb0 0000000000000002
00007f9fd80fd8c8: 0000000000000000 0000000000000000
00007f9fd80fd8d8: 0000000000000000 0000000000000000
00007f9fd80fd8e8: 00007fa33770f868 0000000000ab9f5d
00007f9fd80fd8f8: 00007f9fbc0008c0 0000000000000000
00007f9fd80fd908: 0000000000a677f0 0000000000000000
00007f9fd80fd918: <00007fa33737ea78 0000000000000020
00007f9fd80fd928: 0000000000000000 0000000000000000
00007f9fd80fd938: 0000000000000000 0000000000000000
00007f9fd80fd948: 0000000000000000 0000000000000000
00007f9fd80fd958: 0000000000000000 0000000000000000
00007f9fd80fd968: 0000000000000000 0000000000000000
00007f9fd80fd978: 0000000000000000 0000000000000000
00007f9fd80fd988: 0000000000000000 0000000000000000
00007f9fd80fd998: 0000000000000000 0000000000000000
00007f9fd80fd9a8: 0000000000000000 0000000000000000
00007f9fd80fd9b8: 0000000000000000 0000000000000000
00007f9fd80fd9c8: 0000000000000000 0000000000000000
00007f9fd80fd9d8: 0000000000000000 0000000000000000
00007f9fd80fd9e8: 0000000000000000 0000000000000000
00007f9fd80fd9f8: 0000000000000000 00007f9fbc0008c0
00007f9fd80fda08: 0000000000000000 0000000000a677f0
runtime: unknown pc 0x7fa33737d387
stack: frame={sp:0x7f9fd80fd918, fp:0x0} stack=[0x7f9fc01fe268,0x7f9fd80fde68)
00007f9fd80fd818: ffffffffffffff00 000000c000332300
00007f9fd80fd828: 000000c00035d3e0 000000c0003826c0
00007f9fd80fd838: 000000c00036fc00 000000c0003834a0
00007f9fd80fd848: 000000c000383560 000000c0003834d0
00007f9fd80fd858: 000000c000383590 00007f9fd80fd940
00007f9fd80fd868: 0000000000000007 0000000000000001
00007f9fd80fd878: 0000000000000000 0000000000000007
00007f9fd80fd888: 00007fa337378976 0000000000000000
00007f9fd80fd898: 00007f9fd80fda30 00007f9fbc000b00
00007f9fd80fd8a8: 00007f9f00000001 0000000000000001
00007f9fd80fd8b8: 000000c0001d3fb0 0000000000000002
00007f9fd80fd8c8: 0000000000000000 0000000000000000
00007f9fd80fd8d8: 0000000000000000 0000000000000000
00007f9fd80fd8e8: 00007fa33770f868 0000000000ab9f5d
00007f9fd80fd8f8: 00007f9fbc0008c0 0000000000000000
00007f9fd80fd908: 0000000000a677f0 0000000000000000
00007f9fd80fd918: <00007fa33737ea78 0000000000000020
00007f9fd80fd928: 0000000000000000 0000000000000000
00007f9fd80fd938: 0000000000000000 0000000000000000
00007f9fd80fd948: 0000000000000000 0000000000000000
00007f9fd80fd958: 0000000000000000 0000000000000000
00007f9fd80fd968: 0000000000000000 0000000000000000
00007f9fd80fd978: 0000000000000000 0000000000000000
00007f9fd80fd988: 0000000000000000 0000000000000000
00007f9fd80fd998: 0000000000000000 0000000000000000
00007f9fd80fd9a8: 0000000000000000 0000000000000000
00007f9fd80fd9b8: 0000000000000000 0000000000000000
00007f9fd80fd9c8: 0000000000000000 0000000000000000
00007f9fd80fd9d8: 0000000000000000 0000000000000000
00007f9fd80fd9e8: 0000000000000000 0000000000000000
00007f9fd80fd9f8: 0000000000000000 00007f9fbc0008c0
00007f9fd80fda08: 0000000000000000 0000000000a677f0

goroutine 1 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7fa2c70ecee8, 0x72, 0x0)
/data/mqm/tools/go/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc0003e4b18, 0x72, 0x0, 0x0, 0x9bbdc8)
/data/mqm/tools/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/data/mqm/tools/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc0003e4b00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/data/mqm/tools/go/src/internal/poll/fd_unix.go:384 +0x1d4
net.(*netFD).accept(0xc0003e4b00, 0xaaa77941f4eddf01, 0xc0003ecc80, 0xaaa77941f4eddf26)
/data/mqm/tools/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc0003f09a0, 0x600e9c11, 0xc000143ca0, 0x4c0ee6)
/data/mqm/tools/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc0003f09a0, 0xc000143cf0, 0x18, 0xc000000180, 0x83841c)
/data/mqm/tools/go/src/net/tcpsock.go:261 +0x64
net/http.(*Server).Serve(0xc000422000, 0xa80660, 0xc0003f09a0, 0x0, 0x0)
/data/mqm/tools/go/src/net/http/server.go:2930 +0x25d
net/http.(*Server).ListenAndServe(0xc000422000, 0xc000422000, 0xc000143f30)
/data/mqm/tools/go/src/net/http/server.go:2859 +0xb7
net/http.ListenAndServe(...)
/data/mqm/tools/go/src/net/http/server.go:3115
main.main()
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/cmd/mq_prometheus/main.go:119 +0x523

goroutine 38 [select]:
github.com/prometheus/client_golang/prometheus.(*Registry).Gather(0xc000110870, 0x0, 0x0, 0x0, 0x0, 0x0)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/registry.go:510 +0xb6a
github.com/prometheus/client_golang/prometheus/promhttp.HandlerFor.func1(0x7fa2c5f146b8, 0xc0003d40a0, 0xc0001b9300)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go:126 +0x93
net/http.HandlerFunc.ServeHTTP(0xc0003dbb20, 0x7fa2c5f146b8, 0xc0003d40a0, 0xc0001b9300)
/data/mqm/tools/go/src/net/http/server.go:2041 +0x44
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerInFlight.func1(0x7fa2c5f146b8, 0xc0003d40a0, 0xc0001b9300)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go:40 +0xab
net/http.HandlerFunc.ServeHTTP(0xc0003e1a10, 0x7fa2c5f146b8, 0xc0003d40a0, 0xc0001b9300)
/data/mqm/tools/go/src/net/http/server.go:2041 +0x44
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerCounter.func1(0xa80920, 0xc0004220e0, 0xc0001b9300)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go:100 +0xda
net/http.HandlerFunc.ServeHTTP(0xc0003e1b00, 0xa80920, 0xc0004220e0, 0xc0001b9300)
/data/mqm/tools/go/src/net/http/server.go:2041 +0x44
net/http.(*ServeMux).ServeHTTP(0x105f3c0, 0xa80920, 0xc0004220e0, 0xc0001b9300)
/data/mqm/tools/go/src/net/http/server.go:2416 +0x1a5
net/http.serverHandler.ServeHTTP(0xc000422000, 0xa80920, 0xc0004220e0, 0xc0001b9300)
/data/mqm/tools/go/src/net/http/server.go:2836 +0xa3
net/http.(*conn).serve(0xc0003585a0, 0xa81ce0, 0xc0003ee680)
/data/mqm/tools/go/src/net/http/server.go:1924 +0x86c
created by net/http.(*Server).Serve
/data/mqm/tools/go/src/net/http/server.go:2962 +0x35c

goroutine 76 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7fa2c70ece08, 0x72, 0xffffffffffffffff)
/data/mqm/tools/go/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc0003e4b98, 0x72, 0x0, 0x1, 0xffffffffffffffff)
/data/mqm/tools/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/data/mqm/tools/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc0003e4b80, 0xc00044e401, 0x1, 0x1, 0x0, 0x0, 0x0)
/data/mqm/tools/go/src/internal/poll/fd_unix.go:169 +0x19b
net.(*netFD).Read(0xc0003e4b80, 0xc00044e401, 0x1, 0x1, 0x0, 0x0, 0x0)
/data/mqm/tools/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc0003189d8, 0xc00044e401, 0x1, 0x1, 0x0, 0x0, 0x0)
/data/mqm/tools/go/src/net/net.go:184 +0x8e
net/http.(*connReader).backgroundRead(0xc00044e3f0)
/data/mqm/tools/go/src/net/http/server.go:689 +0x58
created by net/http.(*connReader).startBackgroundRead
/data/mqm/tools/go/src/net/http/server.go:685 +0xd0

goroutine 78 [semacquire, 1 minutes]:
sync.runtime_Semacquire(0xc0005ac088)
/data/mqm/tools/go/src/runtime/sema.go:56 +0x42
sync.(*WaitGroup).Wait(0xc0005ac080)
/data/mqm/tools/go/src/sync/waitgroup.go:130 +0x64
github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func2(0xc0005ac080, 0xc0003e2060, 0xc0003e20c0)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/registry.go:460 +0x2b
created by github.com/prometheus/client_golang/prometheus.(*Registry).Gather
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/registry.go:459 +0x5d8

goroutine 79 [semacquire]:
github.com/prometheus/client_golang/prometheus.NewGaugeVec.func1(0xc000e16500, 0x8, 0x8, 0x0, 0x0)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/gauge.go:152 +0xae
github.com/prometheus/client_golang/prometheus.(*metricMap).getOrCreateMetricWithLabels(0xc0003dd860, 0x1533ebd37670bb8b, 0xc000e81110, 0x0, 0x0, 0x0, 0x0, 0x0)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/vec.go:356 +0x231
github.com/prometheus/client_golang/prometheus.(*metricVec).getMetricWith(0xc0003dd830, 0xc000e81110, 0x20, 0xb2d8d8441a7f0476, 0x20, 0xc0000dfef8)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/vec.go:154 +0xa4
github.com/prometheus/client_golang/prometheus.(*GaugeVec).GetMetricWith(...)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/gauge.go:203
github.com/prometheus/client_golang/prometheus.(*GaugeVec).With(0xc000318818, 0xc000e81110, 0x9bbea8, 0x7)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/gauge.go:226 +0x3c
main.(*exporter).Collect(0xc0003e2a80, 0xc0003e2060)
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/cmd/mq_prometheus/exporter.go:343 +0x1f86
github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1()
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/registry.go:443 +0x19d
created by github.com/prometheus/client_golang/prometheus.(*Registry).Gather
/data/mqm/tools/mq_prometheus_exporter/src/mq-metric-samples-5.1.3/vendor/github.com/prometheus/client_golang/prometheus/registry.go:535 +0xe36

rax 0x0
rbx 0x7fa33770f868
rcx 0xffffffffffffffff
rdx 0x6
rdi 0xf26d
rsi 0xf87d
rbp 0xab9f5d
rsp 0x7f9fd80fd918
r8 0xa
r9 0x7f9fd80fe700
r10 0x8
r11 0x202
r12 0x7f9fbc0008c0
r13 0x0
r14 0xa677f0
r15 0x0
rip 0x7fa33737d387
rflags 0x202
cs 0x33
fs 0x0
gs 0x0

Do you have any idea about the problem?

Thanks

Channel metrics

We have ~7000 active channel connections on some queue managers. The exporter is timing out after 120 seconds on some scrapes. I found these messages in the logs: MQRC_NO_MSG_AVAILABLE [2033].

Any ideas how to fix this?

Not working if the port is different from the default on z/OS

We are trying to monitor MQ on z/OS.
From a Linux server we are configuring:
export MQSERVER = 'APP1.CH.MVS/TCP/zoshost(1415)'
mq_prometheus.sh MQ1

Note: The channel, zoshost, port and queue manager are correct.

The error we get is the following:

AMQ7048: The queue manager name is either not valid or not known.
IBM MQ metrics exporter for Prometheus monitoring
FATA [0030] MQGET: MQCC = MQCC_FAILED [2] MQRC = MQRC_NO_MSG_AVAILABLE [2033]

When we use the default port (1414) it works correctly and we can obtain metrics.
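One thing worth checking, purely as an assumption about the cause: the shell does not allow spaces around `=` in an assignment, so the `export MQSERVER = '...'` line shown above would not actually set the variable, and the client could fall back to its defaults. The assignment needs to be written without spaces:

```shell
# No spaces around '=' -- otherwise the shell treats 'MQSERVER', '=' and the
# value as three separate words and the variable is never set.
export MQSERVER='APP1.CH.MVS/TCP/zoshost(1415)'
echo "$MQSERVER"   # prints APP1.CH.MVS/TCP/zoshost(1415)
```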

Compile issue: github.com/aws/[email protected]: connection refused

Hi Mark.
I am trying to compile your latest mq_metrics v5.0.0 package, which uses Go modules, to make use of the new -ibmmq.QueueSubscriptionFilter option (BTW, thanks for adding it).
But I am getting the following error:
go build -o $GOPATH/bin/mq_prometheus ./cmd/mq_prometheus/*.go
go: github.com/aws/[email protected]: Get "https://proxy.golang.org/github.com/aws/aws-sdk-go/@v/v1.30.18.mod": dial tcp 172.217.16.177:443: connect: connection refused

I am compiling on a Linux machine that has no external access. Is there any workaround?
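Since this repository ships its dependencies in the vendor tree, one possible workaround (a sketch, assuming the build is run from the repository root with the MQ C SDK already set up) is to tell the Go toolchain to use only the vendored modules and never contact a proxy:

```shell
# Build from the vendor tree only; GOPROXY=off makes any attempted
# module download fail immediately instead of dialling proxy.golang.org.
export GOFLAGS=-mod=vendor
export GOPROXY=off
go build -o $GOPATH/bin/mq_prometheus ./cmd/mq_prometheus/
```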

Support for MQ Active/Standby Mode

We have a 2-node MQ cluster - one node is primary (active), one node is secondary (standby). MQ is started with the -x flag, so whichever node comes up first registers as primary, the second one as standby. Failover is automatic if the listener ever crashes on the primary node.

The MQ exporter does not seem to support this: it will start on whichever node is primary, but on whichever node is secondary it will continuously crash when started as a systemd process.

Should the exporter check whether the node is registered as standby and, if so, not crash-loop but instead provide some metric saying it is in standby mode?

Not able to build the MQ_exporter

Hi,

I'm trying to compile the release "Update for MQ V9.1.3".
I have installed the MQ libraries "9.1.0.3-IBM-MQC-Redist-LinuxX64.tar" under /opt/mqm/.

When compiling I get the following error message:
# github.com/ibm-messaging/mq-golang/mqmetric
../../src/github.com/ibm-messaging/mq-golang/mqmetric/channel.go:259:44: undefined: ibmmq.MQCFH
../../src/github.com/ibm-messaging/mq-golang/mqmetric/discover.go:65:26: undefined: ibmmq.MQObject

I used the following commands to compile:

export GOROOT=/usr/local/go
export GOPATH=$HOME/projects/mq_exporter
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
export PATH=$GOPATH:$GOROOT:$PATH
export MQ_INSTALLATION_PATH=/opt/mqm
export CGO_CFLAGS="-I$MQ_INSTALLATION_PATH/inc"
export CGO_LDFLAGS="-L$MQ_INSTALLATION_PATH/lib64 -Wl,-rpath,$MQ_INSTALLATION_PATH/lib64"
export CGO_LDFLAGS_ALLOW="-Wl,-rpath.*"

cd $HOME/projects/mq_exporter/cmd/mq_prometheus
env GOOS=windows GOARCH=amd64 go build *.go
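A likely explanation for the `undefined: ibmmq.MQCFH` errors, offered as an assumption rather than a confirmed diagnosis: setting `GOOS=windows` on a Linux host disables cgo by default, and the `ibmmq` types are only defined through cgo, so they all disappear from the package. Cross-compiling this code needs cgo explicitly enabled plus a C cross-compiler for the Windows target (the mingw-w64 compiler name below is an assumption about the installed toolchain):

```shell
# Keep cgo enabled when cross-compiling; without CGO_ENABLED=1 the
# toolchain silently drops all cgo-backed declarations such as ibmmq.MQCFH.
export CGO_ENABLED=1
export CC=x86_64-w64-mingw32-gcc   # hypothetical cross-compiler name; adjust to your install
GOOS=windows GOARCH=amd64 go build ./cmd/mq_prometheus
```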

MQRC_Q_MGR_NAME_ERROR [2058]

I have built the mq_prometheus Go program on the MQ server and attempted to start it, however I keep getting ...

IBM MQ metrics exporter for Prometheus monitoring
time="2020-09-14T16:43:11-04:00" level=fatal msg="Cannot connect to queue manager MYQMGR : MQCONNX: MQCC =
MQCC_FAILED [2] MQRC = MQRC_Q_MGR_NAME_ERROR [2058]"

I'm guessing that I'm either passing the queue manager name incorrectly (is a flag required?) or that this requires an MQ client connection (and an associated channel on the queue manager)?

Error running monitor in container with non-root user (OpenShift)

Please include the following information in your ticket.

  • mq-metrics-samples version(s) that are affected by this issue.

v5.1.2

  • A small code sample or description that demonstrates the issue.
AMQ6300E: Directory '/usr/games/IBM' could not be created: '☺'.

I hit this problem running in OpenShift, so I also ran the container with Docker as a different user (games) and got the same result (same message) as in OpenShift.

Extra Information: The running container was created with the Dockerfile.run.

Is there a way to run the container with a non-root user?

BR
Facundo

Does this project work for Windows ?


time="2020-08-27T15:21:38-03:00" level=error msg="Must provide a queue manager name to connect to."


Hello, Mark!

First of all, thanks for making these programs available to us. They are really helpful!

We are trying to build the monitor here on a Windows machine, and we finally managed to build mq_prometheus.exe and the yaml file, but when we try to start it we receive the following error in the sysout:
"time="2020-08-27T15:21:38-03:00" level=error msg="Must provide a queue manager name to connect to.""

It's weird, because we are passing the name of the qmgr both in the MQ service and in the yaml file, like this:
global:
  useObjectStatus: true
  useResetQStats: false
  logLevel: INFO
  metaprefix: ""
  pollInterval: 30s
  rediscoverInterval: 1h
  tzOffset: 0h

connection:
  queueManager: Alex

The client connection isn't being used:
clientConnection: false

Could you give us a hand, please?

Best Regards,
Alex

Problem with yaml config file

I can't use the configuration yaml file.

mq-metric-samples is at the newest version.
go version go1.15.2 linux/amd64

IBM MQ metrics exporter for Prometheus monitoring

INFO[0000] Connected to queue manager ORANGE
INFO[0000] IBMMQ Describe started
INFO[0000] Platform is UNIX
panic: http: invalid pattern

goroutine 1 [running]:
net/http.(*ServeMux).Handle(0xd387e0, 0x0, 0x0, 0xa20a40, 0xc00027c5a0)
/opt/go/src/net/http/server.go:2427 +0x305
net/http.Handle(...)
/opt/go/src/net/http/server.go:2476
main.main()
/root/go/src/src/github.com/ibm-messaging/mq-metric-samples/cmd/mq_prometheus/main.go:112 +0x3b1

The default yaml file from the repo was used (github.com/ibm-messaging/mq-metric-samples/config.common.yaml).

Cannot build agent for InfluxDB

I am trying to build the code for Windows and InfluxDB, but I get this error:

github.com/ibm-messaging/mq-golang/v5/mqmetric
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/channel.go:338:44: undefined: ibmmq.MQCFH
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/channel.go:592:32: undefined: ibmmq.MQCFH
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/discover.go:65:26: undefined: ibmmq.MQObject
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/discover.go:1156:39: undefined: ibmmq.PCFParameter
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/mqif.go:93:12: undefined: ibmmq.MQReturn
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/mqif.go:334:41: undefined: ibmmq.MQObject
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/mqif.go:365:39: undefined: ibmmq.MQObject
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/mqif.go:373:46: undefined: ibmmq.MQObject
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/mqif.go:377:50: undefined: ibmmq.MQObject
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/qmgr.go:193:45: undefined: ibmmq.MQCFH
../../../../pkg/mod/github.com/ibm-messaging/mq-golang/[email protected]/mqmetric/qmgr.go:193:45: too many errors

As you can see, the errors are in the ibm-messaging package, and there are too many for me to fix myself. Or am I doing something wrong?

Client Connections

I am trying to use the mq_prometheus exporter, but I want to connect to a remote queue manager through a server-connection channel to monitor MQ. Can I compile your source code as-is to make that work?

Some metrics are not always being exported

  • mq-metrics-samples version: 4.1.3

Hello. I've compiled the MQ Golang and mq-metric-samples packages on a Windows machine.
Everything is working fine, except for a minor problem:

Sometimes, when I hit the Prometheus endpoint, I get all the metrics I want, but at other times I get only a few metrics. For example:
the ibmmq_qmgr_system_cpu_time_percentage metric is not always exported; however, ibmmq_qmgr_connection_count is always there.

The problem is: as my Prometheus is scraping every 30s, it sometimes goes a long time without a lot of metrics.

Is there a way to force the exporter to get all the metrics every time it is hit?
Or is the problem something related to my Windows (or MQ) environment?

Thanks in advance.

Error while building main.go

I tried to build the main.go file using Git Bash in a Windows Go environment.
I am facing the error below:
$ go build -o main.go

# vendor/github.com/ibm-messaging/mq-golang/mqmetric

......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\channel.go:274:44: undefined: ibmmq.MQCFH
......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\channel.go:513:32: undefined: ibmmq.MQCFH
......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\discover.go:65:26: undefined: ibmmq.MQObject
......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\discover.go:1060:39: undefined: ibmmq.PCFParameter
......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\mqif.go:264:41: undefined: ibmmq.MQObject
......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\mqif.go:292:39: undefined: ibmmq.MQObject
......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\mqif.go:300:46: undefined: ibmmq.MQObject
......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\mqif.go:304:50: undefined: ibmmq.MQObject
......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\qmgr.go:175:45: undefined: ibmmq.MQCFH
......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\queue.go:333:42: undefined: ibmmq.MQCFH
......\vendor\github.com\ibm-messaging\mq-golang\mqmetric\queue.go:333:42: too many errors

Adding Docker support for prometheus to make easier adoption

Use Case

Building and running the exporter can be difficult for someone not used to Golang. Moreover, having an image for a Prometheus exporter is becoming standard practice.

Related issue #41

It would be nice to provide a Docker image (e.g. ibm-messaging/prometheus) with the exporter and the libraries it needs.

Environment variables and the config file can be passed at runtime, and a Docker Compose file (or Kubernetes config file) could help out.

  • Would it be valuable for the users of this exporter?
  • Is there any limitation I am not aware of?
  • Is there any issue with licensing providing the image with those libraries?

If you are interested I could propose a PR, let me know!


Building the image

I performed some tests; for example, a Dockerfile like the following would provide a way to spin up the exporter and configure it:

FROM ubuntu:bionic

RUN apt update
RUN apt install -y  curl

RUN  mkdir -p /opt/mqm \
  && chmod a+rx /opt/mqm

ENV RDURL="https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/messaging/mqdev/redist" \
    RDTAR="IBM-MQC-Redist-LinuxX64.tar.gz" \
    VRMF=9.1.5.0

RUN cd /opt/mqm \
 && curl -LO "$RDURL/$VRMF-$RDTAR" \
 && tar -zxf ./*.tar.gz \
 && rm -f ./*.tar.gz 

COPY ./exporter /exporter

CMD [ "/exporter" ]

This could be the second stage of the build Dockerfile you already have in place.

A Travis job (a manual step, or another CI system) could, for example, push the image to the registry, and the user would then only need to run:

$docker run -e MQSERVER=CHAEL1/TCP/'<IP>(<PORT>)'  -v $(pwd)/mq_prometheus.yaml:/mq_prometheus.yaml -it ibm-messaging/prometheus:latest /exporter -f /mq_prometheus.yaml

Leveraging the image

Having a Docker image could open up some interesting paths and help with the configuration phase, for example running it in a Docker Compose setup together with a Prometheus scraper.

It would scrape and forward metrics to a backend after processing them (for example, I tested a solution using https://github.com/newrelic/nri-prometheus):

version: '3.0'
services:
  mq-exporter:
    image: ibm-messaging/prometheus:latest
    container_name: mq-exporter
    restart: unless-stopped
    ports:
      - 9171:9171
    volumes:
      - ./mq_prometheus.yml:/mq_prometheus.yaml
    environment:
      - MQSERVER=CHAEL1/TCP/'<IP>(<PORT>)'
  nri-prometheus:
    image: newrelic/nri-prometheus:2.0
    container_name: nri-prometheus
    restart: unless-stopped
    volumes:
      - ./config.yml:/config.yaml
    environment:
      - LICENSE_KEY=<Newrelic license key>

Channel metrics not collected if not in running status

Thanks for mq-metrics! I think I have found the following issue.
I am using mq-metric-samples/mq_influx and importing into an InfluxDB.
I am collecting metrics on a pattern list of queues and for all channels ("-ibmmq.monitoredChannels=*").
I have an alert in Grafana and Kapacitor to detect when any of the channels has a status that is not Running (value=3).

Observed Behaviour

I do not receive any metrics for channels whose status is not Running (value of 3).
Note: this may be limited to channels of type Requester and Receiver.

Expected Behaviour

I would expect to receive metrics for all the channels on my list regardless of their status (Running, Inactive, ...).

I can somewhat work around this issue in Grafana and Kapacitor alerts by having a query for each channel, detecting missing data, and alerting on that, but it is a very awkward solution.
Note: this workaround doesn't help for some channels that I do not want to alert on when in Inactive status (my receiver channels).

I have looked through the source code and cannot see any obvious code that would say "do not discover or export channels if status != Running".

Post MQ v9.1.1, queue metrics are not loading

I used the new MQ sample metrics along with MQ v9.1.1; after the build, a few MQ queue metrics are not working.

e.g. ibmmq_object_mqget & ibmmq_object_mqput

I used the Prometheus script below to collect the queue and channel metrics.

===================================
#!/bin/sh

# This is used to start the IBM MQ monitoring service for Prometheus

# The queue manager name comes in from the service definition as the
# only command line parameter
qMgr=$1

# Set the environment to ensure we pick up libmqm.so etc
. /opt/mqm/bin/setmqenv -m $qMgr -k

# A list of queues to be monitored is given here.
# It is a set of names or patterns ('*' only at the end, to match how MQ works),
# separated by commas. When no queues match a pattern, it is reported but
# is not fatal.
queues="*"

# An alternative is to have a file containing the patterns, and named
# via the ibmmq.monitoredQueuesFile option.

# See config.go for all recognised flags

# A list of channels to be monitored is given here.
channels="*"

# Start via "exec" so the pid remains the same. The queue manager can
# then check the existence of the service and use the MQ_SERVER_PID value
# to kill it on shutdown.

#exec /usr/local/bin/mq_prometheus -ibmmq.queueManager=$qMgr -ibmmq.monitoredQueues="$queues" -log.level=error
exec /usr/local/bin/mq_prometheus -ibmmq.queueManager=$qMgr -ibmmq.monitoredQueues="$queues" -ibmmq.monitoredChannels="$channels" -log.level=error -ibmmq.qStatus=true

===========================================

Can this be used to monitor MQ in Containers?


Error in Channel.go while compiling

Using the Merged master code base:

mq-metric-samples/vendor/github.com/ibm-messaging/mq-golang/mqmetric/channel.go:312:25: multiple-value regexp.MatchString() in single-value context
mq-metric-samples/vendor/github.com/ibm-messaging/mq-golang/mqmetric/channel.go:312:26: undefined: subkey

MQ Monitoring - Docker image

Hi - I have a couple of questions regarding this monitoring.

  1. There is not enough information regarding the Docker implementation. I have successfully built the Docker image, but I am not sure how to run it or what kind of parameters it takes. I am basically running the monitor in client mode.

  2. I have multiple queue managers in an environment. Does this suite support multiple queue managers?

  3. What is the preferred IDE for viewing this monitoring code ?


Distributed MQ not populating exported metrics

Hi,

We are trying to run the MQ exporter against our distributed queue manager and are facing some difficulty.

I have enabled TRACE debug and can see no clues in the output.

IBM MQ metrics exporter for Prometheus monitoring

Warning: Data from 'RESET QSTATS' has been requested.
Ensure no other monitoring applications are also using that command.

DEBU[0000] Monitored topics are '*'
DEBU[0000] Connecting to queue manager MQMBDEV01
INFO[0000] Connected to queue manager  MQMBDEV01
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_CMCONNECTOR/ADMININQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_CMCONNECTOR/ADMINOUTQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_CMCONNECTOR/DELIVERYQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_CMCONNECTOR/FAULTQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_CMCONNECTOR/REQUESTQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_CMCONNECTOR/RESPONSEQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_CMCONNECTOR/SYNCHRONOUSREQUESTQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_CMCONNECTOR/SYNCHRONOUSRESPONSEQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_JTEXTCONNECTOR/ADMININQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_JTEXTCONNECTOR/ADMINOUTQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_JTEXTCONNECTOR/DELIVERYQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_JTEXTCONNECTOR/FAULTQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_JTEXTCONNECTOR/REQUESTQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_JTEXTCONNECTOR/RESPONSEQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_JTEXTCONNECTOR/SYNCHRONOUSREQUESTQUEUE
ERRO[0000] Warning: Cannot subscribe to queue containing '/': AC01_JTEXTCONNECTOR/SYNCHRONOUSRESPONSEQUEUE
INFO[0004] Warning: Maximum queue depth on SYSTEM.DEFAULT.MODEL.QUEUE may be too low. Current value = 5000
DEBU[0004] About to allocate gauges
DEBU[0004] Created gauge for 'qmgr_mq_trace_file_system_in_use_bytes' from 'MQ trace file system - bytes in use'
DEBU[0004] Created gauge for 'qmgr_mq_trace_file_system_free_space_percentage' from 'MQ trace file system - free space'
DEBU[0004] Created gauge for 'qmgr_mq_errors_file_system_in_use_bytes' from 'MQ errors file system - bytes in use'
DEBU[0004] Created gauge for 'qmgr_mq_errors_file_system_free_space_percentage' from 'MQ errors file system - free space'
DEBU[0004] Created gauge for 'qmgr_mq_fdc_file_count' from 'MQ FDC file count'
DEBU[0004] Created gauge for 'qmgr_queue_manager_file_system_in_use_bytes' from 'Queue Manager file system - bytes in use'
DEBU[0004] Created gauge for 'qmgr_queue_manager_file_system_free_space_percentage' from 'Queue Manager file system - free space'
DEBU[0004] Created gauge for 'qmgr_log_logical_written_bytes' from 'Log - logical bytes written'
DEBU[0004] Created gauge for 'qmgr_log_current_primary_space_in_use_percentage' from 'Log - current primary space in use'
DEBU[0004] Created gauge for 'qmgr_log_workload_primary_space_utilization_percentage' from 'Log - workload primary space utilization'
DEBU[0004] Created gauge for 'qmgr_log_required_for_media_recovery_bytes' from 'Log - bytes required for media recovery'
DEBU[0004] Created gauge for 'qmgr_log_max_bytes' from 'Log - bytes max'
DEBU[0004] Created gauge for 'qmgr_log_file_system_in_use_bytes' from 'Log file system - bytes in use'
DEBU[0004] Created gauge for 'qmgr_log_file_system_max_bytes' from 'Log file system - bytes max'
DEBU[0004] Created gauge for 'qmgr_log_physical_written_bytes' from 'Log - physical bytes written'
DEBU[0004] Created gauge for 'qmgr_log_occupied_by_reusable_extents_bytes' from 'Log - bytes occupied by reusable extents'
DEBU[0004] Created gauge for 'qmgr_log_occupied_by_extents_waiting_to_be_archived_bytes' from 'Log - bytes occupied by extents waiting to be archived'
DEBU[0004] Created gauge for 'qmgr_log_write_size_bytes' from 'Log - write size'
DEBU[0004] Created gauge for 'qmgr_log_in_use_bytes' from 'Log - bytes in use'
DEBU[0004] Created gauge for 'qmgr_log_write_latency_seconds' from 'Log - write latency'
DEBU[0004] Created gauge for 'qmgr_persistent_message_mqput1_count' from 'Persistent message MQPUT1 count'
DEBU[0004] Created gauge for 'qmgr_failed_mqput1_count' from 'Failed MQPUT1 count'
DEBU[0004] Created gauge for 'qmgr_put_non_persistent_messages_bytes' from 'Put non-persistent messages - byte count'
DEBU[0004] Created gauge for 'qmgr_put_persistent_messages_bytes' from 'Put persistent messages - byte count'
DEBU[0004] Created gauge for 'qmgr_mqstat_count' from 'MQSTAT count'
DEBU[0004] Created gauge for 'qmgr_non_persistent_message_mqput_count' from 'Non-persistent message MQPUT count'
DEBU[0004] Created gauge for 'qmgr_persistent_message_mqput_count' from 'Persistent message MQPUT count'
DEBU[0004] Created gauge for 'qmgr_non_persistent_message_mqput1_count' from 'Non-persistent message MQPUT1 count'
DEBU[0004] Created gauge for 'qmgr_interval_mqput_mqput1_total_count' from 'Interval total MQPUT/MQPUT1 count'
DEBU[0004] Created gauge for 'qmgr_interval_mqput_mqput1_total_bytes' from 'Interval total MQPUT/MQPUT1 byte count'
DEBU[0004] Created gauge for 'qmgr_failed_mqput_count' from 'Failed MQPUT count'
DEBU[0004] Created gauge for 'qmgr_non_persistent_message_browse_bytes' from 'Non-persistent message browse - byte count'
DEBU[0004] Created gauge for 'qmgr_failed_mqcb_count' from 'Failed MQCB count'
DEBU[0004] Created gauge for 'qmgr_interval_destructive_get_total_count' from 'Interval total destructive get- count'
DEBU[0004] Created gauge for 'qmgr_non_persistent_message_destructive_get_count' from 'Non-persistent message destructive get - count'
DEBU[0004] Created gauge for 'qmgr_got_non_persistent_messages_bytes' from 'Got non-persistent messages - byte count'
DEBU[0004] Created gauge for 'qmgr_got_persistent_messages_bytes' from 'Got persistent messages - byte count'
DEBU[0004] Created gauge for 'qmgr_persistent_message_browse_bytes' from 'Persistent message browse - byte count'
DEBU[0004] Created gauge for 'qmgr_mqcb_count' from 'MQCB count'
DEBU[0004] Created gauge for 'qmgr_persistent_message_destructive_get_count' from 'Persistent message destructive get - count'
DEBU[0004] Created gauge for 'qmgr_non_persistent_message_browse_count' from 'Non-persistent message browse - count'
DEBU[0004] Created gauge for 'qmgr_failed_browse_count' from 'Failed browse count'
DEBU[0004] Created gauge for 'qmgr_expired_message_count' from 'Expired message count'
DEBU[0004] Created gauge for 'qmgr_mqctl_count' from 'MQCTL count'
DEBU[0004] Created gauge for 'qmgr_interval_destructive_get_total_bytes' from 'Interval total destructive get - byte count'
DEBU[0004] Created gauge for 'qmgr_failed_mqget_count' from 'Failed MQGET - count'
DEBU[0004] Created gauge for 'qmgr_persistent_message_browse_count' from 'Persistent message browse - count'
DEBU[0004] Created gauge for 'qmgr_purged_queue_count' from 'Purged queue count'
DEBU[0004] Created gauge for 'qmgr_rollback_count' from 'Rollback count'
DEBU[0004] Created gauge for 'qmgr_commit_count' from 'Commit count'
DEBU[0004] Created gauge for 'qmgr_create_durable_subscription_count' from 'Create durable subscription count'
DEBU[0004] Created gauge for 'qmgr_create_non_durable_subscription_count' from 'Create non-durable subscription count'
DEBU[0004] Created gauge for 'qmgr_non_durable_subscriber_high_water_mark' from 'Non-durable subscriber - high water mark'
DEBU[0004] Created gauge for 'qmgr_mqsubrq_count' from 'MQSUBRQ count'
DEBU[0004] Created gauge for 'qmgr_durable_subscriber_low_water_mark' from 'Durable subscriber - low water mark'
DEBU[0004] Created gauge for 'qmgr_delete_non_durable_subscription_count' from 'Delete non-durable subscription count'
DEBU[0004] Created gauge for 'qmgr_subscription_delete_failure_count' from 'Subscription delete failure count'
DEBU[0004] Created gauge for 'qmgr_failed_mqsubrq_count' from 'Failed MQSUBRQ count'
DEBU[0004] Created gauge for 'qmgr_durable_subscriber_high_water_mark' from 'Durable subscriber - high water mark'
DEBU[0004] Created gauge for 'qmgr_alter_durable_subscription_count' from 'Alter durable subscription count'
DEBU[0004] Created gauge for 'qmgr_resume_durable_subscription_count' from 'Resume durable subscription count'
DEBU[0004] Created gauge for 'qmgr_failed_create_alter_resume_subscription_count' from 'Failed create/alter/resume subscription count'
DEBU[0004] Created gauge for 'qmgr_delete_durable_subscription_count' from 'Delete durable subscription count'
DEBU[0004] Created gauge for 'qmgr_non_durable_subscriber_low_water_mark' from 'Non-durable subscriber - low water mark'
DEBU[0004] Created gauge for 'qmgr_topic_mqput_mqput1_interval_total' from 'Topic MQPUT/MQPUT1 interval total'
DEBU[0004] Created gauge for 'qmgr_interval_topic_put_total' from 'Interval total topic bytes put'
DEBU[0004] Created gauge for 'qmgr_published_to_subscribers_message_count' from 'Published to subscribers - message count'
DEBU[0004] Created gauge for 'qmgr_published_to_subscribers_bytes' from 'Published to subscribers - byte count'
DEBU[0004] Created gauge for 'qmgr_non_persistent_topic_mqput_mqput1_count' from 'Non-persistent - topic MQPUT/MQPUT1 count'
DEBU[0004] Created gauge for 'qmgr_persistent_topic_mqput_mqput1_count' from 'Persistent - topic MQPUT/MQPUT1 count'
DEBU[0004] Created gauge for 'qmgr_failed_topic_mqput_mqput1_count' from 'Failed topic MQPUT/MQPUT1 count'
DEBU[0004] Created gauge for 'qmgr_mqconn_mqconnx_count' from 'MQCONN/MQCONNX count'
DEBU[0004] Created gauge for 'qmgr_failed_mqconn_mqconnx_count' from 'Failed MQCONN/MQCONNX count'
DEBU[0004] Created gauge for 'qmgr_concurrent_connections_high_water_mark' from 'Concurrent connections - high water mark'
DEBU[0004] Created gauge for 'qmgr_mqdisc_count' from 'MQDISC count'
DEBU[0004] Created gauge for 'qmgr_mqopen_count' from 'MQOPEN count'
DEBU[0004] Created gauge for 'qmgr_failed_mqopen_count' from 'Failed MQOPEN count'
DEBU[0004] Created gauge for 'qmgr_mqclose_count' from 'MQCLOSE count'
DEBU[0004] Created gauge for 'qmgr_failed_mqclose_count' from 'Failed MQCLOSE count'
DEBU[0004] Created gauge for 'qmgr_mqinq_count' from 'MQINQ count'
DEBU[0004] Created gauge for 'qmgr_failed_mqinq_count' from 'Failed MQINQ count'
DEBU[0004] Created gauge for 'qmgr_mqset_count' from 'MQSET count'
DEBU[0004] Created gauge for 'qmgr_failed_mqset_count' from 'Failed MQSET count'
DEBU[0004] Created gauge for 'queue_mqopen_count' from 'MQOPEN count'
DEBU[0004] Created gauge for 'queue_mqclose_count' from 'MQCLOSE count'
DEBU[0004] Created gauge for 'queue_mqinq_count' from 'MQINQ count'
DEBU[0004] Created gauge for 'queue_mqset_count' from 'MQSET count'
DEBU[0004] Created gauge for 'queue_avoided_percentage' from 'queue avoided bytes'
DEBU[0004] Created gauge for 'queue_lock_contention_percentage' from 'lock contention'
DEBU[0004] Created gauge for 'queue_mqput1_persistent_message_count' from 'MQPUT1 persistent message count'
DEBU[0004] Created gauge for 'queue_persistent_bytes' from 'persistent byte count'
DEBU[0004] Created gauge for 'queue_mqput_non_persistent_message_count' from 'MQPUT non-persistent message count'
DEBU[0004] Created gauge for 'queue_mqput_persistent_message_count' from 'MQPUT persistent message count'
DEBU[0004] Created gauge for 'queue_mqput1_non_persistent_message_count' from 'MQPUT1 non-persistent message count'
DEBU[0004] Created gauge for 'queue_non_persistent_bytes' from 'non-persistent byte count'
DEBU[0004] Created gauge for 'queue_avoided_puts_percentage' from 'queue avoided puts'
DEBU[0004] Created gauge for 'queue_mqput_mqput1_count' from 'MQPUT/MQPUT1 count'
DEBU[0004] Created gauge for 'queue_mqput_bytes' from 'MQPUT byte count'
DEBU[0004] Created gauge for 'queue_destructive_mqget_persistent_bytes' from 'destructive MQGET persistent byte count'
DEBU[0004] Created gauge for 'queue_mqget_browse_persistent_bytes' from 'MQGET browse persistent byte count'
DEBU[0004] Created gauge for 'queue_expired_messages' from 'messages expired'
DEBU[0004] Created gauge for 'queue_destructive_mqget_non_persistent_message_count' from 'destructive MQGET non-persistent message count'
DEBU[0004] Created gauge for 'queue_destructive_mqget_non_persistent_bytes' from 'destructive MQGET non-persistent byte count'
DEBU[0004] Created gauge for 'queue_purged_count' from 'queue purged count'
DEBU[0004] Created gauge for 'queue_mqget_bytes' from 'MQGET byte count'
DEBU[0004] Created gauge for 'queue_mqget_browse_non_persistent_bytes' from 'MQGET browse non-persistent byte count'
DEBU[0004] Created gauge for 'queue_mqget_browse_non_persistent_message_count' from 'MQGET browse non-persistent message count'
DEBU[0004] Created gauge for 'queue_mqget_browse_persistent_message_count' from 'MQGET browse persistent message count'
DEBU[0004] Created gauge for 'queue_mqget_count' from 'MQGET count'
DEBU[0004] Created gauge for 'queue_destructive_mqget_persistent_message_count' from 'destructive MQGET persistent message count'
DEBU[0004] Created gauge for 'queue_average_queue_time_seconds' from 'average queue time'
DEBU[0004] Created gauge for 'queue_depth' from 'Queue depth'
DEBU[0004] Created gauge for 'qmgr_ram_total_estimate_for_queue_manager_bytes' from 'RAM total bytes - estimate for queue manager'
DEBU[0004] Created gauge for 'qmgr_user_cpu_time_estimate_for_queue_manager_percentage' from 'User CPU time - percentage estimate for queue manager'
DEBU[0004] Created gauge for 'qmgr_system_cpu_time_estimate_for_queue_manager_percentage' from 'System CPU time - percentage estimate for queue manager'
DEBU[0004] Created gauge for 'qmgr_user_cpu_time_percentage' from 'User CPU time percentage'
DEBU[0004] Created gauge for 'qmgr_system_cpu_time_percentage' from 'System CPU time percentage'
DEBU[0004] Created gauge for 'qmgr_cpu_load_one_minute_average_percentage' from 'CPU load - one minute average'
DEBU[0004] Created gauge for 'qmgr_cpu_load_five_minute_average_percentage' from 'CPU load - five minute average'
DEBU[0004] Created gauge for 'qmgr_cpu_load_fifteen_minute_average_percentage' from 'CPU load - fifteen minute average'
DEBU[0004] Created gauge for 'qmgr_ram_free_percentage' from 'RAM free percentage'
DEBU[0004] Created gauge for 'qmgr_ram_total_bytes' from 'RAM total bytes'
DEBU[0004] PubSub Gauges allocated
DEBU[0004] Created gauge for 'channel_name'
DEBU[0004] Created gauge for 'channel_connname'
DEBU[0004] Created gauge for 'channel_substate'
DEBU[0004] Created gauge for 'channel_batchsz_long'
DEBU[0004] Created gauge for 'channel_xmitq_time_long'
DEBU[0004] Created gauge for 'channel_type'
DEBU[0004] Created gauge for 'channel_instance_type'
DEBU[0004] Created gauge for 'channel_status_squash'
DEBU[0004] Created gauge for 'channel_xmitq_time_short'
DEBU[0004] Created gauge for 'channel_time_since_msg'
DEBU[0004] Created gauge for 'channel_attribute_max_inst'
DEBU[0004] Created gauge for 'channel_attribute_max_instc'
DEBU[0004] Created gauge for 'channel_rqmname'
DEBU[0004] Created gauge for 'channel_messages'
DEBU[0004] Created gauge for 'channel_status'
DEBU[0004] Created gauge for 'channel_nettime_long'
DEBU[0004] Created gauge for 'channel_batchsz_short'
DEBU[0004] Created gauge for 'channel_jobname'
DEBU[0004] Created gauge for 'channel_batches'
DEBU[0004] Created gauge for 'channel_nettime_short'
DEBU[0004] ChannelGauges allocated
DEBU[0004] Created gauge for 'queue_attribute_usage'
DEBU[0004] Created gauge for 'queue_qtime_short'
DEBU[0004] Created gauge for 'queue_time_since_put'
DEBU[0004] Created gauge for 'queue_time_since_get'
DEBU[0004] Created gauge for 'queue_oldest_message_age'
DEBU[0004] Created gauge for 'queue_output_handles'
DEBU[0004] Created gauge for 'queue_attribute_max_depth'
DEBU[0004] Created gauge for 'queue_name'
DEBU[0004] Created gauge for 'queue_input_handles'
DEBU[0004] Created gauge for 'queue_qtime_long'
DEBU[0004] Queue  Gauges allocated
DEBU[0004] Created gauge for 'topic_messages_received'
DEBU[0004] Created gauge for 'topic_publisher_count'
DEBU[0004] Created gauge for 'topic_subscriber_count'
DEBU[0004] Created gauge for 'topic_time_since_msg_published'
DEBU[0004] Created gauge for 'topic_time_since_msg_received'
DEBU[0004] Created gauge for 'topic_name'
DEBU[0004] Created gauge for 'topic_type'
DEBU[0004] Created gauge for 'topic_messages_published'
DEBU[0004] Topic  Gauges allocated
DEBU[0004] Created gauge for 'subscription_type'
DEBU[0004] Created gauge for 'subscription_time_since_message_published'
DEBU[0004] Created gauge for 'subscription_messsages_received'
DEBU[0004] Created gauge for 'subscription_subid'
DEBU[0004] Created gauge for 'subscription_name'
DEBU[0004] Created gauge for 'subscription_topic'
DEBU[0004] Subscription Gauges allocated
DEBU[0004] Created gauge for 'qmgr_connection_count'
DEBU[0004] Created gauge for 'qmgr_channel_initiator_status'
DEBU[0004] Created gauge for 'qmgr_command_server_status'
DEBU[0004] Created gauge for 'qmgr_name'
DEBU[0004] Created gauge for 'qmgr_uptime'
DEBU[0004] QMgr   Gauges allocated
DEBU[0004] Created gauge for 'bufferpool_buffers_free_percent'
DEBU[0004] Created gauge for 'bufferpool_buffers_total'
DEBU[0004] Created gauge for 'bufferpool_id'
DEBU[0004] Created gauge for 'bufferpool_location'
DEBU[0004] Created gauge for 'bufferpool_pageclass'
DEBU[0004] Created gauge for 'bufferpool_buffers_free'
DEBU[0004] Created gauge for 'pageset_pages_persistent'
DEBU[0004] Created gauge for 'pageset_status'
DEBU[0004] Created gauge for 'pageset_expansion_count'
DEBU[0004] Created gauge for 'pageset_id'
DEBU[0004] Created gauge for 'pageset_bufferpool'
DEBU[0004] Created gauge for 'pageset_pages_total'
DEBU[0004] Created gauge for 'pageset_pages_unused'
DEBU[0004] Created gauge for 'pageset_pages_nonpersistent'
DEBU[0004] BP/PS  Gauges allocated
INFO[0004] IBMMQ Describe started
INFO[0004] Platform is UNIX
INFO[0004] Listening on port 9176
INFO[0047] IBMMQ Collect started
DEBU[0047] Polling for object status
DEBU[0050] Collected all channel status
DEBU[0050] Collected all topic status


I am running the exporter with the following parameters

queues="*"
channels="*"

ARGS="-ibmmq.queueManager=$qMgr"
ARGS="$ARGS -ibmmq.monitoredQueues=$queues"
ARGS="$ARGS -ibmmq.monitoredChannels=$channels"
ARGS="$ARGS -ibmmq.monitoredTopics=*"
ARGS="$ARGS -ibmmq.monitoredSubscriptions=*"
ARGS="$ARGS -rediscoverInterval=1h"
ARGS="$ARGS -ibmmq.client=true"
ARGS="$ARGS -log.level=trace"
ARGS="$ARGS -ibmmq.useStatus=true"
ARGS="$ARGS -ibmmq.resetQStats=true"
ARGS="$ARGS -ibmmq.httpListenPort=9176"

export MQS_NO_SYNC_SIGNAL_HANDLING=true

exec ./mq_prometheus $ARGS

We can see via MQ Explorer that our messages are being consumed by the MQ exporter.

We cannot see the metrics page at :9176/metrics as the page never finishes loading.

Can you please give us some hints as to what the problem could be and what we could try next?

Problems compiling mq_influx

Hi,

I'm having problems trying to get a compiled version of mq_influx. I'm using:
go version go1.13.6 linux/amd64

I've tried multiple things but I'm stuck on this error:

go build -o bin/mq_influx src/ibm-messaging/mq-metric-samples/cmd/mq_influx/*.go

command-line-arguments

src/ibm-messaging/mq-metric-samples/cmd/mq_influx/main.go:54:70: cannot use &config.cf.CC (type *"github.com/ibm-messaging/mq-metric-samples/vendor/github.com/ibm-messaging/mq-golang/mqmetric".ConnectionConfig) as type *"ibm-messaging/mq-metric-samples/vendor/github.com/ibm-messaging/mq-golang/mqmetric".ConnectionConfig in argument to "ibm-messaging/mq-metric-samples/vendor/github.com/ibm-messaging/mq-golang/mqmetric".InitConnection

Is it related to the Go version I'm using? Or is it something else I'm missing? I've followed the related README files.

Regards.

Label for hostname

Hi,

I have queue managers running on Windows Server as resources in a multi-node failover cluster.
I'm using the metrics in Prometheus and dashboards in Grafana, and these are working great.
But I have one problem with Alertmanager. Alerts from Alertmanager are sent to our alerting system, where we need to know the hostname on which a queue manager is running at any given moment.
Could you add the hostname as one of the labels on all metrics (like platform and qmgr)? Maybe with an on/off switch in the configuration for those who don't need it?

Thanks

How to monitor multiple queue managers with one instance of mq_prometheus

Please include the following information in your ticket.

  • mq-metrics-samples version(s) that are affected by this issue.
  • A small code sample or description that demonstrates the issue.

Instead of running multiple instances of mq_prometheus, is there a way to aggregate the data from multiple queue managers and have them all exported to a single :9157/metrics webpage?
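To illustrate, the multiple-instance approach we are trying to avoid looks roughly like this (queue manager names and ports are placeholders; the flags are the ones the exporter already documents):

```shell
# One exporter process per queue manager, each scraped on its own port.
port=9157
for qmgr in QM1 QM2 QM3; do
  ./mq_prometheus -ibmmq.queueManager=$qmgr -ibmmq.httpListenPort=$port &
  port=$((port + 1))
done
```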

Checking QMGR Status

Hi,
Which metric records the current status of the queue manager?
Right now I'm checking whether it is running by doing something like "count(ibmmq_qmgr_commit_count)".
Is there another way?
I'm using c972b14 (the latest).

BR

IP binding address

Hi, thanks for the amazing work.

This is not an issue.
I'm using the MQ exporter for Prometheus on a multi-homed host; is it possible to bind the exporter to a specific IP address?

Thank you.

Building on z/OS (USS)

I am trying to install mq_prometheus to run in z/OS UNIX System Services for monitoring queue managers running on the mainframe. I have successfully built go and gofmt using the z-os/go repo https://github.com/zos-go/go

The version of Go in that repo is quite outdated (1.6), so I have requested that the port be updated. In the meantime, when I try to build mq_prometheus (latest pulled version), I get the following error:

go build -o bin/zos_s390x/mq_prometheus src/github.com/ibm-messaging/mq-metric-samples/cmd/mq_prometheus/*.go

src/github.com/ibm-messaging/mq-metric-samples/cmd/mq_prometheus/exporter.go:33:2: no buildable Go source files in /fidglbl/perfwas/go-zos/src/github.com/ibm-messaging/mq-metric-samples/vendor/github.com/ibm-messaging/mq-golang/ibmmq

Not able to get info for shared queues

name = "github.com/ibm-messaging/mq-golang"
version = "4.1.0"

Using MQ client: MQSeriesRuntime-9.0.0-7.x86_64.rpm
Connecting to IBM MQ for z/OS v9.0.0.0

I am able to get info when the QSG disposition is 'Queue manager', but not when it is 'Shared'. Is this not available over a client connection?

Client connections in YAML config

Hi Mark,

Is it possible to add the host, port and channel information for client connections to the exporter's YAML config?

All of the queue managers that I'd like to connect to are remote on z/OS and reside on multiple hosts. I'm assuming that I will need to run one program for each queue manager I want to connect to, but I don't know how to specify a unique MQSERVER environment variable for each connection.
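For context, what I'd be doing today is setting MQSERVER (ChannelName/TransportType/ConnectionName) per process, sketched below; the hostnames and channel name are placeholders:

```shell
# One exporter process per remote z/OS queue manager, each with its own
# MQSERVER value and its own HTTP listen port.
export MQSERVER="SYSTEM.DEF.SVRCONN/TCP/zoshost1.example.com(1414)"
./mq_prometheus -ibmmq.queueManager=QM1 -ibmmq.client=true -ibmmq.httpListenPort=9157 &

export MQSERVER="SYSTEM.DEF.SVRCONN/TCP/zoshost2.example.com(1414)"
./mq_prometheus -ibmmq.queueManager=QM2 -ibmmq.client=true -ibmmq.httpListenPort=9158 &
```

But an MQSERVER variable exported in one shell applies to everything started from it, which is why per-connection settings in the YAML config would be much cleaner.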

Thanks!

mq_prometheus scale up issue

We are trying to leverage mq_prometheus for monitoring and statistics purposes against MQ 9 queue managers. The mq-metric-samples code has been downloaded from GitHub (master copy):

https://github.com/ibm-messaging/mq-metric-samples

mq_prometheus has been built for RHEL 7, and it successfully connects to the queue manager instance and subscribes to the system topics to collect the stats.

However, we observed a scaling problem when trying to monitor a large number of queues using the supported wildcard pattern (asterisk).

For example:

/local/umbuild/prometheus/mq_prometheus -ibmmq.queueManager=NAUMQ02 -ibmmq.monitoredQueuesFile=/var/mqm/qmgrs/NAUMQ02/config/mq_prometheus_queues -log.level=error

The queues to be monitored are listed in the file /var/mqm/qmgrs/NAUMQ02/config/mq_prometheus_queues as follows:

cat /var/mqm/qmgrs/NAUMQ02/config/mq_prometheus_queues

PERFTEST.QUEUE.*
PERFTEST.QUEUE.666
PERFTEST.QUEUE.777

In our PoC environment, the pattern PERFTEST.QUEUE.* resolves to 2000 queues. In this case mq_prometheus fails to inquire on all the queues due to an insufficient output buffer, and gets a TRUNCATED message.

Looking at the code in github.com/ibm-messaging/mq-golang/mqmetric/discover.go, where the discover() function is implemented, the code allocates a buffer of only around 32KB, and a comment there also mentions hitting the TRUNCATED message error.

I tried increasing the buffer size to accommodate more than 2000 queues, recompiled mq_prometheus and deployed it.

This resolves the TRUNCATED message error; however, mq_prometheus then hits the MAXHANDS limit (default 256) at the queue manager level, because mq_prometheus opens at least 4 subscription handles (one each for OPENCLOSE, PUT, GET, SETINQ) for each queue to be monitored.

TOPICSTR($SYS/MQ/INFO/QMGR/NAUMQ02/Monitor/STATQ/PERFTEST.QUEUE.1001/GET)
TOPICSTR($SYS/MQ/INFO/QMGR/NAUMQ02/Monitor/STATQ/PERFTEST.QUEUE.1001/OPENCLOSE)
TOPICSTR($SYS/MQ/INFO/QMGR/NAUMQ02/Monitor/STATQ/PERFTEST.QUEUE.1001/PUT)
TOPICSTR($SYS/MQ/INFO/QMGR/NAUMQ02/Monitor/STATQ/PERFTEST.QUEUE.1001/INQSET)

To overcome this, we changed MAXHANDS to 10000, and mq_prometheus then succeeds. Related to this, we have a couple of questions.

1. Is there a plan (in the near term) to address the TRUNCATED message issue, e.g. by allocating the memory in a loop or by providing an environment variable to set the buffer size?

2. What is the implication of increasing MAXHANDS to a higher value, in terms of system resource configuration or kernel parameter settings? MAXHANDS is scoped per connection, but in a real production environment where thousands of client connections are active, we need to make sure those connections don't suffer in terms of resources.

3. Can the number of handles be reduced by subscribing at a higher level of the topic string? For example, instead of the 4 subscriptions listed above, subscribe to $SYS/MQ/INFO/QMGR/NAUMQ02/Monitor/STATQ/PERFTEST.QUEUE.1001 to get all 4 key/value pairs and process them.
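For reference, the MAXHANDS change mentioned above was applied with runmqsc (queue manager name as in our environment; existing handles are unaffected until reconnect):

```shell
# Raise the per-connection handle limit on the queue manager
echo "ALTER QMGR MAXHANDS(10000)" | runmqsc NAUMQ02
```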

Certificate expiration date

Hi,
Is it possible to add a metric that checks certificate expiration dates in the MQ keystore?
The best solution would be to check the dates for both personal and trusted certificates.

BR
Piotrek
