jmeter-prometheus-plugin's People

Contributors

chiabre, dependabot[bot], giovannipaologibilisco, johndwalker, johrstrom, keydam, shiunu, stephan3555

jmeter-prometheus-plugin's Issues

NonGUI mode: The JVM should have exited but did not

Version 0.5.0.
After the test I get:

Tidying up ...    @ Tue Aug 27 18:16:35 MSK 2019 (1566918995756)
... end of run
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[pool-2-thread-1,5,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#park
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#await
java.util.concurrent.LinkedBlockingQueue#take
java.util.concurrent.ThreadPoolExecutor#getTask
java.util.concurrent.ThreadPoolExecutor#runWorker
java.util.concurrent.ThreadPoolExecutor$Worker#run
java.lang.Thread#run

Thread[DestroyJavaVM,5,main], stackTrace:

Workaround: set jmeterengine.stopfail.system.exit=true

If I stop Prometheus from sending scrape (pull) requests to the jmeter exporter, everything is fine (though, of course, there are then no metrics in Prometheus).

threaddump-1566919071184.txt

No .jar file released

Hi, first of all, thank you for your work!
Do you think it is possible to publish the final .jar file in the "Releases" section?
This would make life a little easier for non-developers! :)

Regards.

Could not find Prometheus Listener in Jmeter

Could not find the Prometheus Listener in JMeter.
The Maven build was successful, and I placed the created shaded and original jars in $JMETER_HOME\lib.
I started the JMeter application and checked the Listeners, but could not find it.
There are no error logs in the JMeter application.

Environment:
Win 10 Enterprise, 64-bit
Java version "1.8.0_151"
Apache JMeter 4.0 and 3.3

compile issue

The classes below are not available after importing the pom.xml:

import org.apache.jmeter.assertions.AssertionResult;
import org.apache.jmeter.engine.util.NoThreadClone;
import org.apache.jmeter.reporters.AbstractListenerElement;

Deploy to mvn repository

Could you upload the releases to a remote Maven repository such as https://mvnrepository.com so they can easily be downloaded at build time?

Distributed tests only run once

Originally posted in #62. I'm moving it here because it seems to be a whole new thing and possibly a bug in this library. Originally opened there by @jrodguitar.

I set up distributed JMeter, Prometheus, and my pods (which I'm going to run some load against).
Prometheus shows the endpoints as DOWN before I run any tests.
Prometheus shows the endpoints as UP while the jmeter tests are running.
When the jmeter tests finish, Prometheus shows the endpoints as DOWN.
Up to this point everything is fine.

If I do another run of the distributed JMeter test, the endpoints never become UP again.
Even a curl localhost:9270 inside the pod seems unreachable,
but running the command kubectl describe endpoints jmeter-slaves-svc
shows that port 9270 is properly listed.

Any ideas?
The workaround is to redeploy my jmeter slave pods over and over, but that's not ideal.

Send zeros for failure false and true

I think the plugin needs to send zero values in some cases. Specifically, when the first observation (value) for metric{sampler_name="spam", failure="false"} is created, a matching observation metric{sampler_name="spam", failure="true"} 0 should be emitted.

Sending zero is not a common thing in Prometheus; usually we send nothing to indicate that no observations have been made. That is correct for, say, sampler_name, because having observations on sampler spam does not imply that sampler ham exists.

However, assertion success and failure are related in the human mind: for assertions, we expect the total number of assertions a_t to be the sum of successes a_s and failures a_f. Unlike the samplers, having any number of successes implies that we also have failures (possibly 0), and vice versa.

Currently, the expectation holds only if a_s > 0 and a_f > 0. If no failures have been found yet, a_t != a_s + <undefined>. The solution is to set a_f = 0 whenever a_s is set or modified and a_f has not been set yet.
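
A minimal sketch of that fix with the Prometheus Java simpleclient (the metric name here is hypothetical; the label values are the ones from the example above). Touching the complementary child is enough, because labels() creates the time series with an initial value of 0:

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Counter;

public class ZeroFailureSketch {
    public static void main(String[] args) {
        CollectorRegistry registry = new CollectorRegistry();
        Counter assertions = Counter.build()
                .name("jmeter_assertions_total")          // hypothetical metric name
                .help("assertion outcomes by sampler")
                .labelNames("sampler_name", "failure")
                .register(registry);

        // First observation for failure="false" on sampler "spam" ...
        assertions.labels("spam", "false").inc();
        // ... and the proposed fix: touch the matching failure="true" child so it
        // is created at 0 and both series appear in the same scrape.
        assertions.labels("spam", "true");
    }
}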

Use case: I have a Grafana panel with the number of failures in my nightly tests, based on metric{failures="true"}. If all goes very well, the number shows up as "n/a" because no failures have been found. Switching to display the number of successes will go wrong when all tests fail.

Error in compiling on Windows during Surefire tests

Trying to compile (mvn clean package -e) on Windows Server 2016 with Java 11.0.3 fails during unit test executions with the following error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.20.1:test failed.: NullPointerException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.20.1:test (default-test) on project jmeter-prometheus-plugin: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.20.1:test failed.

Can't seem to rename listeners or config elements

Can't seem to rename listeners or config elements. To reproduce, simply try to rename the element and save (Ctrl+S); the name reverts back to its default.

I don't think this has any functional impact, but it is a bug nonetheless.

Can't save and exit when test plans have prometheus elements

For whatever reason, this plugin is holding up JMeter's exit routine when saving and then exiting.

To replicate, simply:
load plugin version 0.2.0-rc1 into JMeter
open the example test plan in docs/examples
close JMeter, and when prompted to save, click yes

Expected result: JMeter saves and exits.
Actual result: JMeter saves but does not exit.

IllegalArgumentException thrown when test starts

Two errors get thrown when running the example (and likely during other test plans too). This doesn't affect the actual running of the test, but it is a bug nonetheless. It seems there's a threading issue.

2019-02-21 09:36:26,099 DEBUG [StandardJMeterEngine] c.g.j.l.PrometheusListener(200): added jsr223_rsize_as_hist to list of collectors
2019-02-21 09:36:26,105 DEBUG [StandardJMeterEngine] c.g.j.c.JMeterCollectorRegistry(85): created and registered [collector.help: default help string, collector.metric_name: jsr223_animals_total, collector.type: COUNTER, collector.labels: [color, size, mammal], collector.quantiles_or_buckets: , ]
2019-02-21 09:36:26,106 DEBUG [StandardJMeterEngine] c.g.j.c.CollectorElement(80): added jsr223_animals_total to list of collectors
2019-02-21 09:36:26,106 DEBUG [StandardJMeterEngine] c.g.j.c.PrometheusMetricsConfig(50): Test started, adding 1 collectors to variables
2019-02-21 09:36:26,106 DEBUG [StandardJMeterEngine] c.g.j.c.PrometheusMetricsConfig(55): Added ([collector.help: default help string, collector.metric_name: jsr223_animals_total, collector.type: COUNTER, collector.labels: [color, size, mammal], collector.quantiles_or_buckets: , ],io.prometheus.client.Counter@9bbcae2) to variables.
2019-02-21 09:36:26,107 TRACE [StandardJMeterEngine] c.g.j.c.JMeterCollectorRegistry(78): jsr223_rt_as_hist found already registered.
2019-02-21 09:36:26,107 DEBUG [StandardJMeterEngine] c.g.j.l.PrometheusListener(200): added jsr223_rt_as_hist to list of collectors
2019-02-21 09:36:26,111 ERROR [StandardJMeterEngine] c.g.j.l.PrometheusListener(208): Didn't create new collector because of error,
java.lang.IllegalArgumentException: Collector already registered that provides name: jsr223_rt_as_summary_count
at io.prometheus.client.CollectorRegistry.register(CollectorRegistry.java:54) ~[jmeter-prometheus-plugin-0.2.0-rc3.jar:?]
at com.github.johrstrom.collector.JMeterCollectorRegistry.getOrCreateAndRegister(JMeterCollectorRegistry.java:82) ~[jmeter-prometheus-plugin-0.2.0-rc3.jar:?]
at com.github.johrstrom.listener.PrometheusListener.makeNewCollectors(PrometheusListener.java:171) [jmeter-prometheus-plugin-0.2.0-rc3.jar:?]
at com.github.johrstrom.listener.PrometheusListener.testStarted(PrometheusListener.java:136) [jmeter-prometheus-plugin-0.2.0-rc3.jar:?]
at org.apache.jmeter.engine.StandardJMeterEngine.notifyTestListenersOfStart(StandardJMeterEngine.java:215) [ApacheJMeter_core.jar:4.0 r1823414]
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:384) [ApacheJMeter_core.jar:4.0 r1823414]
at java.lang.Thread.run(Thread.java:844) [?:?]
2019-02-21 09:36:26,122 TRACE [StandardJMeterEngine] c.g.j.c.JMeterCollectorRegistry(78): jsr223_rt_as_hist found already registered.
2019-02-21 09:36:26,122 DEBUG [StandardJMeterEngine] c.g.j.l.PrometheusListener(200): added jsr223_rt_as_hist to list of collectors
2019-02-21 09:36:26,122 ERROR [StandardJMeterEngine] c.g.j.l.PrometheusListener(208): Didn't create new collector because of error,
java.lang.IllegalArgumentException: Collector already registered that provides name: jsr223_rt_as_summary_count
at io.prometheus.client.CollectorRegistry.register(CollectorRegistry.java:54) ~[jmeter-prometheus-plugin-0.2.0-rc3.jar:?]
at com.github.johrstrom.collector.JMeterCollectorRegistry.getOrCreateAndRegister(JMeterCollectorRegistry.java:82) ~[jmeter-prometheus-plugin-0.2.0-rc3.jar:?]
at com.github.johrstrom.listener.PrometheusListener.makeNewCollectors(PrometheusListener.java:171) [jmeter-prometheus-plugin-0.2.0-rc3.jar:?]
at com.github.johrstrom.listener.PrometheusListener.testStarted(PrometheusListener.java:136) [jmeter-prometheus-plugin-0.2.0-rc3.jar:?]
at org.apache.jmeter.engine.StandardJMeterEngine.notifyTestListenersOfStart(StandardJMeterEngine.java:215) [ApacheJMeter_core.jar:4.0 r1823414]
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:384) [ApacheJMeter_core.jar:4.0 r1823414]
at java.lang.Thread.run(Thread.java:844) [?:?]
2019-02-21 09:36:26,128 INFO [StandardJMeterEngine] o.a.j.g.u.JMeterMenuBar(551): setRunning(true, local)
2019-02-21 09:36:26,317 INFO [StandardJMeterEngine] o.a.j.e.StandardJMeterEngine(453): Starting ThreadGroup: 1 : listener tg
2019-02-21 09:36:26,317 INFO [StandardJMeterEngine] o.a.j.e.StandardJMeterEngine(513): Starting 40 threads for group listener tg.

Metric reuse

Consider the use case where someone wants one metric with different labels (labels in the JMeter sense). A test plan may look something like:

TestPlan
| --- A
| | - Prometheus Listener A
|
| --- B (want to filter this one out)
|
| --- C
| | - Prometheus Listener C

That generates my_rt_metric{label="[A,C]"}

Under the current versions, one would have to use two entirely different metrics, like my_rt_metric_A{} and my_rt_metric_C{}.

loss of data points at the very end of tests

I tried setting the prometheus.delay property to 30 seconds or more, but as soon as the JMeter test stops, the metrics HTTP server goes down immediately and the Prometheus scraper loses the data. How do I resolve this?

Number of JMeter Threads Are Not Correctly Logged

Hi,
I configured monitoring using the Prometheus listener in JMeter 3.2 and monitored it on a Grafana dashboard. During testing I noticed that not all threads are reported in the dashboard.

For example, if I have 2 JMeter thread groups in the JMX script, with the first thread group running 5 threads and the second running 10 threads, then while monitoring in Grafana the number of running threads always shows 10 instead of 15. If I interchange the number of threads in the script, it shows only 5 threads.

Please provide a fix if this is an issue.

Regards,
Pawan Singh

Error on build

There looks to be an error when building master with mvn clean package:
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.533 s <<< FAILURE! - in com.github.johrstrom.config.gui.ConfigGuiTest
[ERROR] simpleTest(com.github.johrstrom.config.gui.ConfigGuiTest) Time elapsed: 0.533 s <<< ERROR!
java.lang.NullPointerException

Configurable collector type for assertions

I would like to make the type of collector for assertions configurable.

In my use case I'm interested in separating the time taken to respond to requests according to the outcome of the assertion.

For example, in my (JSON) response there is a "fallback" field, and I would like to be able to separate the time taken to serve requests when "fallback" is true versus false. Using just volumes this is not possible because the sampler collector aggregates both of them.

Another option would be adding the assertion value as a label to the sample collector but since multiple assertions can be defined for each sampler the number of labels could easily explode.

Unable to expose metrics from JMeter

Hi Guys,

I'm planning to enable the metrics in JMeter but I'm unable to do that. I downloaded the JMeter package, created/copied the two .jar files to $JMETER_HOME/lib/ext, built the JMeter files as a Docker image, and pushed it to Docker Hub. Now I have installed the Docker image on my server, but I can't find the metrics; I'm getting the following error: "curl: (52) Empty reply from server". How can I get metrics from JMeter so I can monitor them from Prometheus/Grafana? Please help.

Ability to specify IP / bind address?

Thanks for the project Jeff - nice work.

Is it possible to assign the bind address (0.0.0.0 or IP address) so that metrics are served by an externally accessible address (rather than the default 127.0.0.1 / localhost)? If this is not currently possible, are there plans to add this enhancement?

Use cases related to the question:

  • Ability to run JMeter on one machine but access metrics via Prometheus from another.
  • Ability to run JMeter on a host machine with Prometheus running on a docker image on the host machine

In each case, localhost/127.0.0.1 is not usable.
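
For reference, the Prometheus Java simpleclient's HTTPServer can already bind to an arbitrary address, so the enhancement could be a thin layer over something like the sketch below (the address and port are only examples, and this is not the plugin's current behaviour):

import java.net.InetSocketAddress;

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exporter.HTTPServer;

public class BindAddressSketch {
    public static void main(String[] args) throws Exception {
        // Bind to all interfaces instead of only the loopback address.
        HTTPServer server = new HTTPServer(
                new InetSocketAddress("0.0.0.0", 9270),
                CollectorRegistry.defaultRegistry);
        // ... run the test, then shut the exporter down.
        server.stop();
    }
}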

I think there is an interest in this functionality from others as it is referenced in passing in previously closed issues:

Filter samples

Sometimes you don't want to time a sampler (or don't care to anyway), like Debug, BeanShell, or JSR223 samplers, because they may just do some logic and don't actually make a request or test anything.

So, there should be some way to filter Samplers such that they're not exposed as metrics.
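
One possible shape for such a filter, as a purely illustrative sketch (the pattern and helper method are hypothetical, not part of the plugin): skip sample results whose label matches an exclude expression before they are turned into observations.

import java.util.regex.Pattern;

import org.apache.jmeter.samplers.SampleEvent;
import org.apache.jmeter.samplers.SampleResult;

public class SamplerFilterSketch {
    // Hypothetical exclude pattern; it could come from a GUI field or a JMeter property.
    private static final Pattern EXCLUDE = Pattern.compile("^(Debug|BeanShell|JSR223).*");

    public static boolean shouldRecord(SampleEvent event) {
        SampleResult result = event.getResult();
        // Skip samplers that only do scripting/logic and are not real requests.
        return !EXCLUDE.matcher(result.getSampleLabel()).matches();
    }
}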

loss of data points at the very end of tests

Let's imagine a JMeter test with this plugin is running and it ends at 1:00:00. The Prometheus server that is scraping this particular JVM (JMeter instance) is on a cycle such that it scrapes at 12:59:35 and 1:00:05 (a 30-second scrape interval).

That leaves roughly 25 seconds of uncollected metrics, because the end of the test calls this plugin's testEnded function, which stops the HTTP server. Now, that may or may not be negligible in a performance test that can run for an hour or more, but it has to at least be documented as a known issue.
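
One possible mitigation, sketched below, is for testEnded to keep the exporter up for a configurable grace period (long enough for one more scrape) before stopping it. This is not the plugin's actual implementation; the prometheus.delay property name is borrowed from the issue above, everything else is hypothetical.

import io.prometheus.client.exporter.HTTPServer;
import org.apache.jmeter.util.JMeterUtils;

public class DelayedShutdownSketch {
    private HTTPServer server;  // assumed to have been started in testStarted()

    public void testEnded() {
        // Keep serving /metrics for a grace period so the final scrape interval
        // is not lost, then stop the HTTP server.
        int delaySeconds = JMeterUtils.getPropDefault("prometheus.delay", 0);
        try {
            Thread.sleep(delaySeconds * 1000L);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        server.stop();
    }
}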

User Configurable Text Labels

The InfluxDB listener has fields such as 'measurement', 'application', and 'testTitle' that can be configured in the listener UI in JMeter. Having one or two user-configurable text fields can be really handy, as it allows automation to pass metadata about the test to JMeter, which JMeter can then pass on to the metrics reporting solution. For instance, variables such as Environment, Test Type, and Test Name can be passed as labels to Prometheus, Grafana, etc., where you can filter on them. This allows you to do queries such as "show me test results run on the CI environment" or "show me test results for testTitle Nightly-Test". It would be great to see a couple of user-configurable text labels implemented in this Prometheus listener.

(screenshot attached: screen shot 2019-01-23 at 1 58 36 pm)

Support Push Gateway

I use this plugin for very long-lived tests (they're functional tests that run forever, actually), but that's perhaps not the general use case. This enhancement is to support the PushGateway as well as a server.

I'm thinking this implementation looks like this:
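
(The mock-up that followed is not preserved in this capture. Purely for reference, pushing an existing registry with the Prometheus simpleclient PushGateway looks roughly like the sketch below; the gateway address and job name are hypothetical.)

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exporter.PushGateway;

public class PushGatewaySketch {
    public static void main(String[] args) throws Exception {
        CollectorRegistry registry = CollectorRegistry.defaultRegistry;
        PushGateway gateway = new PushGateway("pushgateway.example.com:9091");
        // Instead of (or in addition to) serving /metrics, push the current
        // state of the registry periodically and once more at test end.
        gateway.pushAdd(registry, "jmeter");
    }
}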

Add response_code and response_message metrics

Hi!
On the InfluxDB backend listener, the "responseMessage" and "responseCode" fields are available, and responseMessage is used when we have an error response.
They are very useful in a Grafana table view (image attached).
Is it possible to add that kind of metrics?

additional configurations

It would be useful to provide additional percentiles, the 90th and 95th, along with the 99th. Can this be obtained via an update of this plugin's code, or would there need to be additional changes on the JMeter side? By default, the JMeter backend listener exposes the 90th, 95th, and 99th percentiles.
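
For context, the Prometheus Java simpleclient Summary supports arbitrary quantiles, so this looks like a matter of collector configuration rather than a JMeter-side change. A minimal sketch (the metric name and error tolerances are illustrative):

import io.prometheus.client.Summary;

public class QuantileSketch {
    public static void main(String[] args) {
        Summary responseTime = Summary.build()
                .name("jmeter_rt_seconds")        // hypothetical metric name
                .help("response time with extra quantiles")
                .quantile(0.90, 0.01)             // 90th percentile, 1% tolerated error
                .quantile(0.95, 0.005)
                .quantile(0.99, 0.001)
                .register();

        responseTime.observe(0.123);              // observe a response time in seconds
    }
}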

functionality for listening to assertions

Currently we can only listen to samples. This ticket is to provide that functionality for assertions, like this:

  • if a label is specified, then collect results for all the assertions (if there is more than one)
  • if a label is not specified, simply collect from the last one
  • only counter-type and success-ratio metrics should be able to listen to assertions

I'm not sure if it makes sense to measure assertions with complex types like histograms and summaries measuring things like response size (for example, a histogram of assertion response sizes?), but I welcome community feedback.
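
A rough sketch of what counting assertion results could look like with the JMeter and simpleclient APIs (the metric name, labels, and helper method are hypothetical):

import io.prometheus.client.Counter;
import org.apache.jmeter.assertions.AssertionResult;
import org.apache.jmeter.samplers.SampleResult;

public class AssertionListenerSketch {
    private static final Counter ASSERTIONS = Counter.build()
            .name("jmeter_assertions_total")
            .help("assertion results by assertion name and outcome")
            .labelNames("assertion_name", "failure")
            .register();

    public static void record(SampleResult result) {
        // Collect results for all assertions attached to the sample.
        for (AssertionResult assertion : result.getAssertionResults()) {
            boolean failed = assertion.isFailure() || assertion.isError();
            ASSERTIONS.labels(assertion.getName(), Boolean.toString(failed)).inc();
        }
    }
}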

Add option for Simpleclient_hotspot metrics

If people use this for performance testing (as may very well be the case), it's probably worth allowing optional JVM metrics to be exposed from the load generators.

An easy way of doing this would be to simply use the prometheus.io/simpleclient_hotspot libraries.
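
A minimal sketch of that approach, assuming the simpleclient_hotspot dependency is on the classpath: a single call registers the standard JVM collectors (GC, memory pools, threads, and so on) with the default registry, so they would be exposed alongside the test metrics.

import io.prometheus.client.hotspot.DefaultExports;

public class HotspotMetricsSketch {
    public static void main(String[] args) {
        // Registers the standard JVM exporters with CollectorRegistry.defaultRegistry.
        DefaultExports.initialize();
    }
}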

Redo UI

If we keep going at this rate, we'll overload that dialog box. I modelled it after the Simple Data Writer, but maybe it's becoming apparent that there are many configurations that would be better placed in the main UI instead.

Prometheus Listener Server / Port ?

Hi,

The listener GUI does not allow specifying an IP or port. What should I do when Prometheus is not on the same server as JMeter?

Regards
Joao

Collect thread group as a label

Currently we only collect threads with the state label. This ticket is to add the group label to indicate what thread group a given thread is in.

Add code as label keyword

This issue is to enable the use of 'code' as a label keyword such that the label value is the sample's response code (200, 404, etc.).
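
A sketch of how the 'code' keyword could resolve to a label value (the helper method is hypothetical; getResponseCode() is the existing JMeter accessor):

import org.apache.jmeter.samplers.SampleResult;

public class CodeLabelSketch {
    // Hypothetical helper: turn a configured label keyword into a label value.
    public static String labelValue(String keyword, SampleResult result) {
        if ("code".equals(keyword)) {
            return result.getResponseCode();  // e.g. "200", "404"
        }
        return "";  // other keywords would be handled elsewhere
    }
}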

Use Sample Variables

JMeter allows sample_variables to be populated and used in the Simple Data Writer listener class. We should also use this feature for labeling.
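
For illustration, with sample_variables=env,testType set in user.properties (the variable names are hypothetical), a listener could attach those values as labels roughly like this, assuming the SampleEvent exposes the configured variables:

import io.prometheus.client.Counter;
import org.apache.jmeter.samplers.SampleEvent;

public class SampleVariableLabelSketch {
    private static final Counter SAMPLES = Counter.build()
            .name("jmeter_samples_total")
            .help("samples labeled by sample_variables values")
            .labelNames("env", "testType")  // must mirror the sample_variables list
            .register();

    public static void record(SampleEvent event) {
        // Values come from the sample variables captured on the event.
        SAMPLES.labels(event.getVarValue("env"), event.getVarValue("testType")).inc();
    }
}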

No test cases

This project has no test cases, which is simply unacceptable. Write test cases.

HTTP Sampler Metrics

Hello,

Sorry if this question is too simple, but we're having a really tough time getting what we hoped we'd see when attaching a listener to an HTTP sampler. The only jmeter_ metric that appears in the scrape response is the number of client threads, but none of the "standard" metrics like response time average, min, max, etc.

Do we need to define each of these to be output individually? If we need to add them to the table in the GUI, are there any "standard" or "automatic" variables available for this sampler? Or do we need to copy all of the HTTP metrics we want into "custom" metrics?

Hopefully this question makes sense... I'm hoping someone has encountered this before and we're not the only people wanting the "standard" response metrics from an HTTP request :)

thanks much

Client/Server instantiation kerfuffle

One of the pain points in running a JMeter test in distributed mode is the collection of results on the client instance controlling the test. JMeter implements this feature in different ways, from synchronous forwarding of the response to batch forwarding of statistics data.

It would be very useful to be able to launch the exporter on all the JMeter server instances in order to allow scraping directly from the injectors.
This would also allow breaking down events by injector.

How do I add a data source in Grafana?

Hello,

My environment condition:
Linux Mint: 19
JMeter: 5.0
Grafana: v5.3.2 (0d821d0)

I can retrieve output data from http://localhost:9270/metrics when running pressure loading.

But when I add http://localhost:9270 to Grafana as one of its 'Data Sources', it doesn't seem to be detected; it keeps showing an exclamation mark.

So even though I import 'JMeter.json' into Grafana, I still can't see normal Grafana behavior when I do a load test. Does that also happen to you?

Thanks.

Enable some defaults.

Users may just want to use some metrics "out of the box". They may not want to fiddle with all these knobs; they want something they can just start using that gives "pretty good metric collection without a whole lot of work".

So to enable that, perhaps the GUI should initialize with a set of "core" HTTP metrics like response time, success rates, and so on.

Users should be able to set a configuration flag to enable this behaviour.

Distributed mode HowTo?

Hi!
As I understand it, one way to collect metrics from distributed JMeter mode with Kubernetes is to use <kubernetes_sd_config> with the parameter role=pod in the Prometheus config?
If so, it would be useful to collect metrics from the master, as the InfluxDB backend listener does.

Assertions don't use sample_variables

Assertions don't use sample_variables, but samples do. Assertions also need sample_variables so that failures can be identified through them.

OOM on long running tests

On very long-running synthetic (functional) tests I got the error below, which was making JMeter increasingly slow to make requests and to create metrics.

I'm using these GC settings, which honestly should be just fine:
-Xmx512m -Xms512m -XX:+UseG1GC -XX:MaxGCPauseMillis=250 -XX:G1ReservePercent=20

Dec 04 21:20:42 hostname jmeter[21784]: 2017-12-04 21:20:42,900 ERROR o.a.j.JMeter: Uncaught exception:
Dec 04 21:20:42 hostname jmeter[21784]: java.lang.OutOfMemoryError: Java heap space
Dec 04 21:20:42 hostname jmeter[21784]: Uncaught Exception java.lang.OutOfMemoryError: Java heap space. See log file for details.
Dec 04 21:20:42 hostname jmeter[21784]: 2017-12-04 21:20:42,900 ERROR o.a.j.JMeter: Uncaught exception:
Dec 04 21:20:42 hostname jmeter[21784]: java.lang.OutOfMemoryError: Java heap space
Dec 04 21:20:42 hostname jmeter[21784]: Uncaught Exception java.lang.OutOfMemoryError: Java heap space. See log file for details.
Dec 04 21:20:42 hostname jmeter[21784]: 2017-12-04 21:20:42,900 WARN o.e.j.u.t.QueuedThreadPool: Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$2@73db30b in qtp354789746{STARTED,8<=8<=200,i=4,q=0}
Dec 04 21:20:42 hostname jmeter[21784]: 2017-12-04 21:20:42,904 ERROR o.a.j.JMeter: Uncaught exception:
Dec 04 21:20:42 hostname jmeter[21784]: java.lang.OutOfMemoryError: Java heap space
Dec 04 21:20:42 hostname jmeter[21784]: Uncaught Exception java.lang.OutOfMemoryError: Java heap space. See log file for details.
