Locust4j

Description

Locust4j is a load generator for Locust, written in Java. It's inspired by boomer and nomadacris.

It's a benchmarking library, not a general-purpose tool. To use it, you must implement your own test scenarios.

Usage examples

Features

  • Write user test scenarios in Java
    Because it's written in Java, you can use everything in the Java ecosystem.

  • Thread-based concurrency
    Locust4j uses a thread pool to execute your code with low overhead.
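The thread-based model can be illustrated with plain JDK primitives. The sketch below is an illustration only, not locust4j's actual internals: a fixed pool plays the role of the simulated users, and each submitted loop stands in for repeated task executions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Illustration only (not locust4j's internals): a fixed thread pool
// drives N concurrent "users", each executing its own task loop.
public class ThreadPoolLoadDemo {
    public static long runUsers(int users, int iterationsPerUser) {
        AtomicLong total = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                for (int j = 0; j < iterationsPerUser; j++) {
                    total.incrementAndGet(); // stand-in for one task execution
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return total.get();
    }
}
```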

Build

git clone https://github.com/myzhan/locust4j
cd locust4j
mvn package

Locally Install

mvn install

Maven

Add this to your Maven project's pom.xml.

<dependency>
    <groupId>com.github.myzhan</groupId>
    <artifactId>locust4j</artifactId>
    <version>LATEST</version>
</dependency>

More Examples

See Main.java.

This file demonstrates all the exposed APIs of Locust4j.

NOTICE

  1. The task instance is shared across multiple threads, so the execute method must be thread-safe.
  2. Don't catch all exceptions in the execute method; leave unexpected exceptions to locust4j.
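Point 1 is worth a sketch: since one task instance is executed by many worker threads at once, any mutable state it holds must be synchronized. The class below is hypothetical, standing in for an AbstractTask subclass.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical task-like class: because a single instance is shared
// across threads, shared state uses AtomicLong, not a plain long field.
public class SafeSharedTask {
    private final AtomicLong executions = new AtomicLong();

    // Stand-in for AbstractTask.execute(): safe to call concurrently.
    public void execute() {
        executions.incrementAndGet();
    }

    public long executionCount() {
        return executions.get();
    }
}
```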

Author

  • myzhan
  • vrajat

Known Issues

  • When a stop-the-world pause happens in the JVM, wrong response times may be reported to the master.
  • Because of the JIT compiler, Locust4j runs faster as time goes by, which leads to shorter reported response times.

License

Open source licensed under the MIT license (see LICENSE file for details).


locust4j's Issues

Can we look for env vars for master host and port?

It would be nice if we could check the env vars LOCUST_MASTER_NODE_HOST and LOCUST_MASTER_NODE_PORT for connection info instead of having to define it inside the tests. This would match the Python client's way of passing in connection info.
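A user-side sketch of the requested behavior. The helper below is hypothetical, not part of locust4j; only the env var names come from the issue above.

```java
import java.util.Map;

// Hypothetical helper: resolve the master address from the env vars the
// Python client uses, falling back to the usual defaults when unset.
public class MasterAddress {
    public static String resolveHost(Map<String, String> env) {
        return env.getOrDefault("LOCUST_MASTER_NODE_HOST", "127.0.0.1");
    }

    public static int resolvePort(Map<String, String> env) {
        String port = env.get("LOCUST_MASTER_NODE_PORT");
        return port == null ? 5557 : Integer.parseInt(port);
    }
}
```

A worker could then call something like locust.setMasterHost(MasterAddress.resolveHost(System.getenv())) before run.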

Task threads cannot be terminated when the master sends a stop command

Hi, I ran into this problem while using your framework recently. After reading through the threading code, I found the cause.
In the Runner.stop method, you shut down the thread pool with shutdown, so threads that are currently executing tasks receive no signal and keep running the AbstractTask.run method to completion. But that method contains an infinite loop, so running task threads can never terminate!

Two possible fixes:

  1. Use shutdownNow to forcibly terminate the thread pool. This guarantees that running task threads receive an InterruptedException; AbstractTask and its subclasses must handle (not swallow) the interrupt, so that AbstractTask can respond to it and exit the while loop.

  2. Check Runner.state inside AbstractTask's while loop and exit the loop when the state is Stopped. Here you must ensure that every task thread observes the state and exits; otherwise, when the master's next request arrives, the Runner's state is reset, and task threads that never got a CPU time slice will not exit.
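The second proposal can be sketched like this (an illustration with hypothetical names, not the actual Runner/AbstractTask code): the task loop polls a shared stop flag, and also honors thread interruption so the shutdownNow route would work too.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustration of proposal 2: a task loop that exits as soon as a
// shared stopped flag is set or the thread is interrupted.
public class StoppableLoop implements Runnable {
    private final AtomicBoolean stopped = new AtomicBoolean(false);
    private volatile long iterations = 0;

    public void stop() {
        stopped.set(true);
    }

    public long iterations() {
        return iterations;
    }

    @Override
    public void run() {
        // Instead of while (true), re-check the stop conditions each pass.
        while (!stopped.get() && !Thread.currentThread().isInterrupted()) {
            iterations++; // placeholder for the real task body
        }
    }
}
```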

Locust4j Slave fails to send heartbeat to master

Hi,
Thanks for nice useful library!

OS: MacOS-Mojave, Openstack - CentOS7
Java: OpenJDK11, OpenJDK8

I am trying to run Locust4j using Java, as per this link: https://www.blazemeter.com/blog/locust-performance-testing-using-java-and-kotlin/

Once I run the main class, I see this log statement in master log:
INFO/locust.runners: Client '' reported as ready. Currently 1 clients ready to swarm.

And, java main class log says:
DEBUG com.github.myzhan.locust4j.rpc.ZeromqClient - Locust4j is connected to master(127.0.0.1:5557)

However, within a second, I keep getting this error in the master log:

2020-02-24 04:58:26,104] dca90482df05/ERROR/stderr: Traceback (most recent call last):
[2020-02-24 04:58:26,105] dca90482df05/ERROR/stderr:
[2020-02-24 04:58:26,105] dca90482df05/ERROR/stderr: File "src/gevent/greenlet.py", line 818, in gevent._greenlet.Greenlet.run
[2020-02-24 04:58:26,105] dca90482df05/ERROR/stderr:
[2020-02-24 04:58:26,105] dca90482df05/ERROR/stderr: File "/usr/local/lib/python3.7/site-packages/locust/runners.py", line 462, in client_listener
c.cpu_usage = msg.data['current_cpu_usage']
[2020-02-24 04:58:26,105] dca90482df05/ERROR/stderr:
[2020-02-24 04:58:26,105] dca90482df05/ERROR/stderr: KeyError: 'current_cpu_usage'
[2020-02-24 04:58:26,105] dca90482df05/ERROR/stderr:
[2020-02-24 04:58:26,106] dca90482df05/ERROR/stderr: 2020-02-24T10:58:26Z
[2020-02-24 04:58:26,106] dca90482df05/ERROR/stderr:
[2020-02-24 04:58:26,106] dca90482df05/ERROR/stderr: <Greenlet at 0x10a16f050: <bound method MasterLocustRunner.client_listener of <locust.runners.MasterLocustRunner object at 0x10a164790>>> failed with KeyError
[2020-02-24 04:58:30,436] dca90482df05/INFO/locust.runners: Slave dca90482df05_2513ac6bee7e675ef4cf877736e3da33 failed to send heartbeat, setting state to missing.

I tried with JDK11 and JDK8, on both Mac and Openstack. Also, I tried running it from the Eclipse IDE and through the terminal as mentioned in the above BlazeMeter url. No luck. What might be going wrong?

Thanks!

System.exit() called

I would like to run a fully automated performance test in Java, with CSV results analysis (assertions) and a report generated, for example, in the Allure tool.
The problem is: after load testing (locust --run-time reached), System.exit(0) is called in Runner.java (line 228).

Worker stuck in loop on startup

locust4j: 1.0.12
master: 1.2 or 1.4, looks like it doesn't really matter

scenario:

  1. Deploy locust master in k8s cluster
  2. Deploy locust4j workers in k8s cluster
  3. Go to locust master workers page

expected results:
number of available workers equals number of deployed workers
actual results:
sometimes a few workers are missing on the page, while they are actually running and consuming 100% CPU

It happens from time to time. It looks like it depends on k8s node performance (too slow or too few cores available) and causes a race condition during startup.

Stack trace of the main thread stuck in that cycle forever (other threads are waiting):

"main" #1 prio=5 os_prio=0 cpu=545997.96ms elapsed=582.88s allocated=13858K defined_classes=1064 tid=0x00007f65d8028800 nid=0xd runnable [0x00007f65debd5000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.read0(java.base/Native Method)
at sun.nio.ch.FileDispatcherImpl.read(java.base/FileDispatcherImpl.java:48)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(java.base/IOUtil.java:276)
at sun.nio.ch.IOUtil.read(java.base/IOUtil.java:245)
at sun.nio.ch.IOUtil.read(java.base/IOUtil.java:223)
at sun.nio.ch.SourceChannelImpl.read(java.base/SourceChannelImpl.java:277)
at zmq.Signaler.recv(Signaler.java:159)
at zmq.Mailbox.recv(Mailbox.java:97)
at zmq.SocketBase.processCommands(SocketBase.java:931)
at zmq.SocketBase.send(SocketBase.java:696)
at org.zeromq.ZMQ$Socket.send(ZMQ.java:2588)
at org.zeromq.ZMQ$Socket.send(ZMQ.java:2577)
at com.github.myzhan.locust4j.rpc.ZeromqClient.send(ZeromqClient.java:44)
at com.github.myzhan.locust4j.runtime.Runner.getReady(Runner.java:340)
at com.github.myzhan.locust4j.Locust.run(Locust.java:179)
- locked <0x00000000d5f82848> (a com.github.myzhan.locust4j.Locust)
at com.github.myzhan.locust4j.Locust.run(Locust.java:154)
at com.sampleworker.App.main(App.java:82)

Basically the worker can't send its ready status to the master, though it actually established a connection.

speculation
Not entirely sure, but my guess is that the race condition happens here:

        this.executor.submit(new Receiver(this));
        try {
            this.rpcClient.send(new Message("client_ready", null, this.nodeID));
        } catch (IOException ex) {
            logger.error("Error while sending a message that the system is ready", ex);
        }
  1. The Receiver thread tries to process incoming commands from the socket (some messages from the master)
  2. rpcClient.send under the hood also checks if there is something else to read from the socket, see zmq.SocketBase.send(SocketBase.java:696):
        //  Process pending commands, if any.
        boolean brc = processCommands(0, true);
        if (!brc) {
            return false;
        }

Probably we need to start the receiver thread after the client_ready command, or use some sort of lock on the client.

Exception with locust 2.2.1

When running with the latest version of locust (2.2.1) the following exception occurs:

15:24:07.087 TKD [Thread-3receive-from-client] ERROR c.g.myzhan.locust4j.runtime.Runner - Error while receiving a message
java.io.IOException: Message received unsupported type: ARRAY
	at com.github.myzhan.locust4j.message.Message.unpackMap(Message.java:88)
	at com.github.myzhan.locust4j.message.Message.unpackMap(Message.java:85)
	at com.github.myzhan.locust4j.message.Message.<init>(Message.java:38)
	at com.github.myzhan.locust4j.rpc.ZeromqClient.recv(ZeromqClient.java:42)
	at com.github.myzhan.locust4j.runtime.Runner$Receiver.run(Runner.java:387)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)

From a little bit of debugging, it appears that a user_classes message of type ARRAY is being passed.

Checking Master Status

Hi there,

Good job on the library!
I'm trying to understand if you currently have any exposed way that you are checking that:

  • master is up and ready to be connected
  • slave is connected to master

Cheers!

ack message from master not supported

As the log says:
2022-08-30 12:45:28,904 ERROR [com.git.myz.loc.run.Runner] (Thread-54receive-from-client) Got ack message from master, which is not supported, please report an issue to locust4j.

I use Locust 2.11.1. When the client connection is established with the master, I see this in the master log:

[2022-08-30 12:45:28,900] localhost.localdomain/WARNING/locust.runners: A worker (localhost.localdomain_f83b72ddd1e9ba171d0fe4063ab2a0b5) running a different version (-1) connected, master version is 2.11.1

Analysing Locust4j - Performance test

I'm exploring Locust to performance-test our applications. I would like to reuse the existing Java page objects (Selenium web elements) instead of rewriting them in Python.
Issue: I try to invoke a particular count of users by entering it in the textbox "Number of total users to simulate" (on the locust UI), but it actually runs into an infinite loop, spawning users forever.

  1. How do I control the number of requests?
  2. Is this the right approach, where I'm trying to fire Selenium events instead of HTTP get/post requests?
    Attaching the code snippet
    testcode.TXT

Thanks,
Sirisha

Setup the Slave Report Interval

I would like to change the slave interval for reporting but, if I'm not wrong, this is a private constant inside the StatsTimer class. Is there any way to change this value?

num_clients renamed to num_users

Trying to run a local locust instance and create tasks with locust4j.

Null reference exception: [Thread-2receive-from-client] ERROR com.github.myzhan.locust4j.runtime.Runner - Error while receiving a message
java.lang.NullPointerException
at com.github.myzhan.locust4j.runtime.Runner.hatchMessageIsValid(Runner.java:200)
at com.github.myzhan.locust4j.runtime.Runner.onMessage(Runner.java:235)
at com.github.myzhan.locust4j.runtime.Runner.access$500(Runner.java:32)
at com.github.myzhan.locust4j.runtime.Runner$Receiver.run(Runner.java:316)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)

Their GitHub indicates a change in argument from num_clients to num_users: locustio/locust@26573d8

Failure occurrences do not seem to reflect the actual number

Hi,

The failure occurrence counts in the results are irrelevant numbers.
The way we grab this variable in the Locust master Python class is as follows:

stats = environment.stats
items = []
entries = stats.entries
for name, method in entries.keys():
    entry = entries.get((name, method))
    item = {"name": name,
            "method": method,
            "avg": round(entry.avg_response_time, 0),
            "avg_size": round(entry.avg_content_length, 0)
            }
    items.append(item)

result["items"] = items

errors = []
for stats_error in stats.errors.values():
    error = {
        "name": stats_error.name,
        "method": stats_error.method,
        "error": stats_error.error,
        "occurrences": stats_error.occurrences
    }
    errors.append(error)


Task class

We are appending a unique random id to each execute() block, so that no two runs are similar.

public void execute() throws Exception {

        String workflowId = RandomStringUtils.randomNumeric(8);
        boolean failure = false;
        long category;

        User user = userQueue.pop();
        userQueue.addLast(user);
        String userAccessToken;
        .
        .
        .


//API call
       Response recent = EngagementController.getRecentEngagements(userAccessToken, 0);
        if ((recent.getStatusCode() != HttpStatus.SC_OK) && !failure) {
            failure = true;
            LocustUtil.recordFailure("GET " + workflowId, getName() + "/engagement-management/engagements/recent?size=3", recent,
                    userId, "recentEngagementsFailure_" + workflowId);
        }
}

Locust util record failure method

public static void recordFailure(final String requestType, final String taskName,
                                     final Response response, String userId, final String sampleWorkflowId) {
        Locust.getInstance().recordFailure(
                requestType,
                taskName,
                response.getTime(),
                "Response: StatusCode: " + response.getStatusCode() + "\n" +
                        "Header: Date: " + response.getHeader("date") + " " +
                        "Response Body: " + response.getBody() + " " +
                        "for userId: " + userId);
    }

And that unique id is also appended to the failure record. Ideally, no two failures should have the same method or name.
How is the occurrences variable getting its value, then?


ThreadInterruptedException is not handled correctly in task

When I set step load parameters on the master (e.g. step users 100 and duration 60s), the master sends a new hatch message at a 60s interval, but old threads are not stopped.

It's because the AbstractTask executor swallows the InterruptedException which is thrown by my task in Thread.sleep:

2020-07-31 12:38:31,514 [locust4j-worker#13] ERROR c.g.myzhan.locust4j.AbstractTask - Unknown exception when executing the task
java.lang.InterruptedException: sleep interrupted

and the task thread cannot be stopped

To work around it, I have to throw an Error instead of the InterruptedException, which stops the task but looks ugly.
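A cleaner workaround than throwing an Error is to restore the interrupt flag after Thread.sleep, so an outer loop (or the executor) can still observe the interruption. A minimal sketch with a hypothetical helper name:

```java
// Hypothetical helper: sleep without swallowing interruption. Returns
// false and restores the thread's interrupt flag if interrupted.
public class InterruptAwareWait {
    public static boolean sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
            return false;
        }
    }
}
```

A task's execute method can then check Thread.currentThread().isInterrupted() (or the boolean return value) and bail out, instead of relying on the exception escaping to the executor.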

Locust throws an unhandled exception on the master node during slave connection

Got:

Unhandled exception in greenlet: <Greenlet at 0x7fccc01e47b0: <bound method MasterRunner.client_listener of <locust.runners.MasterRunner object at 0x7fccc01c6a90>>>
Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 854, in gevent._gevent_cgreenlet.Greenlet.run
  File "/home/alexche/.local/lib/python3.8/site-packages/locust/runners.py", line 627, in client_listener
    client_id, msg = self.server.recv_from_client()
  File "/home/alexche/.local/lib/python3.8/site-packages/locust/rpc/zmqrpc.py", line 45, in recv_from_client
    msg = Message.unserialize(data[1])
  File "/home/alexche/.local/lib/python3.8/site-packages/locust/rpc/protocol.py", line 18, in unserialize
    msg = cls(*msgpack.loads(data, raw=False, strict_map_key=False))
  File "msgpack/_unpacker.pyx", line 161, in msgpack._unpacker.unpackb
TypeError: unpackb() got an unexpected keyword argument 'strict_map_key'

when the locust4j client connects to the master

The master starts with:
locust -f /src/main/resources/locust-master.py --run-time=20m --headless -u 1 -r 1 --master

The slave starts with:

val locust = Locust.getInstance()
   locust.setMasterHost("127.0.0.1");
   locust.setMasterPort(5557)
   locust.setMaxRPS(2)
   locust.run(RdScenario())

with scenario:

class RdScenario : AbstractTask() {
    override fun getWeight() = 1

    override fun getName() = "Some name"

    override fun execute() {
    }
}

locust version 1.4.1
locust4j version 1.0.12

Run locust using a schedule from yaml file and RampUpRateLimiter not working as expected

The yaml file being read into locustConfig:

uiMaxRPS: 20
autoRun: true
steps:
  - numUsers: 10
    hatchRate: 1
    durationInSec: 30
    maxRPS: 20
    name: Initial Load
  - numUsers: 15
    hatchRate: 2
    durationInSec: 60
    maxRPS: 30
    name: Ramp up
  - numUsers: 20
    hatchRate: 3
    durationInSec: 90
    maxRPS: 40
    name: Peak Load

DurationInSec - how long the step should run, in seconds
NumUsers - the maximum number of users to spawn
HatchRate - how many users are spawned per second
MaxRPS - the max RPS for the step
Name - a label showing which step of the schedule is running

I am trying to implement a scenario where, if autoRun is true, the locust run is driven automatically by the Step details from the yaml config file, and, if possible, the run is visible on the locust web UI, or at least I still get the statistics report in a presentable form.

When autoRun is false, the user controls NumUsers and HatchRate from the Locust web UI. The false flow works very well, but with autoRun true, the task execution doesn't happen as expected, I see nothing on the locust web UI, and the execution doesn't stop after the steps complete.

Below is code I used:

 if (locustConfig.getAutoRun()) {
            for (LocustConfig.Step step : locustConfig.getSteps()) {
                System.out.println("Executing step : "+ step.getName() + "\n MaxRPS: " +step.getMaxRPS()+ "\n HatchRate: " +step.getHatchRate()+ "\n Duration: " +step.getDurationInSec());
                locust.setRateLimiter(new RampUpRateLimiter(step.getMaxRPS(), step.getNumUsers(), step.getHatchRate(), TimeUnit.SECONDS, step.getDurationInSec(), TimeUnit.SECONDS));
                tasks.clear();
                tasks.add(new GetHandshakeApiTask(1, getHandshakeApi));
                tasks.add(new GetValidateUserLicenseApiTask(1, getValidateUserLicenseApi));
                locust.run(tasks);
                
            }
        }  else {
            locust.setMaxRPS(locustConfig.getUiMaxRPS());
            tasks.add(new GetHandshakeApiTask(1, getHandshakeApi));
            tasks.add(new GetValidateUserLicenseApiTask(1, getValidateUserLicenseApi));
    
            locust.run(tasks);
        }
        
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            locust.stop();
            Stats.getInstance().stop();
         }));
        
    }
}

Please help me with this implementation.

What version(s) of locustio are supported for the masters in distributed mode

Hello. What versions of locustio are supported for the master when running in distributed mode?

When I run my slaves against a 0.12.1 master, and I receive a ReadTimeout on a slave task, I frequently see this error reported from the master, and it's not recorded properly in the web UI:

[2019-11-05 17:26:19,758] ip-10-51-78-73/INFO/locust.runners: Sending hatch jobs to 5 ready clients
[2019-11-05 17:26:22,312] ip-10-51-78-73/ERROR/stderr: Traceback (most recent call last):
[2019-11-05 17:26:22,313] ip-10-51-78-73/ERROR/stderr: File "src/gevent/greenlet.py", line 766, in gevent._greenlet.Greenlet.run
[2019-11-05 17:26:22,313] ip-10-51-78-73/ERROR/stderr: File "/home/ubuntu/.local/lib/python2.7/site-packages/locust/runners.py", line 387, in client_listener
[2019-11-05 17:26:22,313] ip-10-51-78-73/ERROR/stderr: events.slave_report.fire(client_id=msg.node_id, data=msg.data)
[2019-11-05 17:26:22,313] ip-10-51-78-73/ERROR/stderr: File "/home/ubuntu/.local/lib/python2.7/site-packages/locust/events.py", line 34, in fire
[2019-11-05 17:26:22,313] ip-10-51-78-73/ERROR/stderr: handler(**kwargs)
[2019-11-05 17:26:22,313] ip-10-51-78-73/ERROR/stderr: File "/home/ubuntu/.local/lib/python2.7/site-packages/locust/stats.py", line 600, in on_slave_report
[2019-11-05 17:26:22,314] ip-10-51-78-73/ERROR/stderr: entry = StatsEntry.unserialize(stats_data)
[2019-11-05 17:26:22,314] ip-10-51-78-73/ERROR/stderr: File "/home/ubuntu/.local/lib/python2.7/site-packages/locust/stats.py", line 405, in unserialize
[2019-11-05 17:26:22,314] ip-10-51-78-73/ERROR/stderr: setattr(obj, key, data[key])
[2019-11-05 17:26:22,314] ip-10-51-78-73/ERROR/stderr: KeyError: 'num_none_requests'
[2019-11-05 17:26:22,314] ip-10-51-78-73/ERROR/stderr: 2019-11-05T17:26:22Z
[2019-11-05 17:26:22,314] ip-10-51-78-73/ERROR/stderr: 
[2019-11-05 17:26:22,314] ip-10-51-78-73/ERROR/stderr: <Greenlet at 0x7fdf6579c158: <bound method MasterLocustRunner.client_listener of <locust.runners.MasterLocustRunner object at 0x7fdf6578ded0>>> failed with KeyError
[2019-11-05 17:26:26,368] ip-10-51-78-73/INFO/locust.runners: Slave ip-10-51-78-238_0220ff2b03bbbc1c91800b7b0ba756dc failed to send heartbeat, setting state to missing.
[2019-11-05 17:26:26,368] ip-10-51-78-73/INFO/locust.runners: Slave ip-10-51-78-238_e2aa790224e6a7e6e6a4d6507e16aee5 failed to send heartbeat, setting state to missing.
[2019-11-05 17:26:26,368] ip-10-51-78-73/INFO/locust.runners: Slave ip-10-51-78-238_a92507d37bd5ca89067babee4df7f3fc failed to send heartbeat, setting state to missing.
[2019-11-05 17:26:26,369] ip-10-51-78-73/INFO/locust.runners: Slave ip-10-51-78-238_37c521fec7104ac830e46eb9e82fb350 failed to send heartbeat, setting state to missing.
[2019-11-05 17:26:26,369] ip-10-51-78-73/INFO/locust.runners: Slave ip-10-51-78-238_6f3e1b09ce3ca2428552929d5a2fe351 failed to send heartbeat, setting state to missing.

When I run against a 0.12.2 master, the error looks like this:

[2019-11-05 17:01:43,741] ip-10-51-78-73/INFO/locust.runners: Sending hatch jobs to 5 ready clients
[2019-11-05 17:02:25,716] ip-10-51-78-73/ERROR/stderr: Traceback (most recent call last):
[2019-11-05 17:02:25,717] ip-10-51-78-73/ERROR/stderr: File "src/gevent/greenlet.py", line 766, in gevent._greenlet.Greenlet.run
[2019-11-05 17:02:25,717] ip-10-51-78-73/ERROR/stderr: File "/home/ubuntu/.local/lib/python2.7/site-packages/locust/runners.py", line 360, in client_listener
[2019-11-05 17:02:25,717] ip-10-51-78-73/ERROR/stderr: events.slave_report.fire(client_id=msg.node_id, data=msg.data)
[2019-11-05 17:02:25,717] ip-10-51-78-73/ERROR/stderr: File "/home/ubuntu/.local/lib/python2.7/site-packages/locust/events.py", line 34, in fire
[2019-11-05 17:02:25,717] ip-10-51-78-73/ERROR/stderr: handler(**kwargs)
[2019-11-05 17:02:25,718] ip-10-51-78-73/ERROR/stderr: File "/home/ubuntu/.local/lib/python2.7/site-packages/locust/stats.py", line 578, in on_slave_report
[2019-11-05 17:02:25,718] ip-10-51-78-73/ERROR/stderr: global_stats.errors[error_key] = StatsError.from_dict(error)
[2019-11-05 17:02:25,718] ip-10-51-78-73/ERROR/stderr: File "/home/ubuntu/.local/lib/python2.7/site-packages/locust/stats.py", line 531, in from_dict
[2019-11-05 17:02:25,718] ip-10-51-78-73/ERROR/stderr: data["occurrences"]
[2019-11-05 17:02:25,718] ip-10-51-78-73/ERROR/stderr: KeyError: 'occurrences'
[2019-11-05 17:02:25,718] ip-10-51-78-73/ERROR/stderr: 2019-11-05T17:02:25Z
[2019-11-05 17:02:25,718] ip-10-51-78-73/ERROR/stderr: 
[2019-11-05 17:02:25,718] ip-10-51-78-73/ERROR/stderr: <Greenlet at 0x7ff475682578: <bound method MasterLocustRunner.client_listener of <locust.runners.MasterLocustRunner object at 0x7ff47563cb90>>> failed with KeyError
[2019-11-05 17:02:29,223] ip-10-51-78-73/INFO/locust.runners: Slave ip-10-51-78-238_849f28d9ddd56bba629998f553e34770 failed to send heartbeat, setting state to missing.
[2019-11-05 17:02:29,223] ip-10-51-78-73/INFO/locust.runners: Slave ip-10-51-78-238_15a0898d89d93894938e50e3a10ecece failed to send heartbeat, setting state to missing.
[2019-11-05 17:02:29,223] ip-10-51-78-73/INFO/locust.runners: Slave ip-10-51-78-238_4fe0e0a4cc179528915ce45261fcd873 failed to send heartbeat, setting state to missing.
[2019-11-05 17:02:29,223] ip-10-51-78-73/INFO/locust.runners: Slave ip-10-51-78-238_5cbaa23ff1e82b80317e34339301c773 failed to send heartbeat, setting state to missing.
[2019-11-05 17:02:29,223] ip-10-51-78-73/INFO/locust.runners: Slave ip-10-51-78-238_ab8e8d9579b16324e08941ed9d8ddd6e failed to send heartbeat, setting state to missing.

As far as I can tell, this error is not something I can fix with changes to my Task implementation, and I suspect it may be due to incompatibility with some newer versions of the locustio master. Any insight appreciated.

I don't see these errors running against a 0.11.0 locustio master.

ClassCastException calling Locust.setVerbose();

Hello. I am seeing this exception:

Exception in thread "main" java.lang.ClassCastException: org.slf4j.impl.Log4jLoggerFactory cannot be cast to ch.qos.logback.classic.LoggerContext
	at com.github.myzhan.locust4j.Locust.setVerbose(Locust.java:129)

When trying to call Locust.setVerbose(false).

Locust 1.x compatibility

I tried running Locust4j and it says it's connected to master, but when I open the webui there are 0 workers listed.

Tried running with 1.14.x and 1.0.x, neither worked. Are there any plans to support 1.x locust?

Error while receiving a message

I have used several locust versions (2.0, 2.2, 2.16, 2.20) to control locust4j workers, but exceptions always occurred. Which locust version do you recommend?

2024-05-17 17:18:19 ERROR Runner:415 - Error while receiving a message
java.lang.IllegalArgumentException
        at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1314)
        at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1237)
        at com.github.myzhan.locust4j.runtime.Runner.startSpawning(Runner.java:202)
        at com.github.myzhan.locust4j.runtime.Runner.onSpawnMessage(Runner.java:291)
        at com.github.myzhan.locust4j.runtime.Runner.onMessage(Runner.java:336)
        at com.github.myzhan.locust4j.runtime.Runner.access$600(Runner.java:28)
        at com.github.myzhan.locust4j.runtime.Runner$Receiver.run(Runner.java:413)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
2024-05-17 17:19:43 ERROR Runner:415 - Error while receiving a message
java.lang.NullPointerException
        at com.github.myzhan.locust4j.runtime.Runner.shutdownThreadPool(Runner.java:244)
        at com.github.myzhan.locust4j.runtime.Runner.stop(Runner.java:254)
        at com.github.myzhan.locust4j.runtime.Runner.onMessage(Runner.java:339)
        at com.github.myzhan.locust4j.runtime.Runner.access$600(Runner.java:28)
        at com.github.myzhan.locust4j.runtime.Runner$Receiver.run(Runner.java:413)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)

Failures appear to be over-reported

It seems like however the failure counts are being reported to the locust master, they're being over-reported.

I'm frequently seeing runs where the main statistics screen in the UI shows over 3 million successes compared to ~1000 failures, but when I click on the failures tab it shows 1.5 million failures.

Furthermore if I click the "stop" button in the UI, and monitor the logs on my locust4j worker nodes, I can see that they've stopped sending traffic (my server metrics also confirm this), but the failure count in the locust UI continues to rise until I kill the worker process, at which point it stops.

It seems like there is some thread reporting the failure metrics in the background that is not clearing out the failure data after it reports it, so the data accumulates over time and keeps inflating the failure stats.

recordFailure does not record as a request

I noticed that when an error is recorded, it is not also recorded as a request, causing the RPS to be lower than expected.

Both the Python client and boomer appear to record errors as requests too; however, locust4j does not. I believe the difference is here.

In boomer, both a failure and a request are recorded:
https://github.com/myzhan/boomer/blob/e17d364d020f6feae49a06c05f02b19c6dcc6a5e/stats.go#L142-L144

In locust4j, only the error is logged:
https://github.com/myzhan/locust4j/blob/1.0.12/src/main/java/com/github/myzhan/locust4j/stats/Stats.java#L138-L142

Distributed workers stop reporting request stats

Hi. I am using locust4j 1.0.12 with locust master 1.4.3. My master is on a remote host from the workers. Frequently, I've noticed that when starting and stopping tests via the UI, the workers stop reporting metrics to the master. This does not happen every time; I have seen it restart successfully too. But I can get into this state pretty easily by just:

  1. Bring up distributed worker
  2. Start test in UI, let it run a few seconds.
  3. Stop test in UI
  4. Start new test in UI

Does not seem to be load related as it will happen even with very light load. Restarting the worker restores the stats again but the issue can be reproduced. Heartbeats appear not to be affected as the worker is reporting in its state and cpu stats without issue.

On another note, when I restart the worker, the worker count increases in the UI despite the Workers tab reflecting that the worker is missing (it also continues to report cpu stats for the missing worker). Presumably this is all because I am still using the same host when I restart the worker.

Let me know if I can provide additional details. I have been using tcpdump to inspect the traffic on port 5557, and it's pretty clear that the heartbeat is consistently sent, but the stat packets stop being sent. I can also see that the workers are actually executing the tasks, just not reporting the stats for them.

Exception thrown on attempt to report request results to master

Hi all, I'm currently trying to use locust with the Java client in distributed mode. I have written a simple custom task that sends a GET request to a version endpoint. The approach I'm using is described here: https://www.blazemeter.com/blog/locust-performance-testing-using-java-and-kotlin/
So the problem I'm struggling with is that when the following line is executed:
Locust.getInstance().recordSuccess(String requestType, String name, long responseTime, long contentLength);
or
Locust.getInstance().recordFailure(String requestType, String name, long responseTime, String error);

the Locust master node throws an exception and logs the following:
[2020-03-20 07:59:40,262] 5e94371d5d92/INFO/locust.util.exception_handler: Exception found on retry 1: -- retry after 1s

I'm running Locust in Docker and I've tried different versions of Python and the Locust master.
The most stable combination I've found is Python 3.6 + Locustio 0.11.0.

Here is the full log:
docker run -ti -p 5557:5557 -p 8089:8089 locust
[2020-03-20 07:46:10,978] 5e94371d5d92/INFO/locust.main: Starting web monitor at *:8089
[2020-03-20 07:46:10,991] 5e94371d5d92/INFO/locust.main: Starting Locust 0.11.0
[2020-03-20 07:59:22,425] 5e94371d5d92/INFO/locust.runners: Client 'macbook0258_226e0d2986be793d6ac910f36f141a4c' reported as ready. Currently 1 clients ready to swarm.
[2020-03-20 07:59:37,413] 5e94371d5d92/INFO/locust.runners: Sending hatch jobs to 1 ready clients
[2020-03-20 07:59:40,262] 5e94371d5d92/INFO/locust.util.exception_handler: Exception found on retry 1: -- retry after 1s
[2020-03-20 07:59:43,269] 5e94371d5d92/INFO/locust.util.exception_handler: Exception found on retry 1: -- retry after 1s
[2020-03-20 07:59:46,272] 5e94371d5d92/INFO/locust.util.exception_handler: Exception found on retry 1: -- retry after 1s
[2020-03-20 07:59:49,281] 5e94371d5d92/INFO/locust.util.exception_handler: Exception found on retry 1: -- retry after 1s
[2020-03-20 07:59:52,254] 5e94371d5d92/INFO/locust.util.exception_handler: Exception found on retry 1: -- retry after 1s
[2020-03-20 07:59:55,261] 5e94371d5d92/INFO/locust.util.exception_handler: Exception found on retry 1: -- retry after 1s
[2020-03-20 07:59:58,267] 5e94371d5d92/INFO/locust.util.exception_handler: Exception found on retry 1: -- retry after 1s

Expected behavior
The request results should have been recorded and displayed in the UI.

Actual behavior
The request results were not recorded and an exception was thrown.

Steps to reproduce
Implement a simple custom task following the instructions described at https://www.blazemeter.com/blog/locust-performance-testing-using-java-and-kotlin/
In the custom task, send some simple request and record the results.
Set up the Locust master and run the following command: locust -f locust-master.py --master --master-bind-host=127.0.0.1 --master-bind-port=5557
Check the heartbeat in the Locust master's output.
Go to the UI and run the performance test.
Environment
OS: macOS Mojave : 10.14.6
Python version: python:3.6.8
Locust version: 0.11.0
Locust command line that you ran: locust -f /locust-master.py --master --master-bind-host=0.0.0.0 --master-bind-port=5557
Locust file contents (anonymized if necessary):
from locust import Locust, TaskSet, task

class DummyTask(TaskSet):
    @task(1)
    def dummy(self):
        pass

class Dummy(Locust):
    task_set = DummyTask
Using locust4j lib https://github.com/myzhan/locust4j - 1.0.8

P.S.
I've tried different approaches: local runs, Docker, different Python versions, Locust versions, Locust4j versions, and OS types, including Windows 10, and I have no idea anymore what's wrong.

Exceptions when running with the newest locust (2.25.0)

Hi,

I am using the newest version of locust4j with the newest version of locust (2.25.0).
I am seeing a large number of exceptions like:

java.lang.IllegalArgumentException: null
  at java.base/java.util.concurrent.ThreadPoolExecutor.setMaximumPoolSize(Unknown Source)
  at com.github.myzhan.locust4j.runtime.Runner.startSpawning(Runner.java:217)
  at com.github.myzhan.locust4j.runtime.Runner.onSpawnMessage(Runner.java:291)
  at com.github.myzhan.locust4j.runtime.Runner.onMessage(Runner.java:336)
  at com.github.myzhan.locust4j.runtime.Runner.access$600(Runner.java:28)
  at com.github.myzhan.locust4j.runtime.Runner$Receiver.run(Runner.java:413)
  at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
  at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.base/java.lang.Thread.run(Unknown Source)

Any ideas what could cause this, or hints on how to debug the issue?
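For what it's worth, the JDK itself throws a message-less `IllegalArgumentException` (hence the `: null` in the log) when `setMaximumPoolSize` is given a non-positive size, so one plausible cause (an assumption, not verified against the locust4j source) is that the spawn message from locust 2.x resolved to zero users when `Runner.startSpawning` resized its thread pool. A self-contained reproduction of the JDK behavior:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizeDemo {
    // Returns true when setMaximumPoolSize(size) throws IllegalArgumentException.
    static boolean rejects(int size) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
        try {
            pool.setMaximumPoolSize(size);
            return false;
        } catch (IllegalArgumentException e) {
            // The JDK rejects size <= 0 (and size < core pool size) with no message.
            return true;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(rejects(0));  // true
        System.out.println(rejects(4));  // false
    }
}
```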

Thank you
Jens

wait_time support

Hey!

Does locust4j support waiting between tasks? An equivalent to wait_time = between(1000, 3000) that's supported by locust?
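Whether or not locust4j ships a built-in equivalent, a per-task wait like `wait_time = between(1000, 3000)` can be approximated by sleeping at the end of the task's `execute()` body. A self-contained sketch (the `betweenMs` helper is hypothetical, not a locust4j API):

```java
import java.util.concurrent.ThreadLocalRandom;

public class WaitTime {
    // Picks a random wait in milliseconds, like locust's wait_time = between(min, max).
    static long betweenMs(long minMs, long maxMs) {
        return ThreadLocalRandom.current().nextLong(minMs, maxMs + 1);
    }

    public static void main(String[] args) throws InterruptedException {
        // At the end of a task's execute() body:
        Thread.sleep(betweenMs(1000, 3000));
    }
}
```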

Not compatible with locust 0.13

Locust 0.13 was released on the 14th of November 2019. When trying to run a locust4j slave with a dockerized master on version 0.13, the master throws:

[2019-11-18 12:04:38,469] perf-test-ui-6744849bb7jw9ww/ERROR/stderr: Traceback (most recent call last):
[2019-11-18 12:04:38,482] perf-test-ui-6744849bb7jw9ww/ERROR/stderr:
[2019-11-18 12:04:38,483] perf-test-ui-6744849bb7jw9ww/ERROR/stderr: File "src/gevent/greenlet.py", line 766, in gevent._greenlet.Greenlet.run
[2019-11-18 12:04:38,483] perf-test-ui-6744849bb7jw9ww/ERROR/stderr:
[2019-11-18 12:04:38,483] perf-test-ui-6744849bb7jw9ww/ERROR/stderr: File "/usr/local/lib/python3.6/site-packages/locust/runners.py", line 382, in client_listener
    events.slave_report.fire(client_id=msg.node_id, data=msg.data)
[2019-11-18 12:04:38,483] perf-test-ui-6744849bb7jw9ww/ERROR/stderr:
[2019-11-18 12:04:38,483] perf-test-ui-6744849bb7jw9ww/ERROR/stderr: File "/usr/local/lib/python3.6/site-packages/locust/events.py", line 34, in fire
    handler(**kwargs)
[2019-11-18 12:04:38,483] perf-test-ui-6744849bb7jw9ww/ERROR/stderr:
[2019-11-18 12:04:38,483] perf-test-ui-6744849bb7jw9ww/ERROR/stderr: File "/usr/local/lib/python3.6/site-packages/locust/stats.py", line 642, in on_slave_report
    global_stats.total.extend(StatsEntry.unserialize(data["stats_total"]))
[2019-11-18 12:04:38,483] perf-test-ui-6744849bb7jw9ww/ERROR/stderr:
[2019-11-18 12:04:38,484] perf-test-ui-6744849bb7jw9ww/ERROR/stderr: File "/usr/local/lib/python3.6/site-packages/locust/stats.py", line 428, in unserialize
    setattr(obj, key, data[key])
[2019-11-18 12:04:38,484] perf-test-ui-6744849bb7jw9ww/ERROR/stderr:
[2019-11-18 12:04:38,484] perf-test-ui-6744849bb7jw9ww/ERROR/stderr: KeyError: 'num_fail_per_sec'
[2019-11-18 12:04:38,484] perf-test-ui-6744849bb7jw9ww/ERROR/stderr:
[2019-11-18 12:04:38,484] perf-test-ui-6744849bb7jw9ww/ERROR/stderr: 2019-11-18T12:04:38Z
[2019-11-18 12:04:38,484] perf-test-ui-6744849bb7jw9ww/ERROR/stderr:
[2019-11-18 12:04:38,484] perf-test-ui-6744849bb7jw9ww/ERROR/stderr: <Greenlet at 0x7f425c7b8e48: <bound method MasterLocustRunner.client_listener of <locust.runners.MasterLocustRunner object at 0x7f425d525400>>> failed with KeyError

I suspect it's related to the new information being reported:
locustio/locust#1125
locustio/locust#945
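As a rough illustration of the mismatch: the `KeyError: 'num_fail_per_sec'` in the traceback suggests the master's `StatsEntry.unserialize` now requires that key in every reported stats entry. The sketch below shows the shape of such an entry as a plain map; only the `num_fail_per_sec` and `num_reqs_per_sec` field names come from the traceback and linked issues, the rest is an assumption about locust's serialized format, not a verified schema:

```java
import java.util.HashMap;
import java.util.Map;

public class StatsEntryPayload {
    // Builds a minimal serialized stats entry for a master running locust >= 0.13.
    static Map<String, Object> build(String name, String method) {
        Map<String, Object> entry = new HashMap<>();
        entry.put("name", name);
        entry.put("method", method);
        entry.put("num_requests", 0L);
        entry.put("num_failures", 0L);
        entry.put("num_reqs_per_sec", new HashMap<Long, Long>());
        entry.put("num_fail_per_sec", new HashMap<Long, Long>()); // new in locust 0.13
        return entry;
    }
}
```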
