
mzbench's People

Contributors

alinpopa, doubleyou, getong, larshesel, loguntsov, moigagoo, mz-bmcqueen, parsifal-47, timofey-barmin, vlm


mzbench's Issues

Turn off one of the counter graphs

Any counter currently draws two graphs in the dashboard: absolute and rps. For most metrics, only one or the other is usually needed, rarely both. It would be great if I could explicitly specify which type I need, or turn off one of the graphs.

Basic HTTP Worker Example

Please add a couple of example benchmarks for simple HTTP load testing.

If I want to test my service with ApacheBench, I just do ab mysite.com. If I want to do the same with mzbench, I have to read through the entire documentation and learn about workers and stuff, just to write a simple bench at the end.
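For example, something along these lines would already cover the ApacheBench use case (this borrows the simple_http_worker shape that appears in another issue further down this page, so treat it as a sketch rather than the officially recommended form):

[ {pool,
    [
        {size, 1},
        {worker_type, simple_http_worker}
    ],
    [
        {loop,
            [
                {time, {1, min}},
                {rate, {10, rps}}
            ],
            [
                {get, "http://mysite.com/"}
            ]
        }
    ]
}
].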

documentation to use worker

There is documentation on how to create a worker, but nothing that explains how to use it. More generally, how do we create custom nodes?

Loop arguments get evaluated every iteration

Imagine I have the following scenario:

{loop, [{time, {{calculate_loop_timeout}, sec}}], [
    {do_something}
]}

calculate_loop_timeout is a function defined in the worker module. I expect this function to be called once, return a value, and have that value used as the loop argument. What happens instead is that the function is called on every iteration of the loop. If the function is not deterministic (for example, it returns a random value), it returns a different value on each iteration, which leads to unpredictable scenario behaviour.
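To make the expected semantics concrete, here is a reduced plain-Erlang illustration (this is not mzbench code, just a sketch of "evaluate once" versus the current "evaluate every iteration" behaviour):

%% Sketch only: models a time-bounded loop outside of mzbench.
now_ms() -> erlang:monotonic_time(millisecond).

%% Expected: the timeout expression is forced exactly once, before the loop starts.
loop_expected(Body, CalcTimeoutMs) ->
    Start = now_ms(),
    Timeout = CalcTimeoutMs(),
    run(Start, fun() -> Timeout end, Body).

%% Observed: the expression is re-evaluated on every iteration, so a
%% non-deterministic CalcTimeoutMs keeps changing the effective timeout.
loop_observed(Body, CalcTimeoutMs) ->
    Start = now_ms(),
    run(Start, CalcTimeoutMs, Body).

run(Start, TimeoutFun, Body) ->
    case now_ms() - Start < TimeoutFun() of
        true  -> Body(), run(Start, TimeoutFun, Body);
        false -> ok
    end.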

gitter client

Hi,

My name is Adrian Lewis and I work for the BBC Digital Load Test Team. I am thinking of incorporating MZBench into our toolbox of load tools, but I may need to ask some questions. I wondered if you would consider using a Gitter client for your project, so that I and other users could ask these questions?

Many Thanks

Stale Erlang nodes on workers

When I run a scenario on specific nodes (which I allocate myself), sometimes, after an abnormal termination of a test, I run into the following situation: the worker nodes keep some Erlang code running, and when a new test starts it cannot provision the workers because something is already running there. Usually such a test then times out in the provisioning stage.
To start a new test I have to ssh to each worker node and kill the Erlang processes there.

Fetching wrong version of jiffy

Hi,

I tried to get mzbench working but failed due to the wrong version of jiffy being downloaded.
I get this compiler error:
..
c_src/double-conversion/bignum.cc: In member function ‘void double_conversion::Bignum::AssignDecimalString(double_conversion::Vector)’:
c_src/double-conversion/bignum.cc:101:6: error: assuming signed overflow does not occur when assuming that (X + c) < X is always false [-Werror=strict-overflow]
void Bignum::AssignDecimalString(Vector value) {
^
cc1plus: all warnings being treated as errors
ERROR: compile failed while processing /home/zendenp/work/mzbench/mzbench/server/_build/default/deps/jiffy: rebar_abort
Makefile:29: recipe for target 'build' failed
..

This is caused by the fact that the wrong jiffy package is loaded. The correct version is skipped:
..
===> Fetching goldrush ({git,"git://github.com/DeadZen/goldrush.git",
{tag,"0.1.7"}})
===> Skipping jiffy (from {git,"git://github.com/davisp/jiffy.git",
{tag,"0.14.2"}}) as an app of the same name has already been fetched
..

I verified that the file jiffy/c_src/double-conversion/bignum.cc is indeed not the latest version:
..
void Bignum::AssignDecimalString(Vector value) {
// 2^64 = 18446744073709551616 > 10^19
const int kMaxUint64DecimalDigits = 19;
Zero();
int length = value.length();
int pos = 0; //<====== should be unsigned int pos = 0;
// Let's just say that each digit needs 4 bits.
while (length >= kMaxUint64DecimalDigits) {
...

The same, of course, happens when building the node software.
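In case it helps others: the workaround I would try (assuming rebar3 prefers top-level deps over transitive ones, so this is a guess rather than a confirmed fix) is to pin the correct jiffy explicitly in the server's rebar.config:

%% rebar.config (sketch): force the davisp/jiffy 0.14.2 tag that the log
%% above shows being skipped.
{deps, [
    {jiffy, {git, "git://github.com/davisp/jiffy.git", {tag, "0.14.2"}}}
]}.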

Running on Kubuntu 15.10 64-bit, Erlang/OTP 18.

Regards
Paul

Server does not start

edgi@ubuntu:~$ cd mzbench
edgi@ubuntu:~/mzbench$ ./bin/mzbench start_server
Executing make -C /home/edgi/mzbench/bin/../server generate
....................................................
Executing /home/edgi/mzbench/bin/../server/_build/default/rel/mzbench_api/bin/mzbench_api start
..........................................................................................................................
(the dots continue like this indefinitely; the server never finishes starting)

New metrics in runtime

Hi guys,
it would be great to have a way to add new metrics at runtime, or in a special state in which a worker can get information from somewhere (for example, from the scenario). Sometimes it is necessary to monitor metrics from different configurations of the system.

mzb_staticcloud_plugin

Is there any documentation on how to use this plugin? More specifically, how do I deploy the nodes?

Missing tarballs and no rule to make target 'generate_tgz'

14:29:07.982 [info] [ API ] <0.691.0> Missing tarballs: [{"linux-3.19.0-58-generic_erts-7.3",
                                                          "/home/afa/.local/cache/mzbench_api/packages/vmq_mzbench-1460.730547.829038-linux-3.19.0-58-generic_erts-7.3.tgz"}]
14:29:07.983 [info] [ API ] <0.707.0> Building package vmq_mzbench on localhost
14:29:07.983 [info] [ API ] <0.708.0> [ EXEC ] bash -c "export PATH='/home/afa/.kiex/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'; source /etc/profile;mkdir -p /tmp/bench_mzbench_api_capo_1460_730547_983224 && cd /tmp/bench_mzbench_api_capo_1460_730547_983224 && rsync -aW  /home/afa/wip/vmq_mzbench deployment_code && cd deployment_code/ && make generate_tgz && mv *.tgz /tmp/bench_mzbench_api_capo_1460_730547_983089.tgz " (<0.708.0>)
14:29:08.068 [error] [ API ] <0.708.0> [ EXEC ] Command execution failed in 85.4 ms
Cmd: bash -c "export PATH='/home/afa/.kiex/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'; source /etc/profile;mkdir -p /tmp/bench_mzbench_api_capo_1460_730547_983224 && cd /tmp/bench_mzbench_api_capo_1460_730547_983224 && rsync -aW  /home/afa/wip/vmq_mzbench deployment_code && cd deployment_code/ && make generate_tgz && mv *.tgz /tmp/bench_mzbench_api_capo_1460_730547_983089.tgz "
Exit code: 2
Output: make: *** No rule to make target 'generate_tgz'.  Stop.

14:29:08.069 [info] [ API ] <0.709.0> [ EXEC ] bash -c "export PATH='/home/afa/.kiex/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/afa/bin'; source /etc/profile;rm -rf /tmp/bench_mzbench_api_capo_1460_730547_983089.tgz; rm -rf /tmp/bench_mzbench_api_capo_1460_730547_983118; true " (<0.709.0>)
14:29:08.077 [info] [ API ] <0.709.0> [ EXEC ] OK in 8.288 ms (<0.709.0>)
14:29:08.077 [error] [ API ] <0.685.0> Stage 'pipeline - provisioning': failed
Command returned 2:
 bash -c "export PATH='/home/afa/.kiex/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/afa/bin'; source /etc/profile;mkdir -p /tmp/bench_mzbench_api_capo_1460_730547_983224 && cd /tmp/bench_mzbench_api_capo_1460_730547_983224 && rsync -aW  /home/afa/wip/vmq_mzbench deployment_code && cd deployment_code/ && make generate_tgz && mv *.tgz /tmp/bench_mzbench_api_capo_1460_730547_983089.tgz "
Command output: make: *** No rule to make target 'generate_tgz'.  Stop.

Pulled and recompiled MZBench. Am I missing a step here? I have a vmq_mzbench-7ecb30e-linux-3.19.0-58-generic_erts-7.3 in the local cache, but not a linux-3.19.0-58-generic_erts-7.3.

Thanks for any help :)

Smallish confusion in defining statements

Hi, I'd like to pass a proplist as an argument in a scenario file like this:

[
    {pool, [{size, 500},
            {worker_type, mqtt_pub_worker}],
        [
            {connect, [{clean_session, true}, {client_id, random}]} %% <- and more properties
        ]
    }
]

while only defining a connect function in the worker. MZBench expects statement functions for clean_session, client_id, etc. here; there's no way around that, right?

Looking for ways to dynamically configure the ami-3b90a80b image

Hello,
me again...
Had the idea to try some OS configuration on Amazon with a pre_hook script in my scenario files.
Should stuff like this work? (I tested, and it doesn't seem to increase file-max).

{pre_hook, [{exec, all, "sudo sysctl -w fs.file-max=100000"},
{exec, all, "sudo sysctl -e -p"}]},

Could I possibly reboot a system in a pre_hook?

Deterministic ClientIDs in a pool

Hello,

I just wondered what the best way would be to have a non-random, deterministic ID created for each worker in a pool. This could be as simple as a counter.

How would you go about this? Let's say you have a pool with 10k workers and a connect function (that needs to be given a client ID), and you want the IDs to be "1" through "10000".
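A sketch of the direction I have in mind, in case it helps the discussion. This is plain Erlang inside the worker, not an mzbench feature; the table name and the idea of calling next_client_id/0 from connect are assumptions, and it only stays deterministic per node:

%% Hand out sequential client IDs from a shared, per-node ETS counter.
%% Call ensure_counter/0 once (e.g. when the worker app starts) to avoid races.
ensure_counter() ->
    case ets:info(client_ids) of
        undefined -> ets:new(client_ids, [named_table, public, set]);
        _ -> client_ids
    end.

%% Returns "1", "2", ..., one per call.
next_client_id() ->
    integer_to_list(ets:update_counter(client_ids, seq, 1, {seq, 0})).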

add a way to pass application config in scripts

It can sometimes be useful to pass an application config environment in the script file, to initialize some data for all workers. Right now I'm using sys.config for this, but having it in the script file would allow any user to simply ship their script file.

Packages for Server

Hi guys,

It would be great to have packages for the most popular operating systems: RPM and DEB.
It isn't easy to figure out which C++ compiler version to install and how to do it.
It's easier with Python and pip because they are present in most systems by default.
But I still have to work out how to install the Erlang version you need.

Additionally, it would be great to have at least a working default config in
/etc/mzbench/server.config

So all I need is the ability to install a package and enjoy having a server.

http worker: Add Support for Request Options

Currently, there's no way to do basic auth, which is sad.

Since mzbench uses hackney, and hackney supports this via options, it would be great to be able to pass options through to hackney, e.g. with a set_options function.
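For illustration, here is a rough sketch of what that could look like. The set_options statement, the worker state shape, and the exact basic_auth option value are my assumptions, not the existing http_worker API, and error handling is omitted:

%% Hypothetical worker extension (sketch only): set_options stores hackney
%% options in the worker state and get/3 forwards them to hackney:request/5.
set_options(State, _Meta, Options) ->
    {nil, State#{hackney_options => Options}}.

get(#{hackney_options := Opts} = State, _Meta, URL) ->
    {ok, _Status, _Headers, Ref} =
        hackney:request(get, list_to_binary(URL), [], <<>>, Opts),
    {ok, _Body} = hackney:body(Ref),
    {nil, State}.

%% In the scenario, something like:
%%   {set_options, [{basic_auth, {<<"user">>, <<"secret">>}}]}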

Tried to copy ami-3b90a80b into my region: You do not have permission to access the storage of this ami

I'm currently fiddling around a bit with mzbench, and I'm very impressed. After clearing some hurdles, it is super easy to adjust the http_worker to my needs. I hope I can provide a PR or a release of my extended http worker (including hackney headers and options).

But now to the reason for this issue: I tried to copy your AMI to my region (eu-central-1), but it seems you haven't granted read access to your AMI bucket. Can you adjust it, or copy it into that region for me? I would appreciate it :)

Greetings, and thanks for open-sourcing this handy tool.

Enlightenment Request - MZ_PYTHON_WORKER_FIFO_NAME

Hello!

I am working on authoring a python bench worker for the first time and hit a point of confusion that doesn't seem to be well documented. To do so, I have been following the python_empty example.

  1. What is the purpose of MZ_PYTHON_WORKER_FIFO_NAME?

  2. What is the expected value under normal operation? If they need to be dynamically created/assigned, then what is the prescribed method for bootstrapping such fifos for python workers?

cloud provider plugin API

Hi,
I'd be interested in adding a cloud connector for project-fifo [1], but I am a bit confused about what exactly the functions are supposed to return.

The spec reads:

-spec create_cluster(Name :: string(), NumNodes :: pos_integer(), Config :: #{}) -> {ok, [string()], [string()]}.

and I suspect those are IDs and IPs, where the ID is 'just a name' and the IP is, well, an IP. Does element N in the first list correspond to element N in the second?

However, the function also returns a UserName in the tuple (which has three entries). Would it be safe to default that to something like 'root' or 'admin' if those users are guaranteed to exist?
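For what it's worth, here is how I currently read that spec, as a trivial sketch. Everything in it is an assumption on my part, including treating the two lists as node IDs and node addresses and the "localhost" placeholder:

%% Sketch of a minimal create_cluster/3 matching the quoted spec.
-spec create_cluster(Name :: string(), NumNodes :: pos_integer(), Config :: #{}) ->
    {ok, [string()], [string()]}.
create_cluster(Name, NumNodes, _Config) ->
    Ids   = [Name ++ "-" ++ integer_to_list(N) || N <- lists:seq(1, NumNodes)],
    Hosts = lists:duplicate(NumNodes, "localhost"),
    {ok, Ids, Hosts}.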

Problems understanding the work distribution in a distributed bench

Hi,

yesterday I was able to run my scenario on aws-ec2, but I have problems understanding the work distribution. My scenario is as simple as this:

[
    % GET pool
    { pool,
       [ {size, {numvar, "conn_count", 10}},
         {worker_type, http_worker}
       ],

       [
           { set_host, {var, "host"} },
           { set_port, {numvar, "port", 80} },

           % Get User
           { loop,
               [ {time, {{numvar, "time_minutes", 5}, min}},
                 {rate, {ramp, linear, {1, rps}, {{numvar, "max_rps", 10}, rps}}}
               ],
               [

                 % get a user
                 { get, "/user/id123"}
               ]
           }
       ]
    }
].

But executing this with the default values for conn_count, max_rps and time_minutes on 5 nodes, I would expect behavior similar to a local single-node execution. The rps value would scale from 10 to 1000 (or cap at a limit of the host): 10 pool jobs, each with a rate increasing from 1 to 10 over 5 minutes.

But this is the rps graph generated from a distributed execution on 5 nodes:

[screenshot 2016-01-05 23:14:41]

This is the graph with same settings but one local (dummy) node:

[screenshot 2016-01-06 00:45:58]

Why is the distributed rps so much lower with the same settings? How should I interpret the pool size? Possibly as a unit that simulates individual clients? Why does the pool behave so differently when distributed? I think I have a fundamental misunderstanding of the distribution model. Can you help me out?

Greetings

Variables in scenario DSL?

Within the sequential jobs of a pool, especially.

What is the estimated effort to add this? Is there a killer reason against it?

I tried to check whether this has already been asked; sorry if it has.
Keep up the great work :)

Cannot choose between cloud plugins in GUI dropdown list.

I have the dummy/local and the ec2 cloud plugin in my server.config.

In that case, 'local' is missing from the GUI 'cloud' dropdown list. To switch to local I actually have to comment out the ec2 plugin in server.config.

BTW, some marginal friction: it wasn't immediately clear to me from the cloud plugin configuration that
{group_set, ["MZbench_cluster"]} takes a list (a set), not just a plain string,
and that subnet_id and iam_instance_profile_name can be set to the atom undefined.

Keep up the awesome MZBench work ! :)

Time Requirements?

I am working on rolling my own AMI to use with the provided EC2 cloud plugin.

During my runs, I am seeing the following error during provisioning:

Apr 11:24:25 ntpdate[1449]: adjust time server 45.127.112.2 offset 0.438031 sec"
11:24:25.461 [info] [ API ] <0.317.0> NTP time diffs are: [-0.002511,0.438031], max distance is 0.440542
11:24:25.462 [info] [ API ] <0.317.0> NTP check result: {'EXIT',
                                                         {{ntp_check_failed,
                                                           0.440542},
                                                          [{mzb_api_provision,
                                                            ntp_check,3,
                                                            [{file,
                                                              "/opt/mzb/server/_build/default/deps/mzbench_api/src/mzb_api_provision.erl"},
                                                             {line,104}]},
                                                           {mzb_api_provision,
                                                            provision_nodes,2,
                                                            [{file,
                                                              "/opt/mzb/server/_build/default/deps/mzbench_api/src/mzb_api_provision.erl"},
                                                             {line,26}]},
                                                           {mzb_api_bench,
                                                            handle_stage,3,
                                                            [{file,
                                                              "/opt/mzb/server/_build/default/deps/mzbench_api/src/mzb_api_bench.erl"},
                                                             {line,180}]},
                                                           {mzb_pipeline,
                                                            '-handle_cast/2-fun-0-',
                                                            6,
                                                            [{file,
                                                              "/opt/mzb/server/_build/default/deps/mzbench_api/src/mzb_pipeline.erl"},
                                                             {line,165}]}]}}

What recommendations do you have for getting system time to be within (default) tolerance?

Web UI not showing

Hello,

I just noticed that with a fresh clone of MZBench, the UI at localhost:4800 only shows the navigation bar with "MZBench, Docs and Issues".

Haven't investigated further, but what could be the reason for that?

Thanks :)

worker_type as a module, not as an app

Hi guys,

Currently the worker_type directive refers to the name of a worker application. What if it referred to the name of a module inside that application?
In this case make_install would install the app the same way as before, but worker_type would use a certain module inside that app.
This would simplify worker development, since you wouldn't need a completely separate app (and repository!) for each worker and could have several workers as part of the same application.
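For example, a scenario could then look something like this (purely hypothetical syntax to illustrate the proposal; my_workers_app and my_http_worker are made-up names):

[ {pool,
    [
        {size, 10},
        %% hypothetical: {App, Module} instead of just the app name
        {worker_type, {my_workers_app, my_http_worker}}
    ],
    [
        {get, "http://localhost:8080"}
    ]
}
].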

What do you think?

value out of range error

12:25:17.753 [error] <0.231.0> gen_fsm <0.231.0> in state connected terminated with reason: {value_out_of_range,-791} in mz_histogram:notify/2 line 71
12:25:17.774 [error] <0.231.0> CRASH REPORT Process <0.231.0> with 0 neighbours exited with reason: {value_out_of_range,-791} in mz_histogram:notify/2 line 71 in gen_fsm:terminate/7 line 626

Could this be caused by clock differences on the Amazon machines?

simple_http_worker: GET-params not supported

This scenario works:

[ {pool,
    [
        {size, 5},
        {worker_type, simple_http_worker}
    ],
    [
        {loop,
            [
                {time, {1, min}},
                {rate, {10, rps}}
            ],
            [
                {get, "http://localhost:8080"}
            ]
        }
    ]
}
].

This one fails:

[ {pool,
    [
        {size, 5},
        {worker_type, simple_http_worker}
    ],
    [
        {loop,
            [
                {time, {1, min}},
                {rate, {10, rps}}
            ],
            [
                {get, "http://localhost:8080?x=12"}
            ]
        }
    ]
}
].

Logs:

12:25:03.005 [info] [ API ] Node repo: {git_install_spec,
                                        "https://github.com/machinezone/mzbench.git",
                                        "3b07c4cd6af25a0ff70f22cde857c30277f3ebbd",
                                        "node"}
12:25:03.006 [info] [ API ] Stage 'pipeline - init': started
12:25:03.006 [info] [ API ] Starting benchmark at 1454675103 #{benchmark_name => "simple_http.erl",
                                                               cloud => undefined,
                                                               deallocate_after_bench => true,
                                                               director_host => undefined,
                                                               emulate_bench_crash => false,
                                                               env => [{<<"mzb_script_name">>,
                                                                 <<"simple_http.erl">>}],
                                                               exclusive_node_usage => true,
                                                               id => 100,
                                                               initial_user => "kmolchanov",
                                                               log_compression => deflate,
                                                               log_file => "log.txt",
                                                               metrics_compression => none,
                                                               metrics_file => "metrics_~s.txt",
                                                               node_install_spec => {git_install_spec,"https://github.com/machinezone/mzbench.git",
                                                                                 "3b07c4cd6af25a0ff70f22cde857c30277f3ebbd",
                                                                                 "node"},
                                                               node_log_port => 4801,
                                                               node_management_port => 4802,
                                                               nodes_arg => 1,
                                                               provision_nodes => true,
                                                               purpose => "bench-100-1454675103",
                                                               req_host => <<"localhost:4800">>,
                                                               script => #{body => <<"[ {pool,\n    [\n        {size, 5},\n        {worker_type, simple_http_worker}\n    ],\n    [\n        {loop,\n            [\n                {time, {1, min}},\n                {rate, {10, rps}}\n            ],\n            [\n                {get, \"http://localhost:8080?x=12\"}\n            ]\n        }\n    ]\n}\n].\n">>,
                                                                 filename => "43742310C2FD2E626D70E2A28FA85DDC4ED7253A.erl",
                                                                 name => "simple_http.erl"},
                                                               vm_args => [],
                                                               worker_hosts => []}
12:25:03.006 [info] [ API ] Stage 'pipeline - init': finished
12:25:03.006 [info] [ API ] Stage 'pipeline - checking_script': started
12:25:03.007 [info] [ API ] Stage 'pipeline - checking_script': finished
12:25:03.007 [info] [ API ] Stage 'pipeline - allocating_hosts': started
12:25:03.007 [info] [ API ] Allocating 2 hosts in undefined cloud...
12:25:03.007 [info] [ API ] Allocated hosts: [undefined] @ ["localhost"]
12:25:03.007 [info] [ API ] Stage 'pipeline - allocating_hosts': finished
12:25:03.007 [info] [ API ] Stage 'pipeline - provisioning': started
12:25:03.007 [info] [ API ] Provisioning nodes: ["localhost"]
With config: #{benchmark_name => "simple_http.erl",
               cloud => undefined,
               deallocate_after_bench => true,
               director_host => "localhost",
               emulate_bench_crash => false,
               env => [{<<"mzb_script_name">>,<<"simple_http.erl">>}],
               exclusive_node_usage => true,
               id => 100,
               initial_user => "kmolchanov",
               log_compression => deflate,
               log_file => "log.txt",
               metrics_compression => none,
               metrics_file => "metrics_~s.txt",
               node_install_spec => {git_install_spec,"https://github.com/machinezone/mzbench.git",
                                 "3b07c4cd6af25a0ff70f22cde857c30277f3ebbd",
                                 "node"},
               node_log_port => 4801,
               node_management_port => 4802,
               nodes_arg => 1,
               provision_nodes => true,
               purpose => "bench-100-1454675103",
               req_host => <<"localhost:4800">>,
               script => #{body => <<"[ {pool,\n    [\n        {size, 5},\n        {worker_type, simple_http_worker}\n    ],\n    [\n        {loop,\n            [\n                {time, {1, min}},\n                {rate, {10, rps}}\n            ],\n            [\n                {get, \"http://localhost:8080?x=12\"}\n            ]\n        }\n    ]\n}\n].\n">>,
                 filename => "43742310C2FD2E626D70E2A28FA85DDC4ED7253A.erl",
                 name => "simple_http.erl"},
               user_name => undefined,
               vm_args => [],
               worker_hosts => []}
12:25:03.008 [info] [ API ] [ MKDIR ] /tmp/mz/bench-100-1454675103
12:25:03.008 [info] [ API ] There's only one host, no need to make ntp check
12:25:03.008 [info] [ API ] NTP check result: ok
12:25:03.008 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;ps -ef | grep beam | grep -v grep | grep -v mzbench_api && ~/.local/share/mzbench/bin/mzbench stop; true " (<0.1713.0>)
12:25:03.046 [info] [ API ] [ EXEC ] OK in 37.295 ms (<0.1713.0>)
12:25:03.046 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;uname -sr " (<0.1715.0>)
12:25:03.057 [info] [ API ] [ EXEC ] OK in 10.722 ms (<0.1715.0>)
12:25:03.057 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;erl -noshell -eval 'io:fwrite(\"~s\", [erlang:system_info(version)]).' -s erlang halt " (<0.1716.0>)
12:25:03.175 [info] [ API ] [ EXEC ] OK in 118.36 ms (<0.1716.0>)
12:25:03.176 [info] [ API ] Missing tarballs: []
12:25:03.176 [info] [ API ] Uploading package node to localhost
12:25:03.176 [info] [ API ] [ COPY ] /Users/kmolchanov/.local/cache/mzbench_api/packages/node-3b07c4cd6af25a0ff70f22cde857c30277f3ebbd-darwin-15.3.0_erts-7.1.tgz -> /tmp/bench_mzbench_api_km_1454_675103_176026.tgz
12:25:03.200 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;mkdir -p /tmp/bench_mzbench_api_km_1454_675103_176053 && cd /tmp/bench_mzbench_api_km_1454_675103_176053 && tar xzf /tmp/bench_mzbench_api_km_1454_675103_176026.tgz " (<0.1719.0>)
12:25:03.481 [info] [ API ] [ EXEC ] OK in 281.129 ms (<0.1719.0>)
12:25:03.481 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;mkdir -p ~/.local/share && rsync -aW /tmp/bench_mzbench_api_km_1454_675103_176053/ ~/.local/share " (<0.1720.0>)
12:25:03.508 [info] [ API ] [ EXEC ] OK in 26.477 ms (<0.1720.0>)
12:25:03.508 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;rm -rf /tmp/bench_mzbench_api_km_1454_675103_176026.tgz; rm -rf /tmp/bench_mzbench_api_km_1454_675103_176053; true " (<0.1721.0>)
12:25:03.596 [info] [ API ] [ EXEC ] OK in 88.079 ms (<0.1721.0>)
12:25:03.597 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;hostname " (<0.1723.0>)
12:25:03.608 [info] [ API ] [ EXEC ] OK in 10.773 ms (<0.1723.0>)
12:25:03.608 [info] [ API ] Shortname for "localhost" are "km"
12:25:03.608 [info] [ API ] [ COPY ] /tmp/bench_mzbench_api_km_1454_675103_608323 -> /tmp/mz/bench-100-1454675103/vm.args
12:25:03.609 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;cd /tmp/mz/bench-100-1454675103 && ~/.local/share/mzbench/bin/mzbench start " (<0.1726.0>)
12:25:07.671 [info] [ API ] [ EXEC ] OK in 4061.902 ms (<0.1726.0>)
12:25:07.671 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;~/.local/share/mzbench/bin/wait_cluster_start.escript 30000 mzb_director100@km" (<0.1728.0>)
12:25:10.196 [info] [ API ] [ EXEC ] OK in 2524.939 ms (<0.1728.0>)
12:25:10.196 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;~/.local/share/mzbench/bin/nodetool -sname mzb_director100 rpcterms mzb_management_tcp_protocol get_port \"\"" (<0.1729.0>)
12:25:11.041 [info] [ API ] [ EXEC ] OK in 844.859 ms (<0.1729.0>)
12:25:11.041 [info] [ API ] Management port: 4802
12:25:11.043 [info] [ API ] Stage 'pipeline - provisioning': finished
12:25:11.043 [info] [ API ] Stage 'pipeline - starting_collectors': started
12:25:11.047 [info] [ API ] Log collector server: mzb_director100@km -> "localhost":4801
12:25:11.048 [info] [ API ] Stage 'pipeline - starting_collectors': finished
12:25:11.048 [info] [ API ] Stage 'pipeline - uploading_script': started
12:25:11.049 [info] [ API ] [ COPY ] /tmp/bench_mzbench_api_km_1454_675111_48580 -> /tmp/mz/bench-100-1454675103/environ.txt
12:25:11.050 [info] [ API ] [ COPY ] /tmp/bench_mzbench_api_km_1454_675111_49924 -> /tmp/mz/bench-100-1454675103/43742310C2FD2E626D70E2A28FA85DDC4ED7253A.erl
12:25:11.051 [info] [ API ] Stage 'pipeline - uploading_script': finished
12:25:11.051 [info] [ API ] Stage 'pipeline - uploading_includes': started
15:25:11.051 [info] <0.117.0> Started tcp lager backend for info #Port<0.3007>
15:25:11.051 [info] <0.105.0> Started tcp lager backend for info #Port<0.3007>
12:25:11.051 [info] [ API ] Stage 'pipeline - uploading_includes': finished
12:25:11.051 [info] [ API ] Stage 'pipeline - gathering_metric_names': started
15:25:11.979 [info] <0.200.0> signals graph: []
15:25:11.979 [info] <0.200.0> signals graph sccs: []
15:25:11.979 [info] <0.200.0> standalone signals: []
12:25:11.999 [info] [ API ] Stage 'pipeline - gathering_metric_names': finished
12:25:11.999 [info] [ API ] Stage 'pipeline - running': started
12:25:11.999 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;~/.local/share/mzbench/bin/run.escript mzb_director100@km /tmp/mz/bench-100-1454675103/43742310C2FD2E626D70E2A28FA85DDC4ED7253A.erl /tmp/mz/bench-100-1454675103/environ.txt" (<0.1779.0>)
15:25:12.878 [info] <0.204.0> signals graph: []
15:25:12.878 [info] <0.204.0> signals graph sccs: []
15:25:12.878 [info] <0.204.0> standalone signals: []
15:25:12.881 [info] <0.204.0> signals graph: []
15:25:12.881 [info] <0.204.0> signals graph sccs: []
15:25:12.881 [info] <0.204.0> standalone signals: []
15:25:12.882 [info] <0.204.0> [ mzb_bench_sup ] Loading "simple_http" Nodes: [mzb_director100@km]
15:25:12.885 [info] <0.205.0> [ director ] Bench name "simple_http", director node mzb_director100@km
15:25:12.885 [info] <0.205.0> [ director ] Pools: [{operation,false,pool,[[{operation,false,size,[5],[{function,size},{line,3},{pool_name,"pool1"}]},{operation,false,worker_type,[simple_http_worker],[{function,worker_type},{line,4},{pool_name,"pool1"}]}],[{operation,true,loop,[[{operation,false,time,[{constant,60000,ms,[]}],[{function,time},{line,9},{pool_name,"pool1"}]},{operation,false,rate,[{constant,10,rps,[]}],[{function,rate},{line,10},{pool_name,"pool1"}]}],[{operation,false,get,["http://localhost:8080?x=12"],[{function,get},{line,13},{pool_name,"pool1"}]}]],[{function,loop},{line,7},{pool_name,"pool1"}]}]],[{function,pool},{line,1},{pool_name,"pool1"}]}], Env: [{asserts,[]},{"nodes_num",1},{"bench_hosts",["km"]},{"bench_script_dir","/tmp/mz/bench-100-1454675103"},{"bench_workers_dir",["~/.local/share/mzbench_workers","../workers"]},{"nodes_num",1},{"bench_hosts",["km"]},{"bench_script_dir","/tmp/mz/bench-100-1454675103"},{"bench_workers_dir",["~/.local/share/mzbench_workers","../workers"]},{"mzb_script_name","simple_http.erl"}]
15:25:12.885 [info] <0.175.0> Nodes are: [mzb_director100@km]
15:25:12.889 [info] <0.206.0> Subscribing reporter mzb_exometer_report_apiserver to [{"http_ok",counter,#{worker => {mzb_erl_worker,simple_http_worker}}},{"http_fail",counter,#{worker => {mzb_erl_worker,simple_http_worker}}},{"other_fail",counter,#{worker => {mzb_erl_worker,simple_http_worker}}},{"http_ok.rps",gauge,#{rps => true,worker => {mzb_erl_worker,simple_http_worker}}},{"http_fail.rps",gauge,#{rps => true,worker => {mzb_erl_worker,simple_http_worker}}},{"other_fail.rps",gauge,#{rps => true,worker => {mzb_erl_worker,simple_http_worker}}},{"latency.min",gauge,#{worker => {mzb_erl_worker,simple_http_worker}}},{"latency.max",gauge,#{worker => {mzb_erl_worker,simple_http_worker}}},{"latency.mean",gauge,#{worker => {mzb_erl_worker,simple_http_worker}}},{"latency.50",gauge,#{worker => {mzb_erl_worker,simple_http_worker}}},{"latency.75",gauge,#{worker => {mzb_erl_worker,simple_http_worker}}},{"latency.90",gauge,#{worker => {mzb_erl_worker,simple_http_worker}}},{"latency.95",gauge,#{worker => {mzb_erl_worker,simple_http_worker}}},{"latency.99",gauge,#{worker => {mzb_erl_worker,simple_http_worker}}},{"latency.999",gauge,#{worker => {mzb_erl_worker,simple_http_worker}}},{"systemload.la1.km",gauge,#{}},{"systemload.cpu.km",gauge,#{}},{"systemload.ram.km",gauge,#{}},{"systemload.nettx.km",gauge,#{}},{"systemload.netrx.km",gauge,#{}},{"systemload.interval.km",gauge,#{}},{"workers.pool1.started",counter,#{}},{"workers.pool1.ended",counter,#{}},{"workers.pool1.failed",counter,#{}},{"workers.pool1.started.rps",gauge,#{rps => true}},{"workers.pool1.ended.rps",gauge,#{rps => true}},{"workers.pool1.failed.rps",gauge,#{rps => true}},{"metric_merging_time",gauge,#{}},{"errors",counter,#{}},{"errors.rps",gauge,#{rps => true}},{"logs.written",counter,#{}},{"logs.dropped.mailbox_overflow",counter,#{}},{"logs.dropped.rate_limiter",counter,#{}},{"logs.written.rps",gauge,#{rps => true}},{"logs.dropped.mailbox_overflow.rps",gauge,#{rps => true}},{"logs.dropped.rate_limiter.rps",gauge,#{rps => true}},{"systemload.message_queue.km",gauge,#{}},{"systemload.process_count.km",gauge,#{}}].
15:25:12.894 [info] <0.205.0> Generating a module for vars: []
15:25:12.895 [info] <0.210.0> Loading mzb_compiled_vars module on mzb_director100@km...
15:25:12.898 [info] <0.212.0> Size, PerNode, Size2, Offset, NumNodes: 5, undefined, 5, 1, 1
15:25:12.898 [info] <0.212.0> Reading configuration from ~/.local/share/mzbench_workers/simple_http_worker/sys.config
15:25:12.906 [info] <0.7.0> Application idna started on node mzb_director100@km
15:25:12.915 [info] <0.7.0> Application hackney started on node mzb_director100@km
15:25:12.915 [info] <0.212.0> Worker offsets: [0,1,2,3,4]
15:25:12.915 [info] <0.7.0> Application simple_http_worker started on node mzb_director100@km
15:25:12.915 [info] <0.212.0> Starting worker on mzb_director100@km no 1
15:25:12.915 [info] <0.212.0> Starting worker on mzb_director100@km no 2
15:25:12.915 [info] <0.205.0> Start pool results: [{ok,<0.212.0>}]
15:25:12.915 [info] <0.205.0> [ director ] Started all pools
15:25:12.915 [info] <0.212.0> Starting worker on mzb_director100@km no 3
15:25:12.915 [info] <0.212.0> Starting remaining workers...
15:25:12.915 [info] <0.212.0> Starting worker on mzb_director100@km no 5
15:25:12.923 [error] <0.212.0> Worker <0.225.0> on mzb_director100@km has crashed: badarg
Stacktrace: [{mzb_erl_worker,terminate,[{error,badarg,[{erlang,list_to_integer,["8080?x=12"],[]},{hackney_url,parse_netloc,2,[{file,"/private/tmp/bench_mzbench_api_km_1447_760220_908222/deployment_code/workers/http/_build/default/deps/hackney/src/hackney_lib/hackney_url.erl"},{line,184}]},{hackney,request,5,[{file,"/private/tmp/bench_mzbench_api_km_1447_760220_908222/deployment_code/workers/http/_build/default/deps/hackney/src/hackney_client/hackney.erl"},{line,330}]},{simple_http_worker,get,3,[{file,"src/simple_http_worker.erl"},{line,26}]},{mzb_erl_worker,apply,4,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,52}]},{mzbl_interpreter,'-eval/4-fun-0-',4,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_interpreter.erl"},{line,19}]},{lists,foldl,3,[{file,"lists.erl"},{line,1262}]},{mzbl_loop,timerun,10,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_loop.erl"},{line,243}]}]},undefined],[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,55}]},{mzb_worker_runner,run_worker_script,5,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_worker_runner.erl"},{line,29}]}]
15:25:12.923 [error] <0.212.0> Worker <0.222.0> on mzb_director100@km has crashed: badarg
Stacktrace: [{mzb_erl_worker,terminate,[{error,badarg,[{erlang,list_to_integer,["8080?x=12"],[]},{hackney_url,parse_netloc,2,[{file,"/private/tmp/bench_mzbench_api_km_1447_760220_908222/deployment_code/workers/http/_build/default/deps/hackney/src/hackney_lib/hackney_url.erl"},{line,184}]},{hackney,request,5,[{file,"/private/tmp/bench_mzbench_api_km_1447_760220_908222/deployment_code/workers/http/_build/default/deps/hackney/src/hackney_client/hackney.erl"},{line,330}]},{simple_http_worker,get,3,[{file,"src/simple_http_worker.erl"},{line,26}]},{mzb_erl_worker,apply,4,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,52}]},{mzbl_interpreter,'-eval/4-fun-0-',4,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_interpreter.erl"},{line,19}]},{lists,foldl,3,[{file,"lists.erl"},{line,1262}]},{mzbl_loop,timerun,10,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_loop.erl"},{line,243}]}]},undefined],[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,55}]},{mzb_worker_runner,run_worker_script,5,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_worker_runner.erl"},{line,29}]}]
15:25:12.923 [error] <0.212.0> Worker <0.223.0> on mzb_director100@km has crashed: badarg
Stacktrace: [{mzb_erl_worker,terminate,[{error,badarg,[{erlang,list_to_integer,["8080?x=12"],[]},{hackney_url,parse_netloc,2,[{file,"/private/tmp/bench_mzbench_api_km_1447_760220_908222/deployment_code/workers/http/_build/default/deps/hackney/src/hackney_lib/hackney_url.erl"},{line,184}]},{hackney,request,5,[{file,"/private/tmp/bench_mzbench_api_km_1447_760220_908222/deployment_code/workers/http/_build/default/deps/hackney/src/hackney_client/hackney.erl"},{line,330}]},{simple_http_worker,get,3,[{file,"src/simple_http_worker.erl"},{line,26}]},{mzb_erl_worker,apply,4,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,52}]},{mzbl_interpreter,'-eval/4-fun-0-',4,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_interpreter.erl"},{line,19}]},{lists,foldl,3,[{file,"lists.erl"},{line,1262}]},{mzbl_loop,timerun,10,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_loop.erl"},{line,243}]}]},undefined],[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,55}]},{mzb_worker_runner,run_worker_script,5,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_worker_runner.erl"},{line,29}]}]
15:25:12.923 [error] <0.212.0> Worker <0.224.0> on mzb_director100@km has crashed: badarg
Stacktrace: [{mzb_erl_worker,terminate,[{error,badarg,[{erlang,list_to_integer,["8080?x=12"],[]},{hackney_url,parse_netloc,2,[{file,"/private/tmp/bench_mzbench_api_km_1447_760220_908222/deployment_code/workers/http/_build/default/deps/hackney/src/hackney_lib/hackney_url.erl"},{line,184}]},{hackney,request,5,[{file,"/private/tmp/bench_mzbench_api_km_1447_760220_908222/deployment_code/workers/http/_build/default/deps/hackney/src/hackney_client/hackney.erl"},{line,330}]},{simple_http_worker,get,3,[{file,"src/simple_http_worker.erl"},{line,26}]},{mzb_erl_worker,apply,4,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,52}]},{mzbl_interpreter,'-eval/4-fun-0-',4,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_interpreter.erl"},{line,19}]},{lists,foldl,3,[{file,"lists.erl"},{line,1262}]},{mzbl_loop,timerun,10,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_loop.erl"},{line,243}]}]},undefined],[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,55}]},{mzb_worker_runner,run_worker_script,5,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_worker_runner.erl"},{line,29}]}]
15:25:12.924 [error] <0.212.0> Worker <0.226.0> on mzb_director100@km has crashed: badarg
Stacktrace: [{mzb_erl_worker,terminate,[{error,badarg,[{erlang,list_to_integer,["8080?x=12"],[]},{hackney_url,parse_netloc,2,[{file,"/private/tmp/bench_mzbench_api_km_1447_760220_908222/deployment_code/workers/http/_build/default/deps/hackney/src/hackney_lib/hackney_url.erl"},{line,184}]},{hackney,request,5,[{file,"/private/tmp/bench_mzbench_api_km_1447_760220_908222/deployment_code/workers/http/_build/default/deps/hackney/src/hackney_client/hackney.erl"},{line,330}]},{simple_http_worker,get,3,[{file,"src/simple_http_worker.erl"},{line,26}]},{mzb_erl_worker,apply,4,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,52}]},{mzbl_interpreter,'-eval/4-fun-0-',4,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_interpreter.erl"},{line,19}]},{lists,foldl,3,[{file,"lists.erl"},{line,1262}]},{mzbl_loop,timerun,10,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_loop.erl"},{line,243}]}]},undefined],[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,55}]},{mzb_worker_runner,run_worker_script,5,[{file,"/private/tmp/bench_mzbench_api_km_1454_672038_482705/deployment_code/node/_build/default/deps/mzbench/src/mzb_worker_runner.erl"},{line,29}]}]
15:25:12.924 [info] <0.212.0> [ "pool1" ] All workers have finished
15:25:12.924 [info] <0.206.0> [ metrics ] METRIC AGGREGATION:
15:25:12.924 [info] <0.227.0> [ metrics ] Waiting for metrics from mzb_director100@km...
15:25:12.924 [info] <0.227.0> [ local_metrics ] Getting local metric values on mzb_director100@km...
15:25:12.924 [info] <0.227.0> [ local_metrics ] Got 20 metrics on mzb_director100@km
15:25:12.924 [info] <0.227.0> [ metrics ] Received metrics from mzb_director100@km
15:25:12.925 [info] <0.206.0> [ metrics ] Updating metric values in exometer...
15:25:12.925 [info] <0.206.0> [ metrics ] Evaluating rates...
15:25:12.925 [info] <0.206.0> [ metrics ] Evaluating derived metrics...
15:25:12.925 [info] <0.206.0> [ metrics ] Current metrics values:
errors = 1
errors.rps = 0
http_fail = 0
http_fail.rps = 0
http_ok = 0
http_ok.rps = 0
latency.50 = 0.0
latency.75 = 0.0
latency.90 = 0.0
latency.95 = 0.0
latency.99 = 0.0
latency.999 = 0.0
latency.max = 0
latency.mean = 0.0
latency.min = 0
logs.dropped.mailbox_overflow = 0
logs.dropped.mailbox_overflow.rps = 0
logs.dropped.rate_limiter = 0
logs.dropped.rate_limiter.rps = 0
logs.written = 32
logs.written.rps = 0
metric_merging_time = 0.918
other_fail = 0
other_fail.rps = 0
systemload.cpu.km = 0
systemload.interval.km = 0
systemload.la1.km = 0
systemload.message_queue.km = 0
systemload.netrx.km = 0
systemload.nettx.km = 0
systemload.process_count.km = 0
systemload.ram.km = 0
workers.pool1.ended = 5
workers.pool1.ended.rps = 0
workers.pool1.failed = 5
workers.pool1.failed.rps = 0
workers.pool1.started = 5
workers.pool1.started.rps = 0
15:25:12.925 [info] <0.206.0> [ metrics ] CHECK ASSERTIONS:
15:25:12.925 [info] <0.206.0> Current assertions:
(empty)
15:25:12.925 [info] <0.206.0> [ metrics ] CHECK SIGNALS:
15:25:12.925 [info] <0.228.0> [ metrics ] Reading signals from mzb_director100@km...
15:25:12.925 [info] <0.228.0> [ metrics ] Received signals from mzb_director100@km
15:25:12.925 [info] <0.206.0> List of currently registered signals:

15:25:15.927 [info] <0.205.0> [ director ] All pools have finished, stopping mzb_director_sup <0.174.0>
15:25:15.927 [info] <0.205.0> [ director ] Succeed/Failed workers = 0/5
15:25:15.927 [info] <0.205.0> [ director ] Reporting benchmark results to {<0.204.0>,#Ref<0.0.5.614>}
15:25:15.928 [info] <0.230.0> [ mzb_bench_sup ] I'm at <0.230.0>
15:25:15.928 [info] <0.231.0> Signal server has been started
12:25:15.932 [error] [ API ] [ EXEC ] Command execution failed in 3932.866 ms
Cmd: bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;~/.local/share/mzbench/bin/run.escript mzb_director100@km /tmp/mz/bench-100-1454675103/43742310C2FD2E626D70E2A28FA85DDC4ED7253A.erl /tmp/mz/bench-100-1454675103/environ.txt"
Exit code: 1
Output: FAILED
5 of 5 workers failed
12:25:15.932 [error] [ API ] Stage 'pipeline - running': failed
Command returned 1:
 bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;~/.local/share/mzbench/bin/run.escript mzb_director100@km /tmp/mz/bench-100-1454675103/43742310C2FD2E626D70E2A28FA85DDC4ED7253A.erl /tmp/mz/bench-100-1454675103/environ.txt"
Command output: FAILED
5 of 5 workers failed
12:25:15.932 [info] [ API ] Bench final: failed
12:25:15.932 [info] [ API ] Stage 'finalize - saving_bench_results': started
12:25:15.933 [info] [ API ] Stage 'finalize - saving_bench_results': finished
12:25:15.933 [info] [ API ] Stage 'finalize - sending_email_report': started
12:25:15.941 [info] [ API ] Stage 'finalize - sending_email_report': finished
12:25:15.941 [info] [ API ] Stage 'finalize - stopping_collectors': started
12:25:15.943 [info] [ API ] Stage 'finalize - stopping_collectors': finished
12:25:15.943 [info] [ API ] Stage 'finalize - cleaning_nodes': started
12:25:15.943 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;cd /tmp/mz/bench-100-1454675103; ~/.local/share/mzbench/bin/mzbench stop; true " (<0.1837.0>)
12:25:19.538 [info] [ API ] [ EXEC ] OK in 3594.743 ms (<0.1837.0>)
12:25:19.538 [info] [ API ] [ EXEC ] bash -c "export PATH='/Users/kmolchanov/.local/share/virtualenvs/mzbench/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin'; source /etc/profile;rm -rf /tmp/mz/bench-100-1454675103 " (<0.1839.0>)
12:25:19.550 [info] [ API ] [ EXEC ] OK in 11.989 ms (<0.1839.0>)
12:25:19.550 [info] [ API ] Stage 'finalize - cleaning_nodes': finished
12:25:19.550 [info] [ API ] Stage 'finalize - deallocating_hosts': started
12:25:19.550 [info] [ API ] Deallocator has started
12:25:19.550 [info] [ API ] Stage 'finalize - deallocating_hosts': finished

Benchmark has failed on gathering_metric_names with reason 'Couldn't get module info'

Hi

can anyone point me in the right direction here?

Thanks! :)

16:02:51.726 [info] [ API ] <0.572.0> Stage 'pipeline - gathering_metric_names': started
17:02:51.727 [info] <0.196.0> Received request: {metric_names,"/tmp/mz/bench-54-1457366567/E9894AE9759785F5E72419F0DEBE90E145C6E535.erl",[{<<"mzb_script_name">>,<<"mqtt1.erl">>}]}
17:02:51.729 [info] <0.118.0> Started tcp lager backend for info #Port<0.2928>
17:02:51.729 [info] <0.103.0> Started tcp lager backend for info #Port<0.2928>
16:02:51.753 [error] [ API ] <0.604.0> Can't get metrics info: ["Script /tmp/mz/bench-54-1457366567/E9894AE9759785F5E72419F0DEBE90E145C6E535.erl is invalid:\n",
                                                                "pool1: Couldn't get module info for mqtt_pub_worker"]
16:02:51.754 [error] [ API ] <0.572.0> Stage 'pipeline - gathering_metric_names': failed
Benchmark has failed on gathering_metric_names with reason:
{get_metrics_error,["Script /tmp/mz/bench-54-1457366567/E9894AE9759785F5E72419F0DEBE90E145C6E535.erl is invalid:\n",
                    "pool1: Couldn't get module info for mqtt_pub_worker"]}

Stacktrace: [{mzb_api_bench,handle_stage,3,
                            [{file,"/home/afa/Erlang/machinezone/mzbench/server/_build/default/deps/mzbench_api/src/mzb_api_bench.erl"},
                             {line,231}]},
             {mzb_pipeline,'-handle_cast/2-fun-0-',6,
                           [{file,"/home/afa/Erlang/machinezone/mzbench/server/_build/default/deps/mzbench_api/src/mzb_pipeline.erl"},
                            {line,165}]}]

Problem viewing logs in web frontend

Hi,
I just updated mzbench, and there's some minor trouble with logs in the web frontend.
For new tests, I can't seem to see user logs, and
for older tests, the frontend tries to load the logs, but just keeps trying forever.

Should I clear all logs somehow? What has changed exactly?

Install a server and a worker in an isolated environment

Hi guys,

it would be great not to mess up my current environment with your Erlang version.
I usually work with Erlang 18, so I can't have your server and a worker on the same node. :(
And moreover, I have Python 3+ there.

What about having a virtual environment for your Python dependencies and an Erlang release for everything else?
Together with my request #24, it would help avoid installing a C++ compiler.

How to test a custom worker

I'm following the tutorial on writing a custom worker (http://machinezone.github.io/mzbench/workers/#how-to-write-a-worker). I created a new worker from the tcp template with this command:
bin/mzbench new_worker --template tcp pingpong

To test this, I ran:
bin/mzbench run_local pingpong/examples/pingpong.erl

However, I'm getting an error:
pool1: Couldn't get module info for pingpong_worker
pool2: Couldn't get module info for pingpong_worker

I assume I'm missing the make_install directive in pingpong/examples/pingpong.erl. However, I'm not sure how to point it at the local dir where my test worker was created (./pingpong) instead of a git repo:
{make_install, [{git, "<url>"}, {branch, "<branch>"}, {dir, "<worker directory>"}]}

Using self-compiled erlang does not work

Hi, I ran into an issue where mzbench could not find my self-compiled Erlang, which is only available through my user PATH variable. Unfortunately, the value of the user PATH is discarded because of the source /etc/profile here:
https://github.com/machinezone/mzbench/blob/master/common_apps/mzbench_utils/src/mzb_subprocess.erl#L21
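
To illustrate what I think is happening (a simplified sketch of my own understanding, not the actual mzbench code): the command is wrapped so that /etc/profile is sourced after PATH has been exported, and on my system that profile sets PATH back to the system default.

%% Simplified sketch, not the real mzb_subprocess code, and the PATH value
%% here is made up. The point is that sourcing /etc/profile after the
%% export resets PATH, so the Erlang under my home directory is lost.
wrap_cmd_sketch(Cmd) ->
    "bash -c \"export PATH='/home/me/otp-17/bin:/usr/bin:/bin'; "
    "source /etc/profile; " ++ Cmd ++ "\"".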

I made a hack where I removed the source /etc/profile which makes it work on my system:
larshesel@b9b40ed

This is probably not the correct fix, and of course there's a good chance that I'm just doing something wrong in the first place.

Tutorial about a custom worker

Hi guys,

one day I needed a benchmark framework that makes it as easy as possible to write custom workers.
I compared several solutions like Tsung and so on. Besides the feature list, my second question was "is it easy to use this framework?", so I spent several hours creating my own custom worker to find out.
It would be great to have a simple tutorial on how to write and use your own worker from scratch, because that seems to be the killer feature compared with other solutions.

Workers can't be gen_servers?

Hi,

from a quick look at the respective mzbench modules, worker processes can't be gen_servers, right?

Just want to be sure. Sorry for asking basic questions.
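
For what it's worth, here is my rough mental model of a worker, pieced together from the dummy_worker example, so it's only a sketch of my understanding and not something from the docs: statements are plain exported functions that receive and return the worker state, and no gen_server behaviour seems to be involved.

%% Hypothetical module name, just to illustrate my understanding of the
%% worker contract; I'm not certain the metrics format below is exactly
%% what every mzbench version expects.
-module(guess_worker).
-export([initial_state/0, metrics/0, say/3]).

initial_state() -> [].

metrics() -> [{"say", counter}].

%% Called for a {say, "hello"} statement in the scenario; the state is
%% threaded explicitly and a {Result, NewState} tuple is returned.
say(State, _Meta, Text) ->
    io:format("~s~n", [Text]),
    {nil, State}.

If that's accurate, I assume any gen_server I need would have to be started and supervised separately from the worker statement functions?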

A URL to a specific graph

It would be great if graphs in the dashboard could be referenced by direct URLs. This would be helpful when showing someone the results of a benchmark.

Provide Local Install / Execution Options

While requiring sudo access is tolerable (though not ideal in general) on my local laptop, as a user I don't necessarily have sudo access on the sorts of machines I would like to run my benchmarks on.

Installing / running the server shouldn't require sudo privs.

"bad_hdr_binary"

11:03:27.239 [error] [ API ] <0.3224.0> Stage 'pipeline - running': failed
Benchmark has failed on running with reason:
badarg

Stacktrace: [{hdr_histogram,from_binary,[{error,bad_hdr_binary}],[]},
             {mzb_api_bench,'-aggregate_results/3-fun-4-',4,
                            [{file,"/home/afa/Erlang/machinezone/mzbench/server/_build/default/deps/mzbench_api/src/mzb_api_bench.erl"},
                             {line,943}]},
             {lists,flatmap,2,[{file,"lists.erl"},{line,1249}]},
             {mzb_api_bench,aggregate_results,3,
                            [{file,"/home/afa/Erlang/machinezone/mzbench/server/_build/default/deps/mzbench_api/src/mzb_api_bench.erl"},
                             {line,925}]},
             {mzb_api_bench,handle_stage,3,
                            [{file,"/home/afa/Erlang/machinezone/mzbench/server/_build/default/deps/mzbench_api/src/mzb_api_bench.erl"},
                             {line,293}]},
             {mzb_pipeline,'-handle_cast/2-fun-0-',6,
                           [{file,"/home/afa/Erlang/machinezone/mzbench/server/_build/default/deps/mzbench_api/src/mzb_pipeline.erl"},
                            {line,172}]}]

Seen with a simple test run. Is it just me? ;)
The actual test runs through; this error shows up basically at the end of the test.

Upper pool size limit?

Is there an upper limit on the size of a single pool?
If not, is it bounded by something?

I just tried to simplify a test by merging a couple of pools and increasing the pool size, but one pool only seems to give me 12500.
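
For reference, this is roughly the shape of the merged pool declaration; the size and worker type are only illustrative, with dummy_worker standing in for my real worker:

[{pool, [{size, 25000},
         {worker_type, dummy_worker}],
    [{print, "hello"}]}].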

Might be doing something wrong, of course.

Thanks :)

xmerl missing for the workers

I'm actually unsure how to deploy a worker that uses core OTP applications like xmerl. I can start it locally, but when it's launched through mzbench I get the following error:

{error,{xmerl,{"no such file or directory","xmerl.app"}}}}

It seems that mzbench is using a custom erts image:

admin@server-1:~/mzbench$ ls ~/.local/cache/mzbench_api/packages/
mzbench_oncam-040abba-linux-3.13.0-48-generic_erts-7.3.tgz           mzbench_oncam-a6a3762-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-040abba-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-a9e5032-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-22b0a36-linux-3.13.0-48-generic_erts-7.3.tgz           mzbench_oncam-b0278ca-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-22b0a36-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-b0278ca-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-23200ed-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-b2df391-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-310796e-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-b2df391-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-324db9d-linux-3.13.0-48-generic_erts-7.3.tgz           mzbench_oncam-c1ecf76-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-324db9d-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-c1ecf76-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-36fbed3-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-c4cb1f1-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-38bd6c5-linux-3.13.0-48-generic_erts-7.3.tgz           mzbench_oncam-c970442-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-38bd6c5-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-c970442-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-431937d-linux-3.13.0-48-generic_erts-7.3.tgz           mzbench_oncam-ca365c9-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-431937d-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-ca365c9-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-4363c70-linux-3.13.0-48-generic_erts-7.3.tgz           mzbench_oncam-dff4634-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-4363c70-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-eaef650-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-4f70e3b-linux-3.13.0-48-generic_erts-7.3.tgz           mzbench_oncam-eaef650-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-4f70e3b-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-eb3e41f-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-5976165-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-eb3e41f-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-59938db-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-f198956-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-6129b8b-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-f198956-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-7a0aa71-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-f7ca1a5-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-808e896-linux-3.13.0-48-generic_erts-7.3.tgz           mzbench_oncam-f7ca1a5-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-808e896-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  mzbench_oncam-ff883bc-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-9f7b005-linux-3.13.0-48-generic_erts-7.3.tgz           mzbench_oncam-ff883bc-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz
mzbench_oncam-9f7b005-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz  node-06d13c618526b11792d3bb2c2e8f5d275943cfaa-linux-3.13.0-48-generic_erts-7.3.tgz
mzbench_oncam-9fd266c-linux-3.13.0-48-generic_erts-7.3.tgz           node-06d13c618526b11792d3bb2c2e8f5d275943cfaa-linux-3.14.48-33.39.amzn1.x86_64_erts-7.3.tgz

but how is that set up? How can I have a worker that uses xmerl?
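
One idea I had, though I haven't verified it against mzbench's packaging code: declare xmerl as a runtime dependency of the worker application, so that whatever assembles the node release pulls it in. The file and application names below are only placeholders for my actual worker:

%% src/my_worker.app.src (hypothetical) -- adding xmerl to the applications
%% list is only my guess at how to get it included in the generated release.
{application, my_worker,
 [{description, "benchmark worker"},
  {vsn, "0.1.0"},
  {registered, []},
  {applications, [kernel, stdlib, xmerl]},
  {env, []}]}.

Would that be the right direction, or does the release generation need something else?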

Can't provision EC2 nodes due to this permission issue

Hi,

This is not directly an MZBench issue... but maybe you guys have seen it?
What I want to do is run the dashboard on my laptop and connect to EC2 workers, but the provisioning fails with this kind of error.

16:28:23.316 [error] [ API ] <0.10341.0> [ EXEC ] Command execution failed in 1931.868 ms
Cmd: ssh -A -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no [email protected] "source /etc/profile; mkdir -p /tmp/mz/bench-16-1462811172"
Exit code: 255
Output: Warning: Permanently added 'ec2-52-some-IP.us-west-2.compute.amazonaws.com,52.some.IP' (ECDSA) to the list of known hosts.
Permission denied (publickey).

Mind you, I'm kind of an Amazon newbie ;)
