
loads's Introduction

Loads

Warning

This is an old version of Loads. Don't use it. To get started with the new version, you can look at https://github.com/loads

Loads is a framework for load testing an HTTP service.

Logo by Juan Pablo Bravo: https://loads.readthedocs.io/en/latest/_images/logo.jpg

Installation:

$ bin/pip install loads

See the docs at http://loads.readthedocs.io


loads's People

Contributors

almet, diyan, entequak, matrixise, mjpieters, natim, rfk, rodo, sibson, tarekziade


loads's Issues

simplify configuration

Right now we need to do things like:

$ loads-agent --backend tcp://some-ip:7777 --heartbeat tcp://some-ip:7778 --register tcp://some-ip:7779 --broker-push tcp://some-ip:7781

This is complex. We should just do:

$ loads-agent --broker tcp://some-ip:7777

and let the agent ask the broker for the full config.
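A minimal sketch of what that handshake could look like on the agent side, using pyzmq's REQ/REP pattern; the GET_CONFIG command and the keys in the reply are assumptions for illustration, not the actual loads protocol:

# Hypothetical sketch: the agent connects to a single broker endpoint
# and asks it for the rest of the configuration.
import zmq

def fetch_agent_config(broker_endpoint='tcp://some-ip:7777'):
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REQ)
    sock.connect(broker_endpoint)
    sock.send_json({'command': 'GET_CONFIG'})   # assumed command name
    config = sock.recv_json()   # e.g. {'heartbeat': 'tcp://...', 'register': 'tcp://...'}
    sock.close()
    return config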

Tests are hanging in some specific environments on SciLinux

I installed Loads from GitHub on SciLinux 6.3.
I started up "make tests" and appear to "hang" on the following test:
test_success (loads.tests.test_cluster.TestCluster)

I don't see any obvious errors.
Here are the procs running:
make test
/home/mozroot/loads/bin/python /home/mozroot/loads/bin/nosetests -s -d -v --cover-html --cover-html-dir=html --with-coverage --cover-erase --cover-package loads loads/tests
/home/mozroot/loads/bin/python -m loads.transport.broker --logfile stdout --frontend ipc:///tmp/f-tests-cluster --backend ipc:///tmp/b-tests-cluster --heartbeat ipc:///tmp/h-tests-cluster
/home/mozroot/loads/bin/python -m loads.transport.agent --logfile stdout --backend ipc:///tmp/b-tests-cluster --heartbeat ipc:///tmp/h-tests-cluster --timeout 1.0 --max-age -1 --max-age-delta 0

I see that this proc (... --backend ...) continuously updates, but the other two show the original start time of the tests (which is now more than 8 minutes ago).

Not sure why this "hang" condition is different from my failures on Mac and @ametaireau's success on Ubuntu.

I will try that OS next.

AssertionError in callback does not display as 'failure' in report

  1. Run a test that does this:

    def test_reg(self):
        uaid = self._get_uaid("")

        def callback(m):
            data = json.loads(m.data)
            self.assertIn('status', data.keys())
            self.assertIn(200, data.values())

        ws = self.create_ws('ws://localhost:8080',
                            callback=callback)

  2. Run: loads-runner loads.examples.test_pushgo.TestPushgo.test_reg -c 150 -u 1

result:

AssertionError: 200 not found in [500, u'LKLHLX0OFYR21NWV', u'abc', u'register', u'An unexpected error occurred']
<Greenlet at 0x1015d6cd0: <bound method WebSocketHook.run of <loads.websockets.WebSocketHook object at 0x1015faa50>>> failed with AssertionError
Hits: 0
Started: 2013-07-10 20:36:34.376760
Duration: 27.56 seconds
Approximate Average RPS: 0
Average request time: 0.00s
Opened web sockets: 149
Bytes received via web sockets : 41782

Success: 150
Errors: 0
Failures: 0

expected:
Success: 149
Failures: 1
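A minimal sketch of the kind of handling that would make this count as a failure: wrapping the user callback so assertion errors are recorded on the test result instead of escaping the greenlet. The function and attribute names here are illustrative, not the actual loads internals:

import sys

def run_callback(callback, message, test, test_result):
    # Hypothetical wrapper around the websocket callback.
    try:
        callback(message)
    except AssertionError:
        test_result.addFailure(test, sys.exc_info())
    except Exception:
        test_result.addError(test, sys.exc_info())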

Ran into fatal error building Loads on Fedora 18 and Ubuntu 12.10

This went much better than CentOS/SciLinux in terms of pre-reqs. I installed the following: python-setuptools, get-pip.py, libev-devel, python-zmq, zeromq-devel, virtualenv, and git

$ git clone git://github.com/mozilla-services/loads
$ cd loads
$ make build
This is what I see after cython is downloaded and built:

...etc...
creating build/temp.linux-x86_64-2.7/home/mozroot/loads/build/cython

creating build/temp.linux-x86_64-2.7/home/mozroot/loads/build/cython/Cython

creating build/temp.linux-x86_64-2.7/home/mozroot/loads/build/cython/Cython/Plex

gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.7 -c /home/mozroot/loads/build/cython/Cython/Plex/Scanners.c -o build/temp.linux-x86_64-2.7/home/mozroot/loads/build/cython/Cython/Plex/Scanners.o

/home/mozroot/loads/build/cython/Cython/Plex/Scanners.c:8:22: fatal error: pyconfig.h: No such file or directory

compilation terminated.

error: command 'gcc' failed with exit status 1


Command /home/mozroot/loads/bin/python -c "import setuptools;__file__='/home/mozroot/loads/build/cython/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-VaO0lV-record/install-record.txt --single-version-externally-managed --install-headers /home/mozroot/loads/include/site/python2.7 failed with error code 1 in /home/mozroot/loads/build/cython
Storing complete log in /home/mozroot/.pip/pip.log
make: *** [build] Error 1

Be consistent in the naming of agents / workers

loads-agent was first named loads-workers; there are still some places where they're called "workers". But we also use the "worker" concept in the agent itself, which sometimes makes it hard to understand what's going on.

Let's rename:

  • agents are the processes started by the loads-agent command
  • workers are the processes started by the agents themselves, to run the tests.

Rename everything to "agent" outside of the agent.py code, and use "workers" inside the agent code.

NotImplementedError: errors

tarek:pushgo tarek$ ../../../loads/bin/loads-runner --config loads.ini --attach
[=============================================================================================] 100%
Duration: 136.77 seconds
Hits: 0
Started: 2013-08-05 16:49:12.619494
Approximate Average RPS: 0
Average request time: 0.00s
Opened web sockets: -22
Bytes received via web sockets : 957707

Success: 903
Errors: 9583
Failures: 0

Traceback (most recent call last):
  File "../../../loads/bin/loads-runner", line 9, in <module>
    load_entry_point('loads==0.1.0', 'console_scripts', 'loads-runner')()
  File "/Users/tarek/Dev/github.com/loads/loads/main.py", line 254, in main
    res = run(args)
  File "/Users/tarek/Dev/github.com/loads/loads/main.py", line 75, in run
    return runner.attach(run_id, started, counts, metadata)
  File "/Users/tarek/Dev/github.com/loads/loads/distributed.py", line 178, in attach
    self.flush()
  File "/Users/tarek/Dev/github.com/loads/loads/runner.py", line 253, in flush
    output.flush()
  File "/Users/tarek/Dev/github.com/loads/loads/output/std.py", line 56, in flush
    self._print_tb(self.results.errors)
  File "/Users/tarek/Dev/github.com/loads/loads/test_result.py", line 297, in __getattribute__
    raise NotImplementedError(name)
NotImplementedError: errors

agent restart strategy

When the broker restarts, the agents might not notice that it was restarted. The effect is that the agents are no longer registered with the broker.

The solution is to send a specific flag in the BEAT message asking the agents listening to the heartbeat to register again.
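A rough sketch of that idea on the agent side, assuming the BEAT payload is a small JSON message carrying a hypothetical re_register flag (the field names are not the real protocol):

import json

def handle_beat(raw_message, agent):
    # Hypothetical handler for the heartbeat message.
    beat = json.loads(raw_message)
    if beat.get('re_register'):
        # The broker was restarted and lost its registry:
        # announce ourselves again.
        agent.register()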

output exceptions hide previous ones.

In the code, we currently have something like:

try:
    # do some stuff that will eventually crash
finally:
    # flush the outputs

But if something goes wrong in the outputs, we don't get the original exception, which is probably the important one. We should wrap the output flush in a try/except and make sure the appropriate exception is re-raised.
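A minimal sketch of the intended fix, in Python 2 style since that is what the project targets; run_test and flush_outputs are placeholders for the real calls:

import sys
import traceback

def run_and_flush(run_test, flush_outputs):
    # Always flush, but never let a flush error mask the exception
    # raised by the test run itself.
    try:
        run_test()
    except Exception:
        exc_type, exc_value, exc_tb = sys.exc_info()
        try:
            flush_outputs()
        except Exception:
            traceback.print_exc()   # secondary failure: just log it
        raise exc_type, exc_value, exc_tb
    else:
        flush_outputs()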

Split test_result

test_result is used both to collect the results from unittest and to simplify access to them. It would be better to have this provided by two different classes so that we separate the concerns.

Attempt to run unit tests on Mac gives me errors on ~/loads/loads/runner.py

I installed loads with git clone (I have all the pre-reqs as far as I can tell, after talking to @ametaireau).

I ran "make tests", but the sequence always appears to "hang" or fail on this test:
test_distributed_run (loads.tests.test_functional.DistributedFunctionalTest)

So, with @ametaireau's advice, I do this:
In one terminal I run this:
bin/circusd conf/loads.ini

In another terminal I run this:
bin/loads-runner loads.examples.test_blog.TestWebSite.test_something -a2 -c10

On the loads.ini terminal I see this traceback:
http://jbonacci.pastebin.mozilla.org/2569404

On the test terminal I see this traceback:
http://jbonacci.pastebin.mozilla.org/2569405

I tried installing gevent-dev to get around this, but I still see the "hang" condition and errors...

Import Error running loads-runner on Fedora 18

I got both the GitHub repo and the loads tools to install successfully.
The loads libs/files get installed here by default:
/usr/lib/python2.7/site-packages/loads
And the apps are here by default:
/usr/bin/loads*

Compare that to Ubuntu:
/usr/local/lib/python2.7/dist-packages/loads
/usr/local/bin/loads*
And Mac:
/Library/Python/2.7/site-packages/loads
/usr/local/bin/loads*

On Fedora if I run:
$ loads-runner
I see this:
Traceback (most recent call last):
  File "/usr/bin/loads-runner", line 9, in <module>
    load_entry_point('loads==0.1.0', 'console_scripts', 'loads-runner')()
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 337, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2311, in load_entry_point
    return ep.load()
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2017, in load
    entry = __import__(self.module_name, globals(), globals(), ['__name__'])
  File "/usr/lib/python2.7/site-packages/loads/main.py", line 15, in <module>
    from loads.distributed import DistributedRunner
  File "/usr/lib/python2.7/site-packages/loads/distributed.py", line 4, in <module>
    from zmq.green.eventloop import ioloop, zmqstream
ImportError: No module named eventloop

libev error when running make build on mac

I'm on Mac OS X 10.8.

  1. run: make build

actual: see error below

$ ~/github/mozilla-services/loads$ make build
/Users/Edwin/github/mozilla-services/loads/bin/pip install cython
Requirement already satisfied (use --upgrade to upgrade): cython in ./lib/python2.4/site-packages
Cleaning up...
CYTHON=`pwd`/bin/cython /Users/Edwin/github/mozilla-services/loads/bin/pip install https://github.com/surfly/gevent/archive/master.zip
Downloading/unpacking https://github.com/surfly/gevent/archive/master.zip
Downloading master.zip (unknown size): 1.3Mb downloaded
Running setup.py egg_info for package from https://github.com/surfly/gevent/archive/master.zip
Traceback (most recent call last):
  File "<string>", line 14, in ?
  File "/var/folders/3h/5rfqphl56w17ztjcg3r6lvbm0000gn/T/pip-mqkgSo-build/setup.py", line 73
    include_dirs=['libev'] if LIBEV_EMBED else [],
                           ^
SyntaxError: invalid syntax

Create reports.

I've landed what I called the StreamCollector. It's a stream object which can collect the data and provide a simple API to query it.

The goal is to ease the creation of reports. The first report I want to work on is the HTML one. Eventually it could be done with Web Sockets in real time, but I will focus on a static view to start with.

Technology-wise, I plan to create the graphs in the browser rather than generating them beforehand in Python (with matplotlib, for instance). One option is to use d3.js / Rickshaw to create the graphs, and I plan to give it a go.

Here are the graphs I want to display in a report. Don't hesitate to add some here if you think they would be valuable.

  1. successful tests / failures / errors, depending on the cycles / number of concurrent users. This could be a stacked bar chart.
  2. Tests per second, depending on the cycles. This could be a line graph.
  3. Requests per second, still depending on the cycles. Line graph.
  4. Maybe it would be interesting to have a graph of the calls that take the most time?

Some of this information isn't graphs (or not only graphs). We can pair each of these graphs with a table containing the raw numbers. Additionally, we can have a list of the URLs that were called, with the success rate for each of them.

Anything else? Thoughts?

ctrl-C management + detach mode

Hitting ctrl-C should ask the user whether she wants to stop the test or simply detach the console.

When running loads-runner again, we should offer an --attach option that lists the running tests and hooks the output back to the selected one.
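A minimal sketch of the prompt-on-interrupt behaviour, using a plain SIGINT handler in Python 2 style; the runner.detach() and runner.stop() calls are placeholders for whatever the runner ends up exposing:

import signal
import sys

def install_ctrl_c_handler(runner):
    def on_interrupt(signum, frame):
        answer = raw_input('\nStop the test or detach the console? [stop/detach] ')
        if answer.strip().lower().startswith('d'):
            runner.detach()   # placeholder: leave the run going on the broker
        else:
            runner.stop()     # placeholder: actually stop the run
            sys.exit(0)
    signal.signal(signal.SIGINT, on_interrupt)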

add a simple AMI pause

We should add a simple pause/resume mechanism for AWS so we don't keep the cluster running for nothing.

add a loads-ctl command

A shell-like command to manage the broker (see the sketch after this list), to:

  • check the broker/agent states
  • check what tests are running
  • etc.
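A starting point could be a small cmd.Cmd shell wired to the broker transport; the client methods used below (list_agents, list_runs) are assumptions about what that layer would expose, not existing APIs:

import cmd

class LoadsCtl(cmd.Cmd):
    # Tiny interactive shell for managing the broker (sketch only).
    prompt = 'loads> '

    def __init__(self, client):
        cmd.Cmd.__init__(self)
        self.client = client          # hypothetical broker client

    def do_agents(self, line):
        'List the agents known to the broker.'
        for agent in self.client.list_agents():
            print agent

    def do_runs(self, line):
        'List the tests currently running.'
        for run in self.client.list_runs():
            print run

    def do_quit(self, line):
        return True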

copy files and agents conflicts

If two agents run on the same box, they might conflict on files and test directories.

We want to prefix everything with the agent process id to avoid this.
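A small sketch of that prefixing, using the agent's process id to build a per-agent working directory (the base directory is just an example):

import os
import tempfile

def agent_workdir(base=None):
    # Per-agent directory prefixed with the agent's pid, so two agents
    # on the same box never share files or test directories.
    base = base or tempfile.gettempdir()
    path = os.path.join(base, 'loads-agent-%d' % os.getpid())
    if not os.path.exists(path):
        os.makedirs(path)
    return path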

AssertionError when dealing with gzip-encoded content

I'm trying out a simple loadtest of the picl IdP, and getting this strange error:

AssertionError: Content-Length is different from actual app_iter length (83!=75)
 Traceback: 
  File "/usr/lib/python2.7/unittest/case.py", line 332, in run
    testMethod()
  File "stress.py", line 18, in test_entropy
    response = self.app.get("/entropy", status=[200])
  File "/home/rfk/repos/mozilla/identity/picl-idp/loadtest/local/lib/python2.7/site-packages/webtest/app.py", line 199, in get
    expect_errors=expect_errors)
  File "/home/rfk/repos/mozilla/identity/picl-idp/loadtest/local/lib/python2.7/site-packages/webtest/app.py", line 476, in do_request
    res.body
  File "/home/rfk/repos/mozilla/identity/picl-idp/loadtest/local/lib/python2.7/site-packages/webob/response.py", line 361, in _body__get
    % (self.content_length, len(body))

As far as I can tell, this is caused by mishandling of gzip-encoded content somewhere in the loads HTTP stack. WSGIProxy ends up returning a Response object with the Content-Length header matching the original gzipped length, but a body containing the unzipped contents.

If I explicitly set "Accept-Encoding: identity" on the request then the problem goes away.
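For reference, the workaround looks roughly like this inside the test; WebTest's get() accepts a headers argument, and the surrounding test class is assumed to provide self.app as in the original report:

def test_entropy(self):
    # Force identity encoding so the Content-Length header matches
    # the body WebOb actually sees.
    response = self.app.get('/entropy',
                            headers={'Accept-Encoding': 'identity'},
                            status=[200])
    self.assertEqual(response.status_int, 200)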

To reproduce, pull from https://github.com/mozilla/picl-idp/tree/loads-encoding-issue and:

cd ./loadtest
make build
make test

unify unittest imports

add

import unittest2 as unittest

into loads.tests.support, and use

from loads.tests.support import unittest

everywhere
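A minimal version of what loads/tests/support.py could contain; the plain-unittest fallback is an assumption, in case unittest2 isn't installed:

# loads/tests/support.py (sketch)
try:
    import unittest2 as unittest   # provides the 2.7+ assert helpers
except ImportError:
    import unittest                # assumed fallback

# and then everywhere else in the test suite:
# from loads.tests.support import unittest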

Embed a paramiko SSH server in the tests

That would be useful for running the tests that interact through SSH. For now those tests run against localhost:22 and are skipped unless TEST_SSH is in the environment.
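Until an embedded server lands, the skip guard described above looks something like this (the class and test names are illustrative):

import os
import unittest

@unittest.skipUnless('TEST_SSH' in os.environ,
                     'set TEST_SSH to run tests against localhost:22')
class TestSSHDeploy(unittest.TestCase):
    def test_connect(self):
        # would exercise the SSH-backed deploy code here
        pass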

Change the zmq messages format

Currently, we're using a message format with metadata and data at the same level, e.g.:

{
    worker_id: 1234,
    data_type: 'addSuccess',
    test_name: 'name of the test',
}

Whereas when implementing the zmq reporter for mocha (the JS unit test framework), I was expecting the data to be separated under a data key, e.g.:

{
    worker_id: 1234,
    data_type: 'addSuccess',
    data: {
        test_name: 'name of the test',
        another_key: 'value'
    }
}

Any different thoughts?

ssh-related tests are blocking under linux

I don't exactly know when but it seems to be blocking on the

I've tried to run the test with bin/nosetests loads.tests.test_deploy_host:TestHost.test_chdir

I'm getting this traceback when I hit ^C

Traceback (most recent call last):
  File "bin/nosetests", line 9, in <module>
    load_entry_point('nose==1.3.0', 'console_scripts', 'nosetests')()
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/nose/core.py", line 118, in __init__
    **extra_args)
  File "/usr/lib/python2.7/unittest/main.py", line 95, in __init__
    self.runTests()
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/nose/core.py", line 197, in runTests
    result = self.testRunner.run(self.test)
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/nose/core.py", line 61, in run
    test(result)
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/nose/suite.py", line 176, in __call__
    return self.run(*arg, **kw)
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/nose/suite.py", line 223, in run
    test(orig)
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/nose/suite.py", line 176, in __call__
    return self.run(*arg, **kw)
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/nose/suite.py", line 223, in run
    test(orig)
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/nose/case.py", line 45, in __call__
    return self.run(*arg, **kwarg)
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/nose/case.py", line 133, in run
    self.runTest(result)
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/nose/case.py", line 151, in runTest
    test(result)
  File "/usr/lib/python2.7/unittest/case.py", line 391, in __call__
    return self.run(*args, **kwds)
  File "/usr/lib/python2.7/unittest/case.py", line 318, in run
    self.setUp()
  File "/home/alexis/dev/github.com/loads/loads/tests/test_deploy_host.py", line 38, in setUp
    start_ssh_server()
  File "/home/alexis/dev/github.com/loads/loads/tests/test_deploy_host.py", line 23, in start_ssh_server
    Host('0.0.0.0', 2200, 'tarek')
  File "/home/alexis/dev/github.com/loads/loads/deploy/host.py", line 42, in __init__
    sftp = paramiko.SFTPClient.from_transport(self.client.get_transport())
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/paramiko/sftp_client.py", line 105, in from_transport
    chan.invoke_subsystem('sftp')
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/paramiko/channel.py", line 245, in invoke_subsystem
    self._wait_for_event()
  File "/home/alexis/dev/github.com/loads/local/lib/python2.7/site-packages/paramiko/channel.py", line 1115, in _wait_for_event
    self.event.wait()
  File "/usr/lib/python2.7/threading.py", line 403, in wait
    self.__cond.wait(timeout)
  File "/usr/lib/python2.7/threading.py", line 243, in wait
    waiter.acquire()

An strace -p on the process tells me nothing more than that it's waiting. CPU is at 100%.

smart storage of errors

  • store each unique instance of an error and count its occurrences (unique instance: a given exception at a given line of code), as sketched below
  • provide an API in the broker to grab them: get_errors(run_id)
  • provide a wrap-up at the end of the run where we display the most frequent ones
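A sketch of that de-duplication, keying each error on the exception type plus the file and line it was raised from; the in-memory store below just stands in for the broker-side API:

import traceback
from collections import defaultdict

class ErrorStore(object):
    # Same exception class raised at the same file/line = one entry.
    def __init__(self):
        self._counts = defaultdict(int)

    def add(self, run_id, exc_type, exc_value, tb):
        filename, lineno, func, text = traceback.extract_tb(tb)[-1]
        key = (run_id, exc_type.__name__, filename, lineno)
        self._counts[key] += 1

    def get_errors(self, run_id):
        # Most frequent first, ready for the end-of-run wrap-up.
        items = [(key, count) for key, count in self._counts.items()
                 if key[0] == run_id]
        return sorted(items, key=lambda item: item[1], reverse=True)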

The number of successes / failures / etc. isn't reported when using --duration

$ bin/loads-runner loads.examples.test_blog.TestWebSite.test_something --duration=1 -a1
[============================] 100% 
Hits: 19                                                                                                 
Started: 2013-06-27 15:45:55.433709                                                                      
Duration: 1.01 seconds                                                                                   
Approximate Average RPS: 18                                                                              
Average request time: 0.04s                                                                              
Opened web sockets: 19                                                                                   
Bytes received via web sockets : 10830                                                                   

Success: 0     < This shouldn't be zero
Errors: 0 
Failures: 0

Avoid collisions between workers in the agent

At the moment, all the workers run by an agent need to belong to the same run. If that's not the case, we end up in a weird situation where the agent is not able to know which runs are finished and which ones aren't.

This can be fixed by using the run-id to distinguish the runs from each other.
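A small sketch of the bookkeeping the run-id makes possible, so the agent can tell which runs still have live workers (the class is illustrative, not existing code):

from collections import defaultdict

class RunTracker(object):
    # Map run-id -> set of worker pids still alive for that run.
    def __init__(self):
        self._runs = defaultdict(set)

    def worker_started(self, run_id, pid):
        self._runs[run_id].add(pid)

    def worker_finished(self, run_id, pid):
        self._runs[run_id].discard(pid)
        if not self._runs[run_id]:
            del self._runs[run_id]   # that run is fully done

    def is_finished(self, run_id):
        return run_id not in self._runs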

Light mode for TestResult

Have a mode where TestResult just has counters (given by the server).

If something needs detailed data, we lazy-load it from the server.
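A sketch of that lazy variant, assuming a hypothetical broker client exposing get_details(run_id); only the counters live locally:

class LightTestResult(object):
    # Keep only counters locally; fetch anything detailed on demand.
    def __init__(self, run_id, client):
        self.run_id = run_id
        self._client = client        # hypothetical broker client
        self.counters = {'success': 0, 'failures': 0, 'errors': 0}
        self._details = None

    def incr(self, name, count=1):
        self.counters[name] += count

    @property
    def details(self):
        if self._details is None:    # lazy load from the server
            self._details = self._client.get_details(self.run_id)
        return self._details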
