matteofigus / api-benchmark
A node.js tool to benchmark APIs
License: MIT License
Do you think it would be possible to add the compared services as different line items on the chart? This could be really powerful.
To see what happens to your code in Node.js 10, Greenkeeper has created a branch with the following changes:
- .travis.yml
- Nothing needed to change in the package.json files, so that was left alone

If you’re interested in upgrading this repo to Node.js 10, you can open a PR with these changes. Please note that this issue is just intended as a friendly reminder and the PR as a possible starting point for getting your code running on Node.js 10.
Greenkeeper has checked the engines key in any package.json file, the .nvmrc file, and the .travis.yml file, if present.
- engines was only updated if it defined a single version, not a range.
- .nvmrc was updated to Node.js 10.
- .travis.yml was only changed if there was a root-level node_js that didn’t already include Node.js 10, such as node or lts/*. In this case, the new version was appended to the list. We didn’t touch job or matrix configurations because these tend to be quite specific and complex, and it’s difficult to infer what the intentions were.

For many simpler .travis.yml configurations, this PR should suffice as-is, but depending on what you’re doing it may require additional work or may not be applicable at all. We’re also aware that you may have good reasons to not update to Node.js 10, which is why this was sent as an issue and not a pull request. Feel free to delete it without comment, I’m a humble robot and won’t feel rejected 🤖
There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.
Your Greenkeeper Bot 🌴
Is it possible to listen to the events emitted by the runner that the suite manager listens for?
I've thrown web sockets at my small project now and was hoping to possibly display a progress bar in real time and even possibly later at some point stream the results in real time.
In a practical environment each request has different data. If I send the same data in each request, that data ends up in the database cache, so it's not a real load test. What I can do is:
var service = {
api: "http://my.api.host"
};
var routes = {};
var i = 0;
while (i < 1000) {
  routes['route' + i] = {
    method: 'post',
    route: 'endpoint',
    data: getSomeRandomData(i), // this function returns different data each time
    expectedStatusCode: 200
  };
  i++; // increment the counter, otherwise the loop never terminates
}
var options = {
debug: true,
minSamples: 1000,
runMode: "parallel",
maxConcurrentRequests: 20,
stopOnError: false
};
apiBenchmark.measure(service, routes, options, function(err, results){
console.log('error is ' + err);
console.log('result is ' + results);
});
Here getSomeRandomData returns different data each time, so each of my requests has different data. But is there a better way?
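One way to keep every sample distinct without building a thousand route entries is a payload generator that yields fresh data on each call. This is only a sketch: the function name and payload shape below are made up, and whether api-benchmark accepts function-valued data should be checked against its README.

```javascript
// Sketch: a deterministic generator that returns different data on every
// call, so repeated requests never hit the same database cache entry.
// nextPayload and its fields are illustrative, not part of api-benchmark.
var counter = 0;
function nextPayload() {
  counter++;
  return {
    id: counter,                                  // unique per request
    token: Math.random().toString(36).slice(2)    // random filler per request
  };
}

var a = nextPayload();
var b = nextPayload();
// a.id !== b.id, so no two requests carry identical data
```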
instead of -10%
The err
variable declared in lib/runner.js:137
will be overwritten by the one in :153
because those variables live in the same scope.
To mitigate this problem, I avoided creating a variable at all and refactored by inlining the object into the call to emit()
, which is no less readable and slightly more efficient (avoids a variable assignment).
Sort of similarly, we can't re-scope a named parameter with var
, as in lib/debug-helper.js:9
. That line could be improved (and made slightly safer) by being changed to:
module.exports = function (logger) {
  // other code
  logger = typeof logger !== 'undefined' ? logger : console;
  // other code
}
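As a side note, the shorter `||` idiom achieves the same default, with the caveat that it also replaces any falsy logger (e.g. `null`). A minimal sketch, not the module's actual code; `makeDebugHelper` is an illustrative name:

```javascript
// Sketch: default a parameter without re-declaring it via var.
function makeDebugHelper(logger) {
  logger = logger || console; // falls back to console when logger is falsy
  return logger;
}

// makeDebugHelper() returns console; passing a logger returns it unchanged.
var custom = { log: function () {} };
var withDefault = makeDebugHelper();
var withCustom = makeDebugHelper(custom);
```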
Do you think it would be nice to be able to chain endpoints somehow? Some REST APIs need values from a previous response body, such as identifiers, in order to query the next endpoint, so those requests would have to run somewhat synchronously.
Right now the generated date displayed on the HTML report is set to the current date and decoupled from the benchmark results object, but it might be better to couple it with the results object that is returned from benchmarking.
The way I interface with api-benchmark is that I take the results object and store it in Redis; I had to extend it to store that date and alter report generation to pull the date from the results object rather than using the current date.
Have you ever experienced Superagent being slow compared to the request package?
I switched from one API server to Azure API Management and it slowed to a crawl; when I started debugging, I narrowed it down to Superagent.
If I use the request package in another script with the same request, my calls go through fine, but in the Superagent version it takes about 120 seconds.
@matteofigus What do you think about building the HTML server-side? My results JSON is 835kb when sent to the browser, and it takes quite a while to build the elements client-side. I wonder if something like Handlebars could be used too.
Rather than use the object key would it be possible to just set a property with the names?
I think that route should be allowed to be a function, because then GET endpoints like:
/api/customer/12345
/api/customer/67890
could be tested much better with random data.
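A function-valued route could look like the sketch below. This is purely illustrative of the proposal, since api-benchmark currently expects route strings; the customer ids are made-up test data.

```javascript
// Sketch: if route accepted a function, each sample could target a
// different customer id instead of hammering one cached record.
var ids = [12345, 67890, 24680]; // illustrative ids
var calls = 0;

function customerRoute() {
  var id = ids[calls % ids.length]; // cycle through the ids
  calls++;
  return '/api/customer/' + id;
}

// First call targets /api/customer/12345, the next /api/customer/67890, ...
var first = customerRoute();
var second = customerRoute();
```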
I've got a use case where I would like to submit different data per request and evaluate the overall performance of the API, regardless of the data sent. This would require POST data to be dynamic, possibly changed per request.
Do you think it would be viable to have the .measure()
function receiving a function that would generate the data on each request? Or through any other approach?
Is the request time for parallel requests supposed to be high? I was getting a graph that said basically 35 seconds, but when I switch back to sequential it says about 1-2 seconds per request. It seems like it may be adding them up for parallel, or applying some other logic?
I made a Docker version of api-benchmark with a few extensions. Results of API load tests are browsable through an interface.
Link: https://hub.docker.com/r/johnshumon/docker-api-benchmar
If you think it's a good Docker version of api-benchmark, I would like to integrate the Docker version into this repo.
Hello,
I like this tool, but I was curious how it relates to more famous benchmarking tools like Apache Benchmark (ab). What are the pros and cons of each? How do they relate to each other?
Thanks
I would like to use service-specific headers for each endpoint request. In other words, if I had servers A, B, and C and only one endpoint X (e.g. http://A/X
, http://B/X
, http://C/X
) and all of these had different authorization headers, under the current model I could not provide the correct headers.
I thought of a few options:
- If endpoints.X.headers could be set to a function, and this function was called with the service name (e.g. A), I could map to the right headers.
- If each service could carry its own headers that get merged into the endpoint's headers (e.g. via Object.assign), I could set up my headers per-service.

Do either of these make sense, or is there something I'm missing in reading the documentation?
First of all thank you for creating the benchmark utility, I find it very useful.
My use case is benchmarking a search API endpoint. I am passing an arbitrary number of query parameters (e.g. /search?a1=b1&a2=b2&a3=b3) and performing a GET call.
I can see in the docs that api-benchmark allows a function to generate payloads for each POST call. It would be great if we could have the same ability to dynamically generate/randomize query strings for GET calls.
I created an issue in grunt-api-benchmark, but wanted to leave a pointer to it here, because the source of the issue is in this module.
@matteofigus In order to find API routes that have errors, in addition to status code checking, I was thinking it could be useful to be able to pass a callback handler which is passed the response object and can determine if the response body is what is expected and would return a boolean.
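Such a handler could have the shape sketched below. This is a proposal, not an existing api-benchmark hook; the response layout and the `items` field are made up for illustration:

```javascript
// Sketch: a hypothetical per-route verifier that inspects the response
// and returns a boolean; a route would fail if this returned false.
function verifyResponse(response) {
  var body = response.body || {};
  // Beyond the status code, also require the expected body shape.
  return response.status === 200 && Array.isArray(body.items);
}

// Against stand-in responses:
var ok = verifyResponse({ status: 200, body: { items: [] } });    // true
var bad = verifyResponse({ status: 200, body: { items: null } }); // false
```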
Running npm test several times, I saw this failure a few times:
Uncaught AssertionError: expected 0.009798363 to be within 0.01..0.015
This is a MacBook Air i7; uptime reports load averages of 1.94 2.50 2.82, so maybe my system is just bogged down.
I don't see any references to restricting calls to HTTP-only in your library, yet when I try to reach an HTTPS-based service I get null returned.
Any suggestions, please? Thanks!
JSHint the entire project, with a .jshintrc configuration file. Notable changes should include braces ({}) around all statements.
Hi,
I'm using this code:
'use strict';
var fs = require('fs');
var open = require('open');
var apiBenchmark = require('api-benchmark');
var services = {
server1: "http://localhost:4000/",
server2: "http://localhost:9999/"
};
var routes = { route1: '/create/email' };
var opts = {
  runMode: 'parallel',
  minSamples: 5000
};
apiBenchmark.compare(services, routes, opts, function(err, results){
apiBenchmark.getHtml(results, function(error, html){
fs.writeFile('./output.html', html, function () {
open('./output.html');
});
});
});
The HTML output opens correctly in the browser, but it only displays stats from "server1".
Is this the expected behavior? Shouldn't it compare the performance of both (possibly with different colored lines)? Also, the "Request details" and "Response details" tabs don't seem to work in this case.
Thanks!
Hello, thanks a lot for this project!
I've tried to understand the code but I couldn't figure everything out.
Which time is measured for the analysis? Is it the time until the first byte is received, or does it include downloading the response?
And is it customisable which time is used?
Thanks for your answer!
Dominik
Is there a way to track how long it took run the benchmark? I was thinking under the generated date in the report view there could also be like a time it took to complete the benchmark which could be returned in the results object.
Hi,
\node_modules\api-benchmark\lib\runner.js:177
var results = {};
^
Options are as below:
var options = {
"minSamples": 10,
"runMode": "parallel",
"maxConcurrentRequests": 2,
"debug": true,
"stopOnError": false
};
Thanks in advance.
I have got a use case where I have to send a zip file as binary data (like curl --data-binary '@input.zip').
Is there a way to handle such requests?
When the request handler class detects that the response status code doesn't match the expected one, it returns the error code and message, but doesn't include the response, so the tab displays undefined.
https://github.com/matteofigus/api-benchmark/blob/master/lib/request-handler.js
I'm wondering what the best way to fix this is. I also noticed that if you pass in the expected status code as a string it doesn't match, since the comparison is strict and expects integers. We could use parseInt() if we want to support passing in a string.
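The parseInt() idea could look like this sketch (not the library's actual code; `statusMatches` is an illustrative name):

```javascript
// Sketch: coerce the expected status code so both 200 and "200" match,
// while non-numeric or wrong codes still fail.
function statusMatches(actual, expected) {
  return actual === parseInt(expected, 10);
}

statusMatches(200, 200);   // true
statusMatches(200, '200'); // true, string input now accepted
statusMatches(404, '200'); // false
```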
Maybe we can investigate alternative plotting libraries like D3 and Chart.js.
Hi!
Is there some way I can provide a client-side certificate when hitting an HTTPS endpoint?