
Comments (8)

leeoniya commented on May 24, 2024

interesting. very odd that i get very different results locally for NanoExpress and HyperExpress, and NanoExpress is much closer to raw uWS.js.

let me know if you'd like any of my logs to investigate this (and let me know how to generate them). otherwise, thanks for the discussion 👍

from hyper-express.

kartikk221 commented on May 24, 2024

It seems the wide variation between your results is caused by testing locally on the same machine. I am not sure why testing locally would produce inconsistent results, but after profiling the benchmark for uWebsockets.js and HyperExpress, I retrieved the following Node profiler logs:
uws.txt
hyperexpress.txt

I then copied the command that you used with wrk, but this time ran the benchmark on a 1vCPU Vultr instance running Ubuntu 21, and I got the following results from wrk with your command.

uWebsockets.js

Running 10s test @ http://SERVER_IP_REDACTED:8081/benchmark
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    21.22ms   14.53ms 256.07ms   98.95%
    Req/Sec     4.98k   142.07     5.17k    93.00%
  49505 requests in 10.01s, 25.49MB read
Requests/sec:   4947.74
Transfer/sec:      2.55MB

HyperExpress

Running 10s test @ http://SERVER_IP_REDACTED:8082/benchmark
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    21.12ms   14.48ms 253.57ms   98.95%
    Req/Sec     5.00k   178.33     5.23k    97.00%
  49779 requests in 10.01s, 24.92MB read
Requests/sec:   4975.14
Transfer/sec:      2.49MB

Also, the results are so similar because your command is actually very lightweight and keeps the 1vCPU server at only about 20% CPU usage during the test. If the test were more extensive, I'm sure uWS would pull ahead due to its lower overhead. But yeah, there should not be too much of a difference between uWS, HyperExpress, and NanoExpress, since HyperExpress (and in some cases NanoExpress) only constructs new instances of predefined classes, and creating objects in JS is plenty fast. If you run the benchmarks on a remote machine, or at least a machine different from your stresser machine, you should see accurate results.


leeoniya commented on May 24, 2024

> If you can run the benchmarks on a remote machine or a different machine than your stresser machine then you should see accurate results.

why would this be any better? you're just benchmarking the limits/bottleneck of a network, or a specific host.

in my test, wrk runs on a different core than the node server, so the fact that it is the same machine makes minimal difference -- this machine is far more powerful than this test exerts. more importantly, NanoExpress shows much higher numbers than hyper-express. both are less than raw uWS.js, which is expected of course. your benchmarks show that they are all close, but i'm not seeing this.


leeoniya commented on May 24, 2024

the only conclusion you can draw from your benchmark table is that Express and Fastify hit the Vultr CPU limits before they hit the network limits, and uWS.js, NanoExpress and HyperExpress all hit the network limits before the CPU limits.


kartikk221 commented on May 24, 2024

Like I said, I am not sure exactly why testing on a local machine would cause this big a difference, but that is precisely why I profiled the node process when testing uWebsockets.js vs. HyperExpress: to see why HyperExpress was scoring so much lower in wrk and where the overhead was coming from. If you look at the profile logs that I attached above, you will see that HyperExpress's JavaScript logic takes up less than 5% of the total execution ticks. Unless you think that creating a new instance of a constructor in JavaScript is a "heavy" operation, there is practically no room for a bottleneck in HyperExpress vs. uWebsockets.js, or NanoExpress for that matter.
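(For anyone who wants to reproduce logs like the ones attached above: they can be generated with Node's built-in V8 sampling profiler. A minimal sketch, using an inline busy loop as a stand-in for the benchmark server:)

```shell
# Record V8 profiler ticks while a script runs; a short busy loop
# stands in here for running the actual benchmark server under load.
node --prof -e 'let s = 0; for (let i = 0; i < 1e7; i++) s += i;'

# The run leaves an isolate-*-v8.log file in the working directory.
# Post-process it into the human-readable tick summary:
node --prof-process isolate-*-v8.log > processed.txt
head processed.txt
```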

Also, you mentioned that Express/Fastify hit CPU limits while uWS.js, NanoExpress, and HyperExpress hit network limits. That is precisely what is supposed to happen, as stated in the benchmarks section. You are more than welcome to try these benchmarks on a different server with a faster network card, or in your own environment with an extremely fast network adapter; you will see that the results remain relatively the same, simply due to the little overhead in HyperExpress vs. uWebsockets.js, as shown in the Node profiler logs above.


kartikk221 commented on May 24, 2024

> interesting. very odd that i get very different results locally for NanoExpress and HyperExpress, and NanoExpress is much closer to raw uWS.js.

I looked a bit further into this yesterday, and after reviewing the request flow in both HyperExpress and NanoExpress, the main difference I could find was that HyperExpress may be performing more JS/C++ bridge calls (not even sure that's the right term): when a HyperExpress.Request instance is created, HyperExpress fetches the method, path, query, ip, proxy_ip, and all headers, and stores them locally as shown here:

// Eagerly copy fields off the native uWS request/response objects,
// which are only valid during the handler invocation:
this.#method = request.getMethod().toUpperCase();
this.#path = request.getUrl();
this.#query = request.getQuery();
this.#url = this.#path + (this.#query ? '?' + this.#query : '');
this.#remote_ip = response.getRemoteAddressAsText();
this.#remote_proxy_ip = response.getProxiedRemoteAddressAsText();
this.#raw_request.forEach((key, value) => (this.#headers[key] = value));
No other operations are performed, other than wrapping the middlewares/user route handler in a try/catch/promise which catches both sync and async errors for the global error handler. All of these additional calls are essentially necessary for HyperExpress to be as user-friendly as possible and to provide the most "Express-like" experience, even though uWebsockets.js is vastly different from the built-in Node http/https module that Express/Fastify use.

I would say that in terms of practical performance, where you are running on servers and clusters, HyperExpress/NanoExpress/uWebsockets.js will perform relatively similarly, as the network will almost always be the first bottleneck. If for some reason you do need every single possible request/sec from a webserver, then you might as well build directly in C++ with uSockets/uWebsockets, since that would provide the most raw performance without any Node overhead.
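The eager-caching pattern described above can be sketched against stand-ins for the uWS request/response objects (the stand-in names here are hypothetical; the real uWS objects are only safe to read synchronously inside the route handler, which is exactly why the fields get copied up front):

```javascript
// Hypothetical stand-ins for uWS.js request/response objects, which are
// only safe to read during the route handler call -- hence the eager copy.
const fakeUwsRequest = {
  getMethod: () => 'get',
  getUrl: () => '/benchmark',
  getQuery: () => 'n=1',
  forEach: (cb) => cb('content-type', 'application/json'),
};
const fakeUwsResponse = {
  getRemoteAddressAsText: () => '127.0.0.1',
};

// Minimal sketch of the wrapper: every field is read from the native
// object once, in the constructor, and cached on the instance.
class Request {
  #method; #path; #query; #url; #remote_ip; #headers = {};
  constructor(request, response) {
    this.#method = request.getMethod().toUpperCase();
    this.#path = request.getUrl();
    this.#query = request.getQuery();
    this.#url = this.#path + (this.#query ? '?' + this.#query : '');
    this.#remote_ip = response.getRemoteAddressAsText();
    request.forEach((key, value) => (this.#headers[key] = value));
  }
  get method() { return this.#method; }
  get url() { return this.#url; }
  get headers() { return this.#headers; }
}

const req = new Request(fakeUwsRequest, fakeUwsResponse);
console.log(req.method, req.url); // GET /benchmark?n=1
```

The trade-off is a handful of extra JS/C++ bridge calls per request in exchange for an object whose properties remain usable after the handler returns (e.g. inside async middleware).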


leeoniya commented on May 24, 2024

great, thanks for looking into it, and that's a completely fair trade-off. i'll try to re-bench soon on e.g. a 1GB/1vCPU Linode instance w/ debian-unstable. still a bit odd that your numbers show ~5x higher perf vs Fastify, but locally i only see ~2x.


kartikk221 commented on May 24, 2024

The ~5x higher performance against Fastify most likely comes from the test scenario, which simulates 2500 concurrent connections over a pipeline of 4 requests for 30 seconds. Express/Fastify, or any other Node-based webserver, gets more heavily bottlenecked the more stress you place upon it. Your result of 43k reqs/sec for Fastify looks about right, since your single-core performance is likely a bit faster than the Vultr instance I use for benchmarking.

You can verify the CPU bottleneck by monitoring CPU/memory usage on the remote instance while stressing it: the uWS-based webservers remain fairly reasonable in CPU/memory usage, while the native Node-based ones will spike and hit the peak limit.
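(One way to observe that from inside the process itself, rather than with an external monitor, is Node's process.cpuUsage(): sampled across a wall-clock interval, it gives a rough CPU-utilization figure. A sketch, not what the benchmark above used:)

```javascript
// Rough in-process CPU utilization: sample process.cpuUsage() across a
// wall-clock interval. A saturated single-core server approaches 100%.
function cpuPercentDuring(fn) {
  const startCpu = process.cpuUsage();       // microseconds of CPU time
  const startWall = process.hrtime.bigint(); // nanoseconds of wall time
  fn();
  const cpu = process.cpuUsage(startCpu);    // delta since startCpu
  const wallMicros = Number(process.hrtime.bigint() - startWall) / 1e3;
  return (100 * (cpu.user + cpu.system)) / wallMicros;
}

// A busy loop pegs the core, so utilization should be near 100%.
const pct = cpuPercentDuring(() => {
  let s = 0;
  for (let i = 0; i < 5e7; i++) s += i;
});
console.log(`CPU during busy loop: ~${pct.toFixed(0)}%`);
```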

