
MassTransit Benchmark

A set of benchmarks for measuring the performance of MassTransit with the supported transports.

Message Latency

Measures the throughput (send, consume) and latency (time from send to receive) of messages. The number of clients can be scaled to simulate multiple concurrent producers writing to the queue, and the concurrency limit, prefetch count, and other settings can also be adjusted.

Usage

To see the usage, enter:

dotnet run -f netcoreapp3.1 -c Release -- -?

That will show all the details of using the benchmark.

RabbitMQ

A good example that really hits RabbitMQ pretty hard.

dotnet run -f netcoreapp3.1 -c Release -- --count=100000 --prefetch=1000 --clients=100

Output

MassTransit Benchmark

Transport: RabbitMQ
Host: localhost
Virtual Host: /
Username: guest
Password: *****
Heartbeat: 0
Publisher Confirmation: False
Running Message Latency Benchmark
Message Count: 100000
Clients: 100
Durable: False
Payload Length: 0
Prefetch Count: 1000
Concurrency Limit: 0
Total send duration: 0:00:09.1151465
Send message rate: 10970.75 (msg/s)
Total consume duration: 0:00:09.765957
Consume message rate: 10239.65 (msg/s)
Concurrent Consumer Count: 10
Avg Ack Time: 8ms
Min Ack Time: 0ms
Max Ack Time: 214ms
Med Ack Time: 5ms
95t Ack Time: 29ms
Avg Consume Time: 714ms
Min Consume Time: 246ms
Max Consume Time: 963ms
Med Consume Time: 770ms
95t Consume Time: 908ms

  246ms ****                                                         (   2272)
  318ms ****                                                         (   2219)
  389ms ******                                                       (   3235)
  461ms ***********                                                  (   5605)
  533ms ******************************                               (  14327)
  604ms *************************                                    (  11927)
  676ms ***************                                              (   7403)
  748ms **********************************                           (  16409)
  819ms ************************************************************ (  28472)
  891ms *****************                                            (   8130)
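The reported rates are the message count divided by the total duration. A quick sketch, using the numbers from the run above, confirms the arithmetic:

```python
# Reproduce the reported send/consume rates from the run above:
# rate (msg/s) = message count / total duration (in seconds).

message_count = 100_000

send_duration_s = 9.1151465    # Total send duration: 0:00:09.1151465
consume_duration_s = 9.765957  # Total consume duration: 0:00:09.765957

send_rate = message_count / send_duration_s
consume_rate = message_count / consume_duration_s

print(f"Send rate:    {send_rate:.2f} msg/s")     # ~10970.75
print(f"Consume rate: {consume_rate:.2f} msg/s")  # ~10239.65
```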
Host: localhost
Virtual Host: /
Username: guest
Password: *****
Heartbeat: 0
Publisher Confirmation: False
Running Request Response Benchmark
Message Count: 100000
Clients: 100
Durable: False
Prefetch Count: 1000
Concurrency Limit: 0
Total consume duration: 0:00:20.6300097
Consume message rate: 4847.31 (msg/s)
Total request duration: 0:00:20.6330248
Request rate: 4846.60 (msg/s)
Concurrent Consumer Count: 25
Avg Request Time: 20ms
Min Request Time: 4ms
Max Request Time: 104ms
Med Request Time: 20ms
95t Request Time: 25ms
Avg Consume Time: 5ms
Min Consume Time: 0ms
Max Consume Time: 54ms
Med Consume Time: 4ms
95t Consume Time: 9ms

Request duration distribution
    4ms                                                              (    527)
   14ms ************************************************************ (  92393)
   24ms ****                                                         (   6736)
   34ms                                                              (    227)
   44ms                                                              (     17)
   84ms                                                              (     16)
   94ms                                                              (     83)
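As a sanity check on the distribution above, the bucket counts can be summed; they account for essentially all of the 100,000 requests (bucketing/rounding can leave a handful outside the printed rows):

```python
# Sum the request-duration histogram buckets printed above.
# Keys are the bucket labels (ms), values the reported counts.
buckets = {
    4: 527,
    14: 92393,
    24: 6736,
    34: 227,
    44: 17,
    84: 16,
    94: 83,
}

total = sum(buckets.values())
print(total)  # 99999 -- essentially all 100,000 requests
```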

masstransit-benchmark's People

Contributors

jadanah, phatboyg

masstransit-benchmark's Issues

Slow performance results when clients = 5

On my ASP.NET Core project I need to process about 700-1000 messages/sec. I expected to set up a load balancer (NGINX+ or similar) with 3-5 backends publishing messages to RabbitMQ. Nginx handles its part successfully, but when sending to a durable queue I can't exceed 50-70 messages/sec per client, whether using Publish or this benchmark (which uses Send).

The number of sent messages is roughly linearly proportional to the number of clients, so I get about 10x the throughput when I launch 10x the clients, but I can't increase the number of backends beyond 5. What could you suggest to overcome the 50-70 messages/sec threshold? Perhaps keeping open connections in a pool? Looking at the RabbitMQ client implementation, the most time-consuming operations are creating the connection and creating the model. Any suggestions, please.
