
Comments (11)

noizex commented on July 23, 2024

I tried profiling a non-async RPC call with Nameko over a vanilla RabbitMQ installation, and either performance is pretty bad or I'm doing something totally wrong. Using ClusterRpcProxy and calling a method that just returns a number (and does nothing else), I'm getting about 20 calls per second. I'm not printing to the screen or doing anything else that could slow it down, so I have no idea why an empty RPC call takes 50ms on average (AMQP is local, as is the nameko service). What number of calls are you getting with just RPC over AMQP (not HTTP-enabled)?

I tried running 10, 20 and 30 workers and it doesn't make a difference (it did make a difference with 1 worker, which slowed things to a crawl). I also tried running 2 nameko services on 2 different machines; still no difference. Any idea what I'm doing wrong?

mattbennett commented on July 23, 2024

nameko run will start all the services defined in the module you pass it. You could run the "server" service and the "client" service in two different processes and the performance would be the same (potentially better if it's CPU-bound).

All I'm demonstrating is that the RpcProxy dependency provider reuses its connections, whereas the proxies in the standalone module (ClusterRpcProxy and ServiceRpcProxy) do not. There's no reason why they shouldn't reuse connections, but the current implementation does not, which makes them unsuitable for a performance test.
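
For reference, a minimal sketch of how the standalone proxy can still amortise that setup cost: enter the ClusterRpcProxy context once and make every call through it, rather than creating a new proxy per call. The broker URI is a placeholder and the "server"/"method" names simply follow the example discussed in this thread.

from nameko.standalone.rpc import ClusterRpcProxy

config = {"AMQP_URI": "amqp://guest:guest@localhost"}  # placeholder broker URI

# Slow: a fresh connection and reply queue are set up for every call
# for _ in range(100):
#     with ClusterRpcProxy(config) as cluster_rpc:
#         cluster_rpc.server.method()

# Faster: one connection and reply queue, reused for every call
with ClusterRpcProxy(config) as cluster_rpc:
    for _ in range(100):
        cluster_rpc.server.method()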

mattbennett commented on July 23, 2024

Not yet. I have done some high-level comparisons of nameko's HTTP entrypoint to other web frameworks (names withheld to prevent flamewars) and it is fast, presumably because it's so lightweight. I'll publish them when I'm confident I'm doing a fair test.

But profiling, no. I would love to see some though. I'm sure there's plenty of low-hanging fruit to be found and picked.

astawiarski commented on July 23, 2024

We had similar issues and decided to turn off message persistence - our approach was that messages need to be handled immediately or not at all. I started working on a nicely configurable approach in nameko, but didn't have time to finish. For the moment, we are overriding the setting for the whole project.

You can see some comments and work started in #292

In summary, the following on its own gave a 10x increase in message rates:

from kombu import Exchange

# Monkey-patch kombu so every exchange publishes transient (non-persistent) messages
Exchange.delivery_mode = 1

mattbennett commented on July 23, 2024

@noizex I suspect your benchmark involved creating a new ClusterRpcProxy for each call. Setting up the rabbit connections and reply queue is expensive.

This silly example of one service talking to another gives me ~110 calls per second locally:

import itertools

import eventlet

from nameko.rpc import RpcProxy, rpc
from nameko.timer import timer


class Server(object):
    name = "server"

    @rpc
    def method(self):
        return 42


class Client(object):
    name = "client"

    server_rpc = RpcProxy("server")

    @timer(interval=1)
    def start(self):
        # fire as many RPC calls as possible within one second,
        # then report the throughput
        with eventlet.Timeout(1, exception=False):
            for count in itertools.count():
                self.server_rpc.method()
        print("{} calls/s".format(count))

[screenshot from 2016-06-22 15:38 showing the measured calls/s]

The RpcProxy keeps its reply queue alive between uses, so it's much faster. You will also get a performance boost if you turn off persistent messages as @astawiarski suggests, but that gain comes from processes inside the broker rather than nameko.

noizex commented on July 23, 2024

Thanks Artur & Matt, I did as suggested and managed to get a decent number of processed calls (~600 per second on a better machine than my laptop). Indeed, ClusterRpcProxy was the problem, as it made a new connection to RabbitMQ every time I called an RPC method, which slowed the whole thing down significantly.

I haven't tried playing with persistent messages, but I will try that too once the number of processed calls becomes a bottleneck. For now it looks promising and we will give it a go in our upcoming microservice-oriented project.

noizex commented on July 23, 2024

Hmm, one thing I don't understand in your example, Matt. Does your approach require these 2 services to be defined in the same Python module? What I used was an external script that wasn't a service itself, so I used the nameko.standalone namespace. I guess in that case I need to provide the AMQP config and use ClusterRpcProxy?

yunfan commented on July 23, 2024

I just ran into this issue too. After a quick start with nameko, I found it convenient for writing services, but the benchmarks broke my heart: I only got about 300 qps on my company's server (controlled by me; yes, I am a devops). I remember in the old days we used our own handcrafted RPC solution, which did about 3k to 6k qps. This nameko performance really makes it hard for me to try microservices, or at least to try microservices on it.

mattbennett commented on July 23, 2024

Nameko optimises for the first thing you mentioned, convenience of writing services. If you need to do thousands of requests/sec there are more appropriate tools.

That said, there is probably a ton of low-hanging fruit that can be picked to improve performance in Nameko, but nobody has really tried to do it yet. One area where I've regularly seen performance bottlenecks is serialization -- Nameko does a lot of it, and JSON serialization in Python is generally pretty slow. You could switch out the default serializer for something faster pretty easily.
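
For illustration, a minimal sketch of swapping the serializer via configuration, assuming the standard "serializer" config key (the default is "json") and that a faster codec such as msgpack is installed and registered with kombu; both sides of a call need the same codec available.

from nameko.standalone.rpc import ClusterRpcProxy

# illustrative config only; any codec registered with kombu should work
config = {
    "AMQP_URI": "amqp://guest:guest@localhost",  # placeholder broker URI
    "serializer": "msgpack",                     # default is "json"
}

with ClusterRpcProxy(config) as cluster_rpc:
    cluster_rpc.server.method()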

Also note that since Nameko 2.7.0 it's been possible to specify delivery options on your message publishers, for example to turn off persistence and publish confirms. You will probably get decent mileage out of adjusting these options to your use case.
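
As a hedged sketch of what that might look like, assuming the RpcProxy dependency provider passes keyword options through to its underlying publisher as the 2.7.0 changelog describes (option names may vary between versions, so check the docs for yours):

from nameko.rpc import RpcProxy, rpc


class Client(object):
    name = "client"

    # assumption: publisher options are accepted here; delivery_mode=1
    # publishes transient (non-persistent) messages and use_confirms=False
    # skips publish confirms
    server_rpc = RpcProxy("server", delivery_mode=1, use_confirms=False)

    @rpc
    def do_work(self):
        return self.server_rpc.method()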

yunfan commented on July 23, 2024

@mattbennett thanks for the tips. As for other tools, do you have any recommendations?

mattbennett commented on July 23, 2024

This issue was moved to https://discourse.nameko.io/t/namekos-performance/318
