Comments (19)

andrewgodwin commented on May 19, 2024

It's partially a problem with the synchronous polling, yes - I'm working on fixing that soon. It's also something else, though, because calling epoll 100 times a second should not cause a lot of load. If you look at the output from @cbay you can see it's polling far more often than it's talking to Redis.

I'm not sure why this is, but I need to sit down and sort this out, make IPC more efficient, and probably redesign the way daphne handles socket listening entirely.
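
For context, the synchronous polling pattern being described looks roughly like this. This is a simplified sketch of an ASGI-1.x-era receive loop, not daphne's actual code; handle() is a hypothetical stand-in for daphne's dispatch:

import time

POLL_INTERVAL = 0.01  # of the order daphne 1.x used: ~100 wakeups per second

def handle(channel, message):
    # Hypothetical dispatch; stands in for daphne's real message handling.
    print(channel, message)

def poll_loop(channel_layer, channels):
    # Busy-poll: ask the channel layer for a message, sleep briefly if there
    # is none, and repeat. Every iteration wakes the process even when the
    # server is completely idle.
    while True:
        channel, message = channel_layer.receive_many(channels, block=False)
        if channel is not None:
            handle(channel, message)
        else:
            time.sleep(POLL_INTERVAL)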

bpeschier commented on May 19, 2024

Ran some simple instances of channels with both in-memory and redis backends on macOS and Linux. No sign of really hot idling. Code mostly sits in channels' Worker/run/time.sleep. Might be a Linux-on-Windows-10 thing.

urbaniak commented on May 19, 2024

I've also tested this on both Linux and macOS; it was using ~4% CPU, but only due to hot reloading (watching for file changes) and the constant select() calls.
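
One way to separate those two costs is to disable the file-watching autoreloader with runserver's standard --noreload flag; whatever idle CPU remains is then attributable to the polling itself:

python manage.py runserver --noreload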

vanikin commented on May 19, 2024

I run Daphne with 'newrelic-admin run-program' as a supervisor task and it constantly uses 12.3% CPU, even with almost no users on the server.

vanikin commented on May 19, 2024

Here are memory and CPU consumption graphs: [two screenshots, 2016-12-13]

andrewgodwin commented on May 19, 2024

@vanikin Without steps to replicate that's not super useful; the newrelic hooks change Python internals and add enough overhead that I would want to see it replicated on a plain Python.

vanikin commented on May 19, 2024

@andrewgodwin you're right, that's not very useful, because without New Relic the CPU load is only slightly lower; it's still around 12% when running as a supervisor task.
CPU usage is also high even with no custom consumers and no views when running with runserver. (I know that may not be useful either.)

PaulGregor commented on May 19, 2024

I have the same problem as @vanikin described.
I run daphne under supervisor with nginx as a proxy. When there is no WebSocket traffic going back and forth and only standard Django views are served, daphne uses ~0.55% of CPU, but as soon as one WebSocket connection is established, CPU usage spikes to ~8-10%.
System specification:

Debian GNU/Linux 8.6 (jessie) x64 
supervisord 3.0
nginx/1.11.5

Python 3.5 in a virtual environment
asgi-redis==0.14.1
asgiref==0.14.0
autobahn==0.16.0
channels==0.17.3
daphne==0.15.0
Django==1.10.2
msgpack-python==0.4.8
redis==2.10.5
six==1.10.0
Twisted==16.4.1
txaio==2.5.1
zope.interface==4.3.2

I tried other Linux versions (Ubuntu 16.04 x64 and Ubuntu 16.04 x86); the result is the same.

bastbnl commented on May 19, 2024

Not exactly the same here; daphne==0.15.0 running on its own on Debian, with some of the installed packages:

asgi-redis==1.0.0
asgiref==1.0.0
asset==0.6.11
autobahn==0.17.1
...
redis==2.10.5
requests==2.12.4
requests-oauthlib==0.7.0
rjsmin==1.0.12
service-identity==16.0.0
six==1.10.0
Twisted==16.6.0
txaio==2.6.0
zope.interface==4.3.3

Running since Jan 02 with currently just one websocket connection:

...
30311 brwnppr   20   0  218m  12m 2868 S   8.0  1.3  46:20.03 daphne
...

cbay commented on May 19, 2024

Likewise, CPU usage is around 12-13%, using Daphne 1.0.3 with Redis, under Debian Jessie and Python 2.7. Here's a strace excerpt; the sheer number of epoll_wait calls seems abnormal:

15:20:14.015386 epoll_wait(6, {}, 47, 0) = 0
15:20:14.015430 epoll_wait(6, {}, 47, 0) = 0
15:20:14.015477 epoll_wait(6, {}, 47, 0) = 0
15:20:14.015525 epoll_wait(6, {}, 47, 0) = 0
15:20:14.015573 epoll_wait(6, {}, 47, 0) = 0
15:20:14.015624 epoll_wait(6, {}, 47, 0) = 0
15:20:14.015671 epoll_wait(6, {}, 47, 0) = 0
15:20:14.016175 sendto(11, "*47\r\n$7\r\nEVALSHA\r\n$40\r\n3640886a0"..., 1789, 0, NULL, 0) = 1789
15:20:14.016397 recvfrom(11, "*0\r\n", 65536, 0, NULL, NULL) = 4
15:20:14.016501 epoll_wait(6, {}, 47, 9) = 0
15:20:14.025644 epoll_wait(6, {}, 47, 0) = 0
15:20:14.025699 epoll_wait(6, {}, 47, 0) = 0
15:20:14.025760 epoll_wait(6, {}, 47, 0) = 0
15:20:14.025811 epoll_wait(6, {}, 47, 0) = 0
15:20:14.025858 epoll_wait(6, {}, 47, 0) = 0
15:20:14.025902 epoll_wait(6, {}, 47, 0) = 0
15:20:14.025949 epoll_wait(6, {}, 47, 0) = 0
15:20:14.025998 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026045 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026090 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026138 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026185 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026233 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026280 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026328 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026376 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026425 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026472 epoll_wait(6, {}, 47, 0) = 0
15:20:14.026898 sendto(11, "*47\r\n$7\r\nEVALSHA\r\n$40\r\n3640886a0"..., 1789, 0, NULL, 0) = 1789
15:20:14.027054 recvfrom(11, "*0\r\n", 65536, 0, NULL, NULL) = 4
15:20:14.027159 epoll_wait(6, {}, 47, 9) = 0
15:20:14.036293 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036348 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036398 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036445 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036490 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036536 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036580 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036628 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036674 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036722 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036769 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036819 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036865 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036914 epoll_wait(6, {}, 47, 0) = 0
15:20:14.036961 epoll_wait(6, {}, 47, 0) = 0
15:20:14.037015 epoll_wait(6, {}, 47, 0) = 0
15:20:14.037060 epoll_wait(6, {}, 47, 0) = 0
15:20:14.037106 epoll_wait(6, {}, 47, 0) = 0
15:20:14.037522 sendto(11, "*47\r\n$7\r\nEVALSHA\r\n$40\r\n3640886a0"..., 1789, 0, NULL, 0) = 1789
15:20:14.037695 recvfrom(11, "*0\r\n", 65536, 0, NULL, NULL) = 4
15:20:14.037795 epoll_wait(6, {}, 47, 9) = 0
15:20:14.046966 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047026 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047091 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047143 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047192 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047241 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047289 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047338 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047387 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047434 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047483 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047530 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047576 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047624 epoll_wait(6, {}, 47, 0) = 0
15:20:14.047673 epoll_wait(6, {}, 47, 0) = 0
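
For reference, a trace like the one above can be captured by attaching strace to the running daphne process; -tt adds microsecond timestamps and -p attaches to an existing pid:

strace -tt -p <daphne-pid>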

PaulGregor commented on May 19, 2024

@andrewgodwin maybe you know whether this is an issue, or whether Daphne is supposed to have this kind of CPU usage?

JohnDoee commented on May 19, 2024

@PaulGregor I think it's due to the way ASGI is designed and works.

If you want to test this out, try changing this timer: https://github.com/django/daphne/blob/master/daphne/server.py#L157 to e.g. 0.20.

Everything will run slowly, but CPU usage should drop. If CPU usage does drop, then it is a design or efficiency issue with the ChannelLayer.
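
To put rough numbers on that suggestion: the strace excerpt above shows Redis being queried about every 10 ms, i.e. around 100 wakeups per second. Raising the timer to 0.20 would cut that to 5 wakeups per second (a ~20x reduction), at the cost of up to 200 ms of added latency before a new message is noticed.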

PaulGregor commented on May 19, 2024

@andrewgodwin I've tried changing that value to 0.2 and more, and there wasn't any effect on CPU usage. But there is another variable that I found to be "responsible" =)
https://github.com/django/daphne/blob/master/daphne/server.py#L116
After I changed its value to 0.03, CPU usage decreased to 3-4%, and with 0.05 it was around 2%.

JohnDoee commented on May 19, 2024

@PaulGregor I'm not Andrew, but I have been looking into this issue too.

Anyway, that's because you're using a different channel layer; I just thought you were using Redis, as that is the theme of this ticket.

The problem is that there isn't really any solution as everything currently stands, at least not without redoing a bunch of the Daphne logic. It has to be more intelligent about the channels it pulls from.

PaulGregor commented on May 19, 2024

@JohnDoee I'm using Redis 3.2.6, and I also tried 2.8; CPU usage was the same.

proofit404 commented on May 19, 2024

I'm happy to help if you need any info from me regarding RabbitMQ's abilities in this redesign.

darklow commented on May 19, 2024

It would be great if this issue were resolved, or at least some configuration workaround made available. We use Amazon EC2 instances for our project, and because daphne's CPU usage is higher than usual (even when there are no users on our site at all), we are forced to use larger instances and therefore pay more. If we stay with a single daphne process on a t2.micro instance, we run out of CPU credits in about 24 hours, get penalised, and the whole project becomes super slow. So for a while I kept rotating instances to rebalance CPU credits, and eventually moved up to a t2.small just because of this issue. Thanks.

andrewgodwin commented on May 19, 2024

I agree, but I need to find time to sit down and trace it; it's not an easy task.

andrewgodwin commented on May 19, 2024

Hot idling is gone with daphne 2, as it's all asyncio now.
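
A minimal sketch of why asyncio removes the hot idling, assuming the channels 2.x channel-layer API: the coroutine awaits the layer and is parked by the event loop until a message actually arrives, so an idle server performs no periodic wakeups.

from channels.layers import get_channel_layer  # channels 2.x

async def reader(channel_name):
    # Requires configured Django settings; shown for shape only.
    layer = get_channel_layer()
    while True:
        # await parks this coroutine until the layer actually delivers a
        # message; there is no polling, so idle CPU stays near zero.
        message = await layer.receive(channel_name)
        print(message)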
