django / daphne
Django Channels HTTP/WebSocket server
License: BSD 3-Clause "New" or "Revised" License
Switched to Daphne behind Nginx. This morning there were many errors:
...*5144 upstream timed out (110: Connection timed out) while reading response header from upstream...
Would this mean that I need more workers? I have 4. I'll try increasing log verbosity and see if I can get some more information.
Combination of versions I am getting the error with:
Works fine with daphne 0.12.2.
On 0.13.0 I get the following stacktrace:
File "/usr/local/lib/python3.4/dist-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/rest_framework/views.py", line 466, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.4/dist-packages/rest_framework/views.py", line 454, in dispatch
self.initial(request, *args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/rest_framework/views.py", line 381, in initial
neg = self.perform_content_negotiation(request)
File "/usr/local/lib/python3.4/dist-packages/rest_framework/views.py", line 296, in perform_content_negotiation
return conneg.select_renderer(request, renderers, self.format_kwarg)
File "/usr/local/lib/python3.4/dist-packages/rest_framework/negotiation.py", line 44, in select_renderer
format = format_suffix or request.query_params.get(format_query_param)
File "/usr/local/lib/python3.4/dist-packages/rest_framework/request.py", line 353, in __getattribute__
return super(Request, self).__getattribute__(attr)
File "/usr/local/lib/python3.4/dist-packages/rest_framework/request.py", line 178, in query_params
return self._request.GET
File "/usr/local/lib/python3.4/dist-packages/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/usr/local/lib/python3.4/dist-packages/channels/handler.py", line 141, in GET
self.message.get('query_string', '').replace("+", "%2b").encode("utf8"),
TypeError: 'str' does not support the buffer interface
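The traceback above boils down to a Python 3 bytes/str mismatch. A minimal reproduction (illustrative only, not the actual channels code): the incoming query_string is bytes, but str arguments are passed to bytes.replace().

```python
# Python 3 reproduction of the type mismatch behind the traceback above
# (illustrative, not the actual channels code): query_string arrives as
# bytes, but str arguments are passed to bytes.replace().
query_string = b"page=1+2"
try:
    query_string.replace("+", "%2b")  # str arguments on a bytes object
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
# Keeping both sides as bytes avoids the error:
fixed = query_string.replace(b"+", b"%2b")
print(fixed)  # b'page=1%2b2'
```

The fix is simply to keep the pattern and replacement as bytes when the subject is bytes.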
My WebSocket configuration works in Firefox, but not in Opera or Chrome.
I have a few questions about using daphne:
Please help me understand the correct configuration of daphne with HTTPS!
Everything works fine except the WebSocket connection.
I run daphne as:
Django(Channels) -> daphne -> nginx
run django worker:
python manage.py runworker -v2
run daphne with LetsEncrypt certificates:
daphne -e ssl:8443:privateKey=/etc/letsencrypt/live/my.domain.net/privkey.pem:certKey=/etc/letsencrypt/live/my.domain.net/fullchain.pem project.asgi:channel_layer -p 8000 -b 0.0.0.0
When I run my site without HTTPS (HTTP only) and run daphne as:
daphne project.asgi:channel_layer -p 8000 -b 0.0.0.0
the WebSocket connects in all browsers.
nginx configuration:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name my.domain.net <my.ip.address>;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/my.domain.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my.domain.net/privkey.pem;

    # allow only the most secure SSL protocols and ciphers, and use the strong Diffie-Hellman group
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers '...';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;

    client_max_body_size 512M;

    access_log /webapps/project/logs/nginx-access.log;
    error_log /webapps/project/logs/nginx-error.log;

    location /static/ {
        alias /webapps/project/static/;
    }
    location /media/ {
        alias /webapps/project/media/;
    }
    location /downloads/ {
        alias /webapps/project/static/downloads/;
    }
    location /docs/ {
        alias /webapps/project/static/docs/en/;
    }
    location /docs/en/ {
        alias /webapps/project/static/docs/en/;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        # need for google oauth
        proxy_set_header Host $host;
    }

    location ~ /.well-known {
        allow all;
    }
}

server {
    listen 80;
    server_name my.domain.net <my.ip.address>;

    location /downloads/ {
        alias /webapps/project/static/downloads/;
    }
    location / {
        return 301 https://$host$request_uri;
    }
}
Websocket connection on client (browser) with using ReconnectingWebSocket:
...
var ws_path = "wss://my.domain.net:8443/" + userId.toString() + "/stream/";
console.log("WebSocket connecting to " + ws_path);
webSocket = new ReconnectingWebSocket(ws_path);
webSocket.timeoutInterval = 5000;
webSocket.onopen = function() {
console.log("WebSocket connected.");
};
webSocket.onclose = function(event) {
console.log("WebSocket disconnected.");
};
webSocket.onmessage = function(message) { ... }
...
Hi,
I'm trying to start daphne as a system task on Ubuntu, like uWSGI, because I'm serving the application via nginx and uWSGI, but I can't start daphne without enabling the virtualenv. How can I deploy my project, given that I have several other projects on the same nginx server?
By the way, in case I need to use daphne for more than one application, is there a way to start more than one worker at once (like uWSGI's emperor mode)?
How can I increase the queue size of the daphne server? When I am load testing my service, the nginx server returns:
503 Service Unavailable
Request queue full.
Daphne
Where are these configuration settings?
I have one nginx server running, 3 daphne instances (same host), and 6 runworkers.
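The "Request queue full" 503 typically comes from the channel layer hitting its per-channel capacity rather than from daphne itself. A sketch of raising that limit, assuming the asgi_redis backend (option names taken from that library; verify against the version in use):

```python
# Django settings sketch (assumption: asgi_redis backend). The "capacity"
# option bounds how many messages may queue per channel before senders get
# ChannelFull, which daphne surfaces as a 503.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": ["redis://localhost:6379/0"],
            "capacity": 1000,  # raise from the default to queue more requests
        },
    },
}
```

Raising capacity trades memory for queueing headroom; adding workers is the other lever, since they drain the queue faster.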
During a server migration, I ran into the following warning which was causing the Django app to crash. I have posted my stack trace and requirements.txt below. For now, commenting out the
raise DeprecationWarning
in daphne/server.py
allows me to run my app, but clearly that is dangerous.
This issue is similar, but matching the daphne and channels versions didn't work.
Any ideas for a permanent fix?
Huge thanks in advance! 🌮
ubuntu@host:~/user$ python3.4 manage.py runserver 0.0.0.0:8000 --settings=foo.bar.production
Performing system checks...
System check identified some issues:
System check identified 1 issue (0 silenced).
January 30, 2017 - 08:18:30
Django version 1.10.1, using settings 'foo.bar.production'
Starting Channels development server at http://0.0.0.0:8000/
Channel layer default (asgiref.inmemory.ChannelLayer)
Quit the server with CONTROL-C.
2017-01-30 08:18:30,941 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-01-30 08:18:30,947 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-01-30 08:18:30,952 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-01-30 08:18:30,959 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0x7f329a9a2378>
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/channels/management/commands/runserver.py", line 83, in inner_run
ws_protocols=getattr(settings, 'CHANNELS_WS_PROTOCOLS', None),
File "/usr/local/lib/python3.4/dist-packages/daphne/server.py", line 41, in __init__
''' % self.__class__.__name__)
DeprecationWarning:
The host/port/unix_socket/file_descriptor keyword arguments to Server are deprecated.
requirements.txt
Django==1.10.1
djangorestframework==3.4.6
drf-nested-routers==0.11.1
Pillow==3.3.1
djoser==0.5.1
gunicorn==19.6.0
dj-database-url==0.4.1
whitenoise==3.2.1
psycopg2==2.6.2
django-widget-tweaks==1.4.1
django-cors-middleware==1.3.1
django-extensions==1.7.4
channels==0.17.2
celery==3.1.24
django-channels-presence==0.0.7
django-phonenumber-field==1.1.0
django-suit==0.2.21
twilio==6.0.0rc12
xlrd==1.0.0
django-s3-storage==0.9.11
pydub==0.16.7
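One possibility worth testing (an assumption based on the warning, not confirmed in this thread): requirements.txt pins channels but leaves daphne unpinned, so a daphne release with the newer Server signature may have been pulled in during the migration. Pinning daphne explicitly next to channels would rule that out, e.g.:

```
channels==0.17.2
daphne<1.0   # hypothetical pin: stay on the pre-1.0 Server keyword-argument API
```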
Hi there,
Just wondering whether Daphne correctly handles routing of responses in an environment where multiple instances of Daphne may be servicing requests/responses simultaneously against a single channel layer? I was hoping to be able to have multiple Daphne instances running behind Nginx in order to have fail-overs and load balancing in situations where a lot of data is being returned. Each of the Daphne instances would be connected to the same pool of workers behind a Redis server.
Thanks!
Luke
Without SSL support, usefulness in production is limited.
Daphne uses the incoming connection's address and port to determine the client_addr, which obviously doesn't exist when listening on a unix socket. When listening on a unix socket we can assume that there's a reverse proxy in front of Daphne, so we can check for the "X-Forwarded-For" and "X-Forwarded-Port" headers and log those, if they exist.
Also, I'm not sure it's useful to log client ports. It's almost certainly going to be a high random port in both cases. Looking at other servers for reference, neither apache nor nginx seems to log the client port by default.
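The proposed fallback can be sketched like this (names are illustrative, not Daphne's actual code): when the transport has no host/port, derive the client address from the forwarding headers set by the reverse proxy.

```python
# Hypothetical sketch of the proposed fallback: when the transport has no
# host/port (e.g. a unix socket), derive the client address from the
# X-Forwarded-For / X-Forwarded-Port headers set by the reverse proxy.
def client_addr(headers, peer_host=None, peer_port=None):
    """headers: dict mapping lower-cased header names to values."""
    forwarded_for = headers.get("x-forwarded-for")
    if peer_host is None and forwarded_for:
        # The left-most entry is the original client address.
        host = forwarded_for.split(",")[0].strip()
        port = int(headers.get("x-forwarded-port", 0)) or None
        return [host, port]
    if peer_host is not None:
        return [peer_host, peer_port]
    return None

print(client_addr({"x-forwarded-for": "203.0.113.7", "x-forwarded-port": "443"}))
# ['203.0.113.7', 443]
```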
How does one run multiple instances of daphne? Is there some parameter, like in gunicorn, to set the number of workers?
If I want 3 workers, do I just run multiple instances:
daphne -b 127.0.0.1 -p 8001 restapi.asgi:channel_layer
daphne -b 127.0.0.1 -p 8001 restapi.asgi:channel_layer
daphne -b 127.0.0.1 -p 8001 restapi.asgi:channel_layer
This seems strange, since they all bind to the same port?
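For context (general TCP networking, not daphne-specific): two plain TCP listeners cannot bind the same address/port, so manually run instances would each need a distinct port (with nginx balancing across them), along the lines of:

```
daphne -b 127.0.0.1 -p 8001 restapi.asgi:channel_layer
daphne -b 127.0.0.1 -p 8002 restapi.asgi:channel_layer
daphne -b 127.0.0.1 -p 8003 restapi.asgi:channel_layer
```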
Hi,
I just tested daphne with --proxy-headers behind an nginx server, and got the following exception when my browser initiates the websocket connection:
2017-02-07 15:13:43,115 ERROR Traceback (most recent call last):
File "/path/ve/local/lib/python2.7/site-packages/daphne/ws_protocol.py", line 62, in onConnect
self.requestHeaders,
AttributeError: 'WebSocketProtocol' object has no attribute 'requestHeaders'
I'm using the channels-examples/databinding app with the following virtualenv content:
appdirs==1.4.0
asgi-redis==1.0.0
asgiref==1.0.0
autobahn==0.17.1
channels==1.0.3
constantly==15.1.0
daphne==1.0.2
Django==1.10.5
incremental==16.10.1
msgpack-python==0.4.8
packaging==16.8
pyparsing==2.1.10
redis==2.10.5
six==1.10.0
Twisted==16.6.0
txaio==2.6.0
uWSGI==2.0.14
zope.interface==4.3.3
Is it a version compatibility issue, or maybe a bug?
Thanks !
When Daphne receives SIGTERM it should call close on all websockets, so that the clients get the right error code (and can reconnect immediately) and so websocket.disconnect is sent for these sockets.
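A sketch of the requested shutdown behaviour (names are hypothetical, not Daphne's internals): track open websocket protocols and, on SIGTERM, close each with close code 1012 ("service restart") so clients reconnect immediately.

```python
import signal

# Hypothetical sketch: track open websocket protocol objects and, on
# SIGTERM, close each with code 1012 ("service restart") so clients can
# reconnect immediately and a websocket.disconnect message is sent per
# socket. sendClose() mirrors the Autobahn-style close method.
open_sockets = set()  # populated as connections are accepted

def close_all_websockets(code=1012):
    for sock in list(open_sockets):
        sock.sendClose(code=code)  # closing also triggers the disconnect message
        open_sockets.discard(sock)

def handle_sigterm(signum, frame):
    close_all_websockets()

signal.signal(signal.SIGTERM, handle_sigterm)
```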
What limitations exist for running daphne behind a reverse proxy such as:
Can daphne be used as an (almost) drop in replacement for e.g. gunicorn?
We tried to start the server with
daphne -u ../daphne.sock asgi:channel_layer
and put this in our nginx config:
location /socket/ {
uwsgi_pass unix:/srv/platform/daphne.sock;
include uwsgi_params;
}
but Daphne did not respond and nginx returned a 404 after its timeout.
Did we mess up, or is there something missing?
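One possible explanation (an assumption, not confirmed in this thread): uwsgi_pass speaks the binary uwsgi protocol, while Daphne is a plain HTTP server, so nginx would need the proxy_pass form for a unix socket instead, roughly:

```
location /socket/ {
    proxy_pass http://unix:/srv/platform/daphne.sock;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```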
I'm trying to create an init script in order to manage the lifecycle of daphne; actually I'm using this jinja template for it:
#!/bin/sh
### BEGIN INIT INFO
# Provides: websocketd
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Put a short description of the service here
# Description: Put a long description of the service here
### END INIT INFO
# Change the next 3 lines to suit where you install your script and what you want to call it
DIR={{ site_web_root }}/app
DAEMON="../.virtualenv/bin/daphne"
DAEMON_NAME=awsgi
# Add any command line options for your daemon here
DAEMON_OPTS="--port 8000 --bind 127.0.0.1 {{ project_name}}.asgi:channel_layer"
# This next line determines what user the script runs as.
DAEMON_USER={{ webapp_username }}
# The process ID of the script when it runs is stored here:
PIDFILE=/var/run/$DAEMON_NAME.pid
. /lib/lsb/init-functions
do_start () {
log_daemon_msg "Starting system $DAEMON_NAME daemon"
start-stop-daemon --start --chdir $DIR --background --no-close --retry 1 --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER:$DAEMON_USER --startas $DAEMON -- $DAEMON_OPTS
sleep 3
log_end_msg $?
}
do_stop () {
log_daemon_msg "Stopping system $DAEMON_NAME daemon"
start-stop-daemon --stop --pidfile $PIDFILE --retry 10
sleep 3
log_end_msg $?
}
case "$1" in
start|stop)
do_${1}
;;
restart|reload|force-reload)
do_stop
do_start
;;
status)
status_of_proc "$DAEMON_NAME" "$DAEMON" && exit 0 || exit $?
;;
*)
echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|restart|status}"
exit 1
;;
esac
exit 0
The problem is that with the --no-close option of start-stop-daemon (inside do_start()) the requests hang. The strange thing is that if I start the script with a user that stays logged in, all is fine; when the user disconnects from the system, the connection hangs.
I don't know if this is expected or if I'm missing something. By the way, is there a way to debug problems like this when they happen?
I have a Twisted Looping call that pumps out the current time (from a Django app).
If I set that Looping call so it's sending out a bad key (before it's ever called), then load the page that causes that Looping call to start, the first time it tries to send, I get this:
[2016/09/11 16:21:27] WebSocket CONNECT /liveblog/ssteinerx-live-blog/stream/ [127.0.0.1:50558]
('Trying to create channel for:', u'ssteinerx-live-blog')
[2016/09/11 16:21:27] WebSocket CONNECT /update_time/stream/ [127.0.0.1:50560]
('Adding update_time group...', <channels.channel.Channel object at 0x000000010a7ca330>)
{'time': '2016-09-11 16:21:27.696304'}
LoopingCall started...
No handlers could be found for logger "daphne.server"
{'time': '2016-09-11 16:21:28.697522'}
This is 100% repeatable. Yay. If the logger is set up and ready to roll on the next bad call, I don't know, but the "No handlers..." message does not repeat.
There is no handler for the logger's error() call.
So, this:
Python 2.7.10 (0e2d9a73f5a1818d0245d75daccdbe21b2d5c3ef, Sep 07 2016, 19:23:10)
[PyPy 5.4.1 with GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>> import logging
>>>> l = logging.getLogger(__name__)
>>>> l.debug("Debugooy")
>>>> l.info("Infoooy")
>>>> l.warn("Warnooy")
>>>> l.error("Ahfooey")
No handlers could be found for logger "__main__"
The reason the message about the handler is coming out at all is explained in the logging documentation.
The error() call needs a handler configured with a destination for the logger.error in server.py (and probably elsewhere as well). Not sure how you want it handled, but it has to be explicitly handled. The error message only appears once, but I'm pretty sure the rest of those logger.error() calls are just falling on the floor, so they need to be hooked up to something. Who knew?
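For anyone hitting the same symptom, a minimal way to make the dropped error() calls visible is to give the "daphne.server" logger a handler explicitly (a sketch; a real deployment would more likely configure this via Django's LOGGING setting):

```python
import logging

# Minimal sketch: attach a handler and level to the "daphne.server" logger
# so its error() calls reach a destination instead of producing
# "No handlers could be found for logger" and then vanishing.
logger = logging.getLogger("daphne.server")
logger.setLevel(logging.ERROR)
logger.addHandler(logging.StreamHandler())
logger.error("this now reaches a handler instead of vanishing")
```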
While developing, I bulk-updated all my requirements; Twisted updated to 17.1.0 and stopped answering requests. After downgrading Twisted to 16.6.0, everything went back to normal.
I hope I filed the bug in the right place.
Inspired by a chat with @meshy about how to run multiple Django sites using a single channels deployment.
A logical thing for smaller operators who run multiple sites to want to do is to deploy multiple Django sites but to share as much as is possible to do to reduce the number of moving pieces in the system.
Clearly, each worker needs to be different (as they literally run different Python code!), so we cannot reduce the redundancy there, but it would be nice to be able to reduce redundancy at the channel and server layer. Exposing multiple virtual channel layers on top of a single logical channel infrastructure is something that I'm not well-suited to opine about (prefix all the channel names, maybe? I dunno), but doing this at the server layer is super obvious: virtual hosting! Let's talk about this here.
In essence, it would be nice to be able to deploy daphne something like this: daphne -b 0.0.0.0 -p 8001 google.com:django_project1.asgi:channel_layer twistedmatrix.com:django_project2.asgi:channel_layer. Essentially, to be able to tell Daphne "please serve multiple sites from the same IP/port combination in the same process".
There are some questions about the Twisted support for this in the mainline or whether you'd need custom or third-party code (certainly for TLS-based sites you'd need txsni for sure), but it seems like a worthy extension. This would allow you to achieve efficient use of machine resources and reduce redundancy.
Thoughts?
We have 7 workers running in production. We see a linear time increase as the number of requests per second increases.
If there are X workers, does that mean only X requests can be handled at a time, and all other requests will be queued, which in turn will increase the response time?
Hi,
I'm having issues running daphne as a unit in systemd.
It works great when executed interactively.
If anyone has input, it would be highly appreciated.
[Unit]
Description=Daphne Web Server
After=network.target
[Service]
Type=simple
User=daphne
Group=daphne
WorkingDirectory=/project
ExecStart=/project/bin/daphne -b 0.0.0.0 -p 8080 --root-path=/project asgi:channel_layer
Restart=always
[Install]
WantedBy=multi-user.target
It starts fine, but when run via systemd all I get is a 404.
When launching daphne 0.11.3 with --unix-socket=/tmp/daphne.sock, I get an error. TCP works great though.
2016-05-03 20:12:14,143 INFO Starting server at /tmp/daphne.sock, channel layer app.asgi:channel_layer
Unhandled Error
Traceback (most recent call last):
File "/app/lib/python2.7/site-packages/twisted/python/log.py", line 101, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/app/lib/python2.7/site-packages/twisted/python/log.py", line 84, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/app/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/app/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
return func(*args,**kw)
--- <exception caught here> ---
File "/app/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 597, in _doReadOrWrite
why = selectable.doRead()
File "/app/lib/python2.7/site-packages/twisted/internet/unix.py", line 189, in doRead
return self._dataReceived(data)
File "/app/lib/python2.7/site-packages/twisted/internet/tcp.py", line 215, in _dataReceived
rval = self.protocol.dataReceived(data)
File "/app/lib/python2.7/site-packages/twisted/protocols/basic.py", line 571, in dataReceived
why = self.lineReceived(line)
File "/app/lib/python2.7/site-packages/twisted/web/http.py", line 1688, in lineReceived
self.allContentReceived()
File "/app/lib/python2.7/site-packages/twisted/web/http.py", line 1767, in allContentReceived
req.requestReceived(command, path, version)
File "/app/lib/python2.7/site-packages/twisted/web/http.py", line 768, in requestReceived
self.process()
File "/app/lib/python2.7/site-packages/daphne/http_protocol.py", line 117, in process
"client": [self.client.host, self.client.port],
exceptions.AttributeError: 'UNIXAddress' object has no attribute 'host'
Our site just went down a few times, because apparently some bots are requesting our site with illegal unicode characters in the path. The bad part is that daphne shows an error and stops serving, but the process stays up, so supervisorctl does not restart the task.
Test case:
daphne project.asgi:channel_layer -b 0.0.0.0 -p 8000
Run the following curl. The three dots are a single Unicode character:
curl http://localhost:8000/…/test
After the request, daphne shows an error and stops serving, but the process stays up, and if you try to load a url it will hang and never show anything until it times out:
Unhandled Error
Traceback (most recent call last):
File "/Users/darklow/www/30sec/env/lib/python2.7/site-packages/daphne/cli.py", line 133, in run
ws_protocols=args.ws_protocols,
File "/Users/darklow/www/30sec/env/lib/python2.7/site-packages/daphne/server.py", line 52, in run
reactor.run(installSignalHandlers=self.signal_handlers)
File "/Users/darklow/www/30sec/env/lib/python2.7/site-packages/twisted/internet/base.py", line 1194, in run
self.mainLoop()
File "/Users/darklow/www/30sec/env/lib/python2.7/site-packages/twisted/internet/base.py", line 1203, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "/Users/darklow/www/30sec/env/lib/python2.7/site-packages/twisted/internet/base.py", line 825, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/Users/darklow/www/30sec/env/lib/python2.7/site-packages/daphne/server.py", line 72, in backend_reader
self.factory.dispatch_reply(channel, message)
File "/Users/darklow/www/30sec/env/lib/python2.7/site-packages/daphne/http_protocol.py", line 269, in dispatch_reply
self.reply_protocols[channel].serverResponse(message)
File "/Users/darklow/www/30sec/env/lib/python2.7/site-packages/daphne/http_protocol.py", line 199, in serverResponse
"path": self.path.decode("ascii"),
exceptions.UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 1: ordinal not in range(128)
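The failing decode can be reproduced in isolation: the raw request path is UTF-8-encoded bytes, and decoding it as ASCII raises as soon as a multi-byte character (here the ellipsis, U+2026) appears.

```python
# Reproduction of the decode failure in the traceback above: the ellipsis
# encodes to the bytes e2 80 a6, so byte 0xe2 at position 1 breaks an
# ASCII decode of the path.
path = "/…/test".encode("utf8")
try:
    path.decode("ascii")
except UnicodeDecodeError as exc:
    print(exc)  # 'ascii' codec can't decode byte 0xe2 in position 1 ...
# Decoding as UTF-8 (or percent-encoding the path first) avoids the crash:
assert path.decode("utf8") == "/…/test"
```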
With an incorrect nginx setup (one that does not include proxy_set_header Connection "upgrade";), Daphne crashes, resulting in a client-side 502 Bad Gateway.
When the log statement at daphne/http_protocol.py:107 is removed, Daphne no longer crashes and instead returns a client-side 400 "HTTP Connection headers do not include 'upgrade' value (case-insensitive): close", which is more helpful in determining what is wrong with the setup.
Stacktrace:
2016-06-26 22:02:12,500 ERROR Traceback (most recent call last):
File "<...>/virtual/lib/python3.4/site-packages/daphne/http_protocol.py", line 107, in process
logger.debug("Upgraded connection %s to WebSocket %s", self.reply_channel, protocol.reply_channel)
AttributeError: 'WebSocketProtocol' object has no attribute 'reply_channel'
Unhandled Error
Traceback (most recent call last):
File "<...>/virtual/lib/python3.4/site-packages/twisted/python/log.py", line 101, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "<...>/virtual/lib/python3.4/site-packages/twisted/python/log.py", line 84, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "<...>/virtual/lib/python3.4/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "<...>/virtual/lib/python3.4/site-packages/twisted/python/context.py", line 81, in callWithContext
return func(*args,**kw)
--- <exception caught here> ---
File "<...>/virtual/lib/python3.4/site-packages/twisted/internet/posixbase.py", line 597, in _doReadOrWrite
why = selectable.doRead()
File "<...>/virtual/lib/python3.4/site-packages/twisted/internet/tcp.py", line 209, in doRead
return self._dataReceived(data)
File "<...>/virtual/lib/python3.4/site-packages/twisted/internet/tcp.py", line 215, in _dataReceived
rval = self.protocol.dataReceived(data)
File "<...>/virtual/lib/python3.4/site-packages/twisted/protocols/basic.py", line 571, in dataReceived
why = self.lineReceived(line)
File "<...>/virtual/lib/python3.4/site-packages/twisted/web/http.py", line 1692, in lineReceived
self.allContentReceived()
File "<...>/virtual/lib/python3.4/site-packages/twisted/web/http.py", line 1781, in allContentReceived
req.requestReceived(command, path, version)
File "<...>/virtual/lib/python3.4/site-packages/twisted/web/http.py", line 768, in requestReceived
self.process()
File "<...>/virtual/lib/python3.4/site-packages/daphne/http_protocol.py", line 147, in process
self.basic_error(500, b"Internal Server Error", "HTTP processing error")
File "<...>/virtual/lib/python3.4/site-packages/daphne/http_protocol.py", line 256, in basic_error
}).encode("utf8"),
File "<...>/virtual/lib/python3.4/site-packages/daphne/http_protocol.py", line 216, in serverResponse
http.Request.write(self, message['content'])
File "<...>/virtual/lib/python3.4/site-packages/twisted/web/http.py", line 951, in write
self.transport.writeSequence(l)
builtins.AttributeError: 'NoneType' object has no attribute 'writeSequence'
Several people have now reported Daphne being unresponsive to new requests after an error in a handling thread but only under Python 2.
How do I resolve this error: AttributeError: 'module' object has no attribute 'OP_NO_TLSv1_1'?
Hello,
I tried to run daphne using the following command:
daphne -e ssl:443:privateKey=my.key:certKey=my.crt app.asgi:channel_layer
I'm getting the following error:
usage: daphne [-h] [-p PORT] [-b HOST] [-u UNIX_SOCKET] [--fd FILE_DESCRIPTOR]
[-v VERBOSITY] [-t HTTP_TIMEOUT] [--access-log ACCESS_LOG]
[--ping-interval PING_INTERVAL] [--ping-timeout PING_TIMEOUT]
[--ws-protocol [WS_PROTOCOLS [WS_PROTOCOLS ...]]]
[--root-path ROOT_PATH]
channel_layer
daphne: error: unrecognized arguments: -e webapp.asgi:channel_layer
pip list
shows the following installed packages:
asgi-redis (1.0.0)
asgiref (1.0.0)
autobahn (0.17.1)
channels (0.17.3)
constantly (15.1.0)
daphne (0.15.0)
Django (1.10.4)
django-sslserver (0.19)
djangorestframework (3.5.3)
incremental (16.10.1)
msgpack-python (0.4.8)
Pillow (3.4.2)
pip (9.0.1)
redis (2.10.5)
reportlab (3.3.0)
setuptools (31.0.0)
six (1.10.0)
Twisted (16.6.0)
txaio (2.6.0)
wheel (0.30.0a0)
zope.interface (4.3.3)
Everything works fine without ssl.
I tried switching an existing django project, hosted by gunicorn behind an nginx proxy, from gunicorn to daphne. The relevant part of the nginx configuration is:
location /webapi/ {
    proxy_pass http://127.0.0.1:7777/webapi/;
    proxy_set_header SCRIPT_NAME /webapi;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Origin "";
}
With gunicorn, url resolution works as expected and /webapi is stripped from the path before resolution takes place. Under daphne, the script name isn't stripped from the path, so url resolution stopped working.
If I change the proxy_pass entry to proxy_pass http://127.0.0.1:7777/, url resolution works, but url reversing in django delivers wrong results.
Is daphne using a different header for the SCRIPT_NAME than gunicorn?
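For what it's worth, the daphne usage output quoted elsewhere on this page lists a --root-path option; a plausible equivalent of gunicorn's SCRIPT_NAME header (an assumption, not verified against this daphne version) would be to pass the prefix on the command line instead of via a header:

```
daphne --root-path=/webapi -b 127.0.0.1 -p 7777 project.asgi:channel_layer
```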
I'm opening this issue to document what was originally posted to the Django Channels project but after investigation deemed related to Daphne itself:
django/channels#392
In short, when running a Cordova/PhoneGap app on Android, the app runs within an embedded browser in a web view. When initiating a Websocket connection to Daphne, the browser sends an Origin: file:// header on the initial request, which gets refused by Daphne.
Investigation uncovered that Autobahn (the underlying Websocket library used by Daphne) refuses the connection because the value file:// is considered invalid - more precisely, it is considered as if no origin was provided.
Autobahn does allow Origin: file:// to be accepted, but for that to happen the allowNullOrigin protocol configuration option must be set to True.
As suggested by @andrewgodwin on the original issue, I'm going to work on a change to Daphne that will enable that flag by default, and pass the Origin header value to Channels in the websocket.connect message headers so that the application can decide whether or not a specific origin is allowed.
When Daphne is embedded as a subthread inside something else (such as channels' runserver) it idles very hot, using a full CPU core on my machine, even with a Twisted-native ASGI backend.
It looks like the Twisted reactor might be the cause, but I can't be sure yet.
Currently, there are only a few tests for ASGI encoding and decoding to/from HTTP (https://github.com/django/daphne/blob/master/daphne/tests/test_http.py), and none at all for WebSockets. In order to make sure we keep to the spec, it would be good to have full coverage for both protocols.
Not sure how or why I'm getting this error, but I'm developing a project set up under Docker. I realized that sometimes when I try to load the page, my images and fonts take a while to load; then I saw this error in the console. I was developing fine before, and I'm not exactly sure what changed.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/channels/management/commands/runserver.py", line 82, in inner_run
http_timeout=60, # Shorter timeout than normal as it's dev
File "/usr/local/lib/python2.7/site-packages/daphne/server.py", line 49, in run
reactor.run(installSignalHandlers=self.signal_handlers)
File "/usr/local/lib/python2.7/site-packages/twisted/internet/base.py", line 1194, in run
self.mainLoop()
File "/usr/local/lib/python2.7/site-packages/twisted/internet/base.py", line 1203, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "/usr/local/lib/python2.7/site-packages/twisted/internet/base.py", line 825, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/usr/local/lib/python2.7/site-packages/daphne/server.py", line 69, in backend_reader
self.factory.dispatch_reply(channel, message)
File "/usr/local/lib/python2.7/site-packages/daphne/http_protocol.py", line 244, in dispatch_reply
self.reply_protocols[channel].serverResponse(message)
File "/usr/local/lib/python2.7/site-packages/daphne/http_protocol.py", line 171, in serverResponse
self.finish()
File "/usr/local/lib/python2.7/site-packages/daphne/http_protocol.py", line 144, in finish
self.send_disconnect()
File "/usr/local/lib/python2.7/site-packages/daphne/http_protocol.py", line 126, in send_disconnect
"reply_channel": self.reply_channel,
File "/usr/local/lib/python2.7/site-packages/asgiref/inmemory.py", line 50, in send
raise self.ChannelFull(channel)
It would be nice for Daphne to support an endpoint that would allow a health check on itself and the channel layer subsystem, with no need for active workers. Maybe send a message on a predefined channel and check if it receives it?
Right now, when we launch the Daphne component, there's no URL that I know of that we can query to check whether just the Daphne component is healthy. We need active workers to respond to requests, and so it's harder to diagnose a problem.
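The probe the issue proposes can be sketched against the ASGI v1 channel layer interface (send / receive_many are the real method names; the InMemoryLayer below is just a self-contained stand-in for illustration):

```python
from collections import defaultdict, deque

# Minimal stand-in for an ASGI v1 channel layer, for demonstration only.
class InMemoryLayer:
    def __init__(self):
        self.queues = defaultdict(deque)

    def send(self, channel, message):
        self.queues[channel].append(message)

    def receive_many(self, channels, block=False):
        for channel in channels:
            if self.queues[channel]:
                return channel, self.queues[channel].popleft()
        return None, None

# Sketch of the proposed health check: push a probe message onto a reserved
# channel and verify it can be read back, proving the layer round-trips
# without needing any workers.
def channel_layer_healthy(layer, probe_channel="__healthcheck__"):
    layer.send(probe_channel, {"ping": True})
    _, message = layer.receive_many([probe_channel])
    return bool(message and message.get("ping"))

print(channel_layer_healthy(InMemoryLayer()))  # True
```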
Our initial configuration: 3 daphne dynos and 6 workers with 1/2 GB of memory each.
We saw a sudden spike in memory usage (198%) and had to increase the memory size to 1 GB. We're not able to understand why the workers needed more than 1/2 GB of memory.
What factors should we consider when defining the amount of memory required for each worker?
I saw an article on Heroku about Gunicorn: https://devcenter.heroku.com/articles/optimizing-dyno-usage#basic-methodology-for-optimizing-memory
I couldn't find the same for daphne.
We need to hook up an onPong handler to a time variable and then force-close the socket if we don't get a pong for a while (tie it to ping_interval somehow).
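The idea can be sketched as a small tracker (names are hypothetical, not Daphne's code): record the time of the last pong and treat the connection as dead once it is older than a few ping intervals.

```python
import time

# Hypothetical sketch of the proposed liveness check: an onPong handler
# stamps last_pong, and a periodic check force-closes the socket once no
# pong has arrived within max_missed ping intervals.
class PongTracker:
    def __init__(self, ping_interval=20, max_missed=2):
        self.ping_interval = ping_interval
        self.max_missed = max_missed
        self.last_pong = time.time()

    def on_pong(self):
        # Called from the websocket's onPong handler.
        self.last_pong = time.time()

    def is_stale(self, now=None):
        # True once the silence exceeds max_missed ping intervals.
        now = time.time() if now is None else now
        return now - self.last_pong > self.ping_interval * self.max_missed
```

A server loop would poll is_stale() at roughly the ping interval and close any connection that reports stale.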
The symptom is that the websocket.receive channel never gets anything sent to it when people send in websocket frames.
What's happening, as far as Daphne is concerned, is that onMessage never gets called on the WebSocketProtocol class. I've tested on Python 2 and 3, and neither works. Reverting to 16.2 fixes it, so I've pinned Daphne to that for now in the latest patch release while this is investigated.
Twisted seems to support systemd socket activation: https://twistedmatrix.com/documents/13.2.0/core/howto/systemd.html
I was not able to understand the twisted documentation or source code, so I could not create a pull request.
It would be nice if daphne supported this, because socket activation is the best way (that I know of) to start systemd units in parallel.
This would replace this code (https://github.com/andrewgodwin/daphne/blob/master/daphne/server.py#L46) with a user passing a strports description on the command line, which is fed into http://twistedmatrix.com/documents/current/api/twisted.internet.endpoints.html#serverFromString. For example, to support unix sockets it would be unix:/var/run/finger; for TCP it'd be tcp:8000; for basic TLS it would be ssl:443:privateKey=key.pem:certKey=crt.pem; for txsni (TLS with SNI) it would be txsni:certificates:tcp:443; and for automatic, turn-key TLS+SNI it would be le:/srv/www/certs:tcp:443 (https://github.com/mithrandi/txacme).
I'm happy to implement this, but I thought I'd file an issue since it's been on my mind for so long :)
Daphne has an option to log requests to a file. That file is opened using the default system buffering and is never flushed to disk, so log entries are appended to the log file only when that buffer fills up, or when Daphne restarts.
The file should probably be line-buffered, since log entries should appear as soon as they happen.
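A sketch of the fix: opening the log file with buffering=1 selects line buffering for text-mode files, so each entry is flushed to disk as soon as its newline is written (the path below is a throwaway temp file for illustration):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "daphne_access.log")
log = open(path, "a", buffering=1)  # buffering=1 → line-buffered text mode
log.write("GET /healthz 200\n")     # flushed at the newline, no close needed

# A second reader already sees the entry, even though `log` is still open:
with open(path) as f:
    print(f.read().strip())  # → GET /healthz 200
log.close()
```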
I experience strange problems when I connect a lot of websocket connections at (nearly) the same time to one daphne instance.
I wrote this Go program to connect a specific number of websocket clients to daphne. Then I started the liveblog example with python manage.py runserver, created a blog and pointed the Go script at it:
./testWsConnections -url ws://localhost:8000/liveblog/foo/stream/ --clients 400
Of course, there are a lot of 503 responses (channel layer full). In this case the program ignores the error and retries connecting after 100 ms. But there are many cases in which the connection is closed without any response. The program counts these cases and retries connecting after 100 ms. In the end, all clients are connected and all clients receive messages when the liveblog changes.
When I try to connect 200 clients, it takes around 2 seconds and no connections are lost.
When I try to connect 400 clients, it takes around 5 seconds and only a few connections are lost.
When I try to connect 600 clients, it takes around 10 seconds and around 70 connections are lost.
When I try to connect 800 clients, it takes around 60 seconds and around 200 connections are lost.
Is it a bug that daphne closes a websocket handshake without a response? Why does the time to connect the clients increase exponentially?
The lost connections only happen when there are a lot of unhandled websocket handshakes at the same time. When I connect 100 clients at a time and wait until all are connected before connecting the next 100, it takes only 14 seconds until all 800 clients are connected and no connections are lost.
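One client-side mitigation for this kind of thundering herd (the Go program retries at a fixed 100 ms) is exponential backoff, which spreads reconnect attempts out over time. This is a generic sketch, not part of daphne or the test program:

```python
def backoff_delays(base=0.1, factor=2.0, cap=5.0, attempts=6):
    """Yield successive retry delays: base, base*factor, ... capped at cap."""
    delay = base
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

print([round(d, 2) for d in backoff_delays()])  # → [0.1, 0.2, 0.4, 0.8, 1.6, 3.2]
```

Adding random jitter to each delay would desynchronize the clients further.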
In the current master, ws_protocol.WebSocketProtocol.onOpen has the following code:
try:
    self.channel_layer.send("websocket.connect", self.request_info)
except self.channel_layer.ChannelFull:
    # We don't drop the connection here as you don't _have_ to consume websocket.connect
    pass
This means that, if (for any reason) the channel layer is full, the websocket.connect consumer is not called for this connection. But the only way (that I know of) to track a websocket connection is to write some code like this:
channel_routing = [
    route("websocket.connect", ws_add),
]

# Connected to websocket.connect
def ws_add(message):
    Group("chat").add(message.reply_channel)
So if ChannelFull is raised when a connection is created, there is an open websocket connection which the Django app does not know of and will probably never send any message to. The client has no way of knowing that it has an untracked websocket connection.
Therefore I would suggest removing the try/except block in the code above. Then, if ChannelFull is raised, the websocket connection is closed, so the client can try to open a new connection, which may have more luck, or show a warning to the user.
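The proposed behaviour can be sketched with a stand-in protocol and an always-full layer (the class names and the 1013 close code are illustrative, not Daphne's actual code): on ChannelFull the connection is closed so the client can retry, instead of lingering untracked.

```python
class ChannelFull(Exception):
    """Stand-in for the channel layer's ChannelFull exception."""

class FullLayer:
    """A fake channel layer that is always full."""
    ChannelFull = ChannelFull

    def send(self, channel, message):
        raise self.ChannelFull(channel)

class FakeProtocol:
    """Stand-in for WebSocketProtocol; only models the onOpen change."""

    def __init__(self, layer):
        self.channel_layer = layer
        self.closed_with = None

    def onOpen(self):
        try:
            self.channel_layer.send("websocket.connect", {"path": "/"})
        except self.channel_layer.ChannelFull:
            # Proposed change: close with 1013 "Try Again Later" so the
            # client knows to retry, instead of leaving an untracked socket.
            self.sendClose(code=1013)

    def sendClose(self, code):
        self.closed_with = code

proto = FakeProtocol(FullLayer())
proto.onOpen()
print(proto.closed_with)  # → 1013
```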
For example, when I ask the server for a URL like http://localhost:8000/api/social/player/rooms/1/messages?limit=10, the console prints it as [2016/10/04 12:25:19] HTTP GET /api/social/player/rooms/1/messages 200 [0.25, 127.0.0.1:57101], dropping the query string, which is quite inconvenient and makes debugging more difficult.
Could you add a suggestThreadPoolSize option for the server?
Or, if you could, call reactor.getThreadPool().adjustPoolsize(minthreads=some_param1, maxthreads=some_param2)?
Not sure if Twisted does this for us, but we should respond with 413 Entity Too Large if a header is more than about 16k.
Everything works under channels==1.0.1 and daphne 1.0.1
However, when I tried updating channels to 1.0.3 and daphne to 1.0.3, the app no longer works; all I see in the console, repeating every 5 seconds, is:
2017-02-14 17:37:35,850 - ERROR - server - Error at trying to receive messages: 'RedisChannelLayer' object has no attribute 'receive'
So I narrowed down the versions: even if I stay on channels 1.0.1, as soon as I upgrade to daphne 1.0.2, no matter what the channels version, I get the same error and the whole Django app doesn't even start. The last working daphne is 1.0.1. I also tried flushing the redis db; it didn't change anything.
Mac, Python 2.7.10
Twisted==17.1.0 (Also tried Twisted 16.6.0)
django-redis==4.3.0
redis==2.10.5
I've recently upgraded Channels to the latest version, and found that WebSocket broke:
2017-02-10 22:22:47,341 ERROR Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/daphne/ws_protocol.py", line 62, in onConnect
self.requestHeaders,
AttributeError: 'WebSocketProtocol' object has no attribute 'requestHeaders'
You can try this out: curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" -H 'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: XXXXXXXXXXXXXXXXXXXXXX==' https://tmstreamlabs.cupco.de. Note that this URL is not in the routing configuration, so there is no application code. Valid endpoints also have this problem.
It would be great if daphne had a feature to auto-reload whenever code changes, similar to what gunicorn does with its reload flag.
I know runworker is an option, but this would allow a much easier deployment for me. Right now I have to give my development servers a separate run command, which creates a different environment and could cause problems.
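A minimal sketch of the mechanism behind gunicorn-style reload, assuming a simple polling approach: snapshot the watched files' mtimes and report which ones changed (a real implementation would run this in a loop and restart the server process on any change):

```python
import os
import tempfile
import time

def snapshot(paths):
    """Record the current modification time of every watched file."""
    return {p: os.stat(p).st_mtime for p in paths}

def changed(paths, previous):
    """Return the files whose mtime differs from the snapshot."""
    return [p for p in paths if os.stat(p).st_mtime != previous[p]]

# Demo: watch one temp file, then simulate an edit by bumping its mtime.
fd, path = tempfile.mkstemp(suffix=".py")
os.close(fd)
before = snapshot([path])
os.utime(path, (time.time() + 10, time.time() + 10))
print(changed([path], before))  # prints the "edited" file's path in a list
os.remove(path)
```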
When deploying my apps to production I usually build containers, and I was expecting to be able to separate the interface servers containers from the workers containers, since they have only the channel layer configuration in common. However, it seems Daphne relies exclusively on the channel layer configuration as can be found on the Django settings.
It seems to me that it should be possible to also pass the needed layer configuration to daphne through command-line parameters, and in this way have a completely independent package with no need to include the application code.
I am having outages with random delays of 1-5 minutes and can't get any more debug info than the following.
I am using one daphne process and a few runworker processes.
The problem is that there is not enough information about why a worker was suddenly terminated.
I am using Sentry to catch all Python exceptions, and when one appears I usually receive it on the Sentry dashboard, but in this case there is no error, just these lines in the log file:
Is there any way to get more info about these cases?
I tried enabling more verbosity, but it didn't help either.
127.0.0.1:40726 -2016-06-04 01:00:05,450 INFO exited: worker (terminated by SIGKILL; not expected)
2016-06-04 01:00:06,458 INFO spawned: 'worker' with pid 456
2016-06-04 01:00:07,460 INFO success: worker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-06-03 20:00:08,958 - INFO - runworker - Running worker against channel layer default (asgi_redis.RedisChannelLayer)
These are sequential rows from the consolidated log; both daphne and ./manage.py runworker log to the same output. It looks like runworker uses Django's timezone-aware dates while daphne doesn't, which is why the times differ.
I am running daphne and the workers under supervisor in Docker containers on Amazon ECS, which makes debugging a little more complicated too:
[program:daphne]
command = daphne thirtysec.asgi:channel_layer -b 0.0.0.0 -p 8000 -v2
[program:worker]
command = python manage.py runworker -v3