inhabitedtype / httpaf
A high performance, memory efficient, and scalable web server written in OCaml
License: Other
I noticed this because of a mistake I made while making a request to an httpaf server via curl.
When requesting something like curl -H "Host : foo.com" <url>, the server sends no response at all. I'd expect the error handler to be invoked, since the parser implementation indicates that spaces before the colon are considered parse failures.
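For context, RFC 7230 §3.2.4 requires a server to reject a request containing whitespace between a header field name and the colon with a 400 (Bad Request), rather than dropping the connection silently. A minimal sketch of such a check (hypothetical names, not httpaf's actual parser):

```ocaml
(* Hypothetical validity check: a header field name is invalid if it is
   empty or contains whitespace, which covers "Host : foo.com". *)
let valid_header_name name =
  String.length name > 0
  && not (String.contains name ' ')
  && not (String.contains name '\t')
```

A server that detects this condition would then route it to the error handler as a `Bad_request rather than closing without a response.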
Some care is necessary here, as only exceptions that originate from user code should exhibit this behavior. We probably need a special monitor in the async case.
I would like to know if you have any plan to provide a multipart parser, especially for POST requests with multipart/form-data content.
Hey @seliopou,
First of all, you probably noticed I've been quite active in this repository this past week. I wanted to take just a second to apologize for the sudden flood of incoming pull requests. Please don't feel any pressure to review them any time soon. I know how overwhelming it can be to maintain an open source project, and every new PR just puts more on your plate.
Secondly, I really love the work you've been doing with http/af, so thanks a lot for that. I've been getting my hands dirty with its internals too, and I feel like I have a little bit of a grasp on how http/af is architected and implemented.
I've been using it for a little while too, and I can't help thinking that you don't have enough time to tend to its maintenance, which is why I'd love to offer my time to help you maintain http/af going forward.
I'd be willing to take over whatever tasks you feel are the most time consuming for you such that http/af evolves to be the foundational HTTP library for the OCaml ecosystem.
Let me know how you feel about this.
If I compile the async_echo_post example code, I can reliably cause a segfault if I use siege to hit it with 10 concurrent requests.
The siege command I'm running is:
siege -c10 'http://localhost:9009 POST hi'
I start the server with:
jbuilder build examples/async_echo_post.bc
./_build/default/examples/async_echo_post.bc -p 9009
Eventually, it will output
Segmentation fault: 11
If a timeout occurs before the client has received the full response body, no error is triggered, despite the fact that the length of the returned body does not match the Content-Length header.
If a timeout occurs before the client has received the full response body, an error condition should be raised.
let set_error_and_handle t error =
  Reader.force_close t.reader;
  begin match !(t.state) with
  | Closed -> ()
  | Awaiting_response ->
    set_error_and_handle_without_shutdown t error;
  | Received_response(_, response_body) ->
    Body.close_reader response_body;
    Body.execute_read response_body;
    set_error_and_handle_without_shutdown t error;
  end
;;
The order of instructions here means the response body is closed before the error condition is raised, which triggers the EOF handler before the error is set.
Setting the error first fixes the issue in my testing:
let set_error_and_handle t error =
  Reader.force_close t.reader;
  begin match !(t.state) with
  | Closed -> ()
  | Awaiting_response ->
    set_error_and_handle_without_shutdown t error;
  | Received_response(_, response_body) ->
    set_error_and_handle_without_shutdown t error;
    Body.close_reader response_body;
    Body.execute_read response_body;
  end
;;
Might be related to #64, but I'm experiencing a segfault in lwt_echo_post compiled to a native binary.
Steps to reproduce:
curl -H"Expect:" -XPOST -d @very_big_file -o/dev/null http://0.0.0.0:8080/
Observed behavior:
#0 camlFaraday__shift_buffers_1564 () at lib/faraday.ml:371
#1 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#2 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#3 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#4 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#5 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#6 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
<...>
#29907 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#29908 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#29909 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#29932 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#29933 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#29934 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
<...>
#174595 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#174596 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#174597 0x000055555576ffee in camlFaraday__shift_buffers_1564 () at lib/faraday.ml:372
#174598 0x00005555557694f2 in camlHttpaf__Body__fun_1699 () at lib/faraday.ml:408
#174599 0x00005555557700ca in camlFaraday__shift_flushes_1569 () at lib/faraday.ml:402
#174600 0x00005555557377f6 in camlHttpaf_lwt_unix__fun_2310 () at lwt-unix/httpaf_lwt_unix.ml:168
#174601 0x0000555555756a8c in camlLwt__catch_24942 () at src/core/lwt.ml:2025
#174602 0x0000555555759345 in camlLwt__async_53201 () at src/core/lwt.ml:2463
#174603 0x00005555557690f1 in camlHttpaf__Body__do_execute_read_1452 () at lib/body.ml:115
#174604 0x000055555576a06b in camlHttpaf__Parse__fun_2248 () at lib/parse.ml:137
#174605 0x00005555557715f3 in camlAngstrom__Parser__succ$27_1437 () at lib/parser.ml:53
#174606 0x000055555576aaf9 in camlHttpaf__Parse__read_with_more_1808 () at lib/parse.ml:300
#174607 0x000055555576d252 in camlHttpaf__Server_connection__read_with_more_1824 () at lib/server_connection.ml:236
#174608 0x0000555555736c4b in camlHttpaf_lwt_unix__get_1168 () at lwt-unix/httpaf_lwt_unix.ml:66
#174609 0x00005555557374d5 in camlHttpaf_lwt_unix__fun_2241 () at lwt-unix/httpaf_lwt_unix.ml:133
#174610 0x0000555555755c4e in camlLwt__callback_13871 () at src/core/lwt.ml:1866
#174611 0x0000555555754748 in camlLwt__iter_callback_list_4529 () at src/core/lwt.ml:1209
#174612 0x00005555557548b9 in camlLwt__run_in_resolution_loop_4573 () at src/core/lwt.ml:1275
#174613 0x0000555555754a67 in camlLwt__resolve_4591 () at src/core/lwt.ml:1311
#174614 0x0000555555755c6f in camlLwt__callback_13871 () at src/core/lwt.ml:1880
#174615 0x0000555555754748 in camlLwt__iter_callback_list_4529 () at src/core/lwt.ml:1209
#174616 0x00005555557548b9 in camlLwt__run_in_resolution_loop_4573 () at src/core/lwt.ml:1275
#174617 0x0000555555754a67 in camlLwt__resolve_4591 () at src/core/lwt.ml:1311
#174618 0x0000555555756d0c in camlLwt__callback_24954 () at src/core/lwt.ml:2041
#174619 0x0000555555754748 in camlLwt__iter_callback_list_4529 () at src/core/lwt.ml:1209
#174620 0x00005555557548b9 in camlLwt__run_in_resolution_loop_4573 () at src/core/lwt.ml:1275
#174621 0x0000555555754a67 in camlLwt__resolve_4591 () at src/core/lwt.ml:1311
#174622 0x0000555555755c6f in camlLwt__callback_13871 () at src/core/lwt.ml:1880
#174623 0x0000555555754748 in camlLwt__iter_callback_list_4529 () at src/core/lwt.ml:1209
#174624 0x00005555557548b9 in camlLwt__run_in_resolution_loop_4573 () at src/core/lwt.ml:1275
#174625 0x0000555555754a67 in camlLwt__resolve_4591 () at src/core/lwt.ml:1311
#174626 0x0000555555754cbc in camlLwt__wakeup_general_4627 () at src/core/lwt.ml:1385
#174627 0x000055555575308d in camlLwt_sequence__loop_1061 () at src/core/lwt_sequence.ml:128
#174628 0x000055555575308d in camlLwt_sequence__loop_1061 () at src/core/lwt_sequence.ml:128
#174629 0x0000555555783881 in camlList__iter_1083 () at list.ml:100
#174630 0x0000555555739d24 in camlLwt_engine__fun_3023 () at src/unix/lwt_engine.ml:357
#174631 0x000055555573c34c in camlLwt_main__run_1134 () at src/unix/lwt_main.ml:33
#174632 0x00005555556d15b6 in camlLwt_echo_post__entry () at examples/lwt/lwt_echo_post.ml:32
#174633 0x00005555556ce5e9 in caml_program ()
#174634 0x00005555558060d0 in caml_start_program ()
#174635 0x00005555557eae15 in caml_startup_common (argv=0x7fffffffd728, pooling=<optimized out>, pooling@entry=0) at startup.c:156
#174636 0x00005555557eae7b in caml_startup_exn (argv=<optimized out>) at startup.c:161
#174637 caml_startup (argv=<optimized out>) at startup.c:166
#174638 0x00005555556cd85c in main (argc=<optimized out>, argv=<optimized out>) at main.c:44
The process probably ran out of stack. Is the shift_buffers function expected to be subject to tail-call optimization, or is the issue due to excessive buffering?
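For reference, OCaml only reuses the stack frame when the recursive call is in tail position; any pending work after the call (an addition, a `try`, a continuation) defeats the optimization and each recursion consumes a frame. A generic illustration, unrelated to faraday's actual shift_buffers:

```ocaml
(* Not tail-recursive: the addition happens after the recursive call
   returns, so every list element consumes a stack frame. *)
let rec sum_bad = function
  | [] -> 0
  | x :: xs -> x + sum_bad xs

(* Tail-recursive: the recursive call is the very last operation, so the
   compiler can reuse the current stack frame (an accumulator carries the
   pending work instead of the stack). *)
let rec sum_good acc = function
  | [] -> acc
  | x :: xs -> sum_good (acc + x) xs
```

A backtrace with hundreds of thousands of identical frames, as above, is the classic symptom of the non-tail-recursive shape on a large input.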
While running benchmarks for #53, I noticed that there were connections left after wrk2 was done. I think the issue is in the generic httpaf code, since the async benchmark also reports remaining connections with the same test. Here is an example output from the benchmark:
2018-06-21 09:27:38.670988+02:00 conns: 0
2018-06-21 09:27:39.174527+02:00 conns: 0
2018-06-21 09:27:39.674665+02:00 conns: 0
2018-06-21 09:27:40.174861+02:00 conns: 0
2018-06-21 09:27:40.675038+02:00 conns: 0
2018-06-21 09:27:41.220915+02:00 conns: 284
2018-06-21 09:27:41.721138+02:00 conns: 684
2018-06-21 09:27:41.908716+02:00 Error (monitor.ml.Error (Unix.Unix_error "Connection reset by peer" read "")
("Raised at file \"src/import0.ml\" (inlined), line 351, characters 22-32"
"Called from file \"src/result.ml\" (inlined), line 168, characters 17-26"
"Called from file \"src/raw_fd.ml\", line 272, characters 4-60"
"Called from file \"src/raw_fd.ml\", line 265, characters 10-26"
"Re-raised at file \"async/httpaf_async.ml\", line 72, characters 6-15"
"Called from file \"src/deferred0.ml\", line 61, characters 64-69"
"Called from file \"src/job_queue.ml\", line 159, characters 6-47"
"Caught by monitor (id 26)"))
2018-06-21 09:27:41.908801+02:00 Error (monitor.ml.Error (Unix.Unix_error "Connection reset by peer" read "")
("Raised at file \"src/import0.ml\" (inlined), line 351, characters 22-32"
"Called from file \"src/result.ml\" (inlined), line 168, characters 17-26"
"Called from file \"src/raw_fd.ml\", line 272, characters 4-60"
"Called from file \"src/raw_fd.ml\", line 265, characters 10-26"
"Re-raised at file \"async/httpaf_async.ml\", line 72, characters 6-15"
"Called from file \"src/deferred0.ml\", line 61, characters 64-69"
"Called from file \"src/job_queue.ml\", line 159, characters 6-47"
"Caught by monitor (id 20)"))
2018-06-21 09:27:42.221287+02:00 conns: 2
2018-06-21 09:27:42.721474+02:00 conns: 2
2018-06-21 09:27:43.221622+02:00 conns: 2
2018-06-21 09:27:43.721809+02:00 conns: 2
2018-06-21 09:27:44.221947+02:00 conns: 2
^C
My wrk2 command line for this was
../wrk2/wrk --rate 1K --connections 1K --timeout 5m --duration 1s --threads 4 --latency http://127.0.0.1:8080
Running this multiple times, usually leaving even more open connections, I can verify that the number of exceptions reported is always the same as the number of connections left.
So it looks like this library demonstrates a real weakness in cohttp and how to solve it. But what's the plan going forward? Many of the standard libraries in this domain still rely on cohttp. Should cohttp be patched with the idea in this library? Should everybody switch to using httpaf instead of cohttp? I'm just surprised that httpaf came out and nothing changed in the ecosystem.
Hey! httpaf looks great. I see there's an httpaf-async; what are the requirements for making an adapter for lwt? Or is there a reason that there cannot be an lwt adapter? If there is no blocker, I'd be interested in writing an adapter for lwt, but some guidance as to the best way to go about this would be appreciated 😄
Rather than allocating and GC-ing bigstrings for each request, maintain a pool of bigstrings from which a client or server connection can draw from. When the connection is done with the buffer, it can return the buffer to the pool.
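A minimal sketch of such a pool, using a free list. The names, the use of Bigstringaf buffers, and the unbounded free list are all illustrative assumptions, not a proposed httpaf API:

```ocaml
(* Hypothetical buffer pool: connections draw fixed-size bigstrings from a
   free list and return them when done, avoiding per-request allocation. *)
type pool =
  { size : int                         (* size of each pooled buffer *)
  ; mutable free : Bigstringaf.t list  (* buffers available for reuse *)
  }

let create ~size = { size; free = [] }

let acquire t =
  match t.free with
  | buf :: rest -> t.free <- rest; buf   (* reuse a returned buffer *)
  | [] -> Bigstringaf.create t.size      (* pool empty: allocate fresh *)

let release t buf =
  t.free <- buf :: t.free
```

A real implementation would likely bound the free list (so idle pools shrink) and assert that released buffers have the expected size.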
Some of the GET requests to my service started to fail with this error:
((Failure "prompt: input shrunk!")
 "Raised at file \"pervasives.ml\", line 32, characters 17-33
  Called from file \"lib/angstrom.ml\", line 156, characters 6-38
  Called from file \"lib/parse.ml\", line 300, characters 21-49
  Called from file \"lib/parse.ml\" (inlined), line 310, characters 12-69
  Called from file \"lib/server_connection.ml\", line 149, characters 2-29
  Called from file \"lib/server_connection.ml\", line 259, characters 2-38
  Called from file \"lwt-unix/httpaf_lwt_unix.ml\", line 165, characters 16-65
  Called from file \"src/core/lwt.ml\", line 2026, characters 16-20")
My first thought was network problems, but after careful packet examination nothing was found.
This behavior is very hard to reproduce, and it happens when the client (nginx) and server (httpaf) are on different nodes in our network.
Any ideas how to pinpoint the problem?
Parsing hex values is currently slow. This is used when determining the length of a body chunk under the chunked transfer encoding. Avoiding the allocation of the intermediate string would be a good first step, but relying on exceptions to detect failure should probably be avoided as well.
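A sketch of what an allocation-free, exception-free parse could look like, scanning the input in place and signalling failure through an option. The names are hypothetical and overflow is not handled:

```ocaml
(* Map a hex character to its value, or -1 if it is not a hex digit. *)
let hex_digit c =
  match c with
  | '0' .. '9' -> Char.code c - Char.code '0'
  | 'a' .. 'f' -> Char.code c - Char.code 'a' + 10
  | 'A' .. 'F' -> Char.code c - Char.code 'A' + 10
  | _ -> -1

(* Parse the chunk length directly out of a substring: no intermediate
   string allocation, no exceptions; [None] signals a parse failure. *)
let parse_chunk_length s ~off ~len =
  let rec loop i acc =
    if i = off + len then Some acc
    else
      let d = hex_digit (String.unsafe_get s i) in
      if d < 0 then None
      else loop (i + 1) ((acc * 16) + d)
  in
  if len = 0 then None else loop off 0
```

The same shape ports directly to an Angstrom combinator that folds over matched characters instead of materializing them as a string first.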
Hello,
Thank you for the wonderful work you do on this library. I noticed that the Unprocessable_entity HTTP code (422) is missing. Is this intentional? If not, is it possible to add it? I am willing to work on a PR if needed.
Thanks so much!
Need an example or documentation. Thanks.
/cc @aantron
I've modified the lwt_echo_post example as follows:
open Base
open Lwt.Infix
module Arg = Caml.Arg
open Httpaf_lwt_unix
module Reqd = Httpaf.Reqd
module Request = Httpaf.Request
module Headers = Httpaf.Headers
module Response = Httpaf.Response
module Body = Httpaf.Body

let slow_echo_post reqd =
  match Reqd.request reqd with
  | { Request.meth = `POST; headers; _ } ->
    let response =
      let content_type =
        match Headers.get headers "content-type" with
        | None -> "application/octet-stream"
        | Some x -> x
      in
      Response.create
        ~headers:(Headers.of_list
          ["content-type", content_type; "connection", "close"])
        `OK
    in
    let request_body = Reqd.request_body reqd in
    let response_body = Reqd.respond_with_streaming reqd response in
    let rec on_read buffer ~off ~len =
      Lwt.async @@ fun () -> Lwt.Infix.(
        Lwt_unix.sleep 1.0 >>= fun () ->
        Body.schedule_bigstring response_body buffer ~off ~len;
        Body.flush response_body (fun () ->
          Body.schedule_read request_body ~on_eof ~on_read);
        Lwt.return ())
    and on_eof () =
      Body.close_writer response_body
    in
    Body.schedule_read (Reqd.request_body reqd) ~on_eof ~on_read
  | _ ->
    let headers = Headers.of_list [ "connection", "close" ] in
    Reqd.respond_with_string reqd (Response.create ~headers `Method_not_allowed) ""
;;

let request_handler (_ : Unix.sockaddr) = slow_echo_post (* Httpaf_examples.Server.echo_post *)
let error_handler (_ : Unix.sockaddr) = Httpaf_examples.Server.error_handler

let main port =
  let listen_address = Unix.(ADDR_INET (inet_addr_loopback, port)) in
  Lwt.async (fun () ->
    Lwt_io.establish_server_with_client_socket
      listen_address
      (Server.create_connection_handler ~request_handler ~error_handler)
    >|= fun _server ->
    Stdio.printf "Listening on port %i and echoing POST requests.\n" port;
    Stdio.printf "To send a POST request, try one of the following\n\n";
    Stdio.printf "  echo \"Testing echo POST\" | dune exec examples/async/async_post.exe\n";
    Stdio.printf "  echo \"Testing echo POST\" | dune exec examples/lwt/lwt_post.exe\n";
    Stdio.printf "  echo \"Testing echo POST\" | curl -XPOST --data @- http://localhost:%d\n\n%!" port);
  let forever, _ = Lwt.wait () in
  Lwt_main.run forever
;;

let () =
  let port = ref 8080 in
  Arg.parse
    ["-p", Arg.Set_int port, " Listening port number (8080 by default)"]
    ignore
    "Echoes POST requests. Runs forever.";
  main !port
;;
Upload large file to this app:
curl -H"Expect:" -XPOST -d @very_big_file -o/dev/null http://0.0.0.0:8080/
The expected behavior would be to stop consuming data from the socket, as there's nowhere to feed it, but that's not the case. I've instrumented the read function from httpaf_lwt_unix.ml with a print statement.
During the upload, the print statement executes constantly, and the memory footprint grows until the full request is buffered in memory. The response is sent back slowly, due to the delay, as expected.
When running ./wrk_async_benchmark.native, http/af gets stuck in an infinite loop in Connection.next_read_operation if I pass the header Connection: close.
This works fine:
$ echo -e 'GET / HTTP/1.1\r\nHost: localhost:8080\r\n\r\n' | nc -vn 127.0.0.1 8080
Connection to 127.0.0.1 8080 port [tcp/*] succeeded!
HTTP/1.1 200 OK
content-length: 2053
CHAPTER I. Down the Rabbit-Hole ...
but this hangs:
$ echo -e 'GET / HTTP/1.1\r\nHost: localhost:8080\r\nConnection: close\r\n\r\n' | nc -vn 127.0.0.1 8080
Connection to 127.0.0.1 8080 port [tcp/*] succeeded
I've been trying to understand how the httpaf code works, and I'm confused about this bit:
Lines 279 to 282 in 7d7906b
Why does it reset to Done on error if (and only if) the error occurred immediately after the last commit? It seems to be deliberate, because it matches the 0 here explicitly. The effect is e.g.
$ nc -C localhost 8080
GET index.html HTTP/1.1
index.html HTTP/1.1
HTTP/1.1 405 Method Not Allowed
connection: close
Here, it failed to parse the first header line because it started with a space. It then re-parsed the line as a fresh request, which the application rejected because "" is not an allowed HTTP method.
What is the intended purpose of this code?
Related to #57. If a connection is inactive for a certain period of time, the file descriptors should be closed. This will require support in the async and lwt runtimes.
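A sketch of how the Lwt side could implement this, racing each read against a timer with Lwt.pick (the `read_once` function and the timeout length are assumptions for illustration, not httpaf API):

```ocaml
open Lwt.Infix

(* Race a single read against an idle timer; if the timer wins, close the
   descriptor and fail the connection. Lwt.pick cancels the loser. *)
let with_idle_timeout ~seconds read_once fd buffer =
  Lwt.pick
    [ (read_once fd buffer >|= fun n -> `Read n)
    ; (Lwt_unix.sleep seconds >|= fun () -> `Timeout) ]
  >>= function
  | `Read n -> Lwt.return n
  | `Timeout ->
    Lwt_unix.close fd >>= fun () ->
    Lwt.fail (Failure "idle timeout")
```

The async runtime could do the equivalent with Clock.with_timeout around its reads.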
Example curl to trigger the issue (assuming you have an httpaf server running):
curl 'http://localhost:8000/example?application_id=0b33e830-7cde-4b90-ad7e-2a39c57c0e11' -X OPTIONS -H 'Access-Control-Request-Method: POST' -H 'Origin: http://localhost:3001' -H 'Referer: http://localhost:3001/?query=%7B%0A%20%20awsDiscovery%20%7B%0A%20%20%20%20endpoints%20%7B%0A%20%20%20%20%20%20partitions%20%7B%0A%20%20%20%20%20%20%20%20services%20%7B%0A%20%20%20%20%20%20%20%20%20%20endpoints%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20credentialScope%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20service%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20region%0A%20%20%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20%20%20protocols%0A%20%20%20%20%20%20%20%20%20%20%20%20hostname%0A%20%20%20%20%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20defaults%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20credentialScope%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20service%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20region%0A%20%20%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20%20%20protocols%0A%20%20%20%20%20%20%20%20%20%20%20%20hostname%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20partitionEndpoint%0A%20%20%20%20%20%20%20%20%20%20isRegionalized%0A%20%20%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20defaults%20%7B%0A%20%20%20%20%20%20%20%20%20%20signatureVersions%0A%20%20%20%20%20%20%20%20%20%20protocols%0A%20%20%20%20%20%20%20%20%20%20hostname%0A%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20regions%20%7B%0A%20%20%20%20%20%20%20%20%20%20description%0A%20%20%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20partitionName%0A%20%20%20%20%20%20%20%20regionRegex%0A%20%20%20%20%20%20%20%20partition%0A%20%20%20%20%20%20%20%20dnsSuffix%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20version%0A%20%20%20%20%7D%0A%20%20%20%20apis%20%7B%0A%20%20%20%20%20%20service%20%7B%0A%20%20%20%20%20%20%20%20operation
s%20%7B%0A%20%20%20%20%20%20%20%20%20%20output%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20resultWrapper%0A%20%20%20%20%20%20%20%20%20%20%20%20shape%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20http%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20requestUri%0A%20%20%20%20%20%20%20%20%20%20%20%20method%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20errors%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20shape%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20documentation%0A%20%20%20%20%20%20%20%20%20%20input%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20shape%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20shapes%20%7B%0A%20%20%20%20%20%20%20%20%20%20members%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20documentation%0A%20%20%20%20%20%20%20%20%20%20%20%20locationName%0A%20%20%20%20%20%20%20%20%20%20%20%20shape%0A%20%20%20%20%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20error%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20httpStatusCode%0A%20%20%20%20%20%20%20%20%20%20%20%20senderFault%0A%20%20%20%20%20%20%20%20%20%20%20%20code%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20member%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20locationName%0A%20%20%20%20%20%20%20%20%20%20%20%20shape%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20documentation%0A%20%20%20%20%20%20%20%20%20%20value%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20shape%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20key%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20shape%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20exception%0A%20%20%20%20%20%20%20%20%20%20sensitive%0A%20%20%20%20%20%20%20%20%20%20awsType%0A%20%20%20%20%20%20%20%20%20%20pattern%0A%20%20%20%20%20%20%20%20%20%20enum%0A%20%20%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%20%20%20%20max%0
A%20%20%20%20%20%20%20%20%20%20min%0A%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20metadata%20%7B%0A%20%20%20%20%20%20%20%20%20%20serviceAbbreviation%0A%20%20%20%20%20%20%20%20%20%20signatureVersion%0A%20%20%20%20%20%20%20%20%20%20timestampFormat%0A%20%20%20%20%20%20%20%20%20%20serviceFullName%0A%20%20%20%20%20%20%20%20%20%20endpointPrefix%0A%20%20%20%20%20%20%20%20%20%20xmlNamespace%0A%20%20%20%20%20%20%20%20%20%20jsonVersion%0A%20%20%20%20%20%20%20%20%20%20apiVersion%0A%20%20%20%20%20%20%20%20%20%20protocol%0A%20%20%20%20%20%20%20%20%20%20uid%0A%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20documentation%0A%20%20%20%20%20%20%20%20version%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20paginators%20%7B%0A%20%20%20%20%20%20%20%20outputTokens%0A%20%20%20%20%20%20%20%20outputToken%0A%20%20%20%20%20%20%20%20inputTokens%0A%20%20%20%20%20%20%20%20moreResults%0A%20%20%20%20%20%20%20%20inputToken%0A%20%20%20%20%20%20%20%20resultKeys%0A%20%20%20%20%20%20%20%20resultKey%0A%20%20%20%20%20%20%20%20limitKey%0A%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36' -H 'Access-Control-Request-Headers: auth-token,content-type,show_beta_schema' --compressed
(this came from a real-world case that we have to support, unfortunately)
The above snippet should trigger the ~error_handler in create_connection_handler, with an error that matches | #Status.standard as error => print_endline (Status.default_reason_phrase error) and returns Bad request. I suspect this should be 413 Request_entity_too_large instead.
I searched a bit, but not sure if there's a knob we can tweak somewhere to allow for larger headers.
Hi,
Small issue here: the function replace in the Headers module does not seem to work as intended. Here are two examples of what seem to be bad behaviors:
If the replaced header is not the head of the associative list, nothing happens:
replace ["connection", "close"; "accept", "text/*"] "accept" "image/*"
returns the input headers, i.e. [("connection", "close"); ("accept", "text/*")].
The function should (according to its description) remove all header fields with the same name, but this does not hold when they are not consecutive:
replace ["accept", "application/*"; "connection", "close"; "accept", "text/*"] "accept" "image/*"
returns [("accept", "image/*"); ("connection", "close"); ("accept", "text/*")].
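For comparison, a sketch of the documented behavior over a plain association list (not httpaf's actual internal representation): replace the first matching field in place and drop every later occurrence, consecutive or not.

```ocaml
(* Replace the first field whose name matches (case-insensitively) and
   remove all subsequent fields with the same name. *)
let replace headers name value =
  let matches n = String.lowercase_ascii n = String.lowercase_ascii name in
  let rec loop replaced = function
    | [] -> []
    | (n, _) :: rest when matches n ->
      if replaced then loop replaced rest        (* drop later duplicates *)
      else (n, value) :: loop true rest          (* replace first match *)
    | pair :: rest -> pair :: loop replaced rest
  in
  loop false headers
```

With this definition, both examples above produce the expected results: the second list becomes [("connection", "close"); ("accept", "image/*")] and the third becomes [("accept", "image/*"); ("connection", "close")].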
When I clone and follow the instructions for running tests, everything builds fine, but running _build/lib_test/test_httpaf.native fails with:
=== Single OK
state: running
iloop: read
> handler called
oloop: write
state: running
iloop: eof
oloop: closed
state: closed
Fatal error: exception "Assert_failure lib_test/test_httpaf.ml:122:2"
With a little investigation, it looks like one of the two string lists being compared has an extra trailing element that's just the empty string. I didn't look into things any further than that.
Does this mean Travis isn't running the tests properly? It is reporting success.
If the server receives an incomplete request, it can get into an infinite loop.
It looks like it keeps asking for input because the parser state is still `Partial.
The repro is pretty simple: just run the async_post example and send an incomplete request:
jbuilder build examples/async_post.bc
./_build/default/examples/async_echo_post.exe -p 9009
echo "" | nc 127.0.0.1 9009
At the bottom is a self-contained program using Httpaf_async to serve a few routes that show interesting behaviors. It depends on core, async, httpaf_async, and ppx_jane (ppx_let might be sufficient). Let's focus on request_handler; the rest is basically boilerplate.
First, the route named "sync":
$ curl http://localhost:8080/sync
done
$
This is approximately the behavior I want. Note that we've set the content-length header; we'll see in a bit that that's more important than one might expect.
Next the route named "no-header":
$ curl http://localhost:8080/no-header
The lack of a final $ is intentional; I see no data, but the connection doesn't close. I think this one is straightforward: Reqd.write_string neither closes the response body nor returns it so we can close it. Unlike for the "sync" route, we failed to write a content-length header, so curl couldn't tell we were done. In fact, using nc I can verify that the extra header is the only difference between those two routes; neither closes the connection, and for the sync one, curl just has enough information to do it for us.
Finally, the route named "async":
$ curl http://localhost:8080/async
curl: (52) Empty reply from server
$
This is the most curious one. Here, we see the connection was closed while waiting for the Clock.after to finish. Neither the response nor the body was written out. I don't know why this happens; I suspect there's something synchronous that causes it to think we're done if no writes have been scheduled by the end of the handler. But why does the connection get closed for this one and not the other routes?
open! Core
open! Async
open Httpaf

let request_handler _ reqd =
  let request = Reqd.request reqd in
  let target =
    String.strip ~drop:(function '/' -> true | _ -> false) request.target
    |> String.split ~on:'/'
  in
  match (request.meth, target) with
  | (`GET, ["sync"]) ->
    let str = "done" in
    let response =
      Response.create `OK
        ~headers:(Headers.of_list [
          ("content-length", String.length str |> Int.to_string)
        ])
    in
    Reqd.respond_with_string reqd response str
  | (`GET, ["no-header"]) ->
    let str = "done" in
    let response = Response.create `OK ~headers:Headers.empty in
    Reqd.respond_with_string reqd response str
  | (`GET, ["async"]) ->
    don't_wait_for begin
      let%map () = Clock.after (Time.Span.of_ms 1.) in
      let str = "done" in
      let response =
        Response.create `OK
          ~headers:(Headers.of_list [
            ("content-length", String.length str |> Int.to_string)
          ])
      in
      Reqd.respond_with_string reqd response str
    end
  | _ ->
    Reqd.respond_with_string reqd (Response.create `Not_found) ""

let error_handler _ ?request error start_response =
  let response_body = start_response Headers.empty in
  begin
    match error with
    | `Exn exn ->
      Body.write_string response_body (Exn.to_string exn)
    | #Status.standard as error ->
      Body.write_string response_body (Status.default_reason_phrase error)
  end;
  Body.write_string response_body "\n";
  Body.close response_body

let start_http_server ~port =
  Tcp.Server.create_sock
    ~on_handler_error:`Raise
    (Tcp.Where_to_listen.of_port port)
    (Httpaf_async.Server.create_connection_handler ~request_handler ~error_handler)

let run_server_command =
  let open Command.Let_syntax in
  Command.async ~summary:"start the server"
    [%map_open
      let http_port =
        flag "-http-port" (optional_with_default 8080 int)
          ~doc:"INT http port"
      in
      fun () ->
        let open Deferred.Let_syntax in
        let%bind server = start_http_server ~port:http_port in
        Tcp.Server.close_finished server
    ]

let () = Command.run run_server_command
The benchmark in mirage/ocaml-cohttp#328 seems unrealistic to me since the request handler responds immediately. Usually, some database requests are performed to build the response. So, I have investigated what happens when a Lwt.yield is added inside the request handler.
The code is here: https://gist.github.com/vouillon/5002fd0a8c33eb0634fb08de6741cec0
I'm using the following command to perform the benchmark. Compared to mirage/ocaml-cohttp#328, I had to significantly raise the request rate to overwhelm the web servers.
wrk2 -t8 -c10000 -d60S --timeout 2000 -R 600000 --latency -H 'Connection: keep-alive' http://localhost:8080/
Cohttp is significantly slower than http/af, as expected. But http/af seems to exhibit some queueing as well, with a median latency of almost 10 seconds.
So, I'm wondering whether I'm doing anything wrong. Or maybe this is just something one should expect, since there is no longer any backpressure to limit the number of concurrent requests being processed?
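One way an application can reintroduce backpressure is to cap the number of in-flight requests itself. A hand-rolled counting limiter in Lwt, sketched below; all names are assumptions, not httpaf API:

```ocaml
(* At most [max_inflight] requests run concurrently; excess callers park
   on a queue of wakeners until a slot is released. *)
let max_inflight = 100
let inflight = ref 0
let waiters : unit Lwt.u Queue.t = Queue.create ()

let acquire () =
  if !inflight < max_inflight then (incr inflight; Lwt.return_unit)
  else begin
    let t, u = Lwt.wait () in
    Queue.push u waiters;   (* wait until some request finishes *)
    t
  end

let release () =
  match Queue.pop waiters with
  | u -> Lwt.wakeup_later u ()          (* hand the slot to a waiter *)
  | exception Queue.Empty -> decr inflight
```

A handler would call acquire before starting its database work and release after flushing the response, which bounds queueing latency at the cost of refusing to start new work under load.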
angstrom-async is listed as a dependency of httpaf-async (https://github.com/inhabitedtype/httpaf/blob/master/httpaf-async.opam#L17), but it doesn't look like it's used anywhere.
Thank you for providing a performant alternative to cohttp. Are you going to support HTTP/2 in the future?
I just wrote some code to learn how to use Httpaf and I was confused that all of my requests seemed to hang. After reading the examples, I realized I needed to set the content-length header.
I was wondering, though: since Reqd.respond_with_string has the entire response, shouldn't it add this header automatically?
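In the meantime, a small wrapper can compute the header from the body. A sketch assuming Httpaf is open; respond_ok is a hypothetical helper, not part of the library:

```ocaml
(* Compute content-length from the body string before responding, so
   clients like curl know when the response is complete. *)
let respond_ok reqd body =
  let headers =
    Headers.of_list
      [ "content-length", string_of_int (String.length body) ]
  in
  Reqd.respond_with_string reqd (Response.create ~headers `OK) body
```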
I have some difficulties generating responses asynchronously with lwt. I use wrk as a client stress-testing tool, and it looks like my httpaf-powered server hangs in the following scenario (an excerpt from my GET handler):
Lwt.async
  (fun () ->
    (* let%lwt .... here reproduces hang *)
    let data = Bigstring.of_string "hello, world!" in
    let size = Bigstring.length data in
    let response =
      Response.create
        ~headers:(Headers.of_list
          [("content-length", Int.to_string size)]) `OK
    in
    let response_body =
      Reqd.respond_with_streaming request_descriptor response
    in
    Body.write_bigstring response_body data;
    Body.close_writer response_body;
    Lwt.return ())
If I leave the let%lwt ... clause uncommented but use respond_with_bigstring instead of the streaming interface, everything works just fine.
wrk hangs after generating the next request, which I never see with a debug print right at the start of the handler. The idea is that the handler does not know the size of the response before resolving the Lwt promise; I don't want to do chunked encoding, yet I also don't want to buffer the full response in memory.
Probably I'm doing something wrong that messes up the state of the keepalive connection. Any help is greatly appreciated!
I've run into an issue with a response not being sent to the client when (1) using the streaming interface and (2) the response is zero size. The content-length header is set to zero, the response status is set to `OK, and Httpaf.Body.close_writer does not flush the response to the client (tested with curl). Calling Httpaf.Body.write_string with an empty string does not help. Adding a single byte of output under the same conditions causes the response to be sent to the client, which then complains about trailing bytes in the response (as expected, since the response size exceeds the content length). Is this a bug, or a misuse of the API on my end, i.e. should one not use the streaming interface with empty responses?
I am trying to figure out how I could use httpaf with ocaml-tls in order to make requests over HTTPS.
It looks like there might currently be a few things missing, and I wanted to know if you had any existing ideas on how to accomplish this in httpaf.
From what I understand, currently both httpaf-lwt and httpaf-async hardcode their socket read and write functions. In the case of ocaml-tls, I think we would need to be able to provide read and write functions so that ocaml-tls can manage the writing to the socket.
I haven't explored this too much because I wanted to gauge whether you'd be open to supporting this, but I'm happy to do some more work to make this happen.
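One possible shape for that injection, sketched below without any httpaf specifics: the runtime takes a record of read/write functions, and a TLS layer such as ocaml-tls supplies its own pair that encrypts before touching the socket. The io type and loopback implementation here are illustrative only, not proposed API:

```ocaml
(* Illustrative record of I/O functions the runtime could be
   parameterized over, instead of hardcoding socket calls. *)
type io = {
  read : bytes -> off:int -> len:int -> int;  (* bytes read; 0 = EOF *)
  write : bytes -> off:int -> len:int -> int; (* bytes written *)
}

(* A loopback implementation for testing the shape: writes append to
   an in-memory buffer, reads drain it. A TLS-backed [io] would
   encrypt/decrypt here instead. *)
let loopback () =
  let buf = Buffer.create 64 in
  let read dst ~off ~len =
    let available = min len (Buffer.length buf) in
    Bytes.blit_string (Buffer.contents buf) 0 dst off available;
    let rest =
      String.sub (Buffer.contents buf) available
        (Buffer.length buf - available)
    in
    Buffer.clear buf;
    Buffer.add_string buf rest;
    available
  in
  let write src ~off ~len = Buffer.add_subbytes buf src off len; len in
  { read; write }
```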
This pull request fixes a trivially broken example. We should build the examples in CI so we don't get into this state again.
I'm trying to upgrade an incoming request to a Websocket connection. Even though I'm able to upgrade the connection and send a response, it seems that for fixed-length requests, httpaf closes the request body immediately, meaning that any further incoming data from the client is discarded.
I'm looking for another way to set this up, but I need your help:
Is there something else I should be doing that I'm missing?
Repro app (modified lwt_echo_post with modified benchmark handler): https://gist.github.com/Lupus/864e94f036e97e17c62c2630737e6c45
When fetching the content via curl, the responses differ in size on every run.
$ curl -v http://localhost:8080/ | wc -l
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< transfer-encoding: chunked
<
{ [102353 bytes data]
* Leftovers after chunking: 93822 bytes
100 2808k 0 2808k 0 0 914M 0 --:--:-- --:--:-- --:--:-- 914M
* Connection #0 to host localhost left intact
1400
$ curl -v http://localhost:8080/ | wc -l
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< transfer-encoding: chunked
<
{ [21841 bytes data]
* Leftovers after chunking: 69426 bytes
100 17.6M 0 17.6M 0 0 1602M 0 --:--:-- --:--:-- --:--:-- 1602M
* Connection #0 to host localhost left intact
9000
$ curl -v http://localhost:8080/ | wc -l
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< transfer-encoding: chunked
<
{ [21841 bytes data]
* Leftovers after chunking: 77343 bytes
100 15.0M 0 15.0M 0 0 1675M 0 --:--:-- --:--:-- --:--:-- 1675M
* Connection #0 to host localhost left intact
7700
Also tried with wget, same behavior:
$ wget http://localhost:8080/
--2019-08-02 13:46:32-- http://localhost:8080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified
Saving to: ‘index.html.1’
index.html.1 [ <=> ] 5,88M --.-KB/s in 0,05s
2019-08-02 13:46:32 (128 MB/s) - ‘index.html.1’ saved [6162000]
$ wget http://localhost:8080/
--2019-08-02 13:46:33-- http://localhost:8080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified
Saving to: ‘index.html.2’
index.html.2 [ <=> ] 6,07M --.-KB/s in 0,04s
2019-08-02 13:46:33 (159 MB/s) - ‘index.html.2’ saved [6367400]
$ wget http://localhost:8080/
--2019-08-02 13:46:33-- http://localhost:8080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified
Saving to: ‘index.html.3’
index.html.3 [ <=> ] 4,70M --.-KB/s in 0,03s
2019-08-02 13:46:33 (154 MB/s) - ‘index.html.3’ saved [4929600]
$ wc -l index.html*
2400 index.html
3000 index.html.1
3100 index.html.2
2400 index.html.3
10900 total
Is there any misuse of API in sample app?
Hello,
Is there any interest in adding an example of decoding a JSON payload, as this is a common use case? I just started playing around with this library, and I'd be interested in contributing that example, except I don't know the best way to do it yet. I have been using the schedule_read function to build a string payload first and then a JSON deserializer like Yojson. Is this the best way?
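That buffer-then-parse approach can be factored into a small accumulator. The sketch below is shaped like httpaf's schedule_read callbacks but is deliberately library-free: chunks arrive through on_read, and on_eof hands the completed string to on_done, where you would call e.g. Yojson.Safe.from_string (elided here to stay self-contained). With httpaf proper, the chunk is a bigstring rather than a string, which the real code would bridge with Bigstringaf.substring:

```ocaml
(* Accumulate body chunks into one string, then pass the result to
   [on_done] (in practice: the JSON parser). Returns the two
   callbacks a schedule_read-style reader expects. *)
let body_accumulator ~on_done =
  let buf = Buffer.create 256 in
  let on_read chunk ~off ~len = Buffer.add_substring buf chunk off len in
  let on_eof () = on_done (Buffer.contents buf) in
  (on_read, on_eof)
```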
Great start towards a really performant HTTP base.
How nicely does this play with real-world needs like JWT and websockets?
They are currently different for no apparent reason.
The lwt and async client implementations currently do not support encryption. Add support for SSL and/or TLS without adopting the approach that conduit took, i.e., building in hacks and direct dependencies.
Instead, investigate and, if possible, implement a first-class module interface where different implementations can be passed in when constructing the client. That way, people are free to use the HTTP state machine and runtime while selecting the encryption library of their choosing.
Alternatively, if ocaml-tls support is mature enough, it would be simpler and preferable to just adopt that library as the only option.
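A sketch of that first-class-module idea, with illustrative names only (none of this is httpaf's actual API): the client runtime is written against an IO signature, and the caller picks a plain-socket or TLS implementation at connection time.

```ocaml
(* Illustrative IO signature a client runtime could be built against.
   A plain-socket module and a TLS module would both satisfy it. *)
module type IO = sig
  type conn
  val write : conn -> bytes -> int -> int -> int (* returns bytes written *)
end

(* A trivial in-memory implementation, standing in for a socket. *)
module Mem_io : IO with type conn = Buffer.t = struct
  type conn = Buffer.t
  let write conn src off len = Buffer.add_subbytes conn src off len; len
end

(* The runtime accepts the module as a value, so the encryption
   layer is the caller's choice, not a baked-in dependency. *)
let send (type c) (module Io : IO with type conn = c) (conn : c) msg =
  Io.write conn (Bytes.of_string msg) 0 (String.length msg)
```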
We've stumbled across this in an internal service, but it can be reproduced on a modified lwt_echo_post server as well: https://gist.github.com/Lupus/42a609ef20cb8967ff5b54654a080f62
There's only one handler, which responds with a large body. When running Apache Bench (ab), it shows very high latency for the longest request, or even times out.
$ ab -c 100 -n 10000 http://127.0.0.1:8080/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
^C
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /
Document Length: 2054 bytes
Concurrency Level: 100
Time taken for tests: 22.485 seconds
Complete requests: 9989
Failed requests: 0
Total transferred: 20707197 bytes
HTML transferred: 20517406 bytes
Requests per second: 444.25 [#/sec] (mean)
Time per request: 225.098 [ms] (mean)
Time per request: 2.251 [ms] (mean, across all concurrent requests)
Transfer rate: 899.35 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 8 91.1 0 1034
Processing: 0 17 374.3 0 13255
Waiting: 0 17 374.3 0 13255
Total: 0 25 424.7 1 14273
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 1
95% 3
98% 16
99% 25
100% 14273 (longest request)
I just discovered your new release, 0.2.0, of this library. Excellent!
Would you mind creating a CHANGES.md and keeping track, at a high level, of the modifications in each release?
Thanks for developing this library! Looking forward to using it (hopefully soon)!
I've run into an interesting situation. A handler just forwards the data back to the client, which is a wrk script POSTing a 10MB body (1 thread, 10 concurrent connections). When the client is terminated by pressing Ctrl+C, the server keeps running write_loop_step (I'm trying the lwt flavor): next_write_operation returns `Write, writev(io_vectors) returns `Closed, which is passed down to report_write_result. The next call to next_write_operation returns `Write again, and so on.
When the handler sets an explicit content-length in the response headers to the value received in the original request, the issue is no longer reproducible. When the handler uses chunked encoding, the issue is easily reproducible.
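In other words, the loop has to treat `Closed as terminal rather than asking for another operation. A library-free sketch of just that looping logic, with variant names mirroring the ones in the report (next_op and writev stand in for next_write_operation and the socket write; the real backend additionally reports each result back via report_write_result):

```ocaml
(* Drive write operations until the peer closes or the writer is
   done. The key point: a `Closed result from the socket must break
   the loop instead of looping back for another `Write. *)
let rec write_loop ~next_op ~writev =
  match next_op () with
  | `Write iovecs -> (
      match writev iovecs with
      | `Ok _n -> write_loop ~next_op ~writev (* keep going *)
      | `Closed -> `Connection_closed (* peer went away: stop *))
  | `Yield -> `Yielded
  | `Close -> `Done
```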
Problem with the Httpaf_async module:
open Httpaf_async
Error: Unbound module Httpaf_async
#require "httpaf_async";;
No such package: httpaf_async
opam install httpaf_async
[ERROR] No package named httpaf_async found.
The first test in the snippet below demonstrates the problem, while the second test passes just fine:
let test_immediate_flush_on_zero_length_body () =
  let reader_woken_up = ref false in
  let continue_response = ref (fun () -> ()) in
  let error_handler ?request:_ _ = assert false in
  let response = Response.create `No_content in
  let request_handler reqd =
    continue_response := (fun () ->
      let resp_body =
        Reqd.respond_with_streaming
          ~flush_headers_immediately:true reqd response
      in
      Body.close_writer resp_body)
  in
  let t = create ~error_handler request_handler in
  reader_ready t;
  yield_writer t (fun () -> write_response t ~body:"" response);
  read_string t "HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n";
  yield_reader t (fun () -> reader_woken_up := true);
  !continue_response ();
  writer_yielded t;
  Alcotest.(check bool) "Reader woken up" true !reader_woken_up
;;

let test_length_body_no_immediate_flush () =
  let reader_woken_up = ref false in
  let continue_response = ref (fun () -> ()) in
  let error_handler ?request:_ _ = assert false in
  let response = Response.create `No_content in
  let request_handler reqd =
    continue_response := (fun () ->
      let resp_body =
        Reqd.respond_with_streaming
          ~flush_headers_immediately:false reqd response
      in
      Body.close_writer resp_body)
  in
  let t = create ~error_handler request_handler in
  reader_ready t;
  writer_yielded t;
  read_string t "HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n";
  yield_reader t (fun () -> reader_woken_up := true);
  !continue_response ();
  write_response t ~body:"" response;
  writer_yielded t;
  Alcotest.(check bool) "Reader woken up" true !reader_woken_up
;;
Reproduces with 0.6.5 and 0.6.4. A subsequent request on a keep-alive connection gets stuck.
If multiple requests are received in the same TCP segment / read buffer, the Lwt backend will perform a blocking read after the first one, leaving the following ones hanging until more data is received. Responses will lag multiple requests behind until requests arrive in separate batches, waking the read loop multiple times per request.
Minimal example server :
https://gist.github.com/mefyl/431c69acf32c8e6f205487a8a6ec42e3
Steps to reproduce :
$ cat request-1
GET /first HTTP/1.1
$ cat request-2
GET /second HTTP/1.1
# With a small delay between the two queries, forcing the read loop to wake twice, responses are received in a timely manner
$ { cat request-1; sleep .1; cat request-2; sleep 5 } | socat - TCP:localhost:8080,crlf
HTTP/1.1 200 OK
Content-Length: 7
Connection: Keep-Alive
/first
HTTP/1.1 200 OK
Content-Length: 8
Connection: Keep-Alive
/second
# If the two requests are received in a single read, the response to the second one is starved until the connection is closed, showing up as a 5s delay between the two answers.
$ { cat request-1 request-2; sleep 5 } | socat - TCP:localhost:8080,crlf
HTTP/1.1 200 OK
Content-Length: 7
Connection: Keep-Alive
/first
HTTP/1.1 200 OK
Content-Length: 8
Connection: Keep-Alive
/second
$
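A fix would be for the read loop to keep feeding the parser from the already-buffered bytes before blocking on the socket again. A library-free sketch of that draining step (consume stands in for handing bytes to the connection's reader; in the real backend that role is played by the server connection's read function):

```ocaml
(* Feed all currently-buffered bytes to [consume] before asking the
   socket for more. [consume buf off len] returns how many bytes it
   accepted; looping until it stops making progress is what lets two
   pipelined requests in one TCP segment both be parsed without a
   second blocking read. Returns the total bytes consumed. *)
let drain consume buf len =
  let off = ref 0 in
  let progressing = ref true in
  while !progressing && !off < len do
    let n = consume buf !off (len - !off) in
    if n = 0 then progressing := false else off := !off + n
  done;
  !off
```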
Running into trouble with httpaf trying to read a request body; with the example code it basically just hangs.
Here's the code/curl snippet I'm working with: https://gist.github.com/sgrove/fb629515fd1b1077e9931b39a65ba61d
I'm not sure why it never hits EOF. If on_read returns more than 34 (the size of the body in this case), faraday throws an assert (which makes sense), and the same happens if it returns < 0. I thought returning the length of what I've consumed would bump faraday's offset and we'd get an EOF on the next schedule_read.
Also, if the body is completely empty, there's no problem.
I assume there's something terribly obvious I'm missing, but I can't seem to track it down.
Hi guys,
I was wondering why httpaf-lwt is not present in opam. I can see:
Is there a reason for having it only in the dev repo?
Thanks!
I was wondering what the purpose of the Bigstring module in this library is, given that you've created https://github.com/inhabitedtype/bigstringaf?
It looks to me like an artifact of an earlier implementation that could be replaced with Bigstringaf. I can provide a patch that does that, if I'm reading things correctly.
Posting a large file through examples/async_echo_post.ml from the current master gives me the following error:
eof
2018-06-21 09:09:51.020525+02:00 Error (monitor.ml.Error
(Unix.Unix_error "Invalid argument" writev_assume_fd_is_nonblocking "")
("Raised at file \"src/import0.ml\" (inlined), line 351, characters 22-32"
"Called from file \"src/result.ml\" (inlined), line 168, characters 17-26"
"Called from file \"src/raw_fd.ml\", line 272, characters 4-60"
"Called from file \"src/raw_fd.ml\", line 265, characters 10-26"
"Re-raised at file \"async/faraday_async.ml\", line 52, characters 6-15"
"Called from file \"async/httpaf_async.ml\", line 132, characters 10-23"
"Called from file \"lib/body.ml\", line 115, characters 4-28"
"Called from file \"lib/parse.ml\" (inlined), line 137, characters 2-26"
"Called from file \"lib/parse.ml\", line 168, characters 18-31"
"Called from file \"lib/parser.ml\", line 53, characters 38-43"
"Called from file \"lib/parse.ml\", line 301, characters 19-53"
"Called from file \"lib/server_connection.ml\", line 244, characters 17-50"
"Called from file \"async/httpaf_async.ml\", line 38, characters 12-44"
"Called from file \"async/httpaf_async.ml\", line 113, characters 14-127"
"Called from file \"src/job_queue.ml\", line 159, characters 6-47"
"Caught by monitor (id 6)"))
^C
As you can see, it prints eof, but no data is returned:
curl --data-binary @/boot/initrd.img-4.15.0-23-generic http://localhost:8080 | wc
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 55.6M 0 0 100 55.6M 0 36.5M 0:00:01 0:00:01 --:--:-- 36.5M
curl: (52) Empty reply from server
0 0 0
This is built from the current master against faraday 0.5.1 and async v0.11.0, though I also got the same result with faraday pinned to bd1a9321 (the lwt PR).