
aleph's Introduction


Aleph exposes data from the network as a Manifold stream, which can easily be transformed into a java.io.InputStream, core.async channel, Clojure sequence, or many other byte representations. It exposes simple default wrappers for HTTP, TCP, and UDP, but allows access to full performance and flexibility of the underlying Netty library.

Leiningen:

[aleph "0.7.1"]

deps.edn:

aleph/aleph {:mvn/version "0.7.1"}
;; alternatively
io.github.clj-commons/aleph {:git/sha "..."}

HTTP

Server

Aleph follows the Ring spec fully, and can be a drop-in replacement for any existing Ring-compliant server. However, it also allows the handler function to return a Manifold deferred representing an eventual response. This feature may not play nicely with synchronous Ring middleware that modifies the response, but this can easily be fixed by reimplementing the middleware using Manifold's let-flow operator. The aleph.http/wrap-ring-async-handler helper can be used to convert an async 3-arity Ring handler to an Aleph-compliant one.

(require '[aleph.http :as http])

(defn handler [req]
  {:status 200
   :headers {"content-type" "text/plain"}
   :body "hello!"})

(http/start-server handler {:port 8080}) ; HTTP/1-only

;; To support HTTP/2, do the following:
;; (def my-ssl-context ...)
(http/start-server handler {:port 443
                            :http-versions [:http2 :http1]
                            :ssl-context my-ssl-context})
;; See aleph.examples.http2 for more details
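Because the handler may return a deferred, responses can be composed asynchronously. A minimal sketch using Manifold's let-flow (the lookup-greeting function is purely illustrative, standing in for a real async data source):

```clojure
(require '[aleph.http :as http]
         '[manifold.deferred :as d])

;; hypothetical async lookup, standing in for a real data source
(defn lookup-greeting []
  (d/success-deferred "hello!"))

(defn async-handler [req]
  ;; let-flow waits for the deferreds bound on the right-hand side,
  ;; then runs the body with the realized values
  (d/let-flow [greeting (lookup-greeting)]
    {:status 200
     :headers {"content-type" "text/plain"}
     :body greeting}))

(http/start-server async-handler {:port 8080})
```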

The body of the response may also be a Manifold stream, where each message from the stream is sent as a chunk, allowing for precise control over streamed responses for server-sent events and other purposes.
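For example, a server-sent events endpoint can be sketched by returning a Manifold stream as the body (the one-second tick below is just an illustration):

```clojure
(require '[aleph.http :as http]
         '[manifold.stream :as s])

(defn sse-handler [req]
  {:status 200
   :headers {"content-type" "text/event-stream"}
   ;; each message put on the stream is sent to the client as a chunk
   :body (s/periodically 1000 #(str "data: " (System/currentTimeMillis) "\n\n"))})

(http/start-server sse-handler {:port 8080})
```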

Client

For HTTP client requests, Aleph models itself after clj-http, except that every request immediately returns a Manifold deferred representing the response.

(require
  '[aleph.http :as http]
  '[manifold.deferred :as d]
  '[clj-commons.byte-streams :as bs])

(-> @(http/get "https://google.com/")
    :body
    bs/to-string
    prn)

(d/chain (http/get "https://google.com")
         :body
         bs/to-string
         prn)

;; To support HTTP/2, do the following:
(def conn-pool
  (http/connection-pool {:connection-options {:http-versions [:http2 :http1]}}))
@(http/get "https://google.com" {:pool conn-pool})
;; See aleph.examples.http2 for more details

Aleph attempts to mimic the clj-http API and capabilities fully. It supports multipart/form-data requests, cookie stores, proxy servers and request inspection, with a few notable differences:

  • proxy configuration must be set on the connection when setting up a connection pool; per-request proxy setups are not allowed

  • HTTP proxy functionality is extended with tunneling settings, optional HTTP headers and connection timeout control, see all configuration keys

  • :proxy-ignore-hosts is not supported

  • both the cookies middleware and the built-in cookie stores ignore cookie params obsoleted since RFC 2965: comment, comment URL, discard, version (see the full structure of the cookie)

  • when using the :debug, :save-request? and :debug-body? options, the corresponding requests are stored in the :aleph/netty-request, :aleph/request and :aleph/request-body keys of the response map

  • the :response-interceptor option is not supported

  • Aleph introduces a :log-activity connection pool option to turn on logging of connection status changes, as well as request/response hex dumps

  • the :cache and :cache-config options are not supported for now

The Aleph client also supports a fully async and highly customizable DNS resolver.
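A sketch of configuring the resolver through the connection pool's :dns-options (see the aleph.netty/dns-resolver-group docstring for the full option set; the name servers below are just examples):

```clojure
(require '[aleph.http :as http])

;; a connection pool whose connections resolve hostnames via the
;; specified name servers, using the async resolver
(def pool
  (http/connection-pool
    {:dns-options {:name-servers ["8.8.8.8" "8.8.4.4"]}}))

@(http/get "https://example.com" {:pool pool})
```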

To learn more, read the example code.

HTTP/2

As of 0.7.0, Aleph supports HTTP/2 in both the client and the server.

For the most part, Aleph's HTTP/2 support is a drop-in replacement for HTTP/1. For backwards compatibility, though, Aleph defaults to HTTP/1-only. See the example HTTP/2 code for a good overview on getting started with HTTP/2.

Things to be aware of:

  1. Multipart uploads are not yet supported under HTTP/2, because Netty doesn't support them under HTTP/2. For new development, open a new H2 stream/request for each file instead. (HTTP/2 generally doesn't need multipart, since it doesn't have the same limitations on the number of connections as HTTP/1.) For existing multipart code, stick with HTTP/1. Ideally, this will be added in a future release.
  2. Aleph does not currently support the CONNECT method under HTTP/2. Stick with HTTP/1 if you're using CONNECT.
  3. Aleph will not support HTTP/2 server push, since it's deprecated, and effectively disabled by Chrome.
  4. Aleph does not currently support HTTP/2 trailers (headers arriving after the body).
  5. Aleph does nothing with priority information. We would like to expose an API to support user use of prioritization, but the browsers never agreed on how to interpret them, and some (e.g., Safari) effectively never used them. We think back-porting the HTTP/3 priority headers to HTTP/2 is a better aim.
  6. Aleph currently uses Netty's default flow control. This is a 64 KB window, with bytes acknowledged as soon as they're received. We plan to add support for adjusting the default window size and flow control strategy in a future release.
  7. If you were using pipeline-transform to alter the underlying Netty pipeline, you will need to check your usage of it for HTTP/2. Under the hood, the new HTTP/2 code uses Netty's multiplexed pipeline setup, with a shared connection-level pipeline that feeds stream-specific frames to N pipelines created for N individual streams. (A standard HTTP request/response pair maps to a single H2 stream.)

WebSockets

On any HTTP request with the proper Upgrade headers, you may call (aleph.http/websocket-connection req), which returns a deferred that yields a duplex stream: a single stream representing bidirectional communication. Messages from the client can be received via take!, and sent to the client via put!. An echo WebSocket handler, then, would consist of:

(require '[manifold.stream :as s])

(defn echo-handler [req]
  (let [s @(http/websocket-connection req)]
    (s/connect s s)))

This takes all messages from the client, and feeds them back into the duplex socket, returning them to the client. WebSocket text messages will be emitted as strings, and binary messages as byte arrays.

WebSocket clients can be created via (aleph.http/websocket-client url), which returns a deferred which yields a duplex stream that can send and receive messages from the server.
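Against the echo handler above, a client round-trip might look like this (assuming the server is listening on localhost:8080):

```clojure
(require '[aleph.http :as http]
         '[manifold.stream :as s])

(let [conn @(http/websocket-client "ws://localhost:8080")]
  @(s/put! conn "hello")          ; send a text message
  (println @(s/take! conn))       ; the echoed message, as a string
  (s/close! conn))
```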

To learn more, read the example code.

TCP

A TCP server is similar to an HTTP server, except that for each connection the handler takes two arguments: a duplex stream and a map containing information about the client. The stream will emit byte-arrays, which can be coerced into other byte representations using the byte-streams library. The stream will accept any messages which can be coerced into a binary representation.

An echo TCP server is very similar to the above WebSocket example:

(require '[aleph.tcp :as tcp])

(defn echo-handler [s info]
  (s/connect s s))

(tcp/start-server echo-handler {:port 10001})

A TCP client can be created via (aleph.tcp/client {:host "example.com", :port 10001}), which returns a deferred which yields a duplex stream.
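Used against the echo server above, a client round-trip might look like this (host and port are assumptions):

```clojure
(require '[aleph.tcp :as tcp]
         '[manifold.stream :as s]
         '[clj-commons.byte-streams :as bs])

(let [c @(tcp/client {:host "localhost" :port 10001})]
  @(s/put! c "hello")
  ;; the stream emits byte arrays; coerce back to a string
  (println (bs/to-string @(s/take! c)))
  (s/close! c))
```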

To learn more, read the example code.

UDP

A UDP socket can be generated using (aleph.udp/socket {:port 10001, :broadcast? false}). If the :port is specified, it will yield a duplex socket which can be used to send and receive messages, which are structured as maps with the following data:

{:host "example.com"
 :port 10001
 :message ...}

Incoming packets will have a :message that is a byte array, which can be coerced using byte-streams; outgoing packets may contain any :message that can be coerced to a binary representation. If no :port is specified, the socket can only be used to send messages.
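A minimal send/receive sketch, sending a datagram from a socket back to itself (the port is an assumption):

```clojure
(require '[aleph.udp :as udp]
         '[manifold.stream :as s]
         '[clj-commons.byte-streams :as bs])

(let [sock @(udp/socket {:port 10001})]
  ;; outgoing packets are maps of :host, :port, and :message
  @(s/put! sock {:host "localhost" :port 10001 :message "ping"})
  ;; incoming :message is a byte array; coerce it to a string
  (println (bs/to-string (:message @(s/take! sock))))
  (s/close! sock))
```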

To learn more, read the example code.

Development

Aleph uses Leiningen for managing dependencies, running REPLs and tests, and building the code.

Minimal tools.deps support is available in the form of a deps.edn file which is generated from project.clj. It provides just enough to be able to use Aleph as a git or :local/root dependency. When committing changes to project.clj, run deps/lein-to-deps and commit the resulting changes, too.

License

Copyright © 2010-2024 Zachary Tellman

Distributed under the MIT License.

Support

Many thanks to YourKit for supporting Aleph. YourKit supports open source projects with innovative and intelligent tools for monitoring and profiling Java and .NET applications.

YourKit is the creator of YourKit Java Profiler, YourKit .NET Profiler, and YourKit YouMonitor.


aleph's Issues

Document return value of start-http-server

    update docstring to show the return value of start-http-server

    Modified src/aleph/http/server.clj
diff --git a/src/aleph/http/server.clj b/src/aleph/http/server.clj
index e6d68a3..7f77e59 100644
--- a/src/aleph/http/server.clj
+++ b/src/aleph/http/server.clj
@@ -119,8 +119,9 @@
     pipeline))

 (defn start-http-server
-  "Starts an HTTP server on the specified :port.  To support WebSockets, set :websocket to
-   true.
+  "Starts an HTTP server on the specified :port.
+  Returns a function that stops the server.
+  To support WebSockets, set :websocket to true.

    'handler' should be a function that takes two parameters, a channel and a request hash.
    The request is a hash that conforms to the Ring standard, with :websocket set to true

HTTP wiki clarification request

On the HTTP wiki, in the HTTP Clients section:

  • I'm seeing http-request not following redirects, which conflicts with the text "Redirects will be automatically followed."
user=> (def resp (http-request {:method :get, :url "http://www.clojure-conj.org"}))
#'user/resp
user=> resp
<< {:status 301, :content-type "text/html", :headers {"date" "Sat, 01 Oct 2011 02:47:38 GMT", "status" "301 Moved Permanently", "connection" "close", "location" "http://clojure-conj.org", "content-type" "text/html"}, :content-length nil, :character-encoding nil, :body <== [nil]} >>

I'm guessing it's the wiki that needs to be updated and not the code?

  • Also, it took me forever - to my discredit :) - to figure out how to interpret the TruncatedChannelBuffer or BigEndianHeapChannelBuffer objects when they come back in the :body channel when using http-request and sync-http-request, so an example like this might be nice:
user=> (def resp (http-request {:method :get, :url "http://clojure-conj.org"}))
#'user/resp
user=> (def body-channel (map* bytes->string (:body @resp)))
#'user/body-channel
user=> (receive body-channel println)
<!--[if lt IE 7 ]><html class="ie ie6" lang="en"> <![endif]--><!--[if IE 7 ]><html class="ie ie7" lang="
en"> <![endif]--><!--[if IE 8 ]><html class="ie ie8" lang="en"> <![endif]--><!--[if (gte IE 9)|!(IE)]><!
--><html lang="en"> <!--<![endif]-->
[... etc. ...]

Thanks again for the great docs!

Support "HTTP hijacking" (for CONNECT)

I'm working on a proxy that performs JavaScript injection and modification for debugging, and I'd like to use Aleph. Would you consider adding a feature that lets users implement CONNECT, by for example converting/replacing an HTTP response channel into a raw TCP channel?

I've hacked together code that performs something similar to Go's HTTP hijacking, where a function invoked from the context of an aleph.http handler will modify the Netty pipeline and return a raw TCP channel. This suits my needs, but it leaves the HTTP response channel hanging there, which seems kinda gross.

Thanks!

array-maps must not be used for request-headers

array-maps make it easy for multiple pipelines to set multiple Host headers. When this happens some servers return a 400 error.

I could do a full copy of the headers into a sorted-map for every pipeline stage for every request to ensure this doesn't happen, but I'd really rather not do this. Plus, I suspect aleph is actually at fault for adding the second "Host" header - I haven't debugged it fully.

There doesn't seem to be too many places where :headers is constructed. Would you accept a patch if I created one?

(I'd simply use (sorted-map) instead of {}).

Thanks.

Using merge on aleph.http.core.LazyMap gives unexpected results.

=> (use 'aleph.http.core)
=> (def l-map (lazy-map :a 1 :b "two"))
=> (type l-map)
aleph.http.core.LazyMap
=> l-map
{:a 1, :b "two"}
=> (merge l-map {:foo :bar})
{[:foo :bar] nil, :a 1, :b "two"}
=> (merge (merge {} l-map) {:foo :bar})
{:foo :bar, :b "two", :a 1}

This means that the following Compojure route works incorrectly with Aleph's (start-http-server (wrap-ring-handler ...)), while the same route works correctly using Ring's Jetty adapter (run-jetty ...). The request is an aleph.http.core.LazyMap when using Aleph and a clojure.lang.PersistentHashMap when using the Jetty adapter.

  (GET ["/some-path/:ids/stuff" :ids #"([0-9]+,?)+"] [ids]
       (fn [request] (some-function (merge request {:ids ids}))))

Workaround is to convert request object to PersistentHashMap before new entries are merged:

(some-function (merge (merge {} request) {:ids ids}))

I tried it with aleph 0.3.0-alpha2 and 0.2.1-rc5. Both have the same problem.

High CPU use at random

Hi,
This is just a "poke", because I'm not really sure whether this is a problem with Aleph, Netty, or my own code. I've been in contact with some of the Netty folks but... not much there. Here's a screenshot:
http://luksefjell.nostdal.org/always-here/temp/netty.png

...this is after starting a server, then having 1-3 users on it on-and-off for some hours. It's at 25% CPU (of 4 cores totalling 400%) at this point. A bit later it increases to about 50% at what seem to be random intervals. I know if I go to sleep now I'll wake up to a very noisy and hot computer at 350% -- 390% CPU usage. Sometimes it seems to take only minutes, not hours, before it ends up at 200 -- 350+% CPU use; i.e., it seems very random.

I'm quite lost here. I can throw 5000 connections at this stuff and it doesn't even break a sweat. But trigger some random unknown thing and this (stack-trace...) starts happening. The CPU usage does not drop, and the same stack-traces as shown in the screenshot remain, even with 0 currently active users for hours.

Anyone seeing something like this?

edit:
This is on Ubuntu Linux x86-64; latest release. I've tried JVM6 and JVM7 and the JVM7 early access "beta" versions.

perf: connection-callback key change + persistent-connection not initialized in redis-stream

The lamina connection callback key has changed, it seems; this breaks redis-stream and possibly redis-client if you used connection-callback.

I tried to just update the options in redis.clj, but it seems that is not enough.

edit:

ok, I understood the issue: you used to initiate the connection in connect-loop without using delay in persistent-connection. delay only executes its body on the first deref, and since we never call connection in redis-stream, the on-connected callback is never triggered (and thus the internal stream/command channels are never wired).

So either we just invoke (connection) in redis-stream, or just get rid of the delay call.

Max

beta 14 depends on beta of lamina which depends on yammer metrics which seems to be missing dependencies

lein deps :tree

....
[aleph "0.3.0-beta14"]
   [gloss "0.2.2-beta5" :exclusions [[org.clojure/contrib] [org.clojure/clojure-contrib]]]
   [io.netty/netty "3.6.2.Final" :exclusions [[org.clojure/contrib] [org.clojure/clojure-contrib]]]
   [lamina "0.5.0-beta13" :exclusions [[org.clojure/contrib] [org.clojure/clojure-contrib]]]
     [com.yammer.metrics/metrics-core "3.0.0-20121019.210556-4" :exclusions  [[org.clojure/contrib] [org.clojure/clojure-contrib] [org.slf4j/slf4j-api]   [com.yammer.metrics/metrics-annotation]]]
     [org.flatland/useful "0.9.0" :exclusions [[org.clojure/contrib] [org.clojure/clojure-contrib]]]
     [potemkin "0.2.0" :exclusions [[org.clojure/contrib] [org.clojure/clojure-contrib]]]
   [org.apache.commons/commons-compress "1.4.1" :exclusions [[org.clojure/contrib] [org.clojure/clojure-contrib]]]
     [org.tukaani/xz "1.0"]
   [org.clojure/data.xml "0.0.7" :exclusions [[org.clojure/contrib] [org.clojure/clojure-contrib]]]
 ....

Could not find artifact com.yammer.metrics:metrics-core:jar:3.0.0-20121019.210556-4 in clojars (https://clojars.org/repo/)
Could not find artifact com.yammer.metrics:metrics-core:jar:3.0.0-20121019.210556-4 in sonatype-oss-public (https://oss.sonatype.org/content/groups/public/)
Could not find artifact com.yammer.metrics:metrics-core:jar:3.0.0-20121019.210556-4 in jboss (http://repository.jboss.org/nexus/content/groups/public/)

At least at the moment, perhaps com.yammer.metrics:metrics-core:jar:3.0.0-SNAPSHOT will come back.

Implement hybi-10 Standard

Chrome has implemented the hybi-10 standard since 14.0.835 (dev channel). Because of changes in the protocol handshake, implementations of the previous standard aren't compatible with the new one. Aleph should (at least in a branch) implement the newer and more secure standard.

Unusual Date response header

Hello, I'm working in an environment with a non-English locale, and Aleph produces response headers like these:

HTTP/1.1 200 OK
Server: aleph/0.3.0
Date: ??, 06 ??? 2012 14:40:53 MSK
Connection: keep-alive
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
Content-Length: 5540

Please note the Date header line

Websockets in Chrome 11.0.696.57

I'm getting an error during websocket handshakes, both in my project's connection and also in the example usage in aleph's readme:

invalid UTF-8 sequence in header value

When I tested against the example usage, the client side implementation was simply:

var socket = new WebSocket("ws://localhost:8080");

with handlers that simply logged the events to console.

A small amount of digging suggests that the error above might be caused by a null value in a header key/value pair.

Limit to size of Redis hash? Error when hget'ing

This may be a gloss issue, but it happened while using Aleph, so I'm posting it here.

I scraped some html from a web site and stored the raw escaped HTML into a redis hash.

Some of the values are quite large, but Redis can handle objects of this size. But I ran into an error when using "hgetall" with aleph

(apply hash-map @(@r [:hgetall "medications:14" ]))
Feb 16, 2012 8:04:17 PM sun.reflect.NativeMethodAccessorImpl invoke0
SEVERE: Unhandled error in Netty pipeline.
java.lang.AssertionError: Assert failed: success
    at gloss.data.bytes$wrap_finite_block$fn__4881$fn__4882.invoke(bytes.clj:75)
    at gloss.core.protocols$compose_callback$reify__4740.read_bytes(protocols.clj:58)
    at gloss.data.bytes$wrap_finite_block$fn__4881.invoke(bytes.clj:70)

The full stacktrace and a copy of the hgetall output from redis-cli are below:

https://gist.github.com/1dd54e07c78de7c7f60d

Is there a limit to how much data can be received in a single request?

aleph http POST response body returns a SlicedChannelBuffer instance

Hi Zach,

This was fine until I updated to the latest beta (this breaks compatibility), I guess this is a change that happened when you updated to the latest netty (I couldn't find any reference to this object in Aleph's code).

<#<SlicedChannelBuffer SlicedChannelBuffer(ridx=0, widx=185, cap=185)>>

edit: I noticed this on a server using Aleph to serve HTTP requests.

on-realized and http-request example from wiki does not seem to succeed/fail with latest versions of lamina and aleph

Hi,

Just working through some of the examples on the Aleph wiki pages. When I execute this example:

(on-realized (http-request {:url "http://example.net" :method :get})
  #(println "Success: " %)
  #(println "Fail: " %))

It just returns immediately with ":lamina/subscribed" and never actually reaches the success/fail handlers.

If I deref the call to http-request directly, it works:

@(http-request {:url "http://example.net" :method :get})
; => {:headers {"content-length" "0", "connection" "close", "server" "BigIP", "location" "http://example.iana.org"}, :character-encoding nil, :content-type nil, :content-length 0, :status 302, :body nil}

I can declare a result-channel directly and it works too (dunno if this is correct usage):

(def foo (result-channel)) 
(on-realized foo
  #(println "Success: " %)
  #(println "Fail: " %))
; => :lamina/subscribed 
(enqueue foo 1)
; Success:  1 

Right now using the following deps

[lamina "0.5.0-beta15"]
[aleph "0.3.0-beta15"]

implement websocket client

potentially useful in the real world, but necessary for automated testing of the websocket functionality

Heroku times out when connecting to RedisToGo with Aleph

I'm trying to deploy a Clojure Noir app to Heroku, but the deployment fails because the app times out and never properly starts.

When I remove the Aleph redis code, it deploys fine. Also, oddly, when I connect to the Heroku server, use a REPL, and paste in the same Aleph connection code, I can access Redis no problem.

(def r (redis-client {:host redis-url :password redis-pass :port redis-port}))

So since the problem is at bootup, is there a way to load the redis connection differently or at a later time? For example, on the first request: connect to redis if it hasn't yet?

The code for my app is here: https://github.com/dmix/documeds

Wiki link needs update

The wiki link for consuming a Twitter stream should actually be:

Consuming-and-broadcasting-a-Twitter-stream

Clojure 1.3 support

When trying to port my project to Clojure 1.3 I noticed the following:

Exception in thread "main" java.lang.IllegalStateException: 
Can't dynamically bind non-dynamic var: clojure.contrib.pprint/*format-str*, compiling:(dispatch.clj:90)

This comes from aleph's format.clj, which uses clojure.contrib.json; that can be replaced with [org.clojure/data.json "0.1.1"]

I don't know what clojure.contrib.prxml can be replaced with.

Websockets in Chrome: 14.0.835.186

It looks like Chrome recently updated to the draft 10 websocket specification. It's now expecting a different handshake procedure. The exact error from the js console is:

Error during WebSocket handshake: 'Sec-WebSocket-Accept' header is missing

I think at some point I enabled auto-downloads of Chrome betas, so I'm not sure if all Chrome users are seeing this yet.

Dependency mismatch version between aleph, gloss and lamina

lein deps :tree gives this warning about version mismatches with aleph 0.3.0-rc2. It probably doesn't affect anything, but I thought I should mention it.

WARNING!!! possible confusing dependencies found:
[aleph "0.3.0-rc2"] -> [org.clojure/tools.logging "0.2.3"]
 overrides
[aleph "0.3.0-rc2"] -> [lamina "0.5.0-rc4"] -> [org.clojure/tools.logging "0.2.4"]
 and
[aleph "0.3.0-rc2"] -> [gloss "0.2.2-rc1"] -> [lamina "0.5.0-rc1"] -> [org.clojure/tools.logging "0.2.4"]

[aleph "0.3.0-rc2"] -> [potemkin "0.2.2"]
 overrides
[aleph "0.3.0-rc2"] -> [lamina "0.5.0-rc4"] -> [potemkin "0.3.0"]

when using this project.clj

(defproject my-project "1.0.0"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [aleph "0.3.0-rc2"]])

Hang on receive when frame >= 126 bytes received

Sending a payload larger than 125 bytes causes the receive to hang in my Aleph server.

According to the spec, if the length field is 126, then the following two bytes are used as the length. If the field is 127, then the following 8 bytes indicate the length. I'm not familiar with how the defcodec logic works in src/aleph/http/websocket/protocol.clj, but I'm guessing that the length isn't being parsed out and used correctly. I'm still looking at it, but figured it might be obvious to you.

http client issue with timeouts

Hello Zach,

I noticed an issue with timeouts in the HTTP client. When a timeout is fired by the HTTP client, the underlying Netty channel seems not to be closed, so the connection is kept alive until the server closes it (if the server closes it at all). It causes memory and file descriptor leaks. You can easily reproduce it by logging the close callback in netty/create-client and using netcat as the server.

Here, at Smallrivers, we heavily rely on aleph on production. We would appreciate your help as quickly as possible on this issue.

Thanks.

David

twitter stream example broken on alpha3

Hi,

I think I may have found a bug in the twitter stream example (stumbled on it while trying to update our codebase from 0.1.4 to the latest alpha).
From a quick test, I think it may have been broken since 0.1.5-SNAPSHOT.

(defn twitter-stream []
  (let [ch (:body
            (sync-http-request
              {:method :get
               :basic-auth ["aleph_example" "_password"]
               :url "http://stream.twitter.com/1/statuses/sample.json"}))]
    (doseq [tweet (lazy-channel-seq ch)]
      (println tweet))))

user=> (twitter-stream)
(#<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192]>)
(#<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192]>)
(#<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192]>)
(#<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192]>)
(#<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192]>)
(#<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192]>)
(#<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192]>)
(#<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192]>)
(#<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192]>)

Using autotransform doesn't seem to help either; it appears the seq elements aren't valid JSON, they are partitioned in the wrong place.

with autotransform:

user=> (use 'bug-playground.core)
(twitter-stream)
nil
user=> {:entities {:user_mentions [{:indices [0 7], :screen_name crodim, :name crodim adimas damara, :id 79398653, :id_str 79398653}], :urls [], :hashtags []}, :text @crodim good good. Udh dikirim ?, :retweet_count 0, :coordinates nil, :in_reply_to_status_id_str nil, :contributors nil, :in_reply_to_user_id_str 79398653, :id_str 80613956649091072, :in_reply_to_screen_name crodim, :retweeted false, :truncated false, :created_at Tue Jun 14 12:33:59 +0000 2011, :geo nil, :place nil, :in_reply_to_status_id nil, :user {:profile_use_background_image true, :follow_request_sent nil, :default_profile false, :profile_sidebar_fill_color ffffff, :protected false, :following nil, :profile_background_image_url http://a2.twimg.com/profile_background_images/247584894/aussie1.jpg, :default_profile_image false, :contributors_enabled false, :favourites_count 12, :time_zone Madrid, :name Meme, :id_str 66289184, :listed_count 8, :utc_offset 3600, :profile_link_color f5160e, :profile_background_tile true, :location Jakarta, Indonesia, :statuses_count 9601, :followers_count 462, :friends_count 192, :created_at Mon Aug 17 05:48:02 +0000 2009, :lang en, :profile_sidebar_border_color ffffff, :url nil, :notifications nil, :profile_background_color 1A1B1F, :geo_enabled false, :show_all_inline_media false, :is_translator false, :profile_image_url http://a1.twimg.com/profile_images/1393969660/Photo_on_2011-06-13_at_14.21_normal.jpg, :verified false, :id 66289184, :description There must be something good bout me, :profile_text_color 0a090a, :screen_name Megadechrista}, :favorited false, :source <a href="http://ubersocial.com" rel="nofollow">UberSocial</a>, :id 80613956649091072, :in_reply_to_user_id 79398653}
83
Jun 14, 2011 2:34:01 PM clojure.contrib.logging$impl_write_BANG_ invoke
WARNING: aleph.netty
java.lang.RuntimeException: java.lang.Exception: JSON error (expected true): t[\w \e \e]
    at clojure.lang.LazySeq.sval(LazySeq.java:47)
    at clojure.lang.LazySeq.seq(LazySeq.java:56)
    at clojure.lang.RT.seq(RT.java:450)
    at clojure.core$seq.invoke(core.clj:122)
    at clojure.core$empty_QMARK_.invoke(core.clj:4966)
    at lamina.core.observable.Observable.message(observable.clj:108)

Disabling response body compression

Should the use of HttpContentCompressor be configurable via some option?

I'm implementing an EventSource server. Chrome at least will send "Accept-Encoding: gzip, ...", and HttpContentCompressor automatically obliges, apparently no matter what Content-Encoding header I return in my response. gzipping naturally breaks the EventSource, though. I've got it working for now by manually removing the HttpContentCompressor: https://github.com/loku/aleph/commit/c9146612d132af86d61a675dd60a84afe8dddef4

Broken beta (lamina issue most likely)

Hi Zach,

I tried to upgrade one of our libs (it only uses the redis client + channels and assorted fns) from beta4 to the latest beta, but this failed, throwing the following:

Caused by: java.lang.IllegalArgumentException: Can't define method not in interfaces: close, compiling:(lamina/core/graph/node.clj:191). It seems this is a function that should extend IPropagator and not INode, but I didn't investigate further (on a deadline!).

Missing yammer.metrics jar

There appears to be a problem downloading the yammer.metrics jar:

Could not find artifact com.yammer.metrics:metrics-core:jar:3.0.0-20130315.200005-6

When I exclude this in my project.clj:

java.lang.ClassNotFoundException: com.yammer.metrics.Clock

Is there a quick way around this?

Thanks!

redis-stream channel not closed when downstream channels are closed

Problem: when we set up a redis-stream channel, create several dependent channels, and then close the last dependent channel in the chain, the original Redis channel is not closed, though the intermediate channels are.

We have a workaround (described below).

Without the workaround, the server will eventually fail with too many open file handles.

With the workaround, it seems to be OK under normal load, though under heavy load connections are still leaked somehow, perhaps because it errors out before the callback that closes the redis-stream is called.

Setup code on the repl:

(ns test.test
  (:use aleph.formats
        lamina.core))

(def ch1 (aleph.redis/redis-stream {:host "localhost", :port 6379}))
(on-closed ch1 #(println "ch1 closed"))
(def ch2 (map* #(:message %) ch1))
(on-closed ch2 #(println "ch2 closed"))
(def ch3 (channel))
(siphon ch2 ch3)
(on-closed ch3 #(println "ch3 closed"))

Test:

(close ch3)

Expected output:

ch3 closed
ch2 closed
ch1 closed
true

Actual output:

ch3 closed
ch2 closed
true

Current workaround:

(on-closed ch2 #(do (println "ch2 closed") (close ch1)))
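The workaround generalizes into a tiny helper (my own sketch, not part of lamina) that wires upstream closing for each derived channel:

```clojure
;; Sketch: derive a channel and explicitly propagate close upstream.
;; `on-closed` and `close` are lamina.core fns; the helper itself is mine.
(defn close-upstream [upstream downstream]
  (on-closed downstream #(close upstream))
  downstream)

;; usage, mirroring the repl session above:
;; (def ch2 (close-upstream ch1 (map* :message ch1)))
```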

Considered netty-socketio ?

Aleph looks awesome. I was just now reading about your project, so sorry if I'm asking a question that has been asked before.

Netty-socketio is a Netty-based implementation of the socket.io back-end, intended for use with the standard socket.io client. This provides compatibility with more browsers than WebSocket alone, and of course it uses the browser-side sockets library that is winning the internets. From their readme:

Supports xhr-polling transport
Supports flashsocket transport
Supports websocket transport (Hixie-75/76/Hybi-00, Hybi-10..Hybi-13)

Have you considered integrating something like this or otherwise adding support for different flavours of browser sockets?

Wrong number of args (3) passed to: server$respond

Hi,
I encountered the following exception when accessing the Aleph HTTP server.
The cause is obvious; I will send a patch.

java.lang.IllegalArgumentException: Wrong number of args (3) passed to: server$respond
at clojure.lang.AFn.throwArity(AFn.java:439)
at clojure.lang.AFn.invoke(AFn.java:47)
at aleph.http.server$simple_request_handler$fn__4657.invoke(server.clj:240)
at lamina.core.queue$receive.invoke(queue.clj:36)
at lamina.core.channel$receive.doInvoke(channel.clj:98)
at clojure.lang.RestFn.invoke(RestFn.java:424)
at aleph.http.server$simple_request_handler.invoke(server.clj:234)
at aleph.http.server$http_session_handler$fn__4663.invoke(server.clj:252)
at aleph.netty$message_stage$fn__3744.invoke(netty.clj:127)
at aleph.netty$upstream_stage$reify__3736.handleUpstream(netty.clj:108)
at aleph.http.websocket$websocket_handshake_handler$reify__4588.handleUpstream(websocket.clj:145)
at aleph.netty$upstream_stage$reify__3736.handleUpstream(netty.clj:109)
at org.jboss.netty.handler.codec.http.HttpContentEncoder.messageReceived(HttpContentEncoder.java:83)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndfireMessageReceived(ReplayingDecoder.java:523)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:507)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:444)
at aleph.netty$upstream_stage$reify__3736.handleUpstream(netty.clj:110)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:350)
at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)

dead lock?!

Hi Zach,

I'm facing a weird issue with aleph (clj-1.3). One of my threads is hanging on a semaphore when I try to close the channel yielded by an HTTP connection, while an I/O worker tries to push something into an internal channel. In the end, both threads are locked. Any clue what can lead to such a deadlock? (Concurrent closing?) Unfortunately, I can't reproduce it, but it occurs from time to time in production.

Here is the thread dump: https://gist.github.com/8794b94b9a8ffe2142a6

I would appreciate your help on this issue.

Thanks,
David

error in twitter example

Shouldn't it be map* instead of map here?

(let [ch (:body
           (sync-http-request
             {:method :get
              :basic-auth ["aleph_example" "_password"]
              :url "http://stream.twitter.com/1/statuses/sample.json"
              :delimiters ["\r"]}))]
  (doseq [tweet (map decode-json (lazy-channel-seq ch))]
    (prn tweet)))

Broken synchronous routes?

Glad to hear about the new hybi support! However...

An initial test on my web-refactor branch of parbench (https://github.com/andrewvc/parbench/tree/web-refactor) completely barfed on the new Aleph.

(just check out the web refactor branch and lein run, then visit localhost:3000 in a browser)

This gives me errors when handling non-async routes via noir, specifically the home URL here, which is a standard noir route and worked before this upgrade: https://github.com/andrewvc/parbench/blob/web-refactor/src/parbench/views/index.clj#L7

I'm setting noir up with: https://github.com/andrewvc/parbench/blob/web-refactor/src/parbench/core.clj

This is based on my noir-async library, but that shouldn't come into play at this point, as we're still dealing with synchronous noir-defpage routes, not my async variant. That page should be a good demo, as when it loads it uses a websocket.

By the way, I'm preparing a new web-UI version of Parbench (which is extremely alpha at this point) that lets you invoke a multitude of aleph/lamina based clients for HTTP load testing in the web-refactor branch above (screenshot: https://skitch.com/andrewvc/g5n92/parbench-http-benchmarker ). You may find it useful, Zach, as I know you're doing some work on lamina; it hits it pretty hard and gives nice realtime stats.

At any rate, the exception I'm seeing with the defpage stuff is listed below:

org.jboss.netty.handler.codec.http.websocketx.WebSocketHandshakeException: not a WebSocket handshake request: missing upgrade
at org.jboss.netty.handler.codec.http.websocketx.WebSocketServerHandshaker00.handshake(WebSocketServerHandshaker00.java:121)
at aleph.http.websocket$websocket_handshake_handler$reify__3771.handleUpstream(websocket.clj:91)
at org.jboss.netty.handler.codec.http.HttpContentEncoder.messageReceived(HttpContentEncoder.java:79)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndFireMessageReceived(ReplayingDecoder.java:522)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:501)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:438)
at aleph.netty$upstream_stage$reify__3027.handleUpstream(netty.clj:121)
at aleph.netty$upstream_stage$reify__3027.handleUpstream(netty.clj:121)
at aleph.netty$upstream_stage$reify__3027.handleUpstream(netty.clj:121)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:343)
at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:274)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)

How can I respond to client with Connection:upgrade?

The WebSocket handshake can't complete after upgrading Chrome
(to version 11.0.696.16 beta), though the same program could
complete the handshake in the previous Chrome version.

I asked on a forum and got a reply that there was a change
regarding the handshake so that the values of Upgrade and
Connection are checked correctly, and if the server does not
return Connection: Upgrade in the response headers, the
handshake can't complete.

Is it possible to have Connection: Upgrade specified instead of
Connection: close in the response headers?

The following is the output of the Network panel of Chrome's
Developer Tools (11.0.696.16) when the handshake fails.

Request URL:ws://127.0.0.1:8080/?a=b&c=d
Request Headers
Connection:Upgrade
Host:127.0.0.1:8080
Origin:null
Sec-WebSocket-Key1:23/2 6 1 6 3J 6 9 0
Sec-WebSocket-Key2:L2wg 11 27] 0 WM572[ 0
Upgrade:WebSocket
(Key3):83:5E:6E:7D:2D:7A:67:BA
Query String Parameters
a:b
c:d

The following is the output of the Network panel of Chrome's
Developer Tools (10.0.648.204) when the handshake succeeds.

Request URL:ws://192.168.1.21:8080/?a=b&c=d
Request Method:GET
Status Code:101 Switching Protocols
Request Headers
Connection:Upgrade
Host:192.168.1.21:8080
Origin:null
Sec-WebSocket-Key1:m3 1 Y}07( 09115>aQ6
Sec-WebSocket-Key2:161061 4 u94 n E 4
Upgrade:WebSocket
(Key3):2B:E3:C9:28:A9:0F:D9:DB
Query String Parameters
a:b
c:d
Response Headers
Connection:close
Content-Length:16
Sec-Websocket-Location:ws://192.168.1.21:8080/?a=b&c=d
Sec-Websocket-Origin:null
Upgrade:WebSocket
(Challenge Response):D9:4A:BF:0B:19:0B:2F:3E:5E:08:C9:1A:99:24:3E:B8

query-string blocks aleph

I'm using the following function when starting aleph:
(defn dump [request]
  (respond! request
    {:status 200
     :headers {"Content-Type" "text/plain"}
     :body (str request)}))

All it does is dump the request into the body. It works fine if I don't have any query parameters, but as soon as I add a "?" to the URL, Aleph just waits and doesn't respond.

'Consuming twitter stream' example broken

repl=> (use 'aleph.http 'lamina.core 'aleph.formats)
nil
repl=> (sync-http-request {:method :get 
  :basic-auth ["aleph_example" "_password"]
  :url "https://stream.twitter.com/1/statuses/sample.json"
  :delimiters ["\r"]})
12:32:31,027  WARN node:6 - Error in map*/filter* function.
java.lang.Exception: Cannot convert {:status 200, :content-type "application/json", :headers {"connection" "close", "content-type" "application/json"}, :content-length nil, :character-encoding nil, :body <== […]} to ChannelBuffer.
  at aleph.formats$to_channel_buffer.invoke(formats.clj:143)
  at aleph.formats$bytes__GT_byte_buffers.invoke(formats.clj:218)
  at aleph.formats$bytes__GT_byte_buffers.invoke(formats.clj:215)
  at lamina.core.graph.node.Node$fn__4945.invoke(node.clj:279)
  at lamina.core.graph.node.Node.propagate(node.clj:279)
  at lamina.core.channel.Channel.enqueue(channel.clj:56)
  at lamina.core$enqueue.invoke(core.clj:106)
  at aleph.http.core$collapse_reads$fn__11846.invoke(core.clj:230)
  at lamina.core.graph.propagator$bridge_join$fn__5272.invoke(propagator.clj:174)
  at lamina.core.graph.propagator.BridgePropagator.propagate(propagator.clj:57)
  at lamina.core.graph.node.Node.propagate(node.clj:273)
  at lamina.core.channel.SplicedChannel.enqueue(channel.clj:103)
  at lamina.core$enqueue.invoke(core.clj:106)
  at aleph.netty.client$client_message_handler$reify__11437.handleUpstream(client.clj:123)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:792)
  at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:100)
  at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:792)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
  at org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndFireMessageReceived(ReplayingDecoder.java:567)
  at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:551)
  at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:445)
  at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
  at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:792)
  at aleph.netty.core$upstream_traffic_handler$reify__11129.handleUpstream(core.clj:203)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:792)
  at aleph.netty.core$connection_handler$reify__11123.handleUpstream(core.clj:192)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:792)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
  at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:321)
  at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:303)
  at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:208)
  at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
  at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:94)
  at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
  at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
  at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
  at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
  at java.lang.Thread.run(Thread.java:680)

coding style

Please accept my humble vote to "fix" the coding style for this project. I find it unreadable and have to reformat it to be able to understand it.

Suggestion: move lambdas completely up and out of the fn when you find text starting too far to the right.

Peace.
Btw, I think Aleph is really interesting and I think you've done a great job so far.

ZeroMQ channel

Hey Zachary,

I'd love to see ZeroMQ supported as a transport mechanism. I have checked out the source and will definitely see if I can implement something...

Any thoughts on where I should start? I'm thinking aleph.redis is a good place to look into...

Wrong arg number

Hi,

We have channel-buffer->xml->data in format.clj (line 158) taking just one argument, while in
core.clj (line 96) you pass two arguments to it:
(update-in aleph-msg [:body] #(channel-buffer->xml->data % charset))
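A hypothetical fix sketch: give the function a second, charset-taking arity so the call site in core.clj matches. The body is illustrative only; channel-buffer->string and parse-xml are assumed helper names, and the default charset is a guess:

```clojure
;; Hypothetical sketch of the missing arity; names in the body are assumptions.
(defn channel-buffer->xml->data
  ([buf]
   (channel-buffer->xml->data buf "utf-8"))
  ([buf charset]
   (-> buf
       (channel-buffer->string charset) ; assumed decoding helper
       parse-xml)))                     ; assumed XML parser
```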

Any plans for a new release?

make clear how to stop listening

From the wiki/documentation currently present, it is not clear whether there is a way to make, for example, start-tcp-server stop listening.
It would be nice to have that documented/explained somehow.
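If I remember the API correctly, the server constructors already return a stop thunk, in which case a one-line mention in the wiki would cover it. A sketch, where the return-value behavior is an assumption to be confirmed:

```clojure
(require '[lamina.core :refer [receive-all enqueue]]
         '[aleph.tcp :refer [start-tcp-server]])

;; trivial echo handler
(defn handler [ch client-info]
  (receive-all ch #(enqueue ch %)))

;; Assumption: start-tcp-server returns a zero-argument function that,
;; when called, makes the server stop listening.
(def stop-server! (start-tcp-server handler {:port 10000}))

;; later, to stop listening:
(stop-server!)
```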

0.2.0-rc1: Can't define method not in interfaces: server_thread_pool

Upgrading to 0.2.0-rc1 (as recommended in the list), I see this:

Caused by: java.lang.IllegalArgumentException: Can't define method not in interfaces: server_thread_pool (netty.clj:388) [...] at aleph.http.server$eval927$loading__4414__auto____928.invoke(server.clj:9) at aleph.http.server$eval927.invoke(server.clj:9)

https client is broken?

Somewhere between beta8 and now, the https client broke. I'm sorry I haven't had time to debug this, so no patch, just an error report. I reproduced this with a trivial project:

(defproject test "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.4.0"]
                 [aleph "0.3.0-betaX"]])

With beta8:

(do (use 'lamina.core 'aleph.http) (clojure.pprint/pprint (wait-for-result (http-request {:url "https://google.com" :method :get}))))
{:status 301,
 :content-type "text/html; charset=UTF-8",
 :headers
 {"x-xss-protection" "1; mode=block",
  "server" "gws",
  "x-frame-options" "SAMEORIGIN",
  "content-type" "text/html; charset=UTF-8",
  "date" "Tue, 12 Mar 2013 01:40:30 GMT",
  "cache-control" "public, max-age=2592000",
  "expires" "Thu, 11 Apr 2013 01:40:30 GMT",
  "location" "https://www.google.com/",
  "content-length" "220",
  "connection" "close"},
 :content-length 220,
 :character-encoding "UTF-8",
 :body
 #<SlicedChannelBuffer SlicedChannelBuffer(ridx=0, widx=220, cap=220)>}

With beta9 (something is wrong, see :status):

(do (use 'lamina.core 'aleph.http) (clojure.pprint/pprint (wait-for-result (http-request {:url "https://google.com" :method :get}))))
{:headers
 {"connection" "close",
  "server" "GFE/2.0",
  "date" "Tue, 12 Mar 2013 01:41:53 GMT",
  "content-length" "958",
  "content-type" "text/html; charset=UTF-8"},
 :character-encoding "UTF-8",
 :content-type "text/html; charset=UTF-8",
 :content-length 958,
 :status 405,
 :body
 #<BigEndianHeapChannelBuffer BigEndianHeapChannelBuffer(ridx=0, widx=958, cap=958)>}

With beta11:

(do (use 'lamina.core 'aleph.http) (clojure.pprint/pprint (wait-for-result (http-request {:url "https://google.com" :method :get}))))
IllegalStateException cannot read from a drained channel  lamina.core.return-codes/error-code->exception (return_codes.clj:23)

Mar 11, 2013 6:44:26 PM clojure.lang.Reflector invokeMatchingMethod
SEVERE: error on inactive probe: http-client:error
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
    at sun.nio.ch.IOUtil.read(IOUtil.java:186)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)

Handler event on Websocket open?

Is it possible to create a handler that will respond to a new websocket connection being opened by a client? From what I can tell, the handlers seem to fire when the client sends a message. I'd like to be able to send some data to the client as soon as they make an initial connection.
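In the older Aleph websocket API, as I understand it, the handler is invoked once per connection with the channel and the handshake request, so you can enqueue data immediately on connect. A sketch under that assumption:

```clojure
(require '[lamina.core :refer [enqueue receive-all]]
         '[aleph.http :refer [start-http-server]])

;; Sketch: greet each client as soon as the websocket opens, then echo.
;; Assumes the handler runs once per connection, at handshake time.
(defn ws-handler [ch handshake-request]
  (enqueue ch "hello, new client!")
  (receive-all ch #(enqueue ch (str "echo: " %))))

(start-http-server ws-handler {:port 8080 :websocket true})
```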

Great library.

http-request doesn't seem to support https (fallbacks to http instead)

Hello Zach,

It seems the http client uses http when I try to make a request to an https URL:

user=> (use 'aleph.http)
nil
user=> (sync-http-request {:method :get :url "https://e-finance.postfinance.ch/ef/secure/html/?login&p_spr_cd=4"})
{:status 302, :headers {"location" "https://www.postfinance.ch", "connection" "closed"}, :body <== [nil]}

If you check with curl, you will see that this URL only triggers a redirect if the user hits http.
You can compare the output of these two commands:

 curl -v 'https://e-finance.postfinance.ch/ef/secure/html/?login&p_spr_cd=4'
 curl -v 'http://e-finance.postfinance.ch/ef/secure/html/?login&p_spr_cd=4'

I will try to find the cause of this tomorrow, but I'd guess something needs to be done with the Netty API for the https scheme, or maybe I am missing the obvious.

wrap-ring-handler broken

It looks like wrap-ring-handler is broken at the moment. I have tried the examples from the wiki, and also this modified version of the hello-world from the README, without success:

(use 'lamina.core 'aleph.http)

(defn hello-world [request]
    {:status 200
     :headers {"content-type" "text/html"}
     :body "Hello World!"})

(start-http-server (wrap-ring-handler hello-world) {:port 8080})

SEVERE: Error in handler, closing connection.
java.lang.IllegalArgumentException: Wrong number of args (0) passed to: http$request-body
    at clojure.lang.AFn.throwArity(AFn.java:439)
    at clojure.lang.AFn.invoke(AFn.java:35)
    at aleph.http$wrap_ring_handler$fn__4694.invoke(http.clj:120)
    at aleph.http.server$handle_request.invoke(server.clj:204)
    at aleph.http.server$non_pipelined_loop$fn__4417.invoke(server.clj:222)
    at lamina.core.pipeline$start_pipeline$fn__1008$fn__1029.invoke(pipeline.clj:193)
    at lamina.core.pipeline$start_pipeline$fn__1008.invoke(pipeline.clj:192)
    at lamina.core.pipeline$start_pipeline.invoke(pipeline.clj:149)
    at lamina.core.pipeline$start_pipeline$fn__1008$fn__1012$fn__1023.invoke(pipeline.clj:182)
    at clojure.lang.AFn.applyToHelper(AFn.java:159)
    at clojure.lang.AFn.applyTo(AFn.java:151)
    at clojure.core$apply.invoke(core.clj:540)
    at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1502)
    at clojure.lang.RestFn.invoke(RestFn.java:426)
    at lamina.core.pipeline$start_pipeline$fn__1008$fn__1012.invoke(pipeline.clj:181)
    at lamina.core.queue.ConstantEventQueue$fn__707$fn__708.invoke(queue.clj:255)
    at lamina.core.observable$observer$reify__161.on_message(observable.clj:40)
    at lamina.core.observable.ConstantObservable$fn__399.invoke(observable.clj:191)
    at lamina.core.observable.ConstantObservable.message(observable.clj:187)
    at lamina.core.channel$enqueue.doInvoke(channel.clj:111)
    at clojure.lang.RestFn.invoke(RestFn.java:424)
    at lamina.core.channel$poll$callback__794$fn__795$fn__796.invoke(channel.clj:186)
    at lamina.core.queue.ConstantEventQueue$fn__711$fn__712.invoke(queue.clj:266)
    at lamina.core.observable$observer$reify__161.on_message(observable.clj:40)
    at lamina.core.observable.ConstantObservable$fn__399.invoke(observable.clj:191)
    at lamina.core.observable.ConstantObservable.message(observable.clj:187)
    at lamina.core.channel$enqueue.doInvoke(channel.clj:111)
    at clojure.lang.RestFn.invoke(RestFn.java:424)
    at lamina.core.pipeline$success_BANG_.invoke(pipeline.clj:77)
    at lamina.core.pipeline$read_channel$fn__1061.invoke(pipeline.clj:274)
    at lamina.core.queue.ConstantEventQueue$fn__707$fn__708.invoke(queue.clj:255)
    at lamina.core.observable$observer$reify__161.on_message(observable.clj:40)
    at lamina.core.observable.ConstantObservable$fn__399.invoke(observable.clj:191)
    at lamina.core.observable.ConstantObservable.message(observable.clj:187)
    at lamina.core.channel$enqueue.doInvoke(channel.clj:111)
    at clojure.lang.RestFn.invoke(RestFn.java:424)
    at lamina.core.channel$poll$callback__794$fn__795$fn__796.invoke(channel.clj:186)
    at lamina.core.queue$send_to_callbacks.invoke(queue.clj:182)
    at lamina.core.queue.EventQueue.enqueue(queue.clj:84)
    at lamina.core.queue$setup_observable__GT_queue$fn__690.invoke(queue.clj:190)
    at lamina.core.observable$observer$reify__161.on_message(observable.clj:40)
    at lamina.core.observable.Observable.message(observable.clj:117)
    at lamina.core.observable$siphon$fn__437$fn__439.invoke(observable.clj:232)
    at lamina.core.observable$observer$reify__161.on_message(observable.clj:40)
    at lamina.core.observable.Observable.message(observable.clj:114)
    at lamina.core.channel$enqueue.doInvoke(channel.clj:111)
    at clojure.lang.RestFn.invoke(RestFn.java:424)
    at aleph.http.server$http_session_handler$fn__4435$fn__4436.invoke(server.clj:271)
    at aleph.http.server$http_session_handler$fn__4435.invoke(server.clj:265)
    at aleph.netty$message_stage$fn__3352.invoke(netty.clj:127)
    at aleph.netty$upstream_stage$reify__3344.handleUpstream(netty.clj:108)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:754)
    at aleph.netty$upstream_stage$reify__3344.handleUpstream(netty.clj:109)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:754)
    at org.jboss.netty.handler.codec.http.HttpContentEncoder.messageReceived(HttpContentEncoder.java:83)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:100)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:754)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndfireMessageReceived(ReplayingDecoder.java:523)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:507)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:444)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:754)
    at aleph.netty$upstream_stage$reify__3344.handleUpstream(netty.clj:110)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:540)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:350)
    at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
