nsq's Issues

file_to_nsq: a utility for STDIN or file to NSQ

There are utilities that pull data from NSQ and write it to disk (nsq_to_file) or forward it over HTTP (nsq_to_http), but there is no utility that takes a file or an input stream and pipes it into NSQ. This should be added, and it should probably work like grep: operate on a file if one is given, otherwise read from STDIN.
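
A minimal sketch of what this could look like, assuming nsqd's HTTP publish endpoint (/pub?topic=...) and one message per input line; the address and topic name are placeholders:

package main

import (
    "bufio"
    "bytes"
    "fmt"
    "net/http"
    "os"
)

func main() {
    in := os.Stdin
    if len(os.Args) > 1 { // grep-style: use the file argument if given, else STDIN
        f, err := os.Open(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()
        in = f
    }
    scanner := bufio.NewScanner(in)
    for scanner.Scan() {
        // publish each line as one message
        resp, err := http.Post("http://127.0.0.1:4151/pub?topic=test",
            "application/octet-stream", bytes.NewReader(scanner.Bytes()))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        resp.Body.Close()
    }
}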

panic when an amd64 consumer connects to a linux/386 nsqd

[gao@kaixin nsq]$ ./nsqd --lookupd-tcp-address=127.0.0.1:4160
2013/08/26 12:00:40 nsqd v0.2.22-alpha (built w/go1.1.2)
2013/08/26 12:00:40 worker id 206
2013/08/26 12:00:40 NSQ: persisting topic/channel metadata to nsqd.206.dat
2013/08/26 12:00:40 LOOKUP: adding peer 127.0.0.1:4160
2013/08/26 12:00:40 LOOKUP connecting to 127.0.0.1:4160
2013/08/26 12:00:40 TCP: listening on [::]:4150
2013/08/26 12:00:40 HTTP: listening on [::]:4151
2013/08/26 12:00:40 LOOKUPD(127.0.0.1:4160): peer info {TcpPort:4160 HttpPort:4161 Version:0.2.22-alpha Address:kaixin.yd BroadcastAddress:kaixin.yd}
2013/08/26 12:00:46 TCP: new client(192.168.0.20:51650)
2013/08/26 12:00:46 CLIENT(192.168.0.20:51650): desired protocol magic ' V2'
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x1 pc=0x809a88c]

goroutine 12 [running]:
sync/atomic.AddUint64()
/usr/local/go11/src/pkg/sync/atomic/asm_386.s:69 +0xc
main.(*ProtocolV2).IOLoop(0x18500278, 0x185f0240, 0x185000e0, 0x1860c240, 0x1, ...)
/Users/gao/code/mygo/src/github.com/bitly/nsq/nsqd/protocol_v2.go:33 +0x64
main.(*tcpServer).Handle(0x18609958, 0x185f0240, 0x185000e0)
/Users/gao/code/mygo/src/github.com/bitly/nsq/nsqd/tcp.go:41 +0x390
created by github.com/bitly/nsq/util.TCPServer
/Users/gao/code/mygo/src/github.com/bitly/nsq/util/tcp_server.go:31 +0x35f

goroutine 1 [chan receive]:
main.main()
/Users/gao/code/mygo/src/github.com/bitly/nsq/nsqd/main.go:146 +0xc3f
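
This looks like the documented sync/atomic limitation on 32-bit platforms: the 64-bit atomic functions panic unless their operand is 64-bit aligned. A hedged sketch of the usual fix (field names are hypothetical, not nsqd's actual struct): keep the uint64 counters at the start of the struct, where Go guarantees alignment for allocated values.

import "sync/atomic"

type clientV2 struct {
    MessageCount uint64 // 64-bit atomic fields first: the first word of an
    FinishCount  uint64 // allocated struct is guaranteed to be 64-bit aligned
    State        int32  // smaller fields afterwards
}

func (c *clientV2) onMessage() {
    atomic.AddUint64(&c.MessageCount, 1) // safe on 386/ARM once aligned
}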

behaviour of channels

What happens if multiple consumers subscribe to the same channel and a message is published to the topic? Is the message dispatched to all consumers or load-balanced between the consumers of the channel?

Add `nsq_to_pipe` or modify `nsq_to_file`

We'd like to be able to consume from an NSQ topic with one of these apps, but it seems more in line with the unix philosophy to write out to stdout or to a file.

I've taken a look and it seems that there's precedent in the existing code for responding to signals, so it seems like just writing to a file would be compatible with something like logrotate.

I may go ahead and hack on this myself, but I figured there might be someone more well-versed in Go than me that wanted to :-)
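
In case it helps whoever picks this up, a minimal sketch of the consume-to-stdout side, assuming the standalone go-nsq Consumer API (the topic/channel names and lookupd address are placeholders):

package main

import (
    "fmt"
    "os"

    nsq "github.com/nsqio/go-nsq"
)

func main() {
    cfg := nsq.NewConfig()
    consumer, err := nsq.NewConsumer("events", "nsq_to_pipe", cfg)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    consumer.AddHandler(nsq.HandlerFunc(func(m *nsq.Message) error {
        fmt.Printf("%s\n", m.Body) // one message per line on stdout
        return nil
    }))
    if err := consumer.ConnectToNSQLookupd("127.0.0.1:4161"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    <-consumer.StopChan // run until Stop() is called
}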

Channels Revived From the Dead

It's unclear if this is the desired behavior, but it strikes me as odd. If I create a topic and channel, and then proceed to delete the channel and topic, when I re-create the topic only, the channel comes back, too. I'm running 0.2.26

# Check our version
curl 'http://localhost:4151/info'
{"status_code":200,"status_txt":"OK","data":{"version":"0.2.26"}}

# First, we verify nothing is here
curl 'http://localhost:4151/stats?format=json'
# >>> {"status_code":200,"status_txt":"OK","data":{"topics":[]}}

# Now we create a topic and channel
curl 'http://localhost:4151/create_topic?topic=foo'
curl 'http://localhost:4151/create_channel?topic=foo&channel=bar'

# And verify that both the topic and channel exist
curl 'http://localhost:4151/stats?format=json'
# >>> {"status_code":200,"status_txt":"OK","data":{"topics":[{"topic_name":"foo","channels":[{"channel_name":"bar","depth":0,"backend_depth":0,"in_flight_count":0,"deferred_count":0,"message_count":0,"requeue_count":0,"timeout_count":0,"clients":[],"paused":false,"e2e_processing_latency":{"count":0,"percentiles":null}}],"depth":0,"backend_depth":0,"message_count":0,"paused":false,"e2e_processing_latency":{"count":0,"percentiles":null}}]}}

# Now I'll delete the channel and we can verify that it's "gone"
curl 'http://localhost:4151/delete_channel?topic=foo&channel=bar'
curl 'http://localhost:4151/stats?format=json'
# >>> {"status_code":200,"status_txt":"OK","data":{"topics":[{"topic_name":"foo","channels":[],"depth":0,"backend_depth":0,"message_count":0,"paused":false,"e2e_processing_latency":{"count":0,"percentiles":null}}]}}

# Delete the topic and verify that it's gone
curl 'http://localhost:4151/delete_topic?topic=foo'
curl 'http://localhost:4151/stats?format=json'
# >>> {"status_code":200,"status_txt":"OK","data":{"topics":[]}}

# Now here's the fishy part. I only create the topic, but then the channel comes back
curl 'http://localhost:4151/create_topic?topic=foo'
curl 'http://localhost:4151/stats?format=json'
# >>> {"status_code":200,"status_txt":"OK","data":{"topics":[{"topic_name":"foo","channels":[{"channel_name":"bar","depth":0,"backend_depth":0,"in_flight_count":0,"deferred_count":0,"message_count":0,"requeue_count":0,"timeout_count":0,"clients":[],"paused":false,"e2e_processing_latency":{"count":0,"percentiles":null}}],"depth":0,"backend_depth":0,"message_count":0,"paused":false,"e2e_processing_latency":{"count":0,"percentiles":null}}]}}

I encountered this behavior while writing integration tests for a branch of the python client that includes HTTP clients for nsqd and nsqlookupd.

reader: improve RDY logic

In nsqio/pynsq#37 and nsqio/pynsq#38 we greatly improved pynsq's RDY handling. I think the following are the remaining differences between go-nsq and pynsq with regard to RDY handling:

  • At connection add, a new connection can be starved because a separate connection could have the full max-in-flight value (bug)
  • New connections that can't send the full RDY for a connection should send a truncated RDY value, and delay/retry sending RDY when the global max-in-flight prevents it (see the sketch below)
  • When going into backoff all connections should be updated to RDY 0 immediately
  • When coming out of backoff, a single connection should be updated with RDY 1 (and redistribute should move it to another connection as appropriate)
  • Comprehensive reader tests (matching pynsq)

I'm going to open separate issues to tackle these items. This one can track the overall goal of parity (there are probably things I'm forgetting) and we can close it when we're satisfied.
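
A hedged sketch of the distribution rule the first two bullets imply; Conn and SendRDY are hypothetical stand-ins, not the go-nsq API:

type Conn struct{ addr string }

func (c *Conn) SendRDY(count int) { /* write "RDY <count>\n" on the wire */ }

// distributeRDY splits the global max-in-flight across connections so that a
// newly added connection is never starved by one holding the full value.
func distributeRDY(conns []*Conn, maxInFlight int) {
    if len(conns) == 0 {
        return
    }
    perConn := maxInFlight / len(conns)
    if perConn >= 1 {
        for _, c := range conns {
            c.SendRDY(perConn)
        }
        return
    }
    // more connections than max-in-flight: give RDY 1 to a subset, RDY 0 to
    // the rest, and periodically rotate which subset is active
    for i, c := range conns {
        if i < maxInFlight {
            c.SendRDY(1)
        } else {
            c.SendRDY(0)
        }
    }
}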

cc: @mreiferson

Trendrr Java Client

The java client I've been working on is ready:

https://github.com/dustismo/TrendrrNSQClient

danielhfrank and I have talked about merging the two java clients, but until that happens perhaps you could link to it in the clients section? I believe it is feature complete and we have been using it in production for many weeks.

-Dustin

working towards 1.0

We're looking to solidify a 1.0 release and I wanted to document some possible changes as early as possible.

As mentioned in the README, we intend to follow the semantic versioning scheme. A 1.0 release would mean, in short:

  1. the public API in terms of protocol and parameters will be frozen for this release
  2. all future point releases would only be for backwards compatible bug fixes
  3. all future minor releases would be for backwards compatible functionality enhancements
  4. all future major releases would be for significant changes that could be backwards incompatible

We have a slew of open issues tagged with 1.0 and if other bugs are identified in the interim they will most likely be included as well.

Structurally, the biggest change will be the Go library moving into its own repo, most likely http://github.com/bitly/go-nsq. It will also be considered 1.0 at that point. What this means for applications using this library is that import paths will change from github.com/bitly/nsq/nsq to github.com/bitly/go-nsq. Any other changes will be documented in the respective ChangeLog.
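
Concretely, applications should only need a one-line import change:

// before: the library lives inside the main repo
import "github.com/bitly/nsq/nsq"

// after: the library lives in its own repo
import "github.com/bitly/go-nsq"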

If you have any questions/comments/suggestions for things you'd like to see in 1.0 please use this as a forum for that discussion.

Thanks!

python nsqreader disconnection issue

There seems to be an edge case where a python reader thinks it still has a connection to a given nsqd (while the nsqd does not report having the client), and therefore the reader never attempts to reconnect.

cc @jehiah

nsqadmin: counter doesn't increment

Here's the setup:

1 EC2 instance has an nsqlookupd instance running.
1 EC2 instance has an nsqd instance and an nsqadmin instance running.
nsqadmin uses nsqlookupd to find the nsqd

When I publish messages to the nsqd instance, however, the counter (/counter) does not increment. When I consume messages from the nsqd instance, the counter does not increment. I've never actually seen the counter read anything but 0.

nsqd: authentication

Any plans on adding additional authentication functionality, e.g. password authentication? The use case is subscribers connecting from public networks.

redundant HTTP response enveloping

Not a huge deal, just cosmetic, but maybe in a later version that breaks compatibility anyway we could move .data to the root and ditch the other two props:

{ status_code: 200,
  status_txt: 'OK',
  data: { producers: [ [Object], [Object] ] } }

nsqadmin: producers and consumers disappear on deleted channel

We've noticed a few times that when deleting a channel (either through the interface or by turning off an ephemeral channel), the topic page starts reporting that the topic has no producers or consumers.

The remaining channels and their readers don't seem to be affected and continue to process data fine. The problem with the interface goes away if we restart the lookupd and wait a bit.

Running nsq (and deps) cloned a couple of days ago on an EC2 Ubuntu 12.04 machine.

nsqd: some consumer commands do not return success responses

There could be reasonable ways around this, but the few commands that only respond sometimes (on error) make writing clients that support command callbacks a little tricky. Ideally it's just a FIFO of callbacks, but we have to handle the case of RDY not receiving a response yet possibly receiving an error, and so on.

Maybe always responding with OK would be more elegant? For now I'll probably just .emit() those.
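
For reference, a hedged summary of which V2 commands get a success response per the protocol spec, so a client's callback FIFO would only enqueue a slot for these (IDENTIFY returns a JSON payload instead when feature negotiation is requested):

// FIN, REQ, RDY, and NOP receive no success response, though an error
// frame may still arrive for them.
var successResponse = map[string]string{
    "IDENTIFY": "OK",
    "SUB":      "OK",
    "PUB":      "OK",
    "MPUB":     "OK",
    "CLS":      "CLOSE_WAIT",
}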

Does nsqd ignore consumer disconnects? (server-wide message IDs)

Hello.

As far as I know, RabbitMQ handles consumer disconnects by automatically resending lost in-flight messages (those without delivery reports) to another connected consumer. That seems fine, since RabbitMQ has no unique message IDs (at least not per-server, only per-consumer).

As far as I can tell, nsqd does not immediately resend in-flight messages from disconnected consumers, but instead waits for the configured message timeout. As I understand it, a consumer can reconnect and send FIN/REQ commands for messages received in the previous session.

Is this the intended nsqd behaviour or a side effect? Maybe it should be documented somewhere?

nsqd: throttled RDY count support for high throughput

We have a few workers that need to consume basically as many messages as they can to optimize for batching, but a large RDY count (several million) overwhelms node pretty easily, haha. I have a hack right now that keeps incrementing RDY while throttling concurrency with a manual in-flight count.

I'm not sure how common this sort of use case is, but we already have two workers that need this kind of throughput. It's another one of those things that is more elegant to handle in the client, but then you have to account for timeouts, etc. Maybe we could come up with some sort of throttling mechanism to simplify this for clients.
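
A hedged sketch of the hack described above; Message, process, and the handler wiring are hypothetical stand-ins:

type Message struct{ Body []byte }

func (m *Message) Finish() { /* send FIN */ }

func process(m *Message) { /* batch-oriented application work */ }

const maxConcurrent = 64 // local concurrency cap, independent of RDY

var sem = make(chan struct{}, maxConcurrent)

// handleMessage keeps RDY high for throughput, but blocks the read loop once
// maxConcurrent messages are in progress, throttling actual concurrency.
func handleMessage(m *Message) {
    sem <- struct{}{}
    go func() {
        defer func() { <-sem }()
        process(m)
        m.Finish()
    }()
}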

Create sampling stream using channel name modifier

The same way that #ephemeral creates a temporary queue, it would be cool to have a sampling modifier. For instance, the channel name could be test#sample=0.25. This means that either:

  1. The topic would send 25% of the messages to the channel
  2. The channel would discard 75% of the messages upon receipt

In other words, only 1 in 4 messages would make it to the consumer given the above. The reason for this feature is that it would take the onus of sampling messages off the application.
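
Until something like this exists server-side, a hedged sketch of the client-side equivalent of option 2 above (discard on receipt; Message, Finish, and process are hypothetical stand-ins):

import "math/rand"

type Message struct{ Body []byte }

func (m *Message) Finish() { /* send FIN */ }

func process(m *Message) { /* application work */ }

// sampledHandler finishes 75% of messages immediately, so they are not
// redelivered, and processes the remaining 25%.
func sampledHandler(m *Message) {
    if rand.Float64() >= 0.25 {
        m.Finish()
        return
    }
    process(m)
    m.Finish()
}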

nsqadmin has hardcoded templates path

Maybe it could be calculated at compile time?

How to reproduce:

Change PREFIX in Makefile to something other than /usr/local (example - /usr)

# make
...
# make install
...
# nsqadmin
2012/12/24 12:27:57 nsqadmin v0.2.16-alpha
2012/12/24 12:27:57 --template-dir must be specified (or install the templates to /usr/local/share/nsqadmin/templates)

Expected: nsqadmin uses templates from the /usr/share/nsqadmin/templates directory.

nsqadmin Compilation error

./install.sh
building nsqd...
installing nsqd in /home/stephane/bin
building nsqlookupd...
installing nsqlookupd in /home/stephane/bin
building nsqadmin...

_/home/stephane/Documents/Devel/nsq/nsq/nsqadmin

./lookupd_utils.go:74: producers.GetIndex undefined (type *simplejson.Json has no field or method GetIndex)
installing nsqadmin in /home/stephane/bin
cp: cannot stat `nsqadmin': No such file or directory

nsq_to_file: only supports one topic

It would be nice to have nsq_to_file log many topics from a single binary/process.

It would be even nicer to have nsq_to_file pay attention to all the topics in the cluster by default, unless a topic is specified.

We discussed this briefly in IRC:

[15:43:59]  paddyforan: nobody's bothered modifying nsq_to_file to listen to all topics in a cluster yet, right?
[15:45:06]  mreiferson: nope
[15:45:12]  mreiferson: thats an interesting idea
[15:45:21]  mreiferson: then you can just scale horizontally as volume increases
[15:45:26]  mreiferson: GOOD IDEA PADDY
[15:46:10]  paddyforan: hahaha
[15:46:32]  paddyforan: our ops team was like "We really don't want to have to deploy a binary for every topic and make sure they're all running.."
[15:46:46]  paddyforan: so we'll probably end up building something like that. I'm trying to get it open sourced
[15:46:57]  mreiferson: aren't there all these config management tools for that stuff ;)
[15:47:12]  paddyforan: probably
[15:47:19]  paddyforan: but our ops guys don't tell me how to do my job, soo...
[15:54:10]  mreiferson: I think a patch for nsq_to_file (if --topic is not specified) would be perfect, FYI
[15:54:44]  mreiferson: periodically poll the configured nsqd or nsqlookupd for topics, spawn a new reader for that topic, etc.
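
A hedged sketch of that suggestion, assuming nsqlookupd's /topics endpoint and its standard response envelope; startTopicReader is hypothetical:

package main

import (
    "encoding/json"
    "log"
    "net/http"
    "time"
)

func startTopicReader(topic string) { /* spawn a reader writing <topic>.log */ }

func main() {
    seen := make(map[string]bool)
    for {
        resp, err := http.Get("http://127.0.0.1:4161/topics")
        if err != nil {
            log.Println(err)
        } else {
            var body struct {
                Data struct {
                    Topics []string `json:"topics"`
                } `json:"data"`
            }
            if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
                log.Println(err)
            }
            resp.Body.Close()
            for _, t := range body.Data.Topics {
                if !seen[t] {
                    seen[t] = true
                    go startTopicReader(t)
                }
            }
        }
        time.Sleep(15 * time.Second) // periodic poll, per the IRC suggestion
    }
}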

nsqd: get messages over HTTP

Maybe this will be interesting to more people than just me.

What I need for one of my future applications is to get messages one by one. Maybe I'm mixing this up with application logic; let me explain.

I have a pool of data to work on, which I populate once per week/month. Operators (people in a call center) pick one message from the pool and process it. The operators work with a web application, and one page access should fetch one message from the queue. So the TCP daemon use case doesn't fit here: with an empty message queue it would hang forever.

My proposal is to add an HTTP handler to nsqd to process requests like this:

curl -i 'http://127.0.0.1:4151/get?topic=test&channel=test_channel'

The response for an existing message might be:

HTTP/1.1 200 OK
X-MESSAGE-ATTEMPTS: 1
X-MESSAGE-ID: cfd852793302455c8c1a97d2908c77ce
X-MESSAGE-CREATED: 1358791153
Content-Length: 12

Hello world!

Or, for an empty queue, maybe it returns an HTTP "204 No Content" code?

nsqd: disk-backed deferred queue

The deferred queue is the only spot where there are no bounds on memory footprint.

We could write deferred messages to disk and keep the index in memory, which for most use cases would completely eliminate this problem.
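
A hedged sketch of the shape this could take (names are hypothetical):

import "time"

// deferredEntry is the only part that stays in memory; the message body
// itself lives in an on-disk deferred log.
type deferredEntry struct {
    deliverAt time.Time // when the message becomes eligible for delivery
    offset    int64     // byte offset of the message in the deferred log file
    size      int32     // length of the message at that offset
}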

When using a nsq.NewReader instance in my go code, heartbeat messages show up on console

I'm creating this because it's causing an issue for me, and I've been unable to find any indication of how to turn it off or disable it. I have a console-type app that instantiates an nsq.NewReader to receive incoming messages and display them in the console, but the heartbeat messages keep popping up in the console too (along with the startup output when the reader is first initialized).

2013/10/08 17:23:15 starting Handler go-routine
2013/10/08 17:23:15 LOOKUPD: querying http://127.0.0.1:4161/lookup?topic=mdc-gpsd
2013/10/08 17:23:15 [raspberrypi:4150] connecting to nsqd
2013/10/08 17:23:45 [raspberrypi:4150] heartbeat received

is there a way to silence this output?
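
A hedged workaround, assuming the reader of this era logs through the standard library's global logger (note this silences your own log calls too):

import (
    "io/ioutil"
    "log"
)

func init() {
    // redirect the global logger away from the console before the reader
    // is created; the heartbeat/connection lines go with it
    log.SetOutput(ioutil.Discard)
}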

nsqd: operation tailing

Just an idea, though I know the answer will probably be "it's easy to do that in the client", and that would be completely true :D. Anyway, the idea is that nsqd dogfoods its own events for optional tailing. This would let you very easily abstract out metric collection. For example, all of our metrics really need to be done on a per-customer basis.

We do this internally with counters in the client right now, and it works fine, but this could be interesting: you could use it for dtrace-style analysis of queue operations, ack times, whether a particular customer is causing issues, etc.

Question, getting "invalid command PUB"

Hi there. I am building a node.js client and am having some difficulty with the publish command. I get "invalid command PUB" as a response and wonder if I have misread the protocol doc.

I build the command msg here
https://gist.github.com/3877832

Sending:
<Buffer@0x101808d90 50 55 42 20 74 65 73 74 0a 68 65 6c 6c 6f 20 77 6f 72 6c 64 2e 00 0c 00 48>

NSQD log:

2012/10/12 03:41:06 CLIENT(127.0.0.1:55024): desired protocol 538990130
2012/10/12 03:41:06 ERROR: CLIENT(127.0.0.1:55024) - invalid command PUB
2012/10/12 03:41:06 PROTOCOL(V2): [127.0.0.1:55024] exiting ioloop
2012/10/12 03:41:06 ERROR: client(127.0.0.1:55024) - EOF
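
One thing that stands out in the buffer above: the 4-byte size appears after the body rather than before it. A hedged sketch of the V2 PUB wire format per the protocol doc ("PUB <topic_name>\n" + 4-byte big-endian body size + body), in Go for illustration:

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
)

func main() {
    body := []byte("hello world.")
    var buf bytes.Buffer
    buf.WriteString("PUB test\n")
    binary.Write(&buf, binary.BigEndian, int32(len(body))) // size precedes the body
    buf.Write(body)
    fmt.Printf("% x\n", buf.Bytes())
}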

intelligent clients in large scale NSQ clusters

In the documentation it says:

NOTE: in future versions, the heuristic nsqlookupd uses to return addresses could be based on depth, number of connected clients, or other "intelligent" strategies. The current implementation is simply all. Ultimately, the goal is to ensure that all producers are being read from such that depth stays near zero.

Has any work on this been started? If not, what's your current thinking on it?

We've started deploying nsq in our environment but find that the number of connections each consumer makes is entirely untenable.

nsqd: message TTL

One of the great things about NSQ is that it can be used as a message bus; e.g., event X happened, so everyone that is interested in event X gets notified. This is super cool.

One of the things that I'm really interested in about NSQ is that it helps me decouple my event producers and consumers. So the process getting information from NSQ doesn't even know the process putting the information there exists.

There's a problem with this, though (I think). From what I can tell, should I start publishing to a topic that has no consumers, that topic queue will build up forever until I run out of memory assigned to NSQ. Then it will start transparently persisting to disk (hooray!). Then, given enough time, the disk will fill up. At which point, I imagine, catastrophic failure.

So here's my question/proposal: is there some way to set a "decay" on messages on a topic? After a certain amount of time, that data is no longer worth processing, so I'd like NSQ to just delete it.

Is there any support for this? If not, is there any plan to add support for something like this? Or am I missing a design decision somewhere?

nsqd: use fdatasync in diskqueue for efficiency on *nix OSes

As far as I know, fdatasync is more efficient than fsync, which is what *os.File.Sync() uses, because fdatasync doesn't update the file's metadata unless necessary. There is no corresponding method on *os.File, so you have to call syscall.Fdatasync() directly. Also, the diskqueue write path would need substantial rework to fix the file's metadata up front.

Besides, I'd love to do this work, but it's awkward that I'm stuck on how to create a file with its size set at creation time.
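
A hedged, Linux-only sketch of both pieces: preallocating the file's size up front (fallocate answers the file-creation question) and then flushing with fdatasync:

package main

import (
    "os"
    "syscall"
)

func main() {
    f, err := os.OpenFile("diskqueue.dat", os.O_RDWR|os.O_CREATE, 0600)
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // preallocate 100MB so the file's size metadata stops changing
    const size = 100 << 20
    if err := syscall.Fallocate(int(f.Fd()), 0, 0, size); err != nil {
        panic(err)
    }

    // ... writes to f ...

    // flush the data without forcing a metadata update
    if err := syscall.Fdatasync(int(f.Fd())); err != nil {
        panic(err)
    }
}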

nsqd: track stats of end-to-end message processing time

It would be incredibly useful for nsqd to keep track of end-to-end statistics on how long it takes to process messages (from the ns time of PUB to the ns time of FIN), in percentiles, for each topic/channel.

This would need to take a fast/efficient stream approach of approximating these values so as not to add significant memory/CPU overhead.

They should be available via the /stats endpoint and pushed to statsd for context over time.
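
A hedged illustration of one low-overhead streaming approach (uniform reservoir sampling; not a claim about what nsqd should actually use):

import (
    "math/rand"
    "sort"
)

// latencySample keeps a fixed-size uniform random sample of observed
// latencies; percentiles are read from the sorted sample.
type latencySample struct {
    values []int64 // nanosecond latencies (PUB time -> FIN time)
    count  int64   // total observations so far
}

func (s *latencySample) Add(ns int64) {
    s.count++
    if len(s.values) < 1000 { // fixed reservoir size
        s.values = append(s.values, ns)
        return
    }
    if i := rand.Int63n(s.count); i < int64(len(s.values)) {
        s.values[i] = ns // replace a random slot: stays a uniform sample
    }
}

func (s *latencySample) Percentile(p float64) int64 {
    if len(s.values) == 0 {
        return 0
    }
    sorted := append([]int64(nil), s.values...)
    sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
    return sorted[int(p*float64(len(sorted)-1))]
}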

cc @jehiah @michaelhood

Template errors in nsqadmin

Using the nsqadmin binaries from the 0.2.20 and 0.2.21 releases, nsqadmin fails to render the streams and topic pages. I see the following errors in the output:

2013/06/12 17:27:54 Template Error template: topic.html:25: can't evaluate field TopicHostStats in type struct { Title string; GraphOptions *main.GraphOptions; Version string; Topic string; TopicProducers []string; TopicStats []*lookupd.TopicStats; GlobalTopicStats *lookupd.TopicStats; ChannelStats map[string]*lookupd.ChannelStats }

and

2013/06/12 17:29:00 Template Error template: index.html:19: can't evaluate field Sparkline in type *main.Topic

nsq ./test.sh panics when run on raspberry pi (ARM)

The RPi isn't a very fast machine, so when attempting the quick-start on an RPi, the ./test.sh step fails due to the 15s timeouts in the test scripts.

Changing those values to 55s allows the test script to complete successfully. This is just an FYI, since the failure could lead people to think that nsq does not work on an RPi.

cheers,

nsqadmin: add ops/s and deltas

It would be awesome to see the ops/s for each metric in nsqadmin. It would be nice to maybe send deltas to statsd as well, but not a huge deal.

nsq examples use relative imports, cannot be go get'd

lucky(/tmp) % export GOPATH=/tmp/t
lucky(/tmp) % go get -v github.com/bitly/nsq/...
github.com/bitly/nsq (download)
/tmp/t/src/github.com/bitly/nsq/examples/bench_reader/bench_reader.go:4:2: local import "../../nsq" in non-local package
/tmp/t/src/github.com/bitly/nsq/examples/bench_writer/bench_writer.go:4:2: local import "../../nsq" in non-local package
/tmp/t/src/github.com/bitly/nsq/examples/nsq_pubsub/nsq_pubsub.go:6:2: local import "../../nsq" in non-local package
/tmp/t/src/github.com/bitly/nsq/examples/nsq_tail/nsq_tail.go:4:2: local import "../../nsq" in non-local package
/tmp/t/src/github.com/bitly/nsq/examples/nsq_to_file/nsq_to_file.go:6:2: local import "../../nsq" in non-local package
/tmp/t/src/github.com/bitly/nsq/examples/nsq_to_http/http.go:4:2: local import "../../nsq" in non-local package
/tmp/t/src/github.com/bitly/nsq/nsqadmin/http.go:4:2: local import "../nsq" in non-local package
/tmp/t/src/github.com/bitly/nsq/nsqd/channel.go:4:2: local import "../nsq" in non-local package
/tmp/t/src/github.com/bitly/nsq/nsqlookupd/http.go:4:2: local import "../nsq" in non-local package
/tmp/t/src/github.com/bitly/nsq/util/lookupd_requests.go:4:2: local import "../nsq" in non-local package
github.com/bitly/go-simplejson (download)

nsqd: support client "user agent"

We already support arbitrary JSON IDENTIFY blobs; this would add support for keeping around a user_agent string supplied by the client library.

It would be exposed in the /stats payload and displayed in nsqadmin.

This is super useful for identifying out-of-date clients.
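
A hedged sketch of the wire-level change; IDENTIFY already takes an arbitrary JSON body, so this only adds one field (the version string is hypothetical):

IDENTIFY\n
[ 4-byte size ][ {"short_id":"host1","long_id":"host1.example.com","user_agent":"go-nsq/0.3.0"} ]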

cc @jehiah

topic capacity

hey! just a general question, since I'm not super familiar with the internals yet: would you say it's bad practice to use hundreds of thousands of topics? I was planning on using one per client. In the past we've just hashed, but that obviously has visibility and fair-queueing cons, and I can't see things scaling too well if we go with per-project topic mapping either, haha. really enjoying nsq so far! thanks :D

nsqd: e2e latency calculation with no consumers

If a channel has been feeding consumers for a period of time and then all of the consumers disconnect, the e2e latency calculations stop instead of growing indefinitely until the next consumer connects. But as soon as a consumer reconnects, the calculation jumps by that delta.

This is happening because the next e2e calculation can't happen until a FIN is received for a message. If there are no consumers to FIN messages, then no e2e calculations are happening and there is no new data for the percentiles.

I'm not suggesting this is a bug, just an edge case worth noting, and worth deciding whether deltas between FIN times are the correct method for e2e calculations.

protocol description & frame id size

The protocol specification says that a frame has this format:

[x][x][x][x][x][x][x][x][x][x][x][x]...
|  (int32) ||  (int32) || (binary)
|  4-byte  ||  4-byte  || N-byte
------------------------------------...
    size      frame ID     data

But when I read the code of nsq, I see:

[x][x][x][x][x][x][x][x]...
|  (int32) || (binary)
|  4-byte  || N-byte
------------------------...
    size       data

Does it mean that size == size(frame ID + data)?
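
That appears to be exactly it: the leading size counts the 4-byte frame ID plus the data that follows. A hedged sketch of reading one response frame:

import (
    "encoding/binary"
    "io"
)

// readFrame reads one response frame: [size][frame ID][data], where size
// covers the frame ID plus the data.
func readFrame(r io.Reader) (frameID int32, data []byte, err error) {
    var size int32
    if err = binary.Read(r, binary.BigEndian, &size); err != nil {
        return
    }
    buf := make([]byte, size) // size == 4 (frame ID) + len(data)
    if _, err = io.ReadFull(r, buf); err != nil {
        return
    }
    frameID = int32(binary.BigEndian.Uint32(buf[:4]))
    data = buf[4:]
    return
}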

fail to run nsq_pubsub.go

I followed the readme and ran nsq_pubsub.go (the file is in my workspace, not in the nsq source directory), but got an error: "nsq_pubsub.go:7:2: local import "../nsq" in non-local package".

I looked at the source file and found a clue: import "../nsq" in nsq/util/topic_channel_args.go. Does that mean the import here is incorrect?

nsqadmin: add client metadata

In #286 we added the ability for a client to sample a channel. This information, along with other IDENTIFY-negotiated features and settings, would be useful to surface in nsqadmin.

cc @elubow
