
liftbridge's Introduction

Liftbridge provides lightweight, fault-tolerant message streams by implementing a durable, replicated, and scalable message log. The vision for Liftbridge is to deliver a "Kafka-lite" solution designed with the Go community first in mind. Unlike Kafka, which is built on the JVM and whose canonical client library is Java (or the C-based librdkafka), Liftbridge and its canonical client, go-liftbridge, are implemented in Go. The ultimate goal of Liftbridge is to provide a lightweight message-streaming solution with a focus on simplicity and usability. Use it as a simpler and lighter alternative to systems like Kafka and Pulsar or to add streaming semantics to an existing NATS deployment.

See the introduction post on Liftbridge and this post for more context and some of the inspiration behind it.

Documentation

Community

liftbridge's People

Contributors

0xflotus, alexrudd, annismckenzie, caioaao, dependabot[bot], dvolodin7, jmgr, lapetitesouris, mfcochauxlaberge, mihaitodor, pervrosen, pvr1, riptl, rl-0x000, ruseinov, sosiska, stephane-moreau, testwill, tsingson, tylertreat, vcabbage

liftbridge's Issues

panic: runtime error: slice bounds out of range

panic: runtime error: slice bounds out of range

goroutine 83 [running]:
github.com/liftbridge-io/liftbridge/server/commitlog.(*Index).WriteAt(0xc42001f020, 0xc4202fc000, 0x3000, 0x4bf4, 0xa01c7c, 0x0)
        /Users/tylertreat/Go/src/github.com/liftbridge-io/liftbridge/server/commitlog/index.go:154 +0xf3
github.com/liftbridge-io/liftbridge/server/commitlog.(*Index).WriteEntries(0xc42001f020, 0xc421bba000, 0x400, 0x400, 0x400, 0x12400)
        /Users/tylertreat/Go/src/github.com/liftbridge-io/liftbridge/server/commitlog/index.go:108 +0x1b1
github.com/liftbridge-io/liftbridge/server/commitlog.(*Segment).WriteMessageSet(0xc4202d05a0, 0xc421a6e000, 0x12400, 0x1cb03, 0xc421bba000, 0x400, 0x400, 0xc42057ffc0, 0xc42016e1c0)
        /Users/tylertreat/Go/src/github.com/liftbridge-io/liftbridge/server/commitlog/segment.go:111 +0xb3
github.com/liftbridge-io/liftbridge/server/commitlog.(*CommitLog).append(0xc4200ae240, 0xc4202d05a0, 0xc421a6e000, 0x12400, 0x1cb03, 0xc421bba000, 0x400, 0x400, 0xc421bba000, 0x400, ...)
        /Users/tylertreat/Go/src/github.com/liftbridge-io/liftbridge/server/commitlog/commitlog.go:207 +0x74
github.com/liftbridge-io/liftbridge/server/commitlog.(*CommitLog).Append(0xc4200ae240, 0xc420436000, 0x400, 0x400, 0xc421462000, 0x400, 0x400, 0x0, 0x0)
        /Users/tylertreat/Go/src/github.com/liftbridge-io/liftbridge/server/commitlog/commitlog.go:186 +0x1a5
github.com/liftbridge-io/liftbridge/server.(*stream).messageProcessingLoop(0xc4200eed10, 0xc42001f080, 0xc42008aa80)
        /Users/tylertreat/Go/src/github.com/liftbridge-io/liftbridge/server/stream.go:465 +0x2a1
github.com/liftbridge-io/liftbridge/server.(*stream).becomeLeader.func1()
        /Users/tylertreat/Go/src/github.com/liftbridge-io/liftbridge/server/stream.go:243 +0x3f
github.com/liftbridge-io/liftbridge/server.(*Server).startGoroutine.func1(0xc42017e5c0, 0xc4201ca0b0)
        /Users/tylertreat/Go/src/github.com/liftbridge-io/liftbridge/server/server.go:632 +0x27
created by github.com/liftbridge-io/liftbridge/server.(*Server).startGoroutine
        /Users/tylertreat/Go/src/github.com/liftbridge-io/liftbridge/server/server.go:631 +0x99

Feature matrix

Make a feature matrix comparing Liftbridge with similar systems to help people understand the differences. This should live in the documentation website.

Consumer groups question

Hi there,

Very interesting project!

I'm busy with an event-sourcing microservice design and looking at all the options. One thing that is missing from Liftbridge is consumer groups, which are on your roadmap. They simplify a lot in a microservice design (HA workflows and load balancing). Is there another way to accomplish this with the current features, or do you plan to add support for something like this? Sorry for asking the question here.

Regards,
Riaan

Will liftbridge server be available for Windows

Currently we're building and running Liftbridge with great success on Mac and Linux (wildcard subscriptions are a huge win!).

I attempted a Go cross-compile from Mac with GOOS=windows, with both Go 1.11 and Go 1.10 (there are changes to the syscall package in Go 1.11, which I thought might be the cause, but the same issue occurs when reverting to Go 1.10).

To reproduce:

cd liftbridge-io/liftbridge
GOOS=windows GOARCH=amd64 go build

github.com/liftbridge-io/liftbridge/vendor/github.com/tysontate/gommap
vendor/github.com/tysontate/gommap/gommap.go:52:12: undefined: syscall.Stat_t
vendor/github.com/tysontate/gommap/gommap.go:53:13: undefined: syscall.Fstat
vendor/github.com/tysontate/gommap/gommap.go:58:15: undefined: mmap_syscall
vendor/github.com/tysontate/gommap/gommap.go:77:30: not enough arguments in call to syscall.Syscall

Looking through the gommap source, there's (perhaps unsurprisingly) no explicit support for platforms other than Linux/Mac.

Not a huge issue for us for now, but is the intention to have a version of Liftbridge that works on Windows in the future?

Once we get our current release out of the way, I'm happy to take a look at gommap in more detail to see whether it is possible to extend or replace it.

Thanks again for the hard work on the project; everything else is working very well for us.

Optimistic concurrency control

Hi,

Is the above on the roadmap?

OffsetOfTheLastMessage := 10 // Obtain the offset of the last message in the stream somehow

// Create a message envelope to publish.
msg := lift.NewMessage([]byte("Hello, world!"), lift.MessageOptions{
    Key:                    []byte("foo"),          // Key to set on the message
    AckInbox:               "asd453dadfa",          // Some random subject name
    OffsetOfTheLastMessage: OffsetOfTheLastMessage, // Proposed option: expected offset of the current last message in the stream
})

nc.Publish("foo.bar", msg)

The leader/master liftbridge process would accept the message if and only if the offset of the last message in the stream is equal to the value of the OffsetOfTheLastMessage field sent in the message options.
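
To make the proposed semantics concrete, here is a minimal, self-contained sketch of the check the leader could perform before appending. The type and field names are hypothetical stand-ins, not part of Liftbridge today.

package main

import (
	"errors"
	"fmt"
	"sync"
)

// ErrOffsetMismatch is what the leader would return when the precondition fails.
var ErrOffsetMismatch = errors.New("expected last offset does not match actual last offset")

// toyLog is a stand-in for the stream leader's commit log, used only for illustration.
type toyLog struct {
	mu       sync.Mutex
	messages [][]byte
}

// ConditionalAppend appends msg only if the offset of the current last message
// equals expectedLastOffset (-1 for an empty log).
func (l *toyLog) ConditionalAppend(msg []byte, expectedLastOffset int64) (int64, error) {
	l.mu.Lock()
	defer l.mu.Unlock()
	last := int64(len(l.messages)) - 1
	if last != expectedLastOffset {
		return -1, ErrOffsetMismatch // another producer appended in the meantime
	}
	l.messages = append(l.messages, msg)
	return last + 1, nil
}

func main() {
	log := &toyLog{}
	offset, err := log.ConditionalAppend([]byte("hello"), -1)
	fmt.Println(offset, err) // 0 <nil>
	_, err = log.ConditionalAppend([]byte("world"), -1)
	fmt.Println(err) // offset mismatch: the log already has one message
}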

Also, there is an unresolved issue about the same thing on Kafka's issue tracker.

Improve NATS envelope header

  • The current envelope cookie "LIFT" has an above-average likelihood of appearing at the beginning of a NATS message, since it's ASCII (and an English word). I think the magic number should ideally be 4 random bytes that are not valid UTF-8.
  • Should we add a checksum to further reduce the probability of an accidental Liftbridge message? I think a CRC32 of the payload is probably sufficient. Flatbuffers-wise it's actually more convenient to put this in the header rather than the footer, but that's probably something I could hack around if we want it in the footer. (See the sketch below.)
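
For illustration, a rough sketch of what a non-UTF-8 magic number plus a payload CRC32 could look like. The specific cookie bytes are arbitrary examples, not a proposal for the actual values.

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// Example cookie: 0xFF and 0xFE never occur in valid UTF-8, so this prefix
// cannot collide with a textual NATS payload. (Arbitrary illustrative bytes.)
var envelopeCookie = []byte{0xFF, 0xFE, 0x4C, 0x42}

// wrap prepends the cookie and a CRC32 of the payload.
func wrap(payload []byte) []byte {
	var sum [4]byte
	binary.BigEndian.PutUint32(sum[:], crc32.ChecksumIEEE(payload))
	out := make([]byte, 0, len(envelopeCookie)+4+len(payload))
	out = append(out, envelopeCookie...)
	out = append(out, sum[:]...)
	return append(out, payload...)
}

// unwrap returns the payload and true only if both the cookie and checksum match,
// so an accidental cookie collision is also guarded by the CRC.
func unwrap(msg []byte) ([]byte, bool) {
	if len(msg) < len(envelopeCookie)+4 || !bytes.Equal(msg[:4], envelopeCookie) {
		return nil, false
	}
	payload := msg[8:]
	if crc32.ChecksumIEEE(payload) != binary.BigEndian.Uint32(msg[4:8]) {
		return nil, false
	}
	return payload, true
}

func main() {
	env := wrap([]byte("hello"))
	payload, ok := unwrap(env)
	fmt.Println(string(payload), ok) // hello true
}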

Single-stream fanout enhancements

In addition to stream partitioning and load-balance groups, there are a couple other enhancements that could help with fanning out a stream to a large number of consumers.

Opt-in ISR replica reads

Allow consumers to read from ISR followers rather than having to read from the leader. This behavior should be opt-in since there are consistency implications, but in cases where this isn't an issue it would allow distributing load amongst ISR replicas.

Read-replica support

Allow Liftbridge servers to act as read replicas for stream partitions without participating in the ISR. This would allow fanout without impacting the ISR. This should also be opt-in behavior.

Message delivery guarantee

I have compiled and tried Liftbridge on top of a cluster of NATS servers. It worked well. In order to append a message to a log managed by Liftbridge, I had to send the message to one of the NATS servers.

I'm probably missing something, but how is message delivery guaranteed*? As far as I can tell, the guarantee offered by NATS is at-most-once.

https://nats.io/documentation/faq/#gmd (more about NATS delivery guarantees)

*If liftbridge is trying to achieve that.

Make the repository a Go module

Is the project going to move to Go modules in the near future?

I removed everything related to dep in my fork of the project and I am now using the repository as a Go module without any problem (at least so far).

I can propose a pull request if you are interested.

Server not stream leader

I am trying to run the basic usage example code from go-liftbridge. However, if the stream leader of a newly created stream is not the first one in the address list, I get the following error when calling client.Subscribe():

panic: rpc error: code = FailedPrecondition desc = Server not stream leader

Is this a bug, or did I make a mistake while setting up my cluster?


In depth:

In my network, dev.lan resolves to the IP of my development PC.
I have a three-node Liftbridge cluster.

Starting the Nats server:

$ nats-server
[4657] 2019/08/14 13:33:08.433251 [INF] Starting nats-server version 2.0.2
[4657] 2019/08/14 13:33:08.433286 [INF] Git commit [not set]
[4657] 2019/08/14 13:33:08.433388 [INF] Listening for client connections on 0.0.0.0:4222
[4657] 2019/08/14 13:33:08.433468 [INF] Server id is NDAGJBAAD4KN3J7ZNVMPT7HJSCGSOCX3YFKG765PCNHUJ6BXOK3HBQRB
[4657] 2019/08/14 13:33:08.433471 [INF] Server is ready

My liftbridge.conf file:

listen: 0.0.0.0:9292
log.level: debug
nats {
  servers: ["dev.lan:4222"]
}
clustering {
  raft.logging: true
}

The peers of the Liftbridge cluster are running inside Docker containers. I built the Docker image from the latest version of Liftbridge (2612491) with the following command:

DOCKER_BUILDKIT=1 docker build -t liftbridge:latest .

Starting the first peer:

$ docker run -it -p "9292:9292" --mount type=bind,source=liftbridge.conf,target=/liftbridge.conf liftbridge:latest --config /liftbridge.conf
INFO[2019-08-14 13:34:52] Server ID:        fp8btGxtuxbqn5LZl64AEf
INFO[2019-08-14 13:34:52] Namespace:        liftbridge-default
INFO[2019-08-14 13:34:52] Retention Policy: [Age: 1 week, Compact: false]
INFO[2019-08-14 13:34:52] Starting server on 0.0.0.0:9292...
INFO[2019-08-14 13:34:52]  raft: Initial configuration (index=0): []
DEBU[2019-08-14 13:34:52] Attempting to join metadata Raft group...
INFO[2019-08-14 13:34:52]  raft: Node at fp8btGxtuxbqn5LZl64AEf [Follower] entering Follower state (Leader: "")
DEBU[2019-08-14 13:34:52] raft-net: fp8btGxtuxbqn5LZl64AEf accepted connection from: RDEU1MVK78fAFhqDoGrvnr
WARN[2019-08-14 13:34:52]  raft: Failed to get previous log: 4 log not found (last: 0)
DEBU[2019-08-14 13:34:53] raft-net: fp8btGxtuxbqn5LZl64AEf accepted connection from: RDEU1MVK78fAFhqDoGrvnr

Second peer; this is also the (Raft) leader:

$ docker run -it -p "9293:9292" --mount type=bind,source=liftbridge.conf,target=/liftbridge.conf liftbridge:latest --raft-bootstrap-seed --config /liftbridge.conf
INFO[2019-08-14 13:34:46] Server ID:        RDEU1MVK78fAFhqDoGrvnr
INFO[2019-08-14 13:34:46] Namespace:        liftbridge-default
INFO[2019-08-14 13:34:46] Retention Policy: [Age: 1 week, Compact: false]
INFO[2019-08-14 13:34:46] Starting server on 0.0.0.0:9292...
INFO[2019-08-14 13:34:46]  raft: Initial configuration (index=0): []
DEBU[2019-08-14 13:34:46] Bootstrapping metadata Raft group as seed node
INFO[2019-08-14 13:34:46]  raft: Node at RDEU1MVK78fAFhqDoGrvnr [Follower] entering Follower state (Leader: "")
WARN[2019-08-14 13:34:47]  raft: Heartbeat timeout from "" reached, starting election
INFO[2019-08-14 13:34:47]  raft: Node at RDEU1MVK78fAFhqDoGrvnr [Candidate] entering Candidate state in term 2
DEBU[2019-08-14 13:34:47] raft: Votes needed: 1
DEBU[2019-08-14 13:34:47] raft: Vote granted from RDEU1MVK78fAFhqDoGrvnr in term 2. Tally: 1
INFO[2019-08-14 13:34:47]  raft: Election won. Tally: 1
INFO[2019-08-14 13:34:47]  raft: Node at RDEU1MVK78fAFhqDoGrvnr [Leader] entering Leader state
INFO[2019-08-14 13:34:47] Server became metadata leader, performing leader promotion actions
INFO[2019-08-14 13:34:52]  raft: Updating configuration with AddStaging (fp8btGxtuxbqn5LZl64AEf, fp8btGxtuxbqn5LZl64AEf) to [{Suffrage:Voter ID:RDEU1MVK78fAFhqDoGrvnr Address:RDEU1MVK78fAFhqDoGrvnr} {Suffrage:Voter ID:fp8btGxtuxbqn5LZl64AEf Address:fp8btGxtuxbqn5LZl64AEf}]
INFO[2019-08-14 13:34:52]  raft: Added peer fp8btGxtuxbqn5LZl64AEf, starting replication
WARN[2019-08-14 13:34:52]  raft: AppendEntries to {Voter fp8btGxtuxbqn5LZl64AEf fp8btGxtuxbqn5LZl64AEf} rejected, sending older logs (next: 1)
INFO[2019-08-14 13:34:52]  raft: pipelining replication to peer {Voter fp8btGxtuxbqn5LZl64AEf fp8btGxtuxbqn5LZl64AEf}
INFO[2019-08-14 13:34:54]  raft: Updating configuration with AddStaging (GNw4jnTEFA2OzvbYV3Pdyq, GNw4jnTEFA2OzvbYV3Pdyq) to [{Suffrage:Voter ID:RDEU1MVK78fAFhqDoGrvnr Address:RDEU1MVK78fAFhqDoGrvnr} {Suffrage:Voter ID:fp8btGxtuxbqn5LZl64AEf Address:fp8btGxtuxbqn5LZl64AEf} {Suffrage:Voter ID:GNw4jnTEFA2OzvbYV3Pdyq Address:GNw4jnTEFA2OzvbYV3Pdyq}]
INFO[2019-08-14 13:34:54]  raft: Added peer GNw4jnTEFA2OzvbYV3Pdyq, starting replication
WARN[2019-08-14 13:34:54]  raft: AppendEntries to {Voter GNw4jnTEFA2OzvbYV3Pdyq GNw4jnTEFA2OzvbYV3Pdyq} rejected, sending older logs (next: 1)
INFO[2019-08-14 13:34:54]  raft: pipelining replication to peer {Voter GNw4jnTEFA2OzvbYV3Pdyq GNw4jnTEFA2OzvbYV3Pdyq}

Third peer:

$ docker run -it -p "9294:9292" --mount type=bind,source=liftbridge.conf,target=/liftbridge.conf liftbridge:latest --config /liftbridge.conf
INFO[2019-08-14 13:34:54] Server ID:        GNw4jnTEFA2OzvbYV3Pdyq
INFO[2019-08-14 13:34:54] Namespace:        liftbridge-default
INFO[2019-08-14 13:34:54] Retention Policy: [Age: 1 week, Compact: false]
INFO[2019-08-14 13:34:54] Starting server on 0.0.0.0:9292...
INFO[2019-08-14 13:34:54]  raft: Initial configuration (index=0): []
DEBU[2019-08-14 13:34:54] Attempting to join metadata Raft group...
INFO[2019-08-14 13:34:54]  raft: Node at GNw4jnTEFA2OzvbYV3Pdyq [Follower] entering Follower state (Leader: "")
DEBU[2019-08-14 13:34:54] raft-net: GNw4jnTEFA2OzvbYV3Pdyq accepted connection from: RDEU1MVK78fAFhqDoGrvnr
WARN[2019-08-14 13:34:54]  raft: Failed to get previous log: 5 log not found (last: 0)
DEBU[2019-08-14 13:34:54] raft-net: GNw4jnTEFA2OzvbYV3Pdyq accepted connection from: RDEU1MVK78fAFhqDoGrvnr

go-liftbridge's version from go.mod:

# go.mod
github.com/liftbridge-io/go-liftbridge v0.0.0-20190704003903-285fd55a55b5

I only changed the addrs variable from the example code, which now looks as follows:

addrs := []string{"dev.lan:9292", "dev.lan:9293", "dev.lan:9294"}

When I run the example I get the error above, and Liftbridge logs the following messages:

# Peer1
DEBU[2019-08-14 13:45:26] api: FetchMetadata []                                                                                                                                                                                                       [7/1989]
DEBU[2019-08-14 13:45:26] api: CreateStream [subject=foo, name=foo-stream, replicationFactor=1]                                                                    
DEBU[2019-08-14 13:45:26] fsm: Created stream [subject=foo, name=foo-stream]                             
DEBU[2019-08-14 13:45:26] api: Publish [subject=foo]
DEBU[2019-08-14 13:45:26] api: FetchMetadata []
DEBU[2019-08-14 13:45:26] api: Subscribe [subject=foo, name=foo-stream, start=EARLIEST, offset=0, timestamp=-6795364578871345152]
ERRO[2019-08-14 13:45:26] api: Failed to subscribe to stream [subject=foo, name=foo-stream]: server not stream leader
DEBU[2019-08-14 13:45:26] api: FetchMetadata []
DEBU[2019-08-14 13:45:26] api: Subscribe [subject=foo, name=foo-stream, start=EARLIEST, offset=0, timestamp=-6795364578871345152]
ERRO[2019-08-14 13:45:26] api: Failed to subscribe to stream [subject=foo, name=foo-stream]: server not stream leader
DEBU[2019-08-14 13:45:26] api: FetchMetadata []
DEBU[2019-08-14 13:45:26] api: Subscribe [subject=foo, name=foo-stream, start=EARLIEST, offset=0, timestamp=-6795364578871345152]
ERRO[2019-08-14 13:45:26] api: Failed to subscribe to stream [subject=foo, name=foo-stream]: server not stream leader
DEBU[2019-08-14 13:45:26] api: FetchMetadata []
DEBU[2019-08-14 13:45:26] api: Subscribe [subject=foo, name=foo-stream, start=EARLIEST, offset=0, timestamp=-6795364578871345152]
ERRO[2019-08-14 13:45:26] api: Failed to subscribe to stream [subject=foo, name=foo-stream]: server not stream leader
DEBU[2019-08-14 13:45:26] api: FetchMetadata []

# Peer 2:
DEBU[2019-08-14 13:45:26] Server becoming leader for stream [subject=foo, name=foo-stream], epoch: 6                                    
DEBU[2019-08-14 13:45:26] Updated stream leader epoch. New: {epoch:6, offset:-1}, Previous: {epoch:0, offset:-1} for stream [subject=foo, name=foo-stream]. Cache now contains 1 entry.
DEBU[2019-08-14 13:45:26] fsm: Created stream [subject=foo, name=foo-stream] 

# Peer 3
DEBU[2019-08-14 13:45:26] fsm: Created stream [subject=foo, name=foo-stream]

Contributing

Hi, how open is this project to outside contributions?

Related, is there an expectation it will ever be merged back into "mainline" nats work?

Is development here ongoing? Does it relate in any way to the mention of "Jetstream" on the nats-io project page?

Fill in autogenerated correlationId if sync publish

I think the current implementation of the sync Publish may listen on a shared ackInbox without a correlationId. It should always force a non-empty correlationId and verify it when the acks come in. For fire-and-forget, that verification is left to the caller, which I think is reasonable, but if we're internally blocking and waiting we should make sure we wait for the right one.
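
To illustrate the idea, here is a rough sketch of the waiting side; the ack type is a local stand-in, not the actual go-liftbridge internals. The sync path would generate a correlation ID (e.g. with NUID), attach it to the published message, and then discard acks on the shared inbox whose correlation ID doesn't match.

package main

import (
	"errors"
	"fmt"
	"time"

	"github.com/nats-io/nuid"
)

// ack is a local stand-in for the ack type delivered on the shared ack inbox.
type ack struct {
	CorrelationId string
}

// waitForAck blocks until an ack carrying the expected correlation ID arrives,
// skipping acks that belong to other in-flight publishes on the shared inbox.
func waitForAck(acks <-chan ack, correlationID string, timeout time.Duration) (ack, error) {
	deadline := time.After(timeout)
	for {
		select {
		case a := <-acks:
			if a.CorrelationId == correlationID {
				return a, nil
			}
			// Ack for some other publish; keep waiting for ours.
		case <-deadline:
			return ack{}, errors.New("timed out waiting for ack")
		}
	}
}

func main() {
	cid := nuid.Next() // unique correlation ID the sync publish would attach to the message
	acks := make(chan ack, 2)
	acks <- ack{CorrelationId: "someone-else"}
	acks <- ack{CorrelationId: cid}

	a, err := waitForAck(acks, cid, time.Second)
	fmt.Println(a.CorrelationId == cid, err) // true <nil>
}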

Chaos testing

Implement chaos testing to validate correctness and guarantees.

Embed NATS

I was wondering if we can support NATS embedding? This is when you import the NATS server into your own server process.

For my use case I intend to run Liftbridge on desktops.
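
For reference, embedding the NATS server as a library is already possible in plain Go. A minimal sketch (assuming the nats-server v2 module; host and port are placeholders) looks roughly like this, with Liftbridge then pointed at the embedded instance:

package main

import (
	"log"
	"time"

	"github.com/nats-io/nats-server/v2/server"
)

func main() {
	// Start an embedded NATS server in-process.
	opts := &server.Options{Host: "127.0.0.1", Port: 4222}
	ns, err := server.NewServer(opts)
	if err != nil {
		log.Fatal(err)
	}
	go ns.Start()
	if !ns.ReadyForConnections(5 * time.Second) {
		log.Fatal("embedded NATS server did not start in time")
	}

	// Block forever; a Liftbridge server on this machine could now be configured
	// with nats.servers: ["nats://127.0.0.1:4222"] without a separate nats-server process.
	select {}
}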

Some confusion about go.mod (module split)

In the liftbridge go.mod:

module github.com/liftbridge-io/liftbridge

go 1.12

require (
	....
	github.com/liftbridge-io/go-liftbridge v0.0.0-20190703015712-9f8cb1ad3118
	...
)

And in the go-liftbridge go.mod:

module github.com/liftbridge-io/go-liftbridge

go 1.12

require (
...
	github.com/liftbridge-io/liftbridge v0.0.0-20190703152401-d5d0fe2ef597
...
)

This is a cross-dependency, so here is a suggestion:

On the server side: import the gRPC protobuf definition file from https://github.com/liftbridge-io/liftbridge-grpc and generate a pb.go for the server's gRPC code.

On the client side: in the same way, import the proto file from https://github.com/liftbridge-io/liftbridge-grpc and generate pb.go (or equivalent) client code for clients in Go, Java, and so on.

This way the cross-dependency between the server and the Go client is broken, and the server and clients can be released independently.

How to build a docker image?

I tried to build a Docker image based on the latest code. I did the following:

go get github.com/liftbridge-io/liftbridge
cd $(go env GOPATH)/src/github.com/liftbridge-io/liftbridge
docker build -t liftbridge:latest .

But I got the following error:

Sending build context to Docker daemon  5.889MB
Step 1/11 : FROM golang:1.9-alpine as build-base
 ---> b0260be938c6
Step 2/11 : RUN apk update && apk upgrade &&     apk add --no-cache bash git openssh make
 ---> Using cache
 ---> 84c05703df2d
Step 3/11 : ADD . /go/src/github.com/liftbridge-io/liftbridge
 ---> 0f6471c8f6ec
Step 4/11 : WORKDIR /go/src/github.com/liftbridge-io/liftbridge
 ---> Running in 71cb62433619
Removing intermediate container 71cb62433619
 ---> 0d5ff66e07c8
Step 5/11 : RUN go get golang.org/x/net/...
 ---> Running in 0636c8d79030
Removing intermediate container 0636c8d79030
 ---> 683c9c0f8a8d
Step 6/11 : RUN GOOS=linux GOARCH=amd64 go build
 ---> Running in 1c9c7fe77623
server/stream.go:11:2: cannot find package "github.com/Workiva/go-datastructures/queue" in any of:
	/usr/local/go/src/github.com/Workiva/go-datastructures/queue (from $GOROOT)
	/go/src/github.com/Workiva/go-datastructures/queue (from $GOPATH)
server/config.go:10:2: cannot find package "github.com/dustin/go-humanize" in any of:
	/usr/local/go/src/github.com/dustin/go-humanize (from $GOROOT)
	/go/src/github.com/dustin/go-humanize (from $GOPATH)
server/fsm.go:10:2: cannot find package "github.com/dustin/go-humanize/english" in any of:
	/usr/local/go/src/github.com/dustin/go-humanize/english (from $GOROOT)
	/go/src/github.com/dustin/go-humanize/english (from $GOPATH)
server/proto/internal.pb.go:35:8: cannot find package "github.com/golang/protobuf/proto" in any of:
	/usr/local/go/src/github.com/golang/protobuf/proto (from $GOROOT)
	/go/src/github.com/golang/protobuf/proto (from $GOPATH)
server/config.go:11:2: cannot find package "github.com/hako/durafmt" in any of:
	/usr/local/go/src/github.com/hako/durafmt (from $GOROOT)
	/go/src/github.com/hako/durafmt (from $GOPATH)
server/fsm.go:11:2: cannot find package "github.com/hashicorp/raft" in any of:
	/usr/local/go/src/github.com/hashicorp/raft (from $GOROOT)
	/go/src/github.com/hashicorp/raft (from $GOPATH)
server/raft.go:13:2: cannot find package "github.com/hashicorp/raft-boltdb" in any of:
	/usr/local/go/src/github.com/hashicorp/raft-boltdb (from $GOROOT)
	/go/src/github.com/hashicorp/raft-boltdb (from $GOPATH)
server/api.go:7:2: cannot find package "github.com/liftbridge-io/go-liftbridge/liftbridge-grpc" in any of:
	/usr/local/go/src/github.com/liftbridge-io/go-liftbridge/liftbridge-grpc (from $GOROOT)
	/go/src/github.com/liftbridge-io/go-liftbridge/liftbridge-grpc (from $GOPATH)
server/raft.go:16:2: cannot find package "github.com/liftbridge-io/nats-on-a-log" in any of:
	/usr/local/go/src/github.com/liftbridge-io/nats-on-a-log (from $GOROOT)
	/go/src/github.com/liftbridge-io/nats-on-a-log (from $GOPATH)
server/commitlog/commitlog.go:17:2: cannot find package "github.com/natefinch/atomic" in any of:
	/usr/local/go/src/github.com/natefinch/atomic (from $GOROOT)
	/go/src/github.com/natefinch/atomic (from $GOPATH)
server/api.go:8:2: cannot find package "github.com/nats-io/nats.go" in any of:
	/usr/local/go/src/github.com/nats-io/nats.go (from $GOROOT)
	/go/src/github.com/nats-io/nats.go (from $GOPATH)
server/api.go:9:2: cannot find package "github.com/nats-io/nuid" in any of:
	/usr/local/go/src/github.com/nats-io/nuid (from $GOROOT)
	/go/src/github.com/nats-io/nuid (from $GOPATH)
server/commitlog/index.go:11:2: cannot find package "github.com/nsip/gommap" in any of:
	/usr/local/go/src/github.com/nsip/gommap (from $GOROOT)
	/go/src/github.com/nsip/gommap (from $GOPATH)
server/commitlog/commitlog.go:18:2: cannot find package "github.com/pkg/errors" in any of:
	/usr/local/go/src/github.com/pkg/errors (from $GOROOT)
	/go/src/github.com/pkg/errors (from $GOPATH)
server/logger/logger.go:6:2: cannot find package "github.com/sirupsen/logrus" in any of:
	/usr/local/go/src/github.com/sirupsen/logrus (from $GOROOT)
	/go/src/github.com/sirupsen/logrus (from $GOPATH)
main.go:12:2: cannot find package "github.com/urfave/cli" in any of:
	/usr/local/go/src/github.com/urfave/cli (from $GOROOT)
	/go/src/github.com/urfave/cli (from $GOPATH)
server/server.go:21:2: cannot find package "google.golang.org/grpc" in any of:
	/usr/local/go/src/google.golang.org/grpc (from $GOROOT)
	/go/src/google.golang.org/grpc (from $GOPATH)
server/api.go:11:2: cannot find package "google.golang.org/grpc/codes" in any of:
	/usr/local/go/src/google.golang.org/grpc/codes (from $GOROOT)
	/go/src/google.golang.org/grpc/codes (from $GOPATH)
server/server.go:22:2: cannot find package "google.golang.org/grpc/credentials" in any of:
	/usr/local/go/src/google.golang.org/grpc/credentials (from $GOROOT)
	/go/src/google.golang.org/grpc/credentials (from $GOPATH)
server/api.go:12:2: cannot find package "google.golang.org/grpc/status" in any of:
	/usr/local/go/src/google.golang.org/grpc/status (from $GOROOT)
	/go/src/google.golang.org/grpc/status (from $GOPATH)
The command '/bin/sh -c GOOS=linux GOARCH=amd64 go build' returned a non-zero code: 1

The strange thing is that I used the same steps to successfully build the image with an earlier version of Liftbridge.

Also, what is the benefit of having go.mod and go.sum files in the project when the Dockerfile does not seem to use Go's module feature?

EDIT:

Upon go getting Liftbridge

go get -v github.com/liftbridge-io/liftbridge

I get the following warnings/errors:

github.com/liftbridge-io/liftbridge/server
# github.com/liftbridge-io/liftbridge/server
liftbridge/server/api.go:112:55: undefined: "github.com/liftbridge-io/go-liftbridge/liftbridge-grpc".PublishRequest
liftbridge/server/api.go:113:3: undefined: "github.com/liftbridge-io/go-liftbridge/liftbridge-grpc".PublishResponse
liftbridge/server/api.go:137:24: undefined: "github.com/liftbridge-io/go-liftbridge/liftbridge-grpc".PublishResponse

Auth/AZ questions with NATS interoperability

First of all, I really like what you've done here. As I look more closely I'd like to have a good understanding of some details. I didn't want to hijack the other issue so I've opened this one for my questions.

Unlike NATS Streaming, it uses the core NATS protocol with optional extensions. This means it can be added to an existing NATS deployment to provide message durability with no code changes.

If authentication and authorization are not using NATS auth/az - how does this mean no code changes?

On first read of your blog post and other information, it seems like Liftbridge is the solution that NATS Streaming should be (or wanted to be?). I'm trying to find a good streaming solution; NATS looks great, but it seems the two simply don't fit together.

As I look at NATS Streaming more closely, I don't see authorization for streaming clients there either. Can you help me understand what the options are to authorize connections, based on your past interaction with the NATS Streaming project and current efforts on Liftbridge?

Finally, if it's just easier to point me to a tutorial or example, that's OK too :)

Latest offset in stream metadata

I am not sure how costly this would be to maintain in practice, but the use case is to let consumers of single-producer streams know when the end of the stream has been reached.

There is a way to emulate this which relies on a timeout on message receive latency. Assuming some p99.x of this latency is stable, this addition may not be necessary.
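
A rough sketch of that emulation, mirroring the go-liftbridge Subscribe signature used elsewhere on this page; the addresses, stream name, and idle timeout are placeholders:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	lift "github.com/liftbridge-io/go-liftbridge"
	proto "github.com/liftbridge-io/go-liftbridge/liftbridge-grpc"
)

func main() {
	client, err := lift.Connect([]string{"localhost:9292"})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	msgs := make(chan *proto.Message)
	err = client.Subscribe(context.Background(), "foo", "foo-stream",
		func(msg *proto.Message, err error) {
			if err != nil {
				log.Println("subscribe error:", err)
				return
			}
			msgs <- msg
		})
	if err != nil {
		log.Fatal(err)
	}

	// Treat an idle period with no messages as "caught up to the end of the
	// stream". This is only reliable for a single producer with stable latency.
	const idleTimeout = 2 * time.Second
	for {
		select {
		case msg := <-msgs:
			fmt.Printf("offset %d: %s\n", msg.Offset, msg.Value)
		case <-time.After(idleTimeout):
			fmt.Println("no messages for", idleTimeout, "- assuming end of stream")
			return
		}
	}
}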

Best GRPC Cli?

I tried grpc_cli to query a Liftbridge server and it failed (grpc_cli ls localhost:9292).

Does anyone know a gRPC client that works with Liftbridge?

P.S. Does Liftbridge support gRPC reflection? (I think that is required by grpc_cli.)

Webhook extension

Hi all,
I want to discuss a webhook extension.

The main idea is to add a webhook gateway feature to Liftbridge.
Pub/sub clients register to some subject and listen for messages.
How can we extend the regular behavior and deliver the messages with an HTTP request instead?
Let's say that on registration the client declares a URL to which the messages are sent in a known format.

What do you think about the approach, and how should the design look?
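
To make the proposal concrete, here is a very rough sketch of what such a gateway could do, mirroring the go-liftbridge API used elsewhere on this page. The registration step and webhook URL are hypothetical.

package main

import (
	"bytes"
	"context"
	"log"
	"net/http"

	lift "github.com/liftbridge-io/go-liftbridge"
	proto "github.com/liftbridge-io/go-liftbridge/liftbridge-grpc"
)

func main() {
	client, err := lift.Connect([]string{"localhost:9292"})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Hypothetical registration: the subscriber declared this URL up front.
	webhookURL := "https://example.com/hooks/foo-stream"

	// Forward every message on the stream to the registered webhook.
	err = client.Subscribe(context.Background(), "foo", "foo-stream",
		func(msg *proto.Message, err error) {
			if err != nil {
				log.Println("subscribe error:", err)
				return
			}
			resp, err := http.Post(webhookURL, "application/octet-stream", bytes.NewReader(msg.Value))
			if err != nil {
				log.Println("webhook delivery failed:", err)
				return
			}
			resp.Body.Close()
		})
	if err != nil {
		log.Fatal(err)
	}
	select {} // block forever
}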

Production ready.

Hi,
This project looks awesome.
Do you know when it is going to be production ready?

Thanks.

Failed to install

Today, trying to install Liftbridge on a fresh Vagrant box (Ubuntu 16.04 LTS with go version go1.12.7 linux/amd64), I got the following error:

vagrant@ubuntu-xenial:~$ go get github.com/liftbridge-io/liftbridge
# github.com/liftbridge-io/nats-on-a-log
go/src/github.com/liftbridge-io/nats-on-a-log/nats_transport.go:373:53: cannot use logger (type *log.Logger) as type hclog.Logger in argument to raft.NewNetworkTransportWithLogger:
	*log.Logger does not implement hclog.Logger (missing Debug method)
go/src/github.com/liftbridge-io/nats-on-a-log/nats_transport.go:386:60: cannot use config.Logger (type hclog.Logger) as type *log.Logger in argument to createNATSTransport

Alter Replication Protocol to use Leader Epoch rather than High Watermark for Truncation

Currently, when a replica starts, it truncates its log up to the high watermark (HW) to remove any potentially uncommitted messages. There are a couple edge cases with this method of truncating the log that could result in data loss or replica divergence.

These can be solved by using a leader epoch rather than the high watermark for truncation. This solution was implemented in Kafka and is described here. Liftbridge already implements a leader epoch, so this should be fairly straightforward to change.

Revisit config format

Consider moving away from NATS config format and flattening/simplifying config. Also ensure config names follow a pattern, e.g. log.level and log.recovery instead of log.level and show.recovery.logs.
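
For example, a flattened configuration with a consistent dotted naming pattern might look like this; the key names here are hypothetical and shown only to illustrate the suggestion:

listen: 0.0.0.0:9292
log.level: debug
log.recovery: true
nats.servers: ["nats://127.0.0.1:4222"]
clustering.raft.bootstrap.seed: true
clustering.raft.logging: true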

Visibility of messages between Liftbridge and NATS

The README does a great job describing how Liftbridge is not NATS Streaming:

NATS Streaming provides a similar log-based messaging solution. However, it is an entirely separate protocol built on top of NATS. NATS is simply the transport for NATS Streaming. This means there is no "cross-talk" between messages published to NATS and messages published to NATS Streaming.

But in the next paragraph I'm missing a key insight: Does Liftbridge allow "cross-talk"? Are messages I publish with the Liftbridge client visible to a "standard" NATS client? Can a NATS client subscribe to the same channels and see both "standard" and "liftbridge" messages?

Liftbridge was built to augment NATS with durability rather than providing a completely separate system. NATS Streaming also provides a broader set of features such as durable subscriptions, queue groups, pluggable storage backends, and multiple fault-tolerance modes. Liftbridge aims to have a small API surface area.

Dart Client

If you're interested in using Liftbridge with Flutter, it might be interesting to know that a Dart client has been raised as a quasi-proposal over at the NATS repo:
nats-io/nats-general#24

As it stands, if you were building a Flutter app against a Liftbridge server and the Flutter app wants to publish a message, it needs both gRPC and NATS libraries.

For subscriptions, the client only depends on gRPC, so this works today.

I should also mention that it's not just great for Flutter apps, but also for Dart services or gateways, where you might want to use Liftbridge to connect up or orchestrate/choreograph your microservices.

Anyway, here's hoping this happens!

Failed to create stream reader: entry not found

I just ran the go-liftbridge "Basic Usage" example against a local single-node Liftbridge server and then tried to push some messages to it using:

package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
	// Simple publisher: push one million messages to the "foo" subject.
	for i := 0; i < 1000000; i++ {
		if err := nc.Publish("foo", []byte(fmt.Sprintf("Hello World %v", i))); err != nil {
			log.Fatal(err)
		}
	}
	if err := nc.Flush(); err != nil {
		log.Fatal(err)
	}
}

After around 300000 messages the go-liftbridge example panicked with "panic: rpc error: code = Unknown desc = entry not found".

I suspect this is actually an issue with the Liftbridge server, not go-liftbridge, because if I re-run the go-liftbridge example (after it panics) the server shows the following:

ERRO[2018-09-03 12:56:27] api: Failed to subscribe to stream [subject=foo, name=foo-stream]: rpc error: code = Internal desc = Failed to create stream reader: entry not found

Is this a bug or am I missing something somewhere?

Stream-level configs

Currently, stream configs such as compaction and retention limits are global to all streams. We will need to support stream-level configs.

Transparent offload to object storage

Pulsar and Gazette both support this feature. Closed, immutable segments are copied to object storage so brokers only need to keep the open segments on local disk. When a consumer needs to read older segments the broker transparently reads from object storage.

Obviously not a pressing feature until storage scale becomes an issue, but just wanted to create an issue to discuss.

Help Building GRPC/Ruby Client

I'm building a simple Ruby client to learn Liftbridge and gRPC.

See my WIP at https://github.com/andyl/liftbridge_ruby

I have generated the language bindings using protoc but could use some help figuring out how to use the bindings in an executable Ruby script (lb_client).

Is anyone interested in a quick pair-programming session, or in sending me a PR?

Quorum size limit

Currently, all servers in a cluster participate in the metadata Raft quorum. This severely limits scalability of the cluster (e.g. in a 100-node cluster, 51 servers have to respond to commit any change, Raft exchanges O(n^2) messages, etc.).

Liftbridge should allow only a subset of servers to form the Raft quorum. The Raft library should make this fairly straightforward, since non-quorum servers can still run Raft as non-voters, receiving committed logs and following the state without increasing the quorum size. The challenge here is the UX for promoting one of them to a quorum member if an existing quorum member fails, but the Raft library provides a way to do this.
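
For reference, a sketch of how this could look with the hashicorp/raft API that Liftbridge already uses, where a standby server is added as a non-voter and later promoted; the IDs, addresses, and surrounding wiring are placeholders:

// Package quorum sketches how a standby server could follow the metadata Raft
// group without being a voter, using the hashicorp/raft API.
package quorum

import (
	"time"

	"github.com/hashicorp/raft"
)

// AddStandby adds a server as a non-voter: it replicates the committed log and
// follows the FSM state but does not count toward the quorum size.
func AddStandby(r *raft.Raft, id, addr string) error {
	return r.AddNonvoter(raft.ServerID(id), raft.ServerAddress(addr), 0, 10*time.Second).Error()
}

// PromoteStandby turns a non-voter into a voting member, e.g. after an existing
// quorum member has failed and been removed with RemoveServer.
func PromoteStandby(r *raft.Raft, id, addr string) error {
	return r.AddVoter(raft.ServerID(id), raft.ServerAddress(addr), 0, 10*time.Second).Error()
}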

backpressure?

Hi Tyler! Cool looking project!

A problem with basic nats is that it is very easy for a fast producer to overrun even a lone consumer. There's no backpressure built in. Consumers get cut off for being slow.

Kafka gives backpressure. So I wonder if Liftbridge will as well--so as to remedy this problem with basic nats?

Jason

Grpc and proto

Great project and idea.

I wanted to share some feedback rather than a real issue per se. The main reason I've moved away from NATS Streaming and passed on Liftbridge has been the heavy use of protobufs.

Most implementations of protobufs allocate, some heavily so, making it pretty terrible for low-latency messaging. I ended up doing something very similar to Liftbridge (albeit much more barebones) on top of pure NATS for that reason alone.

I understand it's a trade-off between simplicity, adoption, and speed, but figured it might be useful feedback: sooner or later, even if you optimize the storage pipeline, protos will become a limiting factor, or at least they did in my case.

It would be great to see a streaming messaging system supporting pluggable ser/des formats; that would really bring something great to the table. Or at least one that uses an encoding that doesn't allocate on the hot path: FlatBuffers, Cap'n Proto, SBE (the latter is only good for numeric data, so unlikely to be useful for the general case).

👍 👍

Authentication and Authorization

Hi,

I see that authentication and authorization are among the planned features.
Are there any specific plans for using an embedded DB, file-based storage, or something else? Any reference architecture would help.

Thanks

Connect to NATS cluster with username/password

In the NATS server documentation I found that you can connect to a NATS server with the Go NATS client using a username and password in the URL. When I use a username and password in the NATS server URL for Liftbridge I get the following message:

panic: failed to connect to NATS: parse nats://myuser:mypassword: invalid port ":mypassword" after host

(I have replaced the username and password with placeholders; the actual password has a special character in it (^).)

Should I connect in some other way to use a username/password for the NATS cluster, or is this not possible yet?
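
If the problem is the special character, percent-encoding the credentials in the URL may help. A small sketch using the standard library to build a correctly encoded NATS URL; the host and credentials are placeholders:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Build a NATS URL with the credentials percent-encoded, so characters
	// like ^ in the password cannot break URL parsing.
	u := &url.URL{
		Scheme: "nats",
		User:   url.UserPassword("myuser", "my^password"),
		Host:   "dev.lan:4222",
	}
	fmt.Println(u.String()) // prints the URL with the ^ percent-encoded (%5E)
}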

Encapsulate the NATS protocol with GRPC

The intent is to not expose the NATS import to the clients.
This means we are building a NATS proxy and exposing it over gRPC to the clients.

The payoff is that any language that can use gRPC (many) can use NATS.

Add higher-level stream partitioning

Liftbridge currently has a couple mechanisms for horizontal scaling of message consumption:

  1. Creating multiple streams bound to the same NATS subject: This allows adding more brokers such that the streams are spread across them which in turn allows adding more consumers. The downside of this is that it's redundant processing of messages—consumers of each stream are all processing the same set of messages. The other issue is because each stream is independent of each other, they have separate guarantees. Separate leaders/followers, separate ISR, and separate acking means the logs for each stream are not guaranteed to be identical, even though they are bound to the same NATS subject.

  2. Streams can join a load-balance group, which load balances a NATS subject among the streams in the group. This effectively partitions a NATS subject across a set of streams. This means we can add more streams to the group, which are distributed across the cluster, and consumers can independently consume different parts of the subject. The downside here is the partitioning is random since load-balance groups rely on NATS queue groups.

I would like to add support for higher-level stream partitioning, similar to Kafka's topic partitioning. The idea being we would map Liftbridge stream partitions to NATS subjects which would individually map to Liftbridge streams. Here is how this would work:

  1. Client creates a "partitioned" stream using a new StreamOption which specifies the number of partitions to create.
client.CreateStream(ctx, "foo.bar", "my-stream", lift.Partitioned(3))

Internally, this would create three streams in Liftbridge mapped to the following NATS subjects: foo.bar.0, foo.bar.1, and foo.bar.2.

  2. Client publishes messages to the NATS subject using a new MessageOption which specifies a partition mapping policy. This may be based on the message key, round robin, or even a custom mapping policy. Alternatively, the client could explicitly specify which partition to publish to. Internally, the client will need to fetch the number of partitions for a subject from the metadata leader. If no partition mapping policy is specified, the existing publish behavior is used, i.e. publish to the subject literal foo.bar instead of foo.bar.0, foo.bar.1, or foo.bar.2.
client.Publish(ctx, "foo.bar", []byte("hello world!"),
        lift.Key([]byte("foo")),
        lift.PartitionKey(),
)
  3. Client subscribes to a stream partition using a new SubscriptionOption which specifies the partition to consume from. Additionally, we will implement consumer groups similar to Kafka to consume an entire stream such that partitions are balanced among consumers.
client.Subscribe(ctx, "foo.bar", "my-stream", func(msg *proto.Message, err error) {
        // ...
}, lift.Partition(1))
