
cete's Introduction

Cete

Cete is a distributed key-value store server written in Go, built on top of BadgerDB.
It exposes its functionality over gRPC (HTTP/2 + Protocol Buffers) or a traditional RESTful API (HTTP/1.1 + JSON).
Cete implements the Raft consensus algorithm via hashicorp/raft. It achieves consensus across all node instances, ensuring that every change made to the system is applied to a quorum of nodes, or to none at all.
Cete makes it easy to bring up a cluster of BadgerDB instances (a cete of badgers).

Features

  • Easy deployment
  • Easy cluster bring-up
  • Database replication
  • An easy-to-use HTTP API
  • CLI is also available
  • Docker container image is available

Building Cete

Once the dependencies are satisfied, build Cete for Linux as follows:

$ mkdir -p ${GOPATH}/src/github.com/mosuka
$ cd ${GOPATH}/src/github.com/mosuka
$ git clone https://github.com/mosuka/cete.git
$ cd cete
$ make build

To build for another platform, set the GOOS and GOARCH environment variables. For example, to build for macOS:

$ make GOOS=darwin build

Binaries

When the build succeeds, you can find the binary like so:

$ ls ./bin
cete

Testing Cete

To test your changes, run the following command:

$ make test

Packaging Cete

Linux

$ make GOOS=linux dist

macOS

$ make GOOS=darwin dist

Configure Cete

CLI flag            | Environment variable   | Config file key   | Description
--------------------|------------------------|-------------------|------------
--config-file       | -                      | -                 | config file; if omitted, cete.yaml in /etc and the home directory will be searched
--id                | CETE_ID                | id                | node ID
--raft-address      | CETE_RAFT_ADDRESS      | raft_address      | Raft server listen address
--grpc-address      | CETE_GRPC_ADDRESS      | grpc_address      | gRPC server listen address
--http-address      | CETE_HTTP_ADDRESS      | http_address      | HTTP server listen address
--data-directory    | CETE_DATA_DIRECTORY    | data_directory    | data directory that stores the key-value data and Raft logs
--peer-grpc-address | CETE_PEER_GRPC_ADDRESS | peer_grpc_address | gRPC listen address of an existing node in the cluster to join
--certificate-file  | CETE_CERTIFICATE_FILE  | certificate_file  | path to the client/server TLS certificate file
--key-file          | CETE_KEY_FILE          | key_file          | path to the client/server TLS key file
--common-name       | CETE_COMMON_NAME       | common_name       | certificate common name
--log-level         | CETE_LOG_LEVEL         | log_level         | log level
--log-file          | CETE_LOG_FILE          | log_file          | log file
--log-max-size      | CETE_LOG_MAX_SIZE      | log_max_size      | max size of a log file in megabytes
--log-max-backups   | CETE_LOG_MAX_BACKUPS   | log_max_backups   | max backup count of log files
--log-max-age       | CETE_LOG_MAX_AGE       | log_max_age       | max age of a log file in days
--log-compress      | CETE_LOG_COMPRESS      | log_compress      | compress rotated log files
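
For reference, here is a minimal sketch of what a cete.yaml could look like, using the configuration keys from the table above; the values mirror the single-node example below and are illustrative only:

id: node1
raft_address: ":7000"
grpc_address: ":9000"
http_address: ":8000"
data_directory: /tmp/cete/node1
log_level: INFO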

Starting Cete node

Starting a Cete node is as easy as:

$ ./bin/cete start --id=node1 --raft-address=:7000 --grpc-address=:9000 --http-address=:8000 --data-directory=/tmp/cete/node1

You can get the node information with the following command:

$ ./bin/cete node | jq .

or the following URL:

$ curl -X GET http://localhost:8000/v1/node | jq .

The result of the above command is:

{
  "node": {
    "raft_address": ":7000",
    "metadata": {
      "grpc_address": ":9000",
      "http_address": ":8000"
    },
    "state": "Leader"
  }
}

Health check

You can check the health status of the node.

$ ./bin/cete healthcheck | jq .

Cete also provides the following REST APIs.

Liveness probe

This endpoint always returns 200 and should be used to check that Cete is running.

$ curl -X GET http://localhost:8000/v1/liveness_check | jq .

Readiness probe

This endpoint returns 200 when Cete is ready to serve traffic (i.e. respond to queries).

$ curl -X GET http://localhost:8000/v1/readiness_check | jq .
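
If you run Cete on Kubernetes, these endpoints can be wired into container probes. A minimal sketch of the probe stanzas for a container spec, assuming the HTTP server listens on port 8000 as in the examples above:

livenessProbe:
  httpGet:
    path: /v1/liveness_check
    port: 8000
readinessProbe:
  httpGet:
    path: /v1/readiness_check
    port: 8000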

Putting a key-value

To put a key-value, execute the following command:

$ ./bin/cete set 1 value1

or, you can use the RESTful API as follows:

$ curl -X PUT 'http://127.0.0.1:8000/v1/data/1' --data-binary value1
$ curl -X PUT 'http://127.0.0.1:8000/v1/data/2' -H "Content-Type: image/jpeg" --data-binary @/path/to/photo.jpg

Getting a key-value

To get a key-value, execute the following command:

$ ./bin/cete get 1

or, you can use the RESTful API as follows:

$ curl -X GET 'http://127.0.0.1:8000/v1/data/1'

The result of the above command is:

value1
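
If you prefer to call the REST API programmatically rather than via curl, here is a minimal Go sketch using only the standard library; the endpoint paths and port are taken from the examples above, and error handling is kept short for brevity:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Put the value "value1" under key "1", like the curl PUT example above.
	req, err := http.NewRequest(http.MethodPut, "http://127.0.0.1:8000/v1/data/1", bytes.NewBufferString("value1"))
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	// Get the value back, like the curl GET example above.
	resp, err = http.Get("http://127.0.0.1:8000/v1/data/1")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // prints: value1
}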

Deleting a key-value

To delete a value by key, execute the following command:

$ ./bin/cete delete 1

or, you can use the RESTful API as follows:

$ curl -X DELETE 'http://127.0.0.1:8000/v1/data/1'

Bringing up a cluster

Cete makes it easy to bring up a cluster. The Cete node started above is already running, but it is not fault tolerant. To increase fault tolerance, bring up two more data nodes like so:

$ ./bin/cete start --id=node2 --raft-address=:7001 --grpc-address=:9001 --http-address=:8001 --data-directory=/tmp/cete/node2 --peer-grpc-address=:9000
$ ./bin/cete start --id=node3 --raft-address=:7002 --grpc-address=:9002 --http-address=:8002 --data-directory=/tmp/cete/node3 --peer-grpc-address=:9000

The above example runs each Cete node on the same host, so each node must listen on different ports. This would not be necessary if each node ran on a different host.

This instructs each new node to join the existing node; each node recognizes the cluster it is joining when started. You now have a 3-node cluster that can tolerate the failure of one node. You can check the cluster with the following command:

$ ./bin/cete cluster | jq .

or, you can use the RESTful API as follows:

$ curl -X GET 'http://127.0.0.1:8000/v1/cluster' | jq .

The result of the above command, in JSON format, is:

{
  "cluster": {
    "nodes": {
      "node1": {
        "raft_address": ":7000",
        "metadata": {
          "grpc_address": ":9000",
          "http_address": ":8000"
        },
        "state": "Leader"
      },
      "node2": {
        "raft_address": ":7001",
        "metadata": {
          "grpc_address": ":9001",
          "http_address": ":8001"
        },
        "state": "Follower"
      },
      "node3": {
        "raft_address": ":7002",
        "metadata": {
          "grpc_address": ":9002",
          "http_address": ":8002"
        },
        "state": "Follower"
      }
    },
    "leader": "node1"
  }
}

An odd number of nodes, three or more, is recommended for the cluster. With a single node, data loss is inevitable in failure scenarios, so avoid single-node deployments.

In the above example, the nodes join the cluster at startup, but you can also join a node that was started in standalone mode to the cluster later, as follows:

$ ./bin/cete join --grpc-addr=:9000 node2 127.0.0.1:9001

or, you can use the RESTful API as follows:

$ curl -X PUT 'http://127.0.0.1:8000/v1/cluster/node2' --data-binary '
{
  "raft_address": ":7001",
  "metadata": {
    "grpc_address": ":9001",
    "http_address": ":8001"
  }
}
'

To remove a node from the cluster, execute the following command:

$ ./bin/cete leave --grpc-addr=:9000 node2

or, you can use the RESTful API as follows:

$ curl -X DELETE 'http://127.0.0.1:8000/v1/cluster/node2'

The following command sets a key-value via any node in the cluster:

$ ./bin/cete set 1 value1 --grpc-address=:9000 

You can then get the value from the node specified in the above command as follows:

$ ./bin/cete get 1 --grpc-address=:9000

The result of the above command is:

value1

You can also get the same value from the other nodes in the cluster as follows:

$ ./bin/cete get 1 --grpc-address=:9001
$ ./bin/cete get 1 --grpc-address=:9002

The result of each command is:

value1

Cete on Docker

Building Cete Docker container image on localhost

You can build the Docker container image like so:

$ make docker-build

Pulling Cete Docker container image from docker.io

You can also use the Docker container image already registered in docker.io like so:

$ docker pull mosuka/cete:latest

See https://hub.docker.com/r/mosuka/cete/tags/

Running Cete node on Docker

To run a Cete data node on Docker, start the node like so:

$ docker run --rm --name cete-node1 \
    -p 7000:7000 \
    -p 8000:8000 \
    -p 9000:9000 \
    mosuka/cete:latest cete start \
      --id=node1 \
      --raft-address=:7000 \
      --grpc-address=:9000 \
      --http-address=:8000 \
      --data-directory=/tmp/cete/node1

You can execute commands inside the Docker container as follows:

$ docker exec -it cete-node1 cete node --grpc-address=:9000

Securing Cete

Cete supports HTTPS access, ensuring that all communication between clients and a cluster is encrypted.

Generating a certificate and private key

One way to generate the necessary resources is via openssl. For example:

$ openssl req -x509 -nodes -newkey rsa:4096 -keyout ./etc/key.pem -out ./etc/cert.pem -days 365 -subj '/CN=localhost'
Generating a 4096 bit RSA private key
............................++
........++
writing new private key to './etc/key.pem'

Secure cluster example

Start each node with HTTPS enabled and node-to-node encryption. It is assumed the X.509 certificate and key generated above are at ./etc/cert.pem and ./etc/key.pem respectively.

$ ./bin/cete start --id=node1 --raft-address=:7000 --grpc-address=:9000 --http-address=:8000 --data-directory=/tmp/cete/node1 --peer-grpc-address=:9000 --certificate-file=./etc/cert.pem --key-file=./etc/key.pem --common-name=localhost
$ ./bin/cete start --id=node2 --raft-address=:7001 --grpc-address=:9001 --http-address=:8001 --data-directory=/tmp/cete/node2 --peer-grpc-address=:9000 --certificate-file=./etc/cert.pem --key-file=./etc/key.pem --common-name=localhost
$ ./bin/cete start --id=node3 --raft-address=:7002 --grpc-address=:9002 --http-address=:8002 --data-directory=/tmp/cete/node3 --peer-grpc-address=:9000 --certificate-file=./etc/cert.pem --key-file=./etc/key.pem --common-name=localhost

You can access the cluster by adding the TLS flags, for example:

$ ./bin/cete cluster --grpc-address=:9000 --certificate-file=./etc/cert.pem --common-name=localhost | jq .

or

$ curl -X GET https://localhost:8000/v1/cluster --cacert ./etc/cert.pem | jq .

cete's People

Contributors

christian-roggia, mosuka, vniche


cete's Issues

Create working setup for Kubernetes

I will soon start working again on a working cete cluster on Kubernetes and was wondering if anyone here has already achieved something of the sort 🤔

Benchmark?

Against standard Redis and Aerospike for example.

Use Cete with docker-compose

I tried to launch cete with a docker-compose file.
I tried multiple solutions, but every time I run docker-compose up it gives a port error or a permission-denied error when creating the folder for the external directory.
I also tried adding volumes instead of a direct directory path, with the same result.
I am using a Mac M1.

version:  '3.9'
services: 
    node_1:
      platform: linux/amd64
      image: mosuka/cete:latest
      network_mode: host
      # command: "start --id=node_1 --raft-address=:7000 --grpc-address=:9000 --http-address=:8000"
      ports:
        - 2000:7000
        - 8000:8000
        - 9000:9000
      volumes:
        - data:/tmp/cete/data/node_1:delegated
      environment:
        - CETE_ID=node_1
        - CETE_DATA_DIRECTORY=node_1
        - CETE_RAFT_ADDRESS=7000
        - CETE_GRPC_ADDRESS=9000
        - CETE_HTTP_ADDRESS=8000

Context should be passed per operation

Following the standard patterns for storage and databases written in Golang, the context should be passed per operation and not per connection.

The current setup works in the following way:

cete.NewGRPCClientWithContext(..., ctx)

and Get / Set / Delete / etc. operations are executed this way:

cli.Set(req)

the expected call is the following:

cli.Set(ctx, req)

and this is useful when we want to limit, for example, how long a Set or Get should take before timing out:

ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
defer cancel()

cli.Set(ctx, req)

This is useful to control timeouts, cascading cancellation, and similar operations.

Also, as a final note, the context variable in Golang is normally expected to be the first parameter of the function; this is not a written rule but a very common and popular convention:

cete.NewGRPCClientWithContext(ctx, ...)

See https://golang.org/src/net/dial.go#L369 as an example.

replication or sharding?

Hello,

I am interested in Cete for possible use in a highly-scalable P2P system and was wondering if Cete does replication, horizontal scaling, or both?

bug in go defer

You have the same bug in a LOT of places, for example in kvs/raft_server.go.
The order is not correct. In case of error, the client variable is nil:

	client, err := NewGRPCClient(string(node.GrpcAddr))
	defer func() {
		err := client.Close()
		if err != nil {
			s.logger.Printf("[ERR] %v", err)
		}
	}()
	if err != nil {
		s.logger.Printf("[ERR] %v", err)
		return nil
	}

Here you do it correctly (cmd/cete/cluster.go and other places):

client, err := kvs.NewGRPCClient(grpcAddr)
if err != nil {
	return err
}
defer func() {
	err := client.Close()
	if err != nil {
		_, err = fmt.Fprintln(os.Stderr, err)
	}
}()

In the NewGRPCClient() function, in case of error you return a nil object and an error. Here is the relevant code:

func NewGRPCClient(address string) (*GRPCClient, error) {
	.........
	conn, err := grpc.DialContext(ctx, address, dialOpts...)
	if err != nil {
		cancel()
		return nil, err
	}

Logs are too verbose and generate high amount of data

First of all, I would like to thank you for the amazing work you did here, outstanding!

This ticket is related to issues we encountered in a cloud environment, and more specifically in Kubernetes deployments.

A little background about our setup: we are migrating around 10 million records from ETCD to CETE for a total of ca. 30 GB of data. The data migrated contains JSON, HTML, and other formats.

I would like to address the issue of logs as currently too much traffic/data is being generated and logs are too verbose:

  • Readiness / Liveness checks should not log anything unless the log level is set to DEBUG or similar. (NOTE: in a Kubernetes environment, checks are executed regularly, every 1-5 seconds.)
  • Set / Get / Delete / etc. operations should not generate a log entry for each change executed - that is a DEBUG-level log. To give some context, when we launched the first migration we saw over 50 MB of logs generated per second. We have been forced to set the log level to WARNING to avoid high costs for log ingestion by our cloud provider, which is not ideal.
  • NOT FOUND errors should be less verbose (there are 3 logs generated for each failed lookup, one of which contains the stacktrace). I am not even sure whether a NOT FOUND error should be logged at all - especially at level ERROR - as it is expected behavior: the server did not encounter any error, the resource was simply missing.

I will update this ticket with more information about other logs as we encounter them.

Feedback: Create a new repository for Helm charts

We are using a Helm chart in production, which deploys a healthy cete cluster. We would be happy to contribute with our code.

Our environment has been tested for a single-node cluster in production but can be scaled up.
Maybe migrating cete to its own organization along with its dependencies could be an idea (this is simply a personal suggestion).

An example of how to setup the repository is the following (from dgraph): https://github.com/dgraph-io/charts

Design flaw in the Raft <-> BadgerDB implementation

During our investigation of why the size of our database was endlessly growing, even when no data was being written to cete, we figured out that there is an important design flaw in how BadgerDB and Raft interact.

The flaw is explained as follows:

  1. The cete server is started
  2. Data is sent to the server
  3. Raft generates new snapshots at regular intervals
  4. Badger writes new vlog files with the logs related to the incoming data
  5. The server is shutdown

Here starts the issue:

  6. The server is restarted
  7. Raft restores the latest snapshot with all key-value pairs snapshotted up to this point
  8. All pairs are replayed through a call to Set() which stores the data in Badger
  9. Badger writes again all pairs coming from the snapshot, generating new logs which will be stored in the vlog files
  10. The server is shut down - go to 6 and repeat

TL;DR: every time the server is restarted all kv pairs are replayed in badger, causing a massive increase in the size of the database and eventually leading to a disk full.

Please note that while KV pairs are being replayed, the garbage collector is not useful. This also causes a massive consumption of resources (CPU, RAM, I/O) at startup time. The situation is even worse in a Kubernetes environment, where probes could kill the process if it takes too long to start - causing an exponential growth of the issue.

The three options that I could think of to solve this issue are the following:

  • Snapshot restore is disabled on start via config.NoSnapshotRestoreOnStart = true, but can be executed manually in order to recover from disasters (which is what we use, since we are running on a single node)
  • Badger is cleaned completely at startup via db.DropAll() and the snapshot is used to re-populate the database (RAM, CPU, and I/O intensive)
  • Snapshots use an index, and only the records with an index greater than what is already available in Badger are replayed (i.e. incremental snapshots)
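
For reference, the first option in the list above corresponds to a single field on the hashicorp/raft configuration. A minimal sketch follows; this is the hashicorp/raft API, not a Cete configuration option, and the node ID is illustrative:

package main

import (
	"fmt"

	"github.com/hashicorp/raft"
)

func main() {
	c := raft.DefaultConfig()
	c.LocalID = raft.ServerID("node1") // illustrative node ID
	// Skip replaying the latest snapshot into the FSM (and therefore into Badger)
	// on startup; snapshots can still be restored manually for disaster recovery.
	c.NoSnapshotRestoreOnStart = true
	fmt.Printf("NoSnapshotRestoreOnStart=%v\n", c.NoSnapshotRestoreOnStart)
}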

restart cete problem

cete-v0.3.1.windows-amd64
window 7 OS
cete start --id=node1 --raft-address=:7000 --grpc-address=:9000 --http-address=:8000 --data-directory=node1

It's OK when I initially start:
the cete server starts OK, and client put and get work.

But there's an error when I restart cete.
The error output is:

I:\Cete\bin>cete start --id=node1 --raft-address=:7000 --grpc-address=:9000 --http-address=:8000 --data-directory=node1
{"level":"error","timestamp":"2022-08-17T11:19:30.762+0800","name":"cete",
"caller":"storage/kvs.go:28","message":"failed to open database","opts":{"Di
r":"node1\kvs","ValueDir":"node1\kvs","SyncWrites":false,"TableLoadingMode":2,
"ValueLogLoadingMode":2,"NumVersionsToKeep":1,"ReadOnly":false,"Truncate":false,
"Logger":null,"Compression":1,"EventLogging":true,"MaxTableSize":67108864,"Level
SizeMultiplier":10,"MaxLevels":7,"ValueThreshold":32,"NumMemtables":5,"BlockSize
":4096,"BloomFalsePositive":0.01,"KeepL0InMemory":true,"MaxCacheSize":1073741824
,"NumLevelZeroTables":5,"NumLevelZeroTablesStall":10,"LevelOneSize":268435456,"V
alueLogFileSize":1073741823,"ValueLogMaxEntries":1000000,"NumCompactors":2,"Comp
actL0OnClose":true,"LogRotatesToFlush":2,"VerifyValueChecksum":false,"Encryption
Key":"","EncryptionKeyRotationDuration":864000000000000,"ChecksumVerificationMod
e":0},"error":"During db.vlog.open: Value log truncate required to run DB. This
might result in data loss","errorVerbose":"Value log truncate required to run DB
. This might result in data loss\ngithub.com/dgraph-io/badger/v2.init\n\t/Users/
m-osuka/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/errors.go:103\nruntime.
doInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5414\nruntime.do
Init\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409\nruntime.doIn
it\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409\nruntime.doInit
\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409\nruntime.doInit\n
\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409\nruntime.main\n\t/u
sr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:190\nruntime.goexit\n\t/usr/
local/Cellar/go/1.14/libexec/src/runtime/asm_amd64.s:1373\nDuring db.vlog.open\n
github.com/dgraph-io/badger/v2/y.Wrapf\n\t/Users/m-osuka/go/pkg/mod/github.com/d
graph-io/badger/[email protected]/y/error.go:82\ngithub.com/dgraph-io/badger/v2.Open\n\t
/Users/m-osuka/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/db.go:357\ngithu
b.com/mosuka/cete/storage.NewKVS\n\t/Users/m-osuka/go/src/github.com/mosuka/cete
/storage/kvs.go:26\ngithub.com/mosuka/cete/server.NewRaftFSM\n\t/Users/m-osuka/g
o/src/github.com/mosuka/cete/server/raft_fsm.go:36\ngithub.com/mosuka/cete/serve
r.NewRaftServer\n\t/Users/m-osuka/go/src/github.com/mosuka/cete/server/raft_serv
er.go:44\ngithub.com/mosuka/cete/cmd.glob..func11\n\t/Users/m-osuka/go/src/githu
b.com/mosuka/cete/cmd/start.go:55\ngithub.com/spf13/cobra.(*Command).execute\n\t
/Users/m-osuka/go/pkg/mod/github.com/spf13/[email protected]/command.go:838\ngithub.c
om/spf13/cobra.(*Command).ExecuteC\n\t/Users/m-osuka/go/pkg/mod/github.com/spf13
/[email protected]/command.go:943\ngithub.com/spf13/cobra.(*Command).Execute\n\t/User
s/m-osuka/go/pkg/mod/github.com/spf13/[email protected]/command.go:883\ngithub.com/mo
suka/cete/cmd.Execute\n\t/Users/m-osuka/go/src/github.com/mosuka/cete/cmd/root.g
o:16\nmain.main\n\t/Users/m-osuka/go/src/github.com/mosuka/cete/main.go:10\nrunt
ime.main\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:203\nruntime.g
oexit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/asm_amd64.s:1373"}
{"level":"error","timestamp":"2022-08-17T11:19:30.787+0800","name":"cete",
"caller":"server/raft_fsm.go:38","message":"failed to create key value store
","path":"node1\kvs","error":"During db.vlog.open: Value log truncate required
to run DB. This might result in data loss","errorVerbose":"Value log truncate re
quired to run DB. This might result in data loss\ngithub.com/dgraph-io/badger/v2
.init\n\t/Users/m-osuka/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/errors.
go:103\nruntime.doInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:
5414\nruntime.doInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:54
09\nruntime.doInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409
\nruntime.doInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409\n
runtime.doInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409\nru
ntime.main\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:190\nruntime
.goexit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/asm_amd64.s:1373\nDurin
g db.vlog.open\ngithub.com/dgraph-io/badger/v2/y.Wrapf\n\t/Users/m-osuka/go/pkg/
mod/github.com/dgraph-io/badger/[email protected]/y/error.go:82\ngithub.com/dgraph-io/ba
dger/v2.Open\n\t/Users/m-osuka/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/
db.go:357\ngithub.com/mosuka/cete/storage.NewKVS\n\t/Users/m-osuka/go/src/github
.com/mosuka/cete/storage/kvs.go:26\ngithub.com/mosuka/cete/server.NewRaftFSM\n\t
/Users/m-osuka/go/src/github.com/mosuka/cete/server/raft_fsm.go:36\ngithub.com/m
osuka/cete/server.NewRaftServer\n\t/Users/m-osuka/go/src/github.com/mosuka/cete/
server/raft_server.go:44\ngithub.com/mosuka/cete/cmd.glob..func11\n\t/Users/m-os
uka/go/src/github.com/mosuka/cete/cmd/start.go:55\ngithub.com/spf13/cobra.(*Comm
and).execute\n\t/Users/m-osuka/go/pkg/mod/github.com/spf13/[email protected]/command.
go:838\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/Users/m-osuka/go/pkg/mod/
github.com/spf13/[email protected]/command.go:943\ngithub.com/spf13/cobra.(*Command).
Execute\n\t/Users/m-osuka/go/pkg/mod/github.com/spf13/[email protected]/command.go:88
3\ngithub.com/mosuka/cete/cmd.Execute\n\t/Users/m-osuka/go/src/github.com/mosuka
/cete/cmd/root.go:16\nmain.main\n\t/Users/m-osuka/go/src/github.com/mosuka/cete/
main.go:10\nruntime.main\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.g
o:203\nruntime.goexit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/asm_amd64
.s:1373"}
{"level":"error","timestamp":"2022-08-17T11:19:30.789+0800","name":"cete",
"caller":"server/raft_server.go:46","message":"failed to create FSM","path":
"node1\kvs","error":"During db.vlog.open: Value log truncate required to run DB
. This might result in data loss","errorVerbose":"Value log truncate required to
run DB. This might result in data loss\ngithub.com/dgraph-io/badger/v2.init\n\t
/Users/m-osuka/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/errors.go:103\nr
untime.doInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5414\nrun
time.doInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409\nrunti
me.doInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409\nruntime
.doInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409\nruntime.d
oInit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:5409\nruntime.mai
n\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:190\nruntime.goexit\n
\t/usr/local/Cellar/go/1.14/libexec/src/runtime/asm_amd64.s:1373\nDuring db.vlog
.open\ngithub.com/dgraph-io/badger/v2/y.Wrapf\n\t/Users/m-osuka/go/pkg/mod/githu
b.com/dgraph-io/badger/[email protected]/y/error.go:82\ngithub.com/dgraph-io/badger/v2.O
pen\n\t/Users/m-osuka/go/pkg/mod/github.com/dgraph-io/badger/[email protected]/db.go:357
\ngithub.com/mosuka/cete/storage.NewKVS\n\t/Users/m-osuka/go/src/github.com/mosu
ka/cete/storage/kvs.go:26\ngithub.com/mosuka/cete/server.NewRaftFSM\n\t/Users/m-
osuka/go/src/github.com/mosuka/cete/server/raft_fsm.go:36\ngithub.com/mosuka/cet
e/server.NewRaftServer\n\t/Users/m-osuka/go/src/github.com/mosuka/cete/server/ra
ft_server.go:44\ngithub.com/mosuka/cete/cmd.glob..func11\n\t/Users/m-osuka/go/sr
c/github.com/mosuka/cete/cmd/start.go:55\ngithub.com/spf13/cobra.(*Command).exec
ute\n\t/Users/m-osuka/go/pkg/mod/github.com/spf13/[email protected]/command.go:838\ng
ithub.com/spf13/cobra.(*Command).ExecuteC\n\t/Users/m-osuka/go/pkg/mod/github.co
m/spf13/[email protected]/command.go:943\ngithub.com/spf13/cobra.(*Command).Execute\n
\t/Users/m-osuka/go/pkg/mod/github.com/spf13/[email protected]/command.go:883\ngithub
.com/mosuka/cete/cmd.Execute\n\t/Users/m-osuka/go/src/github.com/mosuka/cete/cmd
/root.go:16\nmain.main\n\t/Users/m-osuka/go/src/github.com/mosuka/cete/main.go:1
0\nruntime.main\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/proc.go:203\nru
ntime.goexit\n\t/usr/local/Cellar/go/1.14/libexec/src/runtime/asm_amd64.s:1373"}

Error: During db.vlog.open: Value log truncate required to run DB. This might result in data loss
Usage:
  cete start [flags]

Flags:
      --certificate-file string    path to the client server TLS certificate file
      --common-name string         certificate common name
      --config-file string         config file. if omitted, cete.yaml in /etc and home directory will be searched
      --data-directory string      data directory which store the key-value store data and Raft logs (default "/tmp/cete/data")
      --grpc-address string        gRPC server listen address (default ":9000")
  -h, --help                       help for start
      --http-address string        HTTP server listen address (default ":8000")
      --id string                  node ID (default "node1")
      --key-file string            path to the client server TLS key file
      --log-compress               compress a log file
      --log-file string            log file (default "/dev/stderr")
      --log-level string           log level (default "INFO")
      --log-max-age int            max age of a log file in days (default 30)
      --log-max-backups int        max backup count of log files (default 3)
      --log-max-size int           max size of a log file in megabytes (default 500)
      --peer-grpc-address string   listen address of the existing gRPC server in the joining cluster
      --raft-address string        Raft server listen address (default ":7000")

Authentication? HTTPS?

Hi!
Is there any way to secure the cluster? Because running it all plain open is a complete dealbreaker...

Add a SQL / Document feature

Cete looks pretty strong.

I have a few thoughts on some things that could be easily added:

  1. Let's Encrypt
  2. Search

Feature Request: Query Language Support

Cete really works well as a KV store; however, I'd like to be able to do basic queries as well. BadgerHold provides querying over a Badger database. Could this be integrated into Cete? Thank you.

BadgerHold Queries

  • Equal - Where("field").Eq(value)
  • Not Equal - Where("field").Ne(value)
  • Greater Than - Where("field").Gt(value)
  • Less Than - Where("field").Lt(value)
  • Less than or Equal To - Where("field").Le(value)
  • Greater Than or Equal To - Where("field").Ge(value)
  • In - Where("field").In(val1, val2, val3)
  • IsNil - Where("field").IsNil()
  • Regular Expression - Where("field").RegExp(regexp.MustCompile("ea"))
  • Matches Function - Where("field").MatchFunc(func(ra *RecordAccess) (bool, error))
  • Skip - Where("field").Eq(value).Skip(10)
  • Limit - Where("field").Eq(value).Limit(10)
  • SortBy - Where("field").Eq(value).SortBy("field1", "field2")
  • Reverse - Where("field").Eq(value).SortBy("field").Reverse()
  • Index - Where("field").Eq(value).Index("indexName")
