
bioyino's Introduction

Bioyino

The StatsD server written in Rust

Description

Bioyino is a distributed statsd-protocol server with carbon backend.
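
For example (assuming a counter naming configuration like the ones shown in the issues below), a client sends a counter over UDP in the statsd line format, and on each flush the aggregated result is written out in the carbon plaintext format:

echo "requests:1|c" | nc -u -w 0 127.0.0.1 8125
# carbon receives lines of the form "name value timestamp", e.g.:
# requests.count 1.0 1704117410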

Features

  • all basic metric types are supported (gauge, counter, diff-counter, timer); new types are easy to add
  • fault tolerant: metrics are replicated to all nodes in the cluster
  • clustering: all nodes gather and replicate metrics, but only leader sends metrics to backend
  • precise: 64-bit floats, full metric set is stored in memory (for metric types that require post-processing), no approximation algorithms involved
  • standalone: can work without external services
  • safety and security: written in memory-safe language
  • networking tries its best to avoid dropping UDP packets
  • networking is asynchronous
  • small memory footprint and low CPU consumption

Status

Currently works in production at Avito, processing a production-grade metric stream (~4M metrics per second on 3 nodes).

Installing

One of Bioyino's most powerful features, multimessage mode, requires it to run on GNU/Linux.

  • Install the capnp compiler tool to generate schemas. It can usually be installed with your distribution's package manager.
  • Do the usual Rust-program build-install cycle. Please note that building is always tested against the latest stable version of Rust. The Rust 2018 edition is required.
$ git clone <this repo>
$ cargo build --release && strip target/release/bioyino

Build RPM package (for systemd-based distro)

  1. Install requirements (as root or with sudo)
    yum install -y cargo capnproto capnproto-devel
    yum install -y ruby-devel
    gem install fpm
  2. Build
    bash contrib/fpm/create_package_rpm.sh

Build DEB package (for systemd-based distro)

  1. Install requirements (as root or with sudo)
    apt-get install -y capnproto libcapnp-dev
    apt-get install -y ruby-dev
    gem install fpm
  2. Build
    bash contrib/fpm/create_package_deb.sh

Configuring

To configure, please see config.toml: all the options are listed there and all of them are commented.
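
For orientation, here is a minimal single-node sketch assembled from options that appear in the issues below (assumed values, not authoritative defaults; config.toml remains the reference):

# minimal single-node sketch (assumed values; see config.toml for the full list)
start-as-leader = true
consensus = "none"

[network]
listen = "0.0.0.0:8125"

[carbon]
address = "127.0.0.1:2003"
interval = 10000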

Contributing

You can help the project by doing the following:

  • find TODOs/FIXMEs and unwraps in the code, fix them, and create a PR
  • solve issues
  • create issues to request new features
  • add new features, like new metric types
  • test the server in your environment and create new issues if/when you find bugs


bioyino's Issues

Sampling rate support

Hello!
If I understand correctly, the sample rate is not supported the way it is in other statsd implementations:
Bioyino does not "restore" the original value in proportion to the sample rate?
For example, after sending
metric:1|c|@0.01
I expect to receive
metric 100
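
A minimal sketch of the conventional statsd scaling being asked for here (an assumption about the expected behaviour, not bioyino's actual code):

// Hypothetical sketch: conventional statsd sample-rate scaling,
// not bioyino's implementation. A counter received with sample
// rate r stands for value / r real events.
fn restore_sampled_counter(value: f64, sample_rate: f64) -> f64 {
    // metric:1|c|@0.01 -> 1.0 / 0.01 = 100
    value / sample_rate
}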

Log to file

Log to file (and reopen file after signal, SIGHUP or other, for log rotation)

cargo: command not found

Now I use Docker.

Can you advise or share your experience: should I use it in Docker, or is it better to run natively?
Can you tell me if it's worth using "async" (was it used at Avito)?
And which load balancer was used in the Bioyino cluster - HAProxy for UDP?

I tried to compile, but received the error: cargo: command not found

Found a solution: curl https://sh.rustup.rs -sSf | sh -s -- -y
(solution: https://github.com/paritytech/substrate/issues/1070)

Not sure, I got a new error:

error: cannot find macro `error` in this scope
   --> src/util.rs:182:13
    |
182 |             error!(log, "leader state change: {} -> {}", is_leader, acquired);
    |             ^^^^^
    |
    = help: consider importing one of these items:
            crate::error
            log::error
            slog::error
            slog_scope::error

warning: unused import: `warn`
  --> src/util.rs:12:15
   |
12 | use slog::{o, warn, Drain, Logger};
   |               ^^^^
   |
   = note: `#[warn(unused_imports)]` on by default

warning: `bioyino` (bin "bioyino") generated 1 warning
error: could not compile `bioyino` due to previous error; 1 warning emitted

Maybe I need some specific distribution?
I have Ubuntu 18.04.
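
Judging by the compiler's help text above, the fix is presumably to import the suggested macro alongside the slog items already used in src/util.rs (an assumption based on the error message, not a confirmed fix):

use slog::{error, o, warn, Drain, Logger}; // assumed fix, per the compiler's suggestion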

Parsing error "end of input"

Incoming packet parsing error "end of input" with disabled multimessage

There's a problem in bioyino 0.7+: when multimessage = false, every incoming message triggers an error like this:
bioyino071_1 | Mar 01 16:18:20.840 WARN parsing error, error: Errors { position: PointerOffset(0x7f5264005d0c), errors: [Unexpected(Static("end of input")), Expected(Token(58))] }, position: 1491, buffer: "\u{0}\u{0}\u{0}...\u{0}\u{0}\u{0}", thread: main, program: bioyino

Reproduce:

[network]
multimessage = false
buffer-flush-time = 100
buffer-flush-length = 0
[metrics]
log-parse-errors = true

echo "test:1|c" | nc -u -w 0 127.0.0.1 8125

Versions:
Tested v0.7.0, v0.7.1 and v0.8.0
I built a docker image from the official Dockerfile (see attached log).
I also built a binary and an RPM for CentOS using cargo 1.45.1 and rustc 1.45.2.

It's very important for a dev environment to be able to disable buffering for convenient testing, so multimessage = true is not a solution.
log-parse-errors = false is also not OK, because resources.monitoring.bioyino.parse-error still increments.

Log to file

Hi
I see that there is another issue about logging to a file (#28).
I would also find this feature useful in prod.

How do you resolve this in your env?

Incorrect display of "Counting" when sending 0

Hello.
When creating a certain metric through "Counting", it showed strange values:
sometimes sending 0 is counted as 1.

Tests:
0: 50 times + 1: 50 times => 50
0: 50 times + 1: 100 times => 100
0: 100 times + 1: 100 times => 200
0: 400 times + 1: 20 times => 420
0: 400 times + 1: 1 time => 401

That is, when there are more zeros than ones, the zeros are counted as ones.

I continue to test and explore, and will add more information later.

Better parsing sampled metrics

For now bioyino parses the sample rate as "everything from '@' to the '\n' symbol" here.
That works when many '\n'-delimited metrics are sent in one packet,
but it doesn't work with one metric per packet:
then all metrics look like one long string without '\n' symbols,
and parsing "everything from '@' to '\n'" doesn't handle it correctly.

Can you please fix the parsing to expect a float after "@"?
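
A minimal hand-rolled sketch of the proposed behaviour (bioyino's real parser is built differently; this only illustrates the semantics): consume just a float-looking prefix after '@' instead of everything up to '\n'.

// Hypothetical sketch, not bioyino's actual parser: take the longest
// float-looking prefix after '@', so input without a trailing '\n'
// still parses; returns the rate and the unconsumed remainder.
fn parse_sample_rate(after_at: &str) -> Option<(f64, &str)> {
    let end = after_at
        .find(|c: char| !c.is_ascii_digit() && c != '.')
        .unwrap_or(after_at.len());
    after_at[..end].parse().ok().map(|rate| (rate, &after_at[end..]))
}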

[bug] Frequent gauge metric updates interpreted incorrectly

Description

If a gauge metric is sent more than 200 times per server stats interval / discrete period, the count of sends replaces the actual value.

Example:

Send path:2|g 200 times (or more).

send_to_statsd () {
  nc -u 127.0.0.1 8125;
}

generate_metrics () {
  local I=1;

  while true; do
    echo "path:2|g";
    usleep 1000;
    I=$((${I}+1))
    if [ ${I} -gt 200 ]; then
      break;
    fi
  done
}

generate_metrics | send_to_statsd

Expected behavior:

Path value equals 2.

Actual behavior:

Path value equals 200.

Addition

The value doesn't matter.
Probably the 200 corresponds to the update-count-threshold server parameter.

Healthcheck for Bioyino

Hello.
Please tell me, is there any healthcheck to determine whether a Bioyino cluster node is working?
I want to use Keepalived or HAProxy for balancing. If not, I will listen on port 8136.

Gauge value after flush

We have an issue with metrics of the gauge type. We expect the behavior described in the statsd server's documentation:

If the gauge is not updated at the next flush, it will send the previous value.

Expected behavior

Let's say we send gaugor:5|g, so we get a gaugor metric with a value of 5 on the first flush.
If we then don't send any gaugor update, we would still get a gaugor metric with a value of 5 on the second flush.
Or, if we send gaugor:+10|g, we would get a gaugor metric with a value of 15 on the second flush.

Actual behavior

We send gaugor:5|g and get a gaugor metric with a value of 5 on the first flush.
If we then don't send any gaugor update, we don't get any gaugor metric on the second flush at all.
Or, if we send gaugor:+10|g, we get a gaugor metric with a value of 10 on the second flush.

Consequence

This behavior also affects gauge relative updates, i.e. sending gaugor:+5|g or gaugor:-5|g to change the existing gaugor value by the specified difference. It is impossible to update an existing gauge, because it is forgotten after every flush.
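
A sketch of the semantics the statsd documentation describes (an assumption about the desired behaviour, not bioyino's current code): the gauge keeps its last value across flushes, and signed updates modify that retained value.

use std::collections::HashMap;

// Hypothetical sketch of statsd gauge semantics, not bioyino's code.
struct Gauges(HashMap<String, f64>);

impl Gauges {
    // "gaugor:5|g" -> update("gaugor", "5"); "gaugor:+10|g" -> update("gaugor", "+10")
    fn update(&mut self, name: &str, raw: &str) {
        let v = self.0.entry(name.to_string()).or_insert(0.0);
        if raw.starts_with('+') || raw.starts_with('-') {
            *v += raw.parse::<f64>().unwrap_or(0.0); // relative update
        } else {
            *v = raw.parse().unwrap_or(*v); // absolute set
        }
    }

    // The map is not cleared on flush, so a gauge that received no
    // updates is re-sent with its previous value on the next flush.
    fn flush(&self) -> Vec<(&str, f64)> {
        self.0.iter().map(|(k, v)| (k.as_str(), *v)).collect()
    }
}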

Difference between ingress and ingress-m

Hi
Maybe it is not a good place to ask questions but...
Could you explain the difference between INGRESS and INGRESS-METRIC in bioyino's service metrics?
They confuse me because they show such different values.

Build issues on nightly

https://github.com/palfrey/bioyino/runs/3273000488?check_suite_focus=true running rustc 1.56.0-nightly (574d37568 2021-08-07)

error: internal compiler error: unexpected concrete region in borrowck: ReEarlyBound(0, 'a)
Error:    --> /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/bioyino-metric-0.3.1/src/parser.rs:48:1
    |
48  | / pub fn metric_stream_parser<'a, I, F>(max_unparsed: usize, max_tags_len: usize) -> impl Parser<I, Output = ParsedPart<F>, PartialState = ...
49  | | where
50  | |     I: 'a + combine::StreamOnce<Token = u8, Range = &'a [u8], Position = PointerOffset<[u8]>> + std::fmt::Debug + RangeStream,
51  | |     I::Error: ParseError<I::Token, I::Range, I::Position>,
...   |
150 | |     ))
151 | | }
    | |_^
    |
    = note: delayed at compiler/rustc_mir/src/borrow_check/region_infer/opaque_types.rs:88:44

thread 'rustc' panicked at 'no errors encountered even though `delay_span_bug` issued', compiler/rustc_errors/src/lib.rs:1065:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I think this is rust-lang/rust#83190 and I've added a note about this there

Valid example config

The current config example is about 50 percent incorrect.
Can you give a minimal working config for a build from master?
Maybe there is a key to generate a configuration example?
The raft protocol option is interesting.

New prefix for metrics per type

Hi, can I add a prefix to metrics per type?
For example:
if the metric type is timer - add the prefix stats.timers;
if the metric type is count - add the prefix stats;
if gauges - add statsd.gauges.

Thanks!

statsd compatible rate aggregation for counter

Hello!
I seem to have found an inconsistency in the implementation of rate aggregation for counters:
Bioyino calculates rate as "counts per second", while statsd means "value per second".
This is not to be confused with the timer's count_ps in statsd (also called rate in bioyino) - that's a different aggregation.

Can we make it compatible with statsd?
https://github.com/statsd/statsd/blob/7c07eec4e7cebbd376d8313b230cea96c6571423/lib/process_metrics.js#L19

How to reproduce:

  • Prepare instances with these configs:

statsd

{
, flushInterval: 10000
, deleteCounters: true
, graphite: {
    legacyNamespace: false
  , globalPrefix:    ""
  , prefixCounter:   ""
  }
}

bioyino

[carbon]
interval = 10000

[aggregation]
round-timestamp = "down"
update-count-threshold = 0
aggregates.counter = [ "value", "rate", "updates" ]

[naming.default]
destination = "name"
prefix = ""

[naming.counter]
postfixes = { "value" = "count" }
  • And compare results:

Samples:

for v in {1..100}; do echo "test.counter:$v|c" | ncat -u 127.0.0.1 51125; echo "test.counter:$v|c" | ncat -u 127.0.0.1 52125; done;

Results:

statsd

ncat -klp 51003 | grep -P "^test"
test.counter.rate 505 1704117415
test.counter.count 5050 1704117415

bioyino

ncat -klp 52003 | grep -P "^test"
test.counter.count 5050.0 1704117410
test.counter.rate 10.0 1704117410
test.counter.updates 100.0 1704117410

As we can see, test.counter.rate is calculated with different logic.
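Concretely: over the 10-second flush interval the 100 samples sum to 5050, so statsd reports rate = 5050 / 10 = 505 (value per second), while bioyino reports rate = 100 updates / 10 s = 10 (counts per second).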

I see that rate was requested earlier in issue #37
as

rate as sum / (carbon.interval / 1000)

but what was implemented is only

sample_rate as agg.len() / (carbon.interval / 1000)

Later we discussed rate in the implementation of sample rate:
#53 (comment)

Real events count inside app: 300
Sample rate: 0.1
Sent\received count: 30
Interval: 30
Aggregates: sum = 30/0.1 = 300, rate = sum / 30 = 10

My fault for not checking rate for counters.

Question about settings for Best Performance

Hello.
We started using multimessage = true.
Are there any best practices from you on the settings in this block?

Until today we had the following settings (changed in an attempt to fix the Docker restarts that appeared):

[network]
listen = "0.0.0.0:8126"
peer-listen = "0.0.0.0:8136"
mgmt-listen = "0.0.0.0:8137"
bufsize = 50000
multimessage = true
mm-packets = 100
mm-async = false
buffer-flush-time = 5000
buffer-flush-length = 255536
greens = 7
async-sockets = 7
nodes = []
snapshot-interval = 1000

Then I took the values from the example in the repository:

[network]
listen = "0.0.0.0:8126"
peer-listen = "0.0.0.0:8136"
mgmt-listen = "0.0.0.0:8137"
bufsize = 1500
multimessage = true
mm-packets = 1000
mm-async = false
buffer-flush-time = 10000
buffer-flush-length = 655360
greens = 7
async-sockets = 7
nodes = []
snapshot-interval = 1000

The server has 8 cores:
n-threads = 7
w-threads = 7

Currently running in Docker, single node.
Previously the CPU was at 100%; now it's 50% or lower.

The parse-error metric shows approximately 3000 (the developer is working on fixing the sender).
ingress-metric is about 5M and will increase.
Possibly because of the growth in the number of metrics, the Docker container with the service sometimes gets overloaded.
(For stability I'm thinking of moving out of Docker and building a cluster in the future.)

Can you tell me which bioyino parameters could be tuned in such a situation?
Or how to change them when switching to a cluster of 3 nodes?

If you send frequently, COUNT shows the number of sends instead of the sum

For testing I now send 100 packets every 5 seconds, each packet carrying the value 10000
=> we get about 600.
That is, the service does not sum the values; it counts the number of sends.

But if I send, for example, once every 30 seconds, then I receive the correct value - approximately 1,000,000.

I suspect the influence of the update-count-threshold parameter and would like to understand its meaning in more detail.
Does it somehow affect operation, or does it just produce warnings in the log when sending is too frequent?

inner slow-w-len metric

Hello!
Please explain what the inner slow-w-len metric means.
Periodically I get 1 as its value. Which settings should be tuned if something is wrong?

Corruption of last char in tag value

The last symbol of the last tag is chaotically replaced

Sent tag values

test_corrupted_tags;y=y;x=x
test_corrupted_tags;y=yyyyyy;x=xxxx
test_corrupted_tags;y=yyy;x=xxx

Received

test_corrupted_tags;x=x;y=x
test_corrupted_tags;x=xxxx;y=yyyyyx
test_corrupted_tags;x=xxx;y=yyx

IMPORTANT: tags must be sent in reversed order (for example y before x)

All versions affected: 0.6, 0.7.2, 0.8.0

It's a little hard to reproduce and depends on several factors.
Here is a config which makes it possible:

bufsize = 1500
multimessage = true
mm-packets = 10
buffer-flush-time = 0
buffer-flush-length = 16384

(It's possible with other values too, but it needs more time to happen.)

Here are some commands to save time and speed up testing:

while true; do for x in x xx xxx xxxx xxxxx xxxxxx xxxxxxx xxxxxxxx xxxxxxxxxx; do for y in y yy yyy yyyy yyyyy yyyyyy yyyyyyy yyyyyyyy yyyyyyyyy yyyyyyyyyy; do echo "test_corrupted_tags;y=$y;x=$x:1|c" | nc -u 127.0.0.1 8125; echo -n "."; done; done; echo; done

nc -k -l -p 2003 | grep test_corrupted_tags | grep -v -P "test_corrupted_tags;x=x+;y=y+ "

Please fix it.

Using bioyino as statsd-agent

Hello!

I want to clarify some details.

  1. Did I understand correctly that if we use bioyino as a statsd agent, it can write its data to the cluster via the peer port over TCP? Maybe you have an example bioyino configuration for the agent role?
  2. Can bioyino currently write to only one carbon backend over TCP?

Load balancer for the cluster [question]

Hello, colleagues.
I wanted to find out whether there is an established practice of using a load balancer in front of a Bioyino cluster to make the ingestion point fault tolerant.
I was thinking of using Keepalived + HAProxy.

Feature request: Sharding metrics

Hi! Our bioyino cluster consumes a lot of resources on each host, and the cluster nodes will probably run out of resources soon. Are there any plans to shard the collected metrics?

[Bug] Incorrect postfix naming behaviour for tagged metrics

Hello.
I discovered that when sending tagged metrics, the postfix is not added.
An important note on the task context: statsd behaviour must be explicitly simulated.

When sending metrics without tags:
echo "test.counter:1|c" | ncat -v -u 127.0.0.1 52125
expected/actual result (all OK):
test.counter.count 1.0 1707753580

When sending metrics with tags (here is the problem):
echo "test.counter;foo=bar:1|c" | ncat -v -u 127.0.0.1 52125
actual result: (postfix ".count" is not set)
test.counter;foo=bar 1.0 1707753750

expected result (see the sketch after the config below):
test.counter.count;foo=bar 1.0 1707753750

used config.toml:

verbosity-console = "warn"
n-threads = 4
p-threads = 2
w-threads = 4
a-threads = 4
task-queue-size = 2048
start-as-leader = true
consensus = "none"
stats-interval = 10000
stats-prefix = "resources.monitoring.bioyino"

[carbon]
address = "127.0.0.1:50003"
interval = 10000
connect-delay = 250
connect-delay-multiplier = 2
connect-delay-max = 10000
send-retries = 30
chunks = 8

[network]
listen = "127.0.0.1:52125"
peer-listen = "127.0.0.1:52136"
mgmt-listen = "127.0.0.1:52137"
bufsize = 1500
multimessage = true
mm-packets = 100
mm-async = false
buffer-flush-time = 100
buffer-flush-length = 16384
snapshot-interval = 1000

[metrics]
log-parse-errors = true

[aggregation]
round-timestamp = "down"
aggregates.counter = [ "value" ]
aggregates.timer = [ "count", "rate", "min", "max", "sum", "mean", "median", "percentile-90" ]
aggregates.gauge = [ "value" ]
aggregates.set = [ "count" ]

[naming.default]
destination = "name"
prefix = ""

[naming.counter]
prefix = ""
postfixes = { "value" = "count" }

[naming.timer]
prefix = ""
postfixes = { "rate" = "count_ps", "min" = "lower", "max" = "upper", "percentile-90" = "upper_90" }

[naming.gauge]
prefix = ""

[naming.set]
prefix = ""
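
A minimal sketch of the expected naming (inferred from the examples above; not bioyino's actual code): the aggregate postfix should be inserted before the ';'-separated tag section rather than dropped.

// Hypothetical sketch: insert the postfix before the tag section.
fn apply_postfix(name: &str, postfix: &str) -> String {
    match name.find(';') {
        // "test.counter;foo=bar" -> "test.counter.count;foo=bar"
        Some(pos) => format!("{}.{}{}", &name[..pos], postfix, &name[pos..]),
        // "test.counter" -> "test.counter.count"
        None => format!("{}.{}", name, postfix),
    }
}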

New docker build not starting

Hello. Maybe the problem is already known and you can help.
We previously used version 0.7.1.
Today I tried to build a Docker image with version 0.8.0.
When the container starts, these errors appear:

bioyino: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by bioyino)
bioyino: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by bioyino)

As I understand it, the problem is in the Debian version (the libc6 package version); I downgraded step by step to debian:stretch-20220228 and am trying to install another libc6 version.

thread 'bioyino_udp0' panicked at 'assertion failed: self.remaining_mut() >= src.remaining()'

thread 'bioyino_udp0' panicked at 'assertion failed: self.remaining_mut() >= src.remaining()', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/bytes-0.4.11/src/buf/buf_mut.rs:230:9
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::print
             at libstd/sys_common/backtrace.rs:71
             at libstd/sys_common/backtrace.rs:59
   2: std::panicking::default_hook::{{closure}}
             at libstd/panicking.rs:211
   3: std::panicking::default_hook
             at libstd/panicking.rs:227
   4: std::panicking::rust_panic_with_hook
             at libstd/panicking.rs:477
   5: std::panicking::begin_panic
   6: bytes::buf::buf_mut::BufMut::put
   7: <futures::future::chain::Chain<A, B, C>>::poll
   8: futures::task_impl::std::set
   9: tokio_current_thread::CurrentRunner::set_spawn
  10: <tokio_current_thread::scheduler::Scheduler<U>>::tick
  11: <tokio_current_thread::Entered<'a, P>>::block_on
  12: <std::thread::local::LocalKey<T>>::with
  13: <std::thread::local::LocalKey<T>>::with
  14: <std::thread::local::LocalKey<T>>::with
  15: <std::thread::local::LocalKey<T>>::with
  16: tokio::runtime::current_thread::runtime::Runtime::block_on

I can reproduce the panic on my setup. Do you need any additional information to fix it?

Can't compile 0.3.3

I am on the 0.3.3 branch and during compilation I get the following error:

cargo build --release && strip target/release/bioyino
   Compiling tokio-reactor v0.1.6
error[E0658]: use of unstable library feature 'thread_local_state': state querying was recently added (see issue #27716)
   --> /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-reactor-0.1.6/src/sharded_rwlock.rs:158:18
    |
158 |     REGISTRATION.try_with(|reg| reg.index).unwrap_or(0)
    |                  ^^^^^^^^

error: aborting due to previous error

error: Could not compile `tokio-reactor`.

To learn more, run the command again with --verbose.

I am on Ubuntu 16.04 with rustc 1.25.0 (standard repo version). Do I need a newer one?

Error and restart in v0.8

Bioyino in Docker, with host networking.

thread 'bioyino_cnt4' panicked at 'called Option::unwrap() on a None value', /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/bioyino-metric-0.5.0/src/metric.rs:511:51
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
thread 'bioyino_cnt3' panicked at 'called Option::unwrap() on a None value', /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/bioyino-metric-0.5.0/src/metric.rs:511:51

Feature request: count per sec for metrics with "ms" type

Hello.
For metrics "XXX" with type "ms", bioyino generates the metric "XXX.count".
It sends to carbon the raw count of received packets every "interval" milliseconds,
where "interval" is a setting from the config file.
I use it to get a "packets per second" value this way: scale(XXX.count, 0.1) (I have interval = 10000).
I'm asking for another metric - count_ps (float) with value = count * 1000 / interval (from the config).
With it there would be no need to use "scale" for every "pps" metric.
I think it would be very useful and handy - an easy way to get a "packets per second" value.
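
(For example, with interval = 10000 and 600 packets received in that interval, count_ps = 600 * 1000 / 10000 = 60 packets per second.)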

One node sends to carbon less often when another node is physically turned off

Hello.
We have a cluster of three nodes: node1, node2, node3.
Let's say statsd metrics are sent only to node1.
When we turn off the node2 server, node1 starts to send to carbon less often - once per 1-2 minutes.
When I commented out the "nodes" option in the "[network]" block on node1, everything went back to normal.
I think the problem is related to TCP timeouts when node1 tries to communicate with node2 to exchange (or aggregate) metrics.
node1, node2 and node3 are in different networks,
so the TCP timeouts are not limited by ARP request timeouts.

Here is a screenshot from graphite.
The blue dots are all that's left of the line.
The first interval (11:50 - 12:05) is me trying to understand what was happening;
the second (12:11 - 12:17) is after I commented out "nodes" to test the idea.

[screenshot: firefox_2019-01-22_16-35-16]

Config:

verbosity = "warn"
n-threads = 8
w-threads = 8
task-queue-size = 1024
start-as-leader = false
stats-interval = 10000
stats-prefix = "resources.monitoring.bioyino"
consensus = "internal"
[metrics]
count-updates = true
update-counter-prefix = "resources.monitoring.bioyino.updates"
update-counter-suffix = ""
update-counter-threshold = 200
fast-aggregation = false
[carbon]
address = "127.0.0.1:2003"
interval = 10000
connect-delay = 250
connect-delay-multiplier = 2
connect-delay-max = 10000
send-retries = 30
[network]
listen = "0.0.0.0:8125"
peer-listen = "0.0.0.0:8136"
mgmt-listen = "0.0.0.0:8137"
bufsize = 1500
multimessage = true
mm-packets = 100
mm-async = false
buffer-flush-time = 3000
buffer-flush-length = 65536
greens = 4
async-sockets = 4
nodes = ["192.168.2.2:8136", "192.168.3.3:8136"]
snapshot-interval = 1000
[raft]
start-delay = 5000
this-node = "192.168.1.1:8138"
nodes = {"192.168.1.1:8138" = 1, "192.168.2.2:8138" = 2, "192.168.3.3:8138" = 3}
[consul]
start-as = "disabled"
agent = "127.0.0.1:8500"
session-ttl = 11000
renew-time = 1000
key-name = "service/bioyino/lock"

missing capnp

When compiling bioyino, I get this:

error: failed to run custom build command for `raft-consensus v0.4.0`
process didn't exit successfully: `/Users/d.khominich/code/bioyino/target/release/build/raft-consensus-48156342c8e0f7c1/build-script-build` (exit code: 101)
--- stderr
thread 'main' panicked at 'Failed compiling messages schema: Error { kind: Failed, description: "Error while trying to execute `capnp compile`: Failed: No such file or directory (os error 2).  Please verify that version 0.5.2 or higher of the capnp executable is installed on your system. See https://capnproto.org/install.html" }', libcore/result.rs:1009:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.

After installing capnp (brew install capnp), it goes further.

Should we add the capnp requirement to the README file or somewhere else?

Cluster settings when the instance is in Docker

Hello.
I receive an error:

note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'bioyino_raft' panicked at 'no entry found for key', /usr/local/cargo/git/checkouts/raft-tokio-77bb75eade836e87/212ec9b/src/tcp.rs:177:37

When I have the following settings:

[raft]
this-node="%real_server_ip%:8138"
nodes = {"%node_ip1%:8138" = 1,"%node_ip2%:8138" = 2,"%node_ip3%:8138" = 3}
# client-bind = "127.0.0.0:8138"

or

[raft]
this-node="%real_server_ip%:8138"
nodes = {"%node_ip1%:8138" = 1,"%node_ip2%:8138" = 2,"%node_ip3%:8138" = 3}
client-bind = "0.0.0.0:8138"

or

[raft]
this-node= "0.0.0.0:8138"
nodes = {"%node_ip1%:8138" = 1,"%node_ip2%:8138" = 2,"%node_ip3%:8138" = 3}
# client-bind = "127.0.0.0:8138"

Error:

thread 'bioyino_raft' panicked at 'no entry found for key', /usr/local/cargo/git/checkouts/raft-tokio-77bb75eade836e87/212ec9b/src/tcp.rs:177:37
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'bioyino_raft' panicked at 'tcp listener couldn't start: Os { code: 99, kind: AddrNotAvailable, message: "Cannot assign requested address" }', /usr/local/cargo/git/checkouts/raft-tokio-77bb75eade836e87/212ec9b/src/lib.rs:237:47
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

It works without errors with the following settings, but I'm not sure they are correct. Please tell me how to do it right:

[raft]
this-node= "0.0.0.0:8138"
nodes = {"%node_ip1%:8138" = 1,"%node_ip2%:8138" = 2,"%node_ip3%:8138" = 3}
client-bind = "0.0.0.0:8138"

[question] Launch in kubernetes

Hi!
We plan to use bioyino in our infrastructure. And we have a couple of questions:

  1. What problems could arise if we run bioyino in Kubernetes (besides consensus management)?
  2. Do you plan to make a bioyino operator for Kubernetes? Or maybe you have some ideas we can use.

Dev docker conf

Hi. Is it possible to configure bioyino to send metrics every 10 seconds regardless of the buffer config, like the original statsd?
I have a problem on Docker: most of the time, metrics are not delivered to the backend.

Raft: UDP or TCP

Hi,
I am interested in which protocol raft uses in your implementation: UDP or TCP?
