exonum / exonum
An extensible open-source framework for creating private/permissioned blockchain applications
Home Page: https://exonum.com
License: Apache License 2.0
Regression after the #6 pull request.
This is a proposal to add logic for broadcasting a newer Connect message, sent by a PublicKey, to all existing peers in exonum::node::NodeHandler::handle_connect(&mut self, message: Connect).
Resolving this will make it easy to add new full nodes to the blockchain network.
This has to correlate with #14 and have a limit on the allowed frequency of handling each new Connect message from the same PublicKey.
https://*************/projects/22/tasks/1307.
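The per-key throttling described above could be kept in a small helper; a minimal sketch, where ConnectRateLimiter and its methods are hypothetical names, not part of Exonum:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Stand-in for Exonum's PublicKey (which wraps 32 bytes).
type PublicKey = [u8; 32];

// Tracks when a Connect from each key was last handled, so that
// re-broadcasting can be throttled per PublicKey.
struct ConnectRateLimiter {
    min_interval: Duration,
    last_handled: HashMap<PublicKey, Instant>,
}

impl ConnectRateLimiter {
    fn new(min_interval: Duration) -> Self {
        Self { min_interval, last_handled: HashMap::new() }
    }

    /// Returns true if a Connect from `key` may be handled (and broadcast
    /// to existing peers) at time `now`; false if it arrived too soon.
    fn allow(&mut self, key: PublicKey, now: Instant) -> bool {
        if let Some(&prev) = self.last_handled.get(&key) {
            if now.duration_since(prev) < self.min_interval {
                return false;
            }
        }
        self.last_handled.insert(key, now);
        true
    }
}
```

handle_connect would consult allow() before re-broadcasting, dropping Connect messages that repeat faster than the configured interval.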
Update the leader election algorithm to provide weak censorship resistance. The changes are the following:
- A validator enters a disabled state for F blocks (during the next F blocks it has no right to create new block proposals). The node behaves as usual in other activities, including voting for a new block, signing messages, etc.
- The leader order at each height is a permutation of the remaining M = N - F validators. The number of the permutation is calculated as T = Hash(H) mod M!. Such a calculation provides a uniform distribution of the orders, that is, Byzantine validators are randomly distributed within the current height H.
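Decoding the T-th of the M! permutations can be done with the factorial number system (Lehmer code); a sketch, where t stands for the value Hash(H) mod m! computed elsewhere:

```rust
// Decode the t-th permutation of m validator indices via the factorial
// number system (Lehmer code). In the algorithm above, t would be
// Hash(H) mod m!; here it is just an integer parameter.
fn nth_permutation(m: usize, mut t: u64) -> Vec<usize> {
    // Precompute factorials: fact[i] = i!
    let mut fact = vec![1u64; m.max(1)];
    for i in 1..m {
        fact[i] = fact[i - 1] * i as u64;
    }
    let mut pool: Vec<usize> = (0..m).collect();
    let mut order = Vec::with_capacity(m);
    for i in (0..m).rev() {
        // The i-th "digit" of t in the factorial base selects the next leader.
        let idx = (t / fact[i]) as usize;
        t %= fact[i];
        order.push(pool.remove(idx));
    }
    order
}
```

Each t in 0..m! yields a distinct order, so a uniformly distributed T gives a uniformly distributed leader order.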
To compile on Travis we need to set ubuntu: trusty.
The errors look like:
/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libleveldb.a(env_posix.o): In function `leveldb::(anonymous namespace)::PosixEnv::Schedule(void (*)(void*), void*)':
(.text+0xaf2): undefined reference to `operator delete(void*)'
/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libleveldb.a(env_posix.o): In function `leveldb::(anonymous namespace)::PosixEnv::Schedule(void (*)(void*), void*)':
(.text+0xb8b): undefined reference to `std::__throw_bad_alloc()'
The error is probably in the C++ runtime.
Currently we are using SystemTime for timeouts, for example:
pub fn add_status_timeout(&mut self) {
let time = self.channel.get_time() + Duration::from_millis(self.status_timeout());
self.channel.add_timeout(NodeTimeout::Status, time);
}
Duration can be used instead:
add_timeout(NodeTimeout::Status, Duration::from_millis(self.status_timeout()));
However, a straightforward implementation will change the behavior of timeouts because they are handled through the same channel as other events. Perhaps we need separate channels/queues.
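For instance, timeouts could live in their own deadline-ordered queue instead of the shared event channel; a sketch using only std types (Timeout mirrors a couple of NodeTimeout variants; all names are illustrative, not the actual API):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;
use std::time::{Duration, Instant};

// Hypothetical timeout kinds; NodeTimeout in the codebase has more variants.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Timeout {
    Status,
    Round,
}

// A dedicated timeout queue ordered by deadline. Timeouts are registered
// with a relative Duration and polled against the current Instant, so
// they no longer compete with other events on the shared channel.
struct TimeoutQueue {
    heap: BinaryHeap<Reverse<(Instant, Timeout)>>,
}

impl TimeoutQueue {
    fn new() -> Self {
        Self { heap: BinaryHeap::new() }
    }

    fn add_timeout(&mut self, timeout: Timeout, after: Duration, now: Instant) {
        // Reverse turns the max-heap into a min-heap on the deadline.
        self.heap.push(Reverse((now + after, timeout)));
    }

    /// Pops the next timeout whose deadline has passed, if any.
    fn pop_expired(&mut self, now: Instant) -> Option<Timeout> {
        match self.heap.peek() {
            Some(&Reverse((deadline, timeout))) if deadline <= now => {
                self.heap.pop();
                Some(timeout)
            }
            _ => None,
        }
    }
}
```

The event loop would then poll pop_expired() alongside the ordinary event channel, keeping timeout handling independent of channel backpressure.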
- (hash) - private; get the status (mempool, committed, unknown) of a transaction, its body, and its location, if committed. Closed as part of #224.
- (ip-address) - private; generate Event(ConnectTo(ip-address)). Closed as part of #224.

Now we implicitly assume that any types used as keys in a storage table are serialized to bytes in big-endian order. But the trait for keys is AsRef<[u8]>, which cannot provide proper flexibility.
See the PR for details.
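One possible direction is a dedicated key trait, sketched here under the assumption that each key type controls its own fixed byte encoding (trait and method names are illustrative, not Exonum's actual API):

```rust
// Sketch of a key trait that could replace AsRef<[u8]> for storage keys.
// Big-endian encoding makes integer keys sort correctly as raw bytes.
trait StorageKey: Sized {
    const SIZE: usize;
    fn write(&self, buf: &mut [u8]);
    fn read(buf: &[u8]) -> Self;
}

impl StorageKey for u64 {
    const SIZE: usize = 8;
    fn write(&self, buf: &mut [u8]) {
        // to_be_bytes produces the big-endian representation.
        buf.copy_from_slice(&self.to_be_bytes());
    }
    fn read(buf: &[u8]) -> Self {
        let mut bytes = [0u8; 8];
        bytes.copy_from_slice(buf);
        u64::from_be_bytes(bytes)
    }
}
```

Unlike AsRef<[u8]>, such a trait lets a key type own both directions of the conversion and state its encoded size.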
There should be a way to create a configuration for a real network.
In an internal discussion we decided to split configuration creation into several steps.
Review the whole codebase according to the naming conventions (2.4.* pages) and suggest renamings.
Propose, Block, and Request* messages, and the logic of their validation.
Precommit (the current time at the moment of message creation is specified; it is not validated in any way when the message is received).
The merge request and the code are being pushed to GitLab for now.
The fact that a node initially sends a statically defined addr of itself in the Connect message may be problematic for deploying nodes across different networks/organizations.
In the general case, a node cannot know its own IP as seen by another peer without using external services.
Moreover, a node's IP may differ from different peers' perspectives.
#16
https://*********/projects/22/tasks/1307
The time crate should be replaced by std::time.
@defuz @alekseysidorov: found a likely bug in precommit verification.
It seems there's no code to verify that the precommits come from distinct validators. Replicating a single precommit self.state.majority_count() times will suffice to pass verification.
let precommits = msg.precommits();
if precommits.len() < self.state.majority_count() ||
precommits.len() > self.state.validators().len() {
error!("Received block without consensus, block={:?}", msg);
return;
}
let precommit_round = precommits[0].round();
for precommit in &precommits {
let r = self.verify_precommit(&block_hash, block.height(), precommit_round, precommit);
if let Err(e) = r {
error!("{}, block={:?}", e, msg);
return;
}
}
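The missing distinctness check could be as simple as this sketch, where validator ids are modeled as plain u32 values (the real precommit message type is not reproduced here):

```rust
use std::collections::HashSet;

// Sketch of the missing check: precommits must come from distinct
// validators. `validator_ids` stands for the validator index carried
// by each precommit.
fn validators_are_distinct(validator_ids: &[u32]) -> bool {
    let mut seen = HashSet::new();
    // HashSet::insert returns false on a duplicate, failing the check.
    validator_ids.iter().all(|id| seen.insert(*id))
}
```

Run over the precommit list before (or alongside) the per-precommit signature checks, this defeats the replication attack above.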
This is needed for verifying Merkle tree proofs to evaluate the Merkle tree depth on the client; the CT solution with hashing prefixes (https://tools.ietf.org/html/rfc6962#section-2.1) was discarded.
Currently the tx_hash field in a block is the root of the Merkle table of the block's transactions.
Add a tx_length field to the block with the length of the same Merkle table.
propose_timeout = 500 in all cases. These are empty blocks.

- 6 and 8 nodes, round_timeout = 3000, status_timeout = 5000: 5-12 blocks per minute on 6 nodes.
- 6 and 8 nodes, round_timeout = 3000, status_timeout = 3000: 74 blocks per minute on 6 nodes.
- 5/1 and 6/2 nodes (5/1 means 6 validators total, 1 stopped for maintenance or due to denial), round_timeout = 3000, status_timeout = 1000.
As proposed in #39, it would be convenient to change functions returning () that contain code like
if some_bad_case {
error!("ERROR MESSAGE");
return;
}
or
let val = match get() {
    Ok(val) => val,
    Err(err) => {
        error!("{:?}", err);
        return;
    }
};
into functions returning Result<(), Error>, so the code above could be rewritten as:
let val = get()?;
I've got into a problem understanding how the core schema interacts with the service code (on the example of the anchoring service). I'm in the dark here, so a clarification could be helpful.
I don't quite understand why the core exposes its schema as a part of its public interface. (Which leads to some questionable choices, such as having the notion of configurations and especially configuration changes embedded into the core - whereas there is a separate service for that.) It could be more developer-friendly to have a pseudo-service interface for the core. Furthermore, this hypothetical interface is similar in its goal to the one used now for service HTTP GET requests; only the middleware could automatically decide not to provide Merkle proofs in the case of inter-service interaction within a full node. This interface could return, via dedicated methods:
And so on. Now, the anchoring service has an optional dependency on the configuration change service (e.g., in order to change the anchoring address), and it should probably:
Perhaps I'm misunderstanding something, but I would describe the current approach as hacking the core (e.g., with get_following_configuration and the like) just in case it runs with one particular service. Is this done for efficiency reasons?
Proposed solution: A good solution would require inter-process communication. A good place to start seems to be treating the View passed to a transaction's execute method as the execution context of the transaction. Then it could be passed to other service calls (ideally implicitly - middleware should take care of that). Behind the scenes, an execution context would correspond to many things, including the DB view, but we would want to hide these details from service developers, right?
So, instead of
pub fn execute(&self, view: &View) {
let schema = Schema::new(view);
let actual_cfg = schema.get_actual_configuration()?;
let validators = actual_cfg.validators;
}
it would look like
pub fn execute(&self, context: &ExecutionContext) {
// narrow() notation is taken from CORBA
let service = context.get_service(CoreService::SERVICE_ID).narrow::<CoreService>();
let validators = service.get_validators(context);
}
Sorry for my Rust, but you probably get the idea.
This was observed to sometimes result in the following behavior:
pub fn handle_request_peers(&mut self, msg: RequestPeers) {
let peers: Vec<Connect> = self.state.peers().iter().map(|(_, b)| b.clone()).collect();
for peer in peers {
self.send_to_peer(*msg.from(), peer.raw());
}
}
------
pub fn send_to_peer(&mut self, public_key: PublicKey, message: &RawMessage) {
if let Some(conn) = self.state.peers().get(&public_key) {
trace!("Send to addr: {}", conn.addr());
self.channel.send_to(&conn.addr(), message.clone());
} else {
warn!("Hasn't connection with peer {:?}", public_key);
}
}
If node A missed node B's Connect, node A won't send its peers to B upon being requested.
Proposed fix: add addr and time fields to RequestPeers, effectively combining Connect and RequestPeers (and combining the handling logic too):
addr: SocketAddr [32 => 38]
time: SystemTime [38 => 50]
@alekseysidorov Accidentally spotted this tiny method. https://github.com/exonum/exonum-core/blob/master/exonum/src/node/consensus.rs#L766
It's likely to cause panics when verifying incoming consensus messages from rogue nodes:
`handle_propose`
--> src/node/consensus.rs:106:28
|
106 | let key = self.public_key_of(msg.validator());
| ^^^^^^^^^^^^^
`handle_prevote`
--> src/node/consensus.rs:246:28
|
246 | let key = self.public_key_of(prevote.validator());
| ^^^^^^^^^^^^^
--> src/node/consensus.rs:285:32
|
285 | let key = self.public_key_of(validator);
| ^^^^^^^^^^^^^
`handle_precommit`
--> src/node/consensus.rs:340:25
|
340 | let peer = self.public_key_of(msg.validator());
| ^^^^^^^^^^^^^
--> src/node/consensus.rs:676:28
|
676 | let key = self.public_key_of(validator);
| ^^^^^^^^^^^^^
https://*********/projects/22/discussions?modal=Discussion-56-22
#16
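A non-panicking alternative for the lookup could look like this sketch (PublicKey and the free-function signature are stand-ins for illustration, not the actual method on NodeHandler):

```rust
// Stand-in for Exonum's PublicKey (which wraps 32 bytes).
type PublicKey = [u8; 32];

// Sketch: a fallible lookup instead of a panicking index. Consensus
// handlers could then drop messages whose validator id is out of range
// instead of crashing the node.
fn public_key_of(validators: &[PublicKey], id: usize) -> Option<&PublicKey> {
    validators.get(id)
}
```

Each call site shown above would then match on the Option (or use ?) and log-and-return for out-of-range validator ids from rogue nodes.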
Currently there is a bug. Scenario:
cargo test --all
Compiling lazy_static v0.2.2
error: linking with `cc` failed: exit code: 1
****
= note: ld: library not found for -lleveldb
clang: error: linker command failed with exit code 1 (use -v to see invocation)
For now, anyone can connect to a node with a self-generated (public_key, secret_key) pair.
We should add a filter to disallow connections from nodes with unauthorized public keys.
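A minimal sketch of such a filter, assuming a configured whitelist of authorized keys (ConnectFilter is a hypothetical name, not part of Exonum):

```rust
use std::collections::HashSet;

// Stand-in for Exonum's PublicKey (which wraps 32 bytes).
type PublicKey = [u8; 32];

// Only whitelisted keys may connect; everything else is rejected
// before any message from the peer is handled.
struct ConnectFilter {
    whitelist: HashSet<PublicKey>,
}

impl ConnectFilter {
    fn new(allowed: impl IntoIterator<Item = PublicKey>) -> Self {
        Self { whitelist: allowed.into_iter().collect() }
    }

    fn allow(&self, key: &PublicKey) -> bool {
        self.whitelist.contains(key)
    }
}
```

The filter would naturally be populated from the validator list in the network configuration.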
At least two modules implicitly assume that the current hardware is little-endian.
This does not seem to be a critical issue because most modern hardware is little-endian.
exonumctl seems like a natural part of the exonum crate. Move helpers.rs into the exonum crate. @alekseysidorov @gisochre Any thoughts?
message! and storage_value! share the same code to generate packed-like structures, so we need to unify them:
- message! and storage_value! have borrowed fields, so we can't derive deserialization for them;
- message! should semantically depend on the service;
- add const MESSAGE_TYPE, const SIZE, const SERVICE_ID, and [from => to] to each field.
exonum-propose-timeout-adjuster should implement a corresponding trait instead of being a service. So there should be a default implementation in the core.
Does anyone remember why these folders exist? I would like to delete them.
@vldm Perhaps these pieces can be useful for deploying a test net, but I would still prefer to remove them in their current form.
cc @alekseysidorov @asmsoft @vldm
If the generator sends many transactions, some nodes may not receive them and keep an empty pool. The error "Unable to send to, an error occurred: Full" appears in the generator log.
Currently we have "typedefs" for some things like height, round, etc.:
pub type Round = u32;
pub type Height = u64;
pub type ValidatorId = u32;
Instead, they can be made into tuple structs.
Advantages: stronger type checking, e.g. the arguments of fn foo(Round, ValidatorId) cannot be accidentally swapped.
Disadvantages: round.0 instead of round if we need the underlying value.
I can do this refactoring if we decide that we need it.
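A sketch of the tuple-struct variant (the describe function is only for illustration):

```rust
// Newtype wrappers around the plain integer typedefs. Swapping two u32
// arguments compiles silently with typedefs; with tuple structs it
// becomes a compile-time type error.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Round(pub u32);
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Height(pub u64);
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct ValidatorId(pub u32);

// A function taking distinct types cannot have its arguments swapped.
fn describe(round: Round, validator: ValidatorId) -> String {
    // The underlying value is accessed via `.0`.
    format!("round {} validator {}", round.0, validator.0)
}
```

Calling describe(ValidatorId(2), Round(1)) would be rejected by the compiler, which is exactly the advantage claimed above.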
Why? Calling flame_dump from the execute method of a transaction is a good example of bad design :)
What to do: move flame_dump to handle_terminate of Node (look at this for more info).

It seems that the RequestPrecommits message is not needed anymore and should be removed.
@gisochre:
@DarkEld3r:
Each responsible person provides a separate PR adding #![deny(missing_docs)] to their modules. After that, we add #![deny(missing_docs)] for the whole exonum crate.
- blockchain: @alekseysidorov
- node: @DarkEld3r
- storage: @defuz
- TBD
#86
A rich feature set may enable additional capabilities for exonum-core.
We need this to fulfill the requirements of the consensus algorithm.
Useless comments should be removed, especially commented code.
It was determined in #46 that consensus messages (e.g., Propose) are not sufficiently documented for now. Each such message could be commented like:
// Request connected peers from the node `to`.
//
// ### Processing
// * The message is authenticated by the pubkey `from`.
// It must be in the receiver's full node list
// * If the message is properly authorized, the node responds with...
//
// ### Generation
// A node generates `RequestPeers` under such and such conditions...
message! {
RequestPeers {
Note that consensus messages are slightly different from transaction messages defined by services; neither the Processing nor the Generation section can be straightforwardly translated for transaction messages (although these messages should probably be documented too). This is because tx message processing is encapsulated in the execute() method of the transaction (i.e., can be documented there), and there are no specific rules as to when ordinary tx messages are generated.
Proposed solution: I think some documentation for consensus messages is needed both here and in general Exonum docs. Message descriptions here could be useful in order to verify that messages are processed and generated as intended without needing to consult an external source. And they can be copy-pasted to the general docs if necessary.
Also check that distinct validators are passed in the Vec<PublicKey>.
The table should store transaction_hash -> storage_value! { block_height_u64, tx_position_within_block_u64 }.
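A sketch of the proposed mapping, with the storage_value! macro replaced by a plain struct and a HashMap standing in for the storage table (field names are taken from the issue):

```rust
use std::collections::HashMap;

// Stand-in for Exonum's Hash (which wraps 32 bytes).
type Hash = [u8; 32];

// Value stored per transaction hash: where the transaction was committed.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct TxLocation {
    block_height_u64: u64,
    tx_position_within_block_u64: u64,
}

// transaction_hash -> location of the committed transaction.
fn locate(table: &HashMap<Hash, TxLocation>, tx_hash: &Hash) -> Option<TxLocation> {
    table.get(tx_hash).copied()
}
```

With this table, a client query by transaction hash can answer "committed at which height, at which position" without scanning blocks.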