mimblewimble / grin
Minimal implementation of the Mimblewimble protocol.
Home Page: https://grin.mw/
License: Apache License 2.0
It seems the dependency librocksdb-sys v0.4.1 fails to compile its bundled rocksdb dependency on gcc 7+, with errors such as the following:
cargo:warning= from rocksdb/db/auto_roll_logger.h:15,
cargo:warning= from rocksdb/db/auto_roll_logger.cc:6:
cargo:warning=rocksdb/util/thread_local.h:66:16: error: ‘function’ in namespace ‘std’ does not name a template type
cargo:warning= typedef std::function<void(void*, void*)> FoldFunc;
It seems to be directly related to this issue, which was fixed within rocksdb:
However, the librocksdb Rust bindings haven't been updated since, causing the whole build to fail.
I've opened an issue for the maintainers here; you might want to watch it and come up with an alternate strategy if they've gone missing:
Hard to find us if you don't know github right now...
From @apoelstra on the mailing list -
https://lists.launchpad.net/mimblewimble/msg00103.html
Paper -
https://blockstream.com/bitcoin17-final41.pdf
Potential impact to "Wallet Import/Export Format" - #28 (can a wallet identify UTXOs that belong to the wallet?)
(starting an issue here to track relevant reading material etc.)
@ignopeverell please feel free to close this out if not immediately relevant given where Grin is today.
Design a simple fee calculation aimed at making spam as expensive as possible while keeping transactions cheap for normal users. Should likely be a mix of a fixed fee for each output (bandwidth consumed) plus a penalty for creating more UTXOs than consumed. Possibly a bonus for destroying UTXOs as well, but careful there. Enforce in the transaction pool.
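A minimal sketch of what such a rule could look like, assuming a fixed per-output fee plus a penalty on net UTXO set growth. The constants and the `min_fee` name are made up for illustration; this is not Grin's actual fee policy.

```rust
// Hypothetical fee constants, in base units; purely illustrative.
const BASE_FEE_PER_OUTPUT: u64 = 10;
const UTXO_CREATION_PENALTY: u64 = 40;

/// Minimum fee the pool would require for a transaction with the given
/// number of inputs and outputs: a fixed fee per output (bandwidth
/// consumed) plus a penalty for creating more UTXOs than are consumed.
pub fn min_fee(num_inputs: u64, num_outputs: u64) -> u64 {
    let base = num_outputs * BASE_FEE_PER_OUTPUT;
    // Penalize net UTXO set growth; transactions that shrink the set
    // just pay the base fee (no bonus, to keep incentives simple).
    let penalty = num_outputs
        .saturating_sub(num_inputs)
        .saturating_mul(UTXO_CREATION_PENALTY);
    base + penalty
}
```

A deliberate design question left open here is the "bonus for destroying UTXOs": this sketch omits it, since a rebate could be gamed by round-tripping outputs.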
Currently when a peer gets a new block, it forwards it to all its known peers, regardless of whether they were the sender in the first place. Likely entails tracking what we received and maintaining some ring buffer on the peer.
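One possible shape for that tracking, sketched with simplified types (a block hash is just a `[u8; 32]`, peers are `u64` ids; none of this is the actual p2p code): a bounded ring buffer of recently seen hashes, remembering which peers sent us each one.

```rust
use std::collections::{HashMap, VecDeque};

/// Tracks which peers sent us which recent blocks, so a block is not
/// relayed back to a peer we received it from. Bounded by a ring buffer.
pub struct SeenTracker {
    capacity: usize,
    order: VecDeque<[u8; 32]>,
    // hash -> ids of peers we received it from
    seen: HashMap<[u8; 32], Vec<u64>>,
}

impl SeenTracker {
    pub fn new(capacity: usize) -> SeenTracker {
        SeenTracker { capacity, order: VecDeque::new(), seen: HashMap::new() }
    }

    /// Record that `peer` sent us block `hash`, evicting the oldest
    /// tracked hash once capacity is reached.
    pub fn record(&mut self, hash: [u8; 32], peer: u64) {
        if !self.seen.contains_key(&hash) {
            if self.order.len() == self.capacity {
                if let Some(old) = self.order.pop_front() {
                    self.seen.remove(&old);
                }
            }
            self.order.push_back(hash);
        }
        self.seen.entry(hash).or_insert_with(Vec::new).push(peer);
    }

    /// Should we relay `hash` to `peer`? Not if that peer sent it to us.
    pub fn should_relay(&self, hash: &[u8; 32], peer: u64) -> bool {
        self.seen.get(hash).map_or(true, |peers| !peers.contains(&peer))
    }
}
```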
We currently have the following endpoints in the API -
GET /v1/chain/:id
GET /v1/chain/utxo/:id
GET /v1/pool/:id
POST /v1/pool/push
And the following endpoints on the wallet API -
POST /v1/receive/coinbase
POST /v1/receive/receive_json_tx
Note: the :id url param is actually redundant in both /v1/chain/:id and /v1/pool/:id but is there as an artifact of the ApiEndpoint implementation.
Ideally these endpoints would be implemented not as get on a collection of resources but as get (or index) on a single resource.
GET /v1/chain
GET /v1/chain/utxo/:id
GET /v1/pool
POST /v1/pool/push
While these API endpoints are RESTful (with some custom operations) they are not really CRUD - we do not currently have much need for full DELETE/GET/POST/PUT support.
Additionally it may be desirable to support GET / endpoints for single resources to avoid the redundant :id url param.
We may also want to return multiple utxos when checking the wallet to avoid an API call per utxo.
We could do this by querying/filtering by multiple ids on the index endpoint -
GET /v1/chain/utxo?ids=1,2,3
This is not currently supported in the existing API framework.
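Parsing such an ids parameter is simple; as a rough illustration (the `parse_ids` helper is hypothetical, not part of the existing API code):

```rust
/// Parse a comma-separated `ids` query parameter such as `ids=1,2,3`
/// into a list of numeric ids, silently skipping malformed entries.
pub fn parse_ids(param: &str) -> Vec<u64> {
    param
        .split(',')
        .filter_map(|s| s.trim().parse::<u64>().ok())
        .collect()
}
```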
I'm wondering if it may make sense to attempt to simplify the API by eliminating the intermediate ApiEndpoint framework and implementing each specific endpoint directly via an Iron route handler.
I think this would remove a lot of code while making it more flexible (query/filter by ids etc.).
Thoughts on this? Is it worth seeing what the API code looks like with this approach?
I am interested in this project, but freenode just is not that great for connecting via Tor. I see you guys have a mailing list, which is awesome too, but it does not seem very active. Would i2p IRC maybe be a better alternative for dev talk? Wondering what you guys think about this.
One difficulty with Mimblewimble is that to find one's UTXOs, both the private key and the amounts must be known. There are plenty of solutions for private key backup and export, including HD, but amounts are a MW-specific issue. Exporting a list of amounts is possible but ergonomically complicated, especially if we still want to support paper wallets.
Brute-forcing the value space is a possibility, but with 8 or 10 significant digits it could take a prohibitively long time. One improvement could be to have wallets restrict the ranges of amounts they handle, maybe enforcing it through the range proof. A better idea may be to combine an HD-type root key with a small Bloom filter (32 bytes for example) for values. Brute-forcing the Bloom filter would be very fast, drastically reducing the space of values that have to be explored with each HD derivation.
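A toy sketch of the 32-byte (256-bit) Bloom filter idea above: the wallet inserts each amount it creates, and on restore, candidate values are checked against the filter before any expensive HD derivation. `DefaultHasher` stands in for a proper keyed hash; this is an illustration of the idea, not a proposed implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy 256-bit Bloom filter over output amounts, 3 hash functions.
pub struct AmountFilter {
    bits: [u8; 32],
}

impl AmountFilter {
    pub fn new() -> AmountFilter {
        AmountFilter { bits: [0u8; 32] }
    }

    // Derive 3 bit positions in [0, 256) for a given amount.
    fn bit_positions(amount: u64) -> [usize; 3] {
        let mut out = [0usize; 3];
        for k in 0..3 {
            let mut h = DefaultHasher::new();
            (k as u64, amount).hash(&mut h);
            out[k] = (h.finish() % 256) as usize;
        }
        out
    }

    pub fn insert(&mut self, amount: u64) {
        for &pos in Self::bit_positions(amount).iter() {
            self.bits[pos / 8] |= 1u8 << (pos % 8);
        }
    }

    /// False means "definitely not ours"; true means "maybe ours", so
    /// only these candidates go through HD derivation.
    pub fn maybe_contains(&self, amount: u64) -> bool {
        Self::bit_positions(amount)
            .iter()
            .all(|&pos| self.bits[pos / 8] & (1u8 << (pos % 8)) != 0)
    }
}
```

With few amounts inserted, the false positive rate is tiny, so brute-forcing the filter first cuts the value space dramatically.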
How do we support them? Would likely require another output field and a signature. Maybe add to the transaction proof signature.
This would be great to have for higher-level contract support like payment channels, lightning, etc.
The current one is a little too lightweight. Zcash has gone through a fair amount of analysis:
They ended up going with a heavily tweaked version of Digishield, with tuned up parameters. Viacoin also has an interesting improvement over DarkGravityWave:
https://github.com/viacoin/viacoin/blob/master/src/pow.cpp#L80
We should shamelessly port an existing algo instead of re-inventing the wheel.
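For reference, the general shape shared by these algorithms is a damped, clamped window retarget. The sketch below is not a port of DigiShield or DarkGravityWave, just an illustration of that shape; all constants are made up.

```rust
/// One retargeting step: compare the actual timespan of a window of
/// blocks to the expected timespan, dampen the deviation, clamp the
/// result, and scale difficulty inversely to the adjusted timespan.
pub fn next_difficulty(prev_diff: u64, actual_timespan: u64, target_timespan: u64) -> u64 {
    // Dampen: only apply 1/4 of the measured deviation, so one noisy
    // window can't swing difficulty violently.
    let damped = (3 * target_timespan + actual_timespan) / 4;
    // Clamp to [target/2, target*2] to bound the per-step change.
    let clamped = damped.max(target_timespan / 2).min(target_timespan * 2);
    // Difficulty is inversely proportional to the (adjusted) timespan.
    (prev_diff * target_timespan / clamped).max(1)
}
```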
While working on the pool (#21) one fairly important piece I'm missing is how the blockchain's UTXO snapshot will be exposed to other components of the system. It looks like this is #10, or at least a very closely related component of it (cc: @merope07).
It seems premature to set anything quite in stone, but I'm wondering if it makes sense to start discussion around what the expected external-facing capabilities are of the UTXO data structure so that we can start working against a common understanding?
The current Merkle tree implementation is mostly a placeholder and is going to be inadequate to handle the UTXO set tree. The following are highly desirable properties:
The MMR [1] [2] algorithm has been proposed and Merklix trees [3] offer interesting possibilities as well (i.e. p2p querying, sharding).
[1] https://github.com/opentimestamps/opentimestamps-server/blob/master/python-opentimestamps/opentimestamps/core/timestamp.py#L324
[2] https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md
[3] http://www.deadalnix.me/2016/09/29/using-merklix-tree-to-checkpoint-an-utxo-set/
@GarrickOllivander @merope07 and @apoelstra have expressed interest.
If I get a chance over the next few days I will investigate this further, but it looks like starting the wallet daemon up with its default port logs that it is receiving on port 13415 but is actually running on 13416.
It is not immediately obvious to me why it would log the incorrect port like this.
I think it is also inconsistent with the docs.
Specifying -r 15000 explicitly logs that the wallet is receiving on 15000, and port 15000 does appear to be in use -
RUST_LOG=grin=debug ~/github/antiochp/grin/target/debug/grin wallet -p "password" -r 15000 receive
INFO:grin: Starting the Grin wallet receiving daemon at http://127.0.0.1:15000...
INFO:grin_api::rest: route: POST /v1/receive/coinbase
INFO:grin_api::rest: route: POST /v1/receive/receive_json_tx
lsof -i -P | grep grin
grin 41466 <user> 3u IPv4 0x75bdb707e182a011 0t0 TCP localhost:15000 (LISTEN)
Specifying no port logs that the wallet is receiving on 13415, but it appears to be running on 13416 and not 13415 -
RUST_LOG=grin=debug ~/github/antiochp/grin/target/debug/grin wallet -p "password" receive
INFO:grin: Starting the Grin wallet receiving daemon at http://127.0.0.1:13415...
INFO:grin_api::rest: route: POST /v1/receive/coinbase
INFO:grin_api::rest: route: POST /v1/receive/receive_json_tx
lsof -i -P | grep grin
grin 41530 <user> 3u IPv4 0x75bdb707dcb34e21 0t0 TCP localhost:13416 (LISTEN)
Specifying -r 13415 explicitly logs that the wallet is receiving on 13415, and port 13415 does appear to be in use as expected -
RUST_LOG=grin=debug ~/github/antiochp/grin/target/debug/grin wallet -p "password" -r 13415 receive
INFO:grin: Starting the Grin wallet receiving daemon at http://127.0.0.1:13415...
INFO:grin_api::rest: route: POST /v1/receive/coinbase
INFO:grin_api::rest: route: POST /v1/receive/receive_json_tx
lsof -i -P | grep grin
grin 41592 <user> 3u IPv4 0x75bdb707d46b93f1 0t0 TCP localhost:13415 (LISTEN)
Throughout the code base the usage of tabs and whitespace for indentation is inconsistent. What is the desired way to do it? Tabs, 2 spaces or 4 spaces?
The test case simulnet.rs is stuck at
evtlp.run(change(&servers[4]).and_then(|tip| {
assert!(tip.height == original_height+1);
Ok(())
}));
and never finishes. I am running rustc 1.14.0 (e8a012324 2016-12-16).
Right now we use the wallet password directly as the seed.
Two wallets with password password will actually be the same wallet (ignoring complexity around wallet utxo recovery).
We should generate a new random seed on first use of the wallet.
And then use the wallet password to encrypt/decrypt it as required.
Possibly something like this in wallet.dat -
{
"enc_seed": {
"fingerprint": [ ... ],
"bytes": [ ... ]
},
"outputs": [
{
"fingerprint": [ ... ],
"n_child": 1,
"value": 1000,
"status": "unconfirmed"
},
...
]
}
Something like BIP-39 could be used to generate/backup the seed - but we still need to store it encrypted in wallet.dat itself.
Edit: Thinking more about this and the encrypted seed should probably be in a separate file.
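The encrypt/decrypt half of the scheme can be sketched with a password-derived keystream XORed over the random seed. This is illustrative only: `DefaultHasher` is a stand-in for a real KDF and cipher (e.g. scrypt plus an AEAD) and must not be used for real keys.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy keystream byte derived from the password and a counter.
// A real wallet would use a proper KDF + authenticated cipher.
fn keystream_byte(password: &str, i: u64) -> u8 {
    let mut h = DefaultHasher::new();
    (password, i).hash(&mut h);
    (h.finish() & 0xff) as u8
}

/// XOR the seed with the password-derived keystream. XOR is symmetric,
/// so the same function both encrypts and decrypts.
pub fn crypt_seed(seed: &[u8], password: &str) -> Vec<u8> {
    seed.iter()
        .enumerate()
        .map(|(i, b)| b ^ keystream_byte(password, i as u64))
        .collect()
}
```

The point is the flow: the seed is random and stored only encrypted, and the password is needed solely to unlock it, so changing the password never changes the wallet's keys.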
At https://github.com/ignopeverell/grin/blob/afb219ce5c6bf5adb21e638df96a2b3b66cd2e24/core/src/core/mod.rs#L66 it should be secp.commit_sum(output_commits, input_commits), otherwise the resulting sum corresponds to -kG instead of kG.
This is more of a longer-term project, as it's not necessary for basic operation of the system. However, in order for transaction pools to operate reliably, I believe there will need to be a system to return to the pool transactions invalidated on a fork that was previously head, so that any transactions that conflict between the fork and the new head are not lost forever. This requires a bit more machinery than in most existing blockchains because recovering the transaction from the block itself is not possible.
Conceptually, I was thinking that the pool should retain recently invalidated transactions in their entirety for a period of time (some acceptable reorg depth), which can be much less than the cut-through reorg depth because this property is not necessary for correctness, only usability. Even in the worst case, some miners (those who built the blocks that constitute the new head) should have a consistent view of the transaction pool without any loss. Any thoughts?
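The retention idea above could look something like this bounded buffer (types simplified; `Tx` is a stand-in for the real transaction type, and the depth constant would be the acceptable reorg depth, not the cut-through depth):

```rust
use std::collections::VecDeque;

/// Keeps recently invalidated transactions in full for a bounded number
/// of recent heights, so a short reorg can hand them back to the pool.
pub struct ReorgBuffer<Tx> {
    max_depth: u64,
    entries: VecDeque<(u64, Tx)>, // (height invalidated at, tx)
}

impl<Tx> ReorgBuffer<Tx> {
    pub fn new(max_depth: u64) -> ReorgBuffer<Tx> {
        ReorgBuffer { max_depth, entries: VecDeque::new() }
    }

    /// Remember a transaction invalidated at `height`, dropping anything
    /// older than the acceptable reorg depth.
    pub fn retain(&mut self, height: u64, tx: Tx) {
        self.entries.push_back((height, tx));
        while let Some(&(h, _)) = self.entries.front() {
            if height.saturating_sub(h) > self.max_depth {
                self.entries.pop_front();
            } else {
                break;
            }
        }
    }

    /// Drain transactions invalidated at or above `fork_height`, to be
    /// re-validated against the new head and returned to the pool.
    pub fn drain_since(&mut self, fork_height: u64) -> Vec<Tx> {
        let mut out = Vec::new();
        while let Some(&(h, _)) = self.entries.back() {
            if h >= fork_height {
                out.push(self.entries.pop_back().unwrap().1);
            } else {
                break;
            }
        }
        out
    }
}
```

As noted, this is a usability aid, not a correctness requirement, so an approximate, memory-bounded structure like this seems acceptable.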
In continuation of #51 and #39, core::hash currently depends on the Writeable/Readable traits. These include: Hash, HashWriter and Hashed. The current agenda is to:
- make the core::hash module fulfill Encoder with minimum trimming.
I think once we have a working proposal, we will have more flexibility on what can be refactored and trimmed.
Right now we have a full copy of the rust-secp256k1 source as well as a full copy of the libsecp256k1-zkp source in the repository, with various modifications applied on top of both. It's messy, hard to track and doesn't allow us to merge from upstream. We need to clean that up.
Here's the proposed cleanup solution:
- create upstream vendor branches in both of them [1] that we keep in sync with the 2 upstreams.
- merge upstream into master and apply our various changes on master.
Since AsRef already exists within the standard library, it would make sense to use it instead of adding another trait.
We need to prevent miners from being able to spend their reward for enough blocks so that they can't benefit from forking the chain. A sensible default may be 1000 blocks, which would be 10 times Bitcoin's (given that our block time is 1/10th).
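The check itself is a one-liner; a sketch, assuming a maturity of 1000 blocks as suggested above (names are illustrative):

```rust
// Suggested default: 10x Bitcoin's 100 blocks, since block time is 1/10th.
const COINBASE_MATURITY: u64 = 1_000;

/// A coinbase output created at `coinbase_height` is spendable only once
/// the chain head is at least COINBASE_MATURITY blocks past it.
pub fn coinbase_spendable(coinbase_height: u64, head_height: u64) -> bool {
    head_height >= coinbase_height + COINBASE_MATURITY
}
```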
The following tests appear to be failing consistently on master when run using cargo test -p grin_chain -
Finished dev [unoptimized + debuginfo] target(s) in 86.2 secs
Running target/debug/deps/grin_chain-7bcc00f37abf022a
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running target/debug/deps/mine_simple_chain-028713fafb6b7973
running 2 tests
test mine_forks ... FAILED
test mine_empty_chain ... FAILED
failures:
---- mine_forks stdout ----
thread 'mine_forks' panicked at 'assertion failed: `(left == right)` (left: `3`, right: `1`)', chain/tests/mine_simple_chain.rs:110
note: Run with `RUST_BACKTRACE=1` for a backtrace.
---- mine_empty_chain stdout ----
thread 'mine_empty_chain' panicked at 'assertion failed: `(left == right)` (left: `18`, right: `1`)', chain/tests/mine_simple_chain.rs:83
failures:
mine_empty_chain
mine_forks
test result: FAILED. 0 passed; 2 failed; 0 ignored; 0 measured; 0 filtered out
error: test failed, to rerun pass '--test mine_simple_chain'
Grin should soon support:
Using these primitives, figure out how to implement vaults [1] [2].
[1] http://hackingdistributed.com/2016/02/26/how-to-implement-secure-bitcoin-vaults/
[2] http://fc16.ifca.ai/bitcoin/papers/MES16.pdf
(In collaboration with @trhode)
Currently every stored type is responsible for providing its own encoding and decoding format via core::ser::{Writeable, Readable}. It seems more efficient to delegate the encoding and decoding format to one generic Serializer/Deserializer (bincode, json, etc...).
It seems as though the need for the Readable/Writeable traits in grin-core stemmed from the inability to implement Serializable for the secp256k1 types. If so, implementing Serializable in secp256k1 seems like a good place to start.
Removing the Readable/Writable traits would simplify the codebase greatly, which is one of the stated goals of the grin-core project.
From discussions and recent issues with the test suite, it's clear that a solution is needed to reduce the time it takes for tests to run, and it should probably be a project goal to keep automated testing time to a minimum.
It's clear at the moment that the biggest issue is mining speed, as a lot of tests build up a blockchain from scratch in order to test the results. In tests where the mining itself isn't the focus, there should be a mode that builds up blockchains very quickly using cuckoo parameters tweaked to have them build as fast as possible using the internal miner.
To address this, I propose:
- Changing the test_mode parameter from a boolean to an enum, with values along the lines of (CI, USER_TESTING, PRODUCTION).
- Associating tweakable parameters with the enum, i.e. cuckoo_size and solution size, collecting the values currently in consensus.rs into there (as a matter of fact I'd remove TEST_SIZESHIFT and testing stuff from consensus.rs and only leave actual consensus values in there, reading cuckoo params from elsewhere if test_mode != production).
- Using the values from the enum at every point in the code that references cuckoo mining.
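The proposal above could be sketched roughly as follows (variant names follow the issue, sizeshift values are purely illustrative, not real consensus parameters):

```rust
/// Mode-keyed mining/test parameters instead of a boolean test_mode.
#[derive(Clone, Copy, PartialEq, Debug)]
#[allow(non_camel_case_types)]
pub enum ParameterMode {
    CI,           // smallest graph, near-instant solutions for automated tests
    USER_TESTING, // small graph, fast enough for manual testing
    PRODUCTION,   // real consensus parameters
}

/// Cuckoo sizeshift associated with each mode; every point in the code
/// that references cuckoo mining would read from here.
pub fn sizeshift(mode: ParameterMode) -> u8 {
    match mode {
        ParameterMode::CI => 10,
        ParameterMode::USER_TESTING => 16,
        ParameterMode::PRODUCTION => 30,
    }
}
```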
It is interesting that removing (input, output) pairs across blocks compacts the blockchain size considerably. This is good.
However, block validation is weaker on a compacted blockchain because the following conditions are not satisfied.
Therefore, I think it leads to a security reduction (like SPV security in Bitcoin).
What do you think about that? I would appreciate any reply.
Depends on #22. Allow forks and maintain them, select the appropriate forks for new blocks and as the head, etc.
During first-time sync, received blocks are (rightfully) not broadcast, as everyone is assumed to have them. However orphans still are, as the "sync" configuration isn't pushed down.
We need to cover everything that gets in a block, including the range proofs and the kernels. Some of it may be covered somewhere else (i.e. output commitments in the UTXO set), but everything should go into computing some hash to avoid any block-level malleability.
Long pull requests conversations quickly break down... Anyone knows of a good mailing list service that doesn't require a phone-bound account (i.e. google groups) and isn't too hostile to Tor?
For a chain of blocks, maintain its total difficulty as the sum of the ratio of the maximum target over each block's target. Required by the chain implementation to select the fork with the most work.
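A minimal sketch of that accumulation, using u64 targets for brevity (real targets are much wider, and integer division here is a simplification):

```rust
/// Total difficulty of a chain: each block contributes the ratio of the
/// maximum target to its own target (i.e. its difficulty). The fork
/// with the largest running sum is the one with the most work.
pub fn total_difficulty(max_target: u64, block_targets: &[u64]) -> u64 {
    block_targets.iter().map(|t| max_target / t).sum()
}
```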
Grin is going to need a rust library to call into John Tromp's Cuckoo mining code for access to the latest and greatest Cuckoo mining algorithms, and the most logical way to do this is to maintain a separate library/rust crate to provide this. A project has been started to address this here:
https://github.com/mimblewimble/cuckoo-miner
For design goals, the library should try to maintain compatibility with what's already in the Cuckoo repository as much as possible, to make it relatively easy to update with any changes or new algorithms that come along. The caller should also be able to select which algorithm/cuckoo version it wants to use, and be able to pass in any parameters needed.
Once this library exists, grin's miner should be changed to use it (possibly removing the existing simple-miner implementation), and options need to be provided to make the miner implementation selectable.
In a famous article about Zcash, Peter Todd analyses the Equihash PoW. It seems Equihash and Cuckoo are similar in trying to constrain mining by memory latency ("commodity DRAM"). So at least the part about specialized memory chips probably applies to Cuckoo as well.
The docs (somewhere) suggest a value of 1_000 for COINBASE_MATURITY.
This is based on a 1 minute block time (vs. Bitcoin with 10 min block time and coinbase maturity of 100 blocks).
This makes testing of nodes and wallets really difficult as we need to wait for 1_000 blocks before anything becomes spendable.
We have temporarily reduced this in consensus.rs -
https://github.com/ignopeverell/grin/blob/a5b2c7d3f2d057666a33e416ebf068722a7af6c2/core/src/consensus.rs#L32
Ideally we can make this configurable somehow - possibly based on mining_parameter_mode in grin.toml -
mining_parameter_mode = "UserTesting"
Proposed values -
COINBASE_MATURITY=3
COINBASE_MATURITY=3
COINBASE_MATURITY=1_000
We also need to make sure this is documented somewhere in the docs (and not just this and other issues).
Hi,
how to decrease difficulty of the genesis block for quick testnet testing?
Or any way to preseed with test genesis block instead of generating one?
Thanks
pinkvoid
Is there a consistent spacing style we should target for this project? It seems like macros have 2-space tabs, most of the code uses tabs, there is some 4-space tabs, and the dependencies use 4-space tabs (which is standard in the Rust community).
I'm also not comfortable with Option<Error>. The "silly" Result<(), Error> lets you write Ok(()), which looks funny but has the word "ok" in it. Receiving None and interpreting this as "ok" is confusing at first.
I don't want to step on any toes here, but it would be good to agree on these things and have them written down somewhere.
In this article: http://www.ibtimes.co.uk/mimblewimble-scriptless-scripts-magicking-blockchain-signatures-1626375
it seems that Andrew has said several times (as mentioned in the article) that Grin builds on Blockstream's Confidential Transactions (which it seems was invented by Gregory Maxwell and patented, even though the patent has not been granted yet).
See: https://www.google.com/patents/WO2016200885A1
Apache 2.0 license gives full perpetual free patent grants to all users. (http://en.swpat.org/wiki/Patent_clauses_in_software_licences#Apache_License_2.0)
Can you clarify this in Grin's license doc or readme?
Best regards and keep working on this incredible technology!
We have a test case (https://github.com/ignopeverell/grin/blob/f067e18644d1125159dfb3b48891ccdd7dddc282/grin/tests/simulnet.rs) that simulates 5 peers connecting to each other and one mining a block to check propagation. Expand this to multiple server instances mining in parallel at different rates and check the evolution of the difficulty target.
At the moment, the code related to proof of work, cuckoo itself, and mining are sprinkled throughout the code. The POW verification and internal cuckoo miner is in a subdirectory of core, the cuckoo-miner integration code is in grin/grin, mining the genesis block is called from chain and is stuck using the internal miner which can't use the cuckoo-miner code in grin/grin because it would lead to a circular (and unwanted) dependency... mining config is in grin/grin, and I've had to stick in a messy global into core so that all modules that use POW in any way can use consistent parameters.... and so forth... it's leading to a lot of inconsistency and cruft in the code.
To address these issues, I propose moving all of the POW/Cuckoo/Mining/Plugin-Integration and Validation code into a separate crate/module called 'pow', and refactoring all of the code that uses it to remain as POW agnostic as possible. This way, the genesis block can be modified and mined in the same way as all other blocks, and any future changes to POW are localised and encapsulated. I'll also be able to move the values in global.rs into the new pow module, cleaning that up somewhat.
I think the development of Grin is at a point where it could use a proper configuration file. I want to start adding parameters here and there while working on the mining integration, and the command line switches are getting a bit cumbersome.
I don't think this is much more than an evening's work, but before barging ahead I think it's worth a little bit of discussion as to what format it should take.
The main contenders I can see are:
- .ini
- YAML
- JSON
- TOML
None of these is really 'native' to Rust (like how it makes most sense to use JSON in a pure JavaScript project), with the exception of TOML. Ini files keep it simple, but can be a bit limited... YAML gives the most flexibility, but I personally don't like whitespace having meaning. JSON gives a good degree of flexibility, but it looks like code and you have to be mindful of parentheses and structure when making edits, which could be annoying for non-technical users.
Perhaps TOML might make sense as a good balance here, being more readable than JSON or YAML but more flexible than ini files?
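For a feel of what the TOML option might look like, a hypothetical grin.toml sketch; none of these section or key names are final, they're just illustrative:

```toml
# Hypothetical grin.toml - illustrative only.
[server]
api_http_addr = "127.0.0.1:13413"
db_root = ".grin"

[mining]
enable_mining = true
mining_parameter_mode = "UserTesting"
```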
From #54:
Another measure that all implementations have (Bitcoin Core, Ethereum, etc.) is banning. For example, a peer sending us a block with an invalid proof of work should immediately be disconnected and banned. There's a whole set of egregious behavior like this that can easily be detected. That's likely best implemented as a specific error type coming from the chain or pool implementations and fed back to the p2p module (through the adapters).
This involves:
Should include a little bit on Cuckoo Cycle but also how we use it (with another layer of hashing), the format in block headers and the difficulty algo.
Set up the first seed server that other peers will connect to first on startup.
Currently we have a couple of things on the table for p2p:
We need to triage and setup a roadmap for this list and more.
It has been deprecated: announcement.
Is there any functionality you would require from Serde or another library before this would be possible?
Keep transactions received from the network. Should occupy only a fixed amount of memory and evict based on time/fees. Eviction should also evict dependent transactions.
From #54:
For DoS protection the first rule should be rate limiting based. No peer should send us more than X MB per second and we shouldn't send more (assuming it was all requested) than Y MB per second to any peer either. We can start with X=200 and Y=100. That should be measurable at the protocol or connection level easily.
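Such rate limiting is commonly done with a token bucket per peer (one bucket for inbound at X, one for outbound at Y). A sketch, with time passed in explicitly to keep it testable; none of this mirrors actual grin p2p code:

```rust
/// Token bucket: a peer may burst up to `capacity` bytes, refilled at
/// `rate` bytes per second.
pub struct TokenBucket {
    capacity: u64,     // max burst, in bytes
    rate: u64,         // refill rate, bytes per second
    tokens: u64,
    last_refill_s: u64,
}

impl TokenBucket {
    pub fn new(capacity: u64, rate: u64, now_s: u64) -> TokenBucket {
        TokenBucket { capacity, rate, tokens: capacity, last_refill_s: now_s }
    }

    /// Try to account `bytes` received at time `now_s`; false means the
    /// peer is over its limit and should be throttled (or eventually
    /// banned if it keeps pushing).
    pub fn allow(&mut self, bytes: u64, now_s: u64) -> bool {
        let elapsed = now_s.saturating_sub(self.last_refill_s);
        self.tokens = (self.tokens + elapsed * self.rate).min(self.capacity);
        self.last_refill_s = now_s;
        if bytes <= self.tokens {
            self.tokens -= bytes;
            true
        } else {
            false
        }
    }
}
```

Measuring at the connection level and feeding violations back through the adapters (as with the banning proposal in #54) would keep the chain/pool code unaware of transport details.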
A couple of things that I missed previously on the pool implementation from @MoaningMyrtle should be updated:
@MoaningMyrtle let me know if that makes sense. Thanks!