
Solana


Building

1. Install rustc, cargo and rustfmt.

$ curl https://sh.rustup.rs -sSf | sh
$ source $HOME/.cargo/env
$ rustup component add rustfmt

When building the master branch, please make sure you are using the latest stable rust version by running:

$ rustup update

When building a specific release branch, you should check the rust version in ci/rust-version.sh and, if necessary, install that version by running:

$ rustup install VERSION

Note that if this is not the latest rust version on your machine, cargo commands may require an override in order to use the correct version.

On Linux systems you may need to install additional packages such as libssl-dev, pkg-config, zlib1g-dev, and protobuf.

On Ubuntu:

$ sudo apt-get update
$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang cmake make libprotobuf-dev protobuf-compiler

On Fedora:

$ sudo dnf install openssl-devel systemd-devel pkg-config zlib-devel llvm clang cmake make protobuf-devel protobuf-compiler perl-core

2. Download the source code.

$ git clone https://github.com/solana-labs/solana.git
$ cd solana

3. Build.

$ ./cargo build

Testing

Run the test suite:

$ ./cargo test

Starting a local testnet

To start your own testnet locally, follow the instructions in the online docs.

Accessing the remote development cluster

  • devnet - stable public cluster for development accessible via devnet.solana.com. Runs 24/7. Learn more about the public clusters

Benchmarking

First, install the nightly build of rustc. cargo bench requires unstable features that are only available in the nightly build.

$ rustup install nightly

Run the benchmarks:

$ cargo +nightly bench

Release Process

The release process for this project is described here.

Code coverage

To generate code coverage statistics:

$ scripts/coverage.sh
$ open target/cov/lcov-local/index.html

Why coverage? While most see coverage as a code quality metric, we see it primarily as a developer productivity metric. When a developer makes a change to the codebase, presumably it's a solution to some problem. Our unit-test suite is how we encode the set of problems the codebase solves. Running the test suite should indicate that your change didn't infringe on anyone else's solutions. Adding a test protects your solution from future changes.

If you don't understand why a line of code exists, try deleting it and running the unit tests. The nearest test failure should tell you what problem that code solved. If no test fails, go ahead and submit a Pull Request that asks, "what problem is solved by this code?" On the other hand, if a test does fail and you can think of a better way to solve the same problem, a Pull Request with your solution would most certainly be welcome! Likewise, if rewriting a test can better communicate what code it's protecting, please send us that patch!

Disclaimer

All claims, content, designs, algorithms, estimates, roadmaps, specifications, and performance measurements described in this project are done with the Solana Labs, Inc. (“SL”) good faith efforts. It is up to the reader to check and validate their accuracy and truthfulness. Furthermore, nothing in this project constitutes a solicitation for investment.

Any content produced by SL or developer resources that SL provides are for educational and inspirational purposes only. SL does not encourage, induce or sanction the deployment, integration or use of any such applications (including the code comprising the Solana blockchain protocol) in violation of applicable laws or regulations and hereby prohibits any such deployment, integration or use. This includes the use of any such applications by the reader (a) in violation of export control or sanctions laws of the United States or any other applicable jurisdiction, (b) if the reader is located in or ordinarily resident in a country or territory subject to comprehensive sanctions administered by the U.S. Office of Foreign Assets Control (OFAC), or (c) if the reader is or is working on behalf of a Specially Designated National (SDN) or a person subject to similar blocking or denied party prohibitions.

The reader should be aware that U.S. export control and sanctions laws prohibit U.S. persons (and other persons that are subject to such laws) from transacting with persons in certain countries and territories or that are on the SDN list. Accordingly, there is a risk to individuals that other persons using any of the code contained in this repo, or a derivation thereof, may be sanctioned persons and that transactions with such persons would be a violation of U.S. export controls and sanctions law.


Issues

broadcast network layer

  • window for blobs #98
  • send the consecutive blocks to be executed
  • retransmit the block you are responsible for (merged in #129)
  • erasure codes (might land in a separate PR); @sakridge is working on it
  • ask for missing packets (merged in #218)
  • subscriber structure (merged in #129)

Move skel's threads to new crates

The skel module was supposed to be a tiny shim to support the stub, but it turned into a catch-all for anything server-side that doesn't fit in the accountant module. We now understand that what's in there is a 5-stage transaction processing unit. We can move it all to a new module named tpu.rs or pipeline.rs. If that's still looking like too much code for one module, we can create a separate module for each stage, as we already did with the Historian.

Add Timestamp event

If a transaction has a time constraint, the historian needs to log the instant in time just after it. When the accountant sees a Timestamp event, it should check to see if it can continue processing any time-constrained transactions.

Add log subscriptions to historian for thick clients

Thick clients and verifiers need access to the full log. Currently, the accountant receives new log entries by listening to a sync_channel held by the historian. Those entries are not serialized and do not cross the network.

This ticket might be implemented by adding a historian_stub and a corresponding historian_skel that listens on the internal channel. It's also possible we might want a generic "switchboard operator" that simply maps network protocols to our deserialized, well-typed channels.

Smart contracts design

Some thoughts on the current state of the smart contract language. We currently have a minimal language suitable for little more than today's needs. As we move towards supporting atomic swaps, we'll need to choose whether to incrementally extend this language or make a bigger jump to a more general solution. Note that this decision is independent of the language's interpreter targeting BPF, as described in the Loom whitepaper. Below is a walk down the path of incrementally extending today's contract language into a general purpose one:

Here's an example spending plan we can write today:

Or((Timestamp(dt), Payment(tokens, to)), (Signature(sig), Payment(tokens, from)))

where the parameters to Payment are of type i64 and PublicKey and Or is of type Plan. In Haskell DSLs, Plan would traditionally be called Expr and Or is the unbiased choice operator sometimes denoted with Alternative's <|>. Likewise, instead of the Condition/Payment tuple, Haskellers would probably use Applicative's *> to say "Timestamp then Payment". In Haskell, we'd look to express the same spending plan as:

Timestamp dt *> Payment tokens to <|> Signature sig *> Payment tokens from

where each function returns an Expr, or maybe a nicely typed Expr GADT. In this form, we see the duplication of Payment tokens, and in fact, can rewrite the same expression as:

Payment tokens <$> (Timestamp dt *> pure to <|> Signature sig *> pure from)

Translated back to Rust, we'd want something like:

Payment(Lit(tokens), Or((Timestamp(dt), Lit(to)), (Signature(sig), Lit(from))))

But unlike Haskell, Rust isn't going to heap-allocate each of those nodes, so it'd actually be more like:

Payment(Box::new(Lit(tokens)), Box::new(Or((Box::new(Timestamp(dt)), Box::new(Lit(to))), (Box::new(Signature(sig)), Box::new(Lit(from))))))

Next, we can remove all those Boxes with a top-level [Plan] instead of a Plan, and replace every Box<Plan> with a &Plan pointing to a slot in the top-level array. Effectively, we'd be translating our heap-allocated recursive expression into a stack of expressions. We'd end up with:

let dt = Timestamp(dt);
let to = Lit(to);
let sig = Signature(sig);
let from = Lit(from);
let or = Or((&dt, &to), (&sig, &from));
let tokens = Lit(tokens);
let payment = Payment(&tokens, &or);
[dt, to, sig, from, or, tokens, payment]

That last line looks so similar to Reverse Polish Notation, it begs the question, what would that look like? Maybe:

(Signature(sig), Lit(from)) (Timestamp(dt), Lit(to)) Or Lit(tokens) Payment

tldr; The walk above shows us that incrementally taking our existing contract language to a general-purpose one takes us down a long, deep rabbit hole. Instead, we'll want to choose between:

  1. a speedy little stack-oriented backend like Bitcoin's
  2. a bulkier, but more expressive register-oriented backend like IELE
  3. postponing the decision and incrementally extending the existing language for atomic swaps

I'm leaning toward option 3 today and then moving toward option 2 after we demonstrate our minimal, custom language meets our performance targets.

Add Authorize event

As an alternative to the self-executing Timestamp event, put a constraint on an Authorize event. It should be implemented the same as a Cancel event, but is a spend constraint.

This is similar in spirit to N-of-N multisig (not as general as M-of-N multisig).

Docs, docs, docs

Do you understand how the codebase works? No? One of the hardest parts of maintaining a codebase is communicating what it does to those that haven't joined us for the ride. Anything that seems unclear, ask about it on our Telegram channel, update the docs, and submit a Pull Request.

Parallel transaction processing design

  • Should we do L0 verification of blocks without context?
    • We can calculate deltas
    • Some event data doesn’t coalesce well (i.e. Witness events)
      • But two timestamps do.
  • Should events within a single entry be processed in parallel?
    • How to identify when order affects output?
      • Example: Race(PayAfter(date, to), CancelOnSig(from))
      • Applying a credit first could allow a debit to proceed
    • For conflict resolution, should we look to event order within the entry?
    • When leader creates the entry, should it order ambiguous events or discard one? Discard all?
  • Should events across entries (all events in one block) be processed in parallel?
    • For conflict-resolution, entry order within the block unambiguously determines order.

Move accountant towards atomic balances

Below is some awkward code that requires the accountant balance to be locked for multiple steps. If instead we added a new forget_signature_with_last_id() before returning InsufficientFunds, then we can reserve the signature first (before checking the balance), which opens the door for replacing the bal RwLock with an Atomic.

        if *bal < tr.data.tokens {
            return Err(AccountingError::InsufficientFunds);
        }

        if !self.reserve_signature_with_last_id(&tr.sig, &tr.data.last_id) {
            return Err(AccountingError::InvalidTransferSignature);
        }

        *bal -= tr.data.tokens;

Allow negative balances

Some interesting possibilities when balances are allowed to go negative:

  • Loans
  • Fast parallel verification of the ledger (divide and conquer and no need for bigint)

Downsides:

  • Will frequently need to check for positive numbers throughout the codebase
  • Max tokens cut in half

Seems the pros far outweigh the cons. Ears open if others would care to comment.

Save event log to a file

Add a method to the historian to output the event log to a JSON file. Use serde_json (https://github.com/serde-rs/json). Note that JSON doesn't allow trailing commas, so tack on a bogus Tick to the end of the file before closing.

[
"transaction",
"transaction",
"tick"
]

Add support for Lua smart contracts

  • One Lua stack per contract
  • Use coroutines to move forward across multiple events

Here's the AST for a smart contract that waits for either a timestamp or an abort signature:

Or((Timestamp(some_date), Payment(tokens, to)), (Signature(from), Payment(tokens, from)))

In Lua, that might look like this:

    while true do
       local event = bank:wait_for_event()
       if isinstance(event, Signature) and event.signature == from then
          bank:transfer(tokens, from)
       elseif isinstance(event, Timestamp) and event.date > some_date then
          bank:transfer(tokens, "0ddba11")
       end
    end

Add cancellation constraints to Transaction event

Like spend constraints, the accountant should track cancellation constraints. The only difference is what party receives the funds when the constraint is satisfied. Instead of depositing the funds in the to account, they should be added back to the from account.

Some examples of cancellation constraints might be an expiration time or an explicit Cancel event containing a signature that matches the signature in the transaction.

Parse and Print PoH

Loom uses repr(C) structs and unions and does unsafe operations to get them into Rust. Instead, use all safe Rust. Compare and contrast.

panic cleanup

We're too cavalier with calls to expect() and unwrap(). We're using them safely for the RwLocks, but most others should be acknowledged or added to the return values.

Test-drive atomics in balances

Here's the hard part:

    /// Deduct tokens from the 'from' address if the account has sufficient
    /// funds and isn't a duplicate.
    pub fn process_verified_transaction_debits(&self, tr: &Transaction) -> Result<()> {
        let bals = self.balances.read().unwrap();

        // Hold a write lock before the condition check, so that a debit can't occur
        // between checking the balance and the withdraw.
        let option = bals.get(&tr.from);
        if option.is_none() {
            return Err(AccountingError::AccountNotFound);
        }
        let mut bal = option.unwrap().write().unwrap();

        if *bal < tr.data.tokens {
            return Err(AccountingError::InsufficientFunds);
        }

        if !self.reserve_signature_with_last_id(&tr.sig, &tr.data.last_id) {
            return Err(AccountingError::InvalidTransferSignature);
        }

        *bal -= tr.data.tokens;

        Ok(())
    }

We currently hold a lock on the balance while we check the funds and reserve the signature. If we switched to lock-free, we'd need a way to remove that signature if a compare_and_swap() returned an insufficient balance.

Report parse errors to stderr

The executables all panic if they fail to parse stdin. Instead, report the parser errors on stderr and exit cleanly.

Add back support for fractional tokens

We had fractional tokens in earlier versions, but pulled the feature because we couldn't settle on an encoding. It's also not clear what that should look like in the context of atomics. Time to revisit.

Ensure parallel transaction state is reproducible

Currently there's nothing preventing a validator from taking 2 batches of transactions and processing them in parallel. It's possible the debits from the 2nd batch would underflow an account before the credits from the first batch are applied. If that happened, the validator would incorrectly vote a valid block as invalid, which wouldn't affect anyone's account, but would cause unnecessary rollback and could slash the leader's bond without just cause.

We likely just need to push a Tick to the historian right after processing the parallel batch of transactions, since transactions across tick boundaries are guaranteed to be sequentially consistent.
https://github.com/solana-labs/solana/blob/master/src/accountant_skel.rs#L180

This change may affect the performance of other areas if it's parallelizing over the number of entries instead of over the number of cumulative hashes of those entries.

Allow reordering between Ticks

We can probably improve performance by allowing transactions to be reordered between Tick events. Since the generator pulls in packets in batches, it already has the power to reorder messages before hashing them. There's probably no good reason to artificially add that serialization.

Add Cancel event

A Cancel event should contain the signature of the event to be canceled. That signature acts only as an ID. It will also need a second signature that can be verified using the PublicKey in the corresponding transaction constraint.

Add spend constraints to Transaction event

Transactions can be constrained by future events. When present, the accountant should immediately withdraw funds from the from party and store any unsatisfied constraints locally. As events arrive, the accountant should check if its data satisfies a constraint, and if so, remove it from its list. Once all constraints are satisfied, the funds should be added to the to party.

Sort out feature dependencies

digraph g {
  "Someone did that" -> "hash0"
  "Someone did that after that" -> "Proof of Anonymous History"
  "Proof of Anonymous History" -> "hash(hash0)"
  "hash(hash0)" -> "hash0"
  "I did that after that" -> "Proof of History"
  "Proof of History" -> "sign(hash(hash0))"
  "sign(hash(hash0))" -> "hash(hash0)"
  "I agree to that" -> "Smart Contracts"
  "Smart Contracts" -> "sign(sign(hash(hash0)))"
  "sign(sign(hash(hash0)))" -> "sign(hash(hash0))"
  "I confirmed they agreed to that" -> "Proof of Stake"
  "Proof of Stake" -> "sign(sign(sign(hash(hash0))))"
  "sign(sign(sign(hash(hash0))))" -> "sign(sign(hash(hash0)))"
  "We agree they agreed to that" -> "Consensus"
  "Consensus" -> "average(sign(sign(sign(hash(hash0))))) >= 2/3"
  "average(sign(sign(sign(hash(hash0))))) >= 2/3" -> "sign(sign(sign(hash(hash0))))"
  "I confirmed they agreed that they agreed to that" -> "Proof of Replication"
  "Proof of Replication" -> "average(sign(sign(sign(hash(hash0))))) >= 2/3"
}

Batch log verification by tick

verify_slice currently zips together entries with adjacent ones and that pair is verified to be consistent in parallel. When the number of Transactions between two Tick events is high, those threads will complete long before any threads with no Transactions. Effectively, we're wasting threads. It should be one thread per tick, not one thread per event.

Add rollback support to accountant

First some background. The accountant currently maintains two hash tables, a map of public keys to balances, and a map of pending transaction IDs (signatures) to spending plans:

pub struct Accountant {
    balances: HashMap<PublicKey, i64>,
    pending: HashMap<Signature, Plan>,
}

When a new transaction enters the system, here's what happens:

  1. The from balance is updated (balances[txn.from] -= txn.tokens)
  2. The transaction is placed in the pending map and will wait for any witnesses its spending plan requires.

Once it has the needed witnesses and the spending plan is reduced to a Payment, it is removed from the map and balances is updated (balances[payment.to] += payment.tokens).

The problem here is that we're making updates to both balances and pending before the transaction has been finalized. Instead, the Accountant state should be broken up into at least two pieces, a finalized state and at least one unverified block state. Note this is the first time we've used the term block. It means, "a set of transactions that are verified together." Also, we say "at least" so that the leader can optimistically start work on a second unverified block while the first has been submitted for verification. In fact, depending on how long verification takes, it's not unreasonable to imagine the Accountant maintaining a long queue of unverified blocks.

Once a block has been verified by at least 2/3 of the verifiers, the block state should be merged into the finalized state. We say the block state is coalesced. The reason to coalesce is to save memory and redundant computation, and it can only be done after the block has been finalized.

So what does it mean to support rollback? At a high-level, it means the system state rolls back to an earlier point in time. In this architecture, it means that unverified blocks and associated state are simply discarded and never coalesced.

Report serde parse errors to stderr

The executables (solana-mint, solana-genesis, solana-testnode, solana-client-demo) all panic if serde fails to parse the JSON on stdin. Instead, report the parser errors on stderr and exit cleanly.

Wrap RPC return values in futures

The accountant_stub functions all block. That's fine for the moment, but we'll want to make those all async calls very soon. A stepping stone to getting there is to wrap each return value in a future, and then update the callers (mostly in client-demo.rs) to use them.

See: https://crates.io/crates/futures

Note that the tokio framework reexports the same future library, so if we use that as our async runtime, it'll be a painless transition.

Get rid of user keys in MintDemo

Instead of storing those million keys in the JSON file, add a way to deterministically generate them from the Mint's seed key.

Document Historian

Update the README with an explanation of the Historian and how to use it.

Add client's latest end_hash to transaction signature

Limit the forks that are able to process a transaction by signing a fairly recent hash from the generator to each new transaction.

This will also enable expiration times offset from this hash. A malicious generator could then only reorder transactions within that window.
