
NEAR Protocol Specifications and Standards


This repository hosts the current NEAR Protocol specification and standards. This includes the core protocol specification, APIs, contract standards, processes, and workflows.

Changes to the protocol specification and standards are called NEAR Enhancement Proposals (NEPs).

NEPs

NEP # Title Author Status
0001 NEP Purpose and Guidelines @jlogelin Living
0021 Fungible Token Standard (Deprecated) @evgenykuzyakov Deprecated
0141 Fungible Token Standard @evgenykuzyakov @oysterpack @robert-zaremba Final
0145 Storage Management @evgenykuzyakov Final
0148 Fungible Token Metadata @robert-zaremba @evgenykuzyakov @oysterpack Final
0171 Non Fungible Token Standard @mikedotexe @evgenykuzyakov @oysterpack Final
0177 Non Fungible Token Metadata @chadoh @mikedotexe Final
0178 Non Fungible Token Approval Management @chadoh @thor314 Final
0181 Non Fungible Token Enumeration @chadoh @thor314 Final
0199 Non Fungible Token Royalties and Payouts @thor314 @mattlockyer Final
0245 Multi Token Standard @zcstarr @riqi @jriemann @marcos.sun Review
0264 Promise Gas Weights @austinabell Final
0297 Events Standard @telezhnaya Final
0330 Source Metadata @BenKurrek Review
0366 Meta Transactions @ilblackdragon @e-uleyskiy @fadeevab Final
0393 Soul Bound Token (SBT) @robert-zaremba Final
0399 Flat Storage @Longarithm @mzhangmzz Final
0448 Zero-balance Accounts @bowenwang1996 Final
0452 Linkdrop Standard @benkurrek @miyachi Final
0455 Parameter Compute Costs @akashin @jakmeier Final
0514 Fewer Block Producer Seats in testnet @nikurt Final

Specification

The NEAR Specification is under active development. It defines how any NEAR client should connect to the network, produce blocks, reach consensus, process state transitions, use runtime APIs, and implement smart contract standards.

Standards & Processes

Standards refer to the common interfaces and APIs used by smart contract developers on top of the NEAR Protocol. Examples include the SDK for Rust, the API for fungible tokens, and how to manage a user's social graph.

Processes include the release process for the spec and clients, and how standards are updated.

Contributing

Expectations

Ideas presented ultimately as NEPs need to be driven by the author through the process. It's an exciting opportunity that comes with a fair amount of responsibility for the contributor(s), so please put care into the details. NEPs that do not present convincing motivation, do not demonstrate understanding of the design's impact, or are disingenuous about drawbacks and alternatives tend to be poorly received. By the time a NEP makes it to the pull request, it should have a clear plan and path forward based on the discussions in the governance forum.

Process

Spec changes are ultimately done via pull requests to this repository (formalized process here). In an effort to keep the pull request clean and readable, please follow these instructions to flesh out an idea.

  1. Sign up for the governance site and make a post to the appropriate section. For instance, during the ideation phase of a standard, one might start a new conversation in the Development » Standards section or the NEP Discussions Forum.

  2. The forum has comment threading which allows the community and NEAR Collective to ideate, ask questions, wrestle with approaches, etc. If more immediate responses are desired, consider bringing the conversation to Zulip.

  3. When the governance conversations have reached a point where a clear plan is evident, create a pull request, using the instructions below.

    • Clone this repository and create a feature branch, e.g. my-feature.
    • Update any content in the current specification that is affected by the proposal.
    • Create a pull request, using nep-0000-template.md to describe the motivation and details of the new contract or protocol specification. In the document header, ensure the Status is marked as Draft and any relevant discussion links are added to the DiscussionsTo section. Name the file after the pull request number padded with zeroes; for instance, pull request 219 should be created as neps/nep-0219.md.
    • Add your Draft standard to the NEPs section of this README.md. This helps advertise your standard via GitHub.
    • Update the Docusaurus documentation under specs/Standards to describe the contract standard at a high level, how to integrate it into a dApp, and a link to the standard document (i.e. neps/nep-0123.md). This helps advertise your standard via nomicon. Any related nomicon sections should be prefixed and styled using the following snippet:
    :::caution
    This is part of proposed spec [NEP-123](https://github.com/near/NEPs/blob/master/neps/nep-0123.md) and subject to change.
    :::
    
    • Once complete, submit the pull request for editor review.

    • The formalization dance begins:

      • NEP Editors, who are unopinionated shepherds of the process, check document formatting, completeness, and adherence to NEP-0001, and approve the pull request.
      • Once ready, the author updates the NEP status to Review, allowing further community participation to address any gaps or clarifications, normally as part of the Review PR.
      • NEP Editors mark the NEP as Last Call, allowing a 14-day grace period for any final community feedback. Any unresolved showstoppers roll the state back to Review.
      • NEP Editors mark the NEP as Final, marking the standard as complete. The standard should only be updated to correct errata and add non-normative clarifications.

Tip: build consensus and integrate feedback. NEPs that have broad support are much more likely to make progress than those that don't receive any comments. Feel free to reach out to the NEP assignee in particular for help identifying stakeholders and obstacles.

Running Docusaurus

This repository uses Docusaurus for the Nomicon website.

  1. Move into the /website folder where you will run the following commands:

    • Make sure all the dependencies for the website are installed:

      # Install dependencies
      yarn
    • Run the local docs development server

      # Start the site
      yarn start

      Expected Output

      # Website with live reload is started
      Docusaurus server started on port 3000

      The docs website will open in your browser locally at port 3000

  2. Make changes to the docs

  3. Observe those changes reflected in the local docs

  4. Submit a pull request with your changes


Issues

Contract methods naming guideline

We need to define a naming guideline for contract methods and parameters.
Specifically, right now we use snake case in Rust-based contracts and camel case in AssemblyScript.

Worth noting that Solidity uses camel case, because method names get surfaced in JavaScript.
In our case, we currently use snake case in JavaScript.

We need to decide on one convention and codify it in the meta-standard.
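
To make the mismatch concrete, here is a quick illustration (a hypothetical helper, in Python) of the conversion a JS frontend would otherwise have to do:

def snake_to_camel(name: str) -> str:
    """Convert a Rust-style method name to the camelCase a JS frontend expects."""
    head, *tail = name.split("_")
    return head + "".join(part.capitalize() for part in tail)

assert snake_to_camel("transfer_from") == "transferFrom"  # Rust -> JS/Solidity style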

Guideline: Runtime protocol changes

Motivation

This doc explains the guidelines for protocol changes in the Runtime: which changes should be included as protocol changes and which shouldn't. Please add to and comment on the guidelines. We can later generalize this doc from Runtime-specific to NEAR-wide.

Protocol changes guidelines and requirements

These requirements are mostly for new protocol features. Bug fixes and corrections should be evaluated differently.

The change cannot be implemented with the existing API

If the change can be implemented with the existing API, it shouldn't be part of the protocol.

Examples:

  • Staking delegation was implemented through a smart contract instead of the core protocol, because it was possible to do so.
  • keccak256 and keccak512 were possible to implement at the contract level, but the gas metering made every call prohibitively expensive, so they couldn't be used in the intended merkle-path proof application. That's why they were moved to native support within the protocol.

The change should solve general use-cases

The change should not be limited in functionality. When looking at the proposed change, always consider whether the problem can be solved with a more generic solution.

For example, a change to support token transfers can be generalized to generic resource transfers, which can be generalized to Safes with resolutions (#26). But even Safes are not generic enough, because they don't allow claiming the underlying resource and can only be dropped or transferred back to the owner's contract. (That's why we're working on a more generic sharded version.)

The change is critical for the success of the network

Protocol changes are not free. They require a network upgrade and come with the debt of future maintenance. Some changes solve general use cases and can't be done with the current API, yet may still not be critical enough to have available.

Examples:

  • Private transactions. They are not critical for the success of the network right now, but they are becoming a feature of most modern blockchains that support monetary transfers. While it's arguable whether we should include them at the protocol level, it makes sense to support the underlying native math at the protocol level, similar to hashes, because it's impossible to implement at the contract level with our current gas metering infrastructure.
  • Safes cannot be implemented without protocol changes, because a safe is required to be atomic and needs to have a callback on resolution. Safes also solve a generic enough use case, such as an auto-unlock mechanism. But Safes are not critical enough right now, because the majority of token and exchange operations can be implemented with the existing API.

The change has a high priority right now

Our engineering resources are limited and every change requires careful review. That's why a change has to be high priority in order to be implemented and included. This item is somewhat similar to the previous one, but reflects time urgency more than long-term importance.

Examples:

  • For example, we know that we need to support common contract code (#93), since accounts are required to pay for the storage of the contract, and some contracts might need to be deployed on a lot of accounts, e.g. to support air-drops. But there are changes with higher priority right now, so this one can wait.

runtime VM limits

taken from docs (near/docs#242)


The NEAR blockchain executes smart contracts as Wasm binaries inside a custom virtual machine -- this environment is known as the NEAR Runtime.

To guarantee a minimum acceptable level of performance and reliability of these contracts, the NEAR Runtime imposes limits on the resources that contracts can consume (e.g. size of the deployed contract, number of operations performed, amount of data manipulated in memory).

These limits are defined in the file nearcore/runtime/near-vm-logic/src/config.rs and included here for convenience.

Runtime Limit Configuration

see here for related implementation in nearcore.

Limits for VM and Runtime

field type purpose
max_gas_burnt Gas Max amount of gas that can be used, excluding gas attached to promises.
max_gas_burnt_view Gas Max burnt gas per view method.
-----
max_stack_height u32 How tall the stack is allowed to grow. See https://wiki.parity.io/WebAssembly-StackHeight for how the stack frame cost is calculated.
-----
initial_memory_pages u32 The initial number of memory pages. Not a limit itself, but the amount of memory a contract starts with.
max_memory_pages u32 Max number of memory pages a contract is allowed to have.
-----
registers_memory_limit u64 Limit on the total memory used by registers.
max_register_size u64 Maximum number of bytes that can be stored in a single register.
max_number_registers u64 Maximum number of registers that can be used simultaneously.
-----
max_number_logs u64 Maximum number of log entries.
max_total_log_length u64 Maximum total length in bytes of all log messages.
-----
max_total_prepaid_gas Gas Max total prepaid gas for all function call actions per receipt.
-----
max_actions_per_receipt u64 Max number of actions per receipt.
max_number_bytes_method_names u64 Max total length of all method names (including terminating character) for a function call permission access key.
max_length_method_name u64 Max length of any method name (without terminating character).
max_arguments_length u64 Max length of arguments in a function call action.
max_length_returned_data u64 Max length of returned data.
max_contract_size u64 Max contract size.
max_length_storage_key u64 Max storage key size.
max_length_storage_value u64 Max storage value size.
max_promises_per_function_call_action u64 Max number of promises that a function call can create.
max_number_input_data_dependencies u64 Max number of input data dependencies.

Default Runtime Limits

The default limits can be overridden in the NEAR Genesis configuration.

The code below was taken from here

field default comment
max_gas_burnt 2 * 10u64.pow(14) With a 10**15 block gas limit this allows 5 calls.
max_gas_burnt_view 2 * 10u64.pow(14) Same as max_gas_burnt for now.
max_stack_height 16 * 1024 16 KiB of stack. Has to be 16K, otherwise Wasmer produces non-deterministic results; for experimentation try test_stack_overflow.
initial_memory_pages 2u32.pow(10) 64 MiB of memory.
max_memory_pages 2u32.pow(11) 128 MiB of memory.
registers_memory_limit 2u64.pow(30) By default registers are limited to 1 GiB of memory in total.
max_register_size 2u64.pow(20) * 100 By default each register is limited to 100 MiB.
max_number_registers 100 By default at most 100 registers can be used simultaneously.
max_number_logs 100
max_total_log_length 16 * 1024 Total log size is 16 KiB.
max_total_prepaid_gas 10 * 10u64.pow(15) Fills 10 blocks; defines how long a single receipt might live.
max_actions_per_receipt 100 Safety limit; unlikely to be hit by the most common transactions and receipts.
max_number_bytes_method_names 2000 Should be low enough to deserialize an access key without paying.
max_length_method_name 256 Basic safety limit.
max_arguments_length 4 * 2u64.pow(20) 4 MiB.
max_length_returned_data 4 * 2u64.pow(20) 4 MiB.
max_contract_size 4 * 2u64.pow(20) 4 MiB.
max_length_storage_key 4 * 2u64.pow(20) 4 MiB.
max_length_storage_value 4 * 2u64.pow(20) 4 MiB.
max_promises_per_function_call_action 1024 Safety limit; unlikely to be abusable.
max_number_input_data_dependencies 128 Unlikely to be hit in normal development.
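
A quick sanity check of the arithmetic behind those defaults (reader's verification in Python, not part of the config file):

PAGE_SIZE = 64 * 1024                      # one Wasm memory page is 64 KiB

assert 10**15 // (2 * 10**14) == 5         # max_gas_burnt: 5 calls per 10**15 block gas limit
assert 2**10 * PAGE_SIZE == 64 * 2**20     # initial_memory_pages -> 64 MiB
assert 2**11 * PAGE_SIZE == 128 * 2**20    # max_memory_pages -> 128 MiB
assert 2**20 * 100 == 100 * 2**20          # max_register_size -> 100 MiB per register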

Update Meta Standard to include snake case method names as a standard

We need to define which naming convention standards are supposed to follow:

  • Snake case for Rust
  • Camel case for AS/TS/JS & Ethereum

Specifically, it's important because the frontend is mostly in JS, where the expectation is that all call arguments will be in camel case, while Rust enforces snake case.

I think we can lean toward camel case to maintain compatibility with frontend and Ethereum standards, and add whatever flags are required in Rust to enforce that instead of snake case. This, though, requires among other things adjusting our Fungible Token standard.

Thoughts?

Tx validity interval instead of block_hash+global validity length

Currently, to defend against:

  • tx replay attacks from alternative chains (testnet, a fork of mainnet),
  • delay attacks by validators, where txs are hoarded and then released in one swoop,

we require a transaction to specify the block_hash of some block on the chain it needs to be applied on. We also have a global validity length, which specifies the maximum number of blocks after the given block_hash within which this transaction can be included.

Algorand uses a different method: a transaction specifies an interval of block heights within which it is considered valid.

Pros:

  • Actually fewer bytes in the tx.
  • More flexibility for transaction construction, as it provides people with both a lower bound (to make sure the tx doesn't land too early) and an upper bound (against tx hoarding).

Cons:

  • Doesn't give very good protection against txs from alternative chains and forks, as one can still replay a testnet tx issued during the same block height interval, or a tx from a mainnet fork that happened prior to the tx validity window.
  • Anything else?

Also, for alternative chains, we can leverage epoch_id or a few other alternatives to identify the chain.
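
For reference, a minimal sketch of the Algorand-style interval check, with hypothetical field names:

from dataclasses import dataclass

@dataclass
class Tx:
    first_valid: int   # lower bound: tx must not land too early
    last_valid: int    # upper bound: protects against hoarding

def is_valid_at(tx: Tx, block_height: int) -> bool:
    """Interval check replacing block_hash + global validity length."""
    return tx.first_valid <= block_height <= tx.last_valid

assert is_valid_at(Tx(100, 200), 150)
assert not is_valid_at(Tx(100, 200), 250)  # hoarded too long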

[Proposal] Differentiate cold and hot contracts

Contract compilation is expensive. We have introduced caching for compilation, but unfortunately we currently cannot have different fees for contracts that are in the cache versus those that are not. This means that contract calls are priced based on the worst-case scenario -- when every call leads to a compilation. We cannot predict whether a contract will be compiled, because different nodes that implement the protocol can have different cache settings. However, we can enforce it:

  • We can require that each node, for each shard, keeps track of the top-200 invoked contracts (by recency) in a 1-epoch-long moving window;
  • Contracts in the top-200 list are considered "hot", and when they are invoked we do not apply the contract_compile_base and contract_compile_bytes fees (https://github.com/nearprotocol/nearcore/blob/master/neard/res/genesis_config.json#L137), instead requiring the node to store the compiled contract in cache.

We would need to introduce 2 parameters for the runtime config:

  • The size of the hotness list;
  • The size of the moving window.

We would need to store the list of the 200 hottest contracts in the trie, the way we store delayed receipts. Note, they don't need to be ordered; we just need to store the entries: (code hash, number of times the code was called in the moving window).
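
A rough sketch of the recency tracking (illustrative only; the epoch-long moving window expiry is omitted, and the real structure would live in the trie):

from collections import OrderedDict

HOT_LIST_SIZE = 200  # first of the two proposed runtime config parameters

class HotContracts:
    """Track the most recently invoked contracts for one shard."""
    def __init__(self):
        self._entries = OrderedDict()  # code_hash -> calls in the moving window

    def record_call(self, code_hash: str) -> None:
        self._entries[code_hash] = self._entries.get(code_hash, 0) + 1
        self._entries.move_to_end(code_hash)       # mark as most recently invoked
        while len(self._entries) > HOT_LIST_SIZE:
            self._entries.popitem(last=False)      # evict the least recent entry

    def is_hot(self, code_hash: str) -> bool:
        """Hot contracts skip the contract_compile_* fees."""
        return code_hash in self._entries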

[discussion] Remove Nightshade Finality Gadget

Context: there's a fundamental issue between Doomslug and NFG that we discovered recently. Specifically, it could be that a block at height h is produced, and some participants have already sent endorsements for it, and then a block at height less than h is produced with a higher NFG score and becomes the head. The chain stalls because no block producer can send a skip message that will bypass h.

The solution to it is to make Doomslug aware of the score, and to change the conditions for endorsements and skips to accommodate it. However, since both the safety and liveness proofs of Doomslug rely on head height never getting lower, ensuring that safety and liveness still hold after introducing this change is not trivial.

I instead suggest completely removing NFG, and changing Doomslug so that it:
a) becomes a BFT consensus;
b) loses its ability to provide some weaker sense of finality at 50%.

The full argument is here: https://docs.google.com/document/d/10uBwpEN3ADDkL9iY52K0zM1edgxmoO8hJBqxmDafFN0/

Improve number of seats calculation

Currently we have 100 seats for block producers. The 100 comes from the performance of the consensus, not from economic parameters; we are targeting 100 unique block producers.

But currently the seats are filled via economics: e.g., if some validator has 10x the stake of others, they get 10x the seats. This means the total number of unique block producers will be lower.

Given that we want to increase decentralization by increasing the number of unique block producers, as well as maximize the amount at stake, we have discussed alternatives.

An alternative was suggested by @SkidanovAlex: determine the number of seats dynamically, in such a way as to get 100 unique block producers:

  • change the threshold-finding routine to find the 100th-smallest stake, and determine the number of seats based on that.

The concerns voiced previously:

  • Large validators can split their stake to push away smaller validators. It's unclear why they would: the economic incentive is small (we pay proportionally to stake, so if a validator has 10x the stake and pushes out 10 small-stake validators, they only get a little more in rewards). This reduces decentralization, though, so they may get punished for it by their delegators. This is also no worse than the current state.
  • If the 100th validator has 1/1,000,000 of the stake of the first one, the number of seats will be huge and the internals of EpochManager may take a long time to calculate. This can be addressed by capping the number of seats at some large but limiting number, which means that if the stake of the lowest validators is so much smaller, they will be ignored.
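
A sketch of the dynamic seat price computation, including the cap from the second concern (assumed semantics, illustrative names):

def seat_price(stakes, target_unique=100, seat_cap=10_000):
    """Pick a seat price so roughly `target_unique` validators get at least one seat.

    stakes: list of validator stakes. Returns the per-seat stake amount.
    """
    top = sorted(stakes, reverse=True)[:target_unique]
    price = top[-1]  # the 100th-largest stake (or the smallest, if fewer validators)
    # Second concern above: cap the total number of seats so EpochManager
    # stays tractable even if the 100th stake is tiny relative to the largest.
    if sum(s // price for s in stakes) > seat_cap:
        price = sum(stakes) // seat_cap  # the smallest validators get ignored
    return price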

Getter naming convention

I noticed that in most (all?) of the proposals the getter methods are prefixed with get_. It seems that this is not idiomatic Rust style (source -> RFC). Should this be avoided, or do we just not care about Rust conventions?

Do you think we can still update the Fungible Token interface?

Near Shell NEP

This issue is to track the discussion for the near-shell enhancement NEP.

[Discussion] Shared global key-value storage

To be able to reuse contract code across multiple accounts, we suggest dedicating a shared global key-value storage that is available for reads from every shard.

Simple version:

This storage has the following properties:

  • the total cost to write into the storage is a one-time payment of PRICE_PER_BYTE_PER_SHARD * NUM_SHARDS * (NUM_BYTES_VALUE + NUM_BYTES_KEY) (see the sketch after this list)
  • once a value is stored, it can't be removed
  • to write into the storage, you need to issue a transaction from the account, which creates a custom receipt
  • once written, every shard should be able to read any value from this storage
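
The write cost from the first bullet, spelled out (the constant's value is a placeholder, not a proposed parameter):

PRICE_PER_BYTE_PER_SHARD = 10**19  # placeholder value, in yoctoNEAR

def global_write_cost(key: bytes, value: bytes, num_shards: int) -> int:
    """One-time payment to replicate (key, value) to every shard."""
    return PRICE_PER_BYTE_PER_SHARD * num_shards * (len(value) + len(key))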

This will allow the following use-cases:

  • Reuse contract code. Code can be deployed once into the global storage, and then accounts can just link to its hash, paying for only about 32 bytes of storage instead of the full contract size.
  • Have multiple contracts on one account.
  • Have some precomputed data deployed and accessible from multiple contracts.
  • Contract modules?
  • Shared config per version (needs write permissions).

State Rent vs State Staking

Moved from https://commonwealth.im/near/proposal/discussion/380-state-rent-vs-state-staking

State is the most important resource of a blockchain network.
It is what all nodes in the network need to maintain and what they all reach consensus on.

There are two major ways to charge for storage without undercharging (which is what happens now on ETH):

  • Charge state rent
  • Require some amount of $NEAR token on the account to occupy a byte.

There are also more complex alternatives, like what EOS is using, where RAM (their name for state) needs to be bought explicitly at the current market price. The price then fluctuates depending on the available RAM. These kinds of mechanics lead to lots of weird market manipulations and disadvantage developers.

State Rent

State rent is a mechanic where, each block, we charge the account some amount of $NEAR for the size it occupies. For example, if an account is 1kb and the price is 7e-15 $NEAR per byte per block, we will be deducting 7.168e-12 $NEAR per block, or ~0.00022 $NEAR per year.
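
Checking that arithmetic (assuming 1kb = 1024 bytes and roughly one block per second):

price_per_byte_per_block = 7e-15        # $NEAR
account_size = 1024                     # bytes

per_block = account_size * price_per_byte_per_block
assert abs(per_block - 7.168e-12) < 1e-20

blocks_per_year = 60 * 60 * 24 * 365    # at ~1s block time
per_year = per_block * blocks_per_year
assert 0.00022 < per_year < 0.00023     # the "~0.00022 $NEAR per year" above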

Pros:

  • Developers who need to deploy their contracts and applications, which might take ~100kb - 1mb of space, pay in a pay-as-you-go mode.

Cons:

  • The account balance becomes "virtual", because we are not updating it all the time as we charge. For example, in NEAR we currently never show the real balance.
  • Accounts must be deleted when the rent runs out. This creates both a weird user experience and potential attack vectors where an attacker tries to drain a user's account in some way, delete the account, and re-create an account with the same name to get all of its resources. The simplest way is to get the user to sign a tx that transfers all of their assets to some lending application, leaving the account without enough to pay for rent, and then, by taking over the account, receive all their funds back.
  • We have some weird code around validators to make sure they have enough money to pay for rent.
  • Even though we can burn these fees, in the end this doesn't require accounts to hold much balance, which removes this as a token sink.
  • The price is fixed in $NEAR.

State Staking

The alternative model, in its simplest form, is to require some amount of $NEAR on the account's balance per 1kb of data stored.

This means that the total supply of $NEAR maps to some amount of storage. For example, 1B tokens map to 1TB, which is roughly 1 $NEAR per 1kb of storage.

Specifically, the mechanics would work as follows:

def process_transaction(account, tx):
   ... do actual work ...
   # price_of_bytes here is the number of storage bytes covered by 1 $NEAR
   # (roughly 1kb per token in the example above)
   if sizeOf(account) > account.amount * config.price_of_bytes:
       fail_and_revert(tx)

(e.g. replacing the check_rent function with this different condition, and removing apply_rent completely along with the extra rent fields on the account)

Pros:

  • Improves the user experience around account ownership: it doesn't require deducting balance (if you have 10 $NEAR, you will still have 10 $NEAR in a year) and doesn't have the account deletion problems caused by a low balance. A tx that would reduce your balance below what's required to store the data will just fail with a meaningful error.
  • Creates a very solid token sink, both because of useful storage and the "excess" of $NEAR.
  • Creates a lending and borrowing market for $NEAR for storage purposes.
  • The price can adjust as we add more storage into the system by adding extra shards.

Cons:

  • Making user-centric contracts where we deploy 500kb contracts would require a large enough balance (e.g. 500 $NEAR), and storing any extra data would require even more.
  • Developers are now required to have a larger amount of $NEAR to deploy their contracts, and they must also consider how pricing may change over time.

The last item is actually very interesting, because it creates a new opportunity for something similar to Polkadot's "Initial Parachain Offering".

Let's say a developer expects to need 10Mb of storage for their application, but they don't have the required amount of $NEAR.

On one hand, they can borrow it for their contract (this can be implemented in a trustless way, without requiring collateral on such a loan).

On the other hand, the developer can sell their third-party token for $NEAR, or in exchange for loaning $NEAR for some period of time. Benefits of such an approach:

  • $NEAR holders get an alternative way to lend or provide their tokens in exchange for a new application token, which gives them a way to participate in the new ecosystem on top of $NEAR and capture value there as it grows.

  • These $NEAR token holders are immediately interested in the success of such an application and become its promoters. This also removes some of the tension between third-party tokens and native-token-based applications, like Bancor vs Uniswap, where Uniswap, which doesn't have its own token, is preferred by the community because it didn't create "another token". There are a few models for such third-party tokens that further reduce this tension.

The balance between validator staking and using $NEAR for storage will be maintained by the fact that, as more storage is required, more $NEAR won't be staked but will be allocated for storage. This means that each staked $NEAR will capture a bigger portion of inflation.

Additionally, we will need a clear program to onboard developers initially; for example, grants to developers who built interesting applications on TestNet to provide them with funds to launch on MainNet.

Thoughts?

[Tracking issue] Missing specs

This issue tracks the specs that were not yet written:

Protocol Specs

Transaction runtime

While http://nomicon.io/ provides some specification of the runtime, it is far from complete. Currently we only have a description of the main runtime data structures, an explanation of the actions, and examples of transaction execution. What is missing: a detailed explanation of when fees are applied and how (e.g. do we apply the base fee for reading storage before we start reading it or after?); details on how the runtime allows contracts to return promises (e.g. if we create a promise that awaits on receipts A and B, can we return it from the contract?); what sequences of actions are prohibited and what are allowed; in what order local receipts, delayed receipts, and other receipts are processed; what the exact consistency check for the tokens is; and in what cases refunds are not issued.

Overall, it might be the case that the spec of the blockchain runtime will be a large part of the Rust code rewritten as pseudocode or Python.

Contract runtime

We need clear explanations of how we actually run the smart contracts and what transformations and checks we apply before running Wasm. What is part of the protocol and what is not? E.g., is the fact that we compile instead of interpreting part of the protocol? How do we make sure the specifics of the Wasmer backend do not leak into the protocol spec (e.g., we make sure compilation errors and execution errors are indistinguishable)? How do we share the Wasm memory? The gas injection on its own can take a long time to write the specification for.

Trie

We need a detailed specification of the structure of the trie. How do we make sure that runtime-specific abstractions, like accounts, do not leak into the design of the trie? How do we do state splits for state sync? We also need to draw the line between what is part of the protocol spec and what is not; e.g., whether the trie is used directly during transaction execution or is only constructed/updated at the end of the block is implementation-specific.

Staking state-machine

We need an exhaustive diagram describing the state of a given validator and how it changes when they send a staking proposal, the proposal gets accepted, they unstake, etc.

API Specs

RPC API

Since we want different implementations of our node to be compatible with the same app-layer ecosystem, they need to have the same RPC. Describing such an RPC and writing the spec tests is a large and tedious piece of work.

[discussion] Dynamic gas price during transaction execution

There are a few attacks that are possible with the ability to buy a lot of prepaid gas at the current fixed cheap price. Some of them are described here:

The challenge is that the validator node and the runtime don't know the amount of gas that is actually going to be used when a transaction is accepted into a chunk. So a validator fills the block based on the burnt gas (the gas to convert a transaction into a receipt) instead of the total amount of prepaid gas per transaction. This allows an attacker to issue a lot of transactions, each with a lot of prepaid gas, and pay at the current gas price. The gas price will grow in the next block, but the transactions already have a lot of prepaid gas at the cheaper price. This attack allows an attacker to stall the shard for a long time without paying a lot of fees.

Solution 1: Prepaid gas -> Burnt gas

The first proposal is to change prepaid gas to burnt gas, so the amount of gas you prepay is completely burnt. This allows limiting the chunk size based on prepaid gas instead of burnt gas, which prevents an attacker from issuing a lot of cheap transactions.

The issue is that contract developers will need to carefully estimate gas usage and might get stuck in an inconsistent state if they underestimate it. It might also lead to unexpected results for users and lost funds due to overcharged gas.

Solution 2: Charge burnt gas at current price

The alternative is to change the way the prepaid gas is charged. Instead of buying all prepaid gas at the current price, assume that the transaction will issue the cheapest possible promise every block with NOOP compute. If the blocks are filled completely, the current gas price will grow at 1% (see economics) per block. We can estimate the maximum amount of tokens needed per block to issue such a transaction (ignoring delayed receipts). Then we can charge each receipt at the real current gas price instead of the initial prepaid gas price.
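
A sketch of the resulting worst-case price bound, assuming the stated 1% per-block growth:

def max_gas_price(initial_price: float, blocks_elapsed: int) -> float:
    """Upper bound on the gas price after `blocks_elapsed` completely full blocks,
    given the ~1% per-block growth from the economics spec."""
    return initial_price * 1.01 ** blocks_elapsed

# A receipt executing 10 blocks out can see at most ~10.5% price growth.
assert round(max_gas_price(100, 10), 1) == 110.5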

Cons:

  • A receipt might be delayed, and the gas price can grow more than 1% between 2 promise executions. In this case we have to charge more than the estimated amount. We need to think about whether this can be addressed.
  • The actual price change per transaction under this proposal is unclear.
  • It requires Runtime changes to charge gas at the current price and keep track of the remaining balance. When sending receipts, the gas price estimation has to be done similarly to the initial transaction pricing.

Pros:

  • It doesn't require contract changes or transaction API changes.

It requires near/nearcore#2523 to be fixed, otherwise an access key allowance will be drained even faster.

Dynamic resharding

Overview

For any capacity of the blockchain, there is a limit on how many transactions it can process.
NEAR's sharding is designed to increase the capacity of the network with demand, allowing it to maintain low fees.

Currently the latency of increasing the number of shards is very high, because it needs a technical governance change and, furthermore, multiple epochs for resharding.

Proposal

The original idea is that we can change the number of shards (and where they are split) dynamically, based on the load on the network.

Resharding rule

There are two major approaches:

  • automatic heuristic based on load on the network
  • manual technical governance

A heuristic approach:

Merge two sequential shards back if the load on both of them is under 25% of shard capacity and the level of delayed receipts is under D% for two consecutive epochs.

This is motivated by the following:

  • we don't want to merge two shards if the average load of the merged shard would be above 50%
  • we don't want to merge a shard that had delayed receipts, as that means it has spikes it won't handle
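
The heuristic as a predicate (illustrative types; d_pct is the D parameter from the rule above):

from dataclasses import dataclass

@dataclass
class EpochLoad:
    load_pct: float              # average load, as % of shard capacity
    delayed_receipts_pct: float  # level of delayed receipts, as %

def should_merge(a: list, b: list, d_pct: float) -> bool:
    """Merge two sequential shards only if both ran cool for the last two epochs."""
    def cool(epochs):
        return all(e.load_pct < 25 and e.delayed_receipts_pct < d_pct for e in epochs)
    return cool(a[-2:]) and cool(b[-2:])

assert should_merge([EpochLoad(10, 0), EpochLoad(20, 1)],
                    [EpochLoad(5, 0), EpochLoad(15, 0)], d_pct=5)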

Resharding process

The resharding process can in general be decomposed into 2 steps: splitting a single shard into 2, and merging 2 shards into one.

Splitting a shard

Given the transition to stateless validation and the chunk producer structure, we can use the following process irrespective of epoch boundaries. When the resharding rule determines that a shard S needs to be split into two at account X:

  • Chunk producers for shard S are deterministically split into two groups, S_left and S_right, as of block T.
  • S_left now processes only transactions and receipts before X, and S_right now processes txs/receipts after X. These chunk producers don't need to re-sync, as they already have the full state of the shard at block T and simply stop using the other part.
  • For stateless validation, the shard mapping is ideally assigned dynamically already (perhaps for a window), so the mapping is simply reassigned with one more shard.

The core idea here is that instead of performing an actual resharding, the affected shard's chunk producers just stop processing part of the txs/receipts, as if there were a new shard.

Merge two shards

TBD

Technical questions

The biggest technical question to address is that the state trie contains the shard id as part of the key prefix.

Runtime API for burning NEAR

We need a way to burn NEAR, for example for the top-level account name registry account.

Burning is different from just sending NEAR to a locked account without retrieval, because it decreases the total supply, which affects a bunch of other parameters (like inflation and payouts).

NEAR Spec and standards

I want to start a discussion about where and how we are going to maintain the NEAR spec and standards.

The NEPs process is mostly around proposals to modify / add things, but we also need the final version that lists the overall specification for the protocol as well as all the accepted / used standards.

My proposal: we rework this repository from being a list of proposals into maintaining the released version of the spec plus all accepted standards.

E.g., it would be a combination of https://github.com/ethereum/eth2.0-specs and https://github.com/ethereum/EIPs.

When a NEP gets accepted, it modifies the staging version of the spec and gets released with the new version. Alternatively, if a NEP represents a new standard and doesn't require changes to the protocol, it goes to the standards list.

Why mix standards with the spec?
Mostly because standards end up representing a big part of the protocol itself.
If you think of the ERC-20 specification, it actually is a core part of the Ethereum protocol, even though it doesn't live at the blockchain level itself. The same goes for any number of other protocols.

I also think we can repurpose https://nomicon.io/ to serve the final version of the spec. That website already maintains mostly the spec details, with a few implementation details that we currently have in the Rust client. We can split the Rust specifics into docs inside the Rust repo (which currently isn't pointing to nomicon.io anyway).

Random Vector Commitment based Time Locking of Shard Contracts

This is a new idea proposal for adding synchronization to the concurrent execution of sharded contracts in the NEAR protocol. I am going through the NEAR protocol's design for concurrent execution of smart contracts in different shards, and I can see that NEAR is introducing a random-vector-based commitment for the implementation of randomness across contracts. Can we do verified time locking of smart contracts across the shards with counter-based contract execution? The counters could be synchronized by the random vector commitment in a sequential manner. Once the time lock is verified and revealed in a commitment, it can release the lock and then process the shards in proximity.

Write a spec for chain syncing

The corresponding review work item is here: near/nearcore#2307

Write a spec for all the chain sync modes.

We have three modes of syncing: header syncing, block syncing and state syncing.

The entry point for any sync-related logic is ClientActor::sync. This method is called every few milliseconds. It checks whether any syncing is needed and, if so, delegates the actual work to the HeaderSync, BlockSync, or StateSync classes.

HeaderSync

We switch to HeaderSync if one of our peers claims to have their head way ahead of ours (in terms of height).

Since peers can report arbitrary values, this is a vector for a possible attack. We somewhat circumvent it by monitoring how quickly a particular peer sends us the headers; if it's too slow, we ban them and therefore remove their reported head from consideration when next deciding whether to do header sync.

Header sync starts with us sending several anchor hashes (a so-called "locator"), and the responding side starts sending us headers beginning from the most recent anchor hash that is on the canonical chain.

BlockSync

I don't know exactly how BlockSync works, so that will have to be mostly picked up from the code.

StateSync

StateSync sends the recipient sufficient information to start processing and building the chain without downloading and processing all the blocks. The state sync consists of

TODO: @alex Kouprin can you write a quick overview of all the components of StateSync with their location in the code

State Sync is a procedure which allows a node to receive the State from another node without processing all blocks. For our particular purpose, State can be described as a Trie of raw data.

The circumstances under which State Sync should run are outside of State Sync and not needed for understanding it at this point. State Sync execution starts with calling StateSync::run. It takes sync_hash, the hash of the first block of the new epoch; sync_hash means that we want to receive the complete State strictly before the block with hash sync_hash.

State Sync may request States for several shards. These are in tracked_shards.

In State Sync, the State for a concrete shard is divided into two concepts:

  1. The State Header, which has the important data to make syncing possible. It contains the latest chunk, incoming receipts, proofs, and metadata about the State - StateRootNode.
  2. State Parts, filled with raw data. We assume that:
    1. There is a reasonable number of parts. Each part is no more than 1 Mb long.
    2. Parts can be validated and stored easily (see validate_state_part()).
    3. Parts can be combined easily to receive the complete State. All the Parts are needed to get the State (see confirm_state()).

Per the above, for each shard the State Sync status is stored in ShardSyncStatus and may be one of the following:

  1. StateDownloadHeader - we want to get State Header.
  2. StateDownloadParts - we want to get State Parts.
  3. StateDownloadFinalize - we received all parts and want to combine them together.
  4. StateDownloadComplete - we've done everything properly, no action is needed.
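
That progression, mirrored as a small state machine (a simplification of nearcore's actual ShardSyncStatus, for orientation only):

from enum import Enum, auto

class ShardSyncStatus(Enum):
    STATE_DOWNLOAD_HEADER = auto()    # 1. fetch and validate the State Header
    STATE_DOWNLOAD_PARTS = auto()     # 2. fetch all State Parts (<= 1 Mb each)
    STATE_DOWNLOAD_FINALIZE = auto()  # 3. combine parts, replace the current State
    STATE_DOWNLOAD_COMPLETE = auto()  # 4. done, no action needed

NEXT = {
    ShardSyncStatus.STATE_DOWNLOAD_HEADER: ShardSyncStatus.STATE_DOWNLOAD_PARTS,
    ShardSyncStatus.STATE_DOWNLOAD_PARTS: ShardSyncStatus.STATE_DOWNLOAD_FINALIZE,
    ShardSyncStatus.STATE_DOWNLOAD_FINALIZE: ShardSyncStatus.STATE_DOWNLOAD_COMPLETE,
}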

Getting State Header

To get the State Header, State Sync sends NetworkRequests::StateRequestHeader to a node which has the State for the shard. The node that receives the request calls get_state_response_header(), computes the header locally, and returns ShardStateSyncResponseHeader by sending NetworkViewClientResponses::StateResponse back.

The code of get_state_response_header() is hard to understand without paper and pen, which is why you may find lots of comments there. The idea is that everything which is not stored in the State should be firmly proven. In get_state_response_header() we build proofs so that the node requesting the State can check them later.

state_root_node is an important one for continuing State Sync. It contains the State Size, which is necessary for dividing the State into State Parts, and the State Hash, which is necessary to prove State Parts later.

A node that receives the State Header executes set_state_header() to make sure that the State Header is correct. It runs all checks and proves the State Header's validity.

Getting State Part

After receiving a valid and proven State Header, a node is able to request State Parts by sending NetworkRequests::StateRequestPart. A node that receives the request calls get_state_response_part() and returns a Vec<u8> for the Part by sending NetworkViewClientResponses::StateResponse back.

get_state_response_part() delegates collecting of State Part to Trie by calling obtain_state_part().

A node that receives State Parts executes set_state_part() for each of them. set_state_part() delegates all proofs to the Trie by calling validate_state_part(), and stores the State Part in storage.

Finalizing

After getting all State Parts, a node is unlocked to call set_state_finalize(). It runs final checks that the complete State matches all the State Parts combined, and replaces the current State with the State we just received. All parts are then deleted by calling clear_downloaded_parts().

[Proposal] The contract lifetime

as a follow-up to the protocol research group discussion on 30.06.2020

At the moment the NEAR runtime doesn't have a notion of a contract lifetime. The lack of contract lifetime logic negatively affects the following areas:

  • security
  • safety
  • composability
  • DevX

From a DevX perspective, the developer has to remember to define the deployment procedure outside of the contract source code; there is no other choice but to define and maintain the deployment script separately. This is not perfect for a bunch of reasons:

  • it creates surface area for unnecessary bugs due to source code fragmentation and a lack of sound guarantees;
  • it creates the necessity for additional safety patterns as a consequence (like having deployment script(s) and tests for them);
  • it makes a contract hard to review due to the split in contract logic, which makes it hard to reason about the contract's state invariants;
  • factory contracts have to replicate the deployment script logic (a composability issue);
  • since there is no explicit notion of the contract lifetime, developers are not forced to think in terms of a contract lifetime (which should not be underestimated in the case of financial contracts);
  • the DevX team placed the related issue on their agenda (CC @potatodepaulo).

The list is not exhaustive.

The proposal

I suggest upgrading the runtime by introducing an explicit notion of a contract lifetime. This shouldn't require a lot of changes to the code, though the impact on the runtime semantics and DevX is significant.

Implementation steps

  1. Reserve a function export symbol _init;
  2. Extend the Deploy action to carry arguments (Vec<u8>);
  3. Extend the Runtime to call the _init function export with the provided arguments as part of Deploy action processing, and update the contract state;
  4. Restrict direct (full) access to the account upon the initial code deployment. Relax it back when account.code is empty.

As a new default, we should take away the ability of an external actor to arbitrarily alter code with no respect for the logic of the deployed contract. From the moment of deployment till destruction, the account is fully managed by the contract code; that should be an axiom. Not having this default is not safe, since it opens a window for leaks in the contract's invariants. I suggest the following: the Deploy action should check whether there is code already deployed, and if so should restrict FullAccess to the account, since otherwise there is direct external access to the internal account state whose consistency ought to be maintained by the contract. Having this rule may improve the trustability of contracts due to better semantic expressiveness of the operation.
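
A pseudocode sketch of the proposed Deploy processing, following the four steps above (run_wasm_export and revoke_full_access_keys are hypothetical hooks, not existing runtime functions):

def process_deploy_action(account, code, init_args):
    if account.code:
        # Step 4: once a contract lives on the account, redeployment must go
        # through the deployed contract's own logic, not an external actor.
        raise PermissionError("account is managed by its contract")
    account.code = code                           # Deploy now carries init_args (step 2)
    run_wasm_export(account, "_init", init_args)  # steps 1+3: call _init, update state
    revoke_full_access_keys(account)              # step 4: restrict direct full access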

Note: fees are not covered in this proposal (CC @olonho)

CC @amgando @evgenykuzyakov @nearmax @willemneal @SkidanovAlex @ilblackdragon @frol @bowenwang1996

Exchange standard

We need a standard for how exchanges will accept deposits and withdrawals, and settle between each other.

Advanced Fungible Token Standard

As per the discussion in the #dev-contract channel on Discord, I am opening a discussion here for an alternative to NEP-21. I have been involved in Ethereum and EOS token development for the last four years and would like to highlight some issues and solutions the NEAR community should consider before pushing forward with a token standard.

I am willing to champion this issue through, as long as I know there is enough community acceptance and backing for the proposal.

Advanced Fungible Token Standard

  • Proposal Name: advanced_fungible_token_standard
  • Start Date: 2020-09-01
  • NEP PR:
  • Issue(s):

Summary

General-purpose fungible token standard aiming for the better developer and user experience.

Motivation

Currently NEP-21 blindly copies the most widespread smart contract token standard, ERC-20. ERC-20 was initiated back in 2015 and formalized starting in 2016. However, time has proven that many mistakes were made with ERC-20, and it would be foolish to copy those mistakes into the NEAR token standard when one can start from a clean slate.

Below I go through ERC-20's shortcomings one by one, with some links as reference material. Some post-Ethereum networks, like EOS, have already addressed the technicalities and have a more user-friendly approach to tokens. Ethereum has also addressed the issues in the form of later standards, like ERC-777, but due to ossification they have not been adopted (more below).

Smart contracts cannot reject transfers

Because of the lack of a standardized token receive hook, smart contracts cannot reject token transfers sent to them. If someone accidentally sends ERC-20 tokens to a smart contract address, they are likely lost. This error is common and happens especially when copy-pasting addresses around: tokens get sent to the token contract itself.

There is a Twitter account tweeting these mistakes:

https://twitter.com/TokenOops

Root cause: ERC-20 has different transfer() and transferFrom() semantics for normal accounts versus smart contract accounts.

Accounts cannot express whether they can receive tokens

Similar to the one above, a common mistake is to send tokens to a centralised exchange address that cannot handle them. For example, Bittrex at one point charged $5000 for "token recovery" to give back tokens that a user had deposited to the exchange when the exchange did not have an active order book for them.

For example, there are tokens worth $772M in the 0x0 address. Some of them are token burns, but most are accidental sends and wallet input field failures: https://etherscan.io/address/0x0000000000000000000000000000000000000000

Root cause: Accounts cannot express what tokens they support

Hot wallets cannot interact with smart contracts

Centralised exchanges and other custodial services use hot wallets where each receiving address belongs to a user, but withdrawals come from a pooled wallet. Because most smart contract operations use msg.sender as the author, any reverse payments to msg.sender go directly to the pooled hot wallet address. Because the transfer is not routed through the user's receiving address, the hot wallet accounting cannot mark this reverse payment deposit as belonging to the user.

As a hacky workaround, centralised exchanges like Kraken and Coinbase set the gas limit for token transfers very low, hoping that the gas limit prevents any smart contract interaction from direct hot wallet withdrawals.

Root cause: transfer() does not provide an alternative address as the return address

Different transfer semantics for account and smart contract interaction

People expect transfer() to work with a smart contract like it works with normal accounts. However, this is not the case. A direct transfer() to a smart contract address (instead of the approve() + transferFrom() pair) usually leads to loss of the tokens, because the smart contract cannot correctly account the tokens to msg.sender.

https://mobile.twitter.com/moo9000/status/1300167829929459713

Native asset is treated differently from tokens

In the Ethereum DeFi world, the native asset ETH must be wrapped into the WETH ERC-20 token to interact with many of the smart contracts. This causes extra work for developers, as they need to write duplicated code paths with ifs for all deposits and withdrawals. It also confuses users, as they see both the native asset and the wrapped asset in their wallet, and wallets do not account for them as one item.

As a side note, Solana copied this design mistake. In the case of EOS, by contrast, all assets are treated similarly and any token asset can be added to any payable transaction.

https://mobile.twitter.com/ProjectSerum/status/1300633211932868610

Lack of native relayers and gas fee markets

To transact with ERC-20 tokens, the user needs to have both the token and Ether on the same account. This is very confusing for users who are only there for the token, for example in gaming scenarios, and could not care less about cryptocurrency.

ERC-20 lacks native mechanisms for fee markets and relayers who would be willing to pay the gas fee on behalf of the user and take a fee cut in the token amount. History has proven that adding this functionality afterwards is especially complicated. Multiple smart contract wallets (Argent, Pillar, etc.) have come up with incompatible, proprietary solutions.

Root cause: Lack of gas fee market design when ERC-20 was launched

Lack of metadata

ERC-20 only provides information for the name, symbol, and token supply. Even the number of decimals is an add-on. This has created a cottage industry of different "token lists" that supplement this information. Common elements to add would be at least a homepage, an icon, and relayer information (for gas market transactions). Metadata often also contains various discussion forums, a support email, official author information (foundation, corporation), and such. Wallets could consume this information directly.

For example, the following applications maintain their own incompatible lists just to get a token icon visible in the wallet: MyEtherWallet, TrustWallet, MetaMask, Parity. Then the services maintain their own lists: Uniswap, Loopring, IDEX.

Root cause: blockchain persistent storage was deemed too expensive for this, and the community was unable to come together around a common standard
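
For illustration, a hypothetical metadata record covering the fields mentioned above (field names are illustrative, not a proposed schema):

token_metadata = {
    # the only fields ERC-20 itself provides
    "name": "Example Token",
    "symbol": "EXT",
    "total_supply": "1000000000",
    "decimals": 18,                  # even this is an add-on in ERC-20
    # supplemental fields wallets currently source from third-party lists
    "homepage": "https://example.com",
    "icon": "https://example.com/icon.svg",
    "relayer_info": "...",           # for gas-market transactions
    "support_email": "support@example.com",
    "author": "Example Foundation",
}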

Lack of notifications

Because of how ERC-20 transfer events are implemented, wallets usually need to run extra infrastructure and servers to detect incoming token transfers. Developers lack a generic "notify me of all incoming transfers for this address" event. (Furthermore, it is even worse for ETH itself, as it does not have any notifications, and the only way to see balance changes is polling or a heavily instrumented custom node.)

This makes it expensive to build wallets, as you need to invest a lot in server-side infrastructure, which is against the point of decentralisation.

Standard UX rules for how the user finds out about incoming transfers

ERC-20 wallets like MetaMask do not display incoming transfers by default if the ERC-20 token is not whitelisted in the MetaMask source code. This is to avoid airdrop spam attacks, where desperate marketers send a small amount of tokens to everyone in the hope that the user does a web search for the token and proceeds to buy or sell it.

However, it also makes it impossible to send any tokens to new users. Because the MetaMask wallet silently ignores all incoming transfers that are not whitelisted either by the MetaMask team or by the user ("Add custom token"), the first question of a novice user is whether their tokens were lost.

Root cause: the community was unable to come together around a common standard

Re-entrancy implementation guidelines

Because of re-entrancy issues in badly written smart contracts, people are afraid of moving away from ERC-20 even though these issues have already been addressed. This has caused ossification of Ethereum token development: ERC-20 is barely good enough, but users and developers will be suffering for years to come.

Whereas this issue was addressed in later code examples and is now even highlighted in the Solidity developer manual, the community has still not moved past it.

Root cause: community ossification and psychological resistance to change

Guide-level explanation

Here I propose that NEAR does not repeat past mistakes by rolling out an ERC-20 clone, but instead has a solid token standard from day zero.

The standard should cover

  • A reference of token implementation

  • Development guide

    • How to send tokens from plain accounts

    • How to send tokens from smart contracts

    • How to receive tokens on smart contracts

    • Security guidelines

  • Standard metadata fields

  • A reference user interface and interaction guide for wallet developers

    • Usage of metadata and icons

    • Rules for displaying incoming transfers

    • Use of relayers and gas markets

  • A reference guide for hot wallet integration

    • How exchanges should process deposits and withdrawals

    • How exchanges can directly interact with smart contracts

Reference-level explanation

TODO

Drawbacks

We should definitely do this.

Rationale and alternatives

TODO

Unresolved questions

TODO

Future possibilities

TODO

[Proposal] Increase min gas price

Currently:

  • Transaction fees: 5 N for 1M basic function call transactions (~4 Tgas each) => which means a full block of 1000 Tgas costs 0.00125 N
  • Storage fees: 1 N per 10kb => an average contract of 300kb requires 30 N

The suggestion is to:

  • Transaction fees: bring full block fees to 0.1 N by increasing the min gas price to 100,000,000 attoN
  • Reduce the storage fee by 10x: 100 kb for 1 N, making an average contract 3 N

Reasoning: storage right now ends up being very expensive for contracts like multisig, and we want to reduce it to a manageable level. Tx fees are way too low at this point, and we should leave more room to reduce them later.
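A quick back-of-the-envelope check of these numbers (a sketch; I derive per-gas prices in NEAR directly from the stated full-block costs, sidestepping the attoN unit):

fn main() {
    let tgas = 1e12_f64; // gas units per Tgas
    let full_block = 1000.0 * tgas;

    // Current: 5 NEAR buys 1M basic function calls at ~4 Tgas each.
    let current_price = 5.0 / (1_000_000.0 * 4.0 * tgas); // NEAR per gas unit
    println!("current full block: {} NEAR", current_price * full_block); // 0.00125

    // Proposed: a full block at 0.1 NEAR implies an 80x min-gas-price increase.
    let proposed_price = 0.1 / full_block;
    println!("price increase: {}x", proposed_price / current_price); // 80
}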

[Discussion] Voting criteria for the transition to Phase II

Validators Vote for the Transition to Phase II

Update 10/5/2020 - NEAR token holders can see which criteria different validators are using to vote Yes to enable transfers here.

The NEAR mainnet launch process is truly unique in seeking to launch in a completely decentralized manner. Starting on September 24th, the NEAR Foundation will no longer be running any validators on NEAR Mainnet, making it a 100% community-run network. This is Phase I - the network is functional and decentralized, but does not yet have transfers or inflation. At this point, it is now our shared responsibility, as a community, to carry the network forward and complete the full mainnet launch via decentralized governance.

The full launch process is covered in detail here, but in brief, Phase II will enable token transfers and protocol rewards, and will require validators to take two distinct actions:

  • An on-chain vote to enable token transfers.
  • A nearcore update to enable inflationary rewards.

Phase II Vote Criteria

The vote is the first step in proceeding to Phase II. The NEAR community are the decision makers, so it’s important that we all work together to make a good one.

What does the community want to see before voting to enable token transfers and begin launching the unrestricted mainnet?

To start the conversation, below is an initial list of criteria. Questions, feedback, and suggestions are welcome. This list is by no means exhaustive, and everyone is welcome to use whatever criteria they want to vote whichever way they want. But if we are able to align on the launch criteria, we hope it will make it easier to have a successful launch of the NEAR unrestricted mainnet.

Infrastructure

  • There is a sufficient number of nodes (30+) running the network
  • There is a sufficient amount of stake (115m NEAR, out of 750m available for staking) online
  • No more than 10% of nodes on the mainnet network have been down at the same time over the last two weeks

Network Stability

  • The mainnet network has not fallen out of sync, needed to be rebooted, or been unable to finalize blocks in the last two weeks
  • All P3 or higher (or equivalent, if P classification not used) bugs are closed
  • Any ongoing audit is sufficiently completed, and if critical or major bugs are discovered, they are resolved
  • A recovery plan is in-place that describes the process for reverting back to a known good software configuration and network state in case an upgrade fails, either by corrupting state or failing to start. (Courtesy of @adrianbrink)

Upgrades

  • There is a defined and tested process by which network upgrades are performed
  • The last 2 upgrades have been performed without issues
  • There is a plan in place for the upgrade to enable inflation, including a dry run. A dry run can be performed by the NEAR team since inflation is already enabled on testnet - the NEAR team should provide an update on the dry run's success as soon as possible.

Security

  • Security program is in place to report bugs and vulnerabilities

Token Holders

  • Token holders received information on their staking and participation options (DIY, infrastructure providers, custody, etc.)

Wallets and Stake Management

  • There are 2 or more options by which token holders can manage their tokens
  • A custody provider (e.g., Coinbase Custody) is ready

Foundation

  • The NEAR Foundation is fully established and has all necessary positions appointed

Communications

  • Comms plan is in place to alert the NEAR ecosystem of upcoming votes and upgrades

Next Steps

  1. Provide feedback on the criteria using this github issue
  2. We will work with the NEAR team to schedule two community calls: one on Wednesday the 30th at 9pm ET and one on Friday, October 2nd at 12pm ET.

As a reminder, these criteria are not rules or mandatory, and everyone is free to vote as they please. Bison Trails' involvement here is to help shepherd the process and work alongside the community to collaborate in service of a successful mainnet launch. When the community agrees on criteria and NEAR is meeting the criteria, we will initiate a vote if one has not been initiated, and we will support it with a vote in favor of unlocking transfers.

This was prepared by Bison Trails for informational purposes only and is not intended to be legal, tax, financial, investment or other advice, nor is it a recommendation or endorsement of any digital asset or network. Bison Trails may have a financial interest in, or receive compensation for services related to, the aforementioned digital assets and networks.

Do only the necessary validation prior to including transaction into a chunk

Rationale

Currently, the network layer is aware of receipt (transaction) action semantics, in the sense that the node does TX validation outside of chunk validation:
https://github.com/nearprotocol/nearcore/blob/a3ffe63f03c6d0e5f121aa783ff1091428036802/runtime/runtime/src/verifier.rs#L57-L178

On chunk validation, the runtime does the same TX validation once again. This proposal suggests a protocol change that allows making only the necessary TX checks prior to inclusion into a chunk.

Solution

We can change SignedTransaction to the following, exposing only the information necessary to check that the signature is authentic, that the signer (access_key) is solvent enough to cover attached_gas, and that attached_gas > config.receipt_cost_per_byte * receipt.len():

SignedTransaction {
    signer: AccountId,
    public_key: PublicKey,
    signature: Signature,
    /// Gas attached to buy, calculated on the client according to the cost of the receipt:
    /// `config.receipt_cost_per_byte * receipt.len()` + `(total cost of actions)`
    attached_gas: Gas,
    gas_price: Balance,
    receipt: Vec<u8>
}

With receipt_cost_per_byte in the chain config, on chunk generation we only have to check that the account (access_key) has funds to cover the gas needed to parse the ActionReceipt. Thus the node is not aware of the receipt structure at all.

This way, the minimum attached_gas to be included into a chunk will be equal to config.receipt_cost_per_byte * transaction.receipt.len(). Of course, the signer (nearlib) had better attach gas to cover the total cost of the included actions; otherwise the transaction outcome will be ExecutionOutcome(ExecutionStatus::Failure) on chunk application (the validator will burn all attached_gas for the actual CPU work done).
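A minimal sketch of what pre-inclusion validation could look like under this proposal (names and shapes are illustrative, not nearcore's actual API):

struct Config { receipt_cost_per_byte: u64 }

struct SignedTransaction {
    attached_gas: u64,
    gas_price: u128,
    receipt: Vec<u8>,
    // signer, public_key, signature omitted for brevity
}

enum InclusionError { InvalidSignature, TooLittleGas, InsufficientBalance }

fn validate_for_inclusion(
    tx: &SignedTransaction,
    signer_balance: u128,
    signature_valid: bool, // result of verifying tx's signature over its body
    config: &Config,
) -> Result<(), InclusionError> {
    // 1. The signature must be authentic.
    if !signature_valid { return Err(InclusionError::InvalidSignature); }
    // 2. Attached gas must at least cover the per-byte cost of the opaque receipt.
    let min_gas = config.receipt_cost_per_byte * tx.receipt.len() as u64;
    if tx.attached_gas < min_gas { return Err(InclusionError::TooLittleGas); }
    // 3. The signer must be solvent: able to buy all attached gas at gas_price.
    if (tx.attached_gas as u128) * tx.gas_price > signer_balance {
        return Err(InclusionError::InsufficientBalance);
    }
    // Note: the receipt body (actions) is NOT parsed here; that happens later,
    // during chunk application by validators.
    Ok(())
}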

Recap

The proposed change allows the chunk producer to skip parsing and validating the TX body (actions) prior to including it into a chunk, leaving this validation to validators. This gives us the following interdependent advantages:

  • it speeds up chunk production (no need to execute the same validation logic twice);
  • it abstracts the networking layer (chain), making it independent of the state transition function (runtime) semantics;
  • the node does the minimum checks needed to validate a transaction;
  • it makes transaction deserialization and (semantic) validation costs accounted for (included in the fee), which reduces the node's DDoS attack surface

Issues:

  • nearlib changes required: it must calculate attached_gas based on the actions included in the transaction

Comments?

@evgenykuzyakov @nearmax @bowenwang1996 @ilblackdragon @SkidanovAlex

Typo in reward formula "Economics - NEAR Protocol Specification"

While reading the NEAR Protocol Specification available at https://nomicon.io/Economics/README.html, I came across an error in the reward formula for a given epoch t. The document's source is located at /docs/Economics/README.html.

In Line 190 the reward formula is calculated with 1 - REWARD_PCT_PER_YEAR i.e.

reward[t]  = totalSupply[t] * ((1 - REWARD_PCT_PER_YEAR) ** (1/EPOCHS_A_YEAR) - 1)

In Line 297 the reward formula is calculated with 1 + REWARD_PCT_PER_YEAR

reward[t]  = totalSupply[t] * ((1 + REWARD_PCT_PER_YEAR) ** (1/EPOCHS_A_YEAR) - 1)

The latter formula should be the correct one; with 1 - REWARD_PCT_PER_YEAR the base of the exponent is below 1, so the computed reward would be negative.
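A quick numeric check with illustrative parameter values (the actual values are defined elsewhere in the spec):

fn main() {
    let pct = 0.05_f64;        // illustrative REWARD_PCT_PER_YEAR = 5%
    let epochs_a_year = 730.0; // illustrative EPOCHS_A_YEAR
    let total_supply = 1e9_f64;

    let with_minus = total_supply * ((1.0 - pct).powf(1.0 / epochs_a_year) - 1.0);
    let with_plus = total_supply * ((1.0 + pct).powf(1.0 / epochs_a_year) - 1.0);
    println!("1 - PCT: {with_minus:.0}"); // negative: supply would shrink
    println!("1 + PCT: {with_plus:.0}");  // positive per-epoch reward
}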

[Docs] Blog/write-up on how our gas works

We need a public blog post that describes, for our users and partners, all the nuances of gas:

  • Gas cost inflation;
  • Gas limits;
  • Pessimistic gas pricing;

Relation to tokens:

  • Refunds;
  • Allowance.

I suggest the blog post have a strong focus on the motivation behind the engineering decisions, so that it is very clear to readers that our design is the minimal design that satisfies the requirements.

Disable refunds and burn all prepaid gas

After a brief discussion with @SkidanovAlex I think this issue is severe enough to be promoted to Phase 1. Also, I think #104 is not framing the right problem: the problem is not that shard congestion is not expensive enough to cause; the problem is that it is possible in the first place. It should not be possible to disable transaction processing of a shard for 2 minutes without losing a large stake in the system; it shouldn't be merely expensive.

Problem description

Suppose there is a contract that burns 300 Tgas during its execution. Suppose I create 200 transactions that call this contract and submit them asynchronously from multiple accounts so that they end up in the same block. All 200 transactions are going to be admitted into a single block, because converting a single function call transaction to a receipt only costs ~2.5 Tgas. Unfortunately, only 2 such function calls can be processed per block, which means that for the next 100 blocks the shard will be doing nothing but processing delayed receipts and will not be processing new transactions, resulting in almost 2 minutes of downtime for clients using our blockchain.

The cost of a single such attack is 60 NEAR, and the attacker can repeat it after the delayed receipts are processed.
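Spelling the arithmetic out (a sketch; ~1 s block time assumed, and the 60 NEAR figure corresponds to the attacker paying for all the attached gas):

fn main() {
    let txs: u64 = 200;             // transactions submitted, 300 Tgas each
    let calls_per_block: u64 = 2;   // 300 Tgas calls that fit in one block
    let conversion_tgas = 2.5_f64;  // cost to convert one TX to a receipt

    // Admitting all 200 TXs costs only ~500 Tgas, so they fit in one block:
    println!("admission cost: {} Tgas", txs as f64 * conversion_tgas);
    // ...but executing them keeps the shard busy for ~100 blocks (~100 s):
    println!("blocked blocks: {}", txs / calls_per_block);
    // Total gas the attacker buys (priced at 60 NEAR in the text above):
    println!("attached gas: {} Tgas", txs * 300);
}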

Broken invariant

The root of the problem is that we are breaking the following invariant:

In a given time interval T, it should not be possible to submit transactions whose attached gas can result in more CPU computation than can be processed in a time interval of the same length T.

It is a flow problem: if the source produces more flow than the sink can accept, it will accumulate somewhere.

Heuristics

As long as the invariant is broken, no amount of heuristics can fix it. Examples of broken heuristics that will not work:

  • Increasing the gas price based on how long or how much the shard has been congested.
    Counter-argument to the above heuristic:

    • Since the attacker can perform the attack within one block, they are not affected by the price change, so the attack is already viable. The attacker can perform this attack at random times of the day when the gas price is low. For instance, they can cause 1h of total downtime per day by paying 3600 NEAR, which is negligible compared to the damage it can cause. And this attack is completely agnostic to how much the gas price hikes after the block that admits the 200 transactions.
  • Receipt priority queue based on gas price. Every function call has a gas price attached to it. Receipts in the delayed queue with a higher gas price are processed first.
    Counter-argument:

    • It does not solve the fundamental issue -- the congestion is still possible. The 2-minute congestion will have exactly the same price. If they do it 36 times at random times during the day, when there is no prior congestion, it will result in a total of 1 hour of downtime per day;
    • The second-order effects are not studied. It might open a whole surface of attack vectors and manipulations that we would need to reason about. It is not the first time we have introduced a seemingly simple change with complex second-order effects that later created significant difficulties. For example, state staking seemed simple at first, until we figured out that now some contracts need attached tokens and can be locked by state exhaustion;
    • Our DevX and UX would need to be significantly more complex. Developers and partners would need to think about how to implement mechanics to boost their receipts that get stuck. Some of them might need to include additional UI elements and educate their users on "stuck" receipts, the way MetaMask has special functionality to boost transactions. This significantly degrades our UX;
    • Our app layer would need to be reworked. Components like the bridge would need special complex logic to track all receipts produced by a given transaction and unstick all of them. The wallet would need special UI elements and mechanics;
    • Receipts can be delayed infinitely;
    • New attack: because receipts can be delayed infinitely, a user can create lots of receipts when there is a dip in the gas price and permanently use our state to store them. We currently don't make users pay for the state that delayed receipts occupy, which creates an attack angle -- someone can grab a lot of state for infinitely delayed receipts and make validators store an unlimited amount of data indefinitely.

Solution

It is clear we need to unbreak the invariant. The only way to do that is to make sure that each block only contains transactions and receipts whose prepaid gas is cumulatively less than the block capacity.

Unfortunately, there is no incentive for users not to set prepaid gas too high, since we reimburse all unused gas. This means people can fill blocks with transactions that have 300 Tgas attached but burn only 5 Tgas, preventing everyone else from using the blockchain at a very low cost (0.015 NEAR per second, or 54 NEAR per hour). To make sure users do not overestimate the prepaid gas, we need to burn it.
Advantages:

  • Congestion problem is solved in its entirety;
  • We don't need different gas prices between the shards;
  • Without it, the prepaid gas is some magic number that users don't care about and set to the max, which is a weird concept on its own;
  • There are no refunds, which means there are 30% fewer receipts for function call transactions:
    • As a result, contract call TPS is higher;
    • Contract call finality is 50% faster.
  • Receipts cannot be delayed for an infinite amount of time.

Disadvantages:

  • The DevX becomes a pain. You need to precisely estimate gas for cross-contract calls to avoid overcharging the user, but if you underestimate, the chain of calls might fail in an unexpected state.
  • It doesn't fully solve shard congestion, because multiple shards might route receipts to one shard and still create a delayed queue. But this delayed queue can only grow by N blocks' worth of receipts (where N is the number of shards) per block.
    • Counter-argument: it fully solves shard congestion, because the total source capacity of the blockchain is equal to its total sink. All shard congestions are temporary and resolvable through resharding.

I suggest we go with the full-burn solution. We know it is bullet-proof; if we later come up with a scheme that allows refunds, we can implement it through upgrades. Doing it the other way around -- turning off refunds post Phase 1 launch -- is going to be significantly more painful for our users.

Add actual values to GenesisConfig

Right now the genesis config folder contains all the parameters we have in genesis, but doesn't provide the values. As part of the effort to establish regular releases, I suggest that we fill in the values from the current testnet genesis and iterate on those values as we make changes to the protocol.

[Proposal] alt_bn128 curve math

Proposal

To implement a set of zkSNARK verifiers (like Groth16 or PLONK), I suggest adding alt_bn128 math functions to the VM.
To learn more about alt_bn128 and the subgroups G1 and G2, see EIP-196 and EIP-197.

Functions

All formulas below are defined in additive notation. If data is incorrectly serialized, the function returns an Error.

  • alt_bn128_g1_multiexp(items:&[G1, Fr]) -> Result<G1>

Compute \sum s_i g_i with Pippenger's algorithm, where s_i are Fr scalars and g_i are G1 group elements.

Bad data: if any s_i is greater than or equal to the Fr order, or any g_i is not in the G1 group, the function returns an Error.

Complexity: O(\frac{n}{\log(n)}). I propose to use the regularization \frac{n}{\max(\log_2(n), 1)}, which avoids division by zero for n \le 1.

Gas formula:

// Floor of log2(x), computed via successive binary shifts (returns 0 for x <= 1).
fn log2floor(x: u64) -> u64 {
    let mut t = x;
    let mut r = 0;
    for offset in [32, 16, 8, 4, 2, 1].iter() {
        if t >> offset != 0 {
            t >>= offset;
            r += offset;
        }
    }
    r
}

let n = (n_bytes+item_size-1)/item_size;
let gas_consumed = A+B*n+C * if n > 1 {n / log2floor(n)} else {n};

B is the linear component, corresponding to deserialization complexity.
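A worked example of the formula (a sketch with placeholder cost constants; A, B, and C would be set by benchmarking; one (G1, Fr) item is 64 + 32 = 96 bytes under the encoding below):

fn main() {
    let (a, b, c) = (1_000u64, 500u64, 2_000u64); // placeholder cost constants
    let item_size = 96u64;
    let n_bytes = 100 * item_size;
    // Equivalent to the log2floor above for x >= 1.
    let log2floor = |x: u64| 63 - x.leading_zeros() as u64;
    let n = (n_bytes + item_size - 1) / item_size; // = 100
    let gas = a + b * n + c * if n > 1 { n / log2floor(n) } else { n };
    println!("n = {n}, gas = {gas}"); // n = 100, gas = 83000
}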

  • alt_bn128_g1_sum(items:&[G1, bool]) -> Result<G1>

Compute \sum (-1)^{s_i} g_i.

Bad data: if any s_i is not one or zero, or any g_i is not in the G1 group, the function returns an Error.

Complexity: linear

Gas formula:

let n = (n_bytes+item_size-1)/item_size;
let gas_consumed = A+B*n;

  • alt_bn128_pairing_check(items:&[G1,G2]) -> Result<bool>

Compute \sum e(g_{1,i}, g_{2,i}) \stackrel{?}{=} 1

Bad data: if any g_{1,i} is not in the G1 group or any g_{2,i} is not in the G2 subgroup, the function returns an Error.

Complexity: linear

Gas formula:

let n = (n_bytes+item_size-1)/item_size;
let gas_consumed = A+B*n;

Data encoding

G1 is serialized as two U256 (x and y) in LE.
G2 is serialized as four U256 (re(x), im(x), re(y), im(y)) in LE.
bool is serialized as one byte.
Tuple is serialized as concatenated chunks of serialized elements.
Slice is serialized as concatenated chunks of serialized elements.
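A sketch of walking this encoding for the (G1, bool) items of alt_bn128_g1_sum, under my reading that items are concatenated with no padding (stand-in types, not the actual parity-bn API):

// Each item: 64 bytes of G1 (x, y as LE U256) followed by 1 byte of bool.
const G1_LEN: usize = 64;
const ITEM_LEN: usize = G1_LEN + 1;

fn parse_items(data: &[u8]) -> Result<Vec<(&[u8], bool)>, &'static str> {
    if data.len() % ITEM_LEN != 0 {
        return Err("input is not a whole number of (G1, bool) items");
    }
    data.chunks(ITEM_LEN)
        .map(|chunk| {
            let sign = match chunk[G1_LEN] {
                0 => false,
                1 => true,
                _ => return Err("bool byte must be 0 or 1"),
            };
            // Curve membership checks would happen during deserialization proper.
            Ok((&chunk[..G1_LEN], sign))
        })
        .collect()
}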

Implementation

The alt_bn128 functions are implemented as a fork of parity-bn with minor updates:

  • serialization, deserialization
  • pippenger multiexp
  • API, described above

The crate should be published and moved to nearprotocol.

Spec release process

Original suggestion:

  • Spec release every month, version X
  • Clients release every month with a 1-month offset from the spec. E.g. 1 month after spec release X, clients vX are released.

This requires:

  • The spec has a list of clients that are "supported", i.e. whose release schedule is synchronized. If some client team is not on schedule, has issues, etc., they can be removed from the "supported" list. A client can also apply via PR to the supported list if it already follows the requirements.
  • Spec changes already have pull requests to clients that are at least enough to understand the scope of work. E.g. if it's something trivial, then a PR to only one client is required; if it's something substantial, then PRs to all supported clients.

Questions from @bowenwang1996

Refined suggestion:

  • Spec “stable” release every month, vX
  • Clients release at the same time with vX spec.

Still same idea with “supported” clients.

You submit a PR to the spec repo; it gets reviewed, debated, and marked as "implementing" after acceptance.

This means all clients start developing it. After the changes are merged into "master" in all supported clients, the spec PR is merged into "master" (e.g. there is a checklist in the PR requiring all supported clients' PRs to be linked and merged).
Note that while being implemented, the spec might change due to discovered implementation details or due to testing / benchmarking.

After that, there is the usual stabilization month on the client side, and then the spec and clients are released together.

If some client implementation is consistently lagging, it will be cut from the "supported" list.

Add deployment id

As suggested by @k06a, it would be convenient to have a deployment id stored with each account. Specifically, every time we deploy a new contract, we should update the deployment_id stored in the contract.

Implementations to consider:

  • Storing the deployment id as a nonce that increases every time the contract is deployed.
    • Pros: easy versioning -- it is easy to have code that only works with the contract if its version is between nonces X and Y;
    • Cons: we cannot prove that the deployment id was generated at a certain block index/hash, unless we separately store a touched_at_block field for the account;
  • Storing the deployment id as a hash of the block id and the contract blob.

[Proposal] Decrease storage price

Current storage price: 1 N for 10 kb. An average contract of 300 kb requires 30 N.

The suggestion is to reduce the storage fee by 10x: 100 kb for 1 N, making an average contract 3 N.

Reasoning: storage right now ends up being very expensive for contracts like multisig, and we want to reduce it to a manageable level. In Ethereum, contract byte code is usually 10x smaller.

For comparison with Ethereum:

  • Current prices are 40-70 GWei
  • Current ETH price ~$240
  • Storage price (spent): 6,400,000 gas per 10 kb. At current prices: 0.256 - 0.448 ETH, i.e. $61.44 - $107.52
  • Storage price (spent) for contract byte code: 200 gas per byte, 2,048,000 gas per 10 kb. At current prices: 0.08192 - 0.14336 ETH, i.e. $19.66 - $34.40
  • Currently our storage locks 1 N at $0.4; if the NEAR price appreciates 100x, it will be $40
  • The proposed cost will be 0.1 N at $0.04; if it appreciates 100x, it will be $4
  • Given our contracts are 10x larger in bytecode, we are currently ~50x cheaper, and will be roughly the same price if the price grows (see the check below)
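Re-deriving the Ethereum comparison numbers above (a sketch; prices as stated in this issue):

fn main() {
    let gwei = 1e-9; // ETH per GWei
    let eth_usd = 240.0;
    for gas_price in [40.0, 70.0] {
        let sstore = 6_400_000.0 * gas_price * gwei;         // 10 kb via storage
        let code = 200.0 * 10.0 * 1024.0 * gas_price * gwei; // 10 kb of bytecode
        println!(
            "at {gas_price} GWei: storage {:.3} ETH (${:.2}), code {:.5} ETH (${:.2})",
            sstore, sstore * eth_usd, code, code * eth_usd
        );
    }
}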

Note that even though this locks NEAR instead of burning it when storing, in many cases the NEAR won't ever be returned to the payer, because it will be securing data going forward.

Ref #92 for transaction pricing changes and original discussion.

[Discussion] Native fungible token

We are considering adding native fungible token support at the runtime level. At the low level, each account record will have a map from token id to balance, empty by default, and the runtime will operate on it exactly the same way it operates with the NEAR token: it will perform balance checks and use the same transaction actions. The benefit is that we can have significantly faster and cheaper fungible token operations, and very strong safety guarantees (as opposed to people relying on the safety of various fungible token implementations).

As a bonus feature, we might be able to re-express state staking and validator staking as transfer operations in terms of these account records, as three different tokens that are tied 1:1 with each other.

This issue does not provide a thorough description of the proposed design, since the purpose is to open a discussion of the potential design, pros/cons, and the timeline.

Please leave comments.

Introduce deployment_id for smart contracts

Having deployment_id depend on the deployer's deployment_id and some salt would allow non-interactive authorization checks within a family of smart contracts belonging to the same service.

Deploying smart contract from the wallet:

deployment_id = hash(concat(hash(deployer_public_key), hash(salt)))

Deploying smart contract from the smart contract:

deployment_id = hash(concat(hash(deployer.deployment_id), hash(salt)))

I propose to discuss the exact functions to be used.
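A concrete sketch of the two derivations, assuming SHA-256 purely for illustration (the exact hash is precisely what this issue proposes to discuss; uses the sha2 crate):

use sha2::{Digest, Sha256}; // sha2 = "0.10", assumed for this sketch

fn h(data: &[u8]) -> [u8; 32] {
    Sha256::digest(data).into()
}

/// Deploying from a wallet: bind the id to the deployer's public key.
fn deployment_id_from_wallet(deployer_public_key: &[u8], salt: &[u8]) -> [u8; 32] {
    let mut buf = Vec::new();
    buf.extend_from_slice(&h(deployer_public_key));
    buf.extend_from_slice(&h(salt));
    h(&buf)
}

/// Deploying from a contract: chain off the parent's deployment_id.
fn deployment_id_from_contract(parent_deployment_id: &[u8; 32], salt: &[u8]) -> [u8; 32] {
    let mut buf = Vec::new();
    buf.extend_from_slice(&h(parent_deployment_id));
    buf.extend_from_slice(&h(salt));
    h(&buf)
}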

Updatable epoch length / seat parameters

Currently, epoch length and seat parameters (number of block validators, fishermen in shards) are constants in genesis.
Generally, validator_manager supports changing them, but the change must be recorded somewhere in the headers and applied as we catch up with the network.

[Discussion] Shard congestion issue

The concern is to have a shard congested with delayed receipts while the gas price doesn't react fast enough to slow down the congestion. It's possible to spam the block with 500 TXs which all target one expensive contract and slow down this shard for 100 blocks. The cost of the attack will increase over time because the price of gas will be adjusted, but after X blocks the prepaid gas cost will no longer be higher than the actual current price, which creates a deficit. This makes the attack less expensive than it should be.

This was partially addressed by changing the way we calculate the gas price at which the receipt is created. The gas price difference is refunded back once the receipt is executed. Currently, we use pessimistic inflation of 1.03 per block, for up to 64 blocks, with an estimate of at most 5 Tgas per block. But if there are delayed receipts in the shard, then the entire execution might be delayed for an undefined number of blocks, so the gas purchased at the price used during the transaction-to-receipt conversion will not be enough.
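For scale, the worst-case multiplier the pessimistic pricing implies (a sketch):

fn main() {
    // 3% inflation per block, compounded for up to 64 blocks.
    let multiplier = 1.03_f64.powi(64);
    println!("1.03^64 = {multiplier:.2}"); // ~6.63
}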

There are some ideas on how to address this issue:

Burn all attached gas.

This way we can fill the blocks properly to never exceed the overall capacity of the processing power. This solution has a few issues:

  • The DevX becomes a pain. You need to precisely estimate gas for cross-contract calls to avoid overcharging the user, but if you underestimate, the chain of calls might fail in an unexpected state.
  • It doesn't fully solve shard congestion, because multiple shards might route receipts to one shard and still create a delayed queue. But this delayed queue can only grow by N blocks' worth of receipts (where N is the number of shards) per block.

TX/Receipt Priority based on gas price

We can introduce receipt and transaction ordering based on the gas price, so a receipt with higher gas price will be processed first. It also has a few issues:

  • We'll introduce an explicit gas price in a transaction, so people will need to think about which gas price to set.
  • Every receipt will also be ordered based on the gas price, so some receipts can be delayed for an infinite amount of time. This can be addressed through price boosting, where one receipt boosts the gas price of another receipt.
  • Receipts will not come in order. This is already somewhat an issue with the delayed queue, which messes up the order of local receipts.

TX can define max_gas_price, and validators can decide whether to include such a transaction in a chunk

(Not a solid idea.)
It's similar to the current approach, except that instead of 1.03 inflation you define the max price at which the TX is ready to be included, and gas is then bought at that price. Validators decide whether to include this transaction in a chunk, because it can cause a deficit later on if max_gas_price is too low and there is a delayed shard.

Rewards mismatch between spec and code

`epochFee[t]` | `sum([(1 - DEVELOPER_PCT_PER_YEAR) * txFee[i]])`, where [i] represents any considered block within the epoch[t]

Leftovers from the old spec?

Total reward every epoch t is equal to:

reward[t] = totalSupply * ((1 + REWARD_PCT_PER_YEAR) ** (1 / EPOCHS_A_YEAR) - 1)

vs

        let epoch_total_reward = (U256::from(*self.max_inflation_rate.numer() as u64)
            * U256::from(total_supply)
            * U256::from(self.epoch_length)
            / (U256::from(self.num_blocks_per_year)
                * U256::from(*self.max_inflation_rate.denom() as u64)))
        .as_u128();
  • REWARD_PCT_PER_YEAR is replaced with the max_inflation_rate fraction, it seems (see the note below)
  • the 1 / EPOCHS_A_YEAR factor -> epoch_length / num_blocks_per_year
  • this is an intermediate value in the computation, but it seems like it's supposed to round down? Should be floor(...)
  • total_supply - where does this come from? Seems like it's the total_supply from the header of the last block, however that's computed.
  • What's happening with the treasury cut of the reward?
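For what it's worth, the spec and the code agree to first order (my reading; whether the linearization is intentional is part of the question). Writing r = max_inflation_rate and E = num_blocks_per_year / epoch_length (epochs per year):

reward[t] = totalSupply \cdot ((1 + r)^{1/E} - 1) \approx totalSupply \cdot \frac{r}{E} = totalSupply \cdot r \cdot \frac{epoch\_length}{num\_blocks\_per\_year}

which is exactly what epoch_total_reward computes, up to integer rounding.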
