
mandel's Introduction

Mandel

⚠️ This repository contains an archive of the C++ implementation (code named "Mandel") for the blockchain node software of the EOSIO protocol. The repository is no longer maintained and Mandel is obsolete. It is replaced by Leap which implements the Antelope protocol, an evolution of the EOSIO protocol.

Software Installation

Visit the release page for Ubuntu binaries. This is the fastest way to get started with the software.

Building From Source

Recent Ubuntu LTS releases are the only Linux distributions that we fully support. Other Linux distros and other POSIX operating systems (such as macOS) are tended to on a best-effort basis and may not be full featured. Notable requirements to build are:

  • C++17 compiler and standard library
  • boost 1.67+
  • CMake 3.8+
  • (for Linux only) LLVM 7 - 11 (newer versions do not work)

A few other common libraries and tools are also required, such as openssl 1.1+, libcurl, curl, libusb, GMP, Python 3, and zlib.

A Warning On Parallel Compilation Jobs (-j flag): When building C/C++ software, the build is often performed in parallel via a command such as make -j $(nproc), which uses the number of CPU cores as the number of compilation jobs to run simultaneously. However, be aware that some compilation units (.cpp files) in mandel are extremely complex and will consume nearly 4GB of memory to compile. You may need to reduce the level of parallelization depending on the amount of memory on your build host, e.g. instead of make -j $(nproc) run make -j2. Failures due to memory exhaustion will typically, but not always, manifest as compiler crashes.
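If you want to pick a job count automatically, one minimal sketch (not something the build system provides) is to derive it from available memory, assuming roughly 4GB per job as described above:

# rough heuristic: one job per ~4GB of available RAM, but at least one job
jobs=$(( $(free -g | awk '/^Mem:/ {print $7}') / 4 ))   # field 7 of 'free -g' is available memory in GiB
[ "$jobs" -lt 1 ] && jobs=1
make -j "$jobs"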

Generally we recommend performing what we refer to as a "pinned build", which ensures the compiler and boost versions remain the same between builds of different Mandel versions (Mandel requires these versions to remain the same; otherwise its state needs to be repopulated from a portable snapshot).

Building Pinned Build Binary Packages

In the directory <mandel src>/scripts you will find the two scripts install_deps.sh and pinned_build.sh. If you haven't installed build dependencies then run install_deps.sh. Then run pinned_build.sh <dependencies directory> <mandel build directory> <number of jobs>.

The dependencies directory is where the script downloads and builds the C++ dependencies that must be compiled with the pinned compiler in order to produce the pinned binaries for binary packaging.

The binary package will be produced in the mandel build directory that was supplied.
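For example, a typical end-to-end invocation might look like this (the dependency directory, build directory, and job count below are illustrative, not required values):

cd <mandel src>/scripts
./install_deps.sh
./pinned_build.sh ~/mandel-deps ~/mandel-build 4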

Manual (non "pinned") Build Instructions

Ubuntu 20.04 & 22.04 Build Instructions

Install required dependencies:

apt-get update && apt-get install   \
        build-essential             \
        cmake                       \
        curl                        \
        git                         \
        libboost-all-dev            \
        libcurl4-openssl-dev        \
        libgmp-dev                  \
        libssl-dev                  \
        libusb-1.0-0-dev            \
        llvm-11-dev                 \
        pkg-config

and perform the build:

git submodule update --init --recursive
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j $(nproc) package

Ubuntu 18.04 Build Instructions

Install required dependencies. You will need to build Boost from source on this distribution.

apt-get update && apt-get install   \
        build-essential             \
        cmake                       \
        curl                        \
        g++-8                       \
        git                         \
        libcurl4-openssl-dev        \
        libgmp-dev                  \
        libssl-dev                  \
        libusb-1.0-0-dev            \
        llvm-7-dev                  \
        pkg-config                  \
        python3                     \
        zlib1g-dev
        
curl -L https://boostorg.jfrog.io/artifactory/main/release/1.79.0/source/boost_1_79_0.tar.bz2 | tar jx && \
   cd boost_1_79_0 &&                                                                                     \
   ./bootstrap.sh --prefix=$HOME/boost1.79 &&                                                             \
   ./b2 --with-iostreams --with-date_time --with-filesystem --with-system                                 \
        --with-program_options --with-chrono --with-test -j$(nproc) install &&                            \
   cd ..

and perform the build:

git submodule update --init --recursive
mkdir build
cd build
cmake -DCMAKE_C_COMPILER=gcc-8 -DCMAKE_CXX_COMPILER=g++-8 \
      -DCMAKE_PREFIX_PATH="$HOME/boost1.79;/usr/lib/llvm-7/" -DCMAKE_BUILD_TYPE=Release ..
make -j $(nproc) package

After building you may remove the $HOME/boost1.79 directory, or you may keep it around until the next time you build the software.

Running Tests

When building from source it's recommended to run at least what we refer to as the "parallelizable tests". The WASM spec tests, which add additional coverage and can also be run in parallel, are not included in the "parallelizable tests" by default.

cd build

# "parallelizable tests": the minimum test set that should be run
ctest -j $(nproc) -LE _tests

# Also consider running the WASM spec tests for more coverage
ctest -j $(nproc) -L wasm_spec_tests

Some other tests are available and recommended, but be aware that they can be sensitive to other software running on the same host and they may SIGKILL other nodeos instances running on the host.

cd build

# These tests can't run in parallel but are recommended.
ctest -L "nonparallelizable_tests"

# These tests can't run in parallel. They also take a long time to run.
ctest -L "long_running_tests"

mandel's People

Contributors

arhag, asiniscalchi, b1bart, brianjohnson5972, bytemaster, cj-oci, claytoncalabrese, dskvr, elmato, heifner, jgiszczak, kj4ezj, larryk85, linh2931, lparisc, moskvanaft, nathanielhourt, ndcgundlach, norsegaud, oschwaldp-oci, paulcalabrese, pmesnier, scottarnette, sergmetelin, spoonincode, taokayan, tbfleming, vladtr, wanderingbort, zorba80


mandel's Issues

Enhancement: blocks-behind parameter in SHIP request

Traffic-intensive networks, such as WAX, cause problems for state history processors because the processors are too slow at rolling back on forks.

A new parameter in the SHIP request, "blocks-behind", should tell the state history plugin to stream only blocks that are this many blocks behind the head. The state history processor will then receive data that is a few seconds late, but the number of forks it has to process will be dramatically reduced.

restore pinned builds and binary packages

#137 removes support for pinned builds and binary packages from Mandel. Well, as opined in that PR, those features have already been removed from Mandel by way of them being untested & unmaintained over the course of multiple Mandel releases (3.0 & 3.0.5). Truly, no one knows the working state of anything being removed in that PR.

Both pinned builds and binary packages are desirable features that clearly had a positive impact on usability when they were introduced in EOSIO. These features need to be reintroduced and fully supported.

Baseline criteria for completion of these features are, for all supported platforms, native packages (rpm, deb, etc.) built with a compiler & boost combo that we have high confidence will not change across all package builds of a minor release (i.e. all releases from x.y.0 to x.y.n). These packages must be built by CI regularly (one would expect the CI to use the pinned compiler for each PR, etc.). Users must also have the ability to independently build Mandel in the same configuration without it being too onerous. Taking it further and making these independent builds fully reproducible against the binary packages provided by the Mandel team is probably more of a requirement captured in #36 instead of this issue.

Determining the platforms that are supported is not the goal of this issue. It's possible a given release of Mandel may only support a single platform, e.g. Ubuntu 20.04. In such a scenario a single .deb file that installs cleanly solely on 20.04 fulfills the requirement.

Enhancement: stop nodeos from syncing blocks from the network while snapshot in progress

If you run nodeos with read-mode = irreversible, the snapshot process can start immediately when requested (when running with read-mode = head, it needs to wait for LIB first).

Once nodeos is at LIB it can start generating the snapshot. However, it keeps syncing from the network, and all this syncing uses up RAM. When snapshots take more than 20 minutes to generate (e.g. WAX), the amount of memory used can be significant. On servers with insufficient RAM this means swapping, which means the snapshot takes longer, which means more RAM, and so on.

This is very obvious if you start from one day behind live blocks: memory keeps piling up while the snapshot is being generated, since blocks can be synced quickly. See the example chart; every bump in the chart is a new snapshot being generated (every 3 hours of block time).


Further, this extra memory is never released.

controller::extract_chain_id_from_db() does not properly handle dirty db

Seems like we would want to allow any exception to propagate up. And only return an empty optional if db.revision() < 1.

Most importantly, this currently causes confusion if the database is dirty because of an unclean shutdown.

Apr 21 02:06:22 eos nodeos[17876]: warn  2022-04-21T02:06:22.015 nodeos    chain_plugin.cpp:1321         plugin_initialize    ] 3110006 plugin_config_exception: Incorrect plugin configuration
Apr 21 02:06:22 eos nodeos[17876]: Genesis state is necessary to initialize fresh blockchain state but genesis state could not be found in the blocks log. Please either load from snapshot or find a blocks log that starts from genesis.
Apr 21 02:06:22 eos nodeos[17876]:     {}
Apr 21 02:06:22 eos nodeos[17876]:     nodeos  chain_plugin.cpp:1154 plugin_initialize
Apr 21 02:06:22 eos nodeos[17876]: error 2022-04-21T02:06:22.015 nodeos    main.cpp:163                  main                 ] 3110006 plugin_config_exception: Incorrect plugin configuration
Apr 21 02:06:22 eos nodeos[17876]: Genesis state is necessary to initialize fresh blockchain state but genesis state could not be found in the blocks log. Please either load from snapshot or find a blocks log that starts from genesis.
Apr 21 02:06:22 eos nodeos[17876]:     {}
Apr 21 02:06:22 eos nodeos[17876]:     nodeos  chain_plugin.cpp:1154 plugin_initialize
Apr 21 02:06:22 eos nodeos[17876]: rethrow
Apr 21 02:06:22 eos nodeos[17876]:     {}
Apr 21 02:06:22 eos nodeos[17876]:     nodeos  chain_plugin.cpp:1321 plugin_initialize

It should use find instead of get and return an empty optional if not found.

ignore p2p messages of unknown types

In the current implementation, if a node receives a message of an unknown type, it throws an error and disconnects the socket:

fc::raw::unpack( ds, msg );

If the plugin were modified to ignore unknown message types, it would allow extending the protocol without having to upgrade the whole network and activate a feature. One example is the fast finality proposal, which would send block confirmations in a new type of message.

Enhancement: auto-compress snapshots and load compressed snapshots

Looking at WAX, for example: a snapshot with original_size = 21960.4 MiB compresses to 3954.6 MiB with zstd. It would be great if nodeos compressed the snapshot when writing it to disk to save extra I/O.

Various people have chosen different compression standards for snapshots, which makes it hard to build interoperable tools. Having nodeos support a compression format directly would enforce consistency by default. I like zstd; on the other hand, zlib is already included in nodeos.
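Until nodeos supports this natively, the manual workflow looks roughly like the following (the snapshot filename is just an example):

zstd -19 snapshot-2022-04-21.bin        # writes snapshot-2022-04-21.bin.zst and keeps the original
zstd -d snapshot-2022-04-21.bin.zst     # restore the uncompressed snapshot before loading it into nodeos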

Enhancement: change default disable-api-persisted-trx to true (currently false)

When transactions arrive at an API node, the current default is to update the state so that any subsequent query to the API sees the new state. In a private node this might be appropriate, but in most cases you don't want the state updated until the transaction makes it into a block.

Therefore: disable-api-persisted-trx should be changed to true by default.
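Until the default changes, operators can opt in explicitly; a minimal config.ini fragment (assuming the option keeps its current name) would be:

# config.ini
disable-api-persisted-trx = true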

Enhancement: ship data consistency proofs; block query API

Currently there is no way for a state history reader to verify that the data is consistent and not corrupted. The protocol needs to be enhanced to provide that, as many applications rely on the data.

Proposal:

In each data frame of trace and state history that corresponds to one blockchain block, add two hashes:

  1. sha256 checksum of the frame content
  2. sha256 checksum of the previous block's and current block's checksums, concatenated.

The portable snapshot format would have to be different for ship and non-ship nodes: the one for ship nodes would have to include the checksums of the latest ship frames.

Also, these checksums will need to be compared across nodes, so a new HTTP API is needed that delivers the state history data (compressed) in raw form for a specific block. Such an API would also be useful for history solutions that need to decode a specific random block from the past.

Reduce the difficulty for new users caused by the resource model

The current resource model of EOS is still too complicated compared to Ethereum's transaction fees. Could you consider a mode of temporary consumption fees when the user has no resources? It is especially difficult for new users entering for the first time.

This would be equivalent to system-level DeFi: other users could invest idle EOS, and a certain proportion of the transaction fees consumed by others when transacting would be distributed to these investors, equivalent to a return on the investment.

remove direct YubiHSM support; replace with generic HSM support

Direct YubiHSM support was integrated into keosd by way of integrating libyubihsm in EOSIO 1.7. While this generally works well enough, it is not without problems:

  • It brings in a number of dependencies (curl, libusb, pkg-config). Originally I shrugged these off as being super common and thus not onerous to depend on. But, we have indeed seen problems like some sort of curl+pkg-config issue in homebrew at one point. Also, libusb is LGPL which may introduce some problems in the future if we simplify our binary releases. Finally, these days, for the most part I’d like to see a reduction in dependencies; I don’t believe YubiHSM support is worth the baggage of more dependencies when there is a reasonable alternative.
    (I experimented with resolving much of the dependency issue in EOSIO/eos#9075, but, while clever, I consider it less desirable than my preference of wholesale removal)
  • libyubihsm’s cmake files don’t follow what I’d consider best practices; and if anything they aren’t consistent with the rest of EOSIO’s cmake environment. For example, it uses pkg-config to find OpenSSL instead of cmake’s find_package() as other components in EOSIO do. This can lead to some inconsistencies and makes it impossible to static link OpenSSL, for example. It also is the sole blocker for natively building EOSIO for ARM on macOS.
  • There are no unit tests for YubiHSM support. I haven’t personally tested it in over a year; maybe two.

I propose removing YubiHSM support and replacing it with PKCS#11 support. PKCS#11 support will allow using a wide range of HSMs with keosd. YubiHSMs can continue to work via this interface along with other HSMs like Yubikey 5, TPM, Amazon CloudHSM, Nitro Key, ultra low cost generic Javacards and many more (it’s a wide industry standard).

The one downside is that it will no longer be possible to create keys on the HSM via keosd. (PKCS#11 defines two users each with their own password: one to create keys, one to then use the keys; it’s not clear how to map this pattern to keosd) Most likely how this will end up working is that keosd will always log in as the user which can use keys, and a separate tool will ship with EOSIO, let’s call it eosio-p11tool, that can be used to create keys if seeing the PUB_R1_ format is important at creation time. (there would be no requirement to use eosio-p11tool to create keys, it’s just if creating keys with a generic PKCS#11 tool such as pkcs11-tool it wouldn’t be possible to see the PUB_R1_ format of the created key unless running something like eosio-p11tool --list later)

Enhancement: change default read-mode to head (currently speculative)

Most people don't want speculative execution, so it should not be the default.

read-mode = head means transactions are undone after each speculative execution of the transaction.
read-mode = speculative (the current default) undoes transactions at the end of the block (when a new block is received).

Some people have said that the speculative mode should be removed entirely, however I do know there are some people using it, so I wouldn't go that far.
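Operators who want the proposed behavior today can set it explicitly; a minimal config.ini fragment would be:

# config.ini
read-mode = head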

Prevent APIs from being accessed when syncing/catching up

I am not positive on the best solution for this, but I'd like to see some sort of option that can be enabled to block API access to a node if the head is too far behind the current head/time.

This could be either:

  1. The API plugins polling the current state of the chain, ensuring the head block is within X seconds/blocks of what it should be, and blocking API access when appropriate.
  2. The chain controller (or something similar) detecting that a block hasn't been received within X seconds/blocks and then blocking access to the APIs.

When APIs are "blocked" in this fashion, a 503 response should be returned to indicate that the server is not yet ready to service requests.

The problem this seeks to solve is that most generic load balancing solutions will deem an upstream "valid" if it returns an HTTP status of 200. While nodeos is syncing, its API will still return HTTP 200 responses. Any request served by such an API will be outdated, and any TAPOS values created from this server will be considered invalid (expired) if someone tries to submit a transaction using them.
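As a stopgap, an external health check can approximate this today. A minimal sketch (requires jq and GNU date; the endpoint and the 30-second threshold are illustrative):

head_time=$(curl -s http://127.0.0.1:8888/v1/chain/get_info | jq -r .head_block_time)
age=$(( $(date -u +%s) - $(date -u -d "$head_time" +%s) ))   # head_block_time is reported in UTC
[ "$age" -gt 30 ] && { echo "node is behind by ${age}s"; exit 1; }
echo "node is in sync"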

snapshot json support

EOSIO/eos#11058 added an option to convert snapshots to JSON.
The output format should be changed slightly from what is in that PR so that the resulting JSON can be read more easily.
Along with 11058, we should add the ability to read the JSON back in. We already have a RapidJSON dependency, so adding JSON reading is rather straightforward.

Enhancement: new API call: is_valid_account

This is a copy of EOSIO/eos#9897.

The get_account API call gathers a lot of information, but in many cases the requester just needs to check whether an account exists on the blockchain.

A new API call is needed that simply returns true or false for a given account name. This would save a lot of CPU time on API servers and clients.

The feature is non-intrusive, and can be added to the current 2.0 software.

An even better option would be to take an array of accounts and return an array of the valid ones, but then it would need to take care of response time limits and error handling.
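For comparison, the current workaround is a full get_account call and a check of the HTTP status code (typically 200 when the account exists and 500 otherwise):

curl -s -o /dev/null -w '%{http_code}\n' \
     -X POST http://127.0.0.1:8888/v1/chain/get_account \
     -d '{"account_name": "eosio"}'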

deb packages depend on local build paths

nodeos binary from the 2.0.13 Debian package from the B1 repository:

 02576530:  94 22 c6 fe 2f 65 6f 73  2f 6c 69 62 72 61 72 69  ."../eos/librari                                                                                                         
 02576540:  65 73 2f 66 63 2f 73 72  63 2f 6c 6f 67 2f 6c 6f  es/fc/src/log/lo                                                                                                         
 02576550:  67 5f 6d 65 73 73 61 67  65 2e 63 70 70 00 46 61  g_message.cpp.Fa                                                                                                         
 02576560:  69 6c 65 64 20 74 6f 20  63 61 73 74 20 66 72 6f  iled to cast fro                                                                                                         
 02576570:  6d 20 56 61 72 69 61 6e  74 20 74 6f 20 6c 6f 67  m Variant to log                                                                                                         
 02576580:  5f 6c 65 76 65 6c 00 24  7b 77 68 61 74 7d 3a 20  _level.${what}:                                                                                                          

The paths from the machine where the binary was compiled appear throughout the binary. This makes it difficult to certify the packages, as the build environment would need to be exactly the same as on the original machine.

Short-term goal: describe the exact and reproducible deb building procedure, so that third parties can verify the package checksum.

Long-term goal: remove all local path dependencies from the binaries.

Deprecation: remove read-mode = read-only option

The database read-mode option of read-only is deprecated. The same functionality is now provided by the combination of options: read-mode = head, p2p-accept-transactions = false, and api-accept-transactions = false. The new options provide better control and more clearly define usage.

EOSIO/eos#7597

backport: p2p timeout from 2.1

EOSUSA Michael, [14.03.22 10:48]
i think there were 2 parts... one is the p2p issue where it would get all peers unlinkable.... did the fixes also include anything related to the original issue where the p2p sessions hang open for 15min (which is what triggers all those unlinkable blocks in the first place)

Kevin Heifner, [14.03.22 10:49]
The timeout feature is in 2.1, it will need to be back-ported to Mandel

Where can I see the development plan?

Where can I see the development plans? For example, plans for EVM and RPC compatibility, or for reducing the irreversibility time?
What I hope for most is IBC interoperability with the Ethereum data layer, which could directly bring over existing Ethereum users.

Enhancement: add prometheus exporter plugin that exposes key nodeos metrics

Copy from: EOSIO/eos#9902

For monitoring, nodeos should have a Prometheus exporter plugin (running on a separate port) which exposes key metrics that would be useful for monitoring.

Metrics such as:

  • what is returned from /v1/chain/get_info (head block number, LIB)
  • unapplied transaction queue sizes
  • blacklisted transactions size
  • subjective billing sizes
  • scheduled transaction size
  • number of forks by producer
  • number of unapplied blocks by producer
  • number of dropped blocks by producer
  • number of missed blocks (missed in a round) by producer
  • number of missing producers (missed 12 blocks in a round) by producer
  • number of double productions (more than 12 blocks in a round) by producer
  • average (and last) block arrival time by producer
  • number of transactions per block by producer
  • amount of blockchain CPU used per block by producer
  • number of bytes per block by producer
  • number of bad actors which have exceeded their transaction limit (as per the changes introduced in nodeos 2.0.9)
  • number of clients connected (inbound, outbound, failed)
  • number of API (failed) requests/sec (by request type)
  • uptime
  • CPU usage by thread
  • disk space used, by volume (blocks, ship, state, trace)
  • disk space available
  • RocksDB related
  • blockvault related (produced block vs. did not produce because another nodeos "won")
  • replay status (when starting from a snapshot and replaying blocks)
  • etc.

Where possible, attribute actions to a specific producer (eg when counting dropped blocks, specify which producer missed those blocks)

Same code for writing blocks and ship logs

At the moment the blocks log and the state history logs are written by separate pieces of code, although their internal structures are similar.

If these were handled by one code component, it would reduce code duplication and let log rotation be implemented in one place.

Enhancement: API request for LIB timestamp

This is a copy of EOSIO/eos#10128.

This document explains the importance of LIB timestamp for applications:
https://github.com/cc32d9/cc32d9_ideas_for_EOSIO/blob/master/Temporary_history_in_contract_tables.md

Currently, retrieving the LIB timestamp requires two API calls, get_info and get_block, and the get_block response is sometimes quite bulky.

The enhancement would provide a cheaper way to retrieve the LIB timestamp: either as a new field in the get_info response, or a new API call, like /v1/chain/get_lib, which would return the LIB block number, hash, and timestamp.
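For reference, the two calls currently required look roughly like this (endpoint illustrative; requires jq):

lib_num=$(curl -s http://127.0.0.1:8888/v1/chain/get_info | jq -r .last_irreversible_block_num)
curl -s -X POST http://127.0.0.1:8888/v1/chain/get_block \
     -d "{\"block_num_or_id\": $lib_num}" | jq -r .timestamp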

Enhancement: Add API call to get earliest_block_available

originally: EOSIO/eos#10008

/v1/chain/get_info tells you what the last available block is, but not the first. Since nodeos can start from a snapshot and, with 2.1, auto-remove old blocks, it would be nice to have a way to know which blocks are actually available.

Information could be added to the existing get_info API call.

Use case 1: I was trying to sync blocks from a node on which the blocks had expired. I would like to create some sort of internal dashboard so I know how many blocks are available on which nodes. This requirement could also be fulfilled in other ways.

Use case 2: From a public API endpoint point of view (e.g. validate.eosnation.io) it would be nice to know which is the first block available. Currently one needs to run multiple get_block queries to figure out the first available block.

Uniform key format in cleos

Currently cleos create key --to-console prints the keys in the legacy format, while cleos create key --to-console --r1 prints the keys in the new format.

It should always print the keys in the new format, and print the legacy form only if an option is specified (e.g. --legacy).

Also, "cleos get account" should return the keys in the new format, and use the legacy format only if asked explicitly with an option.

lower the level of push_transaction error to info

On a public node, the log is full of the following errors:

error 2022-02-11T16:23:16.704 nodeos    http_plugin.cpp:964           handle_exception     ] FC Exception encountered while processing chain.push_transaction

It is normal for clients to push transactions that fail. Please lower the message priority so that it doesn't clog the error and warning logs.

split cleos into library and CLI utility

Currently cleos is a monolithic command-line utility, and there's no C++ library for EOSIO clients.

It makes sense to place all protocol-related functions in a standalone library, and leave only CLI input and output in the utility.

Enhancement: add cpu execution time to transaction trace debug message

Different block producers use different servers to do block signing. In order to find outliers, it is useful to compare the actual execution time by each block producer to a "reference" server. If the cpu execution time was available on a reference server, then that could be compared to what is the cpu execution time in a block.

Background: actual Greymass Fuel data shows that some block producers have transaction execution times that exceed the expected value computed by Greymass Fuel. While this is good information, it would be better to have something that finds this difference more quickly by reviewing all successful transactions.

This could be done with additional logging. When transaction_success_tracing logging is set to debug level, nodeos generates messages like:

[TRX_TRACE] Speculative execution is ACCEPTING tx: %s, auth: %s

If the cpu execution time is included in this message, then the comparison can be computed. Ideally cpu_usage_us would be available as that is immediately comparable to the on-chain value.

CPU values available:

  • trace.receipt.cpu_usage_us - billed CPU time. This is the consensus value: you take whatever the BP says and report the same value. It does not include time for loading the WASM.
  • trace.elapsed - wall clock time when the transaction is executed. It is available anywhere a transaction is executed, including speculative execution on a node and execution of the transaction when applying a block.

specify the path for profiling traces

#16

The profiling feature saves profiling information in the current directory of the process, which can be anywhere and may not even be writable.

The feature needs an option that specifies the path where the traces are saved, and the default value should be a subdirectory of data-dir.

ABI format extension: scope type

Currently the ABI does not tell the reader how to interpret the scope value: it can be a name, an integer, or a symbol.

A new field in the ABI table specification would help the reader decode and display the scope value correctly.

Enhancement: make CPU/NET replenishing windows configurable

This is a copy of EOSIO/eos#9376

libraries/chain/include/eosio/chain/config.hpp:
static const uint32_t account_net_usage_average_window_ms  = 24*60*60*1000l;
static const uint32_t account_cpu_usage_average_window_ms  = 24*60*60*1000l;

These parameters need to be configurable through the system contract table, so that network operators can adjust them according to network load and demand.

If this window were 5x longer, intensive users would have to stake 5x more, but rare users would get the same service they have now.

One potential use case is setting the replenishing window to infinity, which would lead to a usage-billed blockchain.

ship: add wasm_config

It's missing from 2.1.

This approach will work:

  • Create a new variant
  • Add it as a binary extension to global_property_v1
