
Zero Bin

Archive note

All this workspace has been moved into https://github.com/0xPolygonZero/zk_evm.

Please refer to this new location for inquiries or contributions.


A composition of paladin and plonky-block-proof-gen. Given proof generation input, it generates a proof. The project is instrumented with paladin, and can therefore distribute proof generation across multiple worker machines.

Project layout

ops
├── Cargo.toml
└── src
   └── lib.rs
worker
├── Cargo.toml
└── src
   └── main.rs
leader
├── Cargo.toml
└── src
   └── main.rs
rpc
├── Cargo.toml
└── src
   └── main.rs
verifier
├── Cargo.toml
└── src
   └── main.rs

Ops

Defines the proof operations that can be distributed to workers.

Worker

The worker process. Receives proof operations from the leader, and returns the result.

Leader

The leader process. Receives proof generation requests, and distributes them to workers.

RPC

A binary to generate the block trace format expected by the leader.

Verifier

A binary to verify the correctness of the generated proof.

Leader Usage

The leader has various subcommands for different I/O modes. The leader binary arguments are as follows:

cargo r --release --bin leader -- --help

Usage: leader [OPTIONS] <COMMAND>

Commands:
  stdio    Reads input from stdin and writes output to stdout
  jerigon  Reads input from a Jerigon node and writes output to stdout
  native   Reads input from a native node and writes output to stdout 
  http     Reads input from HTTP and writes output to a directory
  help     Print this message or the help of the given subcommand(s)

Options:
  -h, --help
          Print help (see a summary with '-h')

Paladin options:
  -t, --task-bus-routing-key <TASK_BUS_ROUTING_KEY>
          Specifies the routing key for publishing task messages. In most cases, the default value should suffice

          [default: task]

  -s, --serializer <SERIALIZER>
          Determines the serialization format to be used

          [default: postcard]
          [possible values: postcard, cbor]

  -r, --runtime <RUNTIME>
          Specifies the runtime environment to use

          [default: amqp]
          [possible values: amqp, in-memory]

  -n, --num-workers <NUM_WORKERS>
          Specifies the number of worker threads to spawn (in memory runtime only)

      --amqp-uri <AMQP_URI>
          Provides the URI for the AMQP broker, if the AMQP runtime is selected

          [env: AMQP_URI=amqp://localhost:5672]

Table circuit sizes:
      --persistence <PERSISTENCE>
          [default: disk]

          Possible values:
          - none: Do not persist the processed circuits
          - disk: Persist the processed circuits to disk

      --arithmetic <CIRCUIT_BIT_RANGE>
          The min/max size for the arithmetic table circuit.

          [env: ARITHMETIC_CIRCUIT_SIZE=16..22]

      --byte-packing <CIRCUIT_BIT_RANGE>
          The min/max size for the byte packing table circuit.

          [env: BYTE_PACKING_CIRCUIT_SIZE=10..22]

      --cpu <CIRCUIT_BIT_RANGE>
          The min/max size for the cpu table circuit.

          [env: CPU_CIRCUIT_SIZE=15..22]

      --keccak <CIRCUIT_BIT_RANGE>
          The min/max size for the keccak table circuit.

          [env: KECCAK_CIRCUIT_SIZE=14..22]

      --keccak-sponge <CIRCUIT_BIT_RANGE>
          The min/max size for the keccak sponge table circuit.

          [env: KECCAK_SPONGE_CIRCUIT_SIZE=9..22]

      --logic <CIRCUIT_BIT_RANGE>
          The min/max size for the logic table circuit.

          [env: LOGIC_CIRCUIT_SIZE=12..22]

      --memory <CIRCUIT_BIT_RANGE>
          The min/max size for the memory table circuit.

          [env: MEMORY_CIRCUIT_SIZE=18..22]

Note that both the paladin runtime options and the plonky2 table circuit sizes are configurable via command line arguments and environment variables. Command line arguments take precedence over environment variables.

TABLE CIRCUIT SIZES ARE ONLY RELEVANT FOR THE LEADER WHEN RUNNING IN in-memory MODE.

If you want to configure the table circuit sizes when running in a distributed environment, you must configure the table circuit sizes on the worker processes (the command line arguments are the same).
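For instance, a worker could be started with custom circuit sizes like this. The ranges shown are illustrative only, not recommendations; the flags are the same circuit-size flags documented in the leader's help output above.

```shell
# Illustrative sketch: start a worker with custom table circuit sizes.
# The circuit-size flags are the same as the leader's (see help above);
# the ranges here are placeholders, not tuned values.
RUST_LOG=debug cargo r --release --bin worker -- \
  --arithmetic 16..22 \
  --memory 18..22
```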

stdio

The stdio command reads proof input from stdin and writes output to stdout.

cargo r --release --bin leader stdio --help

Reads input from stdin and writes output to stdout

Usage: leader stdio [OPTIONS]

Options:
  -f, --previous-proof <PREVIOUS_PROOF>  The previous proof output
  -h, --help                             Print help

Pull prover input from the rpc binary.

cargo r --release --bin rpc fetch --rpc-url <RPC_URL> -b 6 > ./input/block_6.json

Pipe the block input to the leader binary.

cat ./input/block_6.json | cargo r --release --bin leader -- -r in-memory stdio > ./output/proof_6.json

Jerigon

The Jerigon command reads proof input from a Jerigon node and writes output to stdout.

cargo r --release --bin leader jerigon --help

Reads input from a Jerigon node and writes output to stdout

Usage: leader jerigon [OPTIONS] --rpc-url <RPC_URL> --block-interval <BLOCK_INTERVAL>

Options:
  -u, --rpc-url <RPC_URL>

  -i, --block-interval <BLOCK_INTERVAL>
          The block interval for which to generate a proof
  -c, --checkpoint-block-number <CHECKPOINT_BLOCK_NUMBER>
          The checkpoint block number [default: 0]
  -f, --previous-proof <PREVIOUS_PROOF>
          The previous proof output
  -o, --proof-output-dir <PROOF_OUTPUT_DIR>
          If provided, write the generated proofs to this directory instead of stdout
  -s, --save-inputs-on-error
          If true, save the public inputs to disk on error
  -b, --block-time <BLOCK_TIME>
          Network block time in milliseconds. This value is used to determine the blockchain node polling interval [env: ZERO_BIN_BLOCK_TIME=] [default: 2000]
  -k, --keep-intermediate-proofs
          Keep intermediate proofs. Default action is to delete them after the final proof is generated [env: ZERO_BIN_KEEP_INTERMEDIATE_PROOFS=]
      --backoff <BACKOFF>
          Backoff in milliseconds for request retries [default: 0]
      --max-retries <MAX_RETRIES>
          The maximum number of retries [default: 0]
  -h, --help
          Print help

Prove a block.

cargo r --release --bin leader -- -r in-memory jerigon -u <RPC_URL> -b 16 > ./output/proof_16.json

Native

The native command reads proof input from a native node and writes output to stdout.

cargo r --release --bin leader native --help

Reads input from a native node and writes output to stdout

Usage: leader native [OPTIONS] --rpc-url <RPC_URL> --block-interval <BLOCK_INTERVAL>

Options:
  -u, --rpc-url <RPC_URL>

  -i, --block-interval <BLOCK_INTERVAL>
          The block interval for which to generate a proof
  -c, --checkpoint-block-number <CHECKPOINT_BLOCK_NUMBER>
          The checkpoint block number [default: 0]
  -f, --previous-proof <PREVIOUS_PROOF>
          The previous proof output
  -o, --proof-output-dir <PROOF_OUTPUT_DIR>
          If provided, write the generated proofs to this directory instead of stdout
  -s, --save-inputs-on-error
          If true, save the public inputs to disk on error
  -b, --block-time <BLOCK_TIME>
          Network block time in milliseconds. This value is used to determine the blockchain node polling interval [env: ZERO_BIN_BLOCK_TIME=] [default: 2000]
  -k, --keep-intermediate-proofs
          Keep intermediate proofs. Default action is to delete them after the final proof is generated [env: ZERO_BIN_KEEP_INTERMEDIATE_PROOFS=]
      --backoff <BACKOFF>
          Backoff in milliseconds for request retries [default: 0]
      --max-retries <MAX_RETRIES>
          The maximum number of retries [default: 0]
  -h, --help
          Print help

Prove a block.

cargo r --release --bin leader -- -r in-memory native -u <RPC_URL> -b 16 > ./output/proof_16.json

HTTP

The HTTP command reads proof input from HTTP and writes output to a directory.

cargo r --release --bin leader http --help

Reads input from HTTP and writes output to a directory

Usage: leader http [OPTIONS] --output-dir <OUTPUT_DIR>

Options:
  -p, --port <PORT>              The port on which to listen [default: 8080]
  -o, --output-dir <OUTPUT_DIR>  The directory to which output should be written
  -h, --help                     Print help

Pull prover input from the rpc binary.

cargo r --release --bin rpc fetch -u <RPC_URL> -b 6 > ./input/block_6.json

Start the server.

RUST_LOG=debug cargo r --release --bin leader http --output-dir ./output

Note that HTTP mode requires a slightly different input format from the other commands. In particular, the previous proof is expected to be part of the payload. Because HTTP mode may handle multiple requests concurrently, the previous proof cannot reasonably be passed as a command line argument like in the other modes.

Using jq we can merge the previous proof and the block input into a single JSON object.

jq -s '{prover_input: .[0], previous: .[1]}' ./input/block_6.json ./output/proof_5.json | curl -X POST -H "Content-Type: application/json" -d @- http://localhost:8080/prove
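To see the shape of the merged payload without a real block or proof on hand, here is a self-contained sketch; the two file contents are placeholder stand-ins for real prover input and a previous proof.

```shell
# Placeholder stand-ins for real prover input and a previous proof.
echo '{"block": 6}' > block_6.json
echo '{"proof": 5}' > proof_5.json

# Merge them into the single JSON object the HTTP endpoint expects:
# the first file nested under `prover_input`, the second under `previous`.
jq -s '{prover_input: .[0], previous: .[1]}' block_6.json proof_5.json
```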

Paladin Runtime

Paladin supports both an AMQP and an in-memory runtime. The in-memory runtime emulates a cluster in memory within a single process and is useful for testing. The AMQP runtime is geared toward a production environment and requires a running AMQP broker plus one or more worker processes. The AMQP URI can be specified with the --amqp-uri flag or set with the AMQP_URI environment variable.

Starting an AMQP enabled cluster

Start rabbitmq

docker run --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
Start worker(s)

Start worker process(es). The default paladin runtime is AMQP, so no additional flags are required to enable it.

RUST_LOG=debug cargo r --release --bin worker
Start leader

Start the leader process with the desired command. The default paladin runtime is AMQP, so no additional flags are required to enable it.

RUST_LOG=debug cargo r --release --bin leader jerigon -u <RPC_URL> -b 16 > ./output/proof_16.json

Starting an in-memory (single process) cluster

Paladin can emulate a cluster in memory within a single process, which is useful for testing.

cat ./input/block_6.json | cargo r --release --bin leader -- -r in-memory stdio > ./output/proof_6.json

Verifier Usage

A verifier binary is provided to verify the correctness of the generated proof. The verifier expects output in the format generated by the leader. The verifier binary arguments are as follows:

cargo r --bin verifier -- --help

Usage: verifier --file-path <FILE_PATH>

Options:
  -f, --file-path <FILE_PATH>  The file containing the proof to verify
  -h, --help                   Print help

Example:

cargo r --release --bin verifier -- -f ./output/proof_16.json

RPC Usage

An rpc binary is provided to generate the block trace format expected by the leader.

cargo r --bin rpc -- --help

Usage: rpc <COMMAND>

Commands:
  fetch  Fetch and generate prover input from the RPC endpoint
  help   Print this message or the help of the given subcommand(s)

Options:
  -h, --help  Print help

Example:

cargo r --release --bin rpc fetch --start-block <START_BLOCK> --end-block <END_BLOCK> --rpc-url <RPC_URL> > ./output/blocks.json

Docker

Docker images are provided for both the leader and worker binaries.

Development Branches

There are three branches that are used for development:

  • main --> Always points to the latest production release
  • develop --> All PRs should be merged into this branch
  • testing --> For testing against the latest changes. Should always point to the develop branch for the zk_evm deps

Testing Blocks

For testing proof generation for blocks, the testing branch should be used.

Proving Blocks

If you want to generate a full block proof, you can use tools/prove_rpc.sh:

./prove_rpc.sh <START_BLOCK> <END_BLOCK> <FULL_NODE_ENDPOINT> <RPC_TYPE> <IGNORE_PREVIOUS_PROOFS>

For example:

./prove_rpc.sh 17 18 http://127.0.0.1:8545 jerigon false

This will attempt to generate proofs for blocks 17 and 18 consecutively, incorporating each previous block proof during generation.

A few other notes:

  • Proving blocks is very resource intensive in terms of both CPU and memory. You can also only generate the witness for a block instead (see Generating Witnesses Only) to significantly reduce the CPU and memory requirements.
  • Because incorporating the previous block proof requires a chain of proofs back to the last checkpoint height, you can also disable this requirement by passing true for <IGNORE_PREVIOUS_PROOFS> (which internally just sets the current checkpoint height to the previous block height).
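For example, passing true as the last positional argument disables the previous-proof requirement; the endpoint and block numbers here are placeholders.

```shell
# Illustrative only: prove blocks 17-18 without requiring a proof chain
# back to the last checkpoint (internally sets the checkpoint height to
# the previous block height).
./prove_rpc.sh 17 18 http://127.0.0.1:8545 jerigon true
```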

Generating Witnesses Only

If you want to test a block without the high CPU & memory requirements that come with creating a full proof, you can instead generate only the witness using tools/prove_rpc.sh in the test_only mode:

./prove_rpc.sh <START_BLOCK> <END_BLOCK> <FULL_NODE_ENDPOINT> <RPC_TYPE> <IGNORE_PREVIOUS_PROOFS> <BACKOFF> <RETRIES> test_only

Filled in:

./prove_rpc.sh 18299898 18299899 http://34.89.57.138:8545 jerigon true 0 0 test_only

Finally, note that both of these testing scripts force proof generation to be sequential by allowing only one worker. As a result, they are not a realistic representation of performance, but they make the debugging logs much easier to follow.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Contributors

0xaatif, atanmarko, bgluth, cpubot, dependabot[bot], frisitano, julianbraha, lastminutedev, leovct, lindaguiga, muursh, nashtare, praetoriansentry, vladimir-trifonov


Issues

Unable to compile on current nightly (`2024-03-07`)

Just updated my nightly, and it looks like paladin fails to compile on the latest version. It works on 2024-02-29 however. Haven't got a chance to dig into this, but can if no one else has any spare cycles.

src/runtime/mod.rs:209:5
    |
209 |     #[instrument(skip_all, level = "debug")]
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...so that the type `Op` will meet its required lifetime bounds
210 |     async fn get_task_sender<'a, Op: Operation, Metadata: Serializable + 'a>(
    |                              -- the parameter type `Op` must be valid for the lifetime `'a` as defined here...

Implement new `BlockProofFuture` type

If a block depends on the previous block proof, proof generation needs to be serialized. We should introduce a new BlockProofFuture type that would resolve when proof generation for the previous block has finished. In the meantime, we could parallelize other tasks (TX proof generation) and wait for the previous block proof only when it is actually needed.

Refactor GA workflow

The current GitHub Actions workflow is not ideal, as clippy / rustfmt / tests are run in the same job, hiding useful information in case of failure. It would be beneficial to split them into distinct steps so as to easily highlight which step, if any, has failed.

Load KERNEL data from disk for faster worker bootstrapping

(as per slack discussion)

zero-bin currently supports pre-processed circuits loading from disk (though this could be greatly improved, see 0xPolygonZero/plonky2#1394).

It however doesn't support loading the plonky2 Kernel from disk, which impedes worker bootstrapping by adding an extra overhead of roughly 4 to 5 seconds to generate it. The Kernel data itself takes about ~240kB when stored in a JSON file, so loading time should be close to negligible.
Given that small txns should be in the ballpark of ~12 to ~18 seconds give or take (in addition to Kernel generation), removing this extra 4 to 5 seconds of bootstrapping would have a non-negligible impact on overall proving throughput.

Support multi-block proving for other modes than `Jerigon`

#96 introduced multi-block processing with the Jerigon mode. While this is the mode to use in a real-life setting, testing & benchmarking are often performed in stdio mode (the former for easy reproducibility, the latter to not be impacted by networking conditions).

As such, it would be beneficial to also support multi-block proving from this mode. We would probably need to enforce some rules on witness file naming to ensure proper ordering when parsing.

Add circuit version consistency check

Right now circuits are stored as prover_state_foo_N where foo refers to the circuit name (arithmetic, CPU, memory, ...) and N the initial STARK table size associated.
As such, if one bumps the evm_arithmetization version because of circuit changes but forgets to flush the old prover state, they'll get discrepancies leading to proving failures that may be hard to track. We should strengthen this process by detecting discrepancies between versions and adding some automatic flush-and-regenerate mechanism, or by storing circuit data differently so as to refer to the specific version it was generated with.

Support checkpoint heights

zero-bin currently only supports proving a chain from the actual genesis. Supporting checkpoint heights requires considering the checkpoint block as the new "local" genesis. That is:

  • returning the checkpoint block state trie root instead of the genesis root
  • returning the 256 previous blockhashes, possibly going past the checkpoint block (we currently stop at genesis, for obvious reasons)
  • changing the block numbers passed to the prover. As genesis has index 0, every checkpoint height is being reinitialized to 0 after the final checkpoint proof is submitted.

Otherwise, if no change here is preferable, we'd need to alter the logic inside plonky2 zkEVM block_circuit so that we do not refer to genesis when passing None as previous proof argument, but target the previously implied one.

Do not persist all intermediary block proofs to disk

Since #96, we have the ability to prove a sequence of blocks in one go. This however writes all generated block proofs to disk, which is not needed.
More specifically, say we have checkpoint height $N$ and we generate blocks $N+1$ to $M$; then every subsequent proof $\pi_t$ attests to the validity of all previous block proofs $\pi_i$ for $N < i < t$. As such, only the latest proof is really useful and needs to be persisted.

Support fetching witnesses via eth-tx-proof

Currently zero-bin allows fetching block trace data from an Erigon instance with the proper witness APIs, as well as loading this trace data from a file and kicking off proofs.

To enable easier integration testing, as well as to give users a nice way to target chains which do not yet have the proper trace APIs, we would like to integrate the code from eth-tx-proof into zero-bin such that we can fetch information from an existing RPC node and kick off the proving process for a block or transaction.

Load only `VerifierCircuitData` in verifier mode

At the moment, the verifier binary loads the entire ProverState to be able to verify block proofs. This is a bit wasteful, as the verifier only needs a few kBs of data, as opposed to the prover, which needs several GBs.

plonky2, as of commit 6dd2e313c4a1d67fb5de8efbe4068554119f7f3b (November 28th), allows to call final_verifier_data() from AllRecursiveCircuits.

It would be nice to have zero-bin store a verifier version of the preprocessed prover_state so that the verifier binary does not need to load all the useless data.

Move away from `ethers-core`

eth-tx-proof and #51 rely on ethers-core, which is on its way to being deprecated, with no plan to support EIP-4844 type 3 transactions, making it incompatible with the Cancun HF.

As such, we'd need to consider moving away from it and rely on an alternative. A natural replacement would be alloy-rs, although it is partly WIP and would need us to rely on github revisions for a while instead of crates.io versions.

From its maintainer, the current status of the overall alloy project is as follows:

  • Consensus types are good
  • Transport stack and RPC client are good
  • Some niggling bugs with proper application of eip-155 to signatures for legacy txns only
  • Provider interfaces are almost settled. just working on improving eth_call right now
  • Network abstraction is still expanding, but changes will be mostly invisible
  • Provider type naming is still hard for beginner rust users, and we may change generics/bounds a lot to try to fix this
  • Contract bindings are experimental

Specifying --output-dir with a non-existent directory gives a bad error message

System

OS: macOS 14.0
Rust: rustc 1.76.0-nightly (3a85a5cfe 2023-11-20)

Behavior

% RUST_LOG=debug cargo r --release --bin leader -- --mode http --runtime in-memory --output-dir ./output

# Skip tons of stuff

     Running `target/release/leader --mode http --runtime in-memory --output-dir ./output`
Error: No such file or directory (os error 2)

Expected Behavior

Create the output directory if it doesn't exist or give a nice error.

Provide txn IR for debugging upon failure

We currently only have access in the logs to the proving trace upon failure.

It would be nice to also log

  • the GenerationInputs payload passed to generate_txn_proof() (to be able to regenerate locally easily when debugging)
  • the PublicValues associated to the AggregatableProof LHS and RHS arguments of generate_agg_proof(), and the ones for previous block proof and current one in generate_block_proof()

Add support for block intervals

Context

Background 0xPolygonZero/zk_evm#297 0xPolygonZero/zk_evm#296 #62

Originally, the intent with zero-bin was to implement a very simple CLI tool that could prove blocks with plonky2, so the CLI interface is very much designed with this intent: provide a block number as input, and generate a plonky2 proof as output.
As we move towards making this repository production ready, it's a good time to expand the capabilities of zero-bin in its ability to understand what I am going to call block intervals.

Block Intervals

Block intervals encapsulate the semantics of how zero-bin manages a set of proving jobs relative to some higher-order intent, such as following a chain tip. I will use interval notation to describe them.

Note that for all intervals we assume the bounds are in ℤ.

[x,x]

Prove a single block. This is the same (only) block mode that zero-bin currently supports, generalized to interval notation.

In Rust

x..=x

[x,y]

Prove a range of blocks. For example [1,100] entails proving blocks 1 through 100.

In Rust

x..=y

[x,∞)

Follow chain tip, starting from x. This entails making zero-bin aware of the tip of the target chain such that it contiguously proves blocks towards it.

In Rust

x..

Block Connection

A bit of background: zk_evm's prove_block method accepts an optional parent block proof.

With the introduction of block intervals, we will need an additional boolean flag which specifies whether contiguous blocks must be connected when generating proofs. This flag will determine whether zero-bin maintains a record of parent proof outputs to plug into these prove_block calls.

The implications on the prover module

The API of the prover module is at odds with the ability to facilitate block connection in an efficient manner. In particular, the prove method accepts the parent block proof as an argument with the type Option<PlonkyProofIntern>. In other words, this prove method is currently set up to be atomic, in the sense that it executes all proof steps in a single logical step. This of course makes sense in context of the original intent of zero-bin (proving one block at a time), but to support block intervals, this will need to be updated.

zero-bin will need to be able to kick off multiple proving jobs in parallel -- it should not have to wait for block n to complete its block proof before kicking off a proof for block n + 1. This entails heavier orchestration logic in the leader or some clever usage of Future. For example, rather than accepting an Option<PlonkyProofIntern>, it could accept some Option<impl Future<Output = PlonkyProofIntern>> which resolves when its parent proof has completed. This could likely be facilitated with some BlockProofFuture type that uses channels to asynchronously signal block proof completion. My sense is that this will be a more ergonomic solution than a more top-down imperative orchestration system -- but I leave that decision to the implementor.

Summary

Block Intervals are a key capability for wider applicability and thus adoption of zero-bin. The features and details specified herein should be broken into smaller sub issues before implementing, while this issue can serve as the higher level discussion environment and reference definition for the capability as a whole.

Include test/debug mode for easier regression tracking

When debugging failing transactions / blocks, we may want to have a simpler setup to make sure the regression is addressed without having to bother with all the heaviness of computing actual proofs / aggregating txns together.

As most of these proving failures occur during witness generation, we could just have the leader send the individual txn IR, and have workers proceed to a single Operation that would call generate_traces from the plonky2_evm API instead of one of the prove functions.

Introduce block intervals to zero-bin

We need to unify zero-bin program input to handle one block, multiple blocks, or a continuous stream of new blocks, as discussed in #86. Introduce a new type BlockInterval as a CLI argument, and update the API of the main use-cases to use it.

Also, introduce a new parameter that would indicate whether the blocks in the range depend on the previous block proof. If they are not dependent, their proof generation could be fully parallelized in some future PR.

Move generated `prover_state` / `verifier_state` to dedicated folder

Right now we store all circuit data at the root level, which can become a bit messy as we're now splitting prover state and storing each individual circuit size separately.
It would probably be cleaner to store them in a dedicated circuits folder, or equivalently named.

Rework default circuit ranges loading

Currently, if no circuit ranges are specified when calling the leader, this results in using a default set of ranges. This can be problematic / cumbersome whenever we:

  • forget to manually specify the set of ranges we're interested in that differ from the default ones (which are purposely really large, to account for all scenarios in real-life settings)
  • use the debug_block.sh script, that ignores the notion of prover state as it only focuses on witness generation

When #27 is merged, being in one of these two scenarios will flush the previous circuits and rewrite new ones, which is time consuming and fairly wasteful. Especially if in a debugging session, and alternating say between debug_block.sh and prove_block.sh scripts for instance.

Instead, I'd suggest tweaking default circuit loading (i.e. when no specific ranges are specified) to first load the existing ranges from disk, if present, and default to an arbitrary set of hardcoded values otherwise. This would alleviate the circuit wiping issue, but would still incur a possibly heavy burden on disk IO / memory consumption for the debugging script, which does not need any of the circuitry. To this end, we could just load a BaseProverState under the test-only feature flag, i.e. containing only the higher-level circuits, which remain fairly small, and completely ignore the recursive table circuits. This would fail in a regular proof generation context, but it suffices for witness generation and doesn't require any API change in the dependencies.
