
kzg-ceremony-sequencer's Introduction

KZG Ceremony Rest API


This implements the KZG Ceremony Specification.

The latest build is available as a container on ethereum/kzg-ceremony-sequencer:

docker run ethereum/kzg-ceremony-sequencer:latest

Setup

Build, lint, test, run

cargo fmt && cargo clippy --workspace --all-targets --all-features && cargo build --workspace --all-targets --all-features && cargo test --workspace --all-targets --all-features && cargo run -- -vvv

Requirements

  • OAuth Client App: Users are currently required to sign in with either Ethereum or GitHub, which requires an OAuth client application to which the user grants read access to their profile.

Live URL

Registering for GitHub OAuth

Register for Github OAuth access here.

Registering for Sign-in-with-Ethereum

See the documentation here.

To register, use the REST API:

curl -X POST https://oidc.signinwithethereum.org/register \
   -H 'Content-Type: application/json' \
   -d '{"redirect_uris": ["http://127.0.0.1:3000/auth/callback/eth", "https://kzg-ceremony-sequencer-dev.fly.dev/auth/callback/eth"]}'
{
  "client_id": "9b49de48-d198-47e7-afff-7ee26cbcbc95",
  "client_secret": "...",
  "registration_access_token": "....",
  "registration_client_uri": "https://oidc.signinwithethereum.org/client/9b49de48-d198-47e7-afff-7ee26cbcbc95",
  "redirect_uris": [
    "http://127.0.0.1:3000/auth/callback/eth",
    "https://kzg-ceremony-sequencer-dev.fly.dev/auth/callback/eth"
  ]
}
fly secrets set ETH_RPC_URL="..."
fly secrets set ETH_CLIENT_ID="..."
fly secrets set ETH_CLIENT_SECRET="..."
fly secrets set GH_CLIENT_ID="..."
fly secrets set GH_CLIENT_SECRET="..."
fly volumes create kzg_ceremony_sequencer_dev_data --size 5

kzg-ceremony-sequencer's People

Contributors

carlbeek, dependabot[bot], gswirski, kevaundray, kustosz, nicoserranop, philsippl, plasmapower, recmo, scroll-dev, stefanbratanov, tkmct


kzg-ceremony-sequencer's Issues

Better key derivation

Is your feature request related to a problem? Please describe.

We currently rely on StdRng as our CSPRNG. We'd like something more standardized.

Describe the solution you'd like

Use Argon2 for key stretching of the source entropy and then a keccak-sponge to extract unique per-ceremony entropy.
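A minimal sketch of what this could look like, assuming the argon2 and tiny-keccak crates; crate choice, salt, domain strings, and output sizes are illustrative, not part of the proposal:

use argon2::Argon2;
use tiny_keccak::{Hasher, Keccak};

// Hypothetical derivation: stretch the source entropy with Argon2, then use a
// Keccak sponge to extract per-ceremony entropy.
fn derive_tau_seed(source_entropy: &[u8], ceremony_id: &[u8]) -> [u8; 32] {
    // Key stretching of the raw entropy (salt handling is elided here).
    let mut stretched = [0u8; 32];
    Argon2::default()
        .hash_password_into(source_entropy, b"kzg-ceremony-salt", &mut stretched)
        .expect("argon2 failed");

    // Extract unique per-ceremony entropy with a keccak sponge.
    let mut sponge = Keccak::v256();
    sponge.update(&stretched);
    sponge.update(ceremony_id);

    let mut tau_seed = [0u8; 32];
    sponge.finalize(&mut tau_seed);
    tau_seed
}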

Describe alternatives you've considered

Additional context

Add Continuous Deployment

Ideally, every push to master (or every tagged release) should trigger a redeployment.

Currently we deploy to fly.io using a Dockerfile. We had discussions about using netlify/aws.

This is a tracking issue for the progress of this work.

Add persistent storage

The things we want stored persistently:

  • Receipts (This includes the witness and the users social id)

We should also keep the social IDs in memory, so we don't need to go to persistent storage to check whether a user has already contributed. Moreover, we can store them in a sorted structure for faster lookups (see the sketch after the notes below).

  • Latest Contribution file

This will also be stored in memory for the /info/transcript endpoint. Related to #1, we can read from disk whenever we need to fetch it for the /contribute endpoint and the /slot/join endpoint.

  • Social ID Blacklist

This is a blacklist of all of the users who managed to reserve the contribution slot, but timed out. There are no second chances here; they cannot contribute again with that social ID.

Notes

  • The lobby is not stored persistently. If the sequencer goes down, then those people will need to rejoin. We assume that there is enough time for everyone to participate, so this is not much of a problem
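As referenced above, a rough sketch of the in-memory index of social IDs that have already contributed, kept alongside whatever persistent receipt store we choose. Names and structure are hypothetical:

use std::collections::BTreeSet;

// Hypothetical in-memory index; membership checks are O(log n) and avoid a
// round trip to persistent storage.
struct ContributorIndex {
    seen: BTreeSet<String>,
}

impl ContributorIndex {
    fn new() -> Self {
        Self { seen: BTreeSet::new() }
    }

    // Called after a receipt has been persisted.
    fn record(&mut self, social_id: String) {
        self.seen.insert(social_id);
    }

    fn has_contributed(&self, social_id: &str) -> bool {
        self.seen.contains(social_id)
    }
}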

Canonical Social Id format

Since we are switching from Auth0, we should explicitly define a canonical social ID format in the specs. We can simply do:

Github

github | {github_id}

Ethereum

eth | {ethereum_address}

These will uniquely identify a GitHub or Ethereum account.
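A tiny sketch of helpers producing these IDs; function names are hypothetical, and whether the literal spaces around "|" are part of the canonical format would need to be fixed in the spec:

// Hypothetical helpers for the proposed canonical social ID formats.
fn github_uid(github_id: u64) -> String {
    format!("github | {github_id}")
}

fn eth_uid(ethereum_address: &str) -> String {
    format!("eth | {ethereum_address}")
}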

Finer grained locking

Right now there is a single global RwLock for the AppState. Public /info endpoints can starve the /contribute writer. We should rethink locking when we do persistence.
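A rough sketch of what finer-grained locking could look like, splitting the single global lock into per-component locks so read-heavy /info handlers no longer contend with the /contribute writer. Field names and types are illustrative, not the real AppState:

use std::sync::Arc;
use tokio::sync::RwLock;

// Hypothetical state layout with one lock per component.
struct AppState {
    lobby: RwLock<Vec<String>>, // participants waiting to contribute
    transcript: RwLock<String>, // latest transcript / contribution blob
}

type SharedState = Arc<AppState>;

async fn info_lobby_size(state: SharedState) -> usize {
    // Readers take only the lobby lock.
    state.lobby.read().await.len()
}

async fn apply_contribution(state: SharedState, new_transcript: String) {
    // The writer takes only the transcript lock.
    *state.transcript.write().await = new_transcript;
}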

New ethereum account error message is ambiguous

Is your feature request related to a problem? Please describe.

Assumption: Clients are not checking whether an Ethereum account is new.

For new Ethereum accounts, the validation call to the RPC node (which checks whether the Ethereum account is valid) produces an error.

This then gets propagated back to the client as a "Could Not Fetch User Data" error.

Describe the solution you'd like

Instead, it would be better to differentiate between "could not fetch user data" (which could happen because the RPC node is down) and "Ethereum account is not valid" (because it is new).

Describe alternatives you've considered

An alternative is for the client to see the "Could not fetch user data" error and ask the user whether their account is new. If it is not, they should report the error on the sequencer repo.

Additional context

/slot/join should return the most recent contribution file if participant is elected to contribute

If participants need to fetch contribution files/transcripts via a separate endpoint, we run the risk of them working on stale state.

The better solution is to return the most recent contribution file in the same response that indicates the participant was chosen to contribute. This might reduce the throughput slightly (file download is on the critical path) but in reality being chosen likely means that the file just changed.

We might still have a separate /info/transcript endpoint if required by the UI.

Make sure active_contributor state is cleared on all error conditions

There are conditions (such as the inability to sign a contribution receipt) that should be impossible, but whose occurrence (e.g. through upstream crate changes) is beyond our control and would result in deadlocking the sequencer. Such errors should be handled.

Review TODO comments

There is a significant number of TODOs in this codebase. They should be reviewed and either turned into issues or removed.

Sequencer can get stuck if a user cancels their request while submitting a contribution

If a user cancels their HTTP request while this code block is executing, the sequencer will get stuck:

let (signed_msg, signature) = receipt
    .sign(&keys)
    .await
    .map_err(ContributeError::Signature)?;
write_json_file(
    options.transcript_file,
    options.transcript_in_progress_file,
    shared_transcript,
)
.await;

I believe this is what's been causing the test ceremony to get stuck. It'll get stuck because the lobby state has entered the "Contributing" state which will not be automatically cleared (unlike the "AwaitingContribution" state). As the request is canceled, the request handler will not continue executing to clear the state itself. That can be seen in this example axum server:

use axum::{routing::get, Router};
use std::{net::SocketAddr, time::Duration};

#[tokio::main]
async fn main() {
    let app = Router::new().route("/", get(root));

    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}

async fn root() -> &'static str {
    println!("start");
    tokio::time::sleep(Duration::from_secs(5)).await;
    println!("end");
    "Hello, World!"
}

If you curl localhost:3000 but then Ctrl+C it before the 5 seconds, you'll see that "start" gets printed but "end" never gets printed. Analogously, in the sequencer, cancelling the request between the begin_contributing call and the clear_current_contributor call will cause the lobby to enter the "Contributing" state but never clear it, leaving the sequencer permanently stuck.

There are a few ways to fix this:

  1. For the request handler, tokio::spawn a background task to record the contribution and then immediately await its JoinHandle to get the result
  2. Add a separate expiry to the "Contributing" state
  3. Declare a variable in the function which has a Drop impl that'll attempt to clear the "Contributing" state

Personally I think option 1 makes the most sense because it avoids other unexpected issues with request cancelation.
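A minimal sketch of option 1, assuming tokio; the handler name, error type, and record_contribution body are placeholders standing in for the real signing and transcript-writing code:

use std::time::Duration;

// Placeholder error and work; the real code would sign the receipt and write
// the transcript file.
#[derive(Debug)]
enum ContributeError {
    TaskFailed,
}

async fn record_contribution() -> Result<String, ContributeError> {
    tokio::time::sleep(Duration::from_secs(5)).await;
    Ok("receipt".to_string())
}

// The critical section runs on a spawned task, so it completes (and can clear
// the "Contributing" state) even if the request future is dropped when the
// client disconnects.
async fn contribute_handler() -> Result<String, ContributeError> {
    tokio::spawn(record_contribution())
        .await
        .map_err(|_| ContributeError::TaskFailed)?
}

#[tokio::main]
async fn main() {
    println!("{:?}", contribute_handler().await);
}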

Lower the test ceremony compute deadline to 1 minute

Since the test ceremony allows for anyone to join, and doesn't prevent people from contributing twice, the current long compute deadline can mean that people are stuck waiting for a while to contribute. Lowering the deadline to a minute should still be more than enough for anyone to contribute but also help speed up the ceremony.

Add Continuous Integration

We should add a github workflow to run the tests.

This is fairly simple and can be done by using the Actions tab in github.

Make ChaCha20Rng zeroizable

ChaCha20Rng does not currently implement Zeroize. It seems like there is no trivial way to implement it, so it either requires an upstream contribution or unsafe memory fiddling. Either way, this behavior should be implemented and used in batch_contribution::derive_taus.

Change specs to not indent the G1Powers and G2Powers in contribution file

Currently the specs specify the transcript as:

{
    "subTranscripts": [
        {
            "numG1Powers": 4096,
            "numG2Powers": 65,
            "powersOfTau": {
                "G1Powers": [],
                "G2Powers": []
            }
        }

The sequencer is returning them as:

{
    "subTranscripts": [
        {
            "numG1Powers": 4096,
            "numG2Powers": 65,
            "G1Powers": [],
            "G2Powers": []
        }

The latter is clearer to read, so I'd suggest changing the specs to match it.

Unknown session id when user takes longer to input entropy

Description
The current user workflow is:

  1. User goes to the frontend page
  2. User logs in with Ethereum and receives session id
  3. User generates entropy
  4. User tries to get into the lobby

Users might take some time between steps 3 and 4 (generating entropy). But if the user takes more than 30 seconds (the default LOBBY_CHECKIN_FREQUENCY), their session ID is invalidated by the sequencer and they receive a TryContributeError::UnknownSessionId error.

Solution
We think that there should be a slightly longer deadline (1 or 2 minutes) between /signin and the first /lobby/try_contribute so users can take some time generating their entropy.

Add CORS headers to allow participant clients to call API

Description
Browser-based participant UIs will call the API with an origin being the UI host URL. Requests currently fail because browsers enforce a CORS policy and the API responses don't provide any CORS-related permissions.

e.g. When my client (currently running locally) sends an /auth/request_link request, it fails. If the response contains this header, it succeeds:
Access-Control-Allow-Origin: http://localhost:3000

Consider using Access-Control-Allow-Origin: *. Otherwise, a whitelist of client sites will be required.
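A minimal sketch of a permissive CORS policy (equivalent to Access-Control-Allow-Origin: *), assuming the tower-http crate with its "cors" feature; the route and handler are placeholders:

use axum::{routing::get, Router};
use tower_http::cors::CorsLayer;

// Attach a permissive CORS layer to the router so browser-based clients on
// other origins can call the API.
fn app() -> Router {
    Router::new()
        .route("/info/status", get(|| async { "ok" }))
        .layer(CorsLayer::permissive())
}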

Issue in `g1_powers_check` rust implementation

The pairing equality check performed as part of test_verify_g1 should fail, but it does not. The reason is a bit subtle; see below.

The test:

    #[test]
    fn test_verify_g1() {
        let powers = [rand_g1().into()];
        let tau = rand_g2().into();
        let _ = BLST::verify_g1(&powers, tau);
    }

The test samples two random elements in G1 and G2 with two different values of $\tau$: $\tau_1 G_1$ and $\tau_2 G_2$, so, as said, the pairing check is supposed to fail. Both implementations of verify_g1 (BLST and Arkworks) use Vitalik's batch optimization for fast verification of multiple BLS signatures, with a further optimization due to the fact that the second input of the pairing is always the same. Looking at the BLST implementation of verify_g1:

   fn verify_g1(powers: &[crate::G1], tau: crate::G2) -> Result<(), crate::CeremonyError> {
        // Parse ZCash format
        let powers = powers
            .into_par_iter()
            .map(|p| blst_p1_affine::try_from(*p))
            .collect::<Result<Vec<_>, _>>()?;
        let tau = blst_p2_affine::try_from(tau)?;
        let tau = p2_from_affine(&tau);

        // Compute random linear combination
        let (factors, sum) = random_factors(powers.len() - 1);
        let g2 = unsafe { *blst_p2_generator() };

        let lhs_g1 = p1s_mult_pippenger(&powers[1..], &factors[..]);
        let lhs_g2 = p2_to_affine(&p2_mult(&g2, &sum));

        let rhs_g1 = p1s_mult_pippenger(&powers[..factors.len()], &factors[..]);
        let rhs_g2 = p2_to_affine(&p2_mult(&tau, &sum));

        // Check pairing
        if pairing(&lhs_g1, &lhs_g2) != pairing(&rhs_g1, &rhs_g2) {
            return Err(CeremonyError::G1PairingFailed);
        }

        Ok(())
    }

we note that powers.len() = 1 so:

  1. When let (factors, sum) = random_factors(powers.len() - 1); is called, sum is equal to 0.
  2. Due to a peculiar choice in p1s_mult_pippenger,
    if bases.is_empty() {
        // NOTE: Without this special case the `blst_p1s_mult_pippenger` will
        // SIGSEGV.
        return blst_p1_affine::default();
    }

the G1 generator is returned.

The pairing equality then becomes:

$e(g_1, 0) \stackrel{?}{=} e(g_1, 0)$

which holds trivially, so the check passes even though it should fail.

Ethereum: 3 transaction criterion

Is the 3-transaction criterion for Ethereum address authentication checked at a specific (past) block? Is the block number taken from the defaults of the EthAuthOptions struct, i.e. equal to 15565180? Is that default going to change during the ceremony?

`lobby_checkin_tolerance` value for web app client

Version
rustc 1.63.0 (4b91a6ea7 2022-08-08)

Platform
x86_64 GNU/Linux (Windows Subsystem for Linux)

Description
I am trying to upload the contribution computed by our react application (redirect-as-env branch) to the sequencer. So far I have three scenarios:

  1. Default: With lobby_checkin_tolerance=2 I get a `called Option::unwrap() on a None value` panic in the contribute function.

  2. lobby_checkin_tolerance=20: I am able to upload the contribution correctly (there is still work in progress in the client computation, so I get the `Contribution contains no entropy: pubkey equals generator` error in the API response).

  3. lobby_checkin_tolerance=20: The sequencer crashes with the following error in the console:

The application panicked (crashed).
Message:  overflow when subtracting durations
Location: library/core/src/time.rs:938

If you try to start the React app, please let me know if the setup instructions in the project's readme.md are unclear, so I can describe them in more detail. I temporarily set a larger compute_deadline=180000000 so the client can compute without restrictions.

Try to not depend on yanked crates

We currently depend on yanked versions of blake2 (through ethers-signer) and futures-intrusive (through sqlx). Check whether newer versions of these dependencies are available that no longer depend on the yanked versions.
