
Comments (6)

afck commented on May 26, 2024

As discussed, we might not need the full DKG in the wild, if we plan to do the key generation on the transaction log/blockchain itself. We will probably use a smart contract for this, but I'll describe it just in terms of special transactions. This protocol will have to run once at the end of each instance, i.e. before the set of validators/nodes changes, to generate the new keys. Once they have been distributed, the new Honey Badger instance starts, using the new set of nodes.

Idea

We use a simplified version of the Verifiable Secret Sharing (VSS) algorithm in the paper: If I have a bivariate polynomial of degree t over a prime field Fr, i.e. a polynomial

fn f(x: Fr, y: Fr) -> Fr {
    (0..=t).cartesian_product(0..=t).map(|(i, j)|
        x.pow(i) * y.pow(j) * coeff[i][j]
    ).sum()
}

in two variables x and y, then in the matrix

f(0, 0)    f(0, 1)    f(0, 2)    f(0, 3)    …
f(1, 0)    f(1, 1)    f(1, 2)    f(1, 3)    …
f(2, 0)    f(2, 1)    f(2, 2)    f(2, 3)    …
f(3, 0)    f(3, 1)    f(3, 2)    f(3, 3)    …
…

each row and each column is a univariate polynomial of degree t (i.e. a polynomial in one variable, because the other one is fixed). So if you know t + 1 values in a column s, you can compute the whole column, and in particular the value f(0, s).
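
To make that reconstruction step concrete, here is a minimal sketch of Lagrange interpolation at x = 0, with plain u64 arithmetic modulo a small toy prime standing in for the real field Fr:

const P: u64 = 1_000_000_007; // toy prime, standing in for the order of Fr

fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn sub(a: u64, b: u64) -> u64 { (a + P - b) % P }
fn mul(a: u64, b: u64) -> u64 { (a as u128 * b as u128 % P as u128) as u64 }

// Modular inverse via Fermat's little theorem: a^(P - 2) mod P.
fn inv(a: u64) -> u64 {
    let (mut base, mut exp, mut acc) = (a, P - 2, 1u64);
    while exp > 0 {
        if exp & 1 == 1 { acc = mul(acc, base); }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

/// Given t + 1 points (x, f(x, s)) of column s, returns f(0, s).
fn interpolate_at_zero(points: &[(u64, u64)]) -> u64 {
    let mut result = 0;
    for (i, &(xi, yi)) in points.iter().enumerate() {
        // Lagrange basis polynomial for x_i, evaluated at 0:
        // product over j != i of (0 - x_j) / (x_i - x_j)
        let mut num = 1;
        let mut den = 1;
        for (j, &(xj, _)) in points.iter().enumerate() {
            if i != j {
                num = mul(num, sub(0, xj));
                den = mul(den, sub(xi, xj));
            }
        }
        result = add(result, mul(yi, mul(num, inv(den))));
    }
    result
}

For example, with t = 1 and the column polynomial 3 + 5 * x, interpolate_at_zero(&[(1, 8), (2, 13)]) returns 3.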

In VSS, the proposer comes up with the polynomial and sends a commitment (see below for details) to everyone, so they can't lie about the values later. They send row m to node m, where node IDs go from 1 to n. Node m sends each value f(m, s) to node s. If there are at most t faulty nodes, and 2 * t + 1 nodes received a valid row, then every honest node s will have received at least t + 1 correct values for the s-th column and can compute f(0, s).

For DKG, we don't want a single proposer. Instead, we want to add the polynomials from at least t + 1 proposers, so that in the end we can be sure that no single node knows the sum. So every node k proposes a polynomial f[k] via VSS, and when t + 1 VSS instances have completed, everyone sums up their values f[k](0, s) and uses them as their secret key. The resulting secret keys, being sums of values of polynomials, are still values of a polynomial of degree t, so they can be used as keys as in #38 and #40.

Algorithm

Each node k (including the new one) generates a random bivariate polynomial f[k] of degree t over the prime field, i.e. coefficients coeff[k][i][j] for i and j in 0..=t. It will publish a commitment to that polynomial, i.e. all values commit[k][i][j] = coeff[k][i][j] * g1, with which all values of the polynomial can be publicly verified:

f[k](x, y) * g1 ==
    (0..=t).cartesian_product(0..=t).map(|(i, j)|
        x.pow(i) * y.pow(j) * commit[k][i][j]
    ).sum()
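
For illustration, the same check in toy form: the group G1 is replaced by the integers mod a prime under addition, g1 is a fixed nonzero element, and scalar multiplication becomes ordinary modular multiplication. Unlike the real curve group, this toy commitment is of course not hiding; it only shows the shape of the check:

const P: u64 = 1_000_000_007; // toy modulus, standing in for the group order

fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn mul(a: u64, b: u64) -> u64 { (a as u128 * b as u128 % P as u128) as u64 }
fn pow(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    while exp > 0 {
        if exp & 1 == 1 { acc = mul(acc, base); }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

/// Checks a claimed value of f[k](x, y) against the published commitment matrix,
/// where commit[i][j] = coeff[i][j] * g1 in the toy group.
fn verify_value(commit: &[Vec<u64>], x: u64, y: u64, claimed: u64, g1: u64) -> bool {
    let t = commit.len() - 1;
    let mut rhs = 0;
    for i in 0..=t {
        for j in 0..=t {
            // x^i * y^j * commit[i][j], summed over all coefficients
            rhs = add(rhs, mul(mul(pow(x, i as u64), pow(y, j as u64)), commit[i][j]));
        }
    }
    mul(claimed, g1) == rhs
}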

Node k also encrypts, for each node m, the row f[k](m, y) to that node, producing crypt_row_f[k][m]. It commits a transaction:

Propose(commit[k], crypt_row_f[k])

Each node m decrypts crypt_row_f[k][m], verifies the values against the commitment, and for each node s encrypts f[k](m, s) to that node, producing crypt_value_f[k][m][s]. It commits a transaction:

Accept(k, crypt_value_f[k][m])

Once there are 2 * t + 1 transactions Accept(k, _) by different nodes in the log, we call k's proposal "complete": At least t + 1 of those Accepts must come from honest nodes, so every node s can reconstruct the univariate polynomial f[k](_, s) from them, and in particular the value f[k](0, s).

In the first block in which t + 1 proposals become complete, the algorithm terminates: Each node s computes all f[k](0, s) for which k's proposal is complete, and sums them up to produce its new secret key share sk[s]. The corresponding public key share pk[s], which is the sum of all completed f[k](0, s) * g1, can be computed from the commit[k][0][j] of the completed proposals k.
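
A minimal sketch of the bookkeeping this implies; the struct and method names are illustrative, not part of hbbft, and node IDs are u64 for simplicity:

use std::collections::{BTreeMap, BTreeSet};

struct DkgTracker {
    t: usize,
    // Proposer ID -> set of node IDs that committed an Accept for that proposer.
    accepts: BTreeMap<u64, BTreeSet<u64>>,
}

impl DkgTracker {
    fn new(t: usize) -> Self {
        DkgTracker { t, accepts: BTreeMap::new() }
    }

    /// Records that node m committed Accept(k, _).
    fn record_accept(&mut self, k: u64, m: u64) {
        self.accepts.entry(k).or_default().insert(m);
    }

    /// k's proposal is "complete" once 2 * t + 1 distinct nodes have accepted it.
    fn is_complete(&self, k: u64) -> bool {
        self.accepts.get(&k).map_or(false, |set| set.len() >= 2 * self.t + 1)
    }

    /// The key generation terminates once t + 1 proposals are complete.
    fn dkg_done(&self) -> bool {
        self.accepts.keys().filter(|&&k| self.is_complete(k)).count() >= self.t + 1
    }
}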

The sk[s] values are then points on a polynomial of degree t, with sk[0] * g1 == pk[0], although no node knows sk[0].
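
In the same toy arithmetic (u64 values mod a prime standing in for field and group elements), the key derivation would look roughly like this:

const P: u64 = 1_000_000_007;

fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn mul(a: u64, b: u64) -> u64 { (a as u128 * b as u128 % P as u128) as u64 }
fn pow(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    while exp > 0 {
        if exp & 1 == 1 { acc = mul(acc, base); }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

/// Secret key share of node s: the sum of f[k](0, s) over all completed proposals k.
fn secret_key_share(f_k_0_s: &[u64]) -> u64 {
    f_k_0_s.iter().fold(0, |acc, &v| add(acc, v))
}

/// Public key share of node s, computed from public data only: for each completed
/// proposal k, f[k](0, s) * g1 equals the sum over j of s^j * commit[k][0][j].
fn public_key_share(row0_commits: &[Vec<u64>], s: u64) -> u64 {
    row0_commits.iter().fold(0, |acc, commit_k_0| {
        let share = commit_k_0.iter().enumerate()
            .fold(0, |a, (j, &c)| add(a, mul(pow(s, j as u64), c)));
        add(acc, share)
    })
}

In the toy group, mul(secret_key_share(..), g1) then equals public_key_share(..), matching sk[s] * g1 == pk[s].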

Optimization

The paper uses a symmetric polynomial, i.e. f[k](x, y) == f[k](y, x). I think this is just an optimization so that almost half of the values can be omitted and the messages are smaller. We should probably do that, too.
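
If we do, the proposer would simply mirror the upper triangle of the coefficient matrix, so only the entries with i <= j need to be committed to and sent. A toy sketch (plain u64 values standing in for field elements, with the randomness passed in as a closure):

// Builds a random symmetric coefficient matrix: coeff[i][j] == coeff[j][i].
fn symmetric_coeffs(t: usize, mut random_field_element: impl FnMut() -> u64) -> Vec<Vec<u64>> {
    let mut coeff = vec![vec![0u64; t + 1]; t + 1];
    for i in 0..=t {
        for j in i..=t {
            let c = random_field_element();
            coeff[i][j] = c;
            coeff[j][i] = c; // mirror the upper triangle
        }
    }
    coeff
}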


afck commented on May 26, 2024

Let's actually wait for n - 2 * t instead of t + 1 complete proposals. In cases like n = 2 (and t = 0), we don't want a single node to act as a trusted dealer, and it's reasonable to expect every participating node to make a valid proposal.

We should also add a timeout: If, say, 50 blocks/epochs after a ballot the DKG still hasn't succeeded, the validator set change should be aborted/reverted.


afck commented on May 26, 2024

How much to implement in hbbft

I wonder how much of this we should implement as part of hbbft: We could just provide a few convenience functions to facilitate implementations of the above — if it's based on smart contracts, the implementation itself is certainly outside the scope of this crate.

Alternatively, we could provide a full implementation of a "DynamicHoneyBadger" that would allow adding and removing nodes. It can then be used either directly or as a complete reference implementation of the DKG scheme. It would wrap its own transaction type in an enum, and handle the Propose and Accept transactions accordingly.


afck commented on May 26, 2024

We should also do some research on whether it is secure in this context to use the same key for the common coin (signing things like "coin #3 in ABA for Alice in epoch 42") and for encrypting batches of transactions.

In our setting, you can't make a node sign some arbitrary text: it will only sign coin shares, and a <⅓ adversary alone can't even cause it to send any coin share.
However, the adversary can make the other nodes create a decryption share of an arbitrary ciphertext, albeit only one that was produced by actually encrypting something. Can they exploit that to prematurely learn my signature sk * hash(msg) (in G2) for msg?
My decryption share is sk * u, where u = r * g1 (in G1) for a value r that they know. (Due to the check in #40, I only send u to them after verifying that they knew r.) So I don't see a problem there, but that doesn't mean there is none…


afck commented on May 26, 2024

So DynamicHoneyBadger would need to own the NetworkInfo and be able to modify it. But it could still implement DistAlgorithm, with input type:

enum Input<Tx, NodeUid> {
    /// A user-defined transaction.
    User(Tx),
    /// A vote to add a node.
    Add(NodeUid),
    /// A vote to remove a node.
    Remove(NodeUid),
}

It would create a HoneyBadger<Transaction<Tx, NodeUid>, NodeUid> instance with:

enum Transaction<Tx, NodeUid> {
    /// A user-defined transaction.
    User(Tx),
    /// A vote by an existing node to add a new node.
    Add(NodeUid, NodeUid, Signature),
    /// A vote by an existing node to remove a node.
    Remove(NodeUid, NodeUid, Signature),
    /// A proposal of a bivariate polynomial for key generation.
    Propose(NodeUid, BivarCommit, Vec<Ciphertext>, Signature),
    /// A confirmation by a recipient of a row for key generation.
    Accept(NodeUid, Vec<Ciphertext>, Signature),
}

As soon as a majority of votes for adding or removing a particular node have been output, key generation would be performed. And once that is complete, the HoneyBadger instance would be dropped, probably after draining its transaction buffer, so we can immediately put the transactions back into the new instance. The new one would be started with the modified NetworkInfo, including the added or excluding the removed node.

DynamicHoneyBadger's output could either be just Tx, or something similar to Input, so that the user gets informed whenever the set of participating nodes has changed.

When the user inputs e.g. AddNode(alice), it will take a while until the vote makes it into a batch output by HoneyBadger. This could be sped up either by allowing particular types of messages to be prioritized in HoneyBadger, or by gossiping the votes to other nodes so that they put them into their transaction queues, too.

Edit: This will need another guarantee from HoneyBadger (which I think is already satisfied, thanks to the Agreement termination message):
If all nodes stop after a particular epoch's output, it's still certain that they will all be able to reach that epoch.


afck commented on May 26, 2024

I'm trying to come up with the simplest rules that allow adding and removing nodes and that can't get stuck (e.g. because a new node fails to provide input). A timeout (in terms of number of epochs) could be problematic, if there are a lot of transactions in the buffer and it takes a long time to commit.

Instead, let's say you can change your vote by re-voting. Whenever the to-be-added/-removed node with the majority changes, a new DKG round starts. As above, 2 * t + 1 Accepts make a proposal "complete", and we wait for 2 * t + 1 complete proposals; these numbers always refer to the current network size (before adding or removing a node). However, when adding a node, we additionally wait for that node's proposal to become complete.
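
A minimal sketch of the re-voting rule, assuming votes are simply kept as a map from voter to their latest candidate (the names and the u64 node IDs are illustrative, not the actual hbbft types):

use std::collections::BTreeMap;

type NodeUid = u64; // placeholder for the real node ID type

/// Re-voting just overwrites the voter's previous entry.
fn record_vote(votes: &mut BTreeMap<NodeUid, NodeUid>, voter: NodeUid, candidate: NodeUid) {
    votes.insert(voter, candidate);
}

/// The to-be-added/-removed node that currently has a majority among the
/// current validators, if any. Whenever this changes, a new DKG round starts.
fn current_majority(votes: &BTreeMap<NodeUid, NodeUid>, num_validators: usize) -> Option<NodeUid> {
    let mut counts: BTreeMap<NodeUid, usize> = BTreeMap::new();
    for &candidate in votes.values() {
        *counts.entry(candidate).or_insert(0) += 1;
    }
    counts.into_iter()
        .find(|&(_, count)| 2 * count > num_validators)
        .map(|(candidate, _)| candidate)
}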

As soon as the network changes, all further epochs in the old HoneyBadger instance are ignored and all existing votes removed. A new instance with the new NetworkInfo starts, with all votes reset to None.

