arcturus's People

Contributors

cargodog


Forkers

fiono11, 3for

arcturus's Issues

Precompute coeff_f coefficients

At the time of writing, coeff_f_kp is iteratively computed 3 times during batch verification. These computations are very expensive. It may be better to compute the values once and simply iterate over the precomputed set. This is a time-memory tradeoff, as the entire set of coefficients must then live in memory at once.
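The idea can be sketched as follows (a minimal sketch with u64 stand-ins for Scalars; coeff() is a hypothetical placeholder for the real polynomial work): build the table once, then let each of the three verification passes iterate over it.

```rust
// Hypothetical stand-in for the expensive per-index coefficient computation.
fn coeff(k: u64) -> u64 {
    k.wrapping_mul(2654435761) // placeholder arithmetic, not the real polynomial
}

fn main() {
    let n_coeffs = 8u64;

    // Precompute once; the whole table now lives in memory at once.
    let table: Vec<u64> = (0..n_coeffs).map(coeff).collect();

    // Each of the three verification passes just iterates the table,
    // instead of recomputing every coefficient from scratch.
    for _pass in 0..3 {
        for (k, &c) in table.iter().enumerate() {
            assert_eq!(c, coeff(k as u64));
        }
    }
}
```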

Iteratively compute set coefficients according to Gray code sequence

Overview

Currently, each member of the set is multiplied by a coefficient, computed as a polynomial combination of each proof input's f_ji tensor. For an m-digit proof configuration, this requires m scalar multiplications per set member for each proof being verified. So, if we are verifying p proofs, each with u inputs, over a set of size N = n^m, this requires p * u * m * N multiplications in total, just to compute the set coefficients.

If instead we use Gray-coded indices (each subsequent index differs from the previous index in only one digit), we can leverage this to compute each coefficient iteratively and efficiently from the previous coefficient. To illustrate, consider the following examples over a 3-bit set.

j = [0..n) and i = [0..m)

Example: f_01 refers to the j = 0 & i = 1 position in the f_ji tensor

Each proof will have a different number of inputs, so u will not be constant across proofs, but I assume constant u for simplicity.

Current approach (m = 3, n = 2)

m multiplications for each k

k (dec) k (binary) coeff_k
0 000 f02 * f01 * f00
1 001 f02 * f01 * f10
2 010 f02 * f11 * f00
3 011 f02 * f11 * f10
4 100 f12 * f01 * f00
5 101 f12 * f01 * f10
6 110 f12 * f11 * f00
7 111 f12 * f11 * f10

Optimized approach (m = 3, n = 2)

2 multiplications and one inversion for each k

k (dec) k (Gray) coeff_k (optimized)
0 000 f02 * f01 * f00
1 001 coeff_0 * f10 / f00
2 011 coeff_1 * f11 / f01
3 010 coeff_2 * f00 / f10
4 110 coeff_3 * f12 / f02
5 111 coeff_4 * f10 / f00
6 101 coeff_5 * f01 / f11
7 100 coeff_6 * f00 / f10

Summary

Gray-coded indices reduce the per-coefficient cost from m multiplications to 2 multiplications and one inversion, saving approximately p * u * N * (m - 2) multiplications in a batch verification of p proofs with u inputs each.
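The tables above can be checked mechanically. A sketch (f64 values standing in for field scalars; the table f and helper names are illustrative only) that walks the Gray-code sequence and updates one digit per step:

```rust
fn main() {
    let m = 3usize; // digits per index (n = 2, so N = 2^3 = 8 set members)
    // f[j][i]: illustrative tensor entries (row j, digit position i)
    let f = [[2.0_f64, 3.0, 5.0], [7.0, 11.0, 13.0]];

    // Direct evaluation: m multiplications per index k
    let direct = |k: usize| -> f64 { (0..m).map(|i| f[(k >> i) & 1][i]).product() };

    // Gray-code walk: one multiply and one divide per step
    let mut coeff = direct(0); // k = 0 has Gray code 000
    let mut prev_gray = 0usize;
    for k in 1..(1 << m) {
        let gray = k ^ (k >> 1); // binary-reflected Gray code of k
        let changed = prev_gray ^ gray; // exactly one bit differs
        let i = changed.trailing_zeros() as usize;
        let new_digit = (gray >> i) & 1;
        coeff = coeff * f[new_digit][i] / f[1 - new_digit][i];
        assert!((coeff - direct(gray)).abs() < 1e-6); // matches the direct product
        prev_gray = gray;
    }
}
```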

Need iterator adapter for SizedFlatten

Currently, the code creates SizedFlatten iterators as follows:

let foo = SizedFlatten::new(bar);
let baz = SizedFlatten::new(SizedFlatten::new(foz));

This becomes very verbose, especially when multiple levels of flattening occur. It would be nice to have an iterator adapter to simplify this:

let foo = bar.sized_flatten();
let baz = foz.sized_flatten().sized_flatten();
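A sketch of such an adapter (the SizedFlatten here is a toy stand-in that delegates to std's Flatten, not the crate's real type): a blanket extension trait adds the method to every iterator, the same pattern itertools uses for its adapters.

```rust
use std::iter::Flatten;

// Toy stand-in for the crate's SizedFlatten; it just delegates to std's
// Flatten so the adapter pattern can be demonstrated end to end.
struct SizedFlatten<I: Iterator>(Flatten<I>)
where
    I::Item: IntoIterator;

impl<I: Iterator> SizedFlatten<I>
where
    I::Item: IntoIterator,
{
    fn new(inner: I) -> Self {
        SizedFlatten(inner.flatten())
    }
}

impl<I: Iterator> Iterator for SizedFlatten<I>
where
    I::Item: IntoIterator,
{
    type Item = <I::Item as IntoIterator>::Item;
    fn next(&mut self) -> Option<Self::Item> {
        self.0.next()
    }
}

// The adapter: a blanket extension trait puts .sized_flatten() on every iterator.
trait SizedFlattenExt: Iterator + Sized
where
    Self::Item: IntoIterator,
{
    fn sized_flatten(self) -> SizedFlatten<Self> {
        SizedFlatten::new(self)
    }
}

impl<I: Iterator> SizedFlattenExt for I where I::Item: IntoIterator {}

fn main() {
    let nested = vec![vec![1, 2], vec![3, 4]];
    let flat: Vec<i32> = nested.into_iter().sized_flatten().collect();
    assert_eq!(flat, vec![1, 2, 3, 4]);
}
```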

Compute mu_k terms via iterative exponentiation, instead of keyed hashing

Suggestion from Sarang in #monero-research-lab:

sarang: You can actually do a single hash execution for the `mu` terms instead of one each, and substitute a field multiplication instead
sarang: So instead of `mu_k = hash(stuff,k)` for each `k`
sarang: You set `mu = hash(stuff)` and then define `mu_k = mu^k`
sarang: and you can define the former iteratively
sarang: So if your scalar ops are sufficiently faster than your hash function, you can shave some time off that
sarang: I demo this in the C++ code that I linked
sarang: * sarang updates the preprint too
sarang: Note that there's nothing wrong with the original method, except the speed difference
sarang: s/former/latter

This should be trivial to implement, and should offer some performance benefit over the current Blake2b(mu, k) approach.
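A sketch of the iterative derivation (u128 arithmetic modulo a small prime standing in for curve scalars; mu here is a placeholder rather than a real hash output): hash once, then produce each mu_k with a single field multiplication.

```rust
const P: u128 = 2_305_843_009_213_693_951; // 2^61 - 1, a Mersenne prime

// Naive exponentiation, used only to cross-check the iterative values.
fn pow_mod(base: u128, exp: u32) -> u128 {
    (0..exp).fold(1u128, |acc, _| acc * base % P)
}

fn main() {
    let mu: u128 = 123_456_789; // stand-in for mu = hash(stuff)

    // mu_k = mu^k built iteratively: start at mu^0 = 1, multiply by mu each step.
    // (The protocol's indexing may start at k = 1 instead; adjust as needed.)
    let mus: Vec<u128> = (0..8u32)
        .scan(1u128, |acc, _| {
            let out = *acc;
            *acc = *acc * mu % P;
            Some(out)
        })
        .collect();

    // One field multiplication per term replaces one hash invocation per term.
    for (k, &m) in mus.iter().enumerate() {
        assert_eq!(m, pow_mod(mu, k as u32));
    }
}
```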

docs.rs build fails

The build fails because it is configured to use the SIMD backend, which is not supported in the docs.rs build sandbox.
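One possible fix, assuming the SIMD backend is selected through a default cargo feature (a sketch; this crate's actual feature layout may differ), is to override features for the docs.rs build in Cargo.toml:

```toml
# Hypothetical docs.rs override: build documentation without default
# features so the unsupported SIMD backend is never compiled.
[package.metadata.docs.rs]
no-default-features = true
```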

Error in evaluation of proof inequalities

According to the paper, the verifier should assert that equations (1), (2), (3), (4), & (5) all equal 0. To improve performance, I combined these evaluations into a single inequality, so as to perform only one large multiexponentiation. Unfortunately, this breaks the proof guarantees: it is possible to choose inputs which satisfy the combined inequality while not satisfying one or more of the individual inequalities mandated by the proof protocol.

To illustrate, consider two inequalities:

(a) 2x - 6 = 0
(b) y - 8 = 0

Next, consider joining the two inequalities (a) & (b), to create a new inequality (c):

(c) 2x + y - 14 = 0

It is possible to choose x & y such that (c) is satisfied, but not (a) and/or (b). For example, choosing x == 7 & y == 0 would satisfy (c), but not satisfy (a) or (b). Thus, a verifier which only tests (c) does not actually guarantee that (a) and (b) hold.
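The counterexample is quick to check numerically (a trivial sketch):

```rust
fn main() {
    let (x, y) = (7i32, 0i32);
    assert_eq!(2 * x + y - 14, 0); // combined check (c) is satisfied
    assert_ne!(2 * x - 6, 0);      // (a) fails
    assert_ne!(y - 8, 0);          // (b) fails
}
```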

Cargo build errors

Hello! When I try to cargo build a project with the following Cargo.toml:

curve25519-dalek = "3.0.2"
arcturus = "0.2.1"

it gives me many errors "expected struct FieldElement51, found struct FieldElement2625".

Any idea why? Thanks!

Proof error

I tried to make an example based on yours, but it gives me a proof error. Can you tell me why, please? Thanks!

use arcturus::*;
use curve25519_dalek::ristretto::RistrettoPoint;
use curve25519_dalek::scalar::Scalar;
use merlin::Transcript;
use rand::rngs::OsRng;
use curve25519_dalek::constants::RISTRETTO_BASEPOINT_POINT;

pub const G: RistrettoPoint = RISTRETTO_BASEPOINT_POINT;

fn main() {

    let mut rng = OsRng;

    let gens = ArcturusGens::new(2, 2, 1).unwrap();

    let mut ring: Vec<Output> = Vec::new();

    let sk = Scalar::random(&mut rng);
    let blind = Scalar::random(&mut rng);

    ring.push(Output::new(sk*G, Scalar::from(5u64)*G + blind*G));
    ring.push(Output::new(Scalar::random(&mut rng)*G, Scalar::random(&mut rng)*G));
    ring.push(Output::new(Scalar::random(&mut rng)*G, Scalar::random(&mut rng)*G));
    ring.push(Output::new(Scalar::random(&mut rng)*G, Scalar::random(&mut rng)*G));

    // Indices of UTXOs to spend as transaction inputs
    let idxs = vec![0];

    // Secret data to spend each input UTXO
    let spends = vec![SpendSecret::new(sk, 5, blind)];

    // Secret data for new minted outputs (total mint amounts must balance total input amounts).
    let mints = vec![MintSecret::new(Scalar::random(&mut rng)*G, 5, Scalar::random(&mut rng))];

    // Signer computes the transaction proof
    let mut t = Transcript::new(b"Test proof");
    let proof = gens.prove(&mut t, &ring[..], &idxs[..], &spends[..], &mints[..]).unwrap();

    // The verifier may verify the proof as follows:
    let mut t = Transcript::new(b"Test proof");
    //let proofs = vec![proof];
    assert!(gens.verify(&mut t, &ring[..], proof).is_ok());

}

Use num_traits::pow to efficiently compute random mu_k values for prover

As of #26, the prover tediously computes the mu_l exponent for each index l in his proof. This could be optimized in future work to use a more efficient exponentiation algorithm (e.g. square-and-multiply), to compute each mu_l in a single iterative computation, or both.

As it happens, num_traits::pow exists for just this purpose, and would be the ideal solution to this problem. Unfortunately, curve25519-dalek does not implement the necessary num_traits::identities traits. The ideal solution would involve implementing num_traits for curve25519-dalek and submitting a patch upstream.

Alternatively, I could consider wrapping the Scalar type into a local Wscalar type, and implement the traits locally. This is not ideal, but may be quicker than pushing changes upstream.
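For reference, num_traits::pow uses exponentiation by squaring, which can be sketched over a toy modular field (u128 arithmetic modulo a Mersenne prime standing in for Scalar):

```rust
const P: u128 = 2_305_843_009_213_693_951; // 2^61 - 1, a Mersenne prime

// Square-and-multiply: base^exp in O(log exp) multiplications. This is the
// strategy num_traits::pow applies to any type implementing One + Mul.
fn pow_mod(mut base: u128, mut exp: u64) -> u128 {
    let mut acc = 1u128;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn main() {
    assert_eq!(pow_mod(3, 7), 2187); // 3^7
    assert_eq!(pow_mod(2, 61), 1);   // 2^61 ≡ 1 (mod 2^61 - 1)
}
```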

Need multi-threaded support

The proof and verification API can be made multi-threaded (in std builds), which would greatly improve proof and verification time.
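A minimal std::thread sketch of the shape this could take (verify_one is a hypothetical placeholder for per-proof verification; a real implementation would more likely use rayon or scoped threads):

```rust
use std::thread;

// Hypothetical stand-in for verifying one proof; returns true on success.
fn verify_one(x: u64) -> bool {
    x % 2 == 0 // placeholder check, not real proof verification
}

fn main() {
    let proofs: Vec<u64> = (0..8u64).map(|i| i * 2).collect();
    let n_threads = 4;
    let chunk = proofs.len() / n_threads; // assume it divides evenly here

    // Split the batch into chunks and verify each chunk on its own thread.
    let mut handles = Vec::new();
    for part in proofs.chunks(chunk) {
        let part = part.to_vec(); // move an owned piece into the thread
        handles.push(thread::spawn(move || part.iter().all(|&p| verify_one(p))));
    }
    let all_ok = handles.into_iter().all(|h| h.join().unwrap());
    assert!(all_ok);
}
```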

Tests need improvement

This issue is to enumerate the test cases that should exist and add tests for each case.

Test cases:

arcturus::Output::new()

  • Confirm struct members match input args after creation (i.e. no arg mixups)
  • Confirm output equality matches expected

arcturus::SpendSecret::new()

  • Confirm struct members match input args after creation (i.e. no arg mixups)
  • Confirm scalars are canonical

arcturus::SpendSecret::to_output()

  • Confirm output pubkey matches expected
  • Confirm output commitment matches expected

arcturus::SpendSecret::get_tag()

  • Confirm resulting tag is correct

arcturus::MintSecret::new()

  • Confirm struct members match input args after creation (i.e. no arg mixups)
  • Confirm scalars are canonical

arcturus::MintSecret::to_output()

  • Confirm output pubkey matches expected
  • Confirm output commitment matches expected

arcturus::ArcturusProof::count_spends()

  • Confirm count is correct

arcturus::ArcturusProof::count_mints()

  • Confirm count is correct

arcturus::ArcturusProof::spend_tags()

  • Confirm tags match expected

arcturus::ArcturusProof::mints()

  • Confirm mints are correct

arcturus::ArcturusGens::new()

  • Confirm min & max parameters are asserted
  • Confirm generators are deterministic
  • Confirm dimensions of generator matrix

arcturus::ArcturusGens::ring_size()

  • Confirm size is correct for multiple generator sets of differing dimension

arcturus::ArcturusGens::commit()

  • Confirm commitment is correct

arcturus::ArcturusGens::prove()

  • Confirm min & max parameters are asserted
  • Confirm input array dimensions are asserted
  • Confirm output proof secrets are non-deterministic
  • Confirm output proof array dimensions match expected
  • Confirm output proof mints match expected
  • Confirm output proof tags match expected
  • Confirm valid proof verifies

arcturus::ArcturusGens::verify()

  • Confirm min & max parameters are asserted
  • Confirm input array dimensions are asserted
  • Confirm valid proof verifies
  • Confirm invalid proof fails verification

arcturus::ArcturusGens::verify_batch()

  • Confirm min & max parameters are asserted
  • Confirm input array dimensions are asserted
  • Confirm valid proof verifies
  • Confirm invalid proof fails verification
  • Confirm able to batch verify proofs with differing dimensions (e.g. different u)

arcturus::derive_generator

  • Confirm generators match expected
  • Confirm generators are unique

arcturus::convert_base

  • Confirm base decomposition matches expected

arcturus::CycleTensorPolyEvals

  • Confirm polynomial evaluations match expected

arcturus::compute_p_j

  • Confirm output values match expected

arcturus::util::ScalarExp

  • Confirm iterator values match expected
  • Confirm iterator size_hint matches expected

arcturus::util::SizedFlatten

  • Confirm iterator values match expected
  • Confirm iterator size_hint matches expected
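As a shape for these cases, a toy version of the arcturus::util::ScalarExp test (u64 powers standing in for Scalars; the real iterator's starting index may differ):

```rust
// Toy ScalarExp: yields x^0, x^1, x^2, ... in sequence, mirroring the
// crate's utility but over u64 so the test shape is self-contained.
struct ScalarExp {
    x: u64,
    next: u64,
}

impl Iterator for ScalarExp {
    type Item = u64;
    fn next(&mut self) -> Option<u64> {
        let out = self.next;
        self.next *= self.x;
        Some(out)
    }
}

fn main() {
    // Confirm iterator values match expected
    let powers: Vec<u64> = ScalarExp { x: 3, next: 1 }.take(4).collect();
    assert_eq!(powers, vec![1, 3, 9, 27]);
}
```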

Need separate API for verify batch and verify single

Currently there is only one verification function, which performs batch verification. It would be nice to have separate verify() and verify_batch() APIs, to simplify usage for callers that do not need batch verification.
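The split might look like the following (hypothetical signatures over a placeholder Proof type, not the crate's actual API), with the single-proof path as a thin wrapper over the batched one:

```rust
struct Proof; // placeholder proof type

// Hypothetical batched API: verifies all proofs in one pass.
fn verify_batch(proofs: &[Proof]) -> Result<(), String> {
    if proofs.is_empty() {
        return Err("no proofs to verify".to_string());
    }
    Ok(()) // placeholder for the real batched multiexponentiation
}

// Single-proof convenience wrapper: just a batch of one.
fn verify(proof: &Proof) -> Result<(), String> {
    verify_batch(std::slice::from_ref(proof))
}

fn main() {
    assert!(verify(&Proof).is_ok());
    assert!(verify_batch(&[]).is_err());
}
```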

tell_size() hack causes suboptimal multiscalar multiplication

Since multiscalar multiplication requires ExactSizeIterator, but that trait does not work well here, I added a tell_size(0) hack to override the size-hint checks within the multiscalar_mult() implementation. Unfortunately, this hack causes multiscalar_mult() to always choose the Straus algorithm and never the Pippenger algorithm for multiscalar multiplication (see here).

This can carry a serious performance penalty for proofs over large rings. Pippenger claims >40% improvement for certain proofs with >190 point multiplications to perform. I suspect even greater improvement for Arcturus proofs, as a proof may easily exceed 190 multiplications by several orders of magnitude.

Missing readme

This repository needs a good readme with getting-started info, as well as a disclaimer about early/unaudited code.

Missing cargo docs

Add documentation to the codebase, and link to the generated cargo docs

Prove & verify methods should accept ring as slice instead of iterator

The first implementation of ArcturusGens::prove() & ArcturusGens::verify() operates on the ring as an iterator of members. This was to allow the caller flexibility, in case the ring is not stored in a contiguous array (e.g. an iterator of references to outputs in blocks of a blockchain). Unfortunately, because ExactSizeIterator is broken for these iterators, this has led to some ugly hacks (1, 2).

Furthermore, the lack of ExactSizeIterator makes parallelization very inefficient. It may be desirable to break proof generation or verification into K small pieces to be processed in parallel. In the current design, this would require splitting the ring iterator (and any iterators derived from it) into K sub-iterators. Since ExactSizeIterator is not functioning, that requires cloning the iterator and advancing it N/K elements to create each successive sub-iterator, which is very expensive when the iterators involve heavy mathematics.

Given these issues, I would like to redesign this interface to require the ring argument to be a slice. This solves both of the above problems, because slices always have an exactly known size. It will be a breaking change and will restrict the caller somewhat, but IMO the benefits justify the change.
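The practical difference can be sketched with plain std types: slices split in O(1) with exact sizes on both halves, while adapted iterators lose the exact size and must be advanced element by element:

```rust
fn main() {
    let ring: Vec<u32> = (0..16).collect();

    // Slices: O(1) split, with exact sizes known on both halves.
    let (left, right) = ring.split_at(ring.len() / 2);
    assert_eq!((left.len(), right.len()), (8, 8));

    // An adapted iterator loses the exact size: only loose bounds remain,
    // and reaching the midpoint means evaluating every element before it.
    let it = ring.iter().filter(|&&x| x < 100);
    assert_eq!(it.size_hint(), (0, Some(16)));
}
```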
