cargodog / arcturus

A pure rust implementation of Arcturus proofs for confidential transactions.

License: MIT License
At time of writing, `coeff_f_kp` is iteratively computed 3 times during batch verification. These computations are very expensive. It may be better to compute these values once and simply iterate over the precomputed values. This is a time-memory tradeoff, as the entire set of coefficients must then live in memory at once.
Currently, each member of the set is multiplied by a coefficient, computed as a polynomial combination of each proof input's `f_ji` tensor. For an `m` digit proof configuration, this requires `m` scalar multiplications per member of the set, for each proof being verified. So, if we are verifying `p` proofs, each with `u` inputs, over a set of size `N = n^m`, this requires `p * u * m * N` multiplications in total, just to compute the set coefficients.
If instead we use Gray coded indices (each subsequent index differs from the previous index by only one digit), we can leverage this to iteratively compute each coefficient efficiently from the previous coefficient. To illustrate this, consider the following examples over a 3 bit set, where `j = [0..n)` and `i = [0..m)`. Example: `f_01` refers to the `j = 0`, `i = 1` position in the `f_ji` tensor.
Each proof will have a different number of inputs, so `u` will not be constant across proofs, but I assume constant `u` for simplicity.
Binary coded indices (`m = 3`, `n = 2`): `m` multiplications for each `k`
| k (dec) | k (binary) | coeff_k |
|---|---|---|
| 0 | 000 | f02 * f01 * f00 |
| 1 | 001 | f02 * f01 * f10 |
| 2 | 010 | f02 * f11 * f00 |
| 3 | 011 | f02 * f11 * f10 |
| 4 | 100 | f12 * f01 * f00 |
| 5 | 101 | f12 * f01 * f10 |
| 6 | 110 | f12 * f11 * f00 |
| 7 | 111 | f12 * f11 * f10 |
Gray coded indices (`m = 3`, `n = 2`): 2 multiplications and one inversion for each `k`
| k (dec) | k (Gray) | coeff_k (optimized) |
|---|---|---|
| 0 | 000 | f02 * f01 * f00 |
| 1 | 001 | coeff_0 * f10 / f00 |
| 2 | 011 | coeff_1 * f11 / f01 |
| 3 | 010 | coeff_2 * f00 / f10 |
| 4 | 110 | coeff_3 * f12 / f02 |
| 5 | 111 | coeff_4 * f10 / f00 |
| 6 | 101 | coeff_5 * f01 / f11 |
| 7 | 100 | coeff_6 * f00 / f10 |
Gray coded indices allow saving approximately `p * u * m/2` multiplications in a batch verification of `p` proofs with `u` inputs each.
Currently, the code creates `SizedFlatten` iterators as follows:

```rust
let foo = SizedFlatten::new(bar);
let baz = SizedFlatten::new(SizedFlatten::new(foz));
```

This becomes very verbose, especially when multiple levels of flattening occur. It would be nice to have an iterator adapter to simplify this, as such:

```rust
let foo = bar.sized_flatten();
let baz = foz.sized_flatten().sized_flatten();
```
Suggestion from Sarang in #monero-research-lab:
sarang: You can actually do a single hash execution for the `mu` terms instead of one each, and substitute a field multiplication instead
sarang: So instead of `mu_k = hash(stuff,k)` for each `k`
sarang: You set `mu = hash(stuff)` and then define `mu_k = mu^k`
sarang: and you can define the former iteratively
sarang: So if your scalar ops are sufficiently faster than your hash function, you can shave some time off that
sarang: I demo this in the C++ code that I linked
sarang: * sarang updates the preprint too
sarang: Note that there's nothing wrong with the original method, except the speed difference
sarang: s/former/latter
This should be trivial to implement and should offer some performance benefit over the current `Blake2b(mu, k)` approach.
The docs.rs build fails because the build is configured to use the SIMD backend, which is not supported in the docs.rs build sandbox.
According to the paper, the verifier should assert that equations (1), (2), (3), (4), & (5) all equal 0. To improve performance, I combined these evaluations into one single equation, so as to perform only one large multiexponentiation. Unfortunately, this breaks the proof guarantees. It is possible to choose inputs which satisfy the combined equation, while not satisfying one or more of the individual equations mandated by the proof protocol.

To illustrate, consider two equations:

(a) 2x - 6 = 0
(b) y - 8 = 0

Next, consider joining the two equations (a) & (b) to create a new equation (c):

(c) 2x + y - 14 = 0

It is possible to choose `x` & `y` such that (c) is satisfied, but not (a) and/or (b). For example, choosing `x == 7` & `y == 0` satisfies (c), but satisfies neither (a) nor (b). Thus, a verifier which only tests (c) does not actually guarantee that (a) and (b) hold.
Hello! When I try to `cargo build` a project with the following Cargo.toml:

```toml
curve25519-dalek = "3.0.2"
arcturus = "0.2.1"
```

it gives me many errors: "expected struct `FieldElement51`, found struct `FieldElement2625`".

Any idea why? Thanks!
This issue is to enumerate the benchmarks that should exist and implement them.
I tried to make an example based on yours, but it gives me a proof error. Can you tell me why, please? Thanks!
```rust
use arcturus::*;
use curve25519_dalek::constants::RISTRETTO_BASEPOINT_POINT;
use curve25519_dalek::ristretto::RistrettoPoint;
use curve25519_dalek::scalar::Scalar;
use merlin::Transcript;
use rand::rngs::OsRng;

pub const G: RistrettoPoint = RISTRETTO_BASEPOINT_POINT;

fn main() {
    let mut rng = OsRng;
    let gens = ArcturusGens::new(2, 2, 1).unwrap();

    let mut ring: Vec<Output> = Vec::new();
    let sk = Scalar::random(&mut rng);
    let blind = Scalar::random(&mut rng);
    ring.push(Output::new(sk * G, Scalar::from(5u64) * G + blind * G));
    ring.push(Output::new(Scalar::random(&mut rng) * G, Scalar::random(&mut rng) * G));
    ring.push(Output::new(Scalar::random(&mut rng) * G, Scalar::random(&mut rng) * G));
    ring.push(Output::new(Scalar::random(&mut rng) * G, Scalar::random(&mut rng) * G));

    // Indices of UTXOs to spend as transaction inputs
    let idxs = vec![0];

    // Secret data to spend each input UTXO
    let spends = vec![SpendSecret::new(sk, 5, blind)];

    // Secret data for new minted outputs (total mint amounts must balance total input amounts)
    let mints = vec![MintSecret::new(Scalar::random(&mut rng) * G, 5, Scalar::random(&mut rng))];

    // Signer computes the transaction proof
    let mut t = Transcript::new(b"Test proof");
    let proof = gens.prove(&mut t, &ring[..], &idxs[..], &spends[..], &mints[..]).unwrap();

    // The verifier may verify the proof as follows:
    let mut t = Transcript::new(b"Test proof");
    //let proofs = vec![proof];
    assert!(gens.verify(&mut t, &ring[..], proof).is_ok());
}
```
As of #26, the prover tediously computes the `mu_l` exponent for each index `l` in his proof. This could be optimized in future work to use a more efficient exponentiation algorithm (e.g. square-and-multiply), to compute each `mu_l` in a single iterative computation, or both.

As it happens, `num_traits::pow` exists for just this purpose, and would be the ideal solution to this problem. Unfortunately, `curve25519-dalek` does not implement the necessary `num_traits::identities` traits. The ideal solution would involve implementing the `num_traits` traits for `curve25519-dalek` and submitting a patch upstream.

Alternatively, I could consider wrapping the `Scalar` type in a local `Wscalar` type and implementing the traits locally. This is not ideal, but may be quicker than pushing changes upstream.
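For reference, square-and-multiply exponentiation looks like the following, here over integers mod a toy prime (the real code would square and multiply curve25519 `Scalar`s, which is what `num_traits::pow` does generically given the right trait impls):

```rust
// Toy prime modulus standing in for the scalar field.
const P: u128 = 1_000_000_007;

// Square-and-multiply: O(log exp) multiplications instead of O(exp).
fn pow_mod(mut base: u128, mut exp: u64) -> u128 {
    base %= P;
    let mut acc: u128 = 1;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P; // multiply in the base on set exponent bits
        }
        base = base * base % P; // square the running base every step
        exp >>= 1;
    }
    acc
}

fn main() {
    assert_eq!(pow_mod(2, 10), 1024);
    assert_eq!(pow_mod(5, 3), 125);
    println!("{}", pow_mod(7, 123));
}
```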
The proof and verification API can be made multi-threaded (in `std` builds), which would greatly improve proof and verification time.
This issue is to enumerate the test cases that should exist and add tests for each case.
Hi,
I identified a break in the dual-target discrete log hardness assumption used by Arcturus.
Currently there is only one verification function, which performs batch verification. It would be nice to have separate `verify()` and `verify_batch()` APIs, to simplify usage for users that do not require batch verification.
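One possible shape for the split, with placeholder types (the names and signatures here are illustrative, not the crate's actual API): `verify()` simply delegates to `verify_batch()` with a one-element slice, so there is a single verification code path.

```rust
// Placeholder types standing in for the crate's proof and error types.
struct Proof;
#[derive(Debug)]
struct VerifyError;

// Batch entry point; the existing batched multiexponentiation would live here.
fn verify_batch(proofs: &[Proof]) -> Result<(), VerifyError> {
    if proofs.is_empty() {
        return Err(VerifyError);
    }
    Ok(())
}

// Single-proof convenience wrapper: no Vec allocation needed at the call site.
fn verify(proof: &Proof) -> Result<(), VerifyError> {
    verify_batch(std::slice::from_ref(proof))
}

fn main() {
    assert!(verify(&Proof).is_ok());
    assert!(verify_batch(&[]).is_err());
}
```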
Since multiscalar multiplication requires `ExactSizeIterator`s, but that trait does not work well here, I added a hack `tell_size(0)` trait to override the size hint checks within the `multiscalar_mult()` implementation. Unfortunately, this hack causes `multiscalar_mult()` to always choose the Straus algorithm and never the Pippenger algorithm for multiscalar multiplication (see here).

This can have a serious performance penalty for proofs over large rings. Pippenger claims >40% improvement for certain proofs with >190 point multiplications to perform. I suspect even greater improvement for Arcturus proofs, as a proof may easily exceed 190 multiplications by several orders of magnitude.
Update doc examples and add top level crate docs
The prove API should return an error if the user attempts to create a proof in which the spend and mint values do not balance.
`ArcturusGens::tensor` does something that could be done more simply as a macro, or by directly building iterators in-place. It would be better not to have this as a method on the `ArcturusGens` object, as tensors are not related to the generators (or vice versa).
Currently, the code has several instances of this pattern, which can be replaced with the simpler `IntoIter::new([x, y])` syntax. See: rust-lang/rust#77514
This repository needs a good readme, with info to get started, as well as a disclaimer about early/unaudited code
Add documentation to the codebase, and link to the generated cargo docs
The first implementation of `ArcturusGens::prove()` & `ArcturusGens::verify()` operates on the ring as an iterator of members. This was to allow flexibility to the caller, in case the ring was not stored in a contiguous array (e.g. an iterator of references to outputs in blocks of a blockchain). Unfortunately, because `ExactSizeIterator` is broken, this has led to some ugly hacks (1, 2).

Furthermore, the lack of `ExactSizeIterator` makes parallelization very inefficient. It may be desirable to break up proof generation or verification into `K` small pieces to be processed in parallel. In the current design, this would require breaking the ring iterator (and any iterators derived from it) into `K` sub-iterators. Unfortunately, since `ExactSizeIterator` is not functioning, this requires cloning the iterator and evaluating it `N/K` times to create each next sub-iterator. This is very expensive, as some of the iterators involve very expensive mathematics.

Given these issues, I would like to redesign this interface to require the `ring` argument to be a slice. This will solve both of the above problems, because slices always have an exactly known size. This will be a breaking change, and will restrict the caller somewhat, but IMO the benefits justify the change.