arkworks-rs / r1cs-std
R1CS constraints for bits, fields, and elliptic curves
Home Page: https://www.arkworks.rs
License: Apache License 2.0
In alloc.rs, we have the following implementation for Vec<T>:
/// This blanket implementation just allocates variables in `Self`
/// element by element.
impl<I, F: Field, A: AllocVar<I, F>> AllocVar<[I], F> for Vec<A> {
    fn new_variable<T: Borrow<[I]>>(
        cs: impl Into<Namespace<F>>,
        f: impl FnOnce() -> Result<T, SynthesisError>,
        mode: AllocationMode,
    ) -> Result<Self, SynthesisError> {
        let ns = cs.into();
        let cs = ns.cs();
        let mut vec = Vec::new();
        for value in f()?.borrow().iter() {
            vec.push(A::new_variable(cs.clone(), || Ok(value), mode)?);
        }
        Ok(vec)
    }
}
However, in setup mode, assignments are missing, and we expect all executions of f to be wrapped, layer by layer, inside the closures passed to new_variable. But the current implementation does not respect setup mode: it runs f and unwraps its result even in setup mode.
There seems to be no easy solution. One may want to unwrap f() inside each A::new_variable, but f is FnOnce, so calling f()? inside the closure for each value in the Vec does not work.
Let me describe some thoughts on how to solve it.
First, we can let new_variable break the "fourth wall" and check whether the ConstraintSystem being referenced is in setup mode. This would allow it to treat the setup-mode case differently. However, this is not enough, because it also needs to learn the size of the Vec. If f() cannot be run and unwrapped, there is no way for it to learn the size.
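For concreteness, here is a minimal sketch of that approach, written as a variant of the snippet above. It relies on is_in_setup_mode() from ark-relations, and the unimplemented!() arm marks exactly where the unknown length blocks us:

impl<I, F: Field, A: AllocVar<I, F>> AllocVar<[I], F> for Vec<A> {
    fn new_variable<T: Borrow<[I]>>(
        cs: impl Into<Namespace<F>>,
        f: impl FnOnce() -> Result<T, SynthesisError>,
        mode: AllocationMode,
    ) -> Result<Self, SynthesisError> {
        let ns = cs.into();
        let cs = ns.cs();
        if cs.is_in_setup_mode() {
            // Here we would like to allocate unassigned variables without
            // running `f`, but the number of elements is unknown.
            unimplemented!("cannot learn the Vec length without running f")
        } else {
            f()?.borrow()
                .iter()
                .map(|value| A::new_variable(cs.clone(), || Ok(value), mode))
                .collect()
        }
    }
}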
So this becomes a tangled situation. Going further: did we use this Vec<T> blanket implementation anywhere?
Since #117 fixed a bug, it'd be nice to have a new release. The commit has been in master for about a year.
I'd like some quick information on how to put together an end-to-end example using arkworks.
Taking a look at the examples, I was able to get about this far:
fn prove_sum() {
    let values = vec![0u128, 1, 2, 3];
    let sum: u128 = values.iter().sum();
    let cs = ConstraintSystem::<Fr>::new_ref();
    // `var` is a user-defined helper that wraps `new_variable` for the
    // given allocation mode (Input / Witness / Constant).
    let sum_i = var(&cs, &sum, Input);
    let mut sum_v = var(&cs, &0, Constant);
    for value in values.iter() {
        sum_v += var(&cs, value, Witness);
    }
    sum_i.enforce_equal(&sum_v).unwrap();
    cs.finalize();
    assert!(cs.is_satisfied().unwrap());
}
This creates a constraint system where the sum of all the witnessed data is constrained to equal a publicly known sum. Great! But now what? I need to be able to construct the same constraint system without any data, in a way that could verify, for some commitment and public inputs, that the constraint system is satisfied. I also need to be able to somehow export a commitment using this constraint system for a given set of input data.
It would be great to have some documentation on how to solve common problems with arkworks, how the code is generally laid out, and just enough information to let people answer the right questions.
As pointed out by @daira in #79 (comment), we should have NonNativeFieldVar automatically track when a reduction is needed, and only reduce when necessary (e.g., when converting to bytes/bits). What do you think, @weikengchen?
There's a discrepancy between input variable allocation and constraint-field conversion for the short Weierstrass curve group, specifically in how the infinity marker's value field is used.
For the curve group, it's either zero or one depending on the value of the infinity marker. Please see the following trace:
However, in the r1cs variant it's always F::one, see: https://github.com/arkworks-rs/r1cs-std/blob/master/src/groups/curves/short_weierstrass/mod.rs#L219
For the algebra repo, the behavior seems correct. I see that for r1cs there is a conversion after allocation here, but since the FpVar was already allocated as FpVar::one, I get a conflict during proof verification.
Reproduces in v0.3.0 and the latest master.
short_weierstrass::G1Var with non-infinity value
In both the SW code and the TE code, we should use mixed addition when adding in constants, instead of treating them like generic points. This should save us some multiplications, at the cost of some duplication.
We should then change the scalar multiplication code to iterate in big-endian, to take maximum advantage of this.
We should add implementations of Add, Sub, CheckedSub, and CheckedAdd for Uints. These are common functions one needs when working with Uints. Currently there only exists AddMany.
CheckedAdd and CheckedSub should be unsatisfiable if there is an overflow. I think the simplest checked arithmetic implementation would be to just compute the final carry bit and ensure it's 0, as in the sketch below.
(The checked and non-checked methods can be implemented in separate PRs!)
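For illustration, a minimal sketch of the carry idea, written against FpVar rather than a Uint gadget (the helper name and the size parameter are assumptions; it assumes size is well below MODULUS_BITS):

use ark_ff::PrimeField;
use ark_r1cs_std::{boolean::Boolean, eq::EqGadget, fields::fp::FpVar, ToBitsGadget};
use ark_relations::r1cs::SynthesisError;

// Hypothetical helper: add two `size`-bit values and make overflow
// unsatisfiable by forcing every bit above `size` to be zero.
fn checked_add<F: PrimeField>(
    a: &FpVar<F>,
    b: &FpVar<F>,
    size: usize,
) -> Result<FpVar<F>, SynthesisError> {
    let sum = a + b;
    // `to_bits_le` constrains a full bit decomposition of `sum`.
    let bits = sum.to_bits_le()?;
    // If the true sum needs more than `size` bits, some higher bit is 1,
    // and this loop makes the constraint system unsatisfiable.
    for bit in &bits[size..] {
        bit.enforce_equal(&Boolean::constant(false))?;
    }
    Ok(sum)
}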
This feature request proposes an implementation of NonNativePointVar, which extends NonNativeFieldVar to points.
I thought that we previously decided to rename nonnative to field_simulation, and I sort of like the latter (as you can see, I have been widely using "field simulation" when talking about nonnative field vars). I see that we stuck with the old name, which is more established.
There seems to be a rising demand to use nonnative tools in production. The potential benefit is that it may help developers handle nonnative SW, TE, and M points in a more unified way.
NonNativePointVar would likely just be a "C++ STL"-like wrapper around NonNativeFieldVar.
Currently, domain.query_position_to_coset returns all elements in the coset. We should make it simply return the updated gen and offset instead, because users can use gen and offset to do some custom optimization.
  pub fn query_position_to_coset(
      &self,
      query_pos: &[Boolean<F>],
      coset_dim: u64,
- ) -> Result<Vec<FpVar<F>>, SynthesisError> {
+ ) -> Result<Self, SynthesisError> {
This issue is to discuss how to make NonNativeFieldVar a drop-in replacement for FieldVar in curve operations.
One of the reasons that we implemented FieldVar for NonNativeFieldVar is to allow it to be a drop-in replacement of FpVar, so that one doesn't need to build a hierarchy of nonnative primitives.
This, however, is not covered by the current implementation, see:
https://github.com/arkworks-rs/r1cs-std/blob/master/src/groups/curves/short_weierstrass/mod.rs#L44
#[derive(Derivative)]
#[derivative(Debug, Clone)]
#[must_use]
pub struct ProjectiveVar<
    P: SWModelParameters,
    F: FieldVar<P::BaseField, <P::BaseField as Field>::BasePrimeField>,
> where
    for<'a> &'a F: FieldOpsBounds<'a, P::BaseField, F>,
{
    ...
}
It would be ideal, though, to lift this restriction. However, we want to make sure that we don't change how existing code is written---it would be highly inconvenient to use
pub struct ProjectiveVar<
    P: SWModelParameters,
    F: FieldVar<P::BaseField, CF>,
    CF: PrimeField,
>
because it introduces one more param CF.
A possible workaround, though not favorable, is to define GeneralizedProjectVar<P, F, CF> and let Projective<P, F> = GeneralizedProjectVar<P, F, <P::BaseField as Field>::BasePrimeField>. This is not favored, since nonnative fields are not important enough to deserve this level of generality.
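For reference, a rough sketch of that workaround (illustrative only; names follow the snippet above, and the PhantomData is just to satisfy the compiler):

use ark_ec::{ModelParameters, SWModelParameters};
use ark_ff::{Field, PrimeField};
use ark_r1cs_std::fields::{FieldOpsBounds, FieldVar};
use core::marker::PhantomData;

pub struct GeneralizedProjectVar<
    P: SWModelParameters,
    F: FieldVar<P::BaseField, CF>,
    CF: PrimeField,
> where
    for<'a> &'a F: FieldOpsBounds<'a, P::BaseField, F>,
{
    // Same coordinates as today's ProjectiveVar.
    pub x: F,
    pub y: F,
    pub z: F,
    _params: PhantomData<(P, CF)>,
}

// Recover the current two-parameter interface as an alias.
pub type ProjectiveVar<P, F> = GeneralizedProjectVar<
    P,
    F,
    <<P as ModelParameters>::BaseField as Field>::BasePrimeField,
>;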
We need to think about what would be the best way to do this.
In Aleo, when Howard implemented the Bowe-Hopwood Pedersen circuit, he discovered a weird phenomenon:
It is surprising that the second case, which actually does slightly more work, reduces the constraint count by 1.
(Note: this is over a slightly different version, snarkVM. I have the feeling that the tradeoff in arkworks-rs may be slightly different. The two may be the same.)
This is because, when we are doing three_bit_cond_neg_lookup where b[0] or b[1] is not a constant but b[2] is a constant, we have one unnecessary constraint that is related to b[2].
// enforce y * (1 - 2 * b_2) == res
b.cs().enforce_constraint(
    y_lc.clone(),
    b[2].lc() * F::from(2u64).neg() + (F::one(), Variable::One),
    lc!() + result.variable,
)?;
However, this call to enforce a constraint can be avoided if, when b[2] is a constant, the code is written differently: in that case, result is a simple linear combination of y.
What happens is that, because 97 * 8 % 3 = 2, adding a free byte at the beginning takes advantage of this dummy constraint, bringing the count down by one. Indeed, it should not be the case.
master
It may be slightly complicated, but one can look at the three_bit_cond_neg_lookup implementation for FpVar and notice that if b[0] or b[1] is not a constant but b[2] is a constant, it goes to the implementation for AllocatedFp, in which case the above constraint, which can sometimes be avoided, is never omitted.
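A minimal sketch of the suggested special case, at the FpVar level for illustration (the real code works with linear combinations inside AllocatedFp; the helper name is hypothetical):

use ark_ff::PrimeField;
use ark_r1cs_std::{boolean::Boolean, fields::{fp::FpVar, FieldVar}};
use ark_relations::r1cs::SynthesisError;

// When b_2 is a known constant, y * (1 - 2 * b_2) is just y or -y,
// a plain linear combination, so no constraint needs to be enforced.
fn negate_if_constant<F: PrimeField>(
    y: &FpVar<F>,
    b2: &Boolean<F>,
) -> Option<Result<FpVar<F>, SynthesisError>> {
    if let Boolean::Constant(b2) = b2 {
        Some(if *b2 { y.negate() } else { Ok(y.clone()) })
    } else {
        // Fall through to the existing enforce_constraint path.
        None
    }
}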
The Geppetto paper provides an interesting optimization that can provide savings for the unused branch of a conditional select. Namely, since the variable in the unused branch is not used further, we can assign it a value of zero (if there are no other checks involving that variable). This can help reduce proving cost, since the assignment to the unused variable would not contribute to any MSMs.
Hi,
It seems like you split the scalar bits into two parts at the specific index G::Base::NUM_BITS - 2; after that, the first part is safe to apply addition upon affine coordinates, while the second one has to use the addition formula upon projective coordinates.
The question is: why does this specific index work?
Suppose that addition is defined like Acc = Acc + P, and the prime modulus is q.
It is still not clear why restricting the scalar k to the lower range [0, G::Base::NUM_BITS - 2) would make sure that point Acc and point P are constrained within a safe distance, namely that point Acc is [2, q - 1) times point P, or point P is [2, q - 1) times point Acc. How about the other ranges, such as [0, G::Base::NUM_BITS - 1) or [0, G::Base::NUM_BITS - 3) or ...?
I ran a test using the curve y^2 = x^3 + 4x + 20 over F_29 (5 bits long), whose order is 37 (6 bits long). It turns out that the edge cases (Acc == P or Acc == -P) always appear at the highest bit (k = 36 or 37). Indeed it's safe within bits [0, 3), but it's also safe within bits [0, 4).
I'd appreciate it if you guys could give some hints about this!
The current implementation of addmany has a number of bounds:
https://github.com/arkworks-rs/r1cs-std/blob/master/src/bits/uint.rs#L184
// Make some arbitrary bounds for ourselves to avoid overflows
// in the scalar field
assert!(F::Params::MODULUS_BITS >= 2 * $size);
assert!($size * operands.len() <= F::Params::MODULUS_BITS as usize);
However, these bounds seem too strict. This part of the code is meant for addition, yet the bounds here look like they were chosen for multiplication.
To make sure we don't try to publish/tag twice, this is a separate task.
Once #109 is merged, the following should still be done:
tag commits on master & publish both crates with:
git tag v0.4.0-alpha.1
git push origin v0.4.0-alpha.1
cargo release publish
CondSelectGadget's API is
fn conditionally_select(
    cond: &Boolean<F>,
    true_val: &Self,
    false_val: &Self,
)
whereas EqGadget's API is
fn conditional_enforce_equal(
    &self,
    other: &Self,
    should_enforce: &Boolean<F>,
) -> Result<(), SynthesisError> {
I suggest we change EqGadget's conditional methods to be consistent with CondSelectGadget, and have the condition be the first parameter.
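Concretely, the change would look something like this sketch (signature only; a breaking change for implementors):

fn conditional_enforce_equal(
    &self,
    should_enforce: &Boolean<F>,
    other: &Self,
) -> Result<(), SynthesisError>;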
How many constraints should it take to compute x^{255}? How many constraints should it take to compute x^{257}?
In our current implementation, the cost of powering by a constant is as follows:
L = length of the binary representation
N = number of ones in the binary representation
cost = (L - 1) + (N - 1)
So, powering by 255 is more expensive than powering by 257.
Yet, to compute x^{255}, why not compute x^{256} and then go down to x^{255}, which only takes one more constraint?
A more efficient powering-by-constant would examine the NAF form (https://en.wikipedia.org/wiki/Non-adjacent_form), which considers not just the binary representation, but a representation over the alphabet {-1, 0, 1}. This indeed gives better performance for powering by a constant.
Enhance powering by a constant by first computing the NAF form and then writing down the constraints accordingly; see the sketch below.
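A minimal sketch of the standard NAF computation for a constant exponent (illustrative; not code from this repo). Digits are least-significant first, drawn from {-1, 0, 1}:

fn naf(mut k: u128) -> Vec<i8> {
    let mut digits = Vec::new();
    while k > 0 {
        if k % 2 == 1 {
            // Pick the nonzero digit that leaves k divisible by 4,
            // guaranteeing the next digit is 0 (the "non-adjacent" property).
            let d: i8 = if k % 4 == 3 { -1 } else { 1 };
            digits.push(d);
            k = if d == 1 { k - 1 } else { k + 1 };
        } else {
            digits.push(0);
        }
        k /= 2;
    }
    digits
}

For 255 this yields the digits [-1, 0, 0, 0, 0, 0, 0, 0, 1], i.e. x^{255} = x^{256} / x: eight squarings plus one division constraint, versus fourteen constraints with the plain binary method.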
Although this issue popped up during the implementation of Poseidon, it is only semi-useful for Poseidon.
On the one hand, for Poseidon, if the cost of computing x^{255} and x^{257} were the same, one would always prefer 257 for alpha (ignoring the fact that 257 is also a prime, and 255 is not).
On the other hand, it might still be useful for Poseidon. For example, 7 = 8 - 1 is indeed not that expensive, and 7 may be a suitable prime.
cc'ed authors who have touched fields/mod.rs in the past: @ValarDragon @Pratyush
Check whether the Susella-Montrasio ladder for SW curves can be used to make a simpler/more efficient constraint system for scalar multiplication on SW curves.
This is to be left as a todo for the future.
There seems to be a big gap in constraints & density between complete vs. incomplete group operations, as well as between operations on affine vs. projective coordinates.
Ideally, if permitted by completeness and soundness, incomplete + affine can lead to fewer constraints & lower density.
However, it seems to require some effort to investigate whether soundness holds, and to what extent. Completeness would be a separate issue that the application needs to handle; in particular, the application needs some sort of "rejection sampling" so that it can retry proving until it avoids the incompleteness barriers.
In response to https://github.com/alexchmit/perfect-constraints/issues/6, it's better to add a DensePolynomial structure in ark-r1cs-std.
Move DensePolynomial into a new folder called poly in r1cs-std. The poly folder should match structures like ark-poly. The coefficients should be of type FpVar.
Add a new job in CI that does the following steps:
- Check out the curves repo.
- Patch curves/Cargo.toml to force the dependency tree to use the local version of the algebra crates.
- Run the tests in curves.
This can prevent changes from being merged in here that actually break downstream, such as #9.
This is analogous to arkworks-rs/algebra#34
The underlying type that takes in an AllocatedBit is a Boolean. It's, imo, confusing that the allocated type is called Bit instead of Bool, especially since it's almost always the case that when you deal with AllocatedBits, you are also dealing with Booleans as well.
This issue is to raise a reconsideration of the name NonNativeFieldVar before the next release.
What we implemented is not NonNativeFieldVar but NonNativeFpVar. And in the future, there is a possibility that we would implement Fq2 etc. as well.
Change the name?
Add a new method to AllocVar that creates a witness variable or constant in the constraint system depending on the type of cs.
When feeding non-deterministic advice into some computation, we call new_variable with a mode based on the context. That is, when the inputs to the computation are constants, we can choose AllocationMode::Constant for the advice to save some constraints for enforcing its validity. Otherwise, we need to choose AllocationMode::Witness.
However, doing so manually introduces redundancy. We can find many duplicated code fragments for this purpose in this repo, e.g.,
r1cs-std/src/groups/curves/twisted_edwards/mod.rs, lines 120 to 138 in 4020fbc
r1cs-std/src/fields/quadratic_extension.rs, lines 279 to 291 in 4020fbc
Furthermore, taking care of every advice allocation increases the mental burden on a circuit developer, while failing to do so may result in a larger circuit.
This proposal aims to address the above problem by introducing a new method (tentatively dubbed new_hint) to AllocVar, which creates a constant if cs is None and a witness variable otherwise. The logic of new_hint should be similar to the following:
pub trait AllocVar<V: ?Sized, F: Field>: Sized {
    // ...
    #[tracing::instrument(target = "r1cs", skip(cs, f))]
    fn new_hint<T: Borrow<V>>(
        cs: impl Into<Namespace<F>>,
        f: impl FnOnce() -> Result<T, SynthesisError>,
    ) -> Result<Self, SynthesisError> {
        let ns: Namespace<F> = cs.into();
        let cs = ns.cs();
        let mode = if cs.is_none() {
            AllocationMode::Constant
        } else {
            AllocationMode::Witness
        };
        Self::new_variable(cs, f, mode)
    }
    // ...
}
Please let me know what you think, and I can create a PR if this proposal looks good to you :)
The following code panics if you attempt to use a configuration with 1 or 2 limbs:
r1cs-std/src/fields/nonnative/reduce.rs, lines 170 to 174 in 2ca3bd7
However, I don't remember bellman-bignat having this limitation.
r1cs-std/src/fields/nonnative/reduce.rs, line 276 in 2ca3bd7
This could be a problem if nonnative params become configurable.
Hello, suppose I have two FqVar:
let tmp1 = FpVar::<Fq>::new_witness(r1cs_core::ns!(cs, "tmp1"), || Ok(a)).unwrap();
let tmp2 = FpVar::<Fq>::new_witness(r1cs_core::ns!(cs, "tmp2"), || Ok(b)).unwrap();
I want to compare them using the cmp trait. The following code gets an error:
tmp1.cmp(&tmp2);
the method cmp exists but the following trait bounds were not satisfied:
r1cs_std::fields::fp::FpVar<algebra_core::Fp256<algebra::bls12_381::FrParameters>>: std::iter::Iterator
which is required by
&mut r1cs_std::fields::fp::FpVar<algebra_core::Fp256<algebra::bls12_381::FrParameters>>: std::iter::Iterator
How can I compare two FqVar like I compare two integers? I can only find enforce_cmp, but obviously, it is not what I want.
The current code adds the implementation of UInt128, which is the constraint version of the primitive type u128.
The current implementation uses num-bigint instead of BigInteger256. However, there is no reason not to use BigInteger256 besides the (slight) code complexity. It should be possible to revise the current implementation to be based solely on BigInteger256.
There is a significant benefit to not using num-bigint, because r1cs-std is a fundamental crate that is used by many applications. It is good to keep it lightweight.
Hi, when I run cargo test, I get the following errors:
- [bits::boolean::Boolean<F>; 16]: std::convert::From<std::vec::Vec<_>> is not satisfied
- [bits::boolean::Boolean<F>; 32]: std::convert::From<std::vec::Vec<_>> is not satisfied
- &[bits::boolean::Boolean<F>; 64]: std::iter::IntoIterator is not satisfied
- .. etc.
Currently the function conditionally_select_power_of_two_vector uses method 5.1 from https://github.com/mir-protocol/r1cs-workshop/blob/master/workshop.pdf. By implementing the SumOfConditions method and then combining it with the existing technique, we could achieve fewer constraints.
Group scalar multiplication with ark-pallas leads to the failure of the inverse constraint when the group element is 0. I also tried ark-mnt4-298 and did not observe any issues; I haven't tried other curves.
ark-pallas = { version = "^0.3.0", features = ["curve", "r1cs"], default-features = false }
The following function reproduces the error.
pub fn scalar_mul_bug() {
    // Assumed aliases from the reporter's setup: G = Pallas affine curve,
    // CF = constraint field, C = curve gadget.
    let scalar: <G as AffineCurve>::ScalarField = BigInteger256::new([13446945587828132253, 13946564884975567571, 1632334266307653820, 1541582214770829428]).into();
    let base = G::zero();
    let cs = ConstraintSystem::<CF>::new_ref();
    let scalar_var = NonNativeFieldVar::new_input(cs.clone(), || Ok(scalar)).unwrap();
    println!("scalar_var: {:?}", scalar_var.value());
    let scalar_var_le = scalar_var.to_bits_le().unwrap();
    println!("scalar_var_le: {:?}", scalar_var_le.value());
    let base_var = C::new_input(cs.clone(), || Ok(base)).unwrap();
    println!("base_var: {:?}", base_var.value().unwrap().into_affine());
    let res = base_var.scalar_mul_le(scalar_var_le.iter()).unwrap();
    println!("res: {:?}", res.value().unwrap().into_affine());
    res.enforce_equal(&base_var).unwrap();
    assert!(cs.is_satisfied().unwrap());
}
Relocate Univariate Domain, Vanishing Polynomial, and Lagrange Interpolation to ark-poly
As a result of #53, native implementations of Univariate Domain, Vanishing Polynomial, and Lagrange Interpolation live in this repo. However, they should actually be in ark-poly.
Move them to ark-poly and adjust the corresponding constraint structs (if there are any at this moment).
This would be a bit smoother to pronounce, and describes what's happening slightly better.
This is related to Lianke's post (arkworks-rs/groth16#33), since he is working on ZKP for neural networks (see: https://eprint.iacr.org/2021/087). I feel that maybe one of the implementations for FpVar has caused the LC map to blow up.
Consider that we are adding many numbers together. At the moment, when we add two numbers, we create not a constraint, but a new LC.
If we know that we are adding many numbers together, then we may want to create one LC for the sum, not N - 1 new LCs. This is just to cut down the depth of the inlining.
Implement the Sum trait for FpVar (which implies that we need to implement it for AllocatedFp as well).
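A minimal sketch of what the single-LC accumulation could look like at the AllocatedFp level (assuming the value()/variable/cs accessors; not the final design):

use ark_ff::PrimeField;
use ark_r1cs_std::{fields::fp::AllocatedFp, R1CSVar};
use ark_relations::{lc, r1cs::SynthesisError};

// Sum a slice of allocated field elements by building ONE linear
// combination, instead of chaining N - 1 pairwise additions.
fn sum_all<F: PrimeField>(
    vars: &[AllocatedFp<F>],
) -> Result<AllocatedFp<F>, SynthesisError> {
    assert!(!vars.is_empty());
    let cs = vars[0].cs.clone();
    let mut sum_lc = lc!();
    let mut value = Some(F::zero());
    for v in vars {
        // One symbolic LC accumulates every variable...
        sum_lc = sum_lc + v.variable;
        // ...while we track the concrete assignment alongside it.
        value = value.and_then(|acc| v.value().ok().map(|x| acc + x));
    }
    let variable = cs.new_lc(sum_lc)?;
    Ok(AllocatedFp::new(value, variable, cs))
}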
There are cases where we need a coset domain whose offset is not a constant (for example, the query coset in FRI). It is probably better to add this to r1cs-std.
Option 1: add a struct called QueriedRadix2Domain (is there a better name?), with methods similar to Radix2Domain.
Option 2: make Radix2Domain have an offset that is not constant, and specialize for when the offset is constant or variable.
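A sketch of what the variable-offset struct in option 2 might hold (the field layout is an assumption, not a decided design):

use ark_ff::PrimeField;
use ark_r1cs_std::fields::fp::FpVar;

pub struct Radix2DomainVar<F: PrimeField> {
    /// Generator of the subgroup; stays a native constant.
    pub gen: F,
    /// Coset offset, now allowed to be a circuit variable;
    /// FpVar::Constant recovers today's constant-offset behavior.
    pub offset: FpVar<F>,
    /// log2 of the domain size.
    pub dim: u64,
}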
Some code to copy from (ignore this if you do not have access to the link; the repo is currently private) — alpha with point: https://github.com/alexchmit/perfect-constraints/blob/79692f2652a95a57f2c7187f5b5276345e680230/fractal/src/ldt/fri/single_round_fri.rs#L38-L69
This params has been used somewhere else: https://github.com/arkworks-rs/sponge/blob/c9f7d24a5f10c415e9370a079cb060d95400b938/src/constraints/mod.rs#L19 (bits_le_to_nonnative) and https://github.com/arkworks-rs/crypto-primitives/blob/1a71386a10ef627834e4c3b14ca982811f23192d/src/snark/constraints.rs (NonNativeFieldInputVar). We should move NonNativeFieldInputVar and bits_le_to_nonnative into r1cs-std, and the external world should just call them.
Originally posted by @weikengchen in #79 (comment)
This issue is a feature proposal according to https://github.com/alexchmit/perfect-constraints/issues/7. It is better to make a Domain struct that tries to mirror EvaluationDomain in ark-poly. Then add a vanishing polynomial API and a Lagrange interpolation API that match the API in ark-poly.
Support univariate evaluation domains.
- In src/poly/, create a new folder called domain. In src/poly/domain/mod.rs, create a Domain struct. This Domain struct will borrow existing logic from https://github.com/alexchmit/perfect-constraints/blob/79692f2652a95a57f2c7187f5b5276345e680230/fractal/src/algebra/domain.rs#L7-L71, and will try its best to mirror the API defined in https://github.com/arkworks-rs/algebra/blob/80ff5eab89280a96e452376884f5fe588f126920/poly/src/domain/mod.rs#L31 (FFT will not be supported at this moment). Also, add a vanishing polynomial by borrowing code from https://github.com/alexchmit/perfect-constraints/blob/master/fractal/src/algebra/vanishing_polynomial.rs.
- In src/poly/, create a new folder called evaluations. In evaluations/univariate/mod.rs, create a struct called EvaluationsVar. The EvaluationsVar will contain two fields, pub evals: Vec<FpVar> and domain: Domain. This struct will borrow the Lagrange interpolation code from https://github.com/alexchmit/perfect-constraints/blob/master/fractal/src/algebra/lagrange_interpolation.rs.
We don't have a scalar mul impl of a group element variable with its scalar field element.
Ergonomics
Add a Mul trait bound and the corresponding impls.
In response to issue https://github.com/alexchmit/perfect-constraints/issues/5, mux should be moved into an auto-impl on CondSelectGadget.
By adding a new API, this crate can support random access into a vector whose size is a power of two.
Add an auto-impl on CondSelectGadget (https://github.com/arkworks-rs/r1cs-std/blob/master/src/select.rs#L6). The new API will be conditionally_select_power_of_two_vector(position: &[Bool], values: &[Self]) -> Result<Self, SynthesisError>, and the main code will reuse the existing logic from https://github.com/alexchmit/perfect-constraints/blob/master/fractal/src/algebra/mux.rs#L5. A sketch of the folding idea follows.
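For reference, a minimal sketch of the folding mux (a free function here for brevity; MSB-first position is an assumption, and this is not the referenced mux.rs code):

use ark_ff::Field;
use ark_r1cs_std::{boolean::Boolean, select::CondSelectGadget};
use ark_relations::r1cs::SynthesisError;

// Select values[idx] where idx is given by `position` (MSB first),
// folding the vector in half once per bit: n - 1 selects in total.
fn conditionally_select_power_of_two_vector<F: Field, T: CondSelectGadget<F> + Clone>(
    position: &[Boolean<F>],
    values: &[T],
) -> Result<T, SynthesisError> {
    assert_eq!(values.len(), 1 << position.len());
    let mut layer = values.to_vec();
    for bit in position {
        let half = layer.len() / 2;
        let mut next = Vec::with_capacity(half);
        for i in 0..half {
            // If the bit is set, take the entry from the upper half.
            next.push(T::conditionally_select(bit, &layer[half + i], &layer[i])?);
        }
        layer = next;
    }
    Ok(layer.pop().expect("non-empty by the length check"))
}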
Hello, I am new to ZEXE and I have met some difficulties here:
In a traditional programming language, we can do bit shifting like this:
int a = 8;
int b = a >> 1; // b = 4;
Because there is no divide operator for FpVar::<Fq>::new_witness, I want to use some right/left bit-shifting operations over finite field variables. Could you please guide me on how to do this? I have not found the corresponding functions. Thanks!
Add an allocator for DensePolynomialVar, so we can easily convert ark_poly::DensePolynomial to DensePolynomialVar by calling DensePolynomialVar::new_variable.
We can keep release 3.0 as is. It's ok to move this feature to the next release.
Impl AllocVar for DensePolynomial.
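A minimal sketch of what this impl could look like, assuming DensePolynomialVar stores its coefficients as FpVar values and exposes a from_coefficients_vec constructor:

use ark_ff::PrimeField;
use ark_poly::univariate::DensePolynomial;
use ark_r1cs_std::{
    alloc::{AllocVar, AllocationMode},
    fields::fp::FpVar,
    poly::polynomial::univariate::dense::DensePolynomialVar,
};
use ark_relations::r1cs::{Namespace, SynthesisError};
use core::borrow::Borrow;

impl<F: PrimeField> AllocVar<DensePolynomial<F>, F> for DensePolynomialVar<F> {
    fn new_variable<T: Borrow<DensePolynomial<F>>>(
        cs: impl Into<Namespace<F>>,
        f: impl FnOnce() -> Result<T, SynthesisError>,
        mode: AllocationMode,
    ) -> Result<Self, SynthesisError> {
        let ns = cs.into();
        let cs = ns.cs();
        // Run `f` once, then allocate each native coefficient
        // with the requested mode.
        let poly = f()?;
        let coeffs: Vec<FpVar<F>> = poly
            .borrow()
            .coeffs
            .iter()
            .map(|c| FpVar::new_variable(cs.clone(), || Ok(*c), mode))
            .collect::<Result<_, _>>()?;
        Ok(DensePolynomialVar::from_coefficients_vec(coeffs))
    }
}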
I also plan to impl AllocVar for domains and evaluations, so that we can easily convert a univariate domain in ark-poly to a variable. However, currently a univariate domain can only be a subgroup (instead of a coset). Once cosets are supported, we can do so.