r1cs-std's Issues

`AllocVar` implementation for `Vec<T>` does not respect the setup mode

In alloc.rs, we have the following implementation for Vec<T>:

/// This blanket implementation just allocates variables in `Self`
/// element by element.
impl<I, F: Field, A: AllocVar<I, F>> AllocVar<[I], F> for Vec<A> {
    fn new_variable<T: Borrow<[I]>>(
        cs: impl Into<Namespace<F>>,
        f: impl FnOnce() -> Result<T, SynthesisError>,
        mode: AllocationMode,
    ) -> Result<Self, SynthesisError> {
        let ns = cs.into();
        let cs = ns.cs();
        let mut vec = Vec::new();
        for value in f()?.borrow().iter() {
            vec.push(A::new_variable(cs.clone(), || Ok(value), mode)?);
        }
        Ok(vec)
    }
}

However, in setup mode assignments are absent, and we expect every execution of f to be wrapped, layer by layer, inside the closures passed to new_variable. The current implementation does not respect this: it runs f and unwraps its result even in setup mode.

There seems to be no easy solution. One might want to defer unwrapping f() to each A::new_variable, but f is FnOnce, so calling f()? inside the closure for each element of the Vec does not work.

Now let me describe some thinking on how to solve it.

First, we can let new_variable break the "fourth wall" and check whether the ConstraintSystem being referenced is in the setup mode. This would allow it to treat the case in the setup mode differently.

However, this is not enough, because it also needs to learn the size of the Vec. If f() cannot be run and unwrapped, there is no way for it to learn the size.
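For concreteness, here is a sketch (not working code) of what the "fourth wall" approach would look like inside the same blanket impl, assuming ark-relations' is_in_setup_mode; it still gets stuck on the unknown length:

// Sketch only: a setup-mode-aware variant of the blanket impl above.
fn new_variable<T: Borrow<[I]>>(
    cs: impl Into<Namespace<F>>,
    f: impl FnOnce() -> Result<T, SynthesisError>,
    mode: AllocationMode,
) -> Result<Self, SynthesisError> {
    let ns = cs.into();
    let cs = ns.cs();
    if cs.is_in_setup_mode() {
        // We must not run `f` here, but without its output we do not know how
        // many elements the slice has, so we cannot allocate the right number
        // of `A`s. This is exactly the tangle described below.
        todo!("length of the slice is unknown in setup mode")
    } else {
        let mut vec = Vec::new();
        for value in f()?.borrow().iter() {
            vec.push(A::new_variable(cs.clone(), || Ok(value), mode)?);
        }
        Ok(vec)
    }
}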

So, this becomes a tangled situation. Going further, did we use this Vec<T> blanket implementation anywhere?

Release with #117

Since #117 fixed a bug, it'd be nice to have a new release. The commit has been in master for about a year.

Getting started guide

Summary

I'd like some quick information on how to put together an end-to-end example using arkworks.

Problem Definition

Taking a look at the examples, I was able to get about this far:

fn prove_sum() {
    let values = vec![0u128, 1, 2, 3];
    let sum: u128 = values.iter().sum();
    let cs = ConstraintSystem::<Fr>::new_ref();
    let sum_i = var(&cs, &sum, Input);
    let mut sum_v = var(&cs, &0, Constant);
    for value in values.iter() {
        sum_v += var(&cs, value, Witness);
    }
    sum_i.enforce_equal(&sum_v).unwrap();
    cs.finalize();
    assert!(cs.is_satisfied().unwrap());
}

This creates a constraint system where the sum of all the witnessed data is constrained to equal a publicly known sum. Great! But now what? I also need to be able to construct the same constraint system without any data, in a way that can verify, for some commitment and public inputs, that the constraint system is satisfied. I also need to be able to somehow export a commitment from this constraint system for a given set of input data.
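For reference, here is a minimal sketch of the same circuit written directly against the ark-r1cs-std API; it assumes ark-bls12-381's Fr as the constraint field and replaces the `var` helper above with explicit FpVar allocations (it still stops at satisfiability checking, which is exactly the gap the question is about):

use ark_bls12_381::Fr;
use ark_r1cs_std::{alloc::AllocVar, eq::EqGadget, fields::fp::FpVar};
use ark_relations::r1cs::ConstraintSystem;

fn prove_sum() {
    let values = vec![0u128, 1, 2, 3];
    let sum: u128 = values.iter().sum();
    let cs = ConstraintSystem::<Fr>::new_ref();

    // Public input: the claimed sum.
    let sum_i = FpVar::<Fr>::new_input(cs.clone(), || Ok(Fr::from(sum))).unwrap();
    // Accumulator starts out as the constant zero.
    let mut sum_v = FpVar::<Fr>::new_constant(cs.clone(), Fr::from(0u64)).unwrap();
    for value in &values {
        // Each summand is private witness data.
        sum_v += FpVar::<Fr>::new_witness(cs.clone(), || Ok(Fr::from(*value))).unwrap();
    }
    sum_i.enforce_equal(&sum_v).unwrap();

    cs.finalize();
    assert!(cs.is_satisfied().unwrap());
}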

Proposal

It would be great to have some documentation on how to solve common problems with arkworks, how the code is generally laid out, and just enough information to let people answer the right questions.



Short Weierstrass `G1Var::new_input` behaves differently from `G1::to_field_elements`

Summary of Bug

There's a discrepancy between input variable allocation and constraint-field conversion for the Short Weierstrass curve group, specifically in how the infinity marker's value is assigned.

For the curve group, it's either zero or one depending on the value of the infinity marker. Please see the following trace:

  1. https://github.com/arkworks-rs/algebra/blob/master/ec/src/models/short_weierstrass/group.rs#L624
  2. https://github.com/arkworks-rs/algebra/blob/master/ec/src/models/short_weierstrass/affine.rs#L390
  3. https://github.com/arkworks-rs/algebra/blob/master/ff/src/to_field_vec.rs#L14

However, in the r1cs variant it's always F::one, see: https://github.com/arkworks-rs/r1cs-std/blob/master/src/groups/curves/short_weierstrass/mod.rs#L219

For the algebra repo, the behavior seems correct. I see that for r1cs there is conversion after allocation here, but since the FpVar was already allocated as FpVar::one, I get a conflict during proof verification.

Version

Reproduces in v0.3.0 and the latest master.

Steps to Reproduce

  1. Allocate short_weierstrass::G1Var with non-infinity value
  2. Generate proof
  3. Get public input
  4. Verify proof

Implement Add, Sub, CheckedAdd, CheckedSub for Uints

Summary

We should add implementations of Add, Sub, CheckedSub, and CheckedAdd for UInts. These are common functions one needs when working with UInts; currently only AddMany exists.

CheckedAdd and CheckedSub should be unsatisfiable if there is an overflow. I think the simplest checked-arithmetic implementation would be to just compute the final carry bit and ensure it's 0.

(The checked and non-checked methods can be implemented in separate PRs!)
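As an illustration of the carry-bit idea, here is a sketch over little-endian bit vectors (assuming the Result-returning Boolean combinators of ark-r1cs-std 0.3/0.4; checked_add_le is a hypothetical helper, not the crate's UInt API):

use ark_ff::PrimeField;
use ark_r1cs_std::prelude::*;
use ark_relations::r1cs::SynthesisError;

// Sketch: checked addition over little-endian bit vectors via a ripple-carry adder.
// Forcing the final carry to be zero makes the constraints unsatisfiable on overflow.
fn checked_add_le<F: PrimeField>(
    a: &[Boolean<F>],
    b: &[Boolean<F>],
) -> Result<Vec<Boolean<F>>, SynthesisError> {
    assert_eq!(a.len(), b.len());
    let mut carry = Boolean::constant(false);
    let mut out = Vec::with_capacity(a.len());
    for (x, y) in a.iter().zip(b) {
        // Full adder: sum = x ^ y ^ carry, carry' = (x & y) | (carry & (x ^ y)).
        let x_xor_y = x.xor(y)?;
        let sum = x_xor_y.xor(&carry)?;
        carry = x.and(y)?.or(&carry.and(&x_xor_y)?)?;
        out.push(sum);
    }
    // The overflow check: the final carry must be zero.
    carry.enforce_equal(&Boolean::constant(false))?;
    Ok(out)
}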



NonNativePointVar, or Point Simulation

Summary

This feature request proposes an implementation of NonNativePointVar, which extends NonNativeFieldVar to points.

Problem Definition

I thought that we previously decided to rename nonnative to field_simulation, and I sort of like the latter (as you can see, I have been widely using "field simulation" when talking about nonnative field vars).

I see that we stick to the old one that is more established.

There seems to be rising demand for using nonnative tools in production. The potential benefit is that it may help developers handle nonnative SW, TE, and Montgomery points in a more unified way.

Proposal

NonNativePointVar would likely just be a "C++ STL"-like wrapper around NonNativeFieldVar.



Let `query_position_to_coset` return `gen` and `offset` instead of coset elements

Problem Definition

Currently, domain.query_position_to_coset returns all elements in the coset. We should make it return just the updated gen and offset instead, because users can use gen and offset to do custom optimizations.

Proposal

    pub fn query_position_to_coset(
        &self,
        query_pos: &[Boolean<F>],
        coset_dim: u64,
-    ) -> Result<Vec<FpVar<F>>, SynthesisError> {
+   ) -> Result<Self, SynthesisError> { 


Interchangeability of NonNativeFieldVar and FpVar for FieldVar

Summary

This issue is to discuss how to make NonNativeFieldVar a drop-in replacement for FieldVar in curve operations.

Problem Definition

One of the reasons that we implemented FieldVar for NonNativeFieldVar is to allow it to be a drop-in replacement of FpVar, so that one doesn't need to build a hierarchy of nonnative primitives.

This, however, is not covered by the current implementation, see:
https://github.com/arkworks-rs/r1cs-std/blob/master/src/groups/curves/short_weierstrass/mod.rs#L44

#[derive(Derivative)]
#[derivative(Debug, Clone)]
#[must_use]
pub struct ProjectiveVar<
    P: SWModelParameters,
    F: FieldVar<P::BaseField, <P::BaseField as Field>::BasePrimeField>,
> where
    for<'a> &'a F: FieldOpsBounds<'a, P::BaseField, F>,
{
    ...
}

It would be ideal, though, to lift this restriction. However, we want to make sure that we don't change how existing circuits are written; it would be highly inconvenient to use

pub struct ProjectiveVar<
    P: SWModelParameters,
    F: FieldVar<P::BaseField, CF>,
    CF: PrimeField,
> 

because it introduces one more type parameter, CF.

A possible workaround, though not favorable, is to define GeneralizedProjectiveVar<P, F, CF> and let ProjectiveVar<P, F> = GeneralizedProjectiveVar<P, F, <P::BaseField as Field>::BasePrimeField>. This is not favored, since nonnative fields are not important enough to justify complicating the type at this level.

Proposal

We need to think about what would be the best way to do this.



Redundancy in `three_bit_cond_neg_lookup` for FpVar

Summary of Bug

In Aleo, when Howard implemented the Bowe-Hopwood Pedersen circuit, he discovered a weird phenomenon:

  • The BHP compressed gadget takes 2525 constraints to hash 97 bytes (none of which are constants).
  • The BHP compressed gadget takes 2524 constraints to hash 98 bytes (where the first byte is a constant and the rest are not).

It is surprising that the second case, which actually does slightly more work, has one fewer constraint.

(Note: this is over a slightly different codebase, snarkVM. I have a feeling that the numbers in arkworks-rs may be slightly different, or the two may be the same.)

This is because, in three_bit_cond_neg_lookup, when b[0] or b[1] is not a constant but b[2] is, we emit one unnecessary constraint related to b[2].

        // enforce y * (1 - 2 * b_2) == res
        b.cs().enforce_constraint(
            y_lc.clone(),
            b[2].lc() * F::from(2u64).neg() + (F::one(), Variable::One),
            lc!() + result.variable,
        )?;

However, this enforce_constraint call can be avoided: when b[2] is a constant, the code could be written differently, since in that case result is simply a linear combination of y.

What happens is that, because 97 * 8 % 3 = 2, adding a free byte at the beginning happens to absorb this redundant constraint, so the count drops by one. Indeed, this should not be the case.
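For illustration, here is a sketch of the shortcut (assuming the Boolean enum of ark-r1cs-std 0.3/0.4; cond_neg is a hypothetical helper, not crate API): when the bit is constant, the result is a linear function of y and needs no new witness or constraint.

use ark_ff::PrimeField;
use ark_r1cs_std::{fields::fp::FpVar, prelude::*};
use ark_relations::r1cs::SynthesisError;

fn cond_neg<F: PrimeField>(y: &FpVar<F>, b2: &Boolean<F>) -> Result<FpVar<F>, SynthesisError> {
    if let Boolean::Constant(c) = b2 {
        // Constant bit: the result is just y or -y, so no constraint is needed.
        Ok(if *c { y.negate()? } else { y.clone() })
    } else {
        // Variable bit: fall back to a conditional select, which does cost constraints.
        FpVar::conditionally_select(b2, &y.negate()?, y)
    }
}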

Version

master

Steps to Reproduce

It may be slightly complicated, but one can look at the three_bit_cond_neg_lookup implementation for FpVar and notice that if b[0] or b[1] is not a constant but b[2] is, the call falls through to the AllocatedFp implementation, where the constraint above, which could sometimes be avoided, is always emitted.

Implement "energy-saving" circuit optimization from Gepetto.

Summary

The Geppetto paper describes an interesting optimization that can save work in the unused branch of a conditional select. Namely, since the variable in the unused branch is not used further, we can assign it a value of zero (if there are no other checks involving that variable). This can help reduce proving cost, since the assignment to the unused variable would then not contribute to any MSMs.

Problem Definition

Proposal



Why does the specific split_len in fixed_scalar_mul_le work?

Hi,

It seems that you split the scalar bits into two parts at the specific index G::Base::NUM_BITS - 2; after that, the first part can safely use addition over affine coordinates, while the second part has to use the addition formula over projective coordinates.

The question is: why does this specific index work?

Suppose that addition is defined as Acc = Acc + P, and the prime modulus is q.

It is still not clear why restricting the scalar k to the lower range [0, G::Base::NUM_BITS - 2) ensures that point Acc and point P stay a safe distance apart, namely that Acc is [2, q - 1) times P, or P is [2, q - 1) times Acc. What about the other ranges, such as [0, G::Base::NUM_BITS - 1) or [0, G::Base::NUM_BITS - 3)?

I ran a test using the curve y^2 = x^3 + 4x + 20 over F_29 (5 bits), whose order is 37 (6 bits). It turns out that the edge cases (Acc == P or Acc == -P) always appear at the highest bit (k = 36 or 37). Indeed it is safe within bits [0, 3), but it is also safe within bits [0, 4).

I would appreciate any hints about this!

Examine more carefully on the bound for UInt `addmany` implementation

The current implementation of addmany has a number of bounds:

https://github.com/arkworks-rs/r1cs-std/blob/master/src/bits/uint.rs#L184

// Make some arbitrary bounds for ourselves to avoid overflows
// in the scalar field
assert!(F::Params::MODULUS_BITS >= 2 * $size);
assert!($size * operands.len() <= F::Params::MODULUS_BITS as usize);

However, these bounds seem too strict. This part of the code is meant for addition, yet the bounds here look like they were chosen for multiplication.
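For illustration only (this is an unverified candidate, not a vetted bound): the sum of operands.len() values, each below 2^$size, stays below 2^($size + ceil(log2(operands.len()))), so a bound of roughly the following shape would be enough to keep the sum from wrapping in the scalar field.

// Candidate relaxation (unverified): the running sum must fit in the scalar field
// without wrapping; ark_std::log2 returns ceil(log2(.)).
assert!(
    $size + ark_std::log2(operands.len()) as usize
        <= F::Params::MODULUS_BITS as usize - 1
);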

Publish & tag 0.4.0-alpha

Summary

To make sure we don't try to publish/tag twice, this is a separate task.

Once #109 is merged, the following should still be done:
tag commits on master & publish both crates with:

git tag v0.4.0-alpha.1
git push origin v0.4.0-alpha.1
cargo release publish

Why do some Fp operations add no constraints?

In fields/fp/mod.rs, some operations, such as add and mul_constant, add no constraints, while other operations, like mul, do add constraints.

As a user, I have to check whether each operation adds constraints or not, which is very confusing.
Could arkworks make constraint generation uniform across its implementations?

conditionally_select and conditional_enforce_equal use condition inconsistently

Summary of Bug

CondSelectGadget's API is

    fn conditionally_select(
        cond: &Boolean<F>,
        true_val: &Self,
        false_val: &Self,
    )

whereas EqGadget's API is

    fn conditional_enforce_equal(
        &self,
        other: &Self,
        should_enforce: &Boolean<F>,
    ) -> Result<(), SynthesisError> {

I suggest we change EqGadget's conditional methods to be consistent with CondSelectGadget, and have the condition be the first parameter.

More efficient powering by constant

Summary

How many constraints should it be to compute x^{255}?

How many constraints should it be to compute x^{257}?

In our current implementation, the cost of powering by constants is as follows:

L = len of binary representation
N = num of ones in the binary representation
cost = L - 1 + N - 1

So, powering by 255 is more expensive than 257.

Yet, to compute x^{255}, why not compute x^{256}, and then go down to x^{255}, which only takes one constraint?

Problem Definition

A more efficient powering-by-constant would use the NAF form (https://en.wikipedia.org/wiki/Non-adjacent_form), which considers not just the binary representation but a representation over the alphabet {-1, 0, 1}.

This, indeed, gives a better performance for powering by constant.

Proposal

Enhance the powering by constant by first computing the NAF form, and then writing down the constraints accordingly.
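A plain-Rust sketch of the NAF computation (digits in {-1, 0, 1}, least-significant first; this is standard NAF, not code from the crate). Powering by the constant then costs one squaring per digit, one multiplication per +1 digit, and one division (a single constraint) per -1 digit:

/// Compute the non-adjacent form of `k`, least-significant digit first.
fn naf(mut k: u64) -> Vec<i8> {
    let mut digits = Vec::new();
    while k > 0 {
        if k & 1 == 1 {
            // Pick the odd digit that leaves the next k even: +1 if k % 4 == 1, else -1.
            if k & 3 == 1 {
                digits.push(1);
                k -= 1;
            } else {
                digits.push(-1);
                k += 1;
            }
        } else {
            digits.push(0);
        }
        k >>= 1;
    }
    digits
}

// Example: naf(255) = [-1, 0, 0, 0, 0, 0, 0, 0, 1], i.e. x^255 = x^256 / x,
// versus the 14 constraints implied by the all-ones binary representation.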

Relationship to Poseidon

Although this issue pops up during the implementation of Poseidon, it is semi-useful for Poseidon.

On the one hand, for Poseidon, if the cost of computing x^{255} and x^{257} is the same, it would always prefer 257 for alpha (ignoring the fact that 257 is also a prime, and 255 is not).

On the other hand, it might also be useful for Poseidon. For example, 7 = 8 - 1 is not that expensive indeed, and 7 may be a suitable prime.

cc'ed authors who have touched fields/mod.rs in the past: @ValarDragon @Pratyush



Incomplete SW group operations may be worthwhile

This is to be left as a todo for the future.

There seems to be a big gap in constraints & density of complete vs incomplete group operations, as well as operations on affine vs projective.

Ideally, if permitted by completeness and soundness, incomplete + affine can lead to fewer constraints & lower density.

However, it seems to require some effort to investigate whether soundness holds, and to what extent. Completeness would be a separate issue that the application needs to handle; in particular, the application needs some sort of rejection sampling so that it can retry proving until it avoids the incompleteness cases.

Add DensePolynomial

Summary

In response to https://github.com/alexchmit/perfect-constraints/issues/6, it would be good to add a DensePolynomial structure to ark-r1cs-std.

Proposal

Move DensePolynomial into a new folder called poly in r1cs-std. The poly folder should mirror the structure of ark-poly. The coefficients should be of type FpVar.
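A rough sketch of what such a structure could look like (names and layout are placeholders, not a settled design), with Horner evaluation over FpVar coefficients:

use ark_ff::PrimeField;
use ark_r1cs_std::fields::{fp::FpVar, FieldVar};
use ark_relations::r1cs::SynthesisError;

/// Sketch of the proposed structure: a dense polynomial whose coefficients are FpVars.
pub struct DensePolynomialVar<F: PrimeField> {
    /// Coefficients in increasing order of degree.
    pub coeffs: Vec<FpVar<F>>,
}

impl<F: PrimeField> DensePolynomialVar<F> {
    pub fn evaluate(&self, point: &FpVar<F>) -> Result<FpVar<F>, SynthesisError> {
        let mut result = FpVar::<F>::zero();
        // Horner's rule: result = (...(c_n * x + c_{n-1}) * x + ...) + c_0.
        for coeff in self.coeffs.iter().rev() {
            result = &result * point + coeff;
        }
        Ok(result)
    }
}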



Make CI test against `curves` repo

Add a new job in CI that does the following steps:

  • Clone the curves repo.
  • Add a patch in curves/Cargo.toml to force the dependency tree to use the local version of this crate.
  • Run all tests in curves.

This can prevent changes from being merged in here that actually break downstream, such as #9.

This is analogous to arkworks-rs/algebra#34

Rename AllocatedBit to AllocatedBool

The underlying type that takes in an AllocatedBit is a Boolean. It's IMO confusing that the allocated type is called Bit instead of Bool, especially since it's almost always the case that when you deal with AllocatedBits, you are also dealing with Booleans as well.

Naming of NonNativeFieldVar and its future usage on Fq2, Fq3, Fq4, Fq6, Fq12

Summary

This issue is to raise a reconsideration of the names of NonNativeFieldVar before the next release.

Problem Definition

What we implemented is not really a NonNativeFieldVar but a NonNativeFpVar, and in the future there is a possibility that we would implement Fq2 etc. as well.

Proposal

Change the name?



Convenient method for context-based variable/constant allocation

Summary

Add a new method to AllocVar that creates a witness variable or constant in the constraint system depending on the type of cs.

Problem Definition

When feeding non-deterministic advice for some computation $f$ to a circuit, it is often desirable to call new_variable with a mode based on the context. That is, when the inputs to $f$ are all constant, we can choose AllocationMode::Constant for the advice to save some constraints for enforcing its validity. Otherwise, we need to choose AllocationMode::Witness.

However, doing so manually introduces redundancy. We can find many duplicated code fragments for this purpose in this repo, e.g.,

let mode = if cs.is_none() {
    AllocationMode::Constant
} else {
    AllocationMode::Witness
};
// Compute u = x / y
let u = F::new_variable(
    ark_relations::ns!(cs, "u"),
    || {
        let y_inv = self
            .y
            .value()?
            .inverse()
            .ok_or(SynthesisError::DivisionByZero)?;
        Ok(self.x.value()? * &y_inv)
    },
    mode,
)?;

let mode = if self.is_constant() {
    AllocationMode::Constant
} else {
    AllocationMode::Witness
};
let inverse = Self::new_variable(
    self.cs(),
    || {
        self.value()
            .map(|f| f.inverse().unwrap_or_else(QuadExtField::zero))
    },
    mode,
)?;

Furthermore, taking care of this for every advice allocation increases the mental burden on circuit developers, while failing to do so may result in a larger circuit.

Proposal

This proposal aims to address the above problem by introducing a new method (tentatively dubbed new_hint) to AllocVar, which creates a constant if cs is None and a witness variable otherwise. The logic of new_hint would be similar to the following:

pub trait AllocVar<V: ?Sized, F: Field>: Sized {
    // ...
    #[tracing::instrument(target = "r1cs", skip(cs, f))]
    fn new_hint<T: Borrow<V>>(
        cs: impl Into<Namespace<F>>,
        f: impl FnOnce() -> Result<T, SynthesisError>,
    ) -> Result<Self, SynthesisError> {
        let ns: Namespace<F> = cs.into();
        let cs = ns.cs();
        let mode = if cs.is_none() {
            AllocationMode::Constant
        } else {
            AllocationMode::Witness
        };
        Self::new_variable(cs, f, mode)
    }
    // ...
}

Please let me know what you think, and I can create a PR if this proposal looks good to you :)
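For comparison, the second fragment above could then shrink to something like the following (hypothetical, since new_hint does not exist yet):

// Hypothetical usage of the proposed `new_hint`: mode selection now lives inside the
// allocator, so the call site no longer repeats the is_constant()/is_none() dance.
let inverse = Self::new_hint(self.cs(), || {
    self.value()
        .map(|f| f.inverse().unwrap_or_else(QuadExtField::zero))
})?;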



Nonnative arithmetic doesn't support configurations with fewer than 3 limbs

The following code panics if you attempt to use a configuration with 1 or 2 limbs:

if 2 * params.bits_per_limb + ark_std::log2(params.num_limbs) as usize
    > BaseField::MODULUS_BIT_SIZE as usize - 1
{
    panic!("The current limb parameters do not support multiplication.");
}

However, I don't remember bellman-bignat having this limitation

// The following code is adapted from https://github.com/alex-ozdemir/bellman-bignat/blob/master/src/mp/bignat.rs#L567

This could be a problem if nonnative params become configurable

How to cmp two FqVar

Hello, suppose I have two FqVar

  let tmp1 = FpVar::<Fq>::new_witness(r1cs_core::ns!(cs, format!("tmp1")), || Ok(a)).unwrap();
  let tmp2 = FpVar::<Fq>::new_witness(r1cs_core::ns!(cs, format!("tmp2")), || Ok(b)).unwrap();

I want to compare them using the cmp trait. The following code gets an error:

tmp1.cmp(&tmp2);

the method cmp exists but the following trait bounds were not satisfied:
r1cs_std::fields::fp::FpVar<algebra_core::Fp256<algebra::bls12_381::FrParameters>>: std::iter::Iterator
which is required by &mut r1cs_std::fields::fp::FpVar<algebra_core::Fp256<algebra::bls12_381::FrParameters>>: std::iter::Iterator

How can I compare two FqVar like I compare two integers?

I can only find enforce_cmp, but obviously, it is not what I want.

Implement UInt128 with only BigInteger256

The current code adds the implementation of UInt128, which is the constraint version of the primitive type u128.

The current implementation uses num-bigint instead of BigInteger256. However, there is no reason not to use BigInteger256 besides the (slight) code complexity. It should be possible to revise the current implementation to be based solely on BigInteger256.

There is a significant benefit to not using num-bigint: r1cs-std is a fundamental crate used by many applications, and it is good to keep it lightweight.

cargo test results in errors

Hi, when I run cargo test, I get the following errors:

  • the trait bound [bits::boolean::Boolean<F>; 16]: std::convert::From<std::vec::Vec<_>> is not satisfied
  • the trait bound [bits::boolean::Boolean<F>; 32]: std::convert::From<std::vec::Vec<_>> is not satisfied
  • the trait bound &[bits::boolean::Boolean<F>; 64]: std::iter::IntoIterator is not satisfied, etc.

All the issues seem to be around the Boolean type.

Thanks.

Handling zero as base in group scalar multiplication

Summary of Bug

Group scalar multiplication with ark-pallas fails an inverse constraint when the base group element is zero (the identity).
I also tried ark-mnt4-298 and did not observe the issue; I haven't tried other curves.

Version

ark-pallas = { version = "^0.3.0", features = ["curve", "r1cs"], default-features = false }

Steps to Reproduce

The following function reproduces the error.

    pub fn scalar_mul_bug() {
        let scalar: <G as AffineCurve>::ScalarField = BigInteger256::new([13446945587828132253, 13946564884975567571, 1632334266307653820, 1541582214770829428]).into();
        let base = G::zero();
        let cs = ConstraintSystem::<CF>::new_ref();
        let scalar_var = NonNativeFieldVar::new_input(cs.clone(), || Ok(scalar)).unwrap();
        println!("scalar_var: {:?}", scalar_var.value());
        let scalar_var_le = scalar_var.to_bits_le().unwrap();
        println!("scalar_var_le: {:?}", scalar_var_le.value());
        let base_var = C::new_input(cs.clone(), || Ok(base)).unwrap();
        println!("base_var: {:?}", base_var.value().unwrap().into_affine());
        let res = base_var.scalar_mul_le(scalar_var_le.iter()).unwrap();
        println!("res: {:?}", res.value().unwrap().into_affine());
        res.enforce_equal(&base_var).unwrap();
        assert!(cs.is_satisfied().unwrap());
    }

Relocate Univariate Domain, Vanishing Polynomial, Lagrange Interpolation

Summary

Relocate Univariate Domain, Vanishing Polynomial, Lagrange Interpolation to ark-poly

Problem Definition

As a result of #53, this repo now has native implementations of the univariate domain, vanishing polynomial, and Lagrange interpolation. However, they should actually live in ark-poly.

4fbdc2b

Proposal

Move them to ark-poly and adjust the corresponding constraint structs (if there are any at this moment).



Implement the `Sum` trait for `FpVar`

Summary

This is related to Lianke's post (arkworks-rs/groth16#33); he is working on ZKPs for neural networks (see https://eprint.iacr.org/2021/087). I feel that maybe one of the implementations for FpVar has caused the LC map to blow up.

@brucechin

Problem Definition

Consider that we are adding many numbers together. At the moment, each time we add two numbers we create not a constraint but a new LC.

If we know that we are adding many numbers together, then we may want to create one LC for the sum, not N - 1 new LCs.

This is just to cut down the depth of the inlining.

Proposal

Implement the Sum trait for FpVar (which implies that we need to implement for AllocatedFp as well).
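One possible shape for the underlying helper, sketched at the ark-relations level (assuming ConstraintSystemRef::new_lc and LinearCombination's zero/Add impls; sum_as_single_lc is a hypothetical name): collect all terms into one linear combination rather than chaining N - 1 intermediate ones.

use ark_ff::PrimeField;
use ark_relations::r1cs::{ConstraintSystemRef, LinearCombination, SynthesisError, Variable};
use ark_std::Zero;

// Collapse many variables into one new LC variable, so that summing N terms
// registers a single linear combination instead of N - 1 chained ones.
fn sum_as_single_lc<F: PrimeField>(
    cs: &ConstraintSystemRef<F>,
    terms: &[Variable],
) -> Result<Variable, SynthesisError> {
    let mut lc = LinearCombination::<F>::zero();
    for v in terms {
        lc = lc + (F::one(), *v);
    }
    cs.new_lc(lc)
}

A Sum impl for FpVar could then fold the constant terms natively and route the allocated terms' variables through such a helper.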



Add `Radix2Domain` where offset is not constant

Summary

There are cases where we need a coset domain where offset is not a constant (for example, the query coset in FRI). It is probably better to add this to r1cs-std.

Proposal

  • Add a struct called QueriedRadix2Domain (is there a better name?), with methods similar to Radix2Domain.
  • Alternatively, make Radix2Domain's offset non-constant, and specialize for the cases where the offset is a constant or a variable.

Some code to copy from (ignore this if you do not have access to the link; the repo is currently private):



Potentially make `fields::nonnative::params` module private

We should move NonNativeFieldInputVar and bits_le_to_nonnative into r1cs-std, and the external world should just call them.

Originally posted by @weikengchen in #79 (comment)

Add Domain, Lagrange Interpolation, Vanishing Polynomial

Summary

This issue is a feature proposal following https://github.com/alexchmit/perfect-constraints/issues/7. It would be good to make a Domain struct that mirrors EvaluationDomain in ark-poly, and then add vanishing-polynomial and Lagrange-interpolation APIs that match those in ark-poly.

Problem Definition

Support univariate evaluation domains.

Proposal



Add `Mul<NonNativeFieldVar<G::ScalarField, G::BaseField>>` to `G: Group`

Summary

We don't have a scalar-multiplication impl for a group element variable by a variable over its scalar field.

Problem Definition

Ergonomics

Proposal

Add a Mul trait bound and the corresponding impls.
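For context, here is a sketch of today's workaround (it mirrors the Pallas reproduction code earlier on this page, and assumes a curve variable base_var and a NonNativeFieldVar scalar_var); the proposed Mul impl would essentially wrap this pattern:

// Current workaround: decompose the nonnative scalar into bits, then scalar_mul_le.
let scalar_bits = scalar_var.to_bits_le()?;
let result = base_var.scalar_mul_le(scalar_bits.iter())?;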



Converting mux to arkworks

Summary

In response to issue https://github.com/alexchmit/perfect-constraints/issues/5. mux should be moved into an auto-impl on CondSelectGadget.

Problem Definition

By adding a new API, this crate can support random access into a vector whose size is a power of two.

Proposal

Add an auto-impl at CondSelectGadget (https://github.com/arkworks-rs/r1cs-std/blob/master/src/select.rs#L6). The new API will be conditionally_select_power_of_two_vector(position: &[Bool], values: &[Self]) -> Result<Self, SynthesisError>, and the main code will reuse existing logic in https://github.com/alexchmit/perfect-constraints/blob/master/fractal/src/algebra/mux.rs#L5.
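As an illustration of the idea (a sketch of the mux-tree approach, not the mux.rs code linked above; the function name and little-endian bit ordering are assumptions): pair up adjacent entries and consume one position bit per round.

use ark_ff::PrimeField;
use ark_r1cs_std::prelude::*;
use ark_relations::r1cs::SynthesisError;

/// Select `values[position]`, where `position` is given as little-endian bits and
/// `values.len()` is a power of two. Each round halves the vector using one bit.
fn conditionally_select_power_of_two_vector<F: PrimeField, T: CondSelectGadget<F> + Clone>(
    position: &[Boolean<F>],
    values: &[T],
) -> Result<T, SynthesisError> {
    assert_eq!(values.len(), 1 << position.len());
    let mut layer: Vec<T> = values.to_vec();
    for bit in position {
        let mut next = Vec::with_capacity(layer.len() / 2);
        for pair in layer.chunks(2) {
            // If the bit is 1, pick the odd-indexed element, otherwise the even one.
            next.push(T::conditionally_select(bit, &pair[1], &pair[0])?);
        }
        layer = next;
    }
    Ok(layer.pop().expect("non-empty by the power-of-two assertion"))
}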



How to do bit shift in finite field variable?

Hello, I am new to ZEXE and I met some difficulties here:

In a traditional programming language, we can do bit shifting like this:
int a = 8;
int b = a >> 1; // b = 4;

Because there is no division operator for FpVar::new_witness values, I want to use some right/left bit-shift operations over finite field variables. Could you please guide me on how to do this? I have not found the corresponding functions. Thanks!

Add allocator for `DensePolynomialVar`

Summary

Add an allocator for DensePolynomialVar, so we can easily convert an ark_poly::DensePolynomial to a DensePolynomialVar by calling DensePolynomialVar::new_variable.

We can keep the 0.3.0 release as is. It's ok to move this feature to the next release.

Proposal

Impl AllocVar for DensePolynomialVar, with ark_poly::DensePolynomial as the underlying value type.

I also plan to impl AllocVar for domains and evaluations so that we can easily convert a univariate domain in ark-poly to a variable. However, currently the univariate domain can only be a subgroup (instead of a coset). Once cosets are supported, we can do so.
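A rough sketch of the proposed allocator, assuming the DensePolynomialVar shape sketched in the earlier DensePolynomial issue (a struct with a coeffs: Vec<FpVar<F>> field); it simply reuses the element-by-element Vec allocation:

use ark_ff::PrimeField;
use ark_poly::univariate::DensePolynomial;
use ark_r1cs_std::{alloc::{AllocVar, AllocationMode}, fields::fp::FpVar};
use ark_relations::r1cs::{Namespace, SynthesisError};
use core::borrow::Borrow;

impl<F: PrimeField> AllocVar<DensePolynomial<F>, F> for DensePolynomialVar<F> {
    fn new_variable<T: Borrow<DensePolynomial<F>>>(
        cs: impl Into<Namespace<F>>,
        f: impl FnOnce() -> Result<T, SynthesisError>,
        mode: AllocationMode,
    ) -> Result<Self, SynthesisError> {
        let ns = cs.into();
        let cs = ns.cs();
        // Allocate each native coefficient as an FpVar, reusing the Vec blanket impl.
        let coeffs =
            Vec::<FpVar<F>>::new_variable(cs, || Ok(f()?.borrow().coeffs.clone()), mode)?;
        // `DensePolynomialVar` here refers to the sketch from the earlier issue.
        Ok(DensePolynomialVar { coeffs })
    }
}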

