bellpepper's Issues

circom compatibility

Is there any way that one could import existing circom templates to use as gadgets in bellpepper?

WitnessCS::with_capacity doesn't pre-populate ONE

WitnessCS::new pre-populates ONE among the inputs:

fn new() -> Self {
    let input_assignment = vec![Scalar::ONE];
    Self {
        input_assignment,
        aux_assignment: vec![],
    }
}

But WitnessCS::with_capacity, introduced in #60, does not, which is likely to trip up users:

pub fn with_capacity(input_size: usize, aux_size: usize) -> Self {
let input_assignment = Vec::with_capacity(input_size);
let aux_assignment = Vec::with_capacity(aux_size);
Self {
input_assignment,
aux_assignment,
}
}

We should fix this before any release of bellpepper.
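
A possible fix is to mirror new by seeding the input assignment with ONE; a minimal sketch, not a committed patch:

pub fn with_capacity(input_size: usize, aux_size: usize) -> Self {
    let mut input_assignment = Vec::with_capacity(input_size);
    // Pre-populate the public inputs with ONE, exactly as `new` does.
    input_assignment.push(Scalar::ONE);
    let aux_assignment = Vec::with_capacity(aux_size);
    Self {
        input_assignment,
        aux_assignment,
    }
}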

Repercussion of latest `Num` addition changes

Issue

The recent PR #88 introduces a new behavior for Num::add, favouring None over Some(v) as the result in the (Some(v), None) | (None, Some(v)) cases.
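
A simplified illustration of the value-combining behavior described above (assuming the witness values are Option<Scalar>; this is not the actual Num::add source):

fn add_values<Scalar: ff::Field>(a: Option<Scalar>, b: Option<Scalar>) -> Option<Scalar> {
    match (a, b) {
        (Some(a), Some(b)) => Some(a + b),
        // Previously a lone Some(v) could survive these cases; now any missing
        // value makes the whole sum None.
        _ => None,
    }
}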

This new behavior seems to break the current implementation of PoseidonROCircuit::squeeze in arecibo. In that method we call SpongeCircuit::absorb with the current state elements, which do not have any values at that stage.

In neptune, absorb calls add_rate_element with a Num::add whose result is the None variant; that value is passed on to the set_element method, which calls unwrap on it and thus panics.

I'm not sure about the correct way to fix this, so I'm looking for inputs.

How to reproduce

This bug was found while using the Bn256EngineKZG engine at PublicParams::setup, both for Nova & SuperNova. You can reproduce it on your machine by running this test on ethereum-lc/aggpk.

I was also able to reproduce it on aptos-lc when patching both bellpepper & bellpepper-core to dev.

Support for linear constraints

When folding an R1CS circuit, the cross-term computation
$$T = AZ_1 \circ BZ_2 + AZ_2 \circ BZ_1 - u_1 \cdot CZ_2 - u_2 \cdot CZ_1$$
can be computed more efficiently when the circuit takes advantage of linear constraints of the form:
$$0 = \sum_i C_{j,i} \cdot z_i.$$
Specifically, this corresponds to a constraint where the linear combinations in the rows $A_j, B_j$ are equal to zero.
If this is the case, then the $j$-th component of $T$ will always be equal to zero, and the computation for the $j$-th constraint can be avoided entirely.
Moreover, the computation of the commitment to $T$ can also be optimized to skip the entries corresponding to linear constraints.

In order to encourage the use of linear constraints, I propose adding the following two methods to the ConstraintSystem trait.

/// Enforce that `A` = 0. The `annotation` function is invoked in testing contexts
/// in order to derive a unique name for the constraint in the current namespace.
fn enforce_eq_zero<A, AR, LA>(&mut self, annotation: A, a: LA)
where
    A: FnOnce() -> AR,
    AR: Into<String>,
    LA: FnOnce(LinearCombination<Scalar>) -> LinearCombination<Scalar>,
{
    self.enforce(
        annotation,
        |_| LinearCombination::zero(),
        |_| LinearCombination::zero(),
        a,
    );
}

/// Enforce that `A` = `B`. The `annotation` function is invoked in testing contexts
/// in order to derive a unique name for the constraint in the current namespace.
fn enforce_eq<A, AR, LA, LB>(&mut self, annotation: A, a: LA, b: LB)
where
    A: FnOnce() -> AR,
    AR: Into<String>,
    LA: FnOnce(LinearCombination<Scalar>) -> LinearCombination<Scalar>,
    LB: FnOnce(LinearCombination<Scalar>) -> LinearCombination<Scalar>,
{
    let zero = a(LinearCombination::zero()) - &b(LinearCombination::zero());
    self.enforce_eq_zero(annotation, |_| zero);
}
Both of these functions can have default implementations defined in terms of the existing enforce function, avoiding backwards-compatibility issues, while still allowing folding-specific ConstraintSystems to handle linear constraints differently.
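
Assuming the methods above were added to the trait, a call site might look like the following (x and y are previously allocated AllocatedNums; this is illustrative only):

cs.enforce_eq(
    || "x equals y",
    |lc| lc + x.get_variable(),
    |lc| lc + y.get_variable(),
);

A linear-constraint-aware backend could record this as a single row with empty A and B matrices, while the default implementation simply falls back to enforce.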

Issues with automated publication

Some rough spots publishing 0.2.0 with @porcuquine today:

  1. check the cargo-release version; folks don't update this one frequently,
  2. the Makefile publishes all tags, which is not good if the operator's local tags include those from a remote of the forked project,
  3. the packages are interdependent, so we need to instruct cargo-release about the dependency order,
  4. and we probably need some massaging with cargo-release so that the path dependency from bellpepper-core to bellpepper doesn't prevent the operator from publishing.

About the bench for different `Indexer.insert_or_update` in lc mod

I'm learning the implementation of bellpepper. The Indexer in the lc module caught my attention: it is mainly used to maintain (k, v) pairs that must stay sorted by k, and its insert_or_update method affects the performance of LinearCombination.

This is the original implementation of Indexer and its insert_or_update:

struct Indexer<T> {
    /// Stores a list of `T` indexed by the number in the first slot of the tuple.
    values: Vec<(usize, T)>,
    /// `(index, key)` of the last insertion operation. Used to optimize consecutive operations.
    last_inserted: Option<(usize, usize)>,
}

pub fn insert_or_update<F, G>(&mut self, key: usize, insert: F, update: G)

I think the same functionality can also be achieved with a BTreeMap, and insertion into a BTreeMap may be faster than into a Vec, because inserting into the middle of a Vec requires shifting all of the remaining elements.

Here is the modified one:

struct Indexer<T> {
    values: BTreeMap<usize, T>,
}

https://github.com/zkp-learning/bellpepper/blob/22f81bb64e977cab6a34f26234c56d2f685aec0d/crates/bellpepper-core/src/lc.rs#L70
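
For reference, a minimal sketch of how insert_or_update can be expressed over the BTreeMap-backed struct above (the closure bounds are guessed from the signature shown earlier; this is an illustration, not the benchmarked code):

impl<T> Indexer<T> {
    pub fn insert_or_update<F, G>(&mut self, key: usize, insert: F, update: G)
    where
        F: FnOnce() -> T,
        G: FnOnce(&mut T),
    {
        // The entry API performs a single lookup: update in place if the key
        // exists, otherwise insert the freshly created value.
        self.values.entry(key).and_modify(update).or_insert_with(insert);
    }
}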

But the results show that the original implementation is faster in most cases:

Condition                  Vec + pointer   BTreeMap
add Fr with order          853.55 ns       2.8 us
add Fr with rev order      3.9 us          3.2 us
add Fr with random order   1.1 us          1.4 us
add lc                     777 ns          1.4 us

detail: https://github.com/zkp-learning/bellpepper-bench#readme

I think this is because the time complexity of the original implementation is O(1) when inserting in order, while BTreeMap is O(log n).
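
To make the in-order O(1) case concrete, here is a simplified reconstruction of the kind of fast path the last_inserted field enables for the Vec-backed Indexer (not the verbatim lc.rs code):

impl<T> Indexer<T> {
    pub fn insert_or_update<F, G>(&mut self, key: usize, insert: F, update: G)
    where
        F: FnOnce() -> T,
        G: FnOnce(&mut T),
    {
        if let Some((last_pos, last_key)) = self.last_inserted {
            // Fast path: the key matches or directly extends the last insertion,
            // so no search and no element shifting is needed.
            if last_key == key {
                update(&mut self.values[last_pos].1);
                return;
            }
            if last_key < key && last_pos + 1 == self.values.len() {
                self.values.push((key, insert()));
                self.last_inserted = Some((last_pos + 1, key));
                return;
            }
        }
        // Slow path: binary search for the key, then update or insert
        // (inserting into the middle of the Vec shifts the tail, hence O(n)).
        match self.values.binary_search_by_key(&key, |(k, _)| *k) {
            Ok(pos) => update(&mut self.values[pos].1),
            Err(pos) => {
                self.values.insert(pos, (key, insert()));
                self.last_inserted = Some((pos, key));
            }
        }
    }
}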

Support check for unconstrained allocation

When writing circuits, completely unconstrained allocations are a common source of programmer error, leading to soundness bugs. This class of error can be detected entirely at circuit-shape-synthesis time.

Bellpepper already inherits (from bellperson/bellman) an error variant intended to signal this condition, SynthesisError::UnconstrainedVariable:

UnconstrainedVariable,

We should add support to check synthesized constraint systems for this — either at program run-time, or in tests. Ideally, this would be implemented such that there is no cost if the check is never performed.

Since there are legitimate reasons to allocate unconstrained values, we will need to add a new operation (say, AllocatedNum::alloc_unconstrained(…)), allowing circuit programmers to white-list such allocations. This will greatly improve clarity when this is actually desired, by forcing it to be declared explicitly.
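
A rough sketch of what such a whitelisting allocation could look like, written as a free function since the recording hook does not exist yet (the function name, the namespace label, and the module path of AllocatedNum are assumptions, not existing bellpepper API):

use bellpepper_core::{num::AllocatedNum, ConstraintSystem, SynthesisError};
use ff::PrimeField;

fn alloc_unconstrained<Scalar, CS, F>(
    mut cs: CS,
    value: F,
) -> Result<AllocatedNum<Scalar>, SynthesisError>
where
    Scalar: PrimeField,
    CS: ConstraintSystem<Scalar>,
    F: FnOnce() -> Result<Scalar, SynthesisError>,
{
    // Allocate exactly as `AllocatedNum::alloc` does ...
    let num = AllocatedNum::alloc(cs.namespace(|| "unconstrained"), value)?;
    // ... and here the constraint system would additionally record
    // `num.get_variable()` as intentionally unconstrained, so the optional
    // check can whitelist it. How that is recorded is the open design question.
    Ok(num)
}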

When performing the optional check on a constraint system supporting it, unconstrained allocations that are declared as above should not result in an error. Conversely, declared unconstrained allocations that are in fact constrained should also result in an error. We will need a new error type for this.

The principle suggested here could be taken further, since constrained/unconstrained is a crude binary hammer. Consider that an AllocatedBit will be 'constrained' at the R1CS level, but may still represent a conceptually unconstrained allocation. Ideally, we could similarly catch such unconstrained (except for the booleanity constraint) AllocatedBits — with an equivalent whitelisted-allocation form.

Of course, this could get complicated and expensive. One possibility that might be viable in general is to include metadata during shape synthesis (when opting into a more expensive analysis mode) so that 'partial constraints' of the kind created by AllocatedBit on construction are recorded and can be ignored when attempting to determine whether a given variable has been fully constrained.

These checks should apply both to inputs and aux variables.

If we want this to be shared by many constraint systems, we might need some refactoring and common traits. In that case, Nova's ShapeCS, for example, should be considered.


The most obvious implementation approach would involve a set of all variables (or their indices), with variables removed from the set as they are encountered in (qualifying — if/when we implement that nuance) constraints. A bit field might be a reasonably performant way to instantiate the idea.

For example, a circuit with n variables might have a bit field initialized to [0xFFu8; (n + 7) / 8]. Then, as each variable is encountered in a constraint, the bit at the position corresponding to its index is cleared. If we also explicitly clear the bits corresponding to whitelisted vars, then any remaining set bits correspond to unconstrained allocations, which should be reported.
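
A minimal sketch of that bit-field bookkeeping (hypothetical names, not an existing bellpepper type):

struct UnconstrainedTracker {
    /// One bit per allocated variable; a set bit means "not yet seen in any constraint".
    bits: Vec<u8>,
    num_vars: usize,
}

impl UnconstrainedTracker {
    fn new(num_vars: usize) -> Self {
        Self {
            bits: vec![0xFFu8; (num_vars + 7) / 8],
            num_vars,
        }
    }

    /// Clear the bit when a variable appears in a constraint, or when it is
    /// explicitly whitelisted via an `alloc_unconstrained`-style call.
    fn mark_constrained(&mut self, var_index: usize) {
        self.bits[var_index / 8] &= !(1u8 << (var_index % 8));
    }

    /// Any index whose bit is still set corresponds to an unconstrained
    /// allocation that should be reported.
    fn unconstrained_indices(&self) -> Vec<usize> {
        (0..self.num_vars)
            .filter(|&i| self.bits[i / 8] & (1u8 << (i % 8)) != 0)
            .collect()
    }
}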

Or maybe that's overkill, and a simpler but heavier data structure will suffice.


As a final note, the existing error (which should be retained for compatibility by consumers who still use Groth16 with bellpepper) doesn't report which variable was unconstrained, but it would be much more useful for debugging purposes to include it. So we may need a new error for this, as well as one for the constrained-but-whitelisted case (which should also report the offending variable).

theory of base gadgets

From #55 (comment):

If we are establishing a new set of gadgets, it may be worth taking a moment to consider unification of similar alternate approaches for two reasons:

  • This may lead to the most coherent/efficient foundation.
  • This may simplify achieving the goal of migrating downstream projects to use a common base.

To that end, let's take a moment to make an active decision. The current function (alloc_num_is_zero) can be implemented in terms of the (also important) alloc_equal_const (https://github.com/lurk-lab/lurk-rs/blob/dae0d6c478a83c80e3b3753c29f89e4ecd2a6313/src/circuit/gadgets/constraints.rs#L495) [UPDATE: actually alloc_num_is_zero operates on a Num — so my points here are not quite germane to this work. The general point stands as something we should sort out as part of a 'base gadgets' project, though.].

Arecibo, for path-dependent reasons associated with its own internal consistency, has the slightly-differently-named alloc_num_equals_const: https://github.com/lurk-lab/arecibo/blob/6a9e8707e868068abf8c6b8da999443c3f3d0895/src/gadgets/utils.rs#L200-L237.

In addition to the slight naming difference, that function has a more concise (arguably simpler — if also less didactic) definition, itself derived from its alloc_num_equals (https://github.com/lurk-lab/arecibo/blob/6a9e8707e868068abf8c6b8da999443c3f3d0895/src/gadgets/utils.rs#L156-L197), which has a similar relationship to Lurk's equivalent alloc_num_equal (https://github.com/lurk-lab/lurk-rs/blob/dae0d6c478a83c80e3b3753c29f89e4ecd2a6313/src/circuit/gadgets/constraints.rs#L440-L492C2).

The point of this comment is to suggest we take a moment to actively decide:

  • About names, do we want to use:
    • the lurk-rs names;
    • the nova/arecibo names;
    • some (potentially) more lexically-consistent names considered from the ground up to solve the larger design problem now faced.
  • About implementations, do we want to use:
    • the lurk-rs implementations;
    • the nova/arecibo implementations;
    • fresh implementations [note: probably not a good idea in this work]

The reason this is worth considering is that any refactoring of both arecibo (which implies the hope for an eventual change in nova) and lurk-rs will have audit/correctness implications for at least one of those projects. We should seek to minimize global cost and maximize eventual global confidence in correctness with whatever we do here.

I don't think we need to overthink this, just make an active decision. I suggest there are only four reasonable choices, and all are more-or-less acceptable with the right argument:

  • lurk-rs names and implementations (status quo of this PR)
  • arecibo names and implementations (status quo if we had started from the premise of refactoring arecibo)
  • lurk-rs names and arecibo implementations (possible lessening of the lower-level crypto's auditing burden as a matter of policy, or in deference to the lowest-level's cryptographic weight).
  • arecibo names and lurk-rs implementations (in deference to lurk-rs and many of the gadgets in question more broadly than just this PR having undergone one round of audit).

I would say that the last option probably makes little sense. If the logic for lurk-rs implementations is strong, we should probably also use lurk-rs names. This is because lurk-rs names are probably the better choice anyway, and changes of naming for purpose of gadget standardization should be non-threatening to arecibo/nova.

In the end, I think my vote is for lurk-rs names and either implementation. If we know we will prefer one implementation over the other, it's probably best to initialize this work with that choice so we can follow a policy of not gratuitously changing implementations of gadgets — since this has a substantive effect on any circuit depending on them; making it a deeply breaking change in ways that are not instantly obvious.

cc: @adr1anh @gabriel-barrett @huitseeker

Originally posted by @porcuquine in #55 (comment)
