noir's Issues

Readme - hello world example not working

Hi there,

I wanted to try out Noir. I installed it correctly, created a hello world example with nargo new hello_world, cd'd into it, and nargo build worked correctly, but nargo prove my_proof simply fails:

thread 'main' panicked at 'Expected witness values to be integers: ParseIntError { kind: Empty }', crates/noirc_abi/src/input_parser/toml.rs:91:14

What am I missing? It seems I need to supply the witness values, but there is no indication in the CLI of how to do so.

Refactor error handling

Currently, the code has a lot of unstructured error variants, which seem to be a code smell.

There is already an issue open for this; however, it seems that changing the way we handle errors would be better. Rust, for example, uses structs and proc macros, which is analogous to the error variant strategy. rslint, on the other hand, seems to inline its errors.

The first step towards closing this would be to refactor noirc_errors

Record Object span

Currently the span is not recorded in the interner for objects. Runtime errors therefore have a default span.

This is a remnant of a previous refactor where all errors were being caught at compile time.

FromBytes reduces the element

Here we call from_bytes: https://github.com/noir-lang/noir/blob/master/crates/noir_field/src/lib.rs#L162

It should behave differently from from_bytes_reduce, but it does not: from_bytes calls Fr::from_str, which does reduce the field element representation, here: https://github.com/arkworks-rs/algebra/blob/68d4347790a8531bb29b6d21e4e2f39ebae52f10/ff/src/fields/macros.rs#L595

Action steps:

  • To keep the same functionality, we should remove this function and explicitly use from_bytes_reduce.
  • Open a new issue to investigate the use cases for from_bytes_reduce. This should not cause a soundness bug as long as we can uphold the invariant that all FieldElements are always internally reduced.
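The distinction above can be illustrated with a toy example (not the arkworks API): a "field" with a small prime modulus p = 251, showing why a reducing decode is lossy for inputs >= p, while a strict decode would reject them. Both function names mirror the ones in the issue but the implementations are hypothetical.

```rust
// Toy modulus standing in for a real field's prime.
const P: u64 = 251;

// Hypothetical reducing decode: interprets the bytes as an integer and
// reduces it modulo p, mirroring what Fr::from_str does internally.
fn from_bytes_reduce(bytes: &[u8]) -> u64 {
    bytes.iter().fold(0u64, |acc, &b| (acc * 256 + b as u64) % P)
}

// Hypothetical strict decode: only accepts canonical (already-reduced) values.
fn from_bytes(bytes: &[u8]) -> Option<u64> {
    let v = bytes.iter().fold(0u64, |acc, &b| acc * 256 + b as u64);
    if v < P { Some(v) } else { None }
}

fn main() {
    // 255 >= 251, so the reducing decode silently maps it to 4.
    assert_eq!(from_bytes_reduce(&[255]), 4);
    // A strict decode surfaces the non-canonical input instead.
    assert_eq!(from_bytes(&[255]), None);
    assert_eq!(from_bytes(&[250]), Some(250));
    println!("ok");
}
```

The soundness concern is exactly the first assertion: two distinct byte strings can silently decode to the same field element under a reducing decode.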

Change sha256 and blake2s to return bytes

Currently sha256 and blake2s return two field elements named (low, high), where low and high contain the low and high 128-bit halves of the output respectively.

On the backend, we are packing the byte array. However, this is not needed; instead we can return the byte array and, if the user chooses to, pack the byte array into low and high.

Another reason why this is not good is because it implicitly assumes that the field that the constraint system is defined over cannot represent the entire output, which is true for bn254, but not in general.
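The optional packing step could then be sketched as follows, with a plain u128 standing in for a field element. Whether "low" means the first or last 16 bytes is a convention; big-endian digest bytes are assumed here.

```rust
// Pack a 32-byte digest into (low, high) 128-bit halves.
fn pack_digest(digest: &[u8; 32]) -> (u128, u128) {
    let mut high_bytes = [0u8; 16];
    let mut low_bytes = [0u8; 16];
    high_bytes.copy_from_slice(&digest[..16]);
    low_bytes.copy_from_slice(&digest[16..]);
    (
        u128::from_be_bytes(low_bytes),
        u128::from_be_bytes(high_bytes),
    )
}

fn main() {
    let mut digest = [0u8; 32];
    digest[15] = 1; // last byte of the high half
    digest[31] = 2; // last byte of the low half
    let (low, high) = pack_digest(&digest);
    assert_eq!(low, 2);
    assert_eq!(high, 1);
    println!("ok");
}
```

Note this packing only works because each half fits in 128 bits regardless of the field, which is precisely why the byte array is the more general return type.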

Remove generational arenas

For our use-case, it is possible to opt for a simpler arena implementation like la-arena, which also allows you to tag your data structure.

Move NonMainContext check to compile time

Currently, the check for functions that are not allowed to be called outside of main is at runtime. A consequence of this is that if the function is not used, it will not be checked.

This should ideally be a compile-time check. Moving this to a compile-time check would also mean that the interpreter will not distinguish between the non-main and main context.

Integrating backends

Since ACVM is analogous to LLVM in its functionality, it is possible to add additional backends as long as they, at a minimum, consume the Arithmetic data structure, which is a linear combination.

There are still a few practical limitations:

  • FieldElement can be made generic; however, it is currently fixed to bn254. The effort to make it generic would not be large, since under the hood it uses arkworks.

  • The trait Backend is not in its final form yet. This trait does not accept preprocessed circuits. This is not much of a limitation, and it is in place at the moment because the default backend does not support preprocessing.

There is also an edge case:

  • Witness indices start from 1 and not 0. This is because in the default backend, the first witness is reserved to be a constant equal to zero. The compiler does not currently have machinery to state this, so witness indices just start from 1. Since the compiler does not use the witness at index 0, backends can simply subtract one from every witness index, or add a dummy witness and constrain it to equal zero. Maybe in the future an add_constant opcode can be added.
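The two workarounds in the edge case above can be sketched as follows, with a plain Vec<u64> as a stand-in for a backend's witness assignment (the indices and types are illustrative, not the real ACIR ones):

```rust
// Option A: shift every compiler witness index down by one so the
// backend's indices start at 0.
fn shift_down(compiler_indices: &[usize]) -> Vec<usize> {
    compiler_indices.iter().map(|&w| w - 1).collect()
}

// Option B: keep the compiler's 1-based indices and prepend a dummy
// witness fixed to zero at index 0.
fn with_dummy_zero(values_from_1: &[u64]) -> Vec<u64> {
    let mut assignment = vec![0u64]; // reserved constant-zero witness
    assignment.extend_from_slice(values_from_1);
    assignment
}

fn main() {
    assert_eq!(shift_down(&[1, 2, 3]), vec![0, 1, 2]);
    let assignment = with_dummy_zero(&[7, 8]);
    assert_eq!(assignment[0], 0);
    assert_eq!(assignment[1], 7);
    println!("ok");
}
```

Option A is valid precisely because index 0 is never used by the compiler; option B matches what the default backend already does.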

Replace Barretenberg Wasm implementation with FFI

The barretenberg backend currently uses Wasm to interact with the compiler. This leads to longer proving/verification times and a limit on linear memory, which in turn places a limit on circuit size.

The disadvantage of using FFI with barretenberg is that windows will not be supported, as barretenberg does not compile for windows.

Enforce zero as the first variable for backends

This will be very helpful when upcasting integers: we simply pad with the known zero constant. This would first require that we store the bit decomposition alongside the original element, in the process removing range as a black box function.
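The upcast described above can be sketched like this: if the bit decomposition is stored alongside the integer, widening (e.g. u8 to u16) is just padding with the known zero constant rather than a fresh range check. Here each bool stands in for a circuit variable already constrained to be 0 or 1, least-significant bit first.

```rust
// Widen a bit decomposition by padding the high bits with the circuit's
// known zero constant.
fn upcast(bits: &[bool], target_width: usize) -> Vec<bool> {
    assert!(bits.len() <= target_width);
    let mut padded = bits.to_vec();
    padded.resize(target_width, false); // false = the known zero constant
    padded
}

fn main() {
    let byte = vec![true, false, true, false, false, false, false, false]; // 5 as a u8
    let word = upcast(&byte, 16);
    assert_eq!(word.len(), 16);
    assert_eq!(&word[..8], &byte[..]);
    assert!(word[8..].iter().all(|&b| !b));
    println!("ok");
}
```

No new constraints are needed for the padded bits, because the zero constant is already known to the system.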

Add signed integers

The compiler can currently parse these, however it does not know how to apply the constraints for a signed integer

Regression in ETH contract

Issue brought up by Alex Zammit.

If a program requires public inputs, the public inputs are not encoded in the proof; however, the verifier contract requires this.

  • We should change the Verifier contract to accept the public inputs as a vector and remove this encoding in the proof.

Cleanup compiler logic for FFI functions

Preparing the inputs does not change much from function to function, so this can be extracted into a parent module.

This method would then additionally normalise all witnesses which are not unit.

For expressions should contain the size of the array

The following code:

fn run(x : Field, y : Field) -> [9]Field {
    let arr = for _i in 0..10 {
        x + y
    };
    arr
}

Will error with the error message:

error: expected type priv [9]priv Field, found type priv []priv Field

The for loop (constant for loops) always has bounds that are known at compile time, so this should return the type [9]Field

Change GadgetCaller to return RuntimeError instead of RunTimeErrorKind

This should be possible since we are given the expression as parameters.

The brunt of the work will be removing the remove_span methods and changing the return type.

However, note that in call_low_level we are given a HirCallExpression and not an ExprId, so we cannot immediately get the correct span. We can either pass in the ExprId along with the HirCallExpression, or just pass the ExprId and have call_expr fetch the HirCallExpression from the interner.

It may also be possible to do a similar refactor to call_builtin

Backend does not work for ECDSA signatures over secp256k1

The problem seems to be WASM related as tests on the C++ side pass verification and the error is HeapAccessOutOfBounds.

This happens in the function call, which indicates that possibly a WASM limit is being hit, or that there may be a bug in the C++ code.

Elaborating: signature verification takes roughly 200K constraints, while the biggest circuit that I have tested is 2^19. Given this, it would be reasonable to expect signature verification not to hit the WASM limit.

Add fixed-base scalar multiplication opcode

This opcode will:

  • Perform fixed-base scalar multiplication over the embedded curve (the prime field that this curve is defined over is equal to the field that the constraint system is defined over).

  • Return two field elements signifying the x and y coordinates of a point in the embedded curve. This will be the result of scalar multiplication.

Refactor API for Partial Witness Generator

For the partial witness generator, the arithmetic section does not change between backends. With the current API, implementing a backend means copying and pasting this file.

Instead, there should be a way to make this the default behaviour, which backend implementers can override.
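In Rust, this is naturally expressed as a default trait method that a backend overrides only when it actually differs. A minimal sketch, assuming hypothetical names (PartialWitnessGenerator, solve_arithmetic):

```rust
trait PartialWitnessGenerator {
    // Shared default: the arithmetic pass is identical across backends,
    // so it lives here instead of being copy-pasted per backend.
    fn solve_arithmetic(&self, a: u64, b: u64) -> u64 {
        a + b
    }
    // Backend-specific opcodes remain required methods.
    fn name(&self) -> &'static str;
}

struct DefaultBackend;
impl PartialWitnessGenerator for DefaultBackend {
    fn name(&self) -> &'static str { "default" }
    // No solve_arithmetic: the default is inherited.
}

struct CustomBackend;
impl PartialWitnessGenerator for CustomBackend {
    fn name(&self) -> &'static str { "custom" }
    // Opt-in override, instead of duplicating the whole file.
    fn solve_arithmetic(&self, a: u64, b: u64) -> u64 {
        a.wrapping_add(b)
    }
}

fn main() {
    assert_eq!(DefaultBackend.solve_arithmetic(2, 3), 5);
    assert_eq!(CustomBackend.solve_arithmetic(2, 3), 5);
    println!("ok");
}
```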

Convert Noir to ACIR via Circ

This is a tracking issue to discuss the conversion of Noir to ACIR via Circ.

Circ treats ACIR as a backend and Noir as an interpreter.

Integrate zkinterface backend

zkinterface gives access to a few backends which, once integrated, Noir will have access to by proxy.

Bulletproofs by Dalek is not generic over any field, so it would not be doable at the moment due to the limitations mentioned here: #16. Once the field is made generic, this will be possible.

zkinterface however does allow Groth16 to be integrated which would be a great first implementation.

Reduction is not recursive and is omitting intermediate variables

The following program:

fn main(b : Field, c : Field, d : Field) {
     priv a = b + c + d;
     priv e = a + b;
}

Should produce:

_4 - _1 - _2  - _3 = 0
_5 - _4 - _1 = 0

Which for width=3 should then be reduced to:

_4 - _1 - _6 = 0
_6 - _2 - _3 = 0
_5 - _4 - _1 = 0

The result at the moment for the first equation is:

_4 - _1 = 0

This indicates that the reduction is not recursive and that the intermediate variables are not being added back to the original equation
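The intended recursive reduction can be sketched as follows: an equation (a list of (coefficient, witness) terms summing to zero) is folded to the gate width by repeatedly moving excess terms into a fresh intermediate witness, and — the step the issue reports as missing — adding that intermediate back into the original equation. Types here are illustrative, not the real ACIR ones.

```rust
type Term = (i64, usize); // (coefficient, witness index)

fn reduce(mut eq: Vec<Term>, width: usize, next_witness: &mut usize) -> Vec<Vec<Term>> {
    let mut gates = Vec::new();
    while eq.len() > width {
        // Split off the last two terms into an intermediate witness.
        let t2 = eq.pop().unwrap();
        let t1 = eq.pop().unwrap();
        let inter = *next_witness;
        *next_witness += 1;
        // New gate: inter + t1 + t2 = 0, i.e. inter = -(t1 + t2).
        gates.push(vec![(1, inter), t1, t2]);
        // Add the intermediate BACK into the original equation, so the
        // loop re-examines it on the next iteration (recursive reduction).
        eq.push((-1, inter));
    }
    gates.push(eq);
    gates
}

fn main() {
    // _4 - _1 - _2 - _3 = 0, width 3, next fresh witness is _6.
    let mut next = 6;
    let gates = reduce(vec![(1, 4), (-1, 1), (-1, 2), (-1, 3)], 3, &mut next);
    assert_eq!(gates.len(), 2);
    assert_eq!(gates[0], vec![(1, 6), (-1, 2), (-1, 3)]); // _6 - _2 - _3 = 0
    assert_eq!(gates[1], vec![(1, 4), (-1, 1), (-1, 6)]); // _4 - _1 - _6 = 0
    println!("ok");
}
```

On the example above this reproduces exactly the two expected width-3 gates.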

Sha256 and Blake2s (bits vs bytes)

Most implementations I've seen of sha256/blake2s accept bytes. This seems reasonable, since most data structures of interest, such as strings, use bytes.

In Noir it is possible to cast a field element to any number of bits, e.g. u2, u3, u4, u5. If sha256, for example, only accepts bytes (multiples of 8 bits), then an integer such as u2 would not be accepted.

We could:

  1. Implicitly pad to the next multiple of 8; however, this might be surprising for some users.
  2. Specify in ACIR that sha256 must accept arbitrary bits. This option is appealing since casting to an integer implies that the field element has been decomposed into bits.*
  3. Ask the user to explicitly cast their u2, for example, to a u8 when hashing with sha256.

*Barretenberg decomposes field elements into a base-4 representation. It would therefore have to decompose twice, which is quite inefficient.

Short term: since Barretenberg only supports bytes at the moment, option 3 would be the way to go.
Long term: since the sha256 algorithm is defined over bits and you are allowed to cast integers to arbitrary bit widths, it would make sense to eventually support bits, for uniformity.

It would also be fruitful to find out how many more constraints the bit representation uses vs the byte representation with sha256.
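Option 1 above can be sketched as follows; bits are least-significant first, so zero-padding the high end does not change the represented value (the function name is illustrative).

```rust
// Implicitly zero-pad a bit decomposition up to the next multiple of 8
// so a bytes-only sha256 gadget can accept it.
fn pad_to_bytes(bits: &[bool]) -> Vec<bool> {
    let padded_len = (bits.len() + 7) / 8 * 8;
    let mut out = bits.to_vec();
    out.resize(padded_len, false);
    out
}

fn main() {
    let u2 = vec![true, true]; // the value 3 as a u2
    let padded = pad_to_bytes(&u2);
    assert_eq!(padded.len(), 8);
    assert_eq!(&padded[..2], &u2[..]);
    assert!(padded[2..].iter().all(|&b| !b));
    println!("ok");
}
```

This is exactly the behaviour that might surprise users: the u2 silently participates in the hash as a full byte.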

Add a make_public builtin function

Ideally this should be a method on Witness; however, structures and impl blocks have not yet been implemented.

This method converts a private input into a public input.

This mirrors set_public in barretenberg. Kobi also agreed that this is indeed a helpful method to have. Furthermore, we need to add machinery to only allow this method in main.

Make Field generic

Currently Noir only compiles for bn254's scalar field; however, there is nothing stopping it from being generic over any field.

This issue starts a discussion on the work needed to make the compiler generic, and the considerations involved.

Field Trait

We would need a NoirField trait instead of a FieldElement struct. The trait methods will initially be the methods that are on FieldElement.

Noir will need to be aware of the possible fields; however, we can delegate this list to ACIR, similar to how we delegate all possible opcodes there.
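A hypothetical shape for the NoirField trait, implemented here for a toy prime field (p = 101) purely to show the wrapping pattern; the method names are stand-ins for the current FieldElement API:

```rust
trait NoirField: Sized + Copy + PartialEq {
    fn zero() -> Self;
    fn one() -> Self;
    fn add(self, other: Self) -> Self;
    fn mul(self, other: Self) -> Self;
    fn max_num_bits() -> u32;
}

#[derive(Clone, Copy, PartialEq, Debug)]
struct Toy(u64); // toy field modulo 101

impl NoirField for Toy {
    fn zero() -> Self { Toy(0) }
    fn one() -> Self { Toy(1) }
    fn add(self, other: Self) -> Self { Toy((self.0 + other.0) % 101) }
    fn mul(self, other: Self) -> Self { Toy((self.0 * other.0) % 101) }
    fn max_num_bits() -> u32 { 7 } // 101 < 2^7
}

// Compiler code then works over any F: NoirField instead of one struct.
fn square_plus_one<F: NoirField>(x: F) -> F {
    x.mul(x).add(F::one())
}

fn main() {
    assert_eq!(square_plus_one(Toy(10)), Toy(0)); // 100 + 1 = 101 ≡ 0 mod 101
    println!("ok");
}
```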

Security Level

The security level of a group would also need to be documented in Noir, so that users can be warned when they are using a group which has less than 128-bit security.

Some considerations about the implementation:

Question: Should Noir have implementations of all fields supported?

Compiling with implementations for all supported fields is the current idea; here are the advantages and disadvantages.

Advantages

  • dyn will not need to be used.
  • There will be a single field element implementation per field that backends can choose to hook into.

Disadvantages

  • The program will now contain all possible field element implementations, increasing the binary size.
  • If a backend wants to use a new field, they will need to first upstream the implementation and implement the NoirField trait for it (This may not be a disadvantage)
  • Backends will need to convert from a NoirField trait to the type their proving system uses. (unless they implement it over a NoirField Trait) This may mean a lot of deserialising and subsequent performance loss.

Since the NoirField trait can be implemented for the arkworks Field trait, we immediately get all field implementations that arkworks provides. For newer fields, I believe that the surface area of the NoirField trait is small and generic enough that the cost of wrapping a new field is negligible.

Another idea would be for backends to supply their own field element implementation to plug into Noir; however, this encourages code duplication between backends and may lead to non-deterministic constraint systems over the same field if the field element implementations differ.

Compiler aware Field

The compiler currently does not need to be aware of the chosen field implementation, since there is only one. It is, however, aware of the different backends, and we would use the same mechanism: the chosen field is passed as a parameter value in the package manager config file, and the package manager picks the corresponding data structure, which is then passed to the compiler.

Rough Action steps

  • Replace the FieldElement struct with the Field trait, but still only compile for bn254.
  • Add all supported fields to noir_field as an enum. We can then add a new method supported_fields to the proving system trait, which allows backends to specify which of these fields they support. At this moment in time, there will only be one supported field: bn254.
  • Add bls12-381 and allow the user to state that they want to use bls12-381. The barretenberg backend should return an error, as it only supports bn254. We modify the arkworks backend, adding bls12_381 to its supported_fields, adding the bls12_381 field as a viable field, and testing whether it compiles.
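The supported_fields mechanism from the action steps could look roughly like this; the enum and trait names are illustrative, with ACIR owning the list of known fields and each backend reporting which it supports:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum FieldChoice {
    Bn254,
    Bls12_381,
}

trait ProvingSystem {
    fn supported_fields(&self) -> Vec<FieldChoice>;

    // Driver-side check: reject an unsupported field before compiling.
    fn check_field(&self, field: FieldChoice) -> Result<(), String> {
        if self.supported_fields().contains(&field) {
            Ok(())
        } else {
            Err(format!("backend does not support {:?}", field))
        }
    }
}

struct Barretenberg;
impl ProvingSystem for Barretenberg {
    // Barretenberg only supports bn254, so asking for bls12-381 errors.
    fn supported_fields(&self) -> Vec<FieldChoice> {
        vec![FieldChoice::Bn254]
    }
}

fn main() {
    assert!(Barretenberg.check_field(FieldChoice::Bn254).is_ok());
    assert!(Barretenberg.check_field(FieldChoice::Bls12_381).is_err());
    println!("ok");
}
```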

Conditionally compile backends

Currently backends are being drawn in as Rust dependencies.

This is not ideal because the binary then grows per dependency, even if you are not using it.

Instead, we should conditionally compile backends; if the user asks for a backend which is not installed, we ask them to use the package manager to recompile with it.
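In Rust this could be done with Cargo feature flags; the feature and crate names below are illustrative, not the actual nargo manifest:

```toml
# Illustrative Cargo.toml fragment: each backend sits behind an optional
# dependency, so the binary only contains the backends it was compiled with.
[features]
default = ["barretenberg"]
barretenberg = ["barretenberg_backend"]
arkworks = ["arkworks_backend"]

[dependencies]
barretenberg_backend = { version = "0.1", optional = true }
arkworks_backend = { version = "0.1", optional = true }
```

The CLI would then detect a missing backend at runtime (e.g. via cfg!(feature = "arkworks")) and tell the user to reinstall with that feature enabled.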

Consider setting a precedence for Implicit opcode constraints

For the context of range constraints:

Many gadgets may implicitly decompose their inputs. To avoid a potential constraint blow-up, we should state that each gadget is given the decomposed input initially.

What problems arise from this?

If the underlying constraint system is not using base2, then this may additionally add more constraints.

There is currently no solution for this; it would be dangerous to conditionally compile the output of a black box function. The solution that Rust uses in this case is to annotate functions and conditionally compile for a specific target. This, AFAICT, is not needed just yet (ever?) as most gadgets have informally standardised inputs and outputs.

For example, we can foresee that all sha256 implementations will assume the input is bytes, and not base4.

Add a section on the SRS

Since most proving systems use an SRS/CRS, it would be useful to have a section on this.

This definition would also encapsulate the Pedersen generators used in bulletproofs. Both bulletproof and PLONK use a trapdoor.

The differentiation between the CRS used in bulletproofs and the SRS used in a system like PLONK is that, in order to exploit the trapdoor in bulletproofs, you would need to break DLOG or some other hardness assumption, whereas to exploit the trapdoor in PLONK/KZG you would need to know some secret scalar, which in practice can be known if the trusted setup is compromised.

We regard the former proving system as using a transparent setup and the latter as using a trusted setup.

Change TypeMismatch error span to callsite

Program:

fn main(b : Field) {
     foo(b);
}

fn foo(a : u32) -> Field {
     a
}

This program fails because the foo function requires a u32 and a Field was given.

The error message given is:

error: expected type priv u32, found type priv Field
  ┌─ /src/main.nr:7:8
  │
7 │ fn foo(a : u32) -> Field {
  │        -

error: aborting due to 1 previous errors

The error should point to the callsite and not the function declaration

Arkworks cargo install does not seem to work reliably

Using cargo install to install the binary from the local path seems to trigger a "method not implemented" error in arkworks.

The workaround for this was to:

  • cargo build to create the binary
  • copy binary into the PATH
  • chmod +x /path/to/binary if it is sometimes seen as a text file
