noir-lang / noir
Noir is a domain specific language for zero knowledge proofs
Home Page: https://noir-lang.org
License: Apache License 2.0
Semantically this conveys the meaning of the as operator more clearly, as we do not truncate the Field.
Here: https://github.com/noir-lang/noir/blob/master/crates/noirc_frontend/src/hir/type_check/expr.rs#L287
and
Here: https://github.com/noir-lang/noir/blob/master/crates/noirc_frontend/src/hir/type_check/expr.rs#L290
This should say RHS in the error message.
Hi there,
I wanted to try out Noir. I installed it correctly, created a hello world example with nargo new hello_world, cd'd into it, and ran nargo build successfully, but nargo prove my_proof simply fails:
thread 'main' panicked at 'Expected witness values to be integers: ParseIntError { kind: Empty }', crates/noirc_abi/src/input_parser/toml.rs:91:14
What am I missing? It seems I need to give the witness, but there's no indication in the CLI of how.
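The panic comes from crates/noirc_abi/src/input_parser/toml.rs, so nargo is looking for witness values in a TOML input file. As a hedged sketch only — the expected filename and the parameter names depend on your nargo version and on the signature of your main function — the file would look something like:

```toml
# Placeholder values for the parameters of main; the names below
# are illustrative and must match your own function's parameters.
x = "5"
y = "10"
```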
Currently, the code has a lot of unstructured error variants, which seem to be a code smell.
There is already an issue open for this; however, it seems that changing the way we handle errors would be better. Rust, for example, uses structs and proc macros, which is analogous to the error variant strategy. rslint, on the other hand, seems to inline its errors.
The first step towards closing this would be to refactor noirc_errors.
Currently the span is not recorded in the interner for objects. Runtime errors therefore have a default span.
This is a remnant of a previous refactor where all errors were being caught at compile time.
This would move most of the FFI function checks from runtime to compile time. It is also a superset of #22
This method uses std::process::exit which prevents drops from happening. This was noted by @Kixiron .
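To illustrate the problem: the sketch below (the Guard type is hypothetical, purely for demonstration) shows why returning an exit code from a helper and calling exit only at the very end of main keeps destructors running, whereas calling std::process::exit early would skip them.

```rust
use std::cell::Cell;
use std::process::exit;

thread_local! {
    static DROPPED: Cell<bool> = Cell::new(false);
}

struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        DROPPED.with(|d| d.set(true));
    }
}

fn run() -> i32 {
    let _g = Guard;
    // Returning lets `_g` drop normally. Calling `exit(1)` here instead
    // would terminate the process immediately and Drop would never run.
    0
}

fn main() {
    let code = run();
    assert!(DROPPED.with(|d| d.get())); // the destructor did run
    exit(code); // safe: everything has already been dropped
}
```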
Here we call from_bytes : https://github.com/noir-lang/noir/blob/master/crates/noir_field/src/lib.rs#L162
which should behave differently from from_bytes_reduce. This is not the case, as from_bytes calls Fr::from_str, which does reduce the field element representation, here: https://github.com/arkworks-rs/algebra/blob/68d4347790a8531bb29b6d21e4e2f39ebae52f10/ff/src/fields/macros.rs#L595
Action steps:
Currently sha256 and blake2s return two field elements named (low, high), where low and high contain the low and high 128-bit halves of the output respectively.
On the backend, we are packing the byte array. However, this is not needed; instead, you can return the byte array and then, if the user chooses to, pack the byte array into low and high.
Another reason why this is not good is that it implicitly assumes that the field that the constraint system is defined over cannot represent the entire output, which is true for bn254, but not in general.
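As a rough illustration of the packing being described — the function name and byte order here are assumptions, not Noir's actual API — packing a 32-byte digest into (low, high) could look like:

```rust
use std::convert::TryInto;

// Hypothetical sketch: split a 32-byte hash output into two 128-bit
// halves, interpreting the bytes in big-endian order (an assumption).
fn pack_digest(bytes: &[u8; 32]) -> (u128, u128) {
    let high = u128::from_be_bytes(bytes[..16].try_into().unwrap());
    let low = u128::from_be_bytes(bytes[16..].try_into().unwrap());
    (low, high)
}
```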
For the use-case it is possible to opt for a simpler arena implementation like la-arena, which also allows you to tag your data structure.
Currently, the check for functions that are not allowed to be called outside of main is at runtime. A consequence of this is that if the function is not used, it will not be checked.
This should ideally be a compile-time check. Moving this to a compile-time check would also mean that the interpreter will not distinguish between the non-main and main context.
Since ACVM is analogous to LLVM in its functionality, it is possible to add additional backends as long as they, at a minimum, consume the Arithmetic data structure, which is a linear combination.
There are still a few practical limitations:
FieldElement can be generic, however it currently is fixed to bn254. The effort to make it generic would not be large since under the hood, it uses arkworks.
The trait Backend
is not in its final form as of yet. This trait does not accept preprocessed circuits. This is not much of a limitation and is in place at the moment because the default backend does not support this.
There is also an edge case:
The barretenberg backend currently uses Wasm to interact with the compiler. This leads to longer prover/verification times and a limit on linear memory which in turn places a limit on circuit size.
The disadvantage of using FFI with barretenberg is that Windows will not be supported, as barretenberg does not compile for Windows.
This will be very helpful when upcasting integers. We simply pad with the known zero constant. This would first require that we store the bit decomposition alongside the original element, in the process, removing range as a black box function.
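A minimal sketch of the padding step, assuming a little-endian bit decomposition (the function name is illustrative, not the evaluator's actual code):

```rust
// Hypothetical sketch: upcast a little-endian bit decomposition to a
// wider integer type by padding the high bits with the known zero
// constant, instead of re-running a range constraint.
fn upcast_bits(bits: &[bool], target_width: usize) -> Vec<bool> {
    assert!(bits.len() <= target_width, "cannot downcast");
    let mut padded = bits.to_vec();
    padded.resize(target_width, false); // append zero bits
    padded
}
```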
There should be a way to only download the necessary amount of the SRS
The compiler can currently parse these; however, it does not know how to apply the constraints for a signed integer.
Issue brought up by Alex Zammit.
If a program requires public inputs, the public inputs are not encoded in the proof; however, the verifier contract requires them.
Preparing the inputs does not change much from function to function, so this can be extracted into a parent module.
This method would now additionally normalise all witnesses which are not unit.
The following code:

```
fn run(x : Field, y : Field) -> [9]Field {
    let arr = for _i in 0..10 {
        x + y
    };
    arr
}
```
will error with the message:

```
error: expected type priv [9]priv Field, found type priv []priv Field
```
The for loop (constant for loops) always has bounds that are known at compile time, so this should return the type [9]Field
This should be possible since we are given the expression as parameters.
The brunt of the work will be removing the remove_span
methods and changing the return type.
However, note that in call_low_level we are given a HirCallExpression and not an ExprId, so we cannot immediately get the correct span. We can either pass in the ExprId along with the HirCallExpression, or we can just pass the ExprId and have call_expr get the HirCallExpression from the interner.
It may also be possible to do a similar refactor to call_builtin
Currently the framework for SRS generation and re-use is non-existent. The barretenberg backend re-uses the ignite SRS whereas the arkworks backend re-generates on each proof.
Intern field elements so that they are simply u32/64s
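A minimal sketch of what such an interner could look like — the FieldInterner type is hypothetical, and field elements are represented as raw bytes here for simplicity:

```rust
use std::collections::HashMap;

// Hypothetical sketch of interning: each distinct field element is
// stored once and referred to everywhere else by a compact u32 id.
#[derive(Default)]
struct FieldInterner {
    ids: HashMap<Vec<u8>, u32>,
    elements: Vec<Vec<u8>>,
}

impl FieldInterner {
    fn intern(&mut self, element: Vec<u8>) -> u32 {
        if let Some(&id) = self.ids.get(&element) {
            return id; // already interned: reuse the existing id
        }
        let id = self.elements.len() as u32;
        self.elements.push(element.clone());
        self.ids.insert(element, id);
        id
    }

    fn lookup(&self, id: u32) -> &[u8] {
        &self.elements[id as usize]
    }
}
```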
The problem seems to be WASM related, as tests on the C++ side pass verification and the error is HeapAccessOutOfBounds.
This happens in the function call, which indicates that possibly a WASM limit is being hit, or that there may be a bug in the C++ code.
Elaborating: signature verification takes roughly 200K constraints, while the biggest circuit that I have tested is 2^19 constraints. From this, it would be reasonable for signature verification not to hit the WASM limit.
This opcode will:
- Perform fixed-base scalar multiplication over the embedded curve (the prime field that this curve is defined over is equal to the field that the constraint system is defined over).
- Return two field elements signifying the x and y coordinates of a point on the embedded curve. This will be the result of the scalar multiplication.
For the partial witness generator, the arithmetic section does not change between backends. With the current API, implementing a backend means copying and pasting this file.
Instead, there should be a way to have this as the default behaviour, which backend implementers can override.
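The suggested structure can be sketched with a Rust trait that supplies a provided (default) method; all names here (PartialWitnessGenerator, solve_arithmetic, solve_blackbox) are illustrative, not Noir's actual API.

```rust
// Hypothetical sketch: the shared arithmetic solving lives in a
// provided trait method, so backends inherit it but may override it.
trait PartialWitnessGenerator {
    // Default behaviour, identical across backends: a satisfied
    // arithmetic gate's linear combination sums to zero.
    fn solve_arithmetic(&self, gate_sum: i64) -> bool {
        gate_sum == 0
    }

    // Backend-specific opcodes still have to be implemented.
    fn solve_blackbox(&self, opcode: &str) -> Result<(), String>;
}

struct DummyBackend;

impl PartialWitnessGenerator for DummyBackend {
    fn solve_blackbox(&self, opcode: &str) -> Result<(), String> {
        Err(format!("unsupported opcode: {}", opcode))
    }
}
```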
This is a tracking issue to discuss the conversion of Noir to ACIR via Circ.
Circ treats ACIR as a backend and Noir as an interpreter.
zkinterface gives access to a few backends; once integrated, Noir will by proxy have access to them.
Bulletproofs by Dalek is not generic over any field, so it would not be doable at the moment due to the limitations mentioned here: #16. Once the Field is made generic, this will be possible.
zkinterface does, however, allow Groth16 to be integrated, which would be a great first implementation.
The following program:

```
fn main(b : Field, c : Field, d : Field) {
    priv a = b + c + d;
    priv e = a + b;
}
```
Should produce:
_4 - _1 - _2 - _3 = 0
_5 - _4 - _1 = 0
Which for width=3 should then be reduced to:
_4 - _1 - _6 = 0
_6 - _2 - _3 = 0
_5 - _4 - _1 = 0
The result at the moment for the first equation is:
_4 - _1 = 0
This indicates that the reduction is not recursive and that the intermediate variables are not being added back to the original equation.
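The recursive reduction being described can be sketched as follows; this is an illustrative model (terms as (coefficient, witness) pairs), not the evaluator's actual code. The key step is pushing the new intermediate variable back into the remaining equation.

```rust
// Hypothetical sketch: reduce a linear equation with many terms to
// width-3 gates by introducing intermediate witnesses recursively.
fn reduce_to_width3(
    mut terms: Vec<(i64, usize)>,
    next_witness: &mut usize,
) -> Vec<Vec<(i64, usize)>> {
    let mut gates = Vec::new();
    while terms.len() > 3 {
        // Fold two terms into a fresh intermediate witness _t,
        // emitting the gate: _t - (c_a * w_a) - (c_b * w_b) = 0.
        let a = terms.pop().unwrap();
        let b = terms.pop().unwrap();
        let t = *next_witness;
        *next_witness += 1;
        gates.push(vec![(1, t), (-a.0, a.1), (-b.0, b.1)]);
        // Crucially, add the intermediate back into the remaining
        // equation (the step the issue says is being skipped).
        terms.push((1, t));
    }
    gates.push(terms);
    gates
}
```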
The semantics around integer multiplication have been changed, such that multiplying two integers of the same bit width returns an unconstrained integer of the same bit width.
This line should be changed to be self.num_bits: https://github.com/noir-lang/noir/blob/master/crates/noirc_evaluator/src/object/integer.rs#L197
Most implementations I've seen of sha256/blake2s accept bytes. This seems reasonable, since most of the data structures of interest, such as strings, use bytes.
In Noir it is possible to cast a field element to any number of bits, i.e. u2, u3, u4, u5. If sha256, for example, only accepts bytes (multiples of 8), then an integer such as u2 would not be accepted.
We could:
* Barretenberg decomposes field elements into a base4 representation; it would decompose twice, which is quite inefficient.
Short term: since Barretenberg only supports bytes at the moment, option 3 would be the way to go.
Long term: since the sha256 algorithm is defined for bits and you are allowed to cast integers to arbitrary bit widths, it would make sense to eventually support bits, for uniformity.
It would also be fruitful to find out how many more constraints the bit representation uses vs the byte representation with sha256.
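One way to sketch the byte-padding route for small bit widths (a hypothetical helper, not an actual Noir or Barretenberg API):

```rust
// Hypothetical sketch: widen a small-bit-width integer (e.g. a u2)
// into whole bytes by zero-padding, so a bytes-only gadget accepts it.
fn to_padded_bytes(value: u64, bit_width: usize) -> Vec<u8> {
    assert!(bit_width >= 1 && bit_width <= 64);
    let byte_len = (bit_width + 7) / 8; // round up to whole bytes
    value.to_le_bytes()[..byte_len].to_vec()
}
```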
Ideally this should be a method on witness, however structures and impl blocks have not yet been implemented.
This method converts a private input into a public input.
This mirrors set_public
in barretenberg. Kobi also confirmed that this is indeed a helpful method to have. Furthermore, we need to add machinery to only allow this method in main.
To reproduce:
Currently Noir only compiles for bn254's scalar field; however, there is nothing stopping it from being generic over any field.
This issue starts a discussion on the work needed to make the compiler generic, along with considerations.
Field Trait
We would need a NoirField Trait instead of a FieldElement struct. The trait methods will initially be the methods that are on FieldElement.
Noir will need to be aware of the possible fields; however, we can delegate this list to ACIR, similar to how we delegate all possible opcodes there.
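A rough sketch of what the trait could look like, together with a toy field mod 7 purely for illustration. The method set is an assumption mirroring a typical FieldElement API, not the actual interface.

```rust
// Hypothetical sketch of a NoirField trait; the method set is
// illustrative, mirroring what a FieldElement struct might expose.
trait NoirField: Sized + Clone + PartialEq {
    fn zero() -> Self;
    fn one() -> Self;
    fn add(&self, other: &Self) -> Self;
    fn mul(&self, other: &Self) -> Self;
    fn inverse(&self) -> Option<Self>; // None for zero
    fn num_bits() -> u32; // modulus size, useful for security checks
}

// Toy instance over the integers mod 7, purely for illustration.
#[derive(Clone, PartialEq, Debug)]
struct Fp7(u64);

impl NoirField for Fp7 {
    fn zero() -> Self { Fp7(0) }
    fn one() -> Self { Fp7(1) }
    fn add(&self, o: &Self) -> Self { Fp7((self.0 + o.0) % 7) }
    fn mul(&self, o: &Self) -> Self { Fp7((self.0 * o.0) % 7) }
    fn inverse(&self) -> Option<Self> {
        // Brute-force search is fine for a 7-element toy field.
        (1..7).map(Fp7).find(|x| self.mul(x) == Fp7(1))
    }
    fn num_bits() -> u32 { 3 }
}
```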
Security Level
The security level of a group would also need to be documented in Noir, so that users can be warned when they are using a group which has < 128-bit security.
Some considerations about the implementation:
Question: Should Noir have implementations of all fields supported?
Compiling with implementations for all supported fields is the current idea; here are the advantages and disadvantages.
Advantages
- dyn will not need to be used.

Disadvantages
Since the NoirField trait can be implemented for the arkworks Field trait, we can immediately get all field implementations that arkworks uses. For newer fields, I believe that the surface area of the NoirField trait is small and generic enough that wrapping a new field is negligible effort.
Another idea would be for backends to supply their own field element implementation to plug into Noir; however, this encourages code duplication between backends and may lead to non-deterministic constraint systems over the same field, if the field element implementations differ.
Compiler aware Field
The compiler currently does not need to be aware of the chosen field implementation, since there is only one. It is, however, aware of the different backends, and we would use the same mechanism: the chosen field is passed as a parameter value in the package manager config file, and the package manager picks the corresponding data structure, which is then passed to the compiler.
Rough Action steps
- Add a supported_fields method which allows the backends to specify which of these fields they support. At this moment in time, there will only be one supported field, which is bn254.
- Extend supported_fields, adding the bls12_381 field as a viable field and testing whether it compiles.

Currently backends are being drawn in as Rust dependencies.
This is not ideal because the binary grows per dependency, even if you are not using it.
Instead, we should conditionally compile a backend, and if the user asks for a backend which is not installed, we ask them to use the package manager to recompile it.
All variables are mutable by default.
Add a mut
keyword so that variables are immutable unless explicitly marked mutable.
We have XOR and AND, which can be composed to form OR, but for uniformity we should also delegate OR to the backend.
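The composition in question is the boolean identity a | b = (a ^ b) ^ (a & b), which a quick exhaustive check confirms:

```rust
// OR expressed using only XOR and AND: a | b = (a ^ b) ^ (a & b).
// This is why only XOR and AND are strictly necessary as primitives.
fn or_from_xor_and(a: u32, b: u32) -> u32 {
    (a ^ b) ^ (a & b)
}
```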
For the context of range constraints:
Many gadgets may implicitly decompose their inputs. In order to avoid this potential constraint blow up, we should state that each gadget is given the decomposed input initially.
What problems arise from this?
If the underlying constraint system is not using base2, then this may additionally add more constraints.
There is currently no solution for this; it would be dangerous to conditionally compile the output of a black box function. The solution that Rust uses in this case is to annotate functions and conditionally compile for a specific target. This, AFAICT, is not needed just yet (ever?), as most gadgets have informally standardised inputs and outputs.
For example, we can foresee that all sha256 implementations will assume the input is bytes, and not base4.
Since most proving systems use an SRS/CRS, it would be useful to have a section on this.
This definition would also encapsulate the Pedersen generators used in Bulletproofs. Both Bulletproofs and PLONK use a trapdoor.
The differentiation between the CRS used in Bulletproofs and the SRS used in a system like PLONK is that in order to exploit the trapdoor in Bulletproofs you would need to break DLOG or some other hardness assumption, whereas in order to exploit the trapdoor in PLONK/KZG you would need to know some secret scalar, which in practice can be known if the trusted setup is compromised.
We regard the former proving system as using a transparent setup and the latter as using a trusted setup.
Program:
```
fn main(b : Field) {
    foo(b);
}

fn foo(a : u32) -> Field {
    a
}
```
This program fails because the foo function requires a u32 but a Field was given.
The error message given is:
```
error: expected type priv u32, found type priv Field
  ┌─ /src/main.nr:7:8
  │
7 │ fn foo(a : u32) -> Field {
  │        -

error: aborting due to 1 previous errors
```
The error should point to the call site and not the function declaration.
Taken from a conversation with Kobi Gurkan
Using cargo install to install the binary from the local path seems to trigger a method not implemented error in arkworks.
The workaround for this was to: