aleonet / snarkvm
A Virtual Machine for Zero-Knowledge Executions
Home Page: https://snarkvm.org
License: Apache License 2.0
As per this conversation:
https://github.com/AleoHQ/snarkVM/pull/177#discussion_r641337695
we should switch snarkos-testing in snarkos-integration from rev to version. This can be done after snarkos-testing is released in the next release of snarkOS.
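The change itself is a one-line edit in the Cargo.toml of snarkos-integration. The fragment below is purely illustrative; the revision hash and version number are placeholders, not the real values from the manifest:

```toml
# Hypothetical fragment of snarkos-integration/Cargo.toml.

# Before: pinned to a git revision (placeholder hash)
# snarkos-testing = { git = "https://github.com/AleoHQ/snarkOS", rev = "deadbeef" }

# After: depend on the published crates.io release (placeholder version)
snarkos-testing = "1.0"
```

Depending on a published version lets Cargo resolve the dependency from crates.io instead of a pinned git commit, which only becomes possible once the crate is actually released.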
Migrate nonnative into gadgets.
Pratyush mentioned that, to avoid any possible misuse, we can change the current instantiations of the Schnorr signature and other cryptographic primitives that should use twisted Edwards curves (probably including encryption) to "must" use twisted Edwards curves.
This allows us to safely keep only the x coordinate when serializing.
This is a pending item for testnet2.
In testnet2, the NoopProgram uses Marlin. The Marlin proof it uses intentionally turns off the "hiding" feature.
The "hiding" feature is turned off in IVLS because zero-knowledge is not needed there. In our application, however, it is needed. This requires some changes to the current code:
The native and gadget implementations of the Schnorr signature need to include the public key in the digest.
Every constraint system that implements the ConstraintSystem trait should be optimized either by constraints or by weight. This optimization will be labeled with an enum in each constraint system.
The current constraint system implementations are all constraint-optimized, but the incoming work in Marlin will require weight-optimized operations. Adding this base infrastructure will be necessary for that logic.
Each constraint system will have a new optimization_goal, and the ConstraintSystem trait will require two additional functions: optimization_goal and set_optimization_goal.
This proposal replaces the group encryption in DPC with ECIES Poseidon.
I did some analysis on pen and paper. Here is the execution plan.
Change EncryptedRecordCRH from Bowe-Hopwood Pedersen to Poseidon as well.
The affected values form dependency chains (each item is derived from the one after it):
candidate_encrypted_record_hash <= encrypted_record_hash_input <= candidate_encrypted_record_bytes
candidate_encrypted_record_gadget <= encryption_plaintext_gadget <= record_group_encoding_gadgets <= record_field_elements_gadgets
payload_elements <= payload_field_bits <= payload_bits
The PR will be somewhat independent from #301 and #300, since they do not interfere with the inner circuit, but it is better to do it after them, since the DPC trait has changed.
I have some code to test decrypting a transaction record. In this example I decided to modify the encrypted record slightly so that it would not successfully decrypt.
use snarkos_objects::AccountViewKey;
use snarkos_rpc_client::{SnarkOSRpcConn, methods::get_block_count};
use snarkos_dpc::{record_encryption::RecordEncryption, SystemParameters, instantiated::Components, EncryptedRecord};
use snarkos_utilities::bytes::FromBytes;
use std::str::FromStr;
pub fn decrypt_transaction_record() {
    let system_parameters = SystemParameters::load().unwrap();
    let account_view_key = AccountViewKey::<Components>::from_str("AViewKey1m8gvywHKHKfUzZiLiLoHedcdHEjKwo5TWo6efz8gK7wF").unwrap();

    let encrypted_transaction_bytes = hex::decode(
        "081e4a0d39c6e22d8b4a170af02311f4ba76b4ffba99811\
        0c6ae9854d288bb1b0df491df24446102ccee297d29ad376af\
        b46718ce9c9535bfa4596f831e8979807aa9a81ee5f25dbf9d\
        6046ce7012e976f6031419d535ffec255fc3e5e919ee00a8b2\
        7b4eaa21b83bffdc9cf417dd760352170915a292761ae4b33f\
        8604dd82601b345fdf4142ffc422387b06e05836ddae59e5bc\
        b8782e458e79de894be85980b2a984492dc4e5e3928b73e9c3\
        31ed6fe901f23204ce246c441d00be2f0b2b00573cf8e16d65\
        664416da025ce9d10179ae6c60a8ac3c94aef6d06adeab8c09\
        904eaeb83825f15894b84f394ed70e7b9ddb6dfacbeaa30bed\
        7063dcb9796766e125c00").unwrap();

    let encrypted_record = EncryptedRecord::read(&encrypted_transaction_bytes[..]).unwrap();
    let record = RecordEncryption::decrypt_record(&system_parameters, &account_view_key, &encrypted_record).unwrap();

    println!("record: {:?}", record);
}
I get the following panic:
thread 'snarkos::tests::test_decrypt_record' panicked at 'assertion failed: `(left == right)`
left: `QuadraticNonResidue`,
right: `QuadraticResidue`', ~/.cargo/registry/src/github.com-1ecc6299db9ec823/snarkos-algorithms-1.1.4/src/encoding/elligator2.rs:232:13
stack backtrace:
0: rust_begin_unwind
at /rustc/84b047bf64dfcfa12867781e9c23dfa4f2e6082c/library/std/src/panicking.rs:475
1: std::panicking::begin_panic_fmt
at /rustc/84b047bf64dfcfa12867781e9c23dfa4f2e6082c/library/std/src/panicking.rs:429
2: snarkos_algorithms::encoding::elligator2::Elligator2<P,G>::decode
at ~/.cargo/registry/src/github.com-1ecc6299db9ec823/snarkos-algorithms-1.1.4/src/encoding/elligator2.rs:232
3: snarkos_dpc::base_dpc::record::record_serializer::decode_from_group
at ~/.cargo/registry/src/github.com-1ecc6299db9ec823/snarkos-dpc-1.1.4/src/base_dpc/record/record_serializer.rs:50
4: <snarkos_dpc::base_dpc::record::record_serializer::RecordSerializer<C,P,G> as snarkos_models::dpc::record_serializer::RecordSerializerScheme>::deserialize
at ~/.cargo/registry/src/github.com-1ecc6299db9ec823/snarkos-dpc-1.1.4/src/base_dpc/record/record_serializer.rs:287
5: snarkos_dpc::base_dpc::record::record_encryption::RecordEncryption<C>::decrypt_record
at ~/.cargo/registry/src/github.com-1ecc6299db9ec823/snarkos-dpc-1.1.4/src/base_dpc/record/record_encryption.rs:156
6: voyager::snarkos::decrypt_transaction_record
I wouldn't expect the code to panic like this; I think it should return an error that can be handled.
snarkOS libraries 1.1.4
Rust 1.47.0-beta.2
Linux
Gadgets for field equality.
Support field equality in the Leo compiler.
Also support upcoming character equality in the Leo compiler, since characters will be represented as field elements (in the Unicode code point range).
At the moment, the equality gadget for integers does:
// for inputs A and B, each N bits wide
let temp = A[0] == B[0]
for i in 1..N {
    temp = and(temp, A[i] == B[i])
}
However, we can cut our constraint count in half when either A or B is a constant by doing:
let temp = 1
for i in (0..N).step_by(2) {
    temp = and(temp, and(A[i] == B[i], A[i + 1] == B[i + 1]))
}
This conceptually turns the bit comparisons into a two-layer map: the first layer does near-free constant + allocated XORs, and the second layer aggregates at half of our original constraint count. This should cut our constraint count for the optimized case in half, and not affect other cases.
Establish a standardized "dummy" predicate that will be used to determine if records are dummy (#343).
The dummy predicate will check that the record has a value of 0 and an empty payload.
In a setup phase, we may set up large parameters, such as for 2^28. This is essentially a committer key for the polynomial commitment.
However, most statements are small, so we want to enable a user to download only a subset of the committer key. This is technically doable; we just need the infrastructure for it.
The following test code always generates the state subtree root, except when the number of transactions is equal to
pub fn txids_to_roots(transaction_ids: &[[u8; 32]]) -> (MerkleRootHash, PedersenMerkleRootHash, Vec<[u8; 32]>) {
    let (root, subroots) = merkle_root_with_subroots(transaction_ids, MASKED_TREE_DEPTH);
    let mut merkle_root_bytes = [0u8; 32];
    merkle_root_bytes[..].copy_from_slice(&root);

    (
        MerkleRootHash(merkle_root_bytes),
        pedersen_merkle_root(&subroots),
        subroots,
    )
}

pub fn merkle_root_with_subroots(hashes: &[[u8; 32]], subroots_depth: usize) -> ([u8; 32], Vec<[u8; 32]>) {
    merkle_root_with_subroots_inner(hashes, &[], subroots_depth)
}

fn merkle_root_with_subroots_inner(
    hashes: &[[u8; 32]],
    subroots: &[[u8; 32]],
    subroots_depth: usize,
) -> ([u8; 32], Vec<[u8; 32]>) {
    if hashes.len() == 1 {
        // Tree was too shallow.
        let root = hashes[0];
        let subroots = if subroots.is_empty() {
            vec![root]
        } else {
            subroots.to_vec()
        };
        return (root, subroots);
    }

    let result = merkle_round(hashes);
    if result.len() == 1 << subroots_depth {
        merkle_root_with_subroots_inner(&result, &result, subroots_depth)
    } else {
        merkle_root_with_subroots_inner(&result, subroots, subroots_depth)
    }
}
#[test]
#[allow(deprecated)]
fn test_posw_gm17() {
    let rng = &mut XorShiftRng::seed_from_u64(1234567);

    // PoSW instantiated over BLS12-377 with GM17.
    pub type PoswGM17 = Posw<GM17<Bls12_377>, Bls12_377>;

    // run the trusted setup
    let posw = PoswGM17::setup(rng).unwrap();

    // super low difficulty so we find a solution immediately
    let difficulty_target = 0xFFFF_FFFF_FFFF_FFFF_u64;

    // let transaction_ids = vec![[1u8; 32]; 8];
    let transaction_ids = vec![[1u8; 32]; 9];
    let (_, pedersen_merkle_root, subroots) = txids_to_roots(&transaction_ids);

    // generate the proof
    let (nonce, proof) = posw
        .mine(&subroots, difficulty_target, &mut rand::thread_rng(), std::u32::MAX)
        .unwrap();
    assert_eq!(proof.len(), 193); // NOTE: GM17 compressed serialization

    let proof = <GM17<Bls12_377> as SNARK>::Proof::read(&proof[..]).unwrap();
    posw.verify(nonce, &proof, &pedersen_merkle_root).unwrap();
}
In the aleo-setup repository, in order to complete the migration from zexe algebra to snarkVM algebra, we need to extend the FFT infrastructure to work with group elements, as was done in the following PR.
In the replaced-zexe-algebra-with-snark-vm branch of the aleo-setup repository, the FFT receives as input a set of elliptic curve points. Therefore, this branch doesn't build.
To extend the FFT infrastructure, it is necessary to:
Implement Deserialize for AleoAmount. It could also be made a transparent serialization to the underlying i64 type. It would also be good to have an unsigned amount type which shares code and provides safe conversion methods. Also add a Display implementation which formats the amount with the expected decimal places, and which can read a string back in this format, so third parties can make better use of this type.
I can make this contribution if it is considered a good idea.
Currently, converting a [u8]
into Vec<F>
uses byte-aligned indices to convert to field elements.
#[inline]
fn to_field_elements(&self) -> Result<Vec<F>, ConstraintFieldError> {
    let max_size = <F as PrimeField>::Parameters::CAPACITY / 8;
    let max_size = max_size as usize;
    let fes = self
        .chunks(max_size)
        .map(|chunk| {
            let mut chunk = chunk.to_vec();
            chunk.resize(max_size + 1, 0u8);
            F::read_le(chunk.as_slice())
        })
        .collect::<Result<Vec<_>, _>>()?;
    Ok(fes)
}
So if a field modulus is 383 bits, the current logic would occupy up to 376 bits and leave 6 bits unused. (We cannot use the 383rd bit, given the value could lie outside the modulus.)
Instead, one could bit-align up to the penultimate bit, to better use the available space (hacky pseudocode):
#[inline]
fn to_field_elements(&self) -> Result<Vec<F>, ConstraintFieldError> {
    let max_size = <F as PrimeField>::Parameters::CAPACITY;
    let fes = BitIteratorLE::new(self)
        .chunks(max_size)
        .map(|chunk| {
            F::from_repr(BigInteger::from_bititerator(chunk))
        })
        .collect::<Result<Vec<_>, _>>()?;
    Ok(fes)
}
Leo supports an address type but it is not allocated in the constraint system.
We need to support address equality for dp1 examples.
Implement AccountAddressGadget
Implement CondSelectGadget
for AccountAddressGadget
Implement ConditionalEqGadget
for AccountAddressGadget
When computing the root hash of a Merkle tree using PedersenCRH on Projective, the computed root hash doesn't match the root hash returned by the MerkleTree.
There are two tests with TODOs, currently marked #[ignore], which test for these failing cases; they can be found in snarkos-algorithms/src/merkle_tree/tests.rs following PR AleoHQ/snarkOS#273. In the same test file, you will see all other test cases are currently passing.
Make wasm-bindgen an optional dependency of toolkit.
It's a big dependency to bring in outside of the web.
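The usual Cargo pattern for this is an optional dependency behind a feature flag. The fragment below is a sketch against a hypothetical toolkit/Cargo.toml; the feature name and version are illustrative:

```toml
# Hypothetical toolkit/Cargo.toml fragment.
[dependencies]
wasm-bindgen = { version = "0.2", optional = true }

[features]
default = []
wasm = ["wasm-bindgen"]
```

Code that touches wasm-bindgen would then be gated with #[cfg(feature = "wasm")], so native consumers of toolkit never compile it.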
We currently implement a DPC struct that takes Components as input:
DPC implements DPCScheme<L: LedgerScheme>
Components implements DPCComponents
L: LedgerScheme is an associated item in all DPC instantiations
The result is that we must use DPC as DPC<Components> where L: LedgerScheme<{bind Components to LedgerScheme types}>.
We decouple LedgerScheme from DPCScheme, so that an instantiated Ledger is independent. This allows us to merge the DPCComponents associated items into DPCScheme, and to move the ledger-integrated methods in DPCScheme to LedgerScheme.
Finally, we can rename the current Components to DPC, and the current DPC to Ledger.
We implement a DPC struct that implements DPCScheme, with its associated types built in.
We implement a Ledger struct that implements LedgerScheme, with the ledger-integrated methods included.
It would be amazing if we could move the location where the parameters are stored to a fixed location. Right now when I start it, it points to
~/.cargo/registry/src/github.com-1ecc6299db9ec823/snarkvm-parameters-0.5.4/src/testnet1/
Could it be something like ~/.snarkvm/parameters?
That would help the work done with persistent storage. Right now the path changes with every release, so we cannot mount persistent storage at that path. We have to run it, figure out what the path will be, and build again with that directory as the mount point.
This is a follow-up of the PR.
There is another issue causing the marlin_test_nested_snark test to fail. It requires big-picture changes to fix completely: both the native code and the constraints must sort points by name.
snarkVM follows an older version of the poly-commit implementation, which sorts the points by value so that the gadget matches the native order.
https://github.com/AleoHQ/snarkVM/blob/master/polycommit/src/marlin_pc/gadgets/marlin_kzg10.rs#L155
However, this is indeed problematic. In fact, the upstream poly-commit now intentionally sorts the points by name, in both the gadget and the native version. As you can see in the native batch_open, it was changed accordingly.
https://github.com/arkworks-rs/poly-commit/blob/constraints/src/marlin/marlin_pc/constraints.rs#L957
https://github.com/arkworks-rs/poly-commit/blob/master/src/marlin/marlin_pc/mod.rs#L490
Why sort by name?
Because different proofs will have their points in different orders: sometimes alpha is smaller than beta, and in other proofs beta is smaller than alpha. Recall that our circuit needs to be data-independent (here, challenge-independent) to make sure it can verify different proofs.
Therefore, in arkworks-rs, we intentionally order by the explicit names "alpha" and "beta", so that the order is not sensitive to the specific values.
This also resolves Raymond's note in the to-do about sorting the points by value while having trouble handling the setup mode: the sort-by-name approach is designed specifically to avoid baking proof-specific data into the setup.
Create ID files for the inner circuit, outer circuit, noop program, and PoSW circuit.
Caching these IDs in parameters will significantly improve the speed of many operations, such as checking whether a record is a dummy record, which requires comparing a record's birth and death program IDs with the noop program ID.
testnet2
todos are as follows:
Core changes:
Ledger changes:
Transaction changes:
Record changes:
Cross referencing arkworks-rs/curves#60
Previously we used 11 as a generator, which has order (p-1)/35.
Now we use 22, which has the right order.
Fixed the two-adic root of unity in accordance with the new generator.
As described in https://github.com/AleoHQ/snarkOS/pull/219:
https://github.com/scipr-lab/zexe/ uses the Zero and One traits from the num_traits crate. Due to a trait resolution problem in Rust, the Add trait that the num_traits traits add to each implementer of Zero and One makes the current Add<&'a Self> unusable. While it seems like a good idea to do the same as Zexe, it requires wide code-base changes and so I'll leave that for a future PR. The traits introduced here are a version of the num_traits traits without the additional Add constraint.
In preparing for production, we need to switch the generators and parameters for a number of (snarkvm-)algorithms to a hash to curve implementation.
To do the switch, we will use a hash function such as Blake2s on a checkpointed string to compute a digest that is mapped to a corresponding field element, and used as the x-coordinate to derive the y-coordinate for an affine point on the curve.
This will involve adding an implementation of from_random_bytes, or equivalent, to our curve templates.
By making this update, an added benefit will be a simplification of our function signatures for higher-level objects, such as our account private key, view key, and address.
An error occurred during a connection to snarkvm.org: PR_END_OF_FILE_ERROR.
This is from Firefox; a similar error occurs in Chrome.
The link works with http (it redirects back to the repo) but not with https (currently points to).
Poseidon does not reason over bytes natively, and does not need to accrue this cost.
Following discussion with @Pratyush and @weikengchen, it is in our interest to deprecate the Group trait in favor of the AffineCurve and ProjectiveCurve traits, for performance and security.
Add record_commitment_root, program_registry_root, and datastore_root to the BlockHeader.
We probably want to consolidate this parameter and move it to an aleo.org address, so as not to expose where things are running.
https://github.com/AleoHQ/snarkVM/blob/master/parameters/src/testnet1/mod.rs#L23
https://github.com/AleoHQ/snarkVM/blob/master/parameters/src/testnet1/mod.rs#L49
https://github.com/AleoHQ/snarkVM/blob/master/parameters/src/testnet1/mod.rs#L65
This is a security concern, and it will also be beneficial when we start going multi-cloud or move behind a CDN, which is probably a good idea since the files are so large.
This is a copy of ProvableHQ/leo#1204. Basically, when we see an error, we do not know where it comes from. For example, AssignmentMissing may cause a panic at the specific level where an unwrap is done, but the actual error is passed up layer after layer, and what is most useful is the bottom layer.
Honestly, I still don't know how to implement this, so I will look at what happens in Leo.
TBD, waiting for Leo's move.
The current address scheme only supports a signature scheme that uses an Account Private Key to sign and a Signature Public Key to verify Schnorr signatures. We need to add functionality to sign messages with the Account View Key and verify the signatures with the Account Address.
The motivation for this feature is to allow users to sign messages that can be verified with a user's public address.
The Account View Key and Account Address are a GroupEncryption scheme private and public key pair. The GroupEncryption parameters share the same sized generator_powers field; however, the GroupEncryption scheme is missing a salt field.
A simple solution is to add an additional salt field that will only be used for message signing. The GroupEncryption scheme will then also implement the SignatureScheme trait and call into the Schnorr signature algorithms directly. This will not require new parameter generation (address derivation will remain unchanged), because we are just adding an additional 32 bytes of randomness to the end of the current EncryptionScheme parameters.
This is an additional step towards allowing users to prove ownership of the private key associated with a particular address.
testnet1 currently has 10 seconds baked in as its block target time; however, as the network is producing blocks at 30 seconds, we have never truly tested difficulty adjustments in production, only in development.
For testnet2, we should set the difficulty target time to 20 seconds, and add sanity checks that these updates occur correctly in the block header and consensus protocol.
In snarkOS, we explicitly derive the inner circuit ID in the codebase. Given that testnet2 introduces a new approach to deriving this, and that we may want to change it in the future (potentially unlinking it directly from the VK), we should provide a method to get the inner circuit ID as defined by the DPC instantiation itself.
Update PoSW to use the same Marlin universal parameters as DPC.
In testnet1, as PoSW was the only Marlin circuit, it was fine to specialize.
Given testnet2 is Marlin based, we should be using a common URS.
Since BlockHeader implements Serde's traits, I believe there should be no need to also hand-write both the manual implementations that copy from slices and the ones using the FromBytes / ToBytes traits.
While we have PrivateKey, ViewKey, and Address, we have not implemented ProvingKey and its account derivations.
pub static PROVING_KEY_PREFIX: [u8; 10] = [109, 249, 98, 224, 36, 15, 213, 187, 79, 190]; // AProvingKey1
Are you willing to open a pull request? (See CONTRIBUTING)
Yes
./snarkVM/parameters/src/macros.rs
macro_rules! impl_params_remote {
    ($name: ident, $remote_url: tt, $local_dir: expr, $fname: tt, $size: tt) => {
        pub struct $name;

        impl crate::traits::Parameter for $name {
            const CHECKSUM: &'static str = include_str!(concat!($local_dir, $fname, ".checksum"));
            const SIZE: u64 = $size;

            fn load_bytes() -> Result<Vec<u8>, crate::errors::ParameterError> {
                // Compose the correct file path for the parameter file.
                let filename = Self::versioned_filename();
                let mut file_path = std::path::PathBuf::from(file!());
                file_path.pop();
                file_path.push($local_dir);
                file_path.push(&filename);

                // Compute the relative path.
                let relative_path = if file_path.strip_prefix("parameters").is_ok() {
                    file_path.strip_prefix("parameters")?
                } else {
                    &file_path
                };
                .....
The problem is this line:
    let mut file_path = std::path::PathBuf::from(file!());
The file!() macro expands at compile time, so file_path is always the build-machine path of macros.rs. If snarkOS is compiled on one computer and the binary is copied to another computer that does not have that path, it will always download the parameters from the remote URL.
For example: I compile snarkos on my local computer A, where file_path will be /home/lzj/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/snarkvm-parameters-0.4.0/src/params/. I then copy the snarkos binary to another computer B that does not have the path /home/lzj, which leads to downloading the parameters from the remote URL:
WARNING - "inner_snark_pk-68eebd0.params" does not exist. snarkVM will download this file remotely and store it locally. Please ensure "inner_snark_pk-68eebd0.params" is stored in "/home/lzj/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/snarkvm-parameters-0.4.0/src/params/inner_snark_pk-68eebd0.params".
snarkvm_parameters::params - Downloading parameters...
^Carkvm_parameters::params - 11.93% complete (238 MB total)
ipfsmain@ipfsmain:~/aleo$ ls /home/lzj
ls: cannot access '/home/lzj': No such file or directory
snarkos-integration fails to compile.
To reproduce:
At the moment we have to use concrete parameters when sending them across threads, because associated types on parameters do not enforce Send + Sync. There is a workaround using a massive where clause.
The is_dummy and record_commitment attributes can be removed from the DPCRecord struct, because they can be derived from the other record attributes.
The is_dummy flag is true if:
The record_commitment will be derived when it is needed.
The improvements will soon be merged with arkworks.
In the meantime, a description of these improvements, which improve setup time by 3-4x, is available in:
celo-org/zexe#17
arkworks-rs/snark#296
celo-org/zexe#26
https://hackmd.io/@celo/S10UeWUBD
The improvements have also been merged into snark-setup. The CUDA scalar mul may be incorporated into snark-setup soon enough.
That is, we should not have a type parameter on Transaction or Block. This refactor is already deeply in progress, but it is blocked on testnet2 to avoid conflicts.
To achieve better amortization resistance, we should optimize the masked Pedersen hash.
Currently we use two-bit lookups for both coordinates of each generator and normal Edwards addition.
Main idea for optimization - use the Montgomery form of the Edwards curve, which would require us to use powers of the same generator for as long as we can (~50-70 powers). The main point we must take care of is that the identity point can't be added when using this form.
Remove is_dummy from the Record struct and add implicit inner circuit enforcement. We can also forgo encoding and encrypting it into encrypted records. Note that this doesn't change the encrypted record size; however, it should ever so slightly decrease the constraint count of the inner circuit. On the other hand, the added enforcement of is_dummy is accrued in the inner circuit (but we should be doing this check anyway).
Update to use snarkos-integration properly.
Once a new snarkOS release is cut, update it as per the note in PR #140.
snarkos-toolkit's PrivateKey type prints out what I presume to be sensitive information in its Display trait implementation, e.g.:
AKey1NASMyhNcGYNHnRjABiBWpUxiFrq78UXmuZxebxJa5dL7aY9FQLNKyxVHeVmSDTNcdSucwh6VUTkwQ1LfTCD7eTtkdo3mByneAgNvxGurwK1kqqk4MRRHYH8pKLBxzNE6JXcm3szZP85andaMwNiWeiy6PY3L4XNP8MN5A3VfgEFhQN3
It also prints out what I presume to be sensitive information in its Debug implementation:
PrivateKey { private_key: AccountPrivateKey { sk_sig: Fp256(BigInteger256([15670022261322161260, 16113987832349911964, 18411568504641040723, 237711639907821057])), sk_prf: [2, 59, 216, 60, 28, 37, 81, 55, 114, 128, 111, 43, 31, 210, 159, 125, 67, 12, 97, 31, 215, 117, 101, 98, 192, 105, 75, 110, 5, 231, 206, 32], metadata: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], r_pk: Fp256(BigInteger256([14390026911485217211, 6606126054640334358, 13996801153520813382, 323067478123723715])) } }.
This could be a security problem if any of these values ends up in a log file or bug report somewhere by accident.
Code snippet to reproduce:
let private_key = PrivateKey::new(None, &mut rng).unwrap();
println!("{}", private_key);
println!("{:?}", private_key);
It's probably better to implement these traits manually: print **PRIVATE** for the Display trait, and maybe just an opaque PrivateKey for Debug?
I encountered this issue while working on https://github.com/aleohq/voyager/, which consumes the snarkos-toolkit crate.