
rust-multihash's Introduction

rust-multihash


multihash implementation in Rust.

Install

First, add this to your Cargo.toml:

[dependencies]
multihash = "*"

Then run cargo build.

MSRV

The minimum supported Rust version for this library is 1.64.0. This is only guaranteed when no additional features are activated.

Usage

The multihash crate exposes a basic data structure for encoding and decoding multihashes. It does not provide any hashing functionality itself. Multihash uses const generics to define the internal buffer size; set it to the maximum digest size you want to support.

use multihash::Multihash;

const SHA2_256: u64 = 0x12;

fn main() {
    let hash = Multihash::<64>::wrap(SHA2_256, b"my digest");
    println!("{:?}", hash);
}

Using a custom code table

You can derive your own application-specific code table using the multihash-derive crate. The multihash-codetable crate provides predefined hasher implementations if you don't want to implement your own.

use multihash_derive::MultihashDigest;

#[derive(Clone, Copy, Debug, Eq, MultihashDigest, PartialEq)]
#[mh(alloc_size = 64)]
pub enum Code {
    #[mh(code = 0x01, hasher = multihash_codetable::Sha2_256)]
    Foo,
    #[mh(code = 0x02, hasher = multihash_codetable::Sha2_512)]
    Bar,
}

fn main() {
    let hash = Code::Foo.digest(b"my hash");
    println!("{:02x?}", hash);
}

Supported Hash Types

  • SHA1
  • SHA2-256
  • SHA2-512
  • SHA3/Keccak
  • Blake2b-256/Blake2b-512/Blake2s-128/Blake2s-256
  • Blake3
  • Strobe

Maintainers

Captain: @dignifiedquire.

Contribute

Contributions welcome. Please check out the issues.

Check out our contributing document for more information on how we work, and about contributing in general. Please be aware that all interactions related to multiformats are subject to the IPFS Code of Conduct.

Small note: If editing the README, please conform to the standard-readme specification.

License

MIT

rust-multihash's People

Contributors

anderssorby, austinabell, bishopcheckmate, boneyard93501, briansmith, crackcomm, dependabot[bot], dignifiedquire, dvc94ch, eminence, galargh, iamjpotts, koushiro, kubuxu, michaelsproul, mriise, richardlitt, rklaehn, romanb, samuelburnham, stebalien, sunny-g, thomaseizinger, tomaka, tomusdrw, twittner, tyshko5, vmx, web-flow, zen3ger


rust-multihash's Issues

Multihashes can be formed with incorrect sizes

If a multihash is read from bytes, there are no checks that the code and the size read actually match, so something like the pseudocode below can be read from bytes and sent along as if it were correct.

Multihash {
    code: SHA2_512,
    size: 10,
    digest: [0; 10],
}

along with some other variations.
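For illustration, a minimal sketch of the problem against the current const-generic API; the 0x13 code for SHA2-512 is taken from the multicodec table:

use multihash::Multihash;

const SHA2_512: u64 = 0x13;

fn main() {
    // A 10-byte "digest" is accepted even though a real SHA2-512 digest is 64 bytes long.
    let bogus = Multihash::<64>::wrap(SHA2_512, &[0u8; 10]).unwrap();
    // Round-tripping through bytes also succeeds; no code/size consistency check happens.
    let decoded = Multihash::<64>::from_bytes(&bogus.to_bytes()).unwrap();
    assert_eq!(decoded.size(), 10);
}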

Consider removing `MultihashDigest` and `Hasher` traits

Fundamentally, traits are meant for abstractions. If we don't create abstractions, the traits are not needed. As part of working on #272, it became apparent that the MultihashDigest and Hasher traits are never used for abstractions, only to enforce an interface.

We could change the multihash-derive crate from a custom derive to a proc-macro that just creates an impl block for us, based on API conventions of the various hashers.

If we want to stick to the traits, it would make more sense to develop a generic Codetable component that abstracts over the Hasher trait.

Heap allocation for large hashes

An implementation using https://docs.rs/smallvec/1.6.1/smallvec/struct.SmallVec.html would allow us to keep stack-allocating for smaller sizes, but easily spill onto the heap if the hash is bigger than the specified allocated size.

The question is whether alloc_size would still be a hard limit with this, or just the allocated stack size, with usize being the only limit on the heap.

This should be close to a drop-in replacement for the array we currently use to store the digest in Multihash, without affecting the end-user API.
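A rough sketch of the idea, not the crate's actual internals; the 32-byte inline size is an arbitrary choice:

use smallvec::SmallVec;

struct Multihash {
    code: u64,
    // Digests up to 32 bytes stay on the stack; larger ones spill onto the heap.
    digest: SmallVec<[u8; 32]>,
}

impl Multihash {
    fn wrap(code: u64, input_digest: &[u8]) -> Self {
        Multihash { code, digest: SmallVec::from_slice(input_digest) }
    }

    fn digest(&self) -> &[u8] {
        &self.digest
    }
}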

Wrapping a static array

fn wrap_arr<const S: usize>(digest: [u8; S]) -> Self::<S+SIZE_OF_PREFIX>

or something along those lines...
I have more reading to do following const generic evaluations.

Let's make fewer breaking changes

In the same spirit as multiformats/rust-multiaddr#71, I want to raise awareness about how disruptive the breaking changes in this fundamental library are for the ecosystem.

I have a few suggestions:

  • Error could be made non_exhaustive or completely opaque. The latter would be my personal preference as it gives you more flexibility as a library author.
  • Move MultihashDigest & Code to a separate crate. All we need in rust-multiaddr is the Multihash type which acts as a type-safe representation for a multihash. The machinery around MultihashDigest, the actual hash implementations etc. could all live in a separate crate. This would reduce the API surface drastically and allow multihash to remain more stable (and almost dependency-free).
  • Code uses conditional enum variants. That is not ideal. Features are additive across a dependency tree in Rust. Conditional enum variants can turn an exhaustive match in a library into a non-exhaustive one simply because the end-user activates more features of multihash.

Curious to hear your thoughts. (And happy to send PRs if we agree on something!)

Remove `Code::Identity` support

  1. The feature is technically opt in, but features are unified so the entire program will be "opted in" to supporting identity hashes if any component needs to support them.
  2. It's ridiculously easy to end up panicking based on user inputs.

Possible solutions:

  1. Make Code::digest return a result. This is likely a non-starter due to mass breakage and the number of unwraps it would introduce.
  2. Make Cid dynamically allocated. Also a massive refactor, and a non-starter.
  3. Remove it from the default Code.

Honestly, I think the best bet is to remove the feature, and maybe export an IDENTITY_CODE constant, or even an identity_hash free function that returns a result. My reasoning is that the current identity "code" is pretty much unusable without special casing anyway, and is a massive foot-gun.
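A hedged sketch of what such a free function could look like; the constant name and signature are suggestions, not an existing API:

use multihash::Multihash;

pub const IDENTITY_CODE: u64 = 0x00;

// Returns an error instead of panicking when the input does not fit the S-byte buffer.
pub fn identity_hash<const S: usize>(input: &[u8]) -> Result<Multihash<S>, multihash::Error> {
    Multihash::wrap(IDENTITY_CODE, input)
}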

Provide quickcheck arbitraries

This would have to be behind an optional and off-by-default feature flag so you don't pay the price of the quickcheck dependency if you do not need it.

But arbitraries would be quite helpful for property based testing of more complex structures that use multihashes.
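A sketch of what the feature-gated impl could look like inside the crate; the 64-byte buffer and the way the code and digest are generated are arbitrary choices:

use quickcheck::{Arbitrary, Gen};

impl Arbitrary for Multihash<64> {
    fn arbitrary(g: &mut Gen) -> Self {
        let code = u64::arbitrary(g);
        let data = Vec::<u8>::arbitrary(g);
        // Truncate so the digest always fits into the 64-byte buffer.
        let len = data.len().min(64);
        Multihash::wrap(code, &data[..len]).expect("digest fits into the buffer")
    }
}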

Make CC dependency a (default?) feature

v0.11.3 caused some difficulties for us because, on some targets, the blake3 dependency uses cc to compile intrinsics.

We maintain a complex cross-platform project that we build for Windows/macOS/Linux/Android/iOS where all C/C++ dependencies are prebuilt and packaged through conan.io packages with strict compilation options. With this in mind, we try to enforce builds where the "cc" crate is never called because of the lack of control over the build process and compilation options. In this specific case, we got an unexpected and sudden build break in our CI (blake3 is using linker options that are incompatible with ours on Windows x86).

For now, I've pinned 0.11.2 and we can always patch multihash ourselves. However, it would be really nice if there was some possibility to just control the hash algorithms available via features. We don't need blake3 support, and being able to disable it would be more "environmentally friendly". As it stands right now, a C/C++ application built with MSVC for x86 and using /MT won't be able to compile with multihash as a dependency.

Possibly related to #32

No way to `encode` a digest directly?

Problem

If you have a hash digest digest: &[u8] and you want to encode that into multihash bytes, you're out of luck if you're using rust-multihash currently. This is because encode expects un-hashed data as input. Instead, you must do something like this manually:

    let hash_alg = multihash::Hash::SHA1;
    let mut mh = Vec::with_capacity(digest.len() + 2);
    mh.push(hash_alg.code());
    mh.push(hash_alg.size());
    mh.extend_from_slice(digest);

In contrast, go-multihash's Encode function does not hash your data for you, so you can directly pass in a digest.

Potential solutions

This could be remedied with a new encoding function that expects a hash digest, maybe something named encode_digest? However, perhaps we should consider changing the way encode works. It would be a breaking change, but it would bring it in-line with how go-multihash works.
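For reference, newer versions of the crate cover this case with wrap(), which takes a pre-computed digest rather than un-hashed data. A minimal sketch; the 0x11 SHA-1 code is from the multicodec table:

use multihash::Multihash;

const SHA1: u64 = 0x11;

fn encode_digest(digest: &[u8]) -> Result<Vec<u8>, multihash::Error> {
    // The 20-byte buffer matches the SHA-1 digest length.
    Ok(Multihash::<20>::wrap(SHA1, digest)?.to_bytes())
}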

Panic on size being smaller than default

I was looking through the source code and found this for the Blake3Hasher.finalize() method.

let digest_out = &mut self.digest[..digest_bytes.len().max(S)];

This will always panic if S is less than 32 and the same line of code appears for other hasher implementations too. The fix would be to compute the minimum length and use that to create the byte slices.

let digest = self.state.finalize();
let len = digest.as_bytes().len().min(S);
let digest_bytes = &digest.as_bytes()[..len];
let digest_out = &mut self.digest[..len];
digest_out.copy_from_slice(digest_bytes);
digest_out

Add getter for inner array

We have fn digest(&self) -> &[u8], though it would be helpful to also have fn digest_arr(&self) -> &[u8; S] for Multihash:

fn digest_arr(&self) -> &[u8; S] {
    &self.digest
}

Fixing warn no-run -> no_run breaks BoxedMultihashDigest

I can't say I'm too experienced with rust, but while playing around with this library I saw the warning:
warning: unknown attribute `no-run`. Did you mean `no_run`? --> src\digests.rs:20:1
and decided to fix it, which then caused cargo test to fail at src\digests.rs:24 BoxedMultihashDigest.

Should code be added to the doc to fix compiler errors, or should ignore or compile_fail flags be used instead of no_run?

No elegant way to stream-hash given a hash code

Hi,

In previous multihash versions we were able to compute the digest in a streaming manner using MultihashDigest::input, and it was possible to get a boxed MultihashDigest given a multihash.
I currently see no way of doing the same, which is an issue in some use cases.

For example, I need to validate a digest computed from a file. Since the file can be big, I want to use the new StatefulHasher trait. However, I found no way to get a trait object.

Here's my code:

pub fn validate_file_checksum(expected_digest: &str, file_path: &Path) -> std::io::Result<bool> {
    let (_, hash_data) = multibase::decode(expected_digest).map_err(|e| Error::new(ErrorKind::InvalidInput, e))?;
    let expected_digest = Multihash::from_bytes(&hash_data).map_err(|e| Error::new(ErrorKind::InvalidInput, e))?;
    let hash_code = multihash::Code::try_from(expected_digest.code()).map_err(|e| Error::new(ErrorKind::InvalidInput, e))?;

    // FIXME: multihash new API is breaking this code for streaming hashing (checked for version 0.14)
    //
    //const BUF_SIZE: usize = 1024 * 128;
    //let file = File::open(file_path)?;
    //let mut reader = BufReader::with_capacity(BUF_SIZE, file);
    //
    //let hasher = todo!("get an appropriate trait object hasher given the hash code");
    //
    //loop {
    //    let length = {
    //        let buffer = reader.fill_buf()?;
    //        hasher.update(buffer);
    //        buffer.len()
    //    };
    //    if length == 0 {
    //        break;
    //    }
    //    reader.consume(length);
    //}
    //
    //let digest_found = hasher.finalize();
    //
    // So instead, we read the whole file in memory:

    let file_content = std::fs::read_to_string(file_path)?;
    let digest_found = hash_code.digest(file_content.as_bytes());

    Ok(expected_digest == digest_found)
}

If I overlooked something, please let me know!

Thank you
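For reference, a sketch of streaming hashing with a concrete hasher from the current crate split (at the time of this issue the trait was called StatefulHasher); picking the hasher dynamically from a multihash code would still require a match over the codes you care about:

use multihash_codetable::Sha2_256;
use multihash_derive::Hasher;
use std::fs::File;
use std::io::{BufRead, BufReader};
use std::path::Path;

fn sha2_256_of_file(path: &Path) -> std::io::Result<Vec<u8>> {
    let file = File::open(path)?;
    let mut reader = BufReader::with_capacity(1024 * 128, file);
    let mut hasher = Sha2_256::default();
    loop {
        let buffer = reader.fill_buf()?;
        if buffer.is_empty() {
            break;
        }
        hasher.update(buffer);
        let len = buffer.len();
        reader.consume(len);
    }
    Ok(hasher.finalize().to_vec())
}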

Provide an Ord instance

So a multihash and structs using a multihash, like rust-cid, can be used in ordered data structures.

In general, I prefer sorted data structures for more deterministic behaviour, unless the performance requirements absolutely mandate using a hashmap.
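Inside the crate this could simply be a derive; a manual sketch of the requested ordering, comparing the code first and then the digest bytes:

use std::cmp::Ordering;

impl<const S: usize> Ord for Multihash<S> {
    fn cmp(&self, other: &Self) -> Ordering {
        self.code()
            .cmp(&other.code())
            .then_with(|| self.digest().cmp(other.digest()))
    }
}

impl<const S: usize> PartialOrd for Multihash<S> {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}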

Should be a trait

This is really all we care about for libipld currently, but we can't do H::digest() or H::CODE on a type that implements MultihashDigest.

impl $name {
    #[doc = $code_doc]
    pub const CODE: Code = Code::$name;
    /// Hash some input and return the Multihash digest.
    pub fn digest(data: &[u8]) -> Multihash {
        let digest = <$type>::digest(&data);
        wrap(Self::CODE, &digest)
    }
}

thread 'main' panicked at 'Should not occur as multihash is known to be valid'

Hey, I'm processing a large number of CIDs and stumbled upon this:

thread 'main' panicked at 'Should not occur as multihash is known to be valid', <::std::macros::panic macros>:2:4
stack backtrace:
   0:     0x555f14b962e4 - backtrace::backtrace::libunwind::trace::h5d52ba5f20882f09
                               at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.44/src/backtrace/libunwind.rs:86
   1:     0x555f14b962e4 - backtrace::backtrace::trace_unsynchronized::hceee092869668a74
                               at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.44/src/backtrace/mod.rs:66
   2:     0x555f14b962e4 - std::sys_common::backtrace::_print_fmt::ha312c2904605e4d5
                               at src/libstd/sys_common/backtrace.rs:78
   3:     0x555f14b962e4 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h5b9981092140b727
                               at src/libstd/sys_common/backtrace.rs:59
   4:     0x555f14baf9dc - core::fmt::write::h5f6d7d8de88b4173
                               at src/libcore/fmt/mod.rs:1063
   5:     0x555f14b945d3 - std::io::Write::write_fmt::h893169117de3cc15
                               at src/libstd/io/mod.rs:1426
   6:     0x555f14b988c5 - std::sys_common::backtrace::_print::h8ab61d4120f7a335
                               at src/libstd/sys_common/backtrace.rs:62
   7:     0x555f14b988c5 - std::sys_common::backtrace::print::h8aae19fbb153bf2a
                               at src/libstd/sys_common/backtrace.rs:49
   8:     0x555f14b988c5 - std::panicking::default_hook::{{closure}}::h1ee5b7d8b6f83429
                               at src/libstd/panicking.rs:204
   9:     0x555f14b98612 - std::panicking::default_hook::hd6c32c13403f9210
                               at src/libstd/panicking.rs:224
  10:     0x555f14b98ed2 - std::panicking::rust_panic_with_hook::h1f2449d529a25f22
                               at src/libstd/panicking.rs:470
  11:     0x555f14b619be - std::panicking::begin_panic::h6e070b70aef8705c
  12:     0x555f14b67439 - multihash::digests::MultihashRefGeneric<T>::algorithm::{{closure}}::h8663a4ee3426df84
  13:     0x555f14b67412 - multihash::digests::MultihashRefGeneric<T>::algorithm::h6d7c35d9ad6e23d5
  14:     0x555f14b648b0 - cid_decode::main::h988a09d69d682991
  15:     0x555f14b618e7 - std::rt::lang_start::{{closure}}::h4bae47bd4b36cdd9
  16:     0x555f14b98993 - std::rt::lang_start_internal::{{closure}}::h9a4aa16acf1cdc99
                               at src/libstd/rt.rs:52
  17:     0x555f14b98993 - std::panicking::try::do_call::h0b6fc9f6090c1e2b
                               at src/libstd/panicking.rs:303
  18:     0x555f14b9a757 - __rust_maybe_catch_panic
                               at src/libpanic_unwind/lib.rs:86
  19:     0x555f14b993ec - std::panicking::try::h9eaeeaa81242ec77
                               at src/libstd/panicking.rs:281
  20:     0x555f14b993ec - std::panic::catch_unwind::h07d504c1b691e8fb
                               at src/libstd/panic.rs:394
  21:     0x555f14b993ec - std::rt::lang_start_internal::hcea4e704875ab132
                               at src/libstd/rt.rs:51
  22:     0x555f14b653d2 - main
  23:     0x7fd73e079b97 - __libc_start_main
  24:     0x555f14b6008a - _start
  25:                0x0 - <unknown>

This is the CID that triggers it:

bagyacvrah3bmneoui6zp2dlksr2cgrnpd42ran62wgvz5eacaaaaaaaaaaaa

This is the program:

fn main() -> Result<()>{
    let mut rdr = BufReader::new(io::stdin());
    let mut s = String::new();

    let mut results:HashMap<_,usize> = HashMap::new();

    while let Ok(n) = rdr.read_line(&mut s) {
        if n == 0 {
            results.into_iter().for_each(|(k,v)| println!("{},{}",k,v));
            return Ok(())
        }

        debug!("working on {}",s.trim());

        let res = match do_single(s.trim()) {
            Err(_) => {"invalid".to_string()},
            Ok(m) => format!("{:?}:{:?}:{:?}:{:?}:{}",m.base,m.version,m.codec,m.hash,m.hash_len)
        };

        let entry = results.entry(res.clone()).or_default();
        *entry += 1;

        s.clear();
    }

    rdr.read_line(&mut s)?;

    Ok(())
}

#[derive(Debug,Clone)]
struct Metadata {
    base: multibase::Base,
    version: cid::Version,
    codec: cid::Codec,
    hash: multihash::Code,
    hash_len: usize,
}

fn do_single(s: &str) -> Result<Metadata> {
    let c = cid::Cid::try_from(s)?;
    if c.version() == cid::Version::V0 {
        return Ok(Metadata{
            base: multibase::Base::Base58Btc,
            version: c.version(),
            codec: c.codec(),
            hash: c.hash().algorithm(),
            hash_len: c.hash().digest().len()
        })
    }

    let (b, _) = multibase::decode(s.trim())?;

    Ok(Metadata{
        base: b,
        version: c.version(),
        codec: c.codec(),
        hash: c.hash().algorithm(),
        hash_len: c.hash().digest().len()
    })
}

Implement ripemd160

Code: 0x1053

This hash function is used in Ethereum, Bitcoin, etc. Specifically, I need support for this hash function in the FVM.
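A hedged sketch of a code table entry for RIPEMD-160 built with the derive from this repository; whether multihash_codetable ships a Ripemd160 hasher (behind a ripemd feature) is an assumption to verify:

use multihash_derive::MultihashDigest;

#[derive(Clone, Copy, Debug, Eq, MultihashDigest, PartialEq)]
#[mh(alloc_size = 20)]
pub enum Code {
    // 0x1053 is the code requested in this issue; the hasher path is an assumption.
    #[mh(code = 0x1053, hasher = multihash_codetable::Ripemd160)]
    Ripemd160,
}

fn main() {
    let hash = Code::Ripemd160.digest(b"my data");
    println!("{:02x?}", hash);
}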

Write docs for transition to new crate structure

Created as a result of #272 (review).

This issue is to collect what needs to be done before we can cut the first release:

Tasks

Maintain a CHANGELOG.md

As a downstream user, it would be very helpful to have a changelog for each release. A changelog would allow downstream users to quickly understand the impact of an update like multiformats/rust-multiaddr#63.

I don't have a strong opinion on a format, though I favor https://keepachangelog.com/ for the sake of consistency with other projects.

Would this be an option for this project as well?

Thanks for all the work here.
Max - Happy downstream user

Unsafe unwrap could cause an application to panic

return Ok(decode::u64(&b[..=i]).unwrap().0);

The unwrap() here can crash an application using the library if an error occurs. For chains using Substrate, this could cause critical-severity issues if an attacker is able to craft a malicious payload that triggers the unwrap.

A malicious payload that triggers the panic was attached as an image to the original issue.

Below is an example of a fix that could be used:

if decode::is_last(b[i]) {
    match decode::u64(&b[..=i]) {
        Ok(val) => return Ok(val.0),
        Err(_) => return Err(Error::VarIntDecodeError),
    }
}

Support arbitrary Blake3 sizes

In rust-multihash the digests are generic over their size. It should be possible to support arbitrarily sized Blake3 hashes. Currently the code will panic if you try a size different from 256 bits.
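For illustration, arbitrary-length Blake3 output can be produced through the XOF interface of the blake3 crate, independently of what rust-multihash itself exposes today. A minimal sketch:

fn blake3_digest(data: &[u8], len: usize) -> Vec<u8> {
    let mut hasher = blake3::Hasher::new();
    hasher.update(data);
    // finalize_xof() yields an extendable-output reader that can fill a buffer of any length.
    let mut out = vec![0u8; len];
    hasher.finalize_xof().fill(&mut out);
    out
}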

Question about maintenance status

I just stumbled across this crate after contributing to the multihash crate stored in libp2p/rust-libp2p. I know that the Multiformats project is part of the greater IPFS community, but so is libp2p. Both crates appear to have the same name and are developed by the same community, but are stored in separate places, and the latter seems to be more frequently updated. What is the status of this crate versus the other one? Are they both one and the same?

Provide way to produce truncated hashes

Multihash encodes the length of the hash separately from the hash itself. There is no way to produce e.g. a multihash that uses Sha2_256 but only stores 16 bytes of hash data.

There should be a method truncate that reduces the number of bytes of the hash that is stored. This is also needed to use multihash from rust-cid. The length parameter should be the number of hash payload bytes to retain.

Not sure what the behavior should be when truncate is called with a very high value. Fail / noop?
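A hedged sketch of the proposed method; clamping to the current size makes a too-large argument a no-op rather than an error:

impl<const S: usize> Multihash<S> {
    pub fn truncate(&self, size: u8) -> Self {
        // Keep at most the number of digest bytes that are already stored.
        let size = size.min(self.size());
        Multihash::wrap(self.code(), &self.digest()[..size as usize])
            .expect("the truncated digest is no larger than the original")
    }
}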

WriteHasher is unusable

multihash 0.13 introduced WriteHasher as a newtype wrapper to provide a generic implementation of Write for all hashers.
However, the crate provides no way to construct an instance of WriteHasher or obtain a digest from the hasher state.

Tracking issue: Polish and stabilize the API

The discussion for how we can stabilize the API of this crate takes place here: #259.

This issue is here to track our progress and link to relevant tasks.

Tasks:

  • #266
  • Make Error an opaque type
  • Export Multihash as Multihash instead of MultihashGeneric
  • Rename custom-derive to MultihashDigest: Derives are conventionally named after the trait they implement
  • Rename serde-codec feature to serde: Use package renames instead
  • Remove parity-scale-codec dependency? Has a large API surface and is already on major version 3. Doesn't seem trustworthy enough not to break our API again in the future.
  • Remove write_multihash and read_multihash from the public API: They are already exposed via Multihash::read and Multihash::write
  • Rename arb feature to quickcheck: More conventional

The items above are suggestions. Happy to debate all of them. The thinking was:

  • Reduce and harden API surface
  • Minimize dependencies
  • Follow ecosystem conventions

Overall, my suggestion would be to pack all of these into one breaking change which will hopefully be the last one for a long time.

Move to Multiformats?

Hey @dignifiedquire!

Thanks so much for this. As you may know, we recently created the Multiformats organization to be a home for all of the multiformats - multiaddr, multihash, etc. Would you be interested in moving this repository to that organization? You would still have admin rights on the repository, but it would be a part of a wider organization. This would mean more relevant eyes on it (most likely) and better cross-repository issue tracking. We'd also add a line mentioning you as the original author, and of course your commits would stay the same.

Of course, keeping it on your profile is also cool; we'll still link to this from the main multihash repository at github.com/multiformats/multihash.

Thanks for taking the time to read this. 👍

Tracking issue, here: multiformats/multiformats#4.

Remove the APIs for hashing and only do the encoding

While I support #25 and #26, wouldn't it be cleaner if this crate only handled Multihash itself and left the hashing logic to other crates? There are so many crates out there that can do hashing of all sorts, and one wouldn't want to include 2+ crates in the dependency graph for only one algorithm.

Right now this crate depends on other crates for SHA-1/2/3. Since the specification might even support an infinite number of hashing algorithms (I'm exaggerating), it is not ideal to let this crate depend on all of them.

Merging work from yatima-inc/sp-multihash

Hi, I talked briefly to @Stebalien about merging some of the work from our https://github.com/yatima-inc/sp-multihash no_std fork. Happy to send PRs, but wanted to ask a few questions first:

  1. Const-Generics: sp-multihash is on rust 1.54-nightly, so we were able to adapt @mriise's excellent #116. It looks like you guys are waiting on rust-lang/rust#44265 hitting stable Rust before merging #116. To me, the usability regression reported in this comment seems acceptable compared to the usability gains from not having to drag typenums around everywhere. Plus, const-generics also opens up some really exciting refactoring possibilities, like @Stebalien's neat idea for using DSTs to erase the size info here: lurk-lab/sp-cid#5. Since waiting some weeks/months for stable GATs seems not ideal, I'm wondering if we can figure out some way around this:
  • Is the stable requirement an absolute for this lib? Or maybe we could make a "nightly" rust-multihash branch/release so that development here isn't impacted by rustc as much?
  • Alternatively, is there a way to make the above usability regression acceptable to you, using only stable const-generics?
  2. ByteCursor: Our no_std work removes the std::io functions for reading/writing in favor of our bytecursor library. This can be added independently of Const-Generics.
  • Is the bytecursor dependency acceptable here? What about sp-std?
  • Do you want to keep the std::io functions gated behind the std feature via pragmas? If so, should std still be the default?
  3. Nix: We've added a Nix build to sp-multihash, and can PR (independently of the above) to add one here (preview: https://github.com/yatima-inc/rust-multihash/pull/1/files), if that's of interest.

Explore using inline data storage

The most common hashes are 32 bytes long. So with the additional data that gives 34 bytes. It seems inefficient to store such a small object on the heap with the associated overhead / pointer chasing / cache issues.

It would be possible to use an approach similar to https://crates.io/crates/smallvec and store hashes inline up to a certain size, and only store on the heap once the needed size is bigger.

Note that we would not want to use smallvec since we don't need all the metadata of a vec. The hash already knows how big it is, so what we would need would be just an enum like this:

enum Storage {
  Inline([u8; 39]), // any hash up to 39 bytes goes in here
  Heap(Arc<[u8]>), // the rare hash that is larger goes in here
}

This can be made pretty much opaque from the outside, like smallvec. Might not even need unsafe...
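A slightly extended sketch: in practice the inline variant would also need to record how many bytes are in use, so that both variants can hand out the same slice view:

use std::sync::Arc;

enum Storage {
    Inline(u8, [u8; 39]), // number of bytes in use + inline buffer
    Heap(Arc<[u8]>),
}

impl Storage {
    fn bytes(&self) -> &[u8] {
        match self {
            Storage::Inline(len, bytes) => &bytes[..*len as usize],
            Storage::Heap(bytes) => &bytes[..],
        }
    }
}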

Replace or Upgrade Tarpaulin Code Coverage

tarpaulin may be incompatible with newer versions of Node and cause CI failures.

#264 disables the code coverage step in CI.

I don't have a view as to what direction to take afterwards, but that probably depends on if/when tarpaulin #22 gets released.

Help: How to use serde(Serialize, Deserialize) on multihash?

I searched the previous issue (Make multihash serializable with serde #62) and found: "Multihash is now serializable with Serde (it's not released yet, but it's available on HEAD)."

I still don't know how to do it.

Could you provide some example code showing how to use serde with multihash?
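For anyone landing here later, a sketch of what this can look like once the crate's serde support is enabled in Cargo.toml, e.g. multihash = { version = "...", features = ["serde"] } (older releases called the feature serde-codec); serde_json is pulled in only for the example:

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Record {
    hash: multihash::Multihash<64>,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 0x12 is the SHA2-256 code; the zeroed digest is just placeholder data.
    let hash = multihash::Multihash::<64>::wrap(0x12, &[0u8; 32])?;
    let json = serde_json::to_string(&Record { hash })?;
    println!("{json}");
    Ok(())
}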

Blake2 support

I was wondering about what it would take to add blake2 support. Presently it looks like the codes for blake hashes are u16, so other than adding the types + hashing, I'd need to change this function signature and anything depending on it?

Anyway, this lib is something I really need, and I'm interested in helping out :)

Make multihash serializable with serde

Right now, we use serde serialization/deserialization for structures that contain libp2p::PeerId. In turn, PeerId contains two Multihashes, and, finally, Multihash is built on top of Storage. But the Storage enum is only pub(crate). Could you please make it pub?
Also, implementing the serde traits requires an explicit setter/getter for the storage field inside Multihash; can we add that as well? I can make a PR.

Thank you in advance.
