Cryptography-related format encoders/decoders: DER, PEM, PKCS, PKIX
The four main traits that comprise the decoding/encoding API have somewhat inconsistent names. To break them down along with their function:

These two traits represent the main end user-facing API for those who wish to decode/encode DER documents:

- `Decodable`
- `Encodable`

These two traits are primarily used by ASN.1 format implementers, and are emitted by `der_derive` when using the `Choice` or `Sequence` macros:

- `DecodeValue`
- `EncodeValue`

Unlike `Decodable`/`Encodable`, these traits are specifically for parsing the "value" portion of a tag-length-value production, and they receive the TLV `Header` (tag/length) as an argument. Having a separate trait for value decoding/encoding is important for implementing `IMPLICIT` context-specific productions, which may have arbitrary context-specific tags.

At the very least, the "-able" endings on `Decodable` and `Encodable` are inconsistent with the non-"able" endings used by `DecodeValue`/`EncodeValue`.
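To make the split concrete, here is a minimal, self-contained sketch of the two-level design (toy `Header` and error types; these are not the `der` crate's actual signatures, which involve readers and lifetimes):

```rust
// Sketch: user-facing Decodable handles the full TLV, while DecodeValue
// handles only the value bytes, given an already-parsed header.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Header {
    tag: u8,
    length: usize,
}

trait DecodeValue: Sized {
    /// Parse only the value portion, with the header already consumed.
    fn decode_value(header: Header, value: &[u8]) -> Result<Self, &'static str>;
}

trait Decodable: Sized {
    /// Parse a full tag-length-value production.
    fn decode(bytes: &[u8]) -> Result<Self, &'static str>;
}

/// Any type that can decode a value can decode a full TLV.
impl<T: DecodeValue> Decodable for T {
    fn decode(bytes: &[u8]) -> Result<Self, &'static str> {
        let (&tag, rest) = bytes.split_first().ok_or("empty input")?;
        let (&len, value) = rest.split_first().ok_or("missing length")?;
        let header = Header { tag, length: len as usize };
        T::decode_value(header, value.get(..header.length).ok_or("truncated")?)
    }
}

/// A toy BOOLEAN (tag 0x01) to exercise the traits.
#[derive(Debug, PartialEq)]
struct Bool(bool);

impl DecodeValue for Bool {
    fn decode_value(header: Header, value: &[u8]) -> Result<Self, &'static str> {
        if header.tag != 0x01 || value.len() != 1 {
            return Err("expected BOOLEAN");
        }
        Ok(Bool(value[0] != 0x00))
    }
}

fn main() {
    assert_eq!(Bool::decode(&[0x01, 0x01, 0xFF]), Ok(Bool(true)));
    assert_eq!(Bool::decode(&[0x01, 0x01, 0x00]), Ok(Bool(false)));
}
```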
In the `ssh-key` crate I've used a similar (but simplified) trait structure inspired by the `der` crate. It also has `Decode`/`Encode` traits, but additionally provides `Reader`/`Writer` traits.

These traits are impl'd for types like `pem_rfc7468::{Decoder, Encoder}`, allowing it to decode from/encode to PEM directly, without an intermediate step where data is first decoded from/encoded to a `Vec<u8>`. This makes it possible to use PEM encoding in heapless environments, which currently isn't possible with the `der` crate.
PEM decoding is a bit tricky: since the original buffer can't be referenced (as it's encoded as Base64), decoding only works for fully owned types, which happens to be what `ssh-key` provides. We don't currently have a way of bounding on such types (though that's trivial to add).

If we were to add this, type signatures would change as follows:

fn decode(decoder: &mut Decoder<'a>) -> Result<T>
=> fn decode(decoder: &mut impl Reader<'a>) -> Result<T>

fn encode(&self, encoder: &mut Encoder<'_>) -> Result<()>
=> fn encode(&self, encoder: &mut impl Writer) -> Result<()>

We'd also need to come up with new names for the current `struct Decoder`/`struct Encoder`, possibly something like `SliceDecoder` and `SliceEncoder`.
The encoder trait could also be impl'd for `sha2::Sha256`, which would allow on-the-fly computation of key fingerprints:

https://github.com/RustCrypto/formats/blob/d42432a3/ssh-key/src/encoder.rs#L107-L113

Presently the `spki` crate first encodes `SubjectPublicKeyInfo` to an intermediate buffer before computing a digest of that buffer:

https://github.com/RustCrypto/formats/blob/d42432a3/spki/src/spki.rs#L40-L43

Instead the DER serialization could be computed on the fly and fed directly into `Sha256` with no intermediate buffer, as in the `ssh-key` crate:

https://github.com/RustCrypto/formats/blob/d42432a3/ssh-key/src/fingerprint.rs#L125-L127
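The idea can be sketched with a toy `Writer` trait (names are illustrative, not the `ssh-key` crate's API, and a trivial checksum stands in for `sha2::Sha256`): the same encoder streams bytes either into a buffer or into a running digest, so fingerprinting needs no intermediate `Vec`.

```rust
// A Writer trait that encoders stream bytes into, impl'd both for a byte
// buffer and for a running "digest".

trait Writer {
    fn write(&mut self, bytes: &[u8]);
}

impl Writer for Vec<u8> {
    fn write(&mut self, bytes: &[u8]) {
        self.extend_from_slice(bytes);
    }
}

/// Stand-in for sha2::Sha256: a trivial additive checksum.
#[derive(Default)]
struct Checksum(u64);

impl Writer for Checksum {
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 = self.0.wrapping_add(b as u64);
        }
    }
}

/// Serialize a "public key" into any Writer (toy length-prefixed format).
fn encode_key(key: &[u8], out: &mut impl Writer) {
    out.write(&[key.len() as u8]);
    out.write(key);
}

fn main() {
    let key = [1u8, 2, 3];
    // Buffered path:
    let mut buf = Vec::new();
    encode_key(&key, &mut buf);
    assert_eq!(buf, vec![3, 1, 2, 3]);
    // Streaming digest path, no intermediate buffer:
    let mut digest = Checksum::default();
    encode_key(&key, &mut digest);
    assert_eq!(digest.0, 9);
}
```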
Several crates that implement ASN.1 DER-based formats in this repo define "document" types such as:

- `sec1::EcPrivateKeyDocument`
- `spki::PublicKeyDocument`
- `pkcs1::{RsaPrivateKeyDocument, RsaPublicKeyDocument}`
- `pkcs10::CertReqDocument`
- `pkcs8::{PrivateKeyDocument, EncryptedPrivateKeyDocument}`

These types are all newtype wrappers around a `Vec<u8>` containing a DER-encoded document which is guaranteed to infallibly parse to a particular message type which impls the `der` crate's `Decodable` and `Encodable` traits.

Where the `der` crate is designed around zero-copy borrowed types in order to accommodate heapless `no_std` targets, the "document" types provide an owned analogue of each message type, without the need to duplicate each struct definition so as to have both an owned and a borrowed equivalent of each type.

This is codified in the `der::Document` trait, which has an associated `Message` type incorporating the aforementioned bounds. The `der::Document::decode` method infallibly parses the contents of the `Vec<u8>` into the associated `Message` type, having already checked at instantiation time that it parses correctly.

The `der::Document` trait also wraps up common functionality for decoding PEM to DER and encoding DER to PEM, and provides utility functions for reading/writing documents from/to the filesystem. These are aware of whether or not the document is a secret, and do things like set file permissions appropriately.
It would be nice to not have to define separate document types for every message type, instead changing der::Document
from a trait into a struct which is generic around the inner message type.
This has been blocked on the problem of specifying the lifetime to associate with Decodable<'a>
. However, that might be resolvable by avoiding any bounds on the generic type in the struct definition, and hanging the lifetime off of the der::Document::decode
method instead. Something like:
use core::marker::PhantomData;

pub struct Document<M> {
    der: Vec<u8>,
    message: PhantomData<M>
}

impl<M> Document<M> {
    pub fn decode<'a>(&'a self) -> M
    where
        M: Decode<'a> + Encode + Sized
    {
        [...]
    }
}
The bounds will need to be repeated for every method on the struct.
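A runnable toy version of this shape (with a minimal `Decode` trait standing in for the `der` crate's, and the `Encode` bound dropped for brevity) shows that hanging the lifetime off the method does compile:

```rust
// Sketch: bounds live on the method, not the struct, so Document<M>
// itself needs no lifetime parameter.

use core::marker::PhantomData;

trait Decode<'a>: Sized {
    fn decode(der: &'a [u8]) -> Option<Self>;
}

pub struct Document<M> {
    der: Vec<u8>,
    message: PhantomData<M>,
}

impl<M> Document<M> {
    pub fn decode<'a>(&'a self) -> M
    where
        M: Decode<'a>,
    {
        // Assumed to have been validated when the Document was built.
        M::decode(&self.der).expect("pre-validated DER")
    }
}

/// Toy message type: "decodes" as the first byte.
impl<'a> Decode<'a> for u8 {
    fn decode(der: &'a [u8]) -> Option<Self> {
        der.first().copied()
    }
}

fn main() {
    let doc: Document<u8> = Document { der: vec![7, 8, 9], message: PhantomData };
    assert_eq!(doc.decode(), 7);
}
```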
Right now each call to `der::Document::decode` parses the message. This means the message must be parsed twice at a minimum: once to instantiate the document type (which validates that the DER parses successfully as a given type, then throws the result away), and once to actually access the parsed results.

It would be nice to store the `M: Decode<'a>` type along with the `Vec<u8>`, but this gets into the classical Rust problem of borrowing from self. The situation isn't that bad though: the unparsed DER document is stored on the heap as a `Vec<u8>`, which means that if the type acts as an immutable document accessor which never mutates that `Vec<u8>`, we don't need to worry about the address of its contents changing.

A number of crates have emerged which try to solve the self-borrowing problem. A popular one is `ouroboros`, which would allow us to do something like this:
use core::marker::PhantomData;
use ouroboros::self_referencing;
#[self_referencing]
pub struct Document<M> {
    der: Vec<u8>,
    #[borrows(der)]
    message_data: M,
    message_type: PhantomData<M>
}
This would allow `&M` to be accessed as a simple reference conversion: in fact, we could easily implement traits like `AsRef` and `Borrow` this way. We could also potentially implement `ToOwned` in such cases, which would serialize a given message type to its corresponding document type. That would be very nice.
The following decoder is used to parse PKCS#1 RSA private keys:
formats/pkcs1/src/private_key.rs
Lines 99 to 111 in 2898d39
The `version` field is unconstrained, allowing either the two-prime or multi-prime encoding, but the `otherPrimeInfos` field is ignored. This violates a MUST requirement in RFC 8017:
o otherPrimeInfos contains the information for the additional primes
r_3, ..., r_u, in order. It SHALL be omitted if version is 0 and
SHALL contain at least one instance of OtherPrimeInfo if version
is 1.
Either `version` must be constrained to equal `Version::TwoPrime` by this decoder (as I do in the `age` crate), or the `otherPrimeInfos` field must be conditionally parsed.
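The conditional rule from RFC 8017 can be sketched as a small validity check (illustrative types, not the `pkcs1` crate's API):

```rust
// otherPrimeInfos SHALL be omitted for version 0 (two-prime) and SHALL
// contain at least one OtherPrimeInfo for version 1 (multi-prime).

#[derive(Debug, PartialEq)]
enum Version {
    TwoPrime,   // version = 0
    MultiPrime, // version = 1
}

#[derive(Debug, PartialEq)]
struct OtherPrimeInfo; // placeholder for (r_i, d_i, t_i)

fn check_other_prime_infos(
    version: Version,
    infos: Option<&[OtherPrimeInfo]>,
) -> Result<(), &'static str> {
    match (version, infos) {
        (Version::TwoPrime, None) => Ok(()),
        (Version::TwoPrime, Some(_)) => Err("otherPrimeInfos SHALL be omitted if version is 0"),
        (Version::MultiPrime, Some(infos)) if !infos.is_empty() => Ok(()),
        (Version::MultiPrime, _) => {
            Err("otherPrimeInfos SHALL contain at least one OtherPrimeInfo if version is 1")
        }
    }
}

fn main() {
    assert!(check_other_prime_infos(Version::TwoPrime, None).is_ok());
    assert!(check_other_prime_infos(Version::TwoPrime, Some(&[OtherPrimeInfo])).is_err());
    assert!(check_other_prime_infos(Version::MultiPrime, Some(&[])).is_err());
    assert!(check_other_prime_infos(Version::MultiPrime, Some(&[OtherPrimeInfo])).is_ok());
}
```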
Implement PKCS#7 per RFC5652
Hey @franziskuskiefer question for you, how is the current encoding / decoding performance? If there are places you think we can help make improvements let us know.
Originally posted by @tomleavy in #250 (comment)
That's a good question. Before being able to answer it, we'd need to measure the performance and understand whether it's any good.

There's a very basic benchmark here: https://github.com/RustCrypto/formats/blob/4d0c0530be501e00dd1fbc25024db02eeffd669b/tls_codec/benches/tls_vec.rs. I wrote it to look at the performance impact of (de)serialising each element vs using the specialised ByteVec.
There are two things that I think are worth looking at.
New versions of all of the `der` crate's dependencies are now released, which means it's possible to do another release of `der`/`der_derive`.

Ideally I think we can get any prospective breaking changes into the forthcoming release, so the version in git can stay v0.6 as long as possible, as opposed to having to immediately bump to v0.7.0-pre.

Here's a list of changes that would be nice to get into a v0.6 release of the `der` crate:
RFC 7469 Section 2.4 defines an algorithm for computing a cryptographic hash (namely SHA-256) for X.509 SubjectPublicKeyInfo to be used as a fingerprint for the purposes of public key pinning.
We could potentially add an optional feature that pulls in `sha2` for the purposes of computing these fingerprints.
The trait method for deserializing PKCS#1-encoded RSA private keys is documented as accepting PEM:
Lines 29 to 39 in 2898d39
However, via the call graph we can see it actually uses the `pem-rfc7468` crate, which parses textual encodings that are a strict subset of PEM:

formats/pkcs1/src/document/private_key.rs, lines 56 to 57 and line 70 in 2898d39
In particular, this makes the trait impl incompatible with legacy passphrase-encrypted OpenSSH keys:
Unlike legacy PEM encoding [RFC1421], OpenPGP ASCII armor, and the
OpenSSH key file format, textual encoding does *not* define or permit
headers to be encoded alongside the data. Empty space can appear
between the pre-encapsulation boundary and the base64, but generators
SHOULD NOT emit any such spacing. (The provision for this empty
area is a throwback to PEM, which defined an "encapsulated header
portion".)
The documentation for `FromRsaPrivateKey::from_pkcs1_pem` should be updated to clarify that it only accepts RFC 7468 textual encodings, not arbitrary PEM.
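The incompatibility comes down to the `Key: Value` header lines that legacy PEM and OpenSSH keys may carry between the boundary and the base64, which RFC 7468 forbids. A simplified check (an illustration, not the `pem-rfc7468` crate's logic):

```rust
// Detect legacy-PEM-style headers (e.g. "Proc-Type:", "DEK-Info:") that an
// RFC 7468 parser will reject.

fn has_legacy_pem_headers(pem: &str) -> bool {
    pem.lines()
        .skip_while(|line| !line.starts_with("-----BEGIN"))
        .skip(1) // skip the pre-encapsulation boundary itself
        .take_while(|line| !line.starts_with("-----END"))
        .any(|line| line.contains(':'))
}

fn main() {
    let rfc7468 = "-----BEGIN RSA PRIVATE KEY-----\nAAAA\n-----END RSA PRIVATE KEY-----";
    let legacy = "-----BEGIN RSA PRIVATE KEY-----\nProc-Type: 4,ENCRYPTED\nDEK-Info: AES-128-CBC,ABCD\n\nAAAA\n-----END RSA PRIVATE KEY-----";
    assert!(!has_legacy_pem_headers(rfc7468));
    assert!(has_legacy_pem_headers(legacy));
}
```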
let pem = "-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VuBCIEIHBgJSkzrG56SpsOsmMsWgQKhyV624aaPszD0WtyTyZH
-----END PRIVATE KEY-----";
let key = pkcs8::PrivateKeyDocument::from_pem(pem).unwrap();

This panics with:

called `Result::unwrap()` on an `Err` value: Error { kind: UnexpectedTag { expected: Some(Tag(0x30: SEQUENCE)), actual: Tag(0x04: OCTET STRING) }, position: Some(Length(34)) }

The PEM key was generated using openssl:

openssl genpkey -algorithm x25519 > ed25519.pem

According to the docs (https://docs.rs/pkcs8/0.7.6/pkcs8/#supported-algorithms), Ed25519 is supported.
Right now the `pkcs8` crate has an optional `pkcs1` feature which adds a blanket impl of PKCS#8 support for all types which impl the `pkcs1` traits.

Unfortunately, with the blanket impl going in that direction, it isn't possible to have two of them, so it isn't possible to also use the same trick for `sec1`, at least in a way where features are additive.

It seems the best solution is probably to reverse the relationship, and add blanket impls of the `pkcs1` and `sec1` traits for types which impl the `pkcs8` traits. These can be further bounded by `TryFrom` impls for the respective ASN.1 DER types (e.g. `RsaPrivateKey`, `EcPrivateKey`).
https://datatracker.ietf.org/doc/html/rfc4514
The above RFC defines the syntax.
Can the `AlgorithmIdentifier` type be modified to implement the `DerOrd` trait so that a SET OF `AlgorithmIdentifier` is allowed?

Are there any other types in this crate that should also implement `DerOrd` so that SET OF can be used with them?
I ran into this issue trying to use base64ct in arti, a Tor implementation in Rust. https://gitlab.torproject.org/tpo/core/arti/-/issues/154
This simple crate demonstrates the issue: https://github.com/pastly/base64ct-poc. From its README:
The code uses two short base64-encoded strings to demonstrate the discrepancy in how base64ct handles the last character. The base64 crate is chosen as the point of comparison because it handles this issue correctly IMO.

The two strings:

- Mi, which is 001100 100010, or split "on the byte," 00110010 0010.
- Mg, which is 001100 100000, or split "on the byte," 00110010 0000.
The former contains trailing bits that are non-zero, thus shouldn't be ignored,
but are ignored by base64ct.
The latter has only zero-valued trailing bits, so the bits are safely ignored
by both base64 and base64ct.
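The strict check being asked for can be sketched directly: in a final 2-character base64 quantum only the top 4 bits of the second character carry data, so its low 4 bits must be zero ("Mg" is canonical, "Mi" is not).

```rust
// Decode a final 2-character base64 quantum, rejecting non-zero
// trailing bits.

fn b64_val(c: u8) -> Option<u8> {
    match c {
        b'A'..=b'Z' => Some(c - b'A'),
        b'a'..=b'z' => Some(c - b'a' + 26),
        b'0'..=b'9' => Some(c - b'0' + 52),
        b'+' => Some(62),
        b'/' => Some(63),
        _ => None,
    }
}

fn decode_final_pair(a: u8, b: u8) -> Result<u8, &'static str> {
    let (a, b) = (b64_val(a).ok_or("bad char")?, b64_val(b).ok_or("bad char")?);
    if b & 0x0F != 0 {
        return Err("non-zero trailing bits");
    }
    Ok((a << 2) | (b >> 4))
}

fn main() {
    assert_eq!(decode_final_pair(b'M', b'g'), Ok(b'2')); // "Mg" -> 0x32
    assert!(decode_final_pair(b'M', b'i').is_err());     // "Mi" has trailing 10
}
```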
I'm struggling with reading ed25519 keys from PEM files.
If I create a PEM file as follows
openssl genpkey -algorithm ed25519 > ed25519.pem
And try to load it with the following code:
let pkcs8_doc = PrivateKeyDocument::read_pkcs8_pem_file(path)?;
let pk_info = pkcs8_doc.decode();
let private_key = pk_info.private_key;
The length `private_key.len()` is 34 bytes, but it should actually be 32 bytes. If I throw away the first two bytes of `private_key`, everything works as expected.

Am I doing something wrong here, or is this a bug in the library? Thanks!
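The likely explanation (per RFC 8410) is that for Ed25519 the PKCS#8 `privateKey` field contains a nested DER OCTET STRING (`CurvePrivateKey`), so the 34 bytes are the header `0x04 0x20` followed by the 32-byte seed. A sketch of unwrapping it:

```rust
// Strip the inner OCTET STRING header from an RFC 8410 CurvePrivateKey.

fn unwrap_curve_private_key(private_key: &[u8]) -> Result<&[u8], &'static str> {
    match private_key {
        // 0x04 = OCTET STRING tag, 0x20 = 32-byte length
        [0x04, 0x20, seed @ ..] if seed.len() == 32 => Ok(seed),
        _ => Err("expected a 32-byte CurvePrivateKey OCTET STRING"),
    }
}

fn main() {
    let mut wrapped = vec![0x04, 0x20];
    wrapped.extend_from_slice(&[0xAB; 32]);
    assert_eq!(unwrap_curve_private_key(&wrapped), Ok(&[0xAB; 32][..]));
    assert!(unwrap_curve_private_key(&[0x04, 0x20, 0x00]).is_err());
}
```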
Hi there,
I'm seriously considering using your library to encode lots of f64
data (cf. https://github.com/anise-toolkit , and specifically this flatbuffer example: https://github.com/anise-toolkit/specs/blob/1-draft-0-0-1/ephemeris.fbs ).
I'm brand new to ASN.1, so please excuse novice questions. According to a copy of the specs I found here, https://www.oss.com/asn1/resources/books-whitepapers-pubs/larmouth-asn1-book.pdf, section 2.4 (page 83) describes a REAL type. The ASN.1 playground also uses that type in its default example: https://asn1.io/asn1playground/.

However, I don't see it in the docs as one of the supported types (https://docs.rs/der/latest/der/asn1/index.html). Is the REAL type not supported in DER encoding? Or has it not yet been implemented in this library because it isn't needed for the crypto algorithms?
Thanks
I think it's pretty common to have a single .pem
file that contains a bunch of public keys. Not sure what the best API for this is, maybe return the remaining string/byte slice on successful parse? I wrote a PEM encoder/decoder that follows this pattern, might be useful: https://gist.github.com/akhilles/1ebaa0a6f15e46d6b4414bdf7d8c8ced.
See this repo https://github.com/mishazharov/der-repro for a small test case to show the error
$ rustc --version
rustc 1.56.0 (09c42c458 2021-10-18)
$ cargo --version
cargo 1.56.0 (4ed5d137b 2021-10-04)
Kernel: 5.11.0-38-generic
Distro: #42~20.04.1-Ubuntu
Arch: x86_64
Run `wasm-pack build --no-typescript --release`.
Compiling der v0.4.4
error[E0277]: can't compare `usize` with `()`
--> /home/misha/.cargo/registry/src/github.com-1ecc6299db9ec823/der-0.4.4/src/encoder.rs:151:43
|
151 | if nested_encoder.finish()?.len() == length.try_into()? {
| ^^ no implementation for `usize == ()`
|
= help: the trait `PartialEq<()>` is not implemented for `usize`
error[E0277]: the trait bound `(): From<Length>` is not satisfied
--> /home/misha/.cargo/registry/src/github.com-1ecc6299db9ec823/der-0.4.4/src/encoder.rs:151:53
|
151 | if nested_encoder.finish()?.len() == length.try_into()? {
| ^^^^^^^^ the trait `From<Length>` is not implemented for `()`
|
= note: required because of the requirements on the impl of `Into<()>` for `Length`
note: required because of the requirements on the impl of `TryFrom<Length>` for `()`
--> /home/misha/.cargo/registry/src/github.com-1ecc6299db9ec823/der-0.4.4/src/asn1/null.rs:46:6
|
46 | impl TryFrom<Any<'_>> for () {
| ^^^^^^^^^^^^^^^^ ^^
= note: required because of the requirements on the impl of `TryInto<()>` for `Length`
For more information about this error, try `rustc --explain E0277`.
error: could not compile `der` due to 2 previous errors
Error: Compiling your crate to WebAssembly failed
Caused by: failed to execute `cargo build`: exited with exit status: 101
full command: "cargo" "build" "--lib" "--release" "--target" "wasm32-unknown-unknown"
I did some basic testing and found that patching the encoder with the following line appeared to resolve it on v0.4.4, but I didn't get the chance to test it on master.
In https://github.com/RustCrypto/formats/blob/master/der/src/encoder.rs#L159
Header::new(Tag::Sequence, length).and_then(|header| header.encode(self))?;
let mut nested_encoder = Encoder::new(self.reserve(length)?);
f(&mut nested_encoder)?;
- if nested_encoder.finish()?.len() == length.try_into()? {
+ let len: usize = length.try_into()?;
+ if nested_encoder.finish()?.len() == len {
Ok(())
} else {
self.error(ErrorKind::Length { tag: Tag::Sequence })
}
I wanted to get some thoughts before opening a PR though because this seems like a bandaid fix. Am happy to open a PR though if the maintainers think this is worth fixing and the fix is acceptable!
Edit: Going to open one anyway because it's easy
While usually more than one object is serialized in a byte buffer, there might be situations where you expect just one object from a byte buffer. In other words, there is currently no counterpart to `tls_serialized_detached()`.

The motivation behind this is that the TLS encoding implicitly guarantees that at least the required number of bytes indicated in the length prefix have been read, but you don't know if any trailing bytes are left after deserializing an object unless you try to deserialize another object or do a manual check.

I would propose something like `tls_deserialize_detached()` (or whatever other name is more suitable) that takes ownership of the `Read` and, after internally calling `tls_deserialize`, checks whether it can read one more byte; if so, it returns an error.
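The proposed behavior can be sketched as follows (`tls_deserialize_detached` is the issue's suggested name, not an existing API; a big-endian `u16` stands in for an arbitrary TLS object):

```rust
// Deserialize exactly one object, then fail if trailing bytes remain.

use std::io::Read;

fn read_u16_detached(mut reader: impl Read) -> Result<u16, &'static str> {
    let mut buf = [0u8; 2];
    reader.read_exact(&mut buf).map_err(|_| "truncated")?;
    let value = u16::from_be_bytes(buf);
    // The detached variant's extra check: nothing after the object.
    let mut probe = [0u8; 1];
    match reader.read(&mut probe) {
        Ok(0) => Ok(value),
        Ok(_) => Err("trailing bytes after deserialization"),
        Err(_) => Err("read error"),
    }
}

fn main() {
    assert_eq!(read_u16_detached(&[0x01, 0x02][..]), Ok(0x0102));
    assert!(read_u16_detached(&[0x01, 0x02, 0x03][..]).is_err());
}
```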
RFC 7468 states that parsers may handle other line sizes.

OpenSSH-encoded private keys use a line length of 70, but are otherwise compliant with RFC 7468. Supporting non-standard line sizes would allow `pem-rfc7468` to read these keys.

Unfortunately, handling non-standard sizes is likely to cause a performance regression for a couple of reasons; for example, 70 (or other values) may not be an exact base64 chunk. For those reasons, I propose additional decoder methods that are less strict:

- `decode_less_strict`
- `decode_vec_less_strict`
I noticed a possible issue with `x509::Extension`, but I haven't investigated it completely. Specifically, `Extension::critical` seems to not emit any DER for that field when serialized if the value of the field is false. This means that if you deserialize and then re-serialize an `Extension` that contains the field in the DER, you will not be able to reproduce the original DER. This might cause problems with signature validations. Perhaps `Option<bool>` might be the better choice here?
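The round-trip mismatch follows from DER's rule that a field equal to its DEFAULT (here, `critical DEFAULT FALSE`) must be omitted, so an input carrying an explicit FALSE is itself non-canonical. A sketch of the two models (toy encodings, not the `x509` crate's):

```rust
/// Canonical DER encoder: omit the field when it equals the default.
fn encode_critical(value: bool) -> Vec<u8> {
    if value {
        vec![0x01, 0x01, 0xFF] // DER BOOLEAN TRUE
    } else {
        Vec::new() // equals DEFAULT FALSE => omitted entirely
    }
}

/// Presence-preserving model the issue suggests: Option<bool> remembers
/// whether a (non-canonical) explicit FALSE appeared in the input.
fn decode_critical(der: &[u8]) -> Option<bool> {
    match der {
        [0x01, 0x01, 0x00, ..] => Some(false), // explicit FALSE in the input
        [0x01, 0x01, _, ..] => Some(true),
        _ => None, // field absent => DEFAULT FALSE
    }
}

fn main() {
    assert_eq!(encode_critical(true), vec![0x01, 0x01, 0xFF]);
    // A canonical encoder can't reproduce an input's explicit FALSE:
    assert_eq!(encode_critical(false), Vec::<u8>::new());
    // ...but Option<bool> can record that it was there.
    assert_eq!(decode_critical(&[0x01, 0x01, 0x00]), Some(false));
    assert_eq!(decode_critical(&[]), None);
}
```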
Prior discussion: https://github.com/iqlusioninc/yubikey.rs/pull/348/files/1dd3aa37596f0f60db597b17d77a101d53445ab4#r804122161
One thing that would be nice to have is a builder type for constructing an X.509 certificate which provides a higher-level API aimed at reducing choice and potential errors when constructing certificates.
It could take care of constructing the actual `TbsCertificate` type, as well as signing it and constructing the final `Certificate`. It could also own the data for the various fields, allowing the serialization types to borrow them, so it doesn't require a lifetime.

We could potentially use tooling like certlint and/or zlint to ensure that certificates generated by this builder follow best practices.
Currently there is no way to communicate that the `parameters` field in `spki::AlgorithmIdentifier` must not be specified. This is useful for e.g. Ed25519 keys, where the algorithm OID contains all the information needed and the RFC specifies that:
the parameters MUST be absent.
Source: https://datatracker.ietf.org/doc/html/rfc8410#section-3
Right now CI time is wasted on re-compiling `cargo hack` from scratch each time. This can be done similarly to how we work with `cross`.
Allow building tls-codec with `no_std`.

I now see this is a bit more `std`-dependent than I realized, due to the extensive use of `std::io::{Read, Write}`.
Originally posted by @tarcieri in #56 (review)
These string types are needed for e.g. RFC 5280's `DisplayText`:
DisplayText ::= CHOICE {
ia5String IA5String (SIZE (1..200)),
visibleString VisibleString (SIZE (1..200)),
bmpString BMPString (SIZE (1..200)),
utf8String UTF8String (SIZE (1..200)) }
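A sketch of how `DisplayText` might map onto a Rust CHOICE (names are assumptions; only the `utf8String` arm is fleshed out, enforcing the RFC 5280 SIZE bound):

```rust
// CHOICE with a size constraint on construction.

#[derive(Debug, PartialEq)]
enum DisplayText {
    Utf8String(String),
    // ia5String / visibleString / bmpString arms would go here once the
    // corresponding string types exist.
}

impl DisplayText {
    fn new_utf8(s: &str) -> Result<Self, &'static str> {
        match s.chars().count() {
            1..=200 => Ok(DisplayText::Utf8String(s.to_owned())),
            _ => Err("DisplayText is constrained to SIZE (1..200)"),
        }
    }
}

fn main() {
    assert!(DisplayText::new_utf8("hello").is_ok());
    assert!(DisplayText::new_utf8("").is_err());
    assert!(DisplayText::new_utf8(&"x".repeat(201)).is_err());
}
```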
Please post a comment to request additional formats
There is a bug with the generated DER somewhere in the `der` crate. I discovered this while writing a `pkcs10` crate. This causes the encoding tests to fail. Specifically, it fails to correctly emit the `IMPLICIT` tag modification of the `SET` in `CertReqInfo::attributes` (see tagged PR). Decoding works correctly.

The fix is probably simple, but I don't know enough about the structure of the `der` crate to find it easily. And since I'm not sure you actually want this PR, I didn't spend a lot of time on it. I'm willing to produce a patch if you point me in the right direction.
The following code triggers the bug:
#[derive(Clone, Debug, PartialEq, Eq, Sequence)]
pub struct Foo {
#[asn1(context_specific = "0", tag_mode = "IMPLICIT")]
pub bar: SetOfVec<Baz>,
}
Decoding of the above code works correctly, but encoding causes the `SET` to be emitted without the context-specific `0` tag.
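For reference, the tag arithmetic the encoder should apply (a sketch of the X.690 rule, not the `der` crate's internals): IMPLICIT `[0]` replaces the SET tag `0x31` with a context-specific tag, keeping the constructed bit.

```rust
const CONSTRUCTED: u8 = 0b0010_0000;
const CONTEXT_SPECIFIC: u8 = 0b1000_0000;

fn implicit_tag(universal_tag: u8, tag_number: u8) -> u8 {
    // Keep the constructed bit of the underlying type, swap in the
    // context-specific class and the new tag number.
    CONTEXT_SPECIFIC | (universal_tag & CONSTRUCTED) | tag_number
}

fn main() {
    const SET: u8 = 0x31; // universal, constructed
    assert_eq!(implicit_tag(SET, 0), 0xA0); // what `bar` should be tagged as
    assert_eq!(implicit_tag(0x30, 1), 0xA1); // SEQUENCE under IMPLICIT [1]
}
```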
As part of investigating indygreg/PyOxidizer#482 I was looking at what other ASN.1 implementations do for `GeneralizedTime`.

This project's implementation, like mine, assumes the no-fractional-seconds definition of `GeneralizedTime` per RFCs like 5280. However, the ASN.1 definition of `GeneralizedTime` (I think X.690 is the current specification) allows multiple representations, including with fractional seconds and/or with a time zone differentiator. At least one crypto-related RFC (3161, Time-Stamp Protocol) allows use of the form with fractional seconds.

I know less about crypto than this project's maintainers. But I think there is a compelling case to allow `GeneralizedTime` to accept fractional seconds, or to define a variant of this type that allows fractional seconds, since there is at least one crypto RFC needing it, and one used as part of many PKI systems at that. I'm leaning towards making `GeneralizedTime` generic over an enum that controls which forms are allowed, but I haven't decided yet. I'll be very curious to see what this project decides!
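The permissive form under discussion can be sketched as a splitter that accepts an optional fractional part (an illustration assuming the DER-required trailing "Z"; not any crate's actual parser):

```rust
// Split an X.690-style GeneralizedTime into its base "YYYYMMDDHHMMSS"
// part and optional fractional seconds.

fn parse_generalized_time(s: &str) -> Result<(&str, Option<&str>), &'static str> {
    let s = s.strip_suffix('Z').ok_or("expected trailing Z")?;
    let (base, frac) = match s.split_once('.') {
        Some((base, frac)) if !frac.is_empty() => (base, Some(frac)),
        Some(_) => return Err("empty fractional part"),
        None => (s, None),
    };
    if base.len() != 14 || !base.bytes().all(|b| b.is_ascii_digit()) {
        return Err("expected YYYYMMDDHHMMSS");
    }
    Ok((base, frac))
}

fn main() {
    // RFC 5280 profile: no fraction.
    assert_eq!(parse_generalized_time("20230101120000Z"), Ok(("20230101120000", None)));
    // RFC 3161 allows fractional seconds.
    assert_eq!(parse_generalized_time("20230101120000.5Z"), Ok(("20230101120000", Some("5"))));
    assert!(parse_generalized_time("20230101120000.Z").is_err());
}
```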
Problem trying to declare a SEQUENCE that contains an ENUMERATED field. Code as follows:
#[derive(Enumerated, Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
#[repr(u32)]
enum EID {
e1 = 1,
e2 = 2,
e3 = 3
}
#[derive(Clone, Debug, Eq, PartialEq, Sequence, PartialOrd, Ord)]
pub struct E {
eid: EID,
b1: bool,
b2: bool,
}
impl OrdIsValueOrd for E {}
type SetOfE = SetOf<E, 2>;
Compiling says:
error[E0759]: decoder has an anonymous lifetime '_ but it needs to satisfy a 'static lifetime requirement
--> src/main.rs:83:39
|
83 | #[derive(Clone, Debug, Eq, PartialEq, Sequence, PartialOrd, Ord)]
| ^^^^^^^^
| |
| this data with an anonymous lifetime '_...
| ...is captured and required to live as long as 'static here
|
= note: this error originates in the derive macro Sequence (in Nightly builds, run with -Z macro-backtrace for more info)
Hi there,
I'm trying to use ASN1 to store a large data set (up to 200 MB or so). This in replacement of a file that is currently packed struct stored as bytes to disk. I was wondering if you had any hints on the best approach to do this.
The data I have is an "ephemeris" which stores several "splines." Before starting the encoding, I do not know how many splines I'll need to encode/decode, but it could be several thousands. The reference file I'm using defines that length up front and then libraries perform a direct access to the correct spline (there's a specific algorithm on how to retrieve the correct spline number from some input information, and then seeking through the file is sufficient to grab the data).
In an attempt to mimic this behavior, I'm trying the following structure:
pub struct Ephemeris<'a> {
pub spline_duration_s: f64,
pub splines: &'a [Spline<'a>], // We can't do SequenceOf<Spline<'a>, N>, because N would be too large to fit on the stack
}
And the following encoding and decoding implementations (the encoding works, but the decoding fails)
impl<'a> Encode for Ephemeris<'a> {
fn encoded_len(&self) -> der::Result<der::Length> {
self.spline_duration_s.encoded_len()?
+ self.splines.iter().fold(Length::new(2), |acc, spline| {
(acc + spline.encoded_len().unwrap()).unwrap()
})
}
fn encode(&self, encoder: &mut der::Encoder<'_>) -> der::Result<()> {
encoder.encode(&self.spline_duration_s)?;
encoder.sequence(Length::new(self.splines.len() as u16), |encoder| {
for spline in self.splines {
encoder.encode(spline)?;
}
Ok(())
})
}
}
impl<'a> Decode<'a> for Ephemeris<'a> {
fn decode(decoder: &mut Decoder<'a>) -> der::Result<Self> {
let spline_duration_s = decoder.decode()?;
let expected_len: u32 = decoder.peek_header().unwrap().length.into();
dbg!(expected_len);
// XXX: how can I point each spline to the input buffer? I can't perform an alloc, nor can I
let mut splines: &'a [Spline<'a>; 1000]; // XXX: How do I even initialize this?
decoder.sequence(|decoder| {
for idx in 0..expected_len {
splines[idx as usize] = decoder.decode()?;
}
Ok(())
});
Ok(Self {
spline_duration_s,
splines,
})
}
}
How can I get the `splines` field to point to the decoded splines?

For reference, here is how a spline is defined, where `x`, `y`, `z` are encoded as OctetStrings.
pub struct Spline<'a> {
pub start_epoch: Epoch,
pub end_epoch: Epoch,
/// State information (km)
pub x: &'a [u8],
/// State information (km)
pub y: &'a [u8],
/// State information (km)
pub z: &'a [u8],
}

Thanks for your help
I cannot derive "ValueOrd" on a struct/sequence that contains an spki::SubjectPublicKeyInfo, and therefore cannot declare a SetOf that struct. Should spki::SubjectPublicKeyInfo provide a standard implementation of ValueOrd?
The `Sequence` trait is built around a callback which accepts a `[&dyn Encodable]` slice as an argument.

This necessitates what are currently some special-case hacks when dealing with intermediate values used for encoding which need to wrap a reference to a field. There are currently `ContextSpecificRef` and `OptionalRef` to handle encoding context-specific and `OPTIONAL` fields.
A nicer alternative would be to have a single solution that works everywhere, rather than introducing special-case types for handling references as this comes up. Some potential solutions:

1. A generic reference newtype which impls `Encodable` in terms of `EncodeValue`, e.g. `EncodeRef`. This feels like a bit of a hack, but less of a special-case one than things like `ContextSpecificRef` and `OptionalRef`. In fact, the latter could be type aliases composed in terms of the former.
2. Changing the `EncodeValue` blanket impl to operate on `&T` where `T: EncodeValue`. This is a bit tricky because there's already a blanket impl of `EncodeValue` for sequence types, so that would need a different solution. However, as `Sequence` is already a derived trait, it would be possible to add a derived `EncodeValue` shim which uses a helper method to easily encode any `Sequence`.

My inclination is to try the first immediately, as that one's easy enough, then experiment with the second to see if it's feasible.
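The first option can be sketched with simplified traits (`EncodeRef` is the name floated above; `Encodable` here is a toy, not the `der` crate's trait):

```rust
// A generic reference newtype that forwards encoding, usable anywhere a
// [&dyn Encodable] slice needs a wrapped field reference.

trait Encodable {
    fn encode(&self, out: &mut Vec<u8>);
}

struct EncodeRef<'a, T>(&'a T);

impl<'a, T: Encodable> Encodable for EncodeRef<'a, T> {
    fn encode(&self, out: &mut Vec<u8>) {
        self.0.encode(out)
    }
}

impl Encodable for u8 {
    fn encode(&self, out: &mut Vec<u8>) {
        out.push(*self);
    }
}

fn main() {
    let field = 0x2Au8;
    // Both the field and a wrapped reference to it fit in one dyn slice:
    let fields: [&dyn Encodable; 2] = [&field, &EncodeRef(&field)];
    let mut out = Vec::new();
    for f in fields {
        f.encode(&mut out);
    }
    assert_eq!(out, vec![0x2A, 0x2A]);
}
```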
An interesting optimization for `no_std` platforms would be to support in-place decoders/encoders.

The entire crate is already written to operate over byte slices as inputs/outputs, which should make much of the existing implementation reusable in this context.
Seems like something which would nicely be replaced with const generics, whenever you're ready to bump MSRV to 1.51.
Originally posted by @tarcieri in #56 (comment)
Picking up the discussion from: RustCrypto/utils#553 (comment)
This is an issue to discuss better naming for the `*pem_with_le` methods in the `From*Key`/`To*Key` traits.
This issue proposes, as a breaking change in the next release, to impl the following:

- `FromPrivateKey` (and potentially `ToPrivateKey`) on `PrivateKeyDocument`
- `FromPublicKey` (and potentially `ToPublicKey`) on `PublicKeyDocument`

The methods in these traits both duplicate and thunk through the methods on the respective document types, so impl'ing the traits would consolidate all of the functionality in one place.

An additional benefit is that methods like `EncryptedPrivateKeyInfo::decrypt` could be made generic around a `T: FromPrivateKey` return type, making it possible to decrypt encrypted private keys to any type which impls `FromPrivateKey`. Likewise an `encrypt` method could be added to `ToPrivateKey`.
The `FromPrivateKey` trait presently has a method:

https://docs.rs/pkcs8/0.6.0/pkcs8/trait.FromPrivateKey.html#tymethod.from_pkcs8_private_key_info

fn from_pkcs8_private_key_info(
    private_key_info: PrivateKeyInfo<'_>
) -> Result<Self>

It seems like this could easily enough be replaced with a `TryFrom<PrivateKeyInfo<'a>>` bound instead.
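The suggested replacement can be sketched with toy types (not `pkcs8`'s real definitions): consumers bound on `TryFrom<PrivateKeyInfo<'a>>` rather than a bespoke trait method.

```rust
// Toy PrivateKeyInfo and key type; the generic consumer needs only TryFrom.

struct PrivateKeyInfo<'a> {
    private_key: &'a [u8],
}

#[derive(Debug, PartialEq)]
struct ToyKey([u8; 4]);

impl<'a> TryFrom<PrivateKeyInfo<'a>> for ToyKey {
    type Error = &'static str;
    fn try_from(info: PrivateKeyInfo<'a>) -> Result<Self, Self::Error> {
        info.private_key.try_into().map(ToyKey).map_err(|_| "bad length")
    }
}

/// Generic over anything constructible from a PrivateKeyInfo.
fn load_key<'a, T: TryFrom<PrivateKeyInfo<'a>>>(info: PrivateKeyInfo<'a>) -> Option<T> {
    T::try_from(info).ok()
}

fn main() {
    let info = PrivateKeyInfo { private_key: &[1, 2, 3, 4] };
    assert_eq!(load_key::<ToyKey>(info), Some(ToyKey([1, 2, 3, 4])));
    let bad = PrivateKeyInfo { private_key: &[1, 2] };
    assert_eq!(load_key::<ToyKey>(bad), None);
}
```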
Before going too deep into this, I'm trying to implement a Subject Alternative Names struct, RFC section link here: https://datatracker.ietf.org/doc/html/rfc2459#section-4.2.1.7
`GeneralName` here is a CHOICE with 9 context-specific implicit tags. Several of these tags have the same underlying type.

I'm trying to implement the `Encodable`/`Decodable`/`Choice` traits on my `GeneralName` enum. I can't use the `Choice` derive macro because:

- it doesn't support `Ia5String` as a possible type (my main interest)
- the `Choice` derive macro does not allow for multiple enum variants of the same intermediate subtype

Having dug into the expanded macro, I think I understand why, so I figured I could implement the traits on my own, but I'm running into trouble implementing `TryFrom<Any<'a>>`. Code snippet:
fn try_from(any: Any<'a>) -> Result<Self, Self::Error> {
match any.tag() {
der::Tag::Ia5String => any
.ia5_string()
.ok()
.and_then(|val| val.try_into().ok())
.ok_or_else(|| der::Tag::Ia5String.value_error()),
actual => Err(der::ErrorKind::UnexpectedTag {
expected: None,
actual,
}
.into()),
}
}
Should the tag requested here be an `Ia5String`, or do I need to manually interpret the tag? I also don't understand how to interpret implicit context-specific tags, and the `ContextSpecific` struct seems to set the constructed bit even though, in the X.509 certificates I've parsed, none of them do (the tag for each SAN goes `0x82 <len> <value>`, whereas the `ContextSpecific` struct always serializes to `0xA2`).
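The difference comes from the X.690 rule that under IMPLICIT tagging the constructed bit follows the underlying type: `IA5String` is primitive in DER, so `[2] IMPLICIT IA5String` (dNSName) is `0x82`, while an EXPLICIT `[2]` wrapper (or an implicitly tagged SEQUENCE) would be constructed, `0xA2`. A sketch of the arithmetic:

```rust
const CONSTRUCTED: u8 = 0b0010_0000;
const CONTEXT_SPECIFIC: u8 = 0b1000_0000;

fn implicit_context_tag(tag_number: u8, constructed: bool) -> u8 {
    CONTEXT_SPECIFIC | if constructed { CONSTRUCTED } else { 0 } | tag_number
}

fn main() {
    // dNSName [2] IMPLICIT IA5String: primitive contents => 0x82.
    assert_eq!(implicit_context_tag(2, false), 0x82);
    // An EXPLICIT [2] wrapper is constructed => 0xA2.
    assert_eq!(implicit_context_tag(2, true), 0xA2);
}
```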
Can the library encode a plain private key using `pkcs8` to DER/PEM?

Please, if yes, let me know how. Thanks in advance.
The der_derive crate does not contain license files, neither in the git repo, nor in published crates. Since all other crates I see in this repository (and in the whole RustCrypto project, to be honest) have this set up correctly, this is probably an oversight :)
Hi, first of all, many thanks for sharing this library. If possible, could you please give an example of how to create a pair of SSH keys (private/public), mainly for RSA 2048?
|
20 | println!("{}", Base64::encode_string(&msg));
| ^^^^^^^^^^^^^ function or associated item not found in Base64
I'm starting to play around with `der` and noticed, as with all ASN.1 crates I've seen, that when describing an `enum` with `CHOICE`, the variants are only allowed to have one tuple field when using derive macros. I'm curious why that is. I'm very new to ASN.1, but from my understanding it's possible to have a choice with multiple structs within it. For instance, couldn't something like
enum Person {
Firstname(String),
FirstLast(String, String)
}
be represented by the following?
FirstLast ::= SEQUENCE {
field0 UTF8String,
field1 UTF8String }
Person ::= CHOICE {
firstname UTF8String,
firstLast FirstLast }
Is the idea to match the types in Rust to the format describable by the schema files (in other words, `FirstLast` would be a first-class struct in the Rust code with two fields)? Or is there a technical reason for not doing this, like it would introduce some kind of ambiguity?

I'm mainly asking because I'm adding DER serialization to an existing system that makes use of enums with more than one tuple field, and I'm trying to figure out if I can implement `Encodable`/`Decodable` directly for these using the format above (encoding the tuples as an implied sequence) or if I should convert them to `struct`s with named fields instead.
Thanks for the library!
Thoughts on adding PKCS12/RFC7292 and (eventually) complete or near complete PFX support? It's a monster and it sucks but it's in wide use thanks to Windows' use of the format.
I'm happy to take ownership of the feature and start work but can't guarantee any timelines on it
While building base64ct or any crate which depends on it, I get the following error:

error: failed to parse manifest at /../../../../formats/base64ct/Cargo.toml

Caused by:
  feature `edition2021` is required
  consider adding `cargo-features = ["edition2021"]` to the manifest

Please have a look at whether this is due to a new Rust-related specification.
Hi Tony,
Unless I'm mistaken, in der version 0.6 the `inner` field of a `SequenceOf` struct is not public: would it be possible to get a mutable pointer to that ArrayVec? From that, I think I should be able to call any of the methods of `slice` (e.g. the `sort` method) on the underlying data, since it implements `AsRef`. Is that correct?

My use case is the following: I am building a lookup table of `u32`s, stored as a `SequenceOf<u32, 256>`. Prior to encoding it, I want to ensure it's sorted so I can do a binary search on that data upon loading it.

Another option would be for me to store the data as an `ArrayVec` and then initialize the `SequenceOf` from that `ArrayVec` (probably by moving the data) when encoding it, but that seems less straightforward.
Thanks for your help
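A workaround sketch for the lookup-table use case described above (assuming no access to `SequenceOf`'s inner field): keep the values in a plain array, sort before encoding, and binary-search after decoding.

```rust
// Sort once before encoding; binary-search after loading.

fn build_table(mut values: [u32; 8]) -> [u32; 8] {
    values.sort_unstable();
    values
}

fn lookup(table: &[u32], needle: u32) -> Option<usize> {
    table.binary_search(&needle).ok()
}

fn main() {
    let table = build_table([42, 7, 99, 1, 13, 64, 3, 27]);
    assert_eq!(table, [1, 3, 7, 13, 27, 42, 64, 99]);
    assert_eq!(lookup(&table, 13), Some(3));
    assert_eq!(lookup(&table, 5), None);
}
```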
Is there a way to create RSA keys of 1024 as the minimum instead of 2048 or make it optional?
I am trying to create keys using Cloudflare workers, but with the default of 2048 I randomly get errors of exceeding CPU time for example: Error: Worker exceeded CPU time limit.
I saw that the crate on crates.io still points to the old repository.