
heapless's People

Contributors

afoht, andresv, ansg191, birkenfeld, bors[bot], debug-ito, dirbaio, ede1998, finnbear, homunkulus, japaric, jeandudey, jgallagher, jordens, korken89, kpp, lambda-logan, marcusgrass, newam, pleepleus, reitermarkus, rjsberry, rlee287, samlich, sosthene-nitrokey, tdholmes, vinaychandra, vorot93, xosplicer, yuhanliin


heapless's Issues

Error on nightly: Unions in const fn are unstable

I'm getting this error on recent nightlies:

error[E0658]: unions in const fn are unstable (see issue #51909)
  --> /Users/josef/.cargo/registry/src/github.com-1ecc6299db9ec823/heapless-0.3.6/src/__core.rs:43:9
   |
43 |         U { none: () }.some
   |         ^^^^^^^^^^^^^^^^^^^
   |
   = help: add #![feature(const_fn_union)] to the crate attributes to enable

error: aborting due to previous error

Benchmarks

The probing algorithm used for IndexMap in this crate is pretty rudimentary, and from my admittedly unscientific benchmarks (basically just the benchmarks from the hashbrown crate, comparing fnv::FnvHashMap with the fnv-based index map from this crate) it seems that IndexMap is around 15x slower than std::collections::HashMap:

test indexmap::tests::get_remove_insert_heapless ... bench:         665 ns/iter (+/- 263)
test indexmap::tests::get_remove_insert_std      ... bench:          43 ns/iter (+/- 19)

Here are the benchmarks used to get the results above. I compiled with LTO and codegen-units = 1 to make sure that std wasn't getting benefits from being inlined where heapless wasn't, most notably around std::hash vs hash32. Of course, these benchmarks are for large maps; smaller maps won't show such pronounced differences. Also, the use of hash32 will probably give a speedup on 32-bit targets that the std map doesn't have access to.

use test::Bencher; // nightly-only: needs #![feature(test)] and `extern crate test`

#[bench]
fn get_remove_insert_heapless(b: &mut Bencher) {
    let mut m = crate::FnvIndexMap::<_, _, U1024>::new();

    for i in 1..1001 {
        m.insert(i, i).unwrap();
    }

    let mut k = 1;

    b.iter(|| {
        m.get(&(k + 400));
        m.get(&(k + 2000));
        m.remove(&k);
        m.insert(k + 1000, k + 1000).unwrap();
        k += 1;
    })
}

#[bench]
fn get_remove_insert_std(b: &mut Bencher) {
    let mut m = fnv::FnvHashMap::with_capacity_and_hasher(1024, Default::default());

    for i in 1..1001 {
        m.insert(i, i);
    }

    let mut k = 1;

    b.iter(|| {
        m.get(&(k + 400));
        m.get(&(k + 2000));
        m.remove(&k);
        m.insert(k + 1000, k + 1000);
        k += 1;
    })
}

I'm writing an embedded project that needs a hashmap, and although I do have access to an allocator, avoiding it will make my performance more predictable. So I might try to put in a PR improving the performance of IndexMap if I get some spare time to do so.

Failure to build on AVR due to missing atomic types

I'm trying to build a project for the ATmega328P, but I'm hitting an error while compiling heapless.

Error

error[E0432]: unresolved imports `core::sync::atomic::AtomicU16`, `core::sync::atomic::AtomicUsize`
 --> /home/patrick/.cargo/git/checkouts/heapless-9de3abde4d9bcaa0/9ff3a5f/src/sealed.rs:7:36
  |
7 |     use core::sync::atomic::{self, AtomicU16, AtomicU8, AtomicUsize, Ordering};
  |                                    ^^^^^^^^^            ^^^^^^^^^^^ no `AtomicUsize` in `sync::atomic`
  |                                    |
  |                                    no `AtomicU16` in `sync::atomic`
  |                                    help: a similar name exists in the module: `AtomicU8`

Reproduction steps

Same as for #177

Versions

rustc 1.48.0-nightly (e599b53e6 2020-09-24)
heapless commit 9ff3a5f (latest master as of writing)
avr-atmega328p.json from https://github.com/Rahix/avr-hal, commit 2a59d441 (latest master as of writing)

Workaround

I was able to compile after adding avr-atmega328p to the rustc-cfg=has_atomics check in build.rs, but I'm not sure if that's the correct way to go about it since the target isn't built-in yet.

Serde support not working?

I am trying to derive Deserialize for a struct and I'm getting the following error:

the trait bound `heapless::string::String<typenum::uint::UInt<typenum::uint::UInt<typenum::uint::UInt<typenum::uint::UInt<typenum::uint::UInt<typenum::uint::UTerm, typenum::bit::B1>, typenum::bit::B0>, typenum::bit::B1>, typenum::bit::B0>, typenum::bit::B1>>: config::_IMPL_DESERIALIZE_FOR_Config::_serde::Deserialize<'_>` is not satisfied

It's strange, because I can see that heapless supports serde.
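
For reference, serde support in heapless is gated behind a Cargo feature, so the derive only compiles once that feature is enabled. A minimal sketch, assuming heapless 0.5 and that the feature is simply named serde:

// Cargo.toml: heapless = { version = "0.5", features = ["serde"] }
use heapless::{consts::U32, String};
use serde::Deserialize;

#[derive(Deserialize)]
struct Config {
    name: String<U32>, // fixed-capacity heapless string
}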

Vector .resize() broken

Hello. While working on a project I may have found a potential bug with .resize(&mut self, new_len: usize, value: T). When the following code is executed, there is no change in the capacity of the vector.

let mut data: Vec<u8, U1> = Vec::new();
hprintln!("data = {:#?}", data.capacity());
data.resize(10, 0);
hprintln!("data = {:#?}", data.capacity());

Correct me if I'm wrong, but I believe this should work the same way as in std, right?
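
For context, the capacity of a heapless::Vec is fixed by its type parameter, so resize cannot grow the backing storage the way std's Vec does. A small sketch of the behaviour to expect, assuming resize reports failure through a Result as in heapless 0.5:

use heapless::{consts::U1, Vec};

let mut data: Vec<u8, U1> = Vec::new();
assert_eq!(data.capacity(), 1);       // capacity is baked into the type
assert!(data.resize(10, 0).is_err()); // growing past the fixed capacity fails
assert!(data.resize(1, 0).is_ok());   // resizing within the capacity works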

Serialize Vec<u8> as bytes

Some serialization formats distinguish between arrays of bytes and byte strings.

One approach is to defer to serde-bytes, serde-rs/bytes#18.

An alternative is for heapless to have a specialized Vec<u8> wrapper type with its own ser/de routines, delegating everything else to Vec. This is probably preferable, so that all the traits remain available.

Thoughts? I could give this a try if there is support for including such a Bytes type here (I have a private, non-polished PoC that does the obvious thing). Maybe there are other properties it could have over a generic Vec, such as Ord, (maybe?) a more efficient extend_from_slice, and being a more natural target for String::into_bytes.
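
A minimal sketch of the wrapper idea (the name, capacity, and bounds are hypothetical): storage is delegated to Vec, but serialization goes through serialize_bytes so formats with a byte-string type can use it:

use heapless::{consts::U64, Vec};
use serde::{Serialize, Serializer};

// Hypothetical newtype: behaves like Vec<u8, N> but serializes as a byte string.
pub struct Bytes(Vec<u8, U64>);

impl Serialize for Bytes {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        serializer.serialize_bytes(&self.0)
    }
}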

Did pool in an external crate break in the update?

I'm not 100% sure if I'm doing something wrong, but it seems that after the update pool! broke when used in an external crate and exported.

Simply creating a pool in crate X as:

use heapless::{
    pool,
    pool::singleton::Pool,
};

// ...

pool!(Mypool: [u8; 16]);

And trying to use it:

use X::Mypool;

// ...
Mypool::grow(MEMORY);
// ...

Gives the error:

the trait `heapless::pool::singleton::Pool` is not implemented for `X::Mypool`

This did not happen before the update, and I'm having trouble working around it. Any ideas?
Thanks!

Atomics and multi core support

The current implementation of ring_buffer::{Consumer,Producer} only works on single-core systems because it's missing memory barriers / fences. The proper way to add those barriers is to change the types of the head and tail fields in RingBuffer from usize to AtomicUsize.

We are not doing that right now because that change would make us drop support for ARMv6-M, which is one of the main use cases (single core microcontrollers). If rust-lang/rust#45085 was implemented we could do the change to AtomicUsize while still supporting ARMv6-M.

I think the options are: (a) wait for rust-lang/rust#45085, or (b) provide a Cargo feature that switches the implementation to atomics to enable multi-core support (this seems error prone: you could forget to enable the feature; the other option is to make multi-core support a default feature, but default features are hard, and sometimes impossible, to disable, which hinders ARMv6-M support).
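
A rough sketch of option (b), assuming a hypothetical multicore Cargo feature that selects how the head and tail indices are represented (the actual buffer and the Sync plumbing are omitted):

// With the feature enabled: atomic indices with acquire/release ordering.
#[cfg(feature = "multicore")]
mod index {
    use core::sync::atomic::{AtomicUsize, Ordering};
    pub type Index = AtomicUsize;
    pub fn load(i: &Index) -> usize { i.load(Ordering::Acquire) }
    pub fn store(i: &Index, v: usize) { i.store(v, Ordering::Release) }
}

// Default (single core, e.g. ARMv6-M): plain loads and stores, no fences.
#[cfg(not(feature = "multicore"))]
mod index {
    pub type Index = core::cell::Cell<usize>;
    pub fn load(i: &Index) -> usize { i.get() }
    pub fn store(i: &Index, v: usize) { i.set(v) }
}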

Should the usage of {Option, Result}::expect be avoided to save memory?

Currently there are a few functions that use {Option, Result}::expect because there is no way to return an Option or Result. Examples are LinearMap::{index, index_mut}, Vec::from_iter and IndexMap::index.

AFAIK the messages associated with them are statically allocated in the data section of the binary and therefore take up space, even if no panic can actually occur.

As this library is used on microcontrollers, should those strings be removed?
I don't know if Rust is smart enough to remove them from the binary by itself.

However, some clarity in panic messages would be sacrificed in exchange.

Pool: Store the pointer to the next node and data in the same memory location.

Instead of having:

pub struct Node<T> {
    next: AtomicPtr<Node<T>>,
    pub(crate) data: UnsafeCell<T>,
}

We could just use one location to store both things, since we don't need to know both at the same time. I made a small PoC of this:

Playground

I still have to check a few things, but I would like some opinions on this, to see if it's really sound and worth it. I can then adapt it to heapless' code and make a PR if there's a disposition to merge.
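
For readers without access to the playground link, a minimal sketch of the idea (field and type names are hypothetical): while a node sits on the free list only the next pointer is meaningful, and once it has been handed out only the data is, so a union can overlap the two:

use core::mem::ManuallyDrop;

pub struct Node<T> {
    inner: NodeInner<T>,
}

pub union NodeInner<T> {
    next: *mut Node<T>,    // valid while the node is on the free list
    data: ManuallyDrop<T>, // valid while the node is allocated to a user
}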

heapless does not build for riscv32imc-unknown-none-elf target

rv32imc does not have atomics, which leads to this issue:

error[E0599]: no method named `compare_exchange_weak` found for type `core::sync::atomic::AtomicPtr<pool::Node<T>>` in the current scope
   --> /home/ryan/.cargo/registry/src/github.com-1ecc6299db9ec823/heapless-0.4.4/src/pool/mod.rs:278:33
    |
278 |                 match self.head.compare_exchange_weak(
    |                                 ^^^^^^^^^^^^^^^^^^^^^ method not found in `core::sync::atomic::AtomicPtr<pool::Node<T>>`

Cannot create static HistoryBuffer

With Vec it is possible to do this:

use heapless::Vec; // fixed capacity `std::Vec`
use heapless::consts::U8; // type level integer used to specify capacity
static mut XS: Vec<u8, U8> = Vec(heapless::i::Vec::new());
let xs = unsafe { &mut XS };

However HistoryBuffer is missing in heapless::i.

"Classic" ringbuffer support?

I've needed a "classic" ring buffer quite a few times now, and since it's not that hard I've rolled my own. What I mean is:

  • fixed-capacity array plus an index for the "head"
  • push() moves the head and overwrites the oldest element
  • allows random access via an iterator, plus some standard accessors like first() and last()
  • (maybe:) all items are initially filled with some default value, which gets rid of the Option<>s for first() etc.
  • normal &mut methods, no lock-free support

This is nothing fancy, but it seems to fall under the mission statement of heapless? If yes, I'll contribute an initial implementation PR.
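
A rough sketch of the kind of type being proposed (all names are hypothetical, and const generics are used here only to keep the sketch short):

// Fixed-capacity "classic" ring buffer: push() overwrites the oldest element.
pub struct History<T: Default + Copy, const N: usize> {
    buf: [T; N],
    head: usize, // index of the slot the next push() will overwrite
}

impl<T: Default + Copy, const N: usize> History<T, N> {
    pub fn new() -> Self {
        Self { buf: [T::default(); N], head: 0 } // pre-filled, so no Option needed
    }

    pub fn push(&mut self, item: T) {
        self.buf[self.head] = item;
        self.head = (self.head + 1) % N;
    }

    pub fn last(&self) -> &T {
        &self.buf[(self.head + N - 1) % N] // most recently pushed element
    }
}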

Comment in the pool! macro does not propagate to the struct

Example:

pool!(
    /// Pool allocator for the DMA engine running the transfers, 
    /// where each node in the allocator is an `MyPoolNode`
    #[allow(non_upper_case_globals)]
    MyPool: MyPoolNode
);

The attribute propagates properly, but the comment does not.
This makes it difficult to document and show examples connected to the pool.

How to write generic function

IndexMap requires

N: ArrayLength<Bucket<K, V>> + ArrayLength<Option<Pos>>

But Bucket and Pos are private,
so I can't use them in a function signature.

How can I write a function that works with an IndexMap of any size?

FromIterator for Vec

Is it possible to implement FromIterator for Vec, and maybe for the other collections?
It is needed for .collect().
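
A sketch of what such an impl could look like inside the crate (bounds as in the current generic-array based API); the open question is the policy when the iterator yields more items than the fixed capacity:

use core::iter::FromIterator;
use generic_array::ArrayLength;

impl<T, N> FromIterator<T> for Vec<T, N>
where
    N: ArrayLength<T>,
{
    fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self {
        let mut vec = Vec::new();
        for item in iter {
            if vec.push(item).is_err() {
                break; // silently drop the excess -- or panic!(), depending on policy
            }
        }
        vec
    }
}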

compile on stable

Off the top of my head these things need to be done to get this compiling on stable:

  • Put all const functions behind a const-fn feature.

  • Put the generic atomic type (Atomic<T>) behind a Cargo feature. AtomicUsize should be used everywhere when that feature is disabled.

  • Remove the nonzero feature (NonZeroU32) in favor of manually checking for some magic value that represents None.
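
For the first point, the usual shape of such a feature gate is two copies of the constructor, only one of which is marked const; a sketch (feature name assumed to be const-fn):

pub struct Count(usize);

impl Count {
    #[cfg(feature = "const-fn")]
    pub const fn new() -> Self { Count(0) } // nightly / const-fn builds

    #[cfg(not(feature = "const-fn"))]
    pub fn new() -> Self { Count(0) }       // stable builds
}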

Vec: Simpler creation from slice

Right now creating a Vec with fixed data can be a bit verbose...

let mut data = Vec::<u8, consts::U256>::new();
data.extend_from_slice(b"Hello").unwrap();

Would you accept a PR that adds a constructor function?

let data = Vec::<u8, consts::U256>::from_slice(b"Hello").unwrap();

Besides requiring less code, it also allows dropping the mut.
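
As a free-function sketch of what the proposed constructor would do (built on the existing extend_from_slice):

use heapless::{consts::U256, Vec};

fn vec_from_slice(slice: &[u8]) -> Result<Vec<u8, U256>, ()> {
    let mut v = Vec::new();
    v.extend_from_slice(slice)?; // fails if the slice exceeds the capacity
    Ok(v)
}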

Const generics

Const generics are getting more popular. I can try to implement heapless using const generics and push it into a separate branch until they are stable. It is a nice challenge both for coding skills and for const generics.
Shall I start?
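
For illustration, a sketch of what the storage could look like with usize const generics instead of typenum/GenericArray:

use core::mem::MaybeUninit;

pub struct Vec<T, const N: usize> {
    buffer: [MaybeUninit<T>; N], // only the first `len` elements are initialized
    len: usize,
}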

Interest in adding BBQueue for batched SPSC support

Hey @japaric, I've gotten bbqueue to a pretty usable state, and I was wondering if you would be interested in adding it to the heapless collections. It is an SPSC queue, but rather than being "one item at a time" it's more like "one slice at a time", meaning you can request a "chunk" of space to read/write, which pairs well with DMA transfers.

Originally I planned to use generic-array to match the interfaces provided by heapless::queue; however, after struggling to get that to work for my use cases, I ended up writing a "static allocator" macro based on your singleton!() macro in the cortex-m crate.

Let me know if you'd be interested in adding this, or if you have any thoughts or comments on the impl or what it would take to merge it in. I saw your other PR regarding DMA; I think this might work well with that (it gives you &'static [u8]s you can hand to the DMA impl).

Could not compile `as-slice` when targeting AVR

I'm a newbie to embedded Rust (and Rust in general). When I add heapless to this project, it fails to build. If I comment out heapless in the Cargo.toml, it builds fine.

REPRO

  1. Create a lib
(base) ~/temp/break $ cargo new --lib abc
     Created library `abc` package
  2. Build
(base) ~/temp/break/abc $ cargo build
   Compiling abc v0.1.0 (/home/todd/temp/break/abc)
    Finished dev [unoptimized + debuginfo] target(s) in 0.23s
  3. Add heapless to Cargo.toml

Cargo.toml

[dependencies]
heapless = "0.5.6"
(base) ~/temp/break/abc $ cargo build
    Updating crates.io index
   Compiling typenum v1.12.0
   Compiling byteorder v1.3.4
   Compiling stable_deref_trait v1.2.0
   Compiling heapless v0.5.6
   Compiling hash32 v0.1.1
   Compiling generic-array v0.13.2
   Compiling generic-array v0.12.3
   Compiling as-slice v0.1.3
   Compiling abc v0.1.0 (/home/todd/temp/break/abc)
    Finished dev [unoptimized + debuginfo] target(s) in 6.80s
  4. Switch to nightly and build
$ rustup override set nightly

rustup override set nightly
info: using existing install for 'nightly-x86_64-unknown-linux-gnu'
info: override toolchain for '/home/todd/temp/break/abc' set to 'nightly-x86_64-unknown-linux-gnu'

  nightly-x86_64-unknown-linux-gnu unchanged - rustc 1.48.0-nightly (dbb73f8f7 2020-09-12)

(base) ~/temp/break/abc $ cargo build
   Compiling typenum v1.12.0
   Compiling byteorder v1.3.4
   Compiling stable_deref_trait v1.2.0
   Compiling heapless v0.5.6
   Compiling hash32 v0.1.1
   Compiling generic-array v0.13.2
   Compiling generic-array v0.12.3
   Compiling as-slice v0.1.3
   Compiling abc v0.1.0 (/home/todd/temp/break/abc)
    Finished dev [unoptimized + debuginfo] target(s) in 6.78s
  5. Add the AVR stuff
copy from/somewhere/avr-atmega328p.json .
copy from/somewhere/.cargo/config.toml .cargo/.

(project files attached)

ERROR

    Updating crates.io index
       Fresh core v0.0.0 (/home/todd/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core)
       Fresh rustc-std-workspace-core v1.99.0 (/home/todd/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/rustc-std-workspace-core)
       Fresh compiler_builtins v0.1.35
       Fresh typenum v1.12.0
       Fresh byteorder v1.3.4
       Fresh stable_deref_trait v1.2.0
       Fresh generic-array v0.13.2
       Fresh generic-array v0.12.3
       Fresh hash32 v0.1.1
   Compiling as-slice v0.1.3
     Running `rustc --crate-name as_slice /home/todd/.cargo/registry/src/github.com-1ecc6299db9ec823/as-slice-0.1.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C panic=abort -C embed-bitcode=no -C debuginfo=2 -C metadata=fee3837ddb75675c -C extra-filename=-fee3837ddb75675c --out-dir /home/todd/temp/break/abc/target/avr-atmega328p/debug/deps --target /home/todd/temp/break/abc/avr-atmega328p.json -L dependency=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps -L dependency=/home/todd/temp/break/abc/target/debug/deps --extern 'noprelude:compiler_builtins=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps/libcompiler_builtins-7d4617280997064b.rmeta' --extern 'noprelude:core=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps/libcore-2d142ffef2835bf2.rmeta' --extern generic_array=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps/libgeneric_array-15924d526938d28b.rmeta --extern ga13=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps/libgeneric_array-16c4ac167d17a181.rmeta --extern stable_deref_trait=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps/libstable_deref_trait-900fefb67ab7dd77.rmeta -Z unstable-options --cap-lints allow`
error[E0080]: evaluation of constant value failed
   --> /home/todd/.cargo/registry/src/github.com-1ecc6299db9ec823/as-slice-0.1.3/src/lib.rs:169:102
    |
169 |     250, 251, 252, 253, 254, 255, 256, 1 << 9, 1 << 10, 1 << 11, 1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16
    |                                                                                                      ^^^^^^^ attempt to shift left by 16_i32 which would overflow

error[E0119]: conflicting implementations of trait `AsSlice` for type `[_; 0]`:
   --> /home/todd/.cargo/registry/src/github.com-1ecc6299db9ec823/as-slice-0.1.3/src/lib.rs:138:13
    |
138 |               impl<T> AsSlice for [T; $N] {
    |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |               |
    |               first implementation here
    |               conflicting implementation for `[_; 0]`
...
156 | / array!(
157 | |     0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
158 | |     26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
159 | |     50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
...   |
169 | |     250, 251, 252, 253, 254, 255, 256, 1 << 9, 1 << 10, 1 << 11, 1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16
170 | | );
    | |__- in this macro invocation
    |
    = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)

error[E0119]: conflicting implementations of trait `AsMutSlice` for type `[_; 0]`:
   --> /home/todd/.cargo/registry/src/github.com-1ecc6299db9ec823/as-slice-0.1.3/src/lib.rs:147:13
    |
147 |               impl<T> AsMutSlice for [T; $N] {
    |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |               |
    |               first implementation here
    |               conflicting implementation for `[_; 0]`
...
156 | / array!(
157 | |     0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
158 | |     26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
159 | |     50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
...   |
169 | |     250, 251, 252, 253, 254, 255, 256, 1 << 9, 1 << 10, 1 << 11, 1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16
170 | | );
    | |__- in this macro invocation
    |
    = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)

error: aborting due to 3 previous errors

Some errors have detailed explanations: E0080, E0119.
For more information about an error, try `rustc --explain E0080`.
error: could not compile `as-slice`

Caused by:
  process didn't exit successfully: `rustc --crate-name as_slice /home/todd/.cargo/registry/src/github.com-1ecc6299db9ec823/as-slice-0.1.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C panic=abort -C embed-bitcode=no -C debuginfo=2 -C metadata=fee3837ddb75675c -C extra-filename=-fee3837ddb75675c --out-dir /home/todd/temp/break/abc/target/avr-atmega328p/debug/deps --target /home/todd/temp/break/abc/avr-atmega328p.json -L dependency=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps -L dependency=/home/todd/temp/break/abc/target/debug/deps --extern 'noprelude:compiler_builtins=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps/libcompiler_builtins-7d4617280997064b.rmeta' --extern 'noprelude:core=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps/libcore-2d142ffef2835bf2.rmeta' --extern generic_array=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps/libgeneric_array-15924d526938d28b.rmeta --extern ga13=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps/libgeneric_array-16c4ac167d17a181.rmeta --extern stable_deref_trait=/home/todd/temp/break/abc/target/avr-atmega328p/debug/deps/libstable_deref_trait-900fefb67ab7dd77.rmeta -Z unstable-options --cap-lints allow` (exit code: 1)

Time to push out 0.5.0?

Hi all,
Now that we have Rust 1.36 with MaybeUninit, is it time to push out 0.5.0 to start making full use of it?

peek_mut for BinaryHeap?

peek_mut on std::collections::BinaryHeap is sometimes useful: for example, it lets us replace the top element of the heap efficiently, whereas doing the same with pop and push can take more time because of the extra work in pop.
I'd like to see it in heapless::BinaryHeap, too.
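
For reference, this is what peek_mut buys in std: replacing the top element costs a single re-heapification when the guard is dropped, instead of a pop followed by a push:

use std::collections::BinaryHeap;

let mut heap = BinaryHeap::from(vec![3, 1, 7]);
if let Some(mut top) = heap.peek_mut() {
    *top = 2; // the heap is repaired when `top` (a PeekMut guard) is dropped
}
assert_eq!(heap.peek(), Some(&3));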

generic-array v0.14

We just bumped all of the RustCrypto crates to generic-array v0.14, which adds some nice features:

RustCrypto/traits#95

It'd be great if heapless got updated too, so we could share a single version of generic-array (which hasn't been possible so far, as the RustCrypto crates were previously stuck on v0.12).

We use heapless in the aead crate to provide a Vec-like buffer type on no_std. Here's an example:

https://docs.rs/aes-gcm/0.5.0/aes_gcm/#in-place-usage-eliminates-alloc-requirement

heapless on const generics: status

I tried to port the heapless types from GenericArray to usize const generics.
Here is the branch.

Today I managed to compile it in release mode.
For testing you can use: cargo test --release.

Issues:

  • In debug mode the build fails.
  • Queue doesn't work: it fails on type inference. I think the problem is in the interaction between default type parameters and const generics: pub struct Consumer<'a, T, U = usize, C = MultiCore, const N: usize>
  • I don't know how to express the PowerOfTwo restriction for IndexMap.

Everything else is pretty good.

rustc 1.37.0-nightly (8aa42ed7c 2019-06-24)

Fallback behavior for std environments

Right now, most of the data structures in the heapless crate simply return an error status if they have reached capacity.

When architecting a library that is intended to run in both std and no_std environments, it may be desirable to fall back to heap allocation in the environments where it is available. For example, if you can determine that 99.9% of cases require a vector of length 5 or shorter, you set that as the inline capacity, but for the remaining cases, in a std environment, you fall back to a heap-allocated data structure.

Are you aware of prior art for such a feature? Do you think this would be appropriate for the heapless crate, behind an std/alloc feature?

Context: unicode-org/icu4x#77

@Manishearth
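
One shape such a fallback could take, as a sketch (the type name and the inline capacity of 5 are only for illustration): stay inline for the common case and spill to the heap past the fixed capacity, compiled only when an allocator is available:

#[cfg(feature = "std")]
pub enum SmallBuffer<T> {
    Inline(heapless::Vec<T, heapless::consts::U5>), // hot path: no allocation
    Spilled(std::vec::Vec<T>),                      // overflow path: heap
}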

Is there a reason Send is not implemented for spsc::Producer<_,_,_,SingleCore>?

I got stuck trying to put a Producer in a Mutex<RefCell<Option<Producer<_, _, _, SingleCore>>>> because Send was not implemented for it.
After switching to MultiCore, everything worked.

This seems to be the code in question.

unsafe impl<'a, T, N, U> Send for Producer<'a, T, N, U>
where
    N: ArrayLength<T>,
    T: Send,
    U: sealed::Uxx,
{
}

Allow changing the `len` type of `Vec`

Queue supports switching out the type used for the head and tail indices, which can save memory (instead of 8 bytes for 2 usizes, I can choose to pay only 2 bytes for 2 u8s). It would be nice to have the same option for Vec, which currently always uses usize for its length.
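
A sketch of the request (shown in generic-array terms; the real implementation would use uninitialized storage): the length field's integer type becomes a parameter defaulting to usize:

use generic_array::{ArrayLength, GenericArray};

pub struct Vec<T, N, U = usize>
where
    N: ArrayLength<T>,
{
    buffer: GenericArray<T, N>,
    len: U, // e.g. u8 for capacities up to 255
}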

Explain recommended strategy for moving data interrupt <-> main

It would be nice to have a little more fleshed out info about the recommended ways to share data between interrupt handlers and the main program. With the addition of Qxx, I assume that this is to be preferred, but its size limitation makes it not always applicable if items come in bursts and handling them can take a while.

Using Queue in such a context still requires static mut and unsafe, and it also isn't explained whether split() can be called from the main program in a way that doesn't require interrupt::free to en/dequeue items.

Sorry, this is a bit rambly, but these are some of the things I was expecting to find in the docs, since this crate seems tailored for exactly these tasks.
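
For reference, the pattern usually shown for this (sketched here with a heapless 0.5-style API and example types) gives the queue a 'static home so that split() yields 'static halves; the Producer can then be moved into the interrupt handler while the Consumer stays in the main loop, and neither enqueue nor dequeue needs interrupt::free because each half has a single owner:

use heapless::consts::U16;
use heapless::spsc::{Consumer, Producer, Queue};

// The queue lives for the whole program, so both halves are 'static.
static mut QUEUE: Queue<u8, U16> = Queue(heapless::i::Queue::new());

fn split() -> (Producer<'static, u8, U16>, Consumer<'static, u8, U16>) {
    // Safety: call exactly once, before the interrupt that uses the producer is enabled.
    unsafe { QUEUE.split() }
}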

New release?

Just wondering when / if a new release is expected?

I need #154, and packaging with Cargo.toml links to github is problematic!

Thanks for the amazing crate!

Have BufferFullError give the value back?

Right now, when I give values to a RingBuffer via enqueue, I can't get them back if there's no room in the buffer. Is there any particular reason it works this way instead of returning the value as part of the BufferFullError? If not, would you be open to a patch that adds that functionality?
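
A minimal sketch of the requested behaviour, on a toy one-slot buffer: when the buffer is full, the error variant hands ownership of the value back to the caller:

struct OneSlot<T> {
    slot: Option<T>,
}

impl<T> OneSlot<T> {
    fn enqueue(&mut self, item: T) -> Result<(), T> {
        if self.slot.is_some() {
            Err(item) // buffer full: the caller keeps the value and can retry
        } else {
            self.slot = Some(item);
            Ok(())
        }
    }
}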

Queue Iter loops [nearly] endlessly (and probably exhibits unsound behaviour)

Reproduction:
Add this test to the queue tests:

    #[test]
    fn iter_endless() {
        let mut rb: Queue<i32, U4, u8> = Queue::u8();
        rb.enqueue(0).unwrap();
        for i in 0..300 {
            print!(
                "Iteration {}, {:?}, {:?}",
                i,
                rb.0.head.load_relaxed(),
                rb.0.tail.load_relaxed()
            );
            let mut items = rb.iter_mut();
            println!(" {} {}", items.index, items.len);
            assert_eq!(items.next(), Some(&mut 0));
            assert_eq!(items.next(), None);
            rb.dequeue().unwrap();
            rb.enqueue(0).unwrap();
        }
    }

Execute:

cargo test -- --nocapture

Output:

...
Iteration 253, 253, 254 0 1
Iteration 254, 254, 255 0 1
Iteration 255, 255, 0 0 18446744073709551361
...

Then stop cargo, since the Drop implementation relies on correct iterator behaviour.
(This is obviously "really bad", since it then tries to drop elements multiple times.)

How to move spsc Producer and Consumer into different RTIC tasks?

I like the possibility of splitting an spsc queue into Consumer and Producer halves. However, when using RTIC, I run into lifetime problems trying to do that. My goal is to create a queue, split it, and move the producer to one RTIC task and the consumer to another, so that the producing task only sees the enqueue function while the consuming task only sees the dequeue function.

This means that I need to move the producer and consumer to separate RTIC resources (i.e., fields in the Resources struct of RTIC). Something like this:

#[rtic::app(device = hal, monotonic = rtic::cyccnt::CYCCNT)]
const APP: () = {
    struct Resources {
        prod: MyProducer<'static>,
        cons: MyConsumer<'static>,
    }
    // ...
};

where MyProducer and MyConsumer are type-parameterized spsc Producer and Consumer, respectively.

These resources are then set up in the RTIC init task, something like this:

    let mut q: MyQueue = ... call spsc to create a new queue ...;
    let (p, c): (MyProducer, MyConsumer) = q.split();
    init::LateResources {
        prod: p,
        cons: c,
    }

Unfortunately, here the queue q is dropped, but the prod and cons resources live on and internally contain references to q. So the borrow checker prohibits this move of p to prod and c to cons:

error[E0597]: q does not live long enough
--> src/bin/main.rs:118:120
|
81 | #[rtic::app(device = hal, monotonic = rtic::cyccnt::CYCCNT)]
| - q dropped here while still borrowed
...
118 | let (p, c): (MyProducer, MyConsumer) = q.split();
| ^^^^^^^^^^^^^--------
| |
| borrowed value does not live long enough
| argument requires that q is borrowed for 'static

I guess that the queue should also be added to the resources, to make it live as long as prod and cons, but the initialization syntax then does not allow setting up prod and cons. Something like this isn't syntactically valid:

    init::LateResources {
        queue: ... call spsc to create a new queue ...,
        (prod, cons): queue.split(),
    }

As I don't see a way around this problem, I'll probably give up on splitting the queue, but it's a pity that this separation of Producer and Consumer apparently only almost works for me. Or is there a way, without using unsafe?
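
For what it's worth, one workaround sketch (assuming RTIC 0.5, where a static mut declared at the top of a task body is handed to the task as a &'static mut) is to give the queue its 'static home inside init itself:

#[init]
fn init(_cx: init::Context) -> init::LateResources {
    // RTIC reborrows this as &'static mut MyQueue, so split() yields 'static halves.
    static mut Q: MyQueue = Queue(heapless::i::Queue::new());

    let (p, c) = Q.split();
    init::LateResources { prod: p, cons: c }
}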

Defmt support

I recently discovered the defmt library and attempted to integrate it into one of my embedded projects.

Unfortunately it appears that heapless::Vec doesn't implement the defmt::Format trait yet, which blocks defmt integration in my downstream.

Feature request: support for defmt::Format in heapless::Vec

Heapless paper pdf?

Is it possible to get access to the associated paper, "Heapless: Dynamic Data Structures without Dynamic Heap Allocator for Rust", somewhere?

Question mark semantics for push on heapless vector.

I work on a crate that requires static allocation everywhere, and heapless has been great. I do have a lot of code that looks like:

let mut heapless_vec: Vec<i32, U32> = Vec::new();

heapless_vec.push(3).expect("Failed to push");

Now I'd like to get all these pushes onto the same level of error-forwarding semantics as the rest of my crate and avoid any possibility of a panic. Because push returns a Result<(), T> rather than a Result<(), &'static str>, we can't use the brilliant ? operator to pass errors from pushing up the error chain.

Right now in order to return the error I want, I have to do something along the lines of

let mut heapless_vec: Vec<i32, U32> = Vec::new();

match heapless_vec.push(3) {
    Ok(_) => (),
    Err(_) => return Err("failed to push onto heapless vec."),
}

This is really verbose. If we had another push method that returned a Result<(), &'static str>, then we could use ? like this:

fn push_to_vec() -> Result<Vec<i32, U32>, &'static str> {
    let mut heapless_vec: Vec<i32, U32> = Vec::new();

    heapless_vec.push_msg(3)?;
    Ok(heapless_vec)
}

This would represent a huge cleanup and would bring heapless more in line with encouraged Rust error-handling semantics.

What are your thoughts on this?
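
As an aside, the existing Result<(), T> can already be adapted to ? with map_err; a minimal sketch (error string chosen arbitrarily):

use heapless::{consts::U32, Vec};

fn push_to_vec() -> Result<Vec<i32, U32>, &'static str> {
    let mut heapless_vec: Vec<i32, U32> = Vec::new();
    heapless_vec
        .push(3)
        .map_err(|_| "failed to push onto heapless vec")?;
    Ok(heapless_vec)
}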

how to run test ?

My environment is:
OS: macOS 10.15
Rust: nightly-x86_64-apple-darwin

In the heapless directory, cargo build returns successfully.
But when I change to the tests directory and run cargo test, it reports errors:

error[E0432]: unresolved import `scoped_threadpool`
 --> tests/tsan.rs:9:5
  |
9 | use scoped_threadpool::Pool;
  |     ^^^^^^^^^^^^^^^^^ use of undeclared type or module `scoped_threadpool`

error[E0433]: failed to resolve: could not find `pool` in `heapless`
   --> tests/tsan.rs:242:15
    |
242 |     heapless::pool!(A: [u8; 8]);
    |               ^^^^ could not find `pool` in `heapless`

error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> tests/tsan.rs:244:5
    |
244 |     A::grow(unsafe { &mut M });
    |     ^ use of undeclared type or module `A`

error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> tests/tsan.rs:249:25
    |
249 |                 let a = A::alloc().unwrap();
    |                         ^ use of undeclared type or module `A`

error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> tests/tsan.rs:250:25
    |
250 |                 let b = A::alloc().unwrap();
    |                         ^ use of undeclared type or module `A`

error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> tests/tsan.rs:259:25
    |
259 |                 let a = A::alloc().unwrap();
    |                         ^ use of undeclared type or module `A`

error: unused import: `heapless::pool::singleton::Pool as _`
   --> tests/tsan.rs:238:9
    |
238 |     use heapless::pool::singleton::Pool as _;
    |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
note: the lint level is defined here
   --> tests/tsan.rs:3:9
    |
3   | #![deny(warnings)]
    |         ^^^^^^^^
    = note: `#[deny(unused_imports)]` implied by `#[deny(warnings)]`

error: cannot find macro `pool` in this scope
   --> src/pool/singleton.rs:309:9
    |
309 |         pool!(A: u8);
    |         ^^^^

error: cannot find macro `pool` in this scope
   --> src/pool/singleton.rs:347:9
    |
347 |         pool!(A: X);
    |         ^^^^

error: aborting due to 7 previous errors

Some errors have detailed explanations: E0432, E0433.
For more information about an error, try `rustc --explain E0432`.
error: could not compile `heapless`.

To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> src/pool/singleton.rs:312:17
    |
312 |         assert!(A::alloc().is_none());
    |                 ^ use of undeclared type or module `A`

error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> src/pool/singleton.rs:314:9
    |
314 |         A::grow(unsafe { &mut MEMORY });
    |         ^ use of undeclared type or module `A`

error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> src/pool/singleton.rs:316:17
    |
316 |         let x = A::alloc().unwrap().init(0);
    |                 ^ use of undeclared type or module `A`

error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> src/pool/singleton.rs:320:17
    |
320 |         assert!(A::alloc().is_none());
    |                 ^ use of undeclared type or module `A`

error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> src/pool/singleton.rs:325:21
    |
325 |         assert_eq!(*A::alloc().unwrap().init(1), 1);
    |                     ^ use of undeclared type or module `A`

error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> src/pool/singleton.rs:349:17
    |
349 |         let x = A::alloc().unwrap().init(X::new());
    |                 ^ use of undeclared type or module `A`

error[E0433]: failed to resolve: use of undeclared type or module `A`
   --> src/pool/singleton.rs:350:17
    |
350 |         let y = A::alloc().unwrap().init(X::new());
    |                 ^ use of undeclared type or module `A`

error: unused import: `Pool`
   --> src/pool/singleton.rs:302:30
    |
302 |     use super::{super::Node, Pool};
    |                              ^^^^
    |
note: the lint level is defined here
   --> src/lib.rs:76:9
    |
76  | #![deny(warnings)]
    |         ^^^^^^^^
    = note: `#[deny(unused_imports)]` implied by `#[deny(warnings)]`

error: aborting due to 10 previous errors

For more information about this error, try `rustc --explain E0433`.
error: could not compile `heapless`.

To learn more, run the command again with --verbose.

Suggestion: consider exposing the Uxx and XCore traits for SPSC queues

It is currently not possible to be generic over the index type or over the single-core/multi-core SPSC queue variants, because the traits Uxx and XCore are private and therefore can't be used as bounds on type parameters.

If the traits were sealed, so that the crate retained control over which types could implement Uxx and XCore, it would be possible to write more generic code using SPSC queues.
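
For concreteness, a sketch of the usual sealed-trait shape (the real Uxx methods are elided; the one shown is hypothetical): the trait is public and can be named in bounds, but only this crate can implement it because the Sealed supertrait is not exported:

mod sealed {
    pub trait Sealed {}
    impl Sealed for u8 {}
    impl Sealed for u16 {}
    impl Sealed for usize {}
}

pub trait Uxx: sealed::Sealed {
    fn into_usize(self) -> usize; // hypothetical method for illustration
}

impl Uxx for u8 {
    fn into_usize(self) -> usize { self as usize }
}
// ...and similarly for u16 and usize.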

Improve / refactor `RingBuffer` iterators

The implementations of ring_buffer::{Iter,IterMut} are pretty much copies of each other. The duplication can be eliminated by using a macro (see the implementation of the iterators in libcore). Also, both Iter and IterMut currently use the default methods provided by the Iterator trait; the Iterator implementations for those two types could override the defaults for better performance.
