gpu-alloc


An implementation-agnostic memory allocator for Vulkan-like APIs.

This crate is intended to be used as part of safe API implementations.
Use with caution: there are unsafe functions all over the place.

Usage

Start by fetching DeviceProperties from the gpu-alloc-<backend> crate for your backend of choice.
Then create a GpuAllocator instance and use it for all device memory allocations.
GpuAllocator takes care of all the necessary bookkeeping, such as the memory object count limit, heap budget, and memory mapping.

Backends implementations

Backend-supporting crates should not depend on this crate.
Instead, they should depend on gpu-alloc-types, which is much more stable, allowing gpu-alloc to be upgraded without upgrading gpu-alloc-<backend>.
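As a sketch, a hypothetical backend crate's Cargo.toml would therefore depend only on the types crate, never on gpu-alloc itself (crate name and version numbers here are illustrative, not real releases):

```toml
[package]
name = "gpu-alloc-mybackend"   # hypothetical backend crate
version = "0.1.0"
edition = "2018"

[dependencies]
# Depend on the stable types crate, not on gpu-alloc directly,
# so users can upgrade gpu-alloc without touching this backend.
gpu-alloc-types = "0.2"
```

With this layout, a new gpu-alloc release that keeps the same gpu-alloc-types version works with every existing backend crate unchanged.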

Supported Rust Versions

The minimum supported version is 1.40. The current version is not guaranteed to build on Rust versions earlier than the minimum supported version.

The gpu-alloc-erupt crate requires Rust 1.48 or higher due to its dependency on the erupt crate.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contributions

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Donate

Become a patron

gpu-alloc's Issues

Allow adding MemoryBlocks manually to GpuAllocator

When importing external memory I need to use facilities outside of the memory device and the GPU allocator to create the device memory, generally via extensions like VK_EXT_external_memory_dma_buf.

Once I have imported the memory, I would like a way to have the allocator track it so that cleanup can manage freeing the allocations from external memory.

A way to know how many allocations are left would also be needed to use such an API safely.
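A minimal sketch of the kind of tracking the issue asks for. `ExternalRegistry` is a hypothetical type (not part of gpu-alloc), and plain `u64` handles stand in for real device memory objects:

```rust
// Hypothetical registry for externally imported memory objects.
// `u64` handles stand in for real device memory handles.
struct ExternalRegistry {
    imported: Vec<u64>,
}

impl ExternalRegistry {
    fn new() -> Self {
        Self { imported: Vec::new() }
    }

    /// Record an externally created memory object so cleanup can free it.
    fn register(&mut self, handle: u64) {
        self.imported.push(handle);
    }

    /// Forget a handle once the external memory has been freed.
    fn unregister(&mut self, handle: u64) -> bool {
        if let Some(pos) = self.imported.iter().position(|&h| h == handle) {
            self.imported.swap_remove(pos);
            true
        } else {
            false
        }
    }

    /// Number of external allocations still alive -- the count the issue
    /// says is needed to use such an API safely.
    fn remaining(&self) -> usize {
        self.imported.len()
    }
}

fn main() {
    let mut reg = ExternalRegistry::new();
    reg.register(1);
    reg.register(2);
    assert_eq!(reg.remaining(), 2);
    assert!(reg.unregister(1));
    assert_eq!(reg.remaining(), 1);
    println!("remaining: {}", reg.remaining());
}
```

A real integration would hold the backend's memory handle type instead of `u64`, and the allocator would consult `remaining()` during teardown.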

Failing assertion in `gpu_alloc::usage::priority`

This assertion fails on my Computer (RX 580, Arch Linux, Mesa 20.3.1):

assert!(
    flags.contains(Flags::HOST_VISIBLE)
        && !usage
            .intersects(UsageFlags::HOST_ACCESS | UsageFlags::UPLOAD | UsageFlags::DOWNLOAD)
);

This was caused by this commit: 6886925

Backtrace
$ env RUST_BACKTRACE=1 cargo run --bin erupt --features="gpu-alloc-erupt erupt"
    Finished dev [unoptimized + debuginfo] target(s) in 0.03s
     Running `/mnt/hdd1/Documents/rust/gpu-alloc/target/debug/erupt`
The application panicked (crashed).
Message:  assertion failed: flags.contains(Flags::HOST_VISIBLE) &&
    !usage.intersects(UsageFlags::HOST_ACCESS | UsageFlags::UPLOAD |
                          UsageFlags::DOWNLOAD)
Location: gpu-alloc/src/usage.rs:150

  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ BACKTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
                                ⋮ 15 frames hidden ⋮                              
  16: gpu_alloc::usage::priority::h7613748b55c19d03
      at /mnt/hdd1/Documents/rust/gpu-alloc/gpu-alloc/src/usage.rs:150
  17: gpu_alloc::usage::one_usage::{{closure}}::h2926f6c38f033e4c
      at /mnt/hdd1/Documents/rust/gpu-alloc/gpu-alloc/src/usage.rs:113
  18: core::slice::<impl [T]>::sort_unstable_by_key::{{closure}}::hde2259978c8f82ce
      at /home/friz64/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/mod.rs:2033
  19: core::slice::sort::shift_tail::habf89cce343df245
      at /home/friz64/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/sort.rs:100
  20: core::slice::sort::insertion_sort::h4ca355d7a4c5c56f
      at /home/friz64/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/sort.rs:177
  21: core::slice::sort::recurse::hbf43938c295355b9
      at /home/friz64/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/sort.rs:687
  22: core::slice::sort::quicksort::he11b2b96fd78627b
      at /home/friz64/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/sort.rs:768
  23: core::slice::<impl [T]>::sort_unstable_by_key::h75b5376bed40d40f
      at /home/friz64/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/mod.rs:2033
  24: gpu_alloc::usage::one_usage::h1f726666b67db449
      at /mnt/hdd1/Documents/rust/gpu-alloc/gpu-alloc/src/usage.rs:112
  25: gpu_alloc::usage::MemoryForUsage::new::hcfe77697a0a87704
      at /mnt/hdd1/Documents/rust/gpu-alloc/gpu-alloc/src/usage.rs:81
  26: gpu_alloc::allocator::GpuAllocator<M>::new::h7746728400ee1719
      at /mnt/hdd1/Documents/rust/gpu-alloc/gpu-alloc/src/allocator.rs:96
  27: erupt::main::h7753fd625228359e
      at /mnt/hdd1/Documents/rust/gpu-alloc/examples/src/erupt.rs:47
  28: core::ops::function::FnOnce::call_once::h2613c75ef8235c7e
      at /home/friz64/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:227
                                ⋮ 11 frames hidden ⋮                              

Run with COLORBT_SHOW_HIDDEN=1 environment variable to disable frame filtering.
Run with RUST_BACKTRACE=full to include source snippets.

vulkaninfo

Mutable self for memory mapping

It used to be the case that mapping memory worked with &self. This changed with gfx-rs/gfx#3551, according to Vulkan's synchronization semantics. Now, working with gpu-alloc is more difficult. Perhaps, the MemoryDevice trait here can be changed to work with &mut Memory instead when mapping/unmapping?

Linear allocator doesn't re-use memory

Here is the relevant code:

match &mut self.ready {
    Some(ready) if fits(self.chunk_size, ready.offset, size, align_mask) => ...,
    ready => {
        ...
        let mut memory = device.allocate_memory(self.chunk_size, self.memory_type, flags)?;

This is not desired. The expectation is that both the linear and buddy allocators re-use memory.

It causes gfx-rs/wgpu#1242 on our side.
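The expected behavior can be sketched with a toy bump allocator that rewinds its chunk once every allocation in it has been freed. This is a deliberate simplification of what a real linear allocator does; `ToyLinear` and its fields are invented for illustration:

```rust
// Toy linear (bump) allocator over one fixed chunk. Once every
// allocation in the chunk is freed, the chunk is rewound and re-used
// instead of requesting fresh memory -- the behavior the issue expects.
struct ToyLinear {
    chunk_size: u64,
    offset: u64,           // bump pointer within the current chunk
    live: u32,             // allocations still outstanding
    chunks_allocated: u32, // how many chunks we asked the "device" for
}

impl ToyLinear {
    fn new(chunk_size: u64) -> Self {
        Self { chunk_size, offset: 0, live: 0, chunks_allocated: 1 }
    }

    fn alloc(&mut self, size: u64) -> u64 {
        if self.offset + size > self.chunk_size {
            // Chunk exhausted: a real allocator would start a new chunk here.
            self.chunks_allocated += 1;
            self.offset = 0;
        }
        let at = self.offset;
        self.offset += size;
        self.live += 1;
        at
    }

    fn dealloc(&mut self) {
        self.live -= 1;
        if self.live == 0 {
            // All allocations returned: rewind and re-use the chunk.
            self.offset = 0;
        }
    }
}

fn main() {
    let mut a = ToyLinear::new(1024);
    for _ in 0..100 {
        a.alloc(512);
        a.alloc(512);
        a.dealloc();
        a.dealloc();
    }
    // Steady alloc/free traffic never forces a new chunk.
    assert_eq!(a.chunks_allocated, 1);
    println!("chunks allocated: {}", a.chunks_allocated);
}
```

The bug report amounts to the real implementation taking the `allocate_memory` path even when a previously filled chunk has been fully freed.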

Could not find `instrument` in `tracing`

error[E0433]: failed to resolve: could not find instrument in tracing
--> /home/kvark/.cargo/git/checkouts/gpu-alloc-b89b27c43ff2846f/d07be73/gpu-alloc/src/allocator.rs:68:46
|
68 | #[cfg_attr(feature = "tracing", tracing::instrument)]

Disregarded semver

The last release, gpu-alloc 0.3.1, introduced a breaking change and therefore did not adhere to semantic versioning.

As a result, cargo automatically picks up the new version (once Cargo.lock is updated) and previously working code breaks.

The breaking change lies in fixing a typo (treshold => threshold) in src/config.rs.

wgpu-core, which uses gpu-alloc, no longer compiles (see gfx-rs/wgpu#1285).

AshMemoryDevice could avoid unsafe code during construction

At the moment the following is done:

#[repr(transparent)]
pub struct AshMemoryDevice {
    device: Device,
}

impl AshMemoryDevice {
    pub fn wrap(device: &Device) -> &Self {
        unsafe {
            // Safe because `Self` is `repr(transparent)`
            // with its only field being `Device`.
            &*(device as *const Device as *const Self)
        }
    }
}

This works but requires what I feel is unnecessary unsafe code.

I imagine we could change AshMemoryDevice to the following if we use a lifetime:

pub struct AshMemoryDevice<'a> {
    device: &'a Device,
}

impl AshMemoryDevice<'_> {
    pub fn wrap(device: &Device) -> Self {
        Self { device }
    }
}
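Both patterns can be demonstrated side by side with a toy `Device` stand-in (this `Device` is a placeholder, not the ash type, so the example is self-contained):

```rust
// Toy stand-in for `ash::Device`.
struct Device(u32);

// Pattern 1: zero-cost reference cast, relying on `repr(transparent)`.
#[repr(transparent)]
struct TransparentWrapper {
    device: Device,
}

impl TransparentWrapper {
    fn wrap(device: &Device) -> &Self {
        // Sound because `TransparentWrapper` is `repr(transparent)`
        // over its only field, `Device`.
        unsafe { &*(device as *const Device as *const Self) }
    }
}

// Pattern 2: no unsafe code, at the cost of a lifetime parameter
// on the wrapper type, as the issue proposes.
struct BorrowWrapper<'a> {
    device: &'a Device,
}

impl<'a> BorrowWrapper<'a> {
    fn wrap(device: &'a Device) -> Self {
        Self { device }
    }
}

fn main() {
    let dev = Device(42);
    let t = TransparentWrapper::wrap(&dev);
    let b = BorrowWrapper::wrap(&dev);
    assert_eq!(t.device.0, 42);
    assert_eq!(b.device.0, 42);
}
```

The trade-off: the transparent cast keeps `&AshMemoryDevice` usable anywhere `&Device` is available with no new lifetime in signatures, while the borrowed wrapper avoids unsafe entirely but threads `'a` through every API that mentions it.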

Update gpu-alloc-ash to be compatible with ash 0.37.0

I'm using gpu-alloc for the first time, and it seems gpu-alloc-ash needs to be updated to support ash 0.37.0. The device_properties function gives the following error:

mismatched types
expected reference `&ash::instance::Instance`
found reference `&ash::Instance`
perhaps two different versions of crate `ash` are being used?

(and a similar error for the physical_device parameter). Indeed, I can see that the last release supported up to ash 0.36.0, which does have those types.

Align mask?

From the implementation perspective, I can see how an align mask would be more convenient. But from the API perspective in the Request, it's not very nice. For one, it's not clear whether the mask is supposed to be `alignment - 1` or the `!(alignment - 1)` from the documentation. Would it be simpler to just receive the alignment and document that it has to be a power of 2?
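For reference, the common convention (an assumption about intent here, matching how such masks are typically used) is that the mask is `alignment - 1` for a power-of-two alignment, which makes rounding an offset up a single and/add:

```rust
// Conventional align-mask arithmetic: for a power-of-two `alignment`,
// the mask is `alignment - 1`, and rounding `offset` up to the next
// aligned value is `(offset + mask) & !mask`.
fn align_mask(alignment: u64) -> u64 {
    debug_assert!(alignment.is_power_of_two());
    alignment - 1
}

fn align_up(offset: u64, alignment: u64) -> u64 {
    let mask = align_mask(alignment);
    (offset + mask) & !mask
}

fn main() {
    assert_eq!(align_mask(256), 0xFF);
    assert_eq!(align_up(0, 256), 0);
    assert_eq!(align_up(1, 256), 256);
    assert_eq!(align_up(300, 256), 512);
    println!("ok");
}
```

Taking a plain alignment in the API and deriving the mask internally, as the issue suggests, would remove the ambiguity at the cost of one subtraction.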

Handle `VkPhysicalDeviceLimits::bufferImageGranularity`

The Vulkan specification defines VkPhysicalDeviceLimits::bufferImageGranularity, which allows drivers to require that buffers and images be placed in separate memory pages. This is useful at least on Nvidia, where I've seen buffer memory get garbled when it shared a page with a sampleable image.

The Config struct doesn't have anything resembling this granularity, and wgpu-core doesn't seem to handle this limit at all. It really should be handled somewhere, otherwise there will be weird bugs.
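A sketch of what honoring the limit could look like: when the previous resource in a memory block is of a different kind (buffer vs. image), the next offset is padded out to the granularity so the two never share a page. The function and enum names are invented for illustration, not gpu-alloc API:

```rust
// Sketch of honoring bufferImageGranularity. `granularity` is a power
// of two per the Vulkan spec; `align` is the resource's own alignment.
#[derive(PartialEq, Clone, Copy)]
enum ResourceKind {
    Buffer,
    Image,
}

fn next_offset(
    prev_end: u64,
    prev_kind: ResourceKind,
    kind: ResourceKind,
    align: u64,
    granularity: u64,
) -> u64 {
    let required = if kind != prev_kind {
        // Different kinds: pad to the larger of the resource alignment
        // and the buffer/image granularity.
        align.max(granularity)
    } else {
        align
    };
    let mask = required - 1;
    (prev_end + mask) & !mask
}

fn main() {
    // Image follows a buffer: padded out to the 4 KiB granularity.
    assert_eq!(
        next_offset(100, ResourceKind::Buffer, ResourceKind::Image, 256, 4096),
        4096
    );
    // Buffer follows a buffer: only the 256-byte alignment applies.
    assert_eq!(
        next_offset(100, ResourceKind::Buffer, ResourceKind::Buffer, 256, 4096),
        256
    );
}
```

A real fix would also need the allocator to remember the kind of the neighboring allocations in each block, which is why this limit has to live inside the allocator rather than in the caller.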

Thoughts on potential dx12 support?

I've implemented dx12 suballocations in wgpu, but I was unable to use this crate for it due to the differences between vk and dx12's memory allocation strategies. Dx12 has you allocate an ID3D12Heap, but then you pass that into CreatePlacedResource which gives you an ID3D12Resource that you can then map/unmap/etc, vs Vulkan where you can do all that with just a DeviceMemory.

I haven't dug into what kind of changes enabling the dx12 method would entail, beyond the surface level of changing the MemoryDevice trait, but my initial impression is that it's probably better to just use a separate dx12 allocator instead of trying to duct tape dx12 support onto gpu-alloc.

I was wondering what your thoughts were on this, how difficult it might be, and if you even had any interest in including dx12 support in gpu-alloc?

Define and enforce MSRV

Looks like the crate depends on fairly fresh compiler features:

error[E0277]: arrays only have std trait implementations for lengths 0..=32
--> /Users/dmalyshau/.cargo/git/checkouts/gpu-alloc-b89b27c43ff2846f/0bdf9ea/gpu-alloc/src/usage.rs:51:5
|
51 | usages: [MemoryForOneUsage; 64],
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait std::array::LengthAtMost32 is not implemented for [usage::MemoryForOneUsage; 64]
|
= note: required because of the requirements on the impl of std::fmt::Debug for [usage::MemoryForOneUsage; 64]
= note: required because of the requirements on the impl of std::fmt::Debug for &[usage::MemoryForOneUsage; 64]
= note: required for the cast to the object type dyn std::fmt::Debug
= note: this error originates in a derive macro (in Nightly builds, run with -Z macro-backtrace for more info)

BuddyAllocator::pairs Grows Without Bound with Steady Allocation/Deallocation

Related

This was initially reported as BVE-Reborn/rend3#167.

Description

rend3, in the course of normal operation, allocates/deallocates 5 buffers per frame from wgpu. If the camera does not move, these buffers will be the exact same size.

Especially when running at high frame rates, this causes BuddyAllocator::pairs to slowly increase in size to the tune of 1MB every couple seconds (depending on framerate).

This definitely feels unexpected.

More Information

If you need more information, please reach out, I'm not sure exactly what would be of most help.

Repro

This repros on vulkan on windows.

git clone https://github.com/BVE-Reborn/rend3.git
cd rend3
cargo run --bin scene-viewer --release -- -m gpu

Watch the memory usage of the application slowly rise.

EruptMemoryDevice missing `Clone` impl

I get `trait bound erupt::DeviceLoader: std::clone::Clone is not satisfied` when trying to use the following:

let block = unsafe { allocator.alloc(EruptMemoryDevice::wrap(&core.device), request)? };

I'm guessing the M: Clone trait bound was added to alloc(...) after the erupt backend was written?
(More context if you need it here)

Assertion failed: Greater == Equal

Created from gfx-rs/wgpu#1364

Panic Payload: "assertion failed: (left == right)\n left: Greater,\n right: Equal"
PanicInfo: panicked at 'assertion failed: (left == right)
left: Greater,
right: Equal', /Users/bronson/.cargo/registry/src/github.com-1ecc6299db9ec823/gpu-alloc-0.4.4/src/freelist.rs:201:9

Repro steps:

# download and unpack trace-cross.zip
# add "]" to the end
# s/Metal/Vulkan/g (assuming you are on Vulkan?)
git clone https://github.com/gfx-rs/wgpu
cd wgpu/player
cargo run --features winit,cross -- <path to unpacked trace>

Edit: actually, the repro case doesn't work for me.

Sessioned reads and writes

It would be very useful to do multiple reads/writes in a single mapping session. Currently, write_bytes and read_bytes are limited to exactly one request each. Case in point: laying out texture data into a staging buffer needs to be done row by row to respect the row alignment of the target API.
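A hypothetical session API (the `MappingSession` name and shape are invented, not part of gpu-alloc) might look like this, sketched over a plain byte buffer standing in for host-visible mapped memory:

```rust
// Hypothetical mapping session: map once, perform many writes, then
// unmap when the session ends. A mutable byte slice stands in for
// real mapped device memory.
struct MappingSession<'a> {
    mapped: &'a mut [u8],
}

impl<'a> MappingSession<'a> {
    fn write_bytes(&mut self, offset: usize, data: &[u8]) {
        self.mapped[offset..offset + data.len()].copy_from_slice(data);
    }
}

fn main() {
    let mut staging = vec![0u8; 64];
    {
        let mut session = MappingSession { mapped: &mut staging };
        // Lay out two 4-byte "rows" with a 16-byte row pitch, as when
        // respecting a target API's row alignment -- many writes, one map.
        session.write_bytes(0, &[1, 2, 3, 4]);
        session.write_bytes(16, &[5, 6, 7, 8]);
    } // session ends here; a real API would unmap the memory now

    assert_eq!(&staging[0..4], &[1, 2, 3, 4]);
    assert_eq!(&staging[16..20], &[5, 6, 7, 8]);
}
```

Compared to one `write_bytes` call per request, a session amortizes the map/unmap (and any flush) over all the row copies.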

Merge Strategy into Request

I think something like this would make sense:

#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub enum RequestStrategy {
    Implicit(Dedicated),
    Explicit(Strategy),
}

#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct Request {
    pub size: u64,
    pub align: u64,
    pub memory_types: u32,
    pub usage: UsageFlags,
    pub strategy: RequestStrategy,
}

That way you wouldn't have to specify the usage flags and such when you already know the strategy, and there'd be no need for an extra alloc_with_strategy method.

the trait bound `&mut u32: Value` is not satisfied

I'm using this crate via a dependency chain (iced -> iced_wgpu -> wgpu -> wgpu-core -> gpu-alloc), and during compilation of my project the following errors are thrown in gpu-alloc:

   Compiling gpu-alloc v0.3.0
error[E0277]: the trait bound `&mut u32: Value` is not satisfied
   --> C:\Users\boop\.cargo\registry\src\github.com-1ecc6299db9ec823\gpu-alloc-0.3.0\src\buddy.rs:317:37
    |
317 |     #[cfg_attr(feature = "tracing", tracing::instrument(skip(self, device)))]
    |                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |                                     |
    |                                     the trait `Value` is not implemented for `&mut u32`
    |                                     help: consider removing the leading `&`-reference
    |
    = help: the following implementations were found:
              <u32 as Value>
    = note: `Value` is implemented for `&u32`, but not for `&mut u32`
    = note: required for the cast to the object type `dyn Value`
    = note: this error originates in the macro `$crate::valueset` (in Nightly builds, run with -Z macro-backtrace for more info)

error[E0277]: the trait bound `&mut u32: Value` is not satisfied
   --> C:\Users\boop\.cargo\registry\src\github.com-1ecc6299db9ec823\gpu-alloc-0.3.0\src\buddy.rs:422:37
    |
422 |     #[cfg_attr(feature = "tracing", tracing::instrument(skip(self, device)))]
    |                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |                                     |
    |                                     the trait `Value` is not implemented for `&mut u32`
    |                                     help: consider removing the leading `&`-reference
    |
    = help: the following implementations were found:
              <u32 as Value>
    = note: `Value` is implemented for `&u32`, but not for `&mut u32`
    = note: required for the cast to the object type `dyn Value`
    = note: this error originates in the macro `$crate::valueset` (in Nightly builds, run with -Z macro-backtrace for more info)

error[E0277]: the trait bound `&mut u32: Value` is not satisfied
  --> C:\Users\boop\.cargo\registry\src\github.com-1ecc6299db9ec823\gpu-alloc-0.3.0\src\linear.rs:83:37
   |
83 |     #[cfg_attr(feature = "tracing", tracing::instrument(skip(self, device)))]
   |                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |                                     |
   |                                     the trait `Value` is not implemented for `&mut u32`
   |                                     help: consider removing the leading `&`-reference
   |
   = help: the following implementations were found:
             <u32 as Value>
   = note: `Value` is implemented for `&u32`, but not for `&mut u32`
   = note: required for the cast to the object type `dyn Value`
   = note: this error originates in the macro `$crate::valueset` (in Nightly builds, run with -Z macro-backtrace for more info)

error[E0277]: the trait bound `&mut u32: Value` is not satisfied
   --> C:\Users\boop\.cargo\registry\src\github.com-1ecc6299db9ec823\gpu-alloc-0.3.0\src\linear.rs:179:37
    |
179 |     #[cfg_attr(feature = "tracing", tracing::instrument(skip(self, device)))]
    |                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |                                     |
    |                                     the trait `Value` is not implemented for `&mut u32`
    |                                     help: consider removing the leading `&`-reference
    |
    = help: the following implementations were found:
              <u32 as Value>
    = note: `Value` is implemented for `&u32`, but not for `&mut u32`
    = note: required for the cast to the object type `dyn Value`
    = note: this error originates in the macro `$crate::valueset` (in Nightly builds, run with -Z macro-backtrace for more info)

For more information about this error, try `rustc --explain E0277`.        
error: could not compile `gpu-alloc` due to 4 previous errors
warning: build failed, waiting for other jobs to finish...
error: build failed

If there's an obvious fix, I'm sorry, I'm newer to rust, so...

LICENSE files in package subdirs

Background:

Hello, the Fuchsia project vendors crates from crates.io, and in order to do so we require explicit license files alongside the source code. Here is the policy: https://fuchsia.dev/fuchsia-src/contribute/governance/policy/open-source-licensing-policies?hl=en#licenses_and_tracking . In particular, reading the SPDX package/license field from the crate's Cargo.toml is not good enough.

Request:

Could you please add LICENSE-* files to the package subdirectories, specifically gpu-alloc and types (although you might as well do it for all package dirs that are uploaded to crates.io)?

We are currently using gpu-alloc 0.5.3 and gpu-alloc-types 0.2.0. Since crates.io doesn't allow re-uploading the same version with a modified crate, it seems the right thing to do is to upload new crates with versions 0.5.4 and 0.2.1 respectively.

I'd appreciate it if you're able to do this, since I have to file similar issues for many other crate dependencies. But if you don't have the bandwidth to address this issue, please let me know and I'll find time to submit a pull request (although, of course, I won't be able to upload anything to crates.io).

Option to silence errors sent to stderr

I have written a program that uses wgpu, and if a panic occurs I get a lot of error messages from gpu-alloc, due to this line:

eprintln!("Memory block wasn't deallocated")

Due to the high amount of messages that gpu-alloc prints, trying to understand what caused the panic in the first place can be tricky. This can be especially annoying when the console output moves previous messages so far up that they no longer show.

Do you think it might be possible to add some option to suppress or remove that output?

Leaked memory on v0.4

Updating wgpu from gpu-alloc 2cd1ad6 to 0.4 started to result in vulkan validation errors, see gfx-rs/wgpu#1404. It can be reproduced by just checking out wgpu and running cargo test (on Linux with Vulkan support). Any idea what this might be caused by?

Too hungry for host-visible memory

Getting this validation error:

VUID-vkAllocateMemory-pAllocateInfo-01713(ERROR / SPEC): msgNum: -375211665 - Validation Error: [ VUID-vkAllocateMemory-pAllocateInfo-01713 ] Object 0: handle = 0x2214da35ba0, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xe9a2b96f | vkAllocateMemory: attempting to allocate 609460056 bytes from heap 0,but size of that heap is only 268435456 bytes. The Vulkan spec states: pAllocateInfo->allocationSize must be less than or equal to VkPhysicalDeviceMemoryProperties::memoryHeaps[memindex].size where memindex = VkPhysicalDeviceMemoryProperties::memoryTypes[pAllocateInfo->memoryTypeIndex].heapIndex as returned by vkGetPhysicalDeviceMemoryProperties for the VkPhysicalDevice that device was created from (https://vulkan.lunarg.com/doc/view/1.3.236.0/windows/1.3-extensions/vkspec.html#VUID-vkAllocateMemory-pAllocateInfo-01713)

This is AMD, and heap 0 only has 256 MB of GPU memory visible to the host.

The requested usage flags are: FAST_DEVICE_ACCESS | DEVICE_ADDRESS.

Here is what the documentation about HOST_ACCESS says:

Memory will be accessed from host. This flag guarantees that host memory operations will be available. Otherwise the implementation is encouraged to use non-host-accessible memory.

Since I'm not requesting host access, I don't expect this heap to be used for such allocations at all.

Crash buddy::PairState::replace_next

Crashed Thread:        0  Dispatch queue: com.apple.main-thread

Exception Type:        EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes:       0x0000000000000001, 0x0000000000000000
Exception Note:        EXC_CORPSE_NOTIFY

Termination Signal:    Illegal instruction: 4
Termination Reason:    Namespace SIGNAL, Code 0x4
Terminating Process:   exc handler [1338]

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   cube                          	0x000000010e447014 core::hint::unreachable_unchecked::h7e948729dce20eef + 4 (hint.rs:51)
1   cube                          	0x000000010e44503b gpu_alloc::buddy::PairState::replace_next::hd644150acb99daa5 + 75 (buddy.rs:37)
2   cube                          	0x000000010e4453ff gpu_alloc::buddy::Size::acquire::h27b3e461dc60d6b9 + 383 (buddy.rs:159)
3   cube                          	0x000000010e209730 gpu_alloc::buddy::BuddyAllocator$LT$M$GT$::alloc::hb0998c057cbd506a + 944 (buddy.rs:345)
4   cube                          	0x000000010e1b7baa gpu_alloc::allocator::GpuAllocator$LT$M$GT$::alloc_internal::h33ffc27b6ac6d927 + 3354 (allocator.rs:317)
5   cube                          	0x000000010e1b7e34 gpu_alloc::allocator::GpuAllocator$LT$M$GT$::alloc_with_strategy::hd0b8beb74e7f121c + 68 (allocator.rs:140)
6   cube                          	0x000000010e01509a wgpu_core::device::alloc::MemoryAllocator$LT$B$GT$::allocate::h371a98d8c4aa3e1a + 362 (alloc.rs:80)
7   cube                          	0x000000010e185698 wgpu_core::device::Device$LT$B$GT$::create_buffer::hed67cbbc7728bce3 + 1944 (mod.rs:472)
8   cube                          	0x000000010e04efa6 wgpu_core::device::_$LT$impl$u20$wgpu_core..hub..Global$LT$G$GT$$GT$::device_create_buffer::hd87c51df08fd4f54 + 3270 (mod.rs:1002)
9   cube                          	0x000000010e1b5075 _$LT$wgpu..backend..direct..Context$u20$as$u20$wgpu..Context$GT$::device_create_buffer::hb23665a174b9be64 + 261 (direct.rs:894)
10  cube                          	0x000000010e1b0263 _$LT$wgpu..Device$u20$as$u20$wgpu..util..DeviceExt$GT$::create_buffer_init::h90c5b4529b02d706 + 387 (mod.rs:93)

Explicit allocator kind request

One thing that we had in gfx-memory that worked exceptionally well was that the user explicitly selected what kind of allocator was needed: https://docs.rs/gfx-memory/0.2.2/gfx_memory/struct.Heaps.html#method.allocate

So wgpu knows precisely when it needs a generic allocation versus a linear allocation. With rendy-memory, as well as gpu-alloc as its successor, this becomes a guessing game: what UsageFlags do I need to specify for the allocator to pick the proper sub-allocator kind? I believe this is a totally useless indirection. The API should be explicit about linear/general/dedicated allocation, and only fall back to dedicated if the chosen one fails.

As it stands now, I consider this a blocker for wgpu integration. It's very important for us to know what sub-allocator is involved.
