I've implemented DX12 suballocation in wgpu, but I was unable to use this crate for it because of the differences between Vulkan's and DX12's memory allocation strategies. DX12 has you allocate an ID3D12Heap, which you then pass into CreatePlacedResource to get an ID3D12Resource that you can map/unmap/etc., whereas in Vulkan you can do all of that with just a DeviceMemory.
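To make the contrast concrete, here is a toy sketch of the two flows described above. Every type and method in it is a mock stand-in for illustration, not a real Vulkan or D3D12 binding:

```rust
// Toy contrast of the two allocation flows; all types here are mock
// stand-ins, not real Vulkan or D3D12 bindings.

/// Vulkan-style: the allocation itself is mappable.
struct DeviceMemory;

impl DeviceMemory {
    fn map(&self) -> &'static str {
        "mapped DeviceMemory directly"
    }
}

/// DX12-style: the heap is only a backing store and cannot be mapped itself.
struct Id3d12Heap {
    size: u64,
}

/// Mapping goes through a placed resource carved out of the heap.
struct Id3d12Resource;

impl Id3d12Heap {
    /// Mock of the CreatePlacedResource step: heap + offset -> resource.
    fn create_placed_resource(&self, offset: u64, size: u64) -> Id3d12Resource {
        assert!(offset + size <= self.size, "placed resource must fit in heap");
        Id3d12Resource
    }
}

impl Id3d12Resource {
    fn map(&self) -> &'static str {
        "mapped ID3D12Resource, not the heap"
    }
}

fn main() {
    // Vulkan: one object handles both allocation and mapping.
    let mem = DeviceMemory;
    assert_eq!(mem.map(), "mapped DeviceMemory directly");

    // DX12: two steps, and only the resource is mappable.
    let heap = Id3d12Heap { size: 4096 };
    let res = heap.create_placed_resource(0, 1024);
    assert_eq!(res.map(), "mapped ID3D12Resource, not the heap");
}
```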
I haven't dug into what kind of changes enabling the DX12 method would entail, beyond the surface-level change to the MemoryDevice trait, but my initial impression is that it's probably better to just use a separate DX12 allocator instead of trying to duct-tape DX12 support onto gpu-alloc.
I was wondering what your thoughts were on this, how difficult it might be, and whether you have any interest in including DX12 support in gpu-alloc?
As I understand it, gpu-alloc should not create ID3D12Resources.
It should only work at the level of ID3D12Heap. gpu-alloc's allocation routine would simply return an ID3D12Heap + offset + size for each allocation request, and the user would then create ID3D12Resources using CreatePlacedResource as needed, possibly suballocating from the returned range.
gpu-alloc's mapping API should be changed to reflect that mapping ranges of an ID3D12Heap is not allowed directly.
Or maybe allow it using internal overlapping ID3D12Resource objects?
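A sketch of what the proposed allocation API shape could look like. The types and the toy bump allocator below are hypothetical stand-ins for illustration, not gpu-alloc's real types; the point is only that the allocator hands back a heap plus a range and never creates a resource itself:

```rust
// Hypothetical sketch of the proposed API shape; Dx12Heap, Dx12MemoryBlock,
// and ToyAllocator are illustrative stand-ins, not gpu-alloc's real types.

/// Stand-in for an ID3D12Heap handle.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Dx12Heap(u32);

/// What a DX12-aware allocation could return: a heap plus a range inside it.
/// The caller would pass `heap` and `offset` to CreatePlacedResource itself.
struct Dx12MemoryBlock {
    heap: Dx12Heap,
    offset: u64,
    size: u64,
}

/// Toy bump allocator over a single heap, just to show the flow.
struct ToyAllocator {
    heap: Dx12Heap,
    cursor: u64,
    capacity: u64,
}

impl ToyAllocator {
    fn new(heap: Dx12Heap, capacity: u64) -> Self {
        Self { heap, cursor: 0, capacity }
    }

    /// Returns heap + offset + size; never creates an ID3D12Resource.
    fn alloc(&mut self, size: u64, align: u64) -> Option<Dx12MemoryBlock> {
        let offset = (self.cursor + align - 1) / align * align;
        if offset + size > self.capacity {
            return None;
        }
        self.cursor = offset + size;
        Some(Dx12MemoryBlock { heap: self.heap, offset, size })
    }
}

fn main() {
    let mut allocator = ToyAllocator::new(Dx12Heap(0), 1 << 20);

    let block = allocator.alloc(300, 256).unwrap();
    // The user would now call CreatePlacedResource(heap, block.offset, ...)
    // and map/unmap the resulting ID3D12Resource, not the heap.
    assert_eq!(block.offset, 0);
    assert_eq!(block.size, 300);

    let block2 = allocator.alloc(300, 256).unwrap();
    assert_eq!(block2.offset, 512); // aligned past the first 300-byte block
}
```

Mapping would then be the user's problem (via the placed resource), unless gpu-alloc kept internal overlapping resources for it as suggested above.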