On Friday, November 4, 2022 at 10:03 +0100, Christian König wrote:
> On 03.11.22 at 23:16, Nicolas Dufresne wrote:
> > [SNIP]
> >
> > Was there APIs suggested to actually make it manageable by userland to allocate
> > from the GPU? Yes, this what Linux Device Allocator idea is for. Is that API
> > ready, no.
>
> Well, that stuff is absolutely ready:
> https://elixir.bootlin.com/linux/latest/source/drivers/dma-buf/heaps/system_heap.c#L175
> What do you think I'm talking about all the time?

I'm aware of DMA Heap; it still has a few gaps, but those are unrelated to
coherency (we can discuss them offline, with Daniel S.). DMABuf Heap is used
in many forks by vendors in production. There is an upstream proposal for
GStreamer, but the review comments were never addressed; in short, it is
stalled and waiting for a volunteer. It may also be based on a very old
implementation of DMABuf Heap, so it certainly needs to be re-verified in
depth, as time has passed. (A minimal allocation sketch follows further down.)

https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1391

> DMA-buf has a lengthy section about CPU access to buffers and clearly
> documents how all of that is supposed to work:
> https://elixir.bootlin.com/linux/latest/source/drivers/dma-buf/dma-buf.c#L1160
> This includes bracketing of CPU access with dma_buf_begin_cpu_access()
> and dma_buf_end_cpu_access(), as well as transaction management between
> devices and the CPU and even implicit synchronization.
>
> This specification is then implemented by the different drivers
> including V4L2:
> https://elixir.bootlin.com/linux/latest/source/drivers/media/common/videobuf2/videobuf2-dma-sg.c#L473
>
> As well as the different DRM drivers:
> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L117
> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c#L234

I know; I implemented the userspace bracketing for this in GStreamer [1]
before DMABuf Heap was merged, and I was one of the reporters of the missing
bracketing in VB2. It was tested against the i915 driver. Note that this is
just a fallback, and the performance is terrible: memory exported by (at
least my old) i915 HW is not CPU cacheable. Still, between corrupted images
plus bad performance and just bad performance, we decided the latter was the
better option. When the DMABuf is backed by CPU-cacheable memory, performance
is great and the CPU fallback works well. Work is in progress to handle these
two cases better generically. For now the application sometimes needs to get
involved, but that only happens in embedded/controlled use cases. What
matters is that applications that need this can do it.

[1] https://gitlab.freedesktop.org/gstreamer/gstreamer/-/blob/main/subprojects/gst-plugins-base/gst-libs/gst/allocators/gstdmabuf.c
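For readers following along, the userspace side of that bracketing boils down
to the DMA_BUF_IOCTL_SYNC ioctl. A minimal sketch, assuming a valid DMA-buf
fd (the cpu_fill() helper name and the trimmed error handling are mine; the
ioctl and flags come from <linux/dma-buf.h>):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/dma-buf.h>

    /* Bracket a CPU write into an mmap()ed DMA-buf, mirroring what
     * dma_buf_begin_cpu_access()/dma_buf_end_cpu_access() do in the
     * kernel. */
    static int cpu_fill(int dmabuf_fd, size_t size)
    {
            struct dma_buf_sync sync = {
                    .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
            };
            void *map;

            map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                       dmabuf_fd, 0);
            if (map == MAP_FAILED)
                    return -1;

            /* begin_cpu_access: make the buffer coherent for the CPU */
            ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

            memset(map, 0, size); /* the actual CPU access */

            /* end_cpu_access: hand the buffer back to the devices */
            sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
            ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

            return munmap(map, size);
    }

When the exporter's memory is not CPU cacheable, the access itself is what
gets slow; the bracketing only guarantees correctness.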
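And since I mentioned DMABuf Heap above, for completeness this is roughly
what the userspace allocation path looks like (again only a sketch; the
alloc_system_heap() helper is made up, while the struct and ioctl come from
<linux/dma-heap.h>):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/dma-heap.h>

    /* Allocate `size` bytes from the system heap and return the
     * exported DMA-buf fd, or -1 on error. */
    static int alloc_system_heap(size_t size)
    {
            struct dma_heap_allocation_data data = {
                    .len = size,
                    .fd_flags = O_RDWR | O_CLOEXEC,
            };
            int heap_fd, ret;

            heap_fd = open("/dev/dma_heap/system", O_RDONLY | O_CLOEXEC);
            if (heap_fd < 0)
                    return -1;

            ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
            close(heap_fd);

            return ret < 0 ? -1 : (int)data.fd;
    }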
> This design was then used by us with various media players on different
> customer projects, including QNAP https://www.qnap.com/en/product/ts-877
> as well as the newest Tesla
> https://www.amd.com/en/products/embedded-automotive-solutions
>
> I won't go into the details here, but we are using exactly the approach
> I've outlined to let userspace control the DMA between the different
> devices in question. I'm one of the main designers of that and our
> multimedia and mesa team has up-streamed quite a number of changes for
> this project.
>
> I'm not that well into different ARM based solutions because we are just
> recently getting results that this starts to work with AMD GPUs, but I'm
> pretty sure that the design should be able to handle that as well.
>
> So we have clearly proven that this design works, even with special
> requirements which are way more complex than what we are discussing
> here. We had cases where we used GStreamer to feed DMA-buf handles into
> multiple devices with different format requirements and that seems to
> work fine.

Sounds like you have a love/hate relationship with GStreamer. Glad the
framework is working for you too. The framework has had bidirectional memory
allocation for over a decade, and it also has context sharing for stacks like
D3D11/12, GL, Vulkan, CUDA, etc. I honestly didn't understand what you were
complaining about. As a vendor, you can solve all of this in your BSP, but
translating BSP patches into generic, upstreamable features is not as simple.
The solution that works for a vendor is usually the most cost-effective one;
I'm sure Tesla and AMD Automotive are no exceptions.

> -----
> What is clearly a bug in the kernel is that we don't reject things which
> won't work correctly and this is what this patch here addresses. What we
> could talk about is backward compatibility for this patch, cause it
> might look like it breaks things which previously used to work at least
> partially.

I did read your code review (I don't class this discussion as a code review).
You were asked to address several issues, so clearly a v2 is needed.

1. Rob Clark stated that coherency is not homogeneous in many device drivers,
so your patch will yield many false positives. He asked whether you could
think of a "per attachment" solution (a rough sketch of what that could look
like is in the P.S. below), since splitting drivers didn't seem like the best
approach (and it would have to happen at the same time anyway).

2. Lucas raised a concern, unfortunately without proof yet, that this may
cause regressions in existing userspace applications. You stated that such
applications must be wrong, yet breaking them would violate Linus' rule #1.
I'm not taking part in that discussion; I tried to reproduce the problem, but
apart from writing an app designed to break, I haven't found anything yet.
Still, it feels like the way forward is to ensure that each exporter driver
can override this check when it has a good reason to do so.

As a third option, a safer approach would be to avoid reusing a flawed
mechanism to detect device coherency (or, rephrased, to accept that device
coherency isn't homogeneous) and to communicate this back another way. That
means patching all the drivers, but at least each driver maintainer will be
able to judge, with their HW knowledge, whether this is going to regress or
not.

When I first looked at this, I wanted to experiment with enabling the CPU
cache in the vb2 contiguous allocator. I was thinking that perhaps I could
have bracketing, and state changes triggered by the attach call, all based on
the device coherency, but now that I've seen Rob Clark's comment, I feel like
that approach is flawed.

happy v2,
Nicolas
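P.S. To make the "per attachment" idea from point 1 a bit more concrete, here
is the shape I have in mind. Purely illustrative, not a proposal: the
example_attach() exporter hook is made up, and dev_is_dma_coherent() is the
very signal Rob pointed out can be misleading on devices with mixed
coherency:

    #include <linux/dma-buf.h>
    #include <linux/dma-map-ops.h>
    #include <linux/errno.h>

    /* Exporter-side attach() hook for an exporter that (in this
     * sketch) hands out CPU-cached memory it cannot flush for the
     * device. Only the importers that do non-snooping DMA get
     * rejected, instead of the core rejecting a whole driver
     * pairing globally. */
    static int example_attach(struct dma_buf *dmabuf,
                              struct dma_buf_attachment *attach)
    {
            if (!dev_is_dma_coherent(attach->dev))
                    return -EOPNOTSUPP;

            return 0;
    }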