Apologies for letting so much time pass since the last revision!

The point of this series is to add both deferred-freeing logic and a
page pool to the DMA-BUF system heap to improve allocation performance.
This is desirable because the combination of deferred freeing and a
page pool allows us to offload page-zeroing out of the allocation hot
path. This was done originally with ION, and this patch series allows
the DMA-BUF system heap to match ION's system heap allocation
performance in a simple microbenchmark [1] (ION re-added to the kernel
for comparison, running on an x86 vm image):

./dmabuf-heap-bench -i 0 1 system
Testing dmabuf system vs ion heaptype 0 (flags: 0x1)
---------------------------------------------
dmabuf heap: alloc 4096 bytes 5000 times in 88092722 ns          17618 ns/call
ion heap:    alloc 4096 bytes 5000 times in 103043547 ns         20608 ns/call
dmabuf heap: alloc 1048576 bytes 5000 times in 252416639 ns      50483 ns/call
ion heap:    alloc 1048576 bytes 5000 times in 358190744 ns      71638 ns/call
dmabuf heap: alloc 8388608 bytes 5000 times in 2854351310 ns     570870 ns/call
ion heap:    alloc 8388608 bytes 5000 times in 3676328905 ns     735265 ns/call
dmabuf heap: alloc 33554432 bytes 5000 times in 13208119197 ns   2641623 ns/call
ion heap:    alloc 33554432 bytes 5000 times in 15306975287 ns   3061395 ns/call

Daniel didn't like earlier attempts to re-use the network page-pool
code to achieve this, and suggested the ttm_pool be used instead, so
this series pulls the page pool functionality out of the ttm_pool logic
and creates a generic page pool that can be shared.

New in v7 (never submitted):
* Reworked how I integrated the page pool with the ttm logic, using
  container_of() to avoid allocating structures per page.

New in v8:
* Due to the dual-license requirement from Christian König, I
  completely threw away the earlier shared page pool implementation
  (which had evolved from ION code) and rewrote it using just the
  ttm_pool logic. My apologies for any previously reviewed issues
  that I've reintroduced in doing so.

Input would be greatly appreciated. Testing as well, as I don't have
any development hardware that utilizes the ttm pool.
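To make the approach concrete for reviewers, here is a rough sketch of
how the page pool and deferred freeing fit together. To be clear, this
is not the series' actual API; all of the names below (pool_sketch,
pool_sketch_zero_worker, etc.) are made up purely for illustration:

#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* Illustrative only: freed pages are queued and zeroed by a worker,
 * so the allocation fast path just pulls an already-zeroed page off
 * a list instead of paying for clear_highpage() inline.
 */
struct pool_sketch {
	spinlock_t lock;
	struct list_head zeroed;	/* zeroed pages, ready to hand out */
	struct list_head pending;	/* freed pages, not yet zeroed */
	struct work_struct work;
};

static struct page *pool_sketch_alloc(struct pool_sketch *pool)
{
	struct page *p = NULL;

	spin_lock(&pool->lock);
	if (!list_empty(&pool->zeroed)) {
		p = list_first_entry(&pool->zeroed, struct page, lru);
		list_del(&p->lru);
	}
	spin_unlock(&pool->lock);

	/* Slow path: pool is empty, so fall back to the buddy
	 * allocator and eat the zeroing cost inline.
	 */
	if (!p)
		p = alloc_page(GFP_KERNEL | __GFP_ZERO);
	return p;
}

static void pool_sketch_free(struct pool_sketch *pool, struct page *p)
{
	spin_lock(&pool->lock);
	list_add_tail(&p->lru, &pool->pending);
	spin_unlock(&pool->lock);
	schedule_work(&pool->work);	/* zero it later, off the hot path */
}

static void pool_sketch_zero_worker(struct work_struct *work)
{
	struct pool_sketch *pool = container_of(work, struct pool_sketch, work);

	spin_lock(&pool->lock);
	while (!list_empty(&pool->pending)) {
		struct page *p = list_first_entry(&pool->pending,
						  struct page, lru);

		list_del(&p->lru);
		spin_unlock(&pool->lock);

		clear_highpage(p);	/* the expensive part, now deferred */

		spin_lock(&pool->lock);
		list_add_tail(&p->lru, &pool->zeroed);
	}
	spin_unlock(&pool->lock);
}

The key point is that clear_highpage() runs in the worker, so a
warm-pool allocation avoids the zeroing cost entirely. The sketch
leaves out things the real series has to handle (shrinker integration,
per-order pools, accounting), so treat it as the shape of the idea
rather than the implementation.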
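Along the same lines, the container_of() rework mentioned under v7
boils down to embedding the generic pool struct in the ttm-side type,
so a callback handed the generic pool pointer can recover the
ttm-specific state without allocating any tracking structure per page.
Again a rough sketch with guessed struct and field names, not the
actual patch:

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/list.h>

struct generic_pool_sketch {
	struct list_head pages;
	unsigned int order;
};

struct ttm_side_pool_sketch {
	struct generic_pool_sketch pool;	/* embedded generic pool */
	bool use_dma_alloc;			/* ttm-specific state */
};

static void ttm_side_free_page_sketch(struct generic_pool_sketch *pool,
				      struct page *p)
{
	/* Recover the containing ttm-side pool from the embedded
	 * generic pool pointer; no per-page metadata is needed.
	 */
	struct ttm_side_pool_sketch *pt =
		container_of(pool, struct ttm_side_pool_sketch, pool);

	if (!pt->use_dma_alloc)
		__free_pages(p, pool->order);
}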
thanks
-john

[1] https://android.googlesource.com/platform/system/memory/libdmabufheap/+/refs/heads/master/tests/dmabuf_heap_bench.c

Cc: Daniel Vetter <daniel@xxxxxxxx>
Cc: Christian Koenig <christian.koenig@xxxxxxx>
Cc: Sumit Semwal <sumit.semwal@xxxxxxxxxx>
Cc: Liam Mark <lmark@xxxxxxxxxxxxxx>
Cc: Chris Goldsworthy <cgoldswo@xxxxxxxxxxxxxx>
Cc: Laura Abbott <labbott@xxxxxxxxxx>
Cc: Brian Starkey <Brian.Starkey@xxxxxxx>
Cc: Hridya Valsaraju <hridya@xxxxxxxxxx>
Cc: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Cc: Sandeep Patil <sspatil@xxxxxxxxxx>
Cc: Daniel Mentz <danielmentz@xxxxxxxxxx>
Cc: Ørjan Eide <orjan.eide@xxxxxxx>
Cc: Robin Murphy <robin.murphy@xxxxxxx>
Cc: Ezequiel Garcia <ezequiel@xxxxxxxxxxxxx>
Cc: Simon Ser <contact@xxxxxxxxxxx>
Cc: James Jones <jajones@xxxxxxxxxx>
Cc: linux-media@xxxxxxxxxxxxxxx
Cc: dri-devel@xxxxxxxxxxxxxxxxxxxxx

John Stultz (5):
  drm: Add a sharable drm page-pool implementation
  drm: ttm_pool: Rework ttm_pool to use drm_page_pool
  dma-buf: heaps: Add deferred-free-helper library code
  dma-buf: system_heap: Add drm pagepool support to system heap
  dma-buf: system_heap: Add deferred freeing to the system heap

 drivers/dma-buf/heaps/Kconfig                |   5 +
 drivers/dma-buf/heaps/Makefile               |   1 +
 drivers/dma-buf/heaps/deferred-free-helper.c | 138 ++++++++++++
 drivers/dma-buf/heaps/deferred-free-helper.h |  55 +++++
 drivers/dma-buf/heaps/system_heap.c          |  47 +++-
 drivers/gpu/drm/Kconfig                      |   5 +
 drivers/gpu/drm/Makefile                     |   2 +
 drivers/gpu/drm/page_pool.c                  | 214 +++++++++++++++++++
 drivers/gpu/drm/ttm/ttm_pool.c               | 156 +++-----------
 include/drm/page_pool.h                      |  65 ++++++
 include/drm/ttm/ttm_pool.h                   |   6 +-
 11 files changed, 557 insertions(+), 137 deletions(-)
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.c
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.h
 create mode 100644 drivers/gpu/drm/page_pool.c
 create mode 100644 include/drm/page_pool.h

--
2.25.1