Re: [PATCH 1/8] drm/gem: Write down some rules for vmap usage

On 01.12.20 at 09:32, Thomas Zimmermann wrote:
Hi

On 30.11.20 at 16:33, Christian König wrote:
On 30.11.20 at 16:30, Daniel Vetter wrote:
On Mon, Nov 30, 2020 at 01:04:26PM +0100, Thomas Zimmermann wrote:
Mapping a GEM object's buffer into kernel address space prevents the
buffer from being evicted from VRAM, which in turn may result in
out-of-memory errors. It's therefore required to only vmap GEM BOs for
short periods of time; unless the GEM implementation provides additional
guarantees.

Signed-off-by: Thomas Zimmermann <tzimmermann@xxxxxxx>
---
  drivers/gpu/drm/drm_prime.c |  6 ++++++
  include/drm/drm_gem.h       | 16 ++++++++++++++++
  2 files changed, 22 insertions(+)

diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 7db55fce35d8..9c9ece9833e0 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -669,6 +669,12 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
   * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
   * The kernel virtual address is returned in map.
   *
+ * To prevent the GEM object from being relocated, callers must hold the GEM
+ * object's reservation lock from when calling this function until releasing the
+ * mapping. Holding onto a mapping and the associated reservation lock for an
+ * unbounded time may result in out-of-memory errors. Calls to drm_gem_dmabuf_vmap()
+ * should therefore be accompanied by a call to drm_gem_dmabuf_vunmap().
+ *
   * Returns 0 on success or a negative errno code otherwise.
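
In caller terms, the documented rule amounts to roughly the following
(a sketch against the current dma_buf_map interfaces, not part of the
patch; error handling trimmed):

    struct dma_buf_map map;
    int ret;

    /* Hold the reservation lock for the whole lifetime of the mapping
     * so the exporter cannot move the buffer while we use it.
     */
    dma_resv_lock(dma_buf->resv, NULL);

    ret = drm_gem_dmabuf_vmap(dma_buf, &map);
    if (!ret) {
            /* ... access the buffer through map.vaddr ... */
            drm_gem_dmabuf_vunmap(dma_buf, &map);
    }

    dma_resv_unlock(dma_buf->resv);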
This is a dma-buf hook, which means just documenting the rules you'd like to have here isn't enough. We need to roll this out at the dma-buf level,
and enforce it.

Enforce it = assert_lock_held
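
For illustration, enforcement could be as simple as a lockdep assert at
the top of dma_buf_vmap(). A simplified sketch; the real function also
refcounts mappings via vmapping_counter:

    int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
    {
            /* Splat under lockdep when the caller doesn't hold the
             * buffer's reservation lock.
             */
            dma_resv_assert_held(dmabuf->resv);

            if (!dmabuf->ops->vmap)
                    return -EINVAL;

            return dmabuf->ops->vmap(dmabuf, map);
    }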

Roll out = review everyone. Because this goes through dma-buf it'll come
back through shmem helpers (and other helpers and other subsystems) to any
driver using vmap for gpu buffers. This includes the media subsystem, and
the media subsystem definitely doesn't cope with just temporarily mapping
buffers. So there we need to pin them, which I think means we'll need 2
versions of dma_buf_vmap - one that's temporary and requires us to hold the
dma_resv lock, the other requiring that the buffer is pinned.
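
Hypothetical signatures for those two variants (the names here are made
up for illustration):

    /* Short-term mapping: the caller holds the buffer's dma_resv lock
     * from before vmap until after vunmap, so the buffer can't move.
     */
    int dma_buf_vmap_local(struct dma_buf *dmabuf, struct dma_buf_map *map);

    /* Long-term mapping: the caller must have pinned the buffer first
     * and keeps it pinned until vunmap.
     */
    int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);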

OR start to properly use the dma_buf_pin()/dma_buf_unpin() functions, which I added to cover this use case as well.

While I generally agree, here are some thoughts:

I found all generic pin functions useless because they don't allow for specifying where to pin. With fbdev emulation, this means that console buffers might never make it to VRAM for scanout. If anything, the policy should be that pin always pins in HW-accessible memory.

Yes, completely agree. The major missing part here is some kind of usage flags describing what we want to do with the buffer.
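
Something along these lines, perhaps (entirely hypothetical; no such
interface exists today):

    /* Hypothetical usage flags: tell the exporter what the pin is for,
     * so it can pick a suitable memory region (e.g. VRAM for scanout).
     */
    enum dma_buf_pin_usage {
            DMA_BUF_PIN_USAGE_CPU_ACCESS,
            DMA_BUF_PIN_USAGE_SCANOUT,
    };

    int dma_buf_pin_usage(struct dma_buf *dmabuf,
                          enum dma_buf_pin_usage usage);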


Pin has quite a bit of overhead (more locking, buffer movement), so it should be the second choice after regular vmap. To make both work together, pin probably relies on holding the reservation lock internally.

There is a dma_resv_assert_held() at the beginning of those two functions.
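
So the expected calling sequence for the existing functions is roughly
(sketch, error handling trimmed):

    int ret;

    ret = dma_resv_lock(dmabuf->resv, NULL);
    if (ret)
            return ret;
    ret = dma_buf_pin(dmabuf);      /* passes the assert: lock is held */
    dma_resv_unlock(dmabuf->resv);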


Therefore I think we still would want some additional helpers, such as:

  pin_unlocked(), which acquires the resv lock, calls the regular pin and then drops the resv lock. Same for unpin_unlocked()

  vmap_pinned(), which enforces that the buffer has been pinned and then calls the regular vmap. Same for vunmap_pinned(). A sketch of the latter follows below.
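
For illustration, the second pair could look like this (a sketch only;
the bo->pin_count field is made up here, real exporters track pinning
differently):

    static int vmap_pinned(struct drm_gem_object *bo, struct dma_buf_map *map)
    {
            /* Refuse long-term mappings of unpinned buffers. */
            if (drm_WARN_ON(bo->dev, bo->pin_count == 0))
                    return -EINVAL;

            return bo->funcs->vmap(bo, map);
    }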

I would rather open-code that in each driver; hiding the locking logic in some midlayer is usually not a good idea.

Regards,
Christian.


A typical pattern with these functions would look like this.

    struct drm_gem_object *bo;
    struct dma_buf_map map;

    init() {
        pin_unlocked(bo);        /* keep the buffer resident... */
        vmap_pinned(bo, &map);   /* ...and mapped for the long term */
    }

    worker() {
        begin_cpu_access();
        /* access bo via map.vaddr */
        end_cpu_access();
    }

    fini() {
        vunmap_pinned(bo, &map);
        unpin_unlocked(bo);
    }

    init();
    while (...) {
        worker();
    }
    fini();

Is that reasonable for media drivers?

Best regards
Thomas



Cheers,
Christian.


That's what I meant when I said that this approach here is very sprawling :-/
-Daniel




