Re: [PATCH] dma_resv: prime lockdep annotations

On 8/22/19 3:36 PM, Daniel Vetter wrote:
On Thu, Aug 22, 2019 at 3:30 PM Thomas Hellström (VMware)
<thomas_os@xxxxxxxxxxxx> wrote:
On 8/22/19 3:07 PM, Daniel Vetter wrote:
Full audit of everyone:

- i915, radeon, amdgpu should be clean per their maintainers.

- vram helpers should be fine, they don't do command submission, so
    really no business holding struct_mutex while doing copy_*_user. But
    I haven't checked them all.

- panfrost seems to dma_resv_lock only in panfrost_job_push, which
    looks clean.

- v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(),
    copying from/to userspace happens all in v3d_lookup_bos which is
    outside of the critical section.

- vmwgfx has a bunch of ioctls that do their own copy_*_user:
    - vmw_execbuf_process: First this does some copies in
      vmw_execbuf_cmdbuf() and also in the vmw_execbuf_process() itself.
      Then comes the usual ttm reserve/validate sequence, then actual
      submission/fencing, then unreserving, and finally some more
      copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of
      details, but looks all safe.
    - vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be
      seen, seems to only create a fence and copy it out.
    - a pile of smaller ioctls in vmwgfx_ioctl.c, no reservations to be
      found there.
    Summary: vmwgfx seems to be fine too.

- virtio: There's virtio_gpu_execbuffer_ioctl, which does all the
    copying from userspace before even looking up objects through their
    handles, so safe. Plus the getparam/getcaps ioctl, also both safe.

- qxl only has qxl_execbuffer_ioctl, which calls into
    qxl_process_single_command. There's a lovely comment before the
    __copy_from_user_inatomic that the slowpath should be copied from
    i915, but I guess that never happened. Try not to be unlucky and get
    your CS data evicted between when it's written and the kernel tries
    to read it. The only other copy_from_user is for relocs, but those
    are done before qxl_release_reserve_list(), which seems to be the
    only thing reserving buffers (in the ttm/dma_resv sense) in that
    code. So looks safe.

- A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in
    usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this
    everywhere and needs to be fixed up (the problematic pattern is
    sketched right after this list).
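
For reference, the pattern the audit above is hunting for looks roughly like the sketch below. This is a hypothetical ioctl, not taken from any of the drivers listed: copy_*_user can fault, the fault path takes mmap_sem and can recurse into fs_reclaim, so doing the copy while a dma_resv lock is held inverts the nesting this patch teaches lockdep.

        #include <drm/drm_device.h>
        #include <drm/drm_file.h>
        #include <drm/drm_gem.h>
        #include <linux/dma-resv.h>
        #include <linux/uaccess.h>

        struct bad_submit {                     /* made-up ioctl payload */
                struct drm_gem_object *bo;      /* already looked up */
                u64 commands;                   /* userspace pointer */
        };

        /* Hypothetical anti-pattern, for illustration only. */
        static int bad_submit_ioctl(struct drm_device *dev, void *data,
                                    struct drm_file *file)
        {
                struct bad_submit *args = data;
                u32 cmd;
                int ret;

                ret = dma_resv_lock(args->bo->resv, NULL);
                if (ret)
                        return ret;

                /*
                 * Bad: a page fault here takes mmap_sem (and can recurse
                 * into fs_reclaim) while the dma_resv lock is held -- the
                 * inversion the lockdep priming below is designed to flag.
                 */
                if (copy_from_user(&cmd, u64_to_user_ptr(args->commands),
                                   sizeof(cmd)))
                        ret = -EFAULT;

                dma_resv_unlock(args->bo->resv);
                return ret;
        }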

v2: Thomas pointed out that vmwgfx calls dma_resv_init while it holds a
dma_resv lock of a different object already. Christian mentioned that
ttm core does this too for ghost objects. intel-gfx-ci highlighted
that i915 has similar issues.

Unfortunately we can't do this in the usual module init functions,
because kernel threads don't have an ->mm - we have to wait around for
some user thread to do this.

Solution is to spawn a worker (but only once). It's horrible, but it
works.

v3: We can allocate mm! (Chris). Horrible worker hack out, clean
initcall solution in.

v4: Annotate with __init (Rob Herring)

Cc: Rob Herring <robh@xxxxxxxxxx>
Cc: Alex Deucher <alexander.deucher@xxxxxxx>
Cc: Christian König <christian.koenig@xxxxxxx>
Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Thomas Zimmermann <tzimmermann@xxxxxxx>
Cc: Rob Herring <robh@xxxxxxxxxx>
Cc: Tomeu Vizoso <tomeu.vizoso@xxxxxxxxxxxxx>
Cc: Eric Anholt <eric@xxxxxxxxxx>
Cc: Dave Airlie <airlied@xxxxxxxxxx>
Cc: Gerd Hoffmann <kraxel@xxxxxxxxxx>
Cc: Ben Skeggs <bskeggs@xxxxxxxxxx>
Cc: "VMware Graphics" <linux-graphics-maintainer@xxxxxxxxxx>
Cc: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
Reviewed-by: Christian König <christian.koenig@xxxxxxx>
Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Tested-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
---
   drivers/dma-buf/dma-resv.c | 24 ++++++++++++++++++++++++
   1 file changed, 24 insertions(+)
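
Since the quoted hunk below gets chopped up by the inline review comments, the complete shape of the helper is roughly the following. This is a sketch, not a verbatim copy of the patch: the mmput() pairing and the initcall registration are assumptions on my part.

        #if IS_ENABLED(CONFIG_LOCKDEP)
        static void __init dma_resv_lockdep(void)
        {
                struct mm_struct *mm = mm_alloc();  /* v3: no user thread needed */
                struct dma_resv obj;

                if (!mm)
                        return;

                dma_resv_init(&obj);

                /* Prime the nesting lockdep should enforce:
                 * mmap_sem -> dma_resv -> fs_reclaim
                 */
                down_read(&mm->mmap_sem);
                ww_mutex_lock(&obj.lock, NULL);
                fs_reclaim_acquire(GFP_KERNEL);
                fs_reclaim_release(GFP_KERNEL);
                ww_mutex_unlock(&obj.lock);
                up_read(&mm->mmap_sem);

                mmput(mm);              /* assumed pairing for mm_alloc() */
        }

        /* Registration sketch -- the initcall actually used isn't quoted here. */
        static int __init dma_resv_lockdep_init(void)
        {
                dma_resv_lockdep();
                return 0;
        }
        subsys_initcall(dma_resv_lockdep_init);
        #endif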

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 42a8f3f11681..97c4c4812d08 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -34,6 +34,7 @@

   #include <linux/dma-resv.h>
   #include <linux/export.h>
+#include <linux/sched/mm.h>

   /**
    * DOC: Reservation Object Overview
@@ -95,6 +96,29 @@ static void dma_resv_list_free(struct dma_resv_list *list)
       kfree_rcu(list, rcu);
   }

+#if IS_ENABLED(CONFIG_LOCKDEP)
+static void __init dma_resv_lockdep(void)
+{
+     struct mm_struct *mm = mm_alloc();
+     struct dma_resv obj;
+
+     if (!mm)
+             return;
+
+     dma_resv_init(&obj);
+
+     down_read(&mm->mmap_sem);

I took a quick look into using the lockdep macros in place of the actual
locks, something along the lines of:

lock_acquire(mm->mmap_sem.dep_map, 0, 0, 1, 1, NULL, _THIS_IP_);
Yeah I'm not a fan of the magic numbers this needs :-/ And now this is
run once at startup, so taking the fake locks for real, once,
shouldn't hurt. Lockdep updating its data structures is going to be
100x more cpu cycles anyway :-)

+     ww_mutex_lock(&obj.lock, NULL);
lock_acquire(obj.lock.dep_map, 0, 0, 0, 1, NULL, _THIS_IP_);
+     fs_reclaim_acquire(GFP_KERNEL);
+     fs_reclaim_release(GFP_KERNEL);
+     ww_mutex_unlock(&obj.lock);
lock_release(obj.lock.dep_map, 0, _THIS_IP_);

+     up_read(&mm->mmap_sem);
lock_release(mm->mmap_sem.dep_map, 0, _THIS_IP_);

Either way is fine with me, though.
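
Pulled together in one place for illustration, the annotate-only variant discussed above would look roughly like this. The function name, the dep_map member paths (a ww_mutex keeps its lockdep map in its embedded base mutex) and the flag values are my assumptions, and this is not the version that was merged.

        /* Sketch: prime the same chains via lockdep annotations only,
         * without taking mmap_sem or the ww_mutex for real.  Assumes
         * DEBUG_LOCK_ALLOC so the dep_map members exist.
         */
        static void __init dma_resv_lockdep_annotate(void)
        {
                struct mm_struct *mm = mm_alloc();
                struct dma_resv obj;

                if (!mm)
                        return;

                dma_resv_init(&obj);

                /* "Acquire" mmap_sem for read, then the dma_resv ww_mutex... */
                lock_acquire(&mm->mmap_sem.dep_map, 0, 0, 1, 1, NULL, _THIS_IP_);
                lock_acquire(&obj.lock.base.dep_map, 0, 0, 0, 1, NULL, _THIS_IP_);

                /* ...and record that reclaim can happen inside that nesting. */
                fs_reclaim_acquire(GFP_KERNEL);
                fs_reclaim_release(GFP_KERNEL);

                lock_release(&obj.lock.base.dep_map, 0, _THIS_IP_);
                lock_release(&mm->mmap_sem.dep_map, 0, _THIS_IP_);

                mmput(mm);
        }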

Reviewed-by: Thomas Hellström <thellstrom@xxxxxxxxxx>
Thanks for your review comments.

Can you pls also run this in some test cycles, if that's easily
possible? I'd like to have a tested-by from at least the big drivers -
i915, amd, nouveau and vmwgfx; vmwgfx is definitely using ttm to its
fullest too, so best chances for hitting an oversight.

Cheers, Daniel

Tested vmwgfx with a decent OpenGL / rendercheck stress test and no lockdep trips.

/Thomas

Tested-by: Thomas Hellström <thellstrom@xxxxxxxxxx>


_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel



