Quoting Daniel Vetter (2019-08-21 22:50:28)
> Full audit of everyone:
>
> - i915, radeon, amdgpu should be clean per their maintainers.
>
> - vram helpers should be fine, they don't do command submission, so
>   there's really no business holding struct_mutex while doing
>   copy_*_user. But I haven't checked them all.
>
> - panfrost seems to dma_resv_lock only in panfrost_job_push, which
>   looks clean.
>
> - v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl();
>   copying from/to userspace happens all in v3d_lookup_bos, which is
>   outside of the critical section.
>
> - vmwgfx has a bunch of ioctls that do their own copy_*_user:
>   - vmw_execbuf_process: First this does some copies in
>     vmw_execbuf_cmdbuf() and also in vmw_execbuf_process() itself.
>     Then comes the usual ttm reserve/validate sequence, then actual
>     submission/fencing, then unreserving, and finally some more
>     copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons
>     of details, but it all looks safe.
>   - vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to
>     be seen; it seems to only create a fence and copy it out.
>   - A pile of smaller ioctls in vmwgfx_ioctl.c, no reservations to
>     be found there.
>   Summary: vmwgfx seems to be fine too.
>
> - virtio: There's virtio_gpu_execbuffer_ioctl, which does all the
>   copying from userspace before even looking up objects through
>   their handles, so safe. Plus the getparam/getcaps ioctls, also
>   both safe.
>
> - qxl only has qxl_execbuffer_ioctl, which calls into
>   qxl_process_single_command. There's a lovely comment before the
>   __copy_from_user_inatomic that the slowpath should be copied from
>   i915, but I guess that never happened. Try not to be unlucky and
>   get your CS data evicted between when it's written and when the
>   kernel tries to read it. The only other copy_from_user is for
>   relocs, but those are done before qxl_release_reserve_list(),
>   which seems to be the only thing reserving buffers (in the
>   ttm/dma_resv sense) in that code. So it looks safe.
>
> - A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl
>   in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks
>   this everywhere and needs to be fixed up.
>
> v2: Thomas pointed out that vmwgfx calls dma_resv_init while it
> already holds a dma_resv lock of a different object. Christian
> mentioned that ttm core does this too for ghost objects.
> intel-gfx-ci highlighted that i915 has similar issues.
>
> Unfortunately we can't do this in the usual module init functions,
> because kernel threads don't have an ->mm - we have to wait around
> for some user thread to do this.
>
> Solution is to spawn a worker (but only once). It's horrible, but it
> works.
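
To spell out the rule the audit above is checking: every copy_*_user has
to run while no reservation is held, because the fault path can take
mmap_sem and recurse into reclaim, which may in turn want dma_resv. The
"clean" drivers all end up with roughly this ioctl shape (purely
illustrative sketch, made-up foo_* names, not code from any driver
listed):

/*
 * Illustrative only: hypothetical foo driver and helpers, just to show
 * the ordering. Assumes the usual <drm/drm_device.h>, <drm/drm_file.h>
 * and <linux/uaccess.h> includes.
 */
static int foo_submit_ioctl(struct drm_device *dev, void *data,
                            struct drm_file *file)
{
        struct drm_foo_submit *args = data;     /* hypothetical uapi struct */
        struct foo_job *job;
        int ret;

        /* 1. All copy_from_user happens here, no dma_resv held yet. */
        ret = foo_job_create(dev, file, args, &job);
        if (ret)
                return ret;

        /* 2. Only now reserve the objects (ww acquire ctx and all). */
        ret = foo_job_lock_objects(job);
        if (ret)
                goto out_put;

        foo_job_push(job);              /* install fences, kick the hw */

        /* 3. Unreserve before touching userspace again. */
        foo_job_unlock_objects(job);

        ret = copy_to_user(u64_to_user_ptr(args->out_ptr),
                           &job->out, sizeof(job->out)) ? -EFAULT : 0;

out_put:
        foo_job_put(job);
        return ret;
}

That's essentially what the v3d and vmwgfx execbuf descriptions above
boil down to; nouveau_gem_ioctl_pushbuf is the one mixing the steps up.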
>
> Cc: Alex Deucher <alexander.deucher@xxxxxxx>
> Cc: Christian König <christian.koenig@xxxxxxx>
> Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Thomas Zimmermann <tzimmermann@xxxxxxx>
> Cc: Rob Herring <robh@xxxxxxxxxx>
> Cc: Tomeu Vizoso <tomeu.vizoso@xxxxxxxxxxxxx>
> Cc: Eric Anholt <eric@xxxxxxxxxx>
> Cc: Dave Airlie <airlied@xxxxxxxxxx>
> Cc: Gerd Hoffmann <kraxel@xxxxxxxxxx>
> Cc: Ben Skeggs <bskeggs@xxxxxxxxxx>
> Cc: "VMware Graphics" <linux-graphics-maintainer@xxxxxxxxxx>
> Cc: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
> Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
> ---
>  drivers/dma-buf/dma-resv.c | 42 ++++++++++++++++++++++++++++++++++++++
>  1 file changed, 42 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index 42a8f3f11681..29988b1564c1 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -34,6 +34,7 @@
>
>  #include <linux/dma-resv.h>
>  #include <linux/export.h>
> +#include <linux/sched/mm.h>
>
>  /**
>   * DOC: Reservation Object Overview
> @@ -95,6 +96,28 @@ static void dma_resv_list_free(struct dma_resv_list *list)
>  	kfree_rcu(list, rcu);
>  }
>
> +#if IS_ENABLED(CONFIG_LOCKDEP)
> +struct lockdep_work {
> +	struct work_struct work;
> +	struct dma_resv obj;
> +	struct mm_struct *mm;
> +} lockdep_work;
> +
> +void lockdep_work_fn(struct work_struct *work)
> +{
> +	dma_resv_init(&lockdep_work.obj);
> +
> +	down_read(&lockdep_work.mm->mmap_sem);
> +	ww_mutex_lock(&lockdep_work.obj.lock, NULL);
> +	fs_reclaim_acquire(GFP_KERNEL);
> +	fs_reclaim_release(GFP_KERNEL);
> +	ww_mutex_unlock(&lockdep_work.obj.lock);
> +	up_read(&lockdep_work.mm->mmap_sem);
> +
> +	mmput(lockdep_work.mm);
> +}
> +#endif

#if IS_ENABLED(CONFIG_LOCKDEP)
static void dma_resv_lockmap(void)
{
	struct mm_struct *mm = mm_alloc();
	struct dma_resv obj;

	dma_resv_init(&obj);

	down_read(&mm->mmap_sem);
	ww_mutex_lock(&obj.lock, NULL);
	fs_reclaim_acquire(GFP_KERNEL);
	fs_reclaim_release(GFP_KERNEL);
	ww_mutex_unlock(&obj.lock);
	up_read(&mm->mmap_sem);

	mmput(mm);
}
core_initcall(dma_resv_lockmap);
#endif

as a thought.
-Chris
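
For completeness, turning that thought into something that actually
builds only needs the int return that core_initcall() wants plus a check
for mm_alloc() failing; roughly the following (untested sketch in the
same dma-resv.c context as the patch; the function name and the
dma_resv_fini()/-ENOMEM handling are just illustrative additions):

#if IS_ENABLED(CONFIG_LOCKDEP)
static int __init dma_resv_lockdep_prime(void)
{
	struct mm_struct *mm = mm_alloc();
	struct dma_resv obj;

	if (!mm)
		return -ENOMEM;

	dma_resv_init(&obj);

	/*
	 * Prime the ordering mmap_sem -> dma_resv -> fs_reclaim once, so
	 * any driver that later inverts it (e.g. faulting on a copy_*_user
	 * while holding the reservation) gets a lockdep splat instead of
	 * an actual deadlock.
	 */
	down_read(&mm->mmap_sem);
	ww_mutex_lock(&obj.lock, NULL);
	fs_reclaim_acquire(GFP_KERNEL);
	fs_reclaim_release(GFP_KERNEL);
	ww_mutex_unlock(&obj.lock);
	up_read(&mm->mmap_sem);

	dma_resv_fini(&obj);
	mmput(mm);

	return 0;
}
core_initcall(dma_resv_lockdep_prime);
#endif

The attraction of mm_alloc() here is that it hands back an mmap_sem to
take without waiting for a userspace thread, so the run-once worker
shouldn't be needed at all.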