On Thu, Aug 18, 2016 at 06:55:12AM -0400, Rob Clark wrote:
> On Thu, Aug 18, 2016 at 4:36 AM, Daniel Vetter <daniel@xxxxxxxx> wrote:
> > On Wed, Aug 17, 2016 at 05:29:31PM -0400, Rob Clark wrote:
> >> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> >> index 6cd4af4..4502e4b 100644
> >> --- a/drivers/gpu/drm/msm/msm_gem.c
> >> +++ b/drivers/gpu/drm/msm/msm_gem.c
> >> @@ -201,6 +201,13 @@ int msm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
> >>  	pgoff_t pgoff;
> >>  	int ret;
> >>
> >> +	/* I think this should only happen if userspace tries to pass a
> >> +	 * mmap'd but unfaulted gem bo vaddr into submit ioctl, triggering
> >> +	 * a page fault while struct_mutex is already held
> >> +	 */
> >> +	if (mutex_is_locked_by(&dev->struct_mutex, current))
> >> +		return VM_FAULT_SIGBUS;
> >
> > This is an ok (well, still horrible) heuristic for the shrinker, but for
> > correctness it kinda doesn't cut it. What you need to do instead is drop
> > all the locks, copy relocations into a temp memory area and then proceed
> > in the msm command submission path above.
> >
> > Also, reentrant mutexes are evil ;-)
>
> Please note that this is not a reentrant mutex in the fault path, it
> bails with VM_FAULT_SIGBUS!

Except on UP it totally deadlocks ;-)
-Daniel

> There is never a legit reason to use a gem bo for the bos (or cmds)
> table in the ioctl, so while this may not be pretty, I believe it is
> an acceptable solution.

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel