From: Rob Clark <robdclark@xxxxxxxxxxxx>

vm_open() is not allowed to fail. Fortunately we are guaranteed that
the pages are already pinned, thanks to the initial mmap which is now
being cloned into a forked process, and only need to increment the
refcnt. So just increment it directly.

Previously, if a signal was delivered at the wrong time to the forking
process, the mutex_lock_interruptible() could fail, resulting in
pages_use_count not being incremented.

Fixes: 2194a63a818d ("drm: Add library for shmem backed GEM objects")
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Rob Clark <robdclark@xxxxxxxxxxxx>
Reviewed-by: Daniel Vetter <daniel.vetter@xxxxxxxx>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3b7b71391a4c..b602cd72a120 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -571,12 +571,20 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	int ret;
 
 	WARN_ON(shmem->base.import_attach);
 
-	ret = drm_gem_shmem_get_pages(shmem);
-	WARN_ON_ONCE(ret != 0);
+	mutex_lock(&shmem->pages_lock);
+
+	/*
+	 * We should have already pinned the pages when the buffer was first
+	 * mmap'd, vm_open() just grabs an additional reference for the new
+	 * mm the vma is getting copied into (ie. on fork()).
+	 */
+	if (!WARN_ON_ONCE(!shmem->pages_use_count))
+		shmem->pages_use_count++;
+
+	mutex_unlock(&shmem->pages_lock);
 
 	drm_gem_vm_open(vma);
 }
-- 
2.38.1
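
For reference, a paraphrased sketch of the pages helper that the old
vm_open() path relied on (based on drm_gem_shmem_helper.c of roughly
this era; not quoted verbatim, so details may differ slightly from the
tree this patch applies to). The interruptible lock is the piece that
can bail out when the forking task has a signal pending, so the
reference that drm_gem_shmem_vm_close() later drops was never taken:

	int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
	{
		int ret;

		WARN_ON(shmem->base.import_attach);

		/*
		 * Returns -EINTR if the caller has a signal pending, e.g. a
		 * task in the middle of fork() cloning this mmap.  The old
		 * vm_open() only WARNed on that error, so pages_use_count
		 * stayed unchanged even though vm_close() still decrements it.
		 */
		ret = mutex_lock_interruptible(&shmem->pages_lock);
		if (ret)
			return ret;
		ret = drm_gem_shmem_get_pages_locked(shmem);
		mutex_unlock(&shmem->pages_lock);

		return ret;
	}

Since the initial mmap already holds a page reference, the fixed
vm_open() can take pages_lock unconditionally and just bump the count,
which cannot fail.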