The patch below does not apply to the 5.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@xxxxxxxxxxxxxxx>.

Possible dependencies:

09bf649a7457 ("drm/shmem-helper: Avoid vm_open error paths")
526408357318 ("drm/shmem-helpers: Ensure get_pages is not called on imported dma-buf")

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

From 09bf649a74573cb596e211418a4f8008f265c5a9 Mon Sep 17 00:00:00 2001
From: Rob Clark <robdclark@xxxxxxxxxxxx>
Date: Wed, 30 Nov 2022 10:57:48 -0800
Subject: [PATCH] drm/shmem-helper: Avoid vm_open error paths

vm_open() is not allowed to fail. Fortunately we are guaranteed that
the pages are already pinned, thanks to the initial mmap which is now
being cloned into a forked process, and only need to increment the
refcnt. So just increment it directly. Previously if a signal was
delivered at the wrong time to the forking process, the
mutex_lock_interruptible() could fail resulting in the pages_use_count
not being incremented.

Fixes: 2194a63a818d ("drm: Add library for shmem backed GEM objects")
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Rob Clark <robdclark@xxxxxxxxxxxx>
Reviewed-by: Daniel Vetter <daniel.vetter@xxxxxxxx>
Signed-off-by: Javier Martinez Canillas <javierm@xxxxxxxxxx>
Link: https://patchwork.freedesktop.org/patch/msgid/20221130185748.357410-3-robdclark@xxxxxxxxx

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3b7b71391a4c..b602cd72a120 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -571,12 +571,20 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	int ret;
 
 	WARN_ON(shmem->base.import_attach);
 
-	ret = drm_gem_shmem_get_pages(shmem);
-	WARN_ON_ONCE(ret != 0);
+	mutex_lock(&shmem->pages_lock);
+
+	/*
+	 * We should have already pinned the pages when the buffer was first
+	 * mmap'd, vm_open() just grabs an additional reference for the new
+	 * mm the vma is getting copied into (ie. on fork()).
+	 */
+	if (!WARN_ON_ONCE(!shmem->pages_use_count))
+		shmem->pages_use_count++;
+
+	mutex_unlock(&shmem->pages_lock);
 
 	drm_gem_vm_open(vma);
 }
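
For anyone backporting this: the key idea is that vm_open() cannot
report failure, so the hook must not re-run an acquisition path that
can fail (the old drm_gem_shmem_get_pages() call could, because its
interruptible lock can be interrupted by a signal during fork()); it
only bumps the pin count that the original mmap already holds, under a
plain non-interruptible lock. The following is a minimal userspace
sketch of that pattern, not kernel code; struct fake_shmem and
fake_vm_open() are invented names, and a pthread mutex stands in for
the kernel's pages_lock mutex.

/* Userspace sketch of the "refcount-only vm_open" pattern. */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

struct fake_shmem {
	pthread_mutex_t pages_lock;
	unsigned int pages_use_count;	/* nonzero once the first mmap pinned the pages */
};

/* Analogue of drm_gem_shmem_vm_open() after the fix: it cannot fail. */
static void fake_vm_open(struct fake_shmem *shmem)
{
	pthread_mutex_lock(&shmem->pages_lock);
	/* Pages must already be pinned by the mapping being duplicated. */
	assert(shmem->pages_use_count);	/* the kernel uses WARN_ON_ONCE() here */
	shmem->pages_use_count++;
	pthread_mutex_unlock(&shmem->pages_lock);
}

int main(void)
{
	struct fake_shmem s = {
		.pages_lock = PTHREAD_MUTEX_INITIALIZER,
		.pages_use_count = 1,	/* taken by the initial mmap */
	};

	fake_vm_open(&s);	/* e.g. the vma being copied into the child on fork() */
	printf("pages_use_count = %u\n", s.pages_use_count);	/* prints 2 */
	return 0;
}

Compile with something like "cc -o sketch sketch.c -pthread" to see the
count go from 1 to 2, mirroring what the fixed vm_open() does for the
forked mm.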