This is a note to let you know that I've just added the patch titled

    vfio/type1: restore locked_vm

to the 6.2-stable tree which can be found at:

http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     vfio-type1-restore-locked_vm.patch
and it can be found in the queue-6.2 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 90fdd158a695d70403163f9a0e4efc5b20f3fd3e Mon Sep 17 00:00:00 2001
From: Steve Sistare <steven.sistare@xxxxxxxxxx>
Date: Tue, 31 Jan 2023 08:58:06 -0800
Subject: vfio/type1: restore locked_vm

From: Steve Sistare <steven.sistare@xxxxxxxxxx>

commit 90fdd158a695d70403163f9a0e4efc5b20f3fd3e upstream.

When a vfio container is preserved across exec or fork-exec, the new
task's mm has a locked_vm count of 0.  After a dma vaddr is updated using
VFIO_DMA_MAP_FLAG_VADDR, locked_vm remains 0, and the pinned memory does
not count against the task's RLIMIT_MEMLOCK.

To restore the correct locked_vm count, when VFIO_DMA_MAP_FLAG_VADDR is
used and the dma's mm has changed, add the dma's locked_vm count to the
new mm->locked_vm, subject to the rlimit, and subtract it from the old
mm->locked_vm.

Fixes: c3cbab24db38 ("vfio/type1: implement interfaces to update vaddr")
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Steve Sistare <steven.sistare@xxxxxxxxxx>
Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
Link: https://lore.kernel.org/r/1675184289-267876-5-git-send-email-steven.sistare@xxxxxxxxxx
Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 drivers/vfio/vfio_iommu_type1.c |   35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1591,6 +1591,38 @@ static bool vfio_iommu_iova_dma_valid(st
 	return list_empty(iova);
 }
 
+static int vfio_change_dma_owner(struct vfio_dma *dma)
+{
+	struct task_struct *task = current->group_leader;
+	struct mm_struct *mm = current->mm;
+	long npage = dma->locked_vm;
+	bool lock_cap;
+	int ret;
+
+	if (mm == dma->mm)
+		return 0;
+
+	lock_cap = capable(CAP_IPC_LOCK);
+	ret = mm_lock_acct(task, mm, lock_cap, npage);
+	if (ret)
+		return ret;
+
+	if (mmget_not_zero(dma->mm)) {
+		mm_lock_acct(dma->task, dma->mm, dma->lock_cap, -npage);
+		mmput(dma->mm);
+	}
+
+	if (dma->task != task) {
+		put_task_struct(dma->task);
+		dma->task = get_task_struct(task);
+	}
+	mmdrop(dma->mm);
+	dma->mm = mm;
+	mmgrab(dma->mm);
+	dma->lock_cap = lock_cap;
+	return 0;
+}
+
 static int vfio_dma_do_map(struct vfio_iommu *iommu,
 			   struct vfio_iommu_type1_dma_map *map)
 {
@@ -1640,6 +1672,9 @@ static int vfio_dma_do_map(struct vfio_i
 		    dma->size != size) {
 			ret = -EINVAL;
 		} else {
+			ret = vfio_change_dma_owner(dma);
+			if (ret)
+				goto out_unlock;
 			dma->vaddr = vaddr;
 			dma->vaddr_invalid = false;
 			iommu->vaddr_invalid_count--;


Patches currently in stable-queue which might be from steven.sistare@xxxxxxxxxx are

queue-6.2/vfio-type1-prevent-underflow-of-locked_vm-via-exec.patch
queue-6.2/vfio-type1-restore-locked_vm.patch
queue-6.2/vfio-type1-exclude-mdevs-from-vfio_update_vaddr.patch
queue-6.2/vfio-type1-track-locked_vm-per-dma.patch