Re: [PATCH V7 3/7] vfio/type1: track locked_vm per dma

On Mon, Jan 09, 2023 at 04:24:03PM -0500, Steven Sistare wrote:
> On 1/3/2023 1:13 PM, Steven Sistare wrote:
> > On 1/3/2023 10:21 AM, Jason Gunthorpe wrote:
> >> On Tue, Dec 20, 2022 at 12:39:21PM -0800, Steve Sistare wrote:
> >>> Track locked_vm per dma struct, and create a new subroutine, both for use
> >>> in a subsequent patch.  No functional change.
> >>>
> >>> Fixes: c3cbab24db38 ("vfio/type1: implement interfaces to update vaddr")
> >>> Cc: stable@xxxxxxxxxxxxxxx
> >>> Signed-off-by: Steve Sistare <steven.sistare@xxxxxxxxxx>
> >>> Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
> >>> ---
> >>>  drivers/vfio/vfio_iommu_type1.c | 20 +++++++++++++++-----
> >>>  1 file changed, 15 insertions(+), 5 deletions(-)
> >>>
> >>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >>> index 71f980b..588d690 100644
> >>> --- a/drivers/vfio/vfio_iommu_type1.c
> >>> +++ b/drivers/vfio/vfio_iommu_type1.c
> >>> @@ -101,6 +101,7 @@ struct vfio_dma {
> >>>  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
> >>>  	unsigned long		*bitmap;
> >>>  	struct mm_struct	*mm;
> >>> +	long			locked_vm;
> >>
> >> Why is it long? Can it be negative?
> > 
> > The existing code uses both long and uint64_t for page counts, and I picked one.
> > I'll use size_t instead, to match the type of vfio_dma's size field.
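
(For reference, a sketch of the field with that change applied; this is an
editorial illustration rather than code from the thread, with the neighboring
members copied from the quoted diff:)

struct vfio_dma {
	/* ... preceding members unchanged ... */
	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
	unsigned long		*bitmap;
	struct mm_struct	*mm;
	size_t			locked_vm;	/* pinned page count for this dma */
};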
> > 
> >>>  };
> >>>  
> >>>  struct vfio_batch {
> >>> @@ -413,22 +414,21 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
> >>>  	return ret;
> >>>  }
> >>>  
> >>> -static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
> >>> +static int mm_lock_acct(struct task_struct *task, struct mm_struct *mm,
> >>> +			bool lock_cap, long npage, bool async)
> >>>  {
> >>
> >> Now async is even more confusing; the caller really should have a
> >> valid handle on the mm before using it as an argument like this.
> > 
> > The caller holds a grab reference on mm, and mm_lock_acct does mmget_not_zero to 
> > validate the mm.  IMO this is a close analog of the original vfio_lock_acct code
> > where the caller holds a get reference on task, and does get_task_mm to validate
> > the mm.
> > 
> > However, I can hoist the mmget_not_zero from mm_lock_acct to its callsites in
> > vfio_lock_acct and vfio_change_dma_owner.
> 
> Yielding:
> 
> static int mm_lock_acct(struct task_struct *task, struct mm_struct *mm,
>                         bool lock_cap, long npage)
> {
>         int ret = mmap_write_lock_killable(mm);
> 
>         if (!ret) {

Please don't write in the 'single return' style; that is not how kernel
code is written.

'Success oriented flow' means you use early returns and goto-on-error,
so a straight-line read of the function tells you what success looks
like.
Jason
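
(For illustration, a sketch of the 'success oriented flow' shape applied to
the functions discussed above. This is an editorial example, not code from
the thread; the __account_locked_vm() call and the dma->mm, dma->task, and
dma->lock_cap fields are assumptions carried over from the existing
vfio_iommu_type1.c code.)

static int mm_lock_acct(struct task_struct *task, struct mm_struct *mm,
			bool lock_cap, long npage)
{
	int ret;

	ret = mmap_write_lock_killable(mm);
	if (ret)
		return ret;

	/* charge or uncharge npage pages against the task's locked_vm */
	ret = __account_locked_vm(mm, abs(npage), npage > 0, task, lock_cap);
	mmap_write_unlock(mm);
	return ret;
}

static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
{
	int ret;

	if (!npage)
		return 0;

	/* async callers hold only a grab reference; take a real one here */
	if (async && !mmget_not_zero(dma->mm))
		return -ESRCH;	/* process exited */

	ret = mm_lock_acct(dma->task, dma->mm, dma->lock_cap, npage);

	if (async)
		mmput(dma->mm);

	return ret;
}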


