Re: [RFC PATCH 00/11] mm/hmm: Various revisions from a locking/code review

On Fri, May 24, 2019 at 10:47:16AM -0700, Ralph Campbell wrote:
> 
> On 5/24/19 9:49 AM, Jerome Glisse wrote:
> > On Fri, May 24, 2019 at 11:36:49AM -0300, Jason Gunthorpe wrote:
> > > On Thu, May 23, 2019 at 12:34:25PM -0300, Jason Gunthorpe wrote:
> > > > From: Jason Gunthorpe <jgg@xxxxxxxxxxxx>
> > > > 
> > > > This patch series arose out of discussions with Jerome when looking at the
> > > > ODP changes, particularly informed by use-after-free races we have already
> > > > found and fixed in the ODP code (thanks to syzkaller) working with mmu
> > > > notifiers, and the discussion with Ralph on how to resolve the lifetime model.
> > > 
> > > So the last big difference with ODP's flow is how 'range->valid'
> > > works.
> > > 
> > > In ODP this was done using the rwsem umem->umem_rwsem which is
> > > obtained for read in invalidate_start and released in invalidate_end.
> > > 
> > > Then any other threads that wish to only work on a umem which is not
> > > undergoing invalidation will obtain the write side of the lock, and
> > > within that lock's critical section the virtual address range is known
> > > to not be invalidating.
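> > > 
> > > In other words, the pattern is roughly (a sketch only, not the actual
> > > ODP code; the notifier hooks and field names are abbreviated):
> > > 
> > >   /* invalidation side: hold the rwsem for read across start/end */
> > >   invalidate_range_start(...)
> > >   {
> > >        down_read(&umem->umem_rwsem);
> > >        /* ... invalidate device mappings covering the range ... */
> > >   }
> > > 
> > >   invalidate_range_end(...)
> > >   {
> > >        up_read(&umem->umem_rwsem);
> > >   }
> > > 
> > >   /* any thread that must not observe an in-progress invalidation
> > >    * takes the write side instead: */
> > >   down_write(&umem->umem_rwsem);
> > >   /* here the address range is known to not be invalidating */
> > >   up_write(&umem->umem_rwsem);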
> > > 
> > > I cannot understand how hmm achieves the same thing. It has
> > > range->valid, but it is not locked by anything that I can see, so when
> > > we test it in places like hmm_range_fault it seems useless...
> > > 
> > > Jerome, how does this work?
> > > 
> > > I have a feeling we should copy the approach from ODP and use an
> > > actual lock here.
> > 
> > range->valid is used to bail out early from hmm_range_fault() if an
> > invalidation is happening, to avoid doing useless work. The
> > synchronization is explained in the documentation:
> > 
> > 
> > Locking within the sync_cpu_device_pagetables() callback is the most important
> > aspect the driver must respect in order to keep things properly synchronized.
> > The usage pattern is::
> > 
> >   int driver_populate_range(...)
> >   {
> >        struct hmm_range range;
> >        ...
> > 
> >        range.start = ...;
> >        range.end = ...;
> >        range.pfns = ...;
> >        range.flags = ...;
> >        range.values = ...;
> >        range.pfn_shift = ...;
> >        hmm_range_register(&range);
> > 
> >        /*
> >         * Just wait for the range to be valid; it is safe to ignore the
> >         * return value as we will use the return value of hmm_range_snapshot()
> >         * below, under the mmap_sem, to ascertain the validity of the range.
> >         */
> >        hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
> > 
> >   again:
> >        down_read(&mm->mmap_sem);
> >        ret = hmm_range_snapshot(&range);
> >        if (ret) {
> >            up_read(&mm->mmap_sem);
> >            if (ret == -EAGAIN) {
> >              /*
> >               * No need to check hmm_range_wait_until_valid() return value;
> >               * on retry we will get the proper error from hmm_range_snapshot().
> >               */
> >              hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
> >              goto again;
> >            }
> >            hmm_range_unregister(&range);
> >            return ret;
> >        }
> >        take_lock(driver->update);
> >        if (!hmm_range_valid(&range)) {
> >            release_lock(driver->update);
> >            up_read(&mm->mmap_sem);
> >            goto again;
> >        }
> > 
> >        // Use pfns array content to update device page table
> > 
> >        hmm_range_unregister(&range);
> >        release_lock(driver->update);
> >        up_read(&mm->mmap_sem);
> >        return 0;
> >   }
> > 
> > The driver->update lock is the same lock that the driver takes inside its
> > sync_cpu_device_pagetables() callback. That lock must be held before calling
> > hmm_range_valid() to avoid any race with a concurrent CPU page table update.
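> > 
> > For illustration, the callback side then looks something like this
> > (a sketch only; the exact sync_cpu_device_pagetables() prototype is in
> > include/linux/hmm.h, and driver_invalidate_device_ptes() is a placeholder
> > for whatever the driver uses to tear down its device page tables):
> > 
> >   static int driver_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
> >                                                const struct hmm_update *update)
> >   {
> >        take_lock(driver->update);
> >        /*
> >         * Tear down the device page table entries covering
> >         * [update->start, update->end). Because driver->update is held,
> >         * this cannot race with the hmm_range_valid() check above.
> >         */
> >        driver_invalidate_device_ptes(update->start, update->end);
> >        release_lock(driver->update);
> >        return 0;
> >   }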
> > 
> > 
> > Cheers,
> > Jérôme
> 
> 
> Given the above, the following patch looks necessary to me.
> Also, looking at drivers/gpu/drm/nouveau/nouveau_svm.c, it doesn't check
> the return value in order to avoid calling up_read(&mm->mmap_sem).
> Besides, it's better to keep the mmap_sem lock/unlock in the caller.

No, nouveau uses the old API, so check hmm_vma_fault() within hmm.h. I have a
patch to convert it to the new API for 5.3.
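
For reference, the old helper looks roughly like the following (paraphrased,
see include/linux/hmm.h for the exact code); the point is that it releases
mmap_sem itself on its failure paths, which is why nouveau never pairs an
error return with an up_read():

  static inline int hmm_vma_fault(struct hmm_range *range, bool block)
  {
       long ret;

       ret = hmm_range_register(range, ...);
       if (ret)
            return (int)ret;

       if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) {
            /* old API semantic: drop mmap_sem before erroring out */
            up_read(&range->vma->vm_mm->mmap_sem);
            return -EAGAIN;
       }

       ret = hmm_range_fault(range, block);
       if (ret <= 0) {
            if (ret == -EBUSY || !ret) {
                 /* mmap_sem is still held here, drop it for the caller */
                 up_read(&range->vma->vm_mm->mmap_sem);
                 ret = -EBUSY;
            } else if (ret == -EAGAIN) {
                 /* hmm_range_fault() already dropped mmap_sem itself */
                 ret = -EBUSY;
            }
            hmm_range_unregister(range);
            return (int)ret;
       }
       return 0;
  }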


> 
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 836adf613f81..8b6ef97a8d71 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -1092,10 +1092,8 @@ long hmm_range_fault(struct hmm_range *range, bool block)
> 
>  	do {
>  		/* If range is no longer valid force retry. */
> -		if (!range->valid) {
> -			up_read(&hmm->mm->mmap_sem);
> +		if (!range->valid)
>  			return -EAGAIN;
> -		}
> 
>  		vma = find_vma(hmm->mm, start);
>  		if (vma == NULL || (vma->vm_flags & device_vma))
> 


