Re: WARNING in __mmdrop

On 2019/7/23 11:02 PM, Michael S. Tsirkin wrote:
On Tue, Jul 23, 2019 at 09:34:29PM +0800, Jason Wang wrote:
On 2019/7/23 6:27 PM, Michael S. Tsirkin wrote:
Yes, since there could be multiple concurrent invalidation requests. We need
to count them to make sure we don't pin the wrong pages.
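
For illustration only, a minimal sketch of the counting scheme being described. The helper names here are hypothetical; only vq->mmu_lock and vq->invalidate_count come from the snippets quoted later in this thread:

        /* hypothetical helper run when an invalidation of guest memory begins */
        static void vhost_map_invalidate_start(struct vhost_virtqueue *vq)
        {
                spin_lock(&vq->mmu_lock);
                ++vq->invalidate_count;         /* one more invalidation in flight */
                /* the real callback also unpins/unmaps anything in the range */
                spin_unlock(&vq->mmu_lock);
        }

        /* hypothetical helper run when that invalidation completes */
        static void vhost_map_invalidate_end(struct vhost_virtqueue *vq)
        {
                spin_lock(&vq->mmu_lock);
                --vq->invalidate_count;         /* pairs with the increment above */
                spin_unlock(&vq->mmu_lock);
        }
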


I also wonder about ordering. kvm has this:
        /*
         * Used to check for invalidations in progress, of the pfn that is
         * returned by pfn_to_pfn_prot below.
         */
        mmu_seq = kvm->mmu_notifier_seq;
        /*
         * Ensure the read of mmu_notifier_seq isn't reordered with PTE reads in
         * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
         * risk the page we get a reference to getting unmapped before we have a
         * chance to grab the mmu_lock without mmu_notifier_retry() noticing.
         *
         * This smp_rmb() pairs with the effective smp_wmb() of the combination
         * of the pte_unmap_unlock() after the PTE is zapped, and the
         * spin_lock() in kvm_mmu_notifier_invalidate_<page|range_end>() before
         * mmu_notifier_seq is incremented.
         */
        smp_rmb();

Does this apply to us? Can't we use a seqlock instead so we do
not need to worry?
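
To make the seqlock idea concrete, here is a rough sketch of what that could look like. It is purely illustrative: vq->mmu_seqlock is a made-up field, and the retry/unpin handling is simplified:

        /* writer side, in the invalidate callback */
        write_seqlock(&vq->mmu_seqlock);
        /* ... unpin/unmap whatever overlaps the invalidated range ... */
        write_sequnlock(&vq->mmu_seqlock);

        /* reader side, in vhost_map_prefetch() */
        unsigned int seq;
        int npinned;

        do {
                /*
                 * read_seqbegin()/read_seqretry() contain the barriers, so the
                 * PTE reads done by GUP below cannot be ordered across them.
                 */
                seq = read_seqbegin(&vq->mmu_seqlock);
                npinned = __get_user_pages_fast(uaddr->uaddr, npages,
                                                uaddr->write, pages);
                if (!read_seqretry(&vq->mmu_seqlock, seq))
                        break;                  /* no invalidation raced with the pin */
                release_pages(pages, npinned);  /* raced: drop the pins and retry */
        } while (1);
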
I'm not familiar with kvm MMU internals, but we do everything under the
mmu_lock.

Thanks
I don't think this helps at all.

There's no lock between checking the invalidate counter and
__get_user_pages_fast() within vhost_map_prefetch. So it's possible
that __get_user_pages_fast() reads PTEs speculatively before
invalidate_count is read.
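
To spell the reordering concern out, here is an unsynchronized version of the check; this is not the vhost code, just an illustration of the race being described:

        if (READ_ONCE(vq->invalidate_count))            /* load A */
                return -EFAULT;
        npinned = __get_user_pages_fast(uaddr->uaddr, npages,
                                        uaddr->write, pages);   /* PTE loads B */
        /*
         * With nothing ordering A against B, the CPU may perform the PTE
         * loads in B before load A completes.  B can then observe a PTE from
         * before an invalidation whose completion A already observed
         * (invalidate_count back to zero), and we end up pinning a stale page.
         */
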


In vhost_map_prefetch() we do:

         spin_lock(&vq->mmu_lock);

         ...

         err = -EFAULT;
         if (vq->invalidate_count)
                 goto err;

         ...

         npinned = __get_user_pages_fast(uaddr->uaddr, npages,
                                         uaddr->write, pages);

         ...

         spin_unlock(&vq->mmu_lock);

Is this not sufficient?

Thanks
So what orders __get_user_pages_fast wrt invalidate_count read?


So in the invalidate_end() callback we have:

        spin_lock(&vq->mmu_lock);
        --vq->invalidate_count;
        spin_unlock(&vq->mmu_lock);


So even if the PTE is read speculatively before invalidate_count is read (which only matters when invalidate_count is read as zero), the spinlock guarantees that we won't read any stale PTEs.
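
As I read it, the claim rests on the pairing of the two critical sections, roughly like this (simplified, not a verbatim quote of the driver):

        /* prefetch side: counter check and GUP both inside the lock */
        spin_lock(&vq->mmu_lock);               /* acquire */
        if (vq->invalidate_count)
                goto err;                       /* invalidation in flight: bail out */
        npinned = __get_user_pages_fast(uaddr->uaddr, npages,
                                        uaddr->write, pages);
        spin_unlock(&vq->mmu_lock);             /* release */

        /* invalidate side */
        spin_lock(&vq->mmu_lock);
        ++vq->invalidate_count;                 /* in invalidate_start, before the zap */
        spin_unlock(&vq->mmu_lock);
        /* ... MM zaps the PTEs ... */
        spin_lock(&vq->mmu_lock);
        --vq->invalidate_count;                 /* in invalidate_end, after the zap */
        spin_unlock(&vq->mmu_lock);

If the prefetch critical section sees invalidate_count == 0, it either ran entirely before the invalidate_start critical section (and the invalidation will tear the new mapping down again), or entirely after the invalidate_end one, in which case the acquire of mmu_lock orders the PTE reads in __get_user_pages_fast() after the zap.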

Thanks






