Re: [PATCH v2] mm: page_alloc: move mlocked flag clearance into free_pages_prepare()

On Tue, Oct 22, 2024, Yosry Ahmed wrote:
> On Mon, Oct 21, 2024 at 9:33 PM Roman Gushchin <roman.gushchin@xxxxxxxxx> wrote:
> >
> > On Tue, Oct 22, 2024 at 04:47:19AM +0100, Matthew Wilcox wrote:
> > > On Tue, Oct 22, 2024 at 02:14:39AM +0000, Roman Gushchin wrote:
> > > > On Mon, Oct 21, 2024 at 09:34:24PM +0100, Matthew Wilcox wrote:
> > > > > On Mon, Oct 21, 2024 at 05:34:55PM +0000, Roman Gushchin wrote:
> > > > > > Fix it by moving the mlocked flag clearance down to
> > > > > > free_pages_prepare().
> > > > >
> > > > > Urgh, I don't like this new reference to folio in free_pages_prepare().
> > > > > It feels like a layering violation.  I'll think about where else we
> > > > > could put this.
> > > >
> > > > I agree, but it feels like doing it in a nicer way would need quite some
> > > > work, and there's no way that could be backported to older kernels. As for
> > > > this fix, I don't have better ideas...
> > >
> > > Well, what is KVM doing that causes this page to get mapped to userspace?
> > > Don't tell me to look at the reproducer as it is 403 Forbidden.  All I
> > > can tell is that it's freed with vfree().
> > >
> > > Is it from kvm_dirty_ring_get_page()?  That looks like the obvious thing,
> > > but I'd hate to spend a lot of time on it and then discover I was looking
> > > at the wrong thing.
> >
> > One of the pages is vcpu->run, others belong to kvm->coalesced_mmio_ring.
> 
> Looking at kvm_vcpu_fault(), it seems like after mmap'ing the fd returned
> by KVM_CREATE_VCPU we can access one of the following:
> - vcpu->run
> - vcpu->arch.pio_data
> - vcpu->kvm->coalesced_mmio_ring
> - a page returned by kvm_dirty_ring_get_page()
> 
> It doesn't seem like any of these are reclaimable,

Correct, these are all kernel allocated pages that KVM exposes to userspace to
facilitate bidirectional sharing of large chunks of data.
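
For reference, kvm_vcpu_fault() in virt/kvm/kvm_main.c resolves the mmap
offset to one of those kernel pages roughly as follows (paraphrased sketch;
the exact config guards and arch hooks vary by kernel version):

static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf)
{
        struct kvm_vcpu *vcpu = vmf->vma->vm_file->private_data;
        struct page *page;

        if (vmf->pgoff == 0)
                /* the shared kvm_run structure */
                page = virt_to_page(vcpu->run);
#ifdef CONFIG_X86
        else if (vmf->pgoff == KVM_PIO_PAGE_OFFSET)
                /* x86 port I/O data */
                page = virt_to_page(vcpu->arch.pio_data);
#endif
#ifdef CONFIG_KVM_MMIO
        else if (vmf->pgoff == KVM_COALESCED_MMIO_PAGE_OFFSET)
                /* coalesced MMIO ring, shared VM-wide */
                page = virt_to_page(vcpu->kvm->coalesced_mmio_ring);
#endif
        else if (kvm_page_in_dirty_ring(vcpu->kvm, vmf->pgoff))
                page = kvm_dirty_ring_get_page(
                        &vcpu->dirty_ring,
                        vmf->pgoff - KVM_DIRTY_LOG_PAGE_OFFSET);
        else
                return kvm_arch_vcpu_fault(vcpu, vmf);

        get_page(page);
        vmf->page = page;
        return 0;
}

None of these come from the page cache; they are ordinary kernel allocations,
so when they are eventually freed (via vfree(), per the report above) they
never pass through __page_cache_release(), which is where the mlocked flag is
normally cleared.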

> why is mlock()'ing them supported to begin with?

Because no one realized it would be problematic, and KVM would have had to go out
of its way to prevent mlock().

> Even if we don't want mlock() to err in this case, shouldn't we just do
> nothing?

Ideally, yes.

> I see a lot of checks at the beginning of mlock_fixup() to check
> whether we should operate on the vma, perhaps we should also check for
> these KVM vmas?

Definitely not.  KVM may be doing something unexpected, but the VMA certainly
isn't unique enough to warrant mm/ needing dedicated handling.
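
For context, the existing filtering at the top of mlock_fixup() in mm/mlock.c
looks roughly like this (paraphrased; the exact set of predicates differs
across kernel versions):

        if (newflags == vma->vm_flags || (vma->vm_flags & VM_SPECIAL) ||
            is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
            vma_is_dax(vma) || vma_is_secretmem(vma))
                /* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
                goto out;

Nothing there keys off the backing driver; the checks are all generic VMA
properties, and adding a KVM-specific one would invert that layering.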

Focusing on KVM is likely a waste of time.  There are probably other subsystems
and/or drivers that .mmap() kernel allocated memory in the same way.  Odds are
good KVM is just the messenger, because syzkaller knows how to beat on KVM.  And
even if there aren't any other existing cases, nothing would prevent them from
coming along in the future.

> Or maybe set VM_SPECIAL in kvm_vcpu_mmap()? I am not sure tbh,
> but this doesn't seem right.

Agreed.  VM_DONTEXPAND is the only VM_SPECIAL flag that is remotely appropriate,
but setting VM_DONTEXPAND could theoretically break userspace, and other than
preventing mlock(), there is no reason why the VMA can't be expanded.  I doubt
any userspace VMM is actually remapping and expanding a vCPU mapping, but trying
to fudge around this outside of core mm/ feels kludgy and has the potential to
turn into a game of whack-a-mole.
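
Concretely, the rejected workaround would amount to something like this in
kvm_vcpu_mmap() (sketch only; upstream sets no special flags here):

static int kvm_vcpu_mmap(struct file *file, struct vm_area_struct *vma)
{
        /*
         * VM_DONTEXPAND makes the VMA VM_SPECIAL, so mlock_fixup()
         * would skip it, but it also forbids mremap() from growing
         * the mapping, which could break existing userspace.
         */
        vm_flags_set(vma, VM_DONTEXPAND);
        vma->vm_ops = &kvm_vcpu_vm_ops;
        return 0;
}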

> FWIW, I think moving the mlock clearing from __page_cache_release()
> to free_pages_prepare() (or another common function in the page
> freeing path) may be the right thing to do in its own right. I am just
> wondering why we are not questioning the mlock() on the KVM vCPU
> mapping to begin with.
> 
> Is there a use case for this that I am missing?

Not that I know of.  I suspect mlock() is allowed simply because it's allowed
by default.
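
For context, the fix under discussion moves that clearing into
free_pages_prepare(), along these lines (sketch of the v2 patch; the exact
stat accounting may differ):

        if (unlikely(folio_test_mlocked(folio))) {
                long nr_pages = folio_nr_pages(folio);

                __folio_clear_mlocked(folio);
                zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
                count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
        }

Since free_pages_prepare() sits on the common freeing path, this catches
mlocked pages regardless of whether they were ever on the LRU.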




