Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory

On Mon, Nov 14, 2022 at 03:02:37PM +0100, Vlastimil Babka wrote:
> On 11/1/22 16:19, Michael Roth wrote:
> > On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote:
> >> > 
> >> >   1) restoring kernel directmap:
> >> > 
> >> >      Currently SNP (and I believe TDX) need to either split or remove kernel
> >> >      direct mappings for restricted PFNs, since there is no guarantee that
> >> >      other PFNs within a 2MB range won't be used for non-restricted
> >> >      allocations (which will cause an RMP #PF in the case of SNP since the 2MB
> >> >      mapping overlaps with guest-owned pages)
> >> 
> >> Has the splitting and restoring been a well-discussed direction? I'm
> >> just curious whether there is other options to solve this issue.
> > 
> > For SNP it's been discussed for quite some time, and either splitting or
> > removing private entries from the directmap are the well-discussed ways I'm
> > aware of to avoid RMP violations due to some other kernel process using
> > a 2MB mapping to access shared memory if there are private pages that
> > happen to be within that range.
> > 
> > In both cases the issue of how to restore directmap as 2M becomes a
> > problem.
> > 
> > I was also under the impression TDX had similar requirements. If so,
> > do you know what the plan is for handling this for TDX?
> > 
> > There are also 2 potential alternatives I'm aware of, but these haven't
> > been discussed in much detail AFAIK:
> > 
> > a) Ensure confidential guests are backed by 2MB pages. shmem has a way to
> >    request 2MB THP pages, but I'm not sure how reliably we can guarantee
> >    that enough THPs are available, so if we went that route we'd probably
> >    be better off requiring the use of hugetlbfs as the backing store. But
> >    obviously that's a bit limiting and it would be nice to have the option
> >    of using normal pages as well. One nice thing with the invalidation
> >    scheme proposed here is that this would "Just Work" if we implement
> >    hugetlbfs support, so an admin that doesn't want any directmap
> >    splitting has this option available, otherwise it's done as a
> >    best-effort.
> > 
> > b) Implement general support for restoring directmap as 2M even when
> >    subpages might be in use by other kernel threads. This would be the
> >    most flexible approach since it requires no special handling during
> >    invalidations, but I think it's only possible if all the CPA
> >    attributes for the 2M range are the same at the time the mapping is
> >    restored/unsplit, so some potential locking issues there and still
> >    a chance of splitting the directmap over time.
> 
> I've been hoping that
> 
> c) using a mechanism such as [1] [2] where the goal is to group together
> these small allocations that need to increase directmap granularity so
> maximum number of large mappings are preserved.

Thanks for the references. I wasn't aware there was work in this area;
it opens up some possibilities on how to approach this.

> But I guess that means
> knowing at allocation time that this will happen. So I've been wondering how
> this would be possible to employ in the SNP/UPM case? I guess it depends on
> how we expect the private/shared conversions to happen in practice, and I
> don't know the details. I can imagine the following complications:
> 
> - a memfd_restricted region is created such that it's 2MB large/aligned,
> i.e. like case a) above, we can allocate it normally. Now, what if a 4k page
> in the middle is to be temporarily converted to shared for some
> communication between host and guest (can such a thing happen?). With the
> punch hole approach, I wonder if we end up fragmenting directmap
> unnecessarily? IIUC the now shared page will become backed by some other

Yes, we end up fragmenting in cases where a guest converts a sub-page to
a shared page, because the fallocate(PUNCH_HOLE) gets forwarded through
to shmem, which then splits the hugepage. At that point the subpage
might get used elsewhere, so we no longer have the ability to restore
the 2M mapping after invalidation/shutdown. We could potentially
intercept those fallocate() calls and only issue the invalidation once
all the subpages have been PUNCH_HOLE'd. KVM MMU invalidations would
still need to happen immediately, but since we rely on a KVM ioctl to do
the conversion in advance, we can rely on the KVM MMU invalidation that
happens at that point and simply make fallocate(PUNCH_HOLE) fail if
someone attempts it on a page that hasn't been converted to shared yet.
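
Roughly, as a sketch of that gating (struct/helper names like
restrictedmem_data and restrictedmem_range_is_shared() are placeholders
here, not the actual interfaces from this series):

  static long restrictedmem_punch_hole(struct restrictedmem_data *data,
                                       loff_t offset, loff_t len)
  {
          /*
           * Assumed helper: true only if every page in the range has
           * already been converted to shared via the KVM conversion
           * ioctl, i.e. the KVM MMU invalidation for it already happened.
           */
          if (!restrictedmem_range_is_shared(data, offset, len))
                  return -EINVAL;

          /* Safe to forward to shmem; no private subpages remain. */
          return vfs_fallocate(data->memfd,
                               FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                               offset, len);
  }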

Otherwise we could end up splitting the directmap for a good chunk of
pages depending on how the guest allocates shared pages, but I'm
slightly less concerned about that seeing as there are some general
solutions to directmap fragmentation being considered. I need to think
more about how these hooks would tie in to that though.

And since we'd only really be able to avoid unrecoverable splits if the
restrictedmem is hugepage-backed (if we get a bunch of 4K pages to begin
with there's no handling that would avoid fragmentation), it seems like
we'd end up relying on hugetlbfs support for instances where a host
really wants to avoid splitting, and maybe in the hugetlbfs case
fallocate(PUNCH_HOLE) is already a no-op of sorts? Either way, maybe
it's better to explore this aspect in the context of hugetlbfs support.
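
For the hugepage-backed case, the "lazy" handling mentioned earlier in
the thread could amount to tracking which subpages have been punched and
only releasing (and invalidating) the backing page once the whole 2MB
unit is gone. A rough sketch, with the tracking structure purely
illustrative:

  struct lazy_punch_state {
          /* one bit per 4K subpage of a 2MB backing page (512 on x86-64) */
          DECLARE_BITMAP(punched, PTRS_PER_PMD);
  };

  /*
   * Record a hole punch for one subpage; returns true only once every
   * subpage in the 2MB unit has been punched, at which point the backing
   * page can be invalidated/freed without ever having been split.
   */
  static bool lazy_punch_subpage(struct lazy_punch_state *state,
                                 unsigned int subpage_idx)
  {
          set_bit(subpage_idx, state->punched);
          return bitmap_full(state->punched, PTRS_PER_PMD);
  }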

> page (as the memslot supports both private and shared pages simultaneously).
> But does it make sense to really split the direct mapping (and e.g. the
> shmem page)? We could leave the whole 2MB unmapped without splitting if we
> didn't free the private 4k subpage.
> 
> - a restricted region is created that's below 2MB. If something like [1] is
> merged, it could be used for the backing pages to limit directmap
> fragmentation. But then in case it's eventually fallocated to become larger
> and gain one or more 2MB aligned ranges, the result is suboptimal. Unless
> in that case we migrate the existing pages to a THP-backed shmem, kinda like
> khugepaged collapses hugepages. But that would have to be coordinated with
> the guest, maybe not even possible?

Any migrations would need to be coordinated with SNP firmware at least.
I think it's possible, but that support is probably a ways out. Near-term
I think it might be more straightforward to say: if you don't want
directmap fragmentation (for SNP anyway), you need to ensure restricted
ranges are backed by THPs or hugetlbfs, and make that the basis for
avoiding directmap splitting for now. Otherwise, it's simply done as a
best-effort, and then maybe over time, with things like [1] and
migration support in place, this restriction can go away, or at least
become less impactful.
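
And to make the "best-effort" part concrete: at invalidation time the
restore-as-2M decision from earlier in the thread basically reduces to
checking that the backing folio is still (at least) 2MB and that the
invalidated range covers all of it, something like the following
(restore_direct_map_2m() is just a stand-in for whatever ends up doing
the actual directmap fixup):

  static void restricted_invalidate_folio(struct folio *folio,
                                          pgoff_t start, pgoff_t end)
  {
          unsigned long nr = folio_nr_pages(folio);

          /* RMP and other per-page cleanup for the range would go here. */

          if (folio_order(folio) >= HPAGE_PMD_ORDER &&
              start <= folio->index && end >= folio->index + nr) {
                  /*
                   * The whole 2MB (or larger) folio is going away at
                   * once, so the kernel direct mapping for it can be
                   * restored as a large mapping rather than staying
                   * split.
                   */
                  restore_direct_map_2m(folio_pfn(folio), nr);
          }
  }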

Thanks,

Mike

> 
> [1] https://lore.kernel.org/all/20220127085608.306306-1-rppt@kernel.org/
> [2] https://lwn.net/Articles/894557/
> 
> >> 
> >> > 
> >> >      Previously we were able to restore 2MB mappings to some degree
> >> >      since both shared/restricted pages were all pinned, so anything
> >> >      backed by a THP (or hugetlb page once that is implemented) at guest
> >> >      teardown could be restored as 2MB direct mapping.
> >> > 
> >> >      Invalidation seems like the most logical time to have this happen,
> >> 
> >> Currently invalidation only happens at user-initiated fallocate(). It
> >> does not cover the VM teardown case, where restoring might also be
> >> expected to be handled.
> > 
> > Right, I forgot to add that in my proposed changes I added invalidations
> > for any still-allocated private pages present when the restricted memfd
> > notifier is unregistered. This was needed to avoid leaking pages back to
> > the kernel that still need directmap or RMP table fixups. I also added
> > similar invalidations for memfd->release(), since it seems possible that
> > userspace might close() it before shutting down the guest, but maybe
> > the latter is not needed if KVM takes a reference on the FD for the
> > life of the guest.
> > 
> >> 
> >> >      but whether or not to restore as 2MB requires the order to be 2MB
> >> >      or larger, and for the GPA range being invalidated to cover the entire
> >> >      2MB (otherwise it means the page was potentially split and some
> >> >      subpages freed back to the host already, in which case it can't be
> >> >      restored as 2MB).
> >> > 
> >> >   2) Potentially fewer invalidations:
> >> >       
> >> >      If we pass the entire folio or compound_page as part of
> >> >      invalidation, we only need to issue 1 invalidation per folio.
> >> 
> >> I'm not sure I agree; the current invalidation covers the whole range
> >> that is passed from userspace, and the invalidation is invoked only
> >> once for each userspace fallocate().
> > 
> > That's true, it only reduces invalidations if we decide to provide a
> > struct page/folio as part of the invalidation callbacks, which isn't
> > the case yet. Sorry for the confusion.
> > 
> >> 
> >> > 
> >> >   3) Potentially useful for hugetlbfs support:
> >> > 
> >> >      One issue with hugetlbfs is that we don't support splitting the
> >> >      hugepage in such cases, which was a big obstacle prior to UPM. Now
> >> >      however, we may have the option of doing "lazy" invalidations where
> >> >      fallocate(PUNCH_HOLE, ...) won't free a shmem-allocated page unless
> >> >      all the subpages within the 2M range are either hole-punched, or the
> >> >      guest is shut down, so in that way we never have to split it. Sean
> >> >      was pondering something similar in another thread:
> >> > 
> >> >        https://lore.kernel.org/linux-mm/YyGLXXkFCmxBfu5U@google.com/
> >> > 
> >> >      Issuing invalidations with folio-granularity ties in fairly well
> >> >      with this sort of approach if we end up going that route.
> >> 
> >> There is a semantics difference between the current one and the
> >> proposed one: the invalidation range is exactly what userspace passed
> >> down to the kernel (being fallocated), while the proposed one will be a
> >> subset of that (if the userspace-provided addr/size is not aligned to a
> >> power of two). I'm not quite confident this difference has no side
> >> effects.
> > 
> > In theory userspace should not be allocating/hole-punching restricted
> > pages for GPA ranges that are already mapped as private in the xarray,
> > and KVM could potentially fail such requests (though it doesn't currently).
> > 
> > But if we somehow enforced that, then we could rely on
> > KVM_MEMORY_ENCRYPT_REG_REGION to handle all the MMU invalidation stuff,
> > which would free up the restricted fd invalidation callbacks to be used
> > purely to handle things like RMP/directmap fixups prior to returning
> > restricted pages back to the host. So that was sort of my thinking on why the
> > new semantics would still cover all the necessary cases.
> > 
> > -Mike
> > 
> >> 
> >> > 
> >> > I need to rework things for v9, and we'll probably want to use struct
> >> > folio instead of struct page now, but as a proof-of-concept of sorts this
> >> > is what I'd added on top of v8 of your patchset to implement 1) and 2):
> >> > 
> >> >   https://github.com/mdroth/linux/commit/127e5ea477c7bd5e4107fd44a04b9dc9e9b1af8b
> >> > 
> >> > Does an approach like this seem reasonable? Should we work this into the
> >> > base restricted memslot support?
> >> 
> >> If the above-mentioned semantics difference is not a problem, I don't
> >> have a strong objection to this.
> >> 
> >> Sean, since you have a much better understanding of this, what is your
> >> take on this?
> >> 
> >> Chao
> >> > 
> >> > Thanks,
> >> > 
> >> > Mike
> 


