Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory

On Tue, Nov 01, 2022 at 07:37:29PM +0800, Chao Peng wrote:
> On Mon, Oct 31, 2022 at 12:47:38PM -0500, Michael Roth wrote:
> > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> > > From: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
> > > 
> > > +struct restrictedmem_data {
> > > +	struct mutex lock;
> > > +	struct file *memfd;
> > > +	struct list_head notifiers;
> > > +};
> > > +
> > > +static void restrictedmem_notifier_invalidate(struct restrictedmem_data *data,
> > > +				 pgoff_t start, pgoff_t end, bool notify_start)
> > > +{
> > > +	struct restrictedmem_notifier *notifier;
> > > +
> > > +	mutex_lock(&data->lock);
> > > +	list_for_each_entry(notifier, &data->notifiers, list) {
> > > +		if (notify_start)
> > > +			notifier->ops->invalidate_start(notifier, start, end);
> > > +		else
> > > +			notifier->ops->invalidate_end(notifier, start, end);
> > > +	}
> > > +	mutex_unlock(&data->lock);
> > > +}
> > > +
> > > +static int restrictedmem_release(struct inode *inode, struct file *file)
> > > +{
> > > +	struct restrictedmem_data *data = inode->i_mapping->private_data;
> > > +
> > > +	fput(data->memfd);
> > > +	kfree(data);
> > > +	return 0;
> > > +}
> > > +
> > > +static long restrictedmem_fallocate(struct file *file, int mode,
> > > +				    loff_t offset, loff_t len)
> > > +{
> > > +	struct restrictedmem_data *data = file->f_mapping->private_data;
> > > +	struct file *memfd = data->memfd;
> > > +	int ret;
> > > +
> > > +	if (mode & FALLOC_FL_PUNCH_HOLE) {
> > > +		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> > > +			return -EINVAL;
> > > +	}
> > > +
> > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, true);
> > > +	ret = memfd->f_op->fallocate(memfd, mode, offset, len);
> > > +	restrictedmem_notifier_invalidate(data, offset, offset + len, false);
> > > +	return ret;
> > > +}
> > 
> > In v8 there was some discussion about potentially passing the page/folio
> > and order as part of the invalidation callback, I ended up needing
> > something similar for SEV-SNP, and think it might make sense for other
> > platforms. This main reasoning is:
> 
> In that context, what we talked about was inaccessible_get_pfn(); I was
> not aware there was a need for an invalidation callback as well.

Right, your understanding is correct. I think Sean had only mentioned in
passing that it was something we could potentially do, and in the cases I
was looking at it ended up being useful. I only mentioned it so I don't
seem like I'm too far out in the weeds here :)

> 
> > 
> >   1) restoring kernel directmap:
> > 
> >      Currently SNP (and I believe TDX) need to either split or remove
> >      kernel direct mappings for restricted PFNs, since there is no
> >      guarantee that other PFNs within the same 2MB range won't be used
> >      for non-restricted memory (which would cause an RMP #PF in the
> >      case of SNP, since the 2MB mapping would overlap with guest-owned
> >      pages)
> 
> Has the splitting and restoring been a well-discussed direction? I'm
> just curious whether there are other options to solve this issue.

For SNP it's been discussed for quite some time, and splitting or
removing private entries from the directmap are the well-discussed ways
I'm aware of to avoid RMP violations caused by some other kernel process
using a 2MB mapping to access shared memory when private pages happen to
fall within that range.

In both cases, how to restore the directmap as 2M afterwards becomes a
problem.

I was also under the impression TDX had similar requirements. If so,
do you know what the plan is for handling this for TDX?

There are also 2 potential alternatives I'm aware of, but these haven't
been discussed in much detail AFAIK:

a) Ensure confidential guests are backed by 2MB pages. shmem has a way to
   request 2MB THP pages, but I'm not sure how reliably we can guarantee
   that enough THPs are available, so if we went that route we'd probably
   be better off requiring the use of hugetlbfs as the backing store. But
   obviously that's a bit limiting, and it would be nice to have the
   option of using normal pages as well. One nice thing with the
   invalidation scheme proposed here is that it would "Just Work" if we
   implement hugetlbfs support, so an admin who doesn't want any
   directmap splitting has that option available; otherwise it's done on
   a best-effort basis.

b) Implement general support for restoring the directmap as 2M even when
   subpages might be in use by other kernel threads. This would be the
   most flexible approach since it requires no special handling during
   invalidations, but I think it's only possible if all the CPA
   attributes for the 2M range are the same at the time the mapping is
   restored/unsplit, so there are some potential locking issues there,
   and still a chance of splitting the directmap over time.
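To make the constraint in (b) concrete, here's a minimal userspace sketch
(function name and attribute encoding are hypothetical, not actual kernel
interfaces) of the check that every 4K subpage carries identical attributes
before a split 2M directmap range could be collapsed back to a single entry:

```c
#include <stdbool.h>
#include <stdint.h>

#define SUBPAGES_PER_2M 512

/* Hypothetical sketch: a split 2M directmap range can only be collapsed
 * back into a single 2M entry if every 4K subpage has identical page
 * attributes at that moment (e.g. none still marked not-present because
 * it backs a private page). */
static bool can_unsplit_2m(const uint32_t attrs[SUBPAGES_PER_2M])
{
	for (int i = 1; i < SUBPAGES_PER_2M; i++)
		if (attrs[i] != attrs[0])
			return false;
	return true;
}
```

The locking concern above comes from the fact that this uniformity can
change under you unless attribute updates for the range are serialized
against the unsplit attempt.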

> 
> > 
> >      Previously we were able to restore 2MB mappings to some degree
> >      since both shared/restricted pages were all pinned, so anything
> >      backed by a THP (or hugetlb page once that is implemented) at guest
> >      teardown could be restored as 2MB direct mapping.
> > 
> >      Invalidation seems like the most logical time to have this happen,
> 
> Currently invalidation only happens at user-initiated fallocate(). It
> does not cover the VM teardown case, where the restoring might also be
> expected to happen.

Right, I forgot to add that in my proposed changes I added invalidations
for any still-allocated private pages present when the restricted memfd
notifier is unregistered. This was needed to avoid leaking pages back to
the kernel that still need directmap or RMP table fixups. I also added
similar invalidations for memfd->release(), since it seems possible that
userspace might close() it before shutting down the guest, but maybe the
latter is not needed if KVM takes a reference on the FD for the life of
the guest.
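As a rough userspace sketch of that teardown step (all names hypothetical,
not the actual kernel interfaces), the idea is to walk the registered
notifiers and invalidate any still-allocated range before the pages are
handed back:

```c
#include <stddef.h>

/* Hypothetical sketch of teardown invalidation: before the restricted
 * memfd goes away, walk every registered notifier and invalidate any
 * still-allocated range, so RMP/directmap fixups can run before the
 * pages are returned to the kernel. */

struct rmem_notifier {
	void (*invalidate)(struct rmem_notifier *n,
			   unsigned long start, unsigned long end);
	struct rmem_notifier *next;
};

struct rmem {
	struct rmem_notifier *notifiers;      /* singly-linked list */
	unsigned long alloc_start, alloc_end; /* still-allocated range */
};

static void rmem_teardown(struct rmem *r)
{
	for (struct rmem_notifier *n = r->notifiers; n; n = n->next)
		n->invalidate(n, r->alloc_start, r->alloc_end);
}

/* Example notifier callback counting invalidations (for demonstration). */
static int inval_calls;
static void count_inval(struct rmem_notifier *n,
			unsigned long start, unsigned long end)
{
	(void)n;
	if (end > start)
		inval_calls++;
}
```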

> 
> >      but whether or not to restore as 2MB requires the order to be 2MB
> >      or larger, and the GPA range being invalidated to cover the entire
> >      2MB (otherwise it means the page was potentially split and some
> >      subpages were already freed back to the host, in which case it
> >      can't be restored as 2MB).
> > 
> >   2) Potentially less invalidations:
> >       
> >      If we pass the entire folio or compound_page as part of
> >      invalidation, we only needed to issue 1 invalidation per folio.
> 
> I'm not sure I agree; the current invalidation covers the whole range
> passed from userspace, and the invalidation is invoked only once for
> each userspace fallocate().

That's true, it only reduces invalidations if we decide to provide a
struct page/folio as part of the invalidation callbacks, which isn't
the case yet. Sorry for the confusion.
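For illustration, the restore-as-2MB condition from 1) could be checked
along these lines (a userspace sketch with hypothetical names, assuming
4K base pages):

```c
#include <stdbool.h>
#include <stdint.h>

#define HPAGE_SIZE   (2UL << 20)	/* 2MB */
#define PAGE_SIZE_4K 4096UL

/* Hypothetical sketch: restoring the directmap as 2MB requires the
 * backing page's order to cover at least 2MB, and the invalidated range
 * [start, end) to span the whole aligned 2MB region; otherwise some
 * subpages may already be back in the host's hands. */
static bool can_restore_2m(uint64_t start, uint64_t end, unsigned int order)
{
	/* Backing page must be >= 2MB (order 9 with 4K base pages). */
	if ((PAGE_SIZE_4K << order) < HPAGE_SIZE)
		return false;

	/* Invalidated range must cover the entire aligned 2MB region. */
	return (start & (HPAGE_SIZE - 1)) == 0 && end >= start + HPAGE_SIZE;
}
```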

> 
> > 
> >   3) Potentially useful for hugetlbfs support:
> > 
> >      One issue with hugetlbfs is that we don't support splitting the
> >      hugepage in such cases, which was a big obstacle prior to UPM. Now
> >      however, we may have the option of doing "lazy" invalidations where
> >      fallocate(PUNCH_HOLE, ...) won't free a shmem-allocated page unless
> >      all the subpages within the 2M range are either hole-punched, or the
> >      guest is shut down, so in that way we never have to split it. Sean
> >      was pondering something similar in another thread:
> > 
> >        https://lore.kernel.org/linux-mm/YyGLXXkFCmxBfu5U@google.com/
> > 
> >      Issuing invalidations with folio-granularity ties in fairly well
> >      with this sort of approach if we end up going that route.
> 
> There is a semantics difference between the current one and the proposed
> one: the invalidation range is exactly what userspace passed down to the
> kernel (being fallocated), while the proposed one will be a subset of
> that (if the userspace-provided addr/size is not aligned to a power of
> two). I'm not quite confident this difference has no side effect.

In theory userspace should not be allocating/hole-punching restricted
pages for GPA ranges that are already mapped as private in the xarray,
and KVM could potentially fail such requests (though it doesn't
currently).

But if we somehow enforced that, then we could rely on
KVM_MEMORY_ENCRYPT_REG_REGION to handle all the MMU invalidation stuff,
which would free up the restricted fd invalidation callbacks to be used
purely for things like RMP/directmap fixups prior to returning
restricted pages back to the host. So that was my thinking as to why the
new semantics would still cover all the necessary cases.

-Mike

> 
> > 
> > I need to rework things for v9, and we'll probably want to use struct
> > folio instead of struct page now, but as a proof-of-concept of sorts this
> > is what I'd added on top of v8 of your patchset to implement 1) and 2):
> > 
> >   https://github.com/mdroth/linux/commit/127e5ea477c7bd5e4107fd44a04b9dc9e9b1af8b
> > 
> > Does an approach like this seem reasonable? Should we work this into
> > the base restricted memslot support?
> 
> If the above-mentioned semantics difference is not a problem, I don't
> have a strong objection to this.
> 
> Sean, since you have a much better understanding of this, what is your
> take on this?
> 
> Chao
> > 
> > Thanks,
> > 
> > Mike


