On 5/19/22 08:37, Chao Peng wrote:
> Extend the memslot definition to provide guest private memory through a
> file descriptor (fd) instead of userspace_addr (hva). Such guest private
> memory (fd) may never be mapped into userspace, so no userspace_addr
> (hva) can be used. Instead, add two new fields
> (private_fd/private_offset), which, together with the existing
> memory_size, represent the private memory range. Such a memslot can
> still have the existing userspace_addr (hva). In use, a single memslot
> can maintain both private memory through the private fd
> (private_fd/private_offset) and shared memory through the hva
> (userspace_addr). A GPA is considered private by KVM if the memslot has
> a private fd and the corresponding page in the private fd is populated;
> otherwise, it's shared.
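As I read that description, the proposed uAPI amounts to roughly the
following; the struct layout and helper names below are my paraphrase of
the text, not necessarily what the patch actually defines:

  /* Sketch of the extended memslot implied by the description above.
   * Only private_fd/private_offset are new; the rest mirrors the
   * existing struct kvm_userspace_memory_region. */
  struct kvm_userspace_memory_region_ext {
          __u32 slot;
          __u32 flags;
          __u64 guest_phys_addr;
          __u64 memory_size;      /* covers the whole shared+private range */
          __u64 userspace_addr;   /* hva for the shared backing */
          __u64 private_offset;   /* offset into private_fd */
          __u32 private_fd;       /* fd providing the private backing */
  };

  /* The implied selection rule: a GPA is private iff the slot has a
   * private fd and the corresponding page in that fd is populated. */
  static bool gpa_is_private(struct kvm_memory_slot *slot, gfn_t gfn)
  {
          return slot_has_private_fd(slot) &&           /* hypothetical */
                 private_fd_page_populated(slot, gfn);  /* hypothetical */
  }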
So this is a strange API and, IMO, a layering violation. I want to make
sure that we're all actually on board with making this a permanent part
of the Linux API. Specifically, we end up with a multiplexing situation
as you have described. For a given GPA, there are *two* possible host
backings: an fd-backed one (from the fd, which is private for now but
might end up shared depending on future extensions) and a
VMA-backed one. The selection of which one backs the address is made
internally by whatever backs the fd.
This is, IMO, a clear layering violation. Normally, an fd has an
associated address space, and pages in that address space can have
contents, can be holes that appear to contain all zeros, or can be
holes that are inaccessible. If you try to access a hole, you get
whatever is in the hole.
But now, with this patchset, the fd is more of an overlay and you get
*something else* if you try to access through the hole.
This results in operations on the fd bubbling up to the KVM mapping in
what is, IMO, a strange way. If the user punches a hole, KVM has to
modify its mappings such that the GPA goes to whatever VMA may be there.
(And update the RMP, the hypervisor's tables, or whatever else might
actually control privateness.) Conversely, if the user does fallocate
to fill a hole, the guest mapping *to an unrelated page* has to be
zapped so that the fd's page shows up. And the RMP needs updating, etc.
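To make that bubbling-up concrete, the hole-punch direction looks
something like this; the callback and helper names here are invented
for illustration, not taken from the patchset:

  /* Userspace punches a hole in the private fd... */
  fallocate(private_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
            offset, len);

  /* ...and the backing store must call back into KVM, roughly: */
  static void private_fd_invalidate(struct kvm *kvm,
                                    pgoff_t start, pgoff_t end)
  {
          /* Zap the private GPA mappings covering [start, end)... */
          kvm_zap_private_range(kvm, start, end);      /* hypothetical */
          /* ...and update the RMP / hypervisor tables / whatever else
           * controls privateness, so those GPAs now fault in through
           * the VMA instead. */
          arch_make_range_shared(kvm, start, end);     /* hypothetical */
  }

The fallocate-to-fill direction is the mirror image: zap the (unrelated)
VMA-backed mapping and flip the range back to private.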
I am lukewarm on this for a few reasons.
1. This is weird. AFAIK nothing else works like this. Obviously this
is subjective, but "weird" and "layering violation" sometimes translate
to "problematic locking".
2. fd-backed private memory can't have normal holes. If I make a memfd,
punch a hole in it, and mmap(MAP_SHARED) it, I end up with a page that
reads as zero. If I write to it, the page gets allocated. But with
this new mechanism, if I punch a hole and put it in a memslot, reads and
writes go somewhere else (the ordinary behavior is demonstrated after
point 3b below). So what if I actually wanted lazily allocated private
zeros?
2b. For a hypothetical future extension in which an fd can also have
shared pages (for conversion, for example, or simply because the fd
backing might actually be more efficient than indirecting through VMAs
and therefore get used for shared memory or entirely non-confidential
VMs), lazy fd-backed zeros sound genuinely useful.
3. The TDX hardware capability is not fully exposed. TDX can have a private
page and a shared page at GPAs that differ only by the private bit.
Sure, no one plans to use this today, but baking this into the user ABI
throws away half the potential address space.
3b. Any software solution that works like TDX (which seems like an
eminently reasonable design to me) has the same issue.
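For reference, here is the ordinary memfd hole behavior that point 2 is
about, as a small self-contained demo (glibc >= 2.27 for memfd_create):

  /* A punched hole in a memfd reads as zeros through a MAP_SHARED
   * mapping, and writing to it lazily allocates a fresh page. */
  #define _GNU_SOURCE
  #include <assert.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          long psize = sysconf(_SC_PAGESIZE);
          int fd = memfd_create("demo", 0);
          ftruncate(fd, psize);

          char *p = mmap(NULL, psize, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
          p[0] = 1;                       /* allocate the page */
          fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                    0, psize);            /* punch it back out */

          assert(p[0] == 0);              /* the hole reads as zero... */
          p[0] = 2;                       /* ...and a write reallocates */
          printf("hole read as zero, then accepted a write\n");

          munmap(p, psize);
          close(fd);
          return 0;
  }

Under the proposed mechanism, a punched hole in the private fd instead
redirects guest accesses to whatever backs the hva, so this lazy-zero
behavior is not available.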
The alternative would be to have some kind of separate table or bitmap
(part of the memslot?) that tells KVM whether a GPA should map to the fd.
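Concretely, I'm imagining something like the sketch below; the bitmap
field and helper are invented names, just to illustrate the shape:

  /* One bit per page in the slot: set means the GPA maps to the
   * private fd, clear means it maps through userspace_addr. The bit
   * would be flipped by an explicit conversion operation rather than
   * inferred from whether the fd happens to have a page populated. */
  struct kvm_memory_slot {
          /* ...existing fields (base_gfn, npages, ...)... */
          unsigned long *private_bitmap;  /* npages bits; invented */
  };

  static bool kvm_gpa_is_private(struct kvm_memory_slot *slot, gfn_t gfn)
  {
          return slot->private_bitmap &&
                 test_bit(gfn - slot->base_gfn, slot->private_bitmap);
  }

That decouples "which backing does this GPA use" from "what does the fd
happen to contain", and the fd goes back to having ordinary hole
semantics.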
What do you all think?