On Tue, Apr 05, 2022, Quentin Perret wrote:
> On Monday 04 Apr 2022 at 15:04:17 (-0700), Andy Lutomirski wrote:
> > >> - it can be very useful for protected VMs to do shared=>private
> > >> conversions. Think of a VM receiving some data from the host in a
> > >> shared buffer, and then it wants to operate on that buffer without
> > >> risking leaking confidential information in a transient state. In
> > >> that case the most logical thing to do is to convert the buffer back
> > >> to private, do whatever needs to be done on that buffer (decrypting a
> > >> frame, ...), and then share it back with the host to consume it;
> > >
> > > If performance is a motivation, why would the guest want to do two
> > > conversions instead of just doing an internal memcpy() to/from a
> > > private page? I would be quite surprised if multiple exits and TLB
> > > shootdowns are actually faster, especially at any kind of scale where
> > > zapping stage-2 PTEs will cause lock contention and IPIs.
> >
> > I don't know the numbers or all the details, but this is arm64, which is
> > a rather better architecture than x86 in this regard. So maybe it's not
> > so bad, at least in very simple cases, ignoring all implementation
> > details. (But see below.) Also the systems in question tend to have
> > fewer CPUs than some of the massive x86 systems out there.
>
> Yep. I can try and do some measurements if that's really necessary, but
> I'm really convinced the cost of the TLBI for the shared->private
> conversion is going to be significantly smaller than the cost of
> memcpying the buffer twice in the guest for us.

It's not just the TLB shootdown; the VM-Exits aren't free either. And
barring non-trivial improvements to KVM's MMU, e.g. sharding of mmu_lock,
modifying the page tables will block all other updates and MMU operations.
Taking mmu_lock for read, should arm64 ever convert to a rwlock, is not an
option because KVM needs to block other conversions to avoid races.

Hmm, though batching multiple pages into a single request would mitigate
most of the overhead.

> There are variations of that idea: e.g. allow userspace to mmap the
> entire private fd but w/o taking a reference on pages mapped with
> PROT_NONE. And then the VMM can use mprotect() in response to
> share/unshare requests. I think Marc liked that idea as it keeps the
> userspace API closer to normal KVM -- there actually is a
> straightforward gpa->hva relation. Not sure how much that would impact
> the implementation at this point.
>
> For the shared=>private conversion, this would be something like so:
>
> - the guest issues a hypercall to unshare a page;
>
> - the hypervisor forwards the request to the host;
>
> - the host kernel forwards the request to userspace;
>
> - userspace then munmap()s the shared page;
>
> - KVM then tries to take a reference to the page. If it succeeds, it
>   re-enters the guest with a flag of some sort saying that the share
>   succeeded, and the hypervisor will adjust pgtables accordingly. If
>   KVM failed to take a reference, it flags this and the hypervisor will
>   be responsible for communicating that back to the guest. This means
>   the guest must handle failures (possibly fatal).
>
> (There are probably many ways in which we can optimize this, e.g. by
> having the host proactively munmap() pages it no longer needs so that
> the unshare hypercall from the guest doesn't need to exit all the way
> back to host userspace.)

...
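For what it's worth, a userspace-only sketch of the mprotect() flavor of
that flow could look like the below. To be clear about what's assumed:
memfd_create() is only a stand-in for however the private fd would actually
be created, and handle_share()/handle_unshare() are stand-ins for whatever
exit/ioctl plumbing would forward the guest's hypercall to the VMM; none of
that API exists today.

/*
 * Userspace-only sketch of the "mmap the whole private fd with PROT_NONE,
 * mprotect() on share/unshare" idea.  memfd_create() stands in for however
 * the private fd would actually be created, and handle_share()/
 * handle_unshare() stand in for whatever plumbing forwards the guest's
 * hypercall to the VMM; none of that API exists today.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define PAGE_SIZE	4096UL
#define NR_PAGES	512UL

/* Guest shared gfn with the host: open a host-accessible window. */
static int handle_share(char *base, unsigned long gfn)
{
	return mprotect(base + gfn * PAGE_SIZE, PAGE_SIZE,
			PROT_READ | PROT_WRITE);
}

/*
 * Guest asked to unshare gfn: drop host access before re-entering the
 * guest so the page can be handed back to the protected VM.
 */
static int handle_unshare(char *base, unsigned long gfn)
{
	return mprotect(base + gfn * PAGE_SIZE, PAGE_SIZE, PROT_NONE);
}

int main(void)
{
	int fd = memfd_create("private-fd-stand-in", 0);
	char *base;

	if (fd < 0 || ftruncate(fd, NR_PAGES * PAGE_SIZE))
		return 1;

	/* Map the entire fd up front, with no host access by default. */
	base = mmap(NULL, NR_PAGES * PAGE_SIZE, PROT_NONE, MAP_SHARED, fd, 0);
	if (base == MAP_FAILED)
		return 1;

	/* Simulate one share -> host write -> unshare cycle on gfn 42. */
	if (handle_share(base, 42))
		return 1;
	memset(base + 42 * PAGE_SIZE, 0xaa, PAGE_SIZE);
	if (handle_unshare(base, 42))
		return 1;

	puts("share/unshare cycle done");
	return 0;
}

The nice property, as noted above, is that the gpa->hva relation stays
trivial: the VMM never remaps anything, it only flips protections on the
one big mapping (or munmap()s pages it truly no longer needs).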
> > Maybe there could be a special mode for the private memory fds in which
> > specific pages are marked as "managed by this fd but actually shared".
> > pread() and pwrite() would work on those pages, but not mmap(). (Or
> > maybe mmap() but the resulting mappings would not permit GUP.)

Unless I misunderstand what you intend by pread()/pwrite(), I think we'd
need to allow mmap(), otherwise e.g. uaccess from the kernel wouldn't work.

> > And transitioning them would be a special operation on the fd that is
> > specific to pKVM and wouldn't work on TDX or SEV.

To keep things feature agnostic (IMO, baking TDX vs SEV vs pKVM info into
the private-fd is a really bad idea), this could be handled by adding a
flag and/or callback into the notifier/client stating whether or not it
supports mapping a private-fd, and then mapping would be allowed if and
only if all consumers support/allow mapping (toy sketch at the bottom of
this mail).

> > Hmm. Sean and Chao, are we making a bit of a mistake by making these
> > fds technology-agnostic? That is, would we want to distinguish between
> > a TDX backing fd, a SEV backing fd, a software-based backing fd, etc?
> > API-wise this could work by requiring the fd to be bound to a KVM VM
> > instance and possibly even configured a bit before any other operations
> > would be allowed.

I really don't want to distinguish between each exact feature, but I've no
objection to adding flags/callbacks to track specific properties of the
downstream consumers, e.g. "can this memory be accessed by userspace" is a
fine abstraction. It also scales to multiple consumers (see above).
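To make the "all consumers must agree" part concrete, here is a toy,
userspace-only sketch of the flag/callback idea. Every name in it is made
up purely for illustration; it is not an actual or proposed kernel
interface, just the shape of the logic.

/*
 * Toy sketch of "mmap() on the private-fd is honored iff every registered
 * consumer can tolerate userspace mappings".  Every name here is made up
 * for illustration; this is not an actual or proposed kernel interface.
 */
#include <stdbool.h>
#include <stdio.h>

struct private_fd_consumer {
	const char *name;
	/* NULL callback == consumer doesn't care, i.e. mapping is fine. */
	bool (*can_map)(void);
};

static bool pkvm_can_map(void) { return true;  }	/* pKVM can cope */
static bool tdx_can_map(void)  { return false; }	/* TDX never can */

static const struct private_fd_consumer consumers[] = {
	{ .name = "pKVM", .can_map = pkvm_can_map },
	{ .name = "TDX",  .can_map = tdx_can_map  },
};

/* The fd's mmap() handler would check this before setting up a mapping. */
static bool private_fd_mapping_allowed(void)
{
	for (size_t i = 0; i < sizeof(consumers) / sizeof(consumers[0]); i++) {
		if (consumers[i].can_map && !consumers[i].can_map())
			return false;
	}
	return true;
}

int main(void)
{
	printf("mapping allowed: %s\n",
	       private_fd_mapping_allowed() ? "yes" : "no");
	return 0;
}

The private-fd itself stays completely technology-agnostic; all that
differs between pKVM, TDX, SEV, etc. is each consumer's answer to "can
userspace map this?".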