On Thu, Sep 14, 2023, David Hildenbrand wrote:
> On 14.09.23 05:50, Xiaoyao Li wrote:
> > It's the v2 RFC of enabling KVM gmem[1] as the backend for private
> > memory.
> > 
> > For confidential computing, KVM provides gmem/guest_mem interfaces for
> > userspace, like QEMU, to allocate user-inaccessible private memory. This
> > series aims to add gmem support in QEMU's RAMBlock so that each RAM can
> > have both hva-based shared memory and gmem_fd based private memory. QEMU
> > does the shared-private conversion on KVM_MEMORY_EXIT and discards the
> > memory.
> > 
> > It chooses the design that adds a "private" property to the host memory
> > backend. If the "private" property is set, QEMU will allocate/create KVM
> > gmem when initializing the RAMBlock of the memory backend.
> > 
> > This series also introduces the first user of KVM gmem,
> > KVM_X86_SW_PROTECTED_VM. A KVM_X86_SW_PROTECTED_VM with private KVM gmem
> > can be created with
> > 
> > $qemu -object sw-protected-vm,id=sp-vm0 \
> >       -object memory-backend-ram,id=mem0,size=1G,private=on \
> >       -machine q35,kernel_irqchip=split,confidential-guest-support=sp-vm0,memory-backend=mem0 \
> >       ...
> > 
> > Unfortunately this patch series fails the boot of OVMF at a very early
> > stage due to a triple fault, because KVM doesn't support emulating string
> > I/O to private memory.
> 
> Is support being added? Or have we figured out what it would take to make
> it work?

Hrm, this isn't something I've thought deeply about. The issue is that
anything that reaches any form of copy_{from,to}_user() will go kablooie
because KVM will always try to read/write the shared mappings. The best
case scenario is that the shared mapping is invalid and the uaccess faults.
The worst case scenario is that KVM reads/writes the wrong memory and sends
the guest into the weeds. Eww.

And we (well, at least I) definitely want to support this so that gmem can
be used for "regular" VMs, i.e. for VMs where userspace is in the TCB, but
for which userspace doesn't have access to guest memory by default.

It shouldn't be too hard to support. It's easy enough to wire up the hook
(thankfully there aren't _that_ many sites), and gmem only supports struct
page at the moment, so we go straight to kmap.
E.g. something like this:

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 54480655bcce..b500b0ce5ce3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3291,12 +3291,15 @@ static int next_segment(unsigned long len, int offset)
 		return len;
 }
 
-static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
-				 void *data, int offset, int len)
+static int __kvm_read_guest_page(struct kvm *kvm, struct kvm_memory_slot *slot,
+				 gfn_t gfn, void *data, int offset, int len)
 {
 	int r;
 	unsigned long addr;
 
+	if (kvm_mem_is_private(kvm, gfn))
+		return kvm_gmem_read(slot, gfn, data, offset, len);
+
 	addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
 	if (kvm_is_error_hva(addr))
 		return -EFAULT;
@@ -3309,9 +3312,8 @@ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
 int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 			int len)
 {
-	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
-
-	return __kvm_read_guest_page(slot, gfn, data, offset, len);
+	return __kvm_read_guest_page(kvm, gfn_to_memslot(kvm, gfn), gfn, data,
+				     offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_read_guest_page);
 
@@ -3320,7 +3322,7 @@ int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
 {
 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 
-	return __kvm_read_guest_page(slot, gfn, data, offset, len);
+	return __kvm_read_guest_page(vcpu->kvm, slot, gfn, data, offset, len);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_page);
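And the gmem read helper itself could be fairly dumb: grab the backing page
and memcpy() through a temporary kernel mapping. A very rough sketch of the
idea follows; kvm_gmem_get_page() is a made-up placeholder for however gmem
ends up handing back a pinned page for a gfn (e.g. something built on top of
kvm_gmem_get_pfn()), and huge pages, ordering with conversions, etc. are all
ignored:

	/*
	 * Rough sketch, not a real patch.  Resolve the gfn to its gmem page
	 * and copy out through a temporary kernel mapping.
	 */
	static int kvm_gmem_read(struct kvm_memory_slot *slot, gfn_t gfn,
				 void *data, int offset, int len)
	{
		struct page *page;
		void *kaddr;

		if (WARN_ON_ONCE(offset + len > PAGE_SIZE))
			return -EINVAL;

		/* Made-up helper: returns the pinned gmem page backing @gfn. */
		page = kvm_gmem_get_page(slot, gfn);
		if (IS_ERR(page))
			return PTR_ERR(page);

		/* gmem is struct-page backed for now, so kmap always works. */
		kaddr = kmap_local_page(page);
		memcpy(data, kaddr + offset, len);
		kunmap_local(kaddr);

		put_page(page);
		return 0;
	}

The write side would be the mirror image, copying into the mapping instead
of out of it.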
> > 2. hugepage support.
> > 
> > KVM gmem can be allocated from hugetlbfs. How does QEMU determine

Not yet it can't. gmem only supports THP, hugetlbfs is a future thing, if
it's ever supported. I wouldn't be at all surprised if we end up going down
a slightly different route and don't use hugetlbfs directly.

> > when to allocate KVM gmem with KVM_GUEST_MEMFD_ALLOW_HUGEPAGE. The
> > easiest solution is to create KVM gmem with KVM_GUEST_MEMFD_ALLOW_HUGEPAGE
> > only when the memory backend is a HostMemoryBackendFile of hugetlbfs.
> 
> Good question.
> 
> Probably "if the memory backend uses huge pages, also use huge pages for the
> private gmem" makes sense.
> 
> ... but it becomes a mess with preallocation ... which is what people should
> actually be using with hugetlb. And eventual double memory consumption ...
> but maybe that's all been taken care of already?
> 
> Probably it's best to leave hugetlb support as future work and start with
> something minimal.
> 
> > 
> > 3. What is KVM_X86_SW_PROTECTED_VM going to look like? and do we need it?
> > 
> 
> Why implement it when you have to ask others for a motivation? ;)
> 
> Personally, I'm not sure if it is really useful, especially in this state.

Yeah, as of today, KVM_X86_SW_PROTECTED_VM is mainly a development vehicle,
e.g. so that testing gmem doesn't require TDX/SNP hardware, debugging gmem
guests isn't brutally painful, etc.

Longer term, I have aspirations of being able to back most VMs with gmem,
but that's going to require quite a bit more work, e.g. gmem needs to be
mappable (when hardware allows it) so that gmem doesn't all but require
double mapping, KVM obviously needs to be able to read/write gmem, etc.

The value proposition is that having a guest-first memory type will allow
KVM to optimize and harden gmem in ways that wouldn't be feasible for a
more generic memory implementation. E.g. memory isn't mapped into host
userspace by default (makes it harder to accidentally corrupt the guest),
the guest can have *larger* mappings than host userspace, guest memory can
be served from a dedicated pool (similar-ish to hugetlb), the pool can be
omitted from the direct map, etc.

> > This series implements KVM_X86_SW_PROTECTED_VM because it's introduced
> > with gmem together on the KVM side and it's supposed to be the first user
> > who requires KVM gmem. However, the implementation is incomplete and
> > it lacks a definition of how KVM_X86_SW_PROTECTED_VM works.
> 
> Then it should not be included in this series, such that you can make
> progress with the gmem implementation for TDX guests instead?