On Fri, Oct 14, 2022, Vishal Annapurve wrote:
> On Fri, Oct 7, 2022 at 1:32 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >
> > On Fri, Aug 19, 2022, Vishal Annapurve wrote:
> > > Add a helper to query guest physical address for ucall pool
> > > so that guest can mark the page as accessed shared or private.
> > >
> > > Signed-off-by: Vishal Annapurve <vannapurve@xxxxxxxxxx>
> > > ---
> >
> > This should be handled by the SEV series[*].  Can you provide feedback on that
> > series if having a generic way to map the ucall address as shared won't work?
> >
> > [*] https://lore.kernel.org/all/20220829171021.701198-1-pgonda@xxxxxxxxxx
>
> Based on the SEV series you referred to, selftests are capable of
> accessing ucall pool memory by having encryption bit cleared (as set
> by guest pagetables) as allowed by generic API vm_vaddr_alloc_shared.
> This change is needed in the context of fd based private memory where
> guest (specifically non-confidential/sev guests) code in the selftests
> will have to explicitly indicate that ucall pool address range will be
> accessed by guest as shared.

Ah, right, the conversion needs an explicit hypercall, which gets downright
annoying because auto-converting shared pages would effectively require
injecting code into the start of every guest.

Ha!  I think we got too fancy.  This is purely for testing UPM, not any kind
of trust model, i.e. there's no need for KVM to treat userspace as untrusted.

Rather than jump through hoops just to let the guest dictate private vs.
shared, simply "trust" userspace when determining whether a page should be
mapped private.  Then the selftests can invoke the repurposed
KVM_MEMORY_ENCRYPT_(UN)REG_REGION ioctls as appropriate when
allocating/remapping guest private memory.

E.g. on top of UPM v8, I think the test hook boils down to:

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d68944f07b4b..d42d0e6bdd8c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4279,6 +4279,9 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault

 	fault->gfn = fault->addr >> PAGE_SHIFT;
 	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
+	fault->is_private = IS_ENABLED(CONFIG_KVM_PRIVATE_MEM_TESTING) &&
+			    kvm_slot_can_be_private(fault->slot) &&
+			    kvm_mem_is_private(vcpu->kvm, fault->gfn);

 	if (page_fault_handle_page_track(vcpu, fault))
 		return RET_PF_EMULATE;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8ffd4607c7d8..0dc5d0bf647c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1653,7 +1653,7 @@ static void kvm_replace_memslot(struct kvm *kvm,

 bool __weak kvm_arch_has_private_mem(struct kvm *kvm)
 {
-	return false;
+	return IS_ENABLED(CONFIG_KVM_PRIVATE_MEM_TESTING);
 }

 static int check_memory_region_flags(struct kvm *kvm,
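
And a minimal sketch of the userspace/selftest side, assuming the repurposed
KVM_MEMORY_ENCRYPT_(UN)REG_REGION ioctls take a guest physical range when
CONFIG_KVM_PRIVATE_MEM_TESTING is enabled.  The helper name and error handling
below are illustrative, not lifted from the UPM series:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Illustrative helper (not from the series): mark a GPA range as private, or
 * convert it back to shared, by (un)registering it via the repurposed
 * "encrypted region" ioctls on the VM fd.
 */
static void vm_set_memory_private(int vm_fd, uint64_t gpa, uint64_t size,
				  bool private)
{
	struct kvm_enc_region region = {
		.addr = gpa,
		.size = size,
	};
	unsigned long req = private ? KVM_MEMORY_ENCRYPT_REG_REGION :
				      KVM_MEMORY_ENCRYPT_UNREG_REGION;

	if (ioctl(vm_fd, req, &region))
		perror("KVM_MEMORY_ENCRYPT_(UN)REG_REGION");
}

With that, a selftest would flag the ucall pool (or any other shared range) by
simply not registering it, and register only the ranges it wants faulted in as
private, with no guest-side hypercall needed.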