Re: [PATCH 4/8] KVM: gmem: protect kvm_mmu_invalidate_end()

On Fri, Aug 18, 2023, Mingwei Zhang wrote:
> +Jacky Li
> 
> On Fri, Aug 18, 2023 at 3:45 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > On a separate note, the SEV hook that blasts WBINVD is still causing
> > > serious performance degradation with SNP when triggered via
> > > AutoNUMA/numad/KSM, etc.  As discussed previously, we plan to replace
> > > WBINVD with CLFLUSHOPT.
> >
> > Isn't the flush unnecessary when freeing shared memory?  My recollection is that
> > the problematic scenario is when encrypted memory is freed back to the host,
> > because KVM already flushes when mapping potentially encrypted memory into the
> > guest.
> >
> > With SNP+guest_memfd, private/encrypted memory should be unreachable via the
> > hva-based mmu_notifiers.  gmem should have full control of the page lifecycles,
> > i.e. it can get the kernel virtual address as appropriate, and so SNP shouldn't
> > need the nuclear option.
> >
> > E.g. something like this?
> >
> > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > index 07756b7348ae..1c6828ae391d 100644
> > --- a/arch/x86/kvm/svm/sev.c
> > +++ b/arch/x86/kvm/svm/sev.c
> > @@ -2328,7 +2328,7 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
> >
> >  void sev_guest_memory_reclaimed(struct kvm *kvm)
> >  {
> > -       if (!sev_guest(kvm))
> > +       if (!sev_guest(kvm) || sev_snp_guest(kvm))
> >                 return;
> >
> >         wbinvd_on_all_cpus();
> 
> I hope this is the final solution :)
> 
> So, short answer: no.
> 
> SNP+guest_memfd prevents untrusted host user space from directly
> modifying the data, which is good enough for CVE-2022-0171, but there
> is no guarantee that the host kernel cannot, in some scenarios, access
> the data and leave dirty cache lines behind. In fact, AFAIK, an SNP VM
> does not track whether each page was previously shared, does it? If a
> page was previously shared and was written by the host kernel or by
> devices before it was converted to private, nothing tracks that, and
> the dirty cache lines are still there!

There's an unstated assumption that KVM will do CLFLUSHOPT (if necessary) for
SEV-* guests when allocating into guest_memfd().
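
To make that assumption concrete, here is a minimal sketch of what such a flush
could look like; the helper name and its call site are assumptions for
illustration, not existing KVM code.  clflush_cache_range() is the stock kernel
helper, which uses CLFLUSHOPT when the CPU supports it.

#include <linux/highmem.h>
#include <asm/cacheflush.h>

/*
 * Hypothetical helper: flush the cache lines of a page that is about to be
 * mapped into an SEV-* guest as private, so stale dirty lines from earlier
 * host writes can't later clobber the encrypted contents.
 */
static void gmem_flush_page_before_private_map(struct page *page)
{
	void *va = kmap_local_page(page);

	clflush_cache_range(va, PAGE_SIZE);
	kunmap_local(va);
}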

> So, to avoid corner cases like the above, it seems we currently have
> to retain the property of flushing the cache whenever a guest memory
> mapping leaves the KVM NPT.

What I'm saying is that for guests whose private memory is backed by guest_memfd(),
which is all SNP guests, it should be impossible for memory that is reachable via
mmu_notifiers to be mapped in KVM's MMU as private.  So yes, KVM needs to flush
when memory is freed from guest_memfd(), but not for memory that is reclaimed by
mmu_notifiers, i.e. not for sev_guest_memory_reclaimed().
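
For completeness, a rough sketch of where the flush would live once it is tied
to guest_memfd() teardown rather than mmu_notifier reclaim; the hook name
kvm_gmem_flush_folio_on_free() and its placement are assumptions, not existing
KVM symbols.

#include <linux/highmem.h>
#include <asm/cacheflush.h>
#include <asm/cpufeature.h>

/*
 * Hypothetical gmem-side hook: called when a backing folio is freed, i.e.
 * when potentially encrypted data actually leaves gmem's control.
 */
static void kvm_gmem_flush_folio_on_free(struct folio *folio)
{
	unsigned long i, nr = folio_nr_pages(folio);

	/* Hardware-coherent parts (X86_FEATURE_SME_COHERENT) need no flush. */
	if (boot_cpu_has(X86_FEATURE_SME_COHERENT))
		return;

	/* Push out any dirty, encrypted cache lines before the page is reused. */
	for (i = 0; i < nr; i++) {
		void *va = kmap_local_folio(folio, i * PAGE_SIZE);

		clflush_cache_range(va, PAGE_SIZE);
		kunmap_local(va);
	}
}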



