On Fri, Dec 1, 2023 at 2:13 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Fri, Dec 01, 2023, Mingwei Zhang wrote:
> > On Fri, Dec 1, 2023 at 1:30 PM Kalra, Ashish <ashish.kalra@xxxxxxx> wrote:
> > > For SNP + gmem, where the HVA ranges covered by the MMU notifiers are
> > > not acting on encrypted pages, we are ignoring MMU invalidation
> > > notifiers for SNP guests as part of the SNP host patches being posted
> > > upstream, and are instead relying on gmem's own invalidation logic to
> > > clean them up on a per-folio basis.
> > >
> > > Thanks,
> > > Ashish
> >
> > Oh, I have no question about that. This series only applies to
> > SEV/SEV-ES types of VMs.
> >
> > For SNP + guest_memfd, I don't see the implementation details, but I
> > doubt you can ignore mmu_notifiers if the request does cover some
> > encrypted memory in error cases or corner cases. Does SNP enforce the
> > use of guest_memfd? How do we prevent exceptional cases? I am sure
> > you guys have already figured out the answers, so I don't plan to dig
> > deeper until the SNP host patches are accepted.
>
> KVM will not allow SNP guests to map VMA-based memory as encrypted/private,
> full stop. Any invalidations initiated by mmu_notifiers will therefore
> apply only to shared memory.

Remind me: if I (as an SEV-SNP guest) flip the C-bit in my own x86 page
table and write to some of the pages, am I generating encrypted dirty
cache lines? I understand that the RMP table may say "hey, it is shared",
but that's okay since I just don't need to PVALIDATE them, right?

> That approach doesn't work for SEV/SEV-ES because KVM can't prevent the
> guest from accessing memory as encrypted, i.e. KVM needs the #NPF due to
> RMP violation to intercept attempts to convert a GFN from shared to
> private.
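
To make Ashish's "gmem's own invalidation" point concrete, here is a rough
sketch modeled on the guest_memfd series (this is not the posted SNP code;
kvm_gmem_invalidate_begin()/end() come from the gmem series, while the
surrounding glue is simplified and omits locking):

/*
 * Sketch only: a hole-punch/truncation on a guest_memfd file zaps the
 * affected GFN ranges directly, folio by folio, so private memory never
 * depends on mmu_notifiers (which only see HVA ranges).
 */
static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
{
	struct kvm_gmem *gmem = inode->i_mapping->private_data;
	pgoff_t start = offset >> PAGE_SHIFT;
	pgoff_t end = (offset + len) >> PAGE_SHIFT;

	/* Zap SPTEs covering [start, end) before dropping the folios. */
	kvm_gmem_invalidate_begin(gmem, start, end);
	truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1);
	kvm_gmem_invalidate_end(gmem, start, end);

	return 0;
}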
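
Sean's "full stop" guarantee boils down to a check in the private fault
path. A sketch of the idea (kvm_slot_can_be_private() and
kvm_gmem_get_pfn() are from the guest_memfd series; the function wrapping
them here is hypothetical):

/*
 * Sketch only: private faults must be backed by guest_memfd.  A private
 * fault on a VMA-backed slot is rejected outright, so mmu_notifier
 * invalidations can only ever hit shared mappings.
 */
static int faultin_private_pfn(struct kvm *kvm, struct kvm_page_fault *fault)
{
	int max_order;

	if (!kvm_slot_can_be_private(fault->slot))
		return -EFAULT;	/* no private memory outside gmem */

	return kvm_gmem_get_pfn(kvm, fault->slot, fault->gfn,
				&fault->pfn, &max_order);
}

Shared faults continue to resolve through the normal VMA/HVA path, which
is exactly the memory the mmu_notifiers cover.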
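
As for the C-bit question: whatever the architectural answer for SNP turns
out to be, the guest-side mechanics are plain PTE bit arithmetic. A tiny
standalone illustration (the bit position is reported by CPUID 0x8000001F
EBX[5:0]; 51 below is only a typical example value, and the PTE contents
are made up):

#include <stdint.h>
#include <stdio.h>

/* Example C-bit position; real guests read CPUID 0x8000001F EBX[5:0]. */
#define CBIT_POS 51
#define CBIT (1ULL << CBIT_POS)

int main(void)
{
	/* Hypothetical PTE: present, writable, pointing at some PFN. */
	uint64_t pte = 0x000000012345f067ULL;

	printf("C=0 (shared):  %#018llx\n", (unsigned long long)pte);

	pte |= CBIT;	/* guest flips the page to encrypted/private */
	printf("C=1 (private): %#018llx\n", (unsigned long long)pte);

	pte &= ~CBIT;	/* ...and can flip it back to shared */
	printf("C=0 again:     %#018llx\n", (unsigned long long)pte);

	return 0;
}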