On Wed, Feb 28, 2024 at 5:39 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > This doesn't work. The ENC flag gets set on any SNP *capable* CPU, which results
> > > in false positives for SEV and SEV-ES guests[*].
> >
> > You didn't look at the patch did you? :)
>
> Guilty, sort of. I looked (and tested) the patch from the TDX series, but I didn't
> look at what you posted. But it's a moot point, because now I did look at what you
> posted, and it's still broken :-)
>
> > It does check for has_private_mem (alternatively I could have dropped the bit
> > in SVM code for SEV and SEV-ES guests).
>
> The problem isn't with *KVM* setting the bit, it's with *hardware* setting the
> bit for SEV and SEV-ES guests. That results in this:
>
>   .is_private = vcpu->kvm->arch.has_private_mem && (err & PFERR_GUEST_ENC_MASK),
>
> marking the fault as private. Which, in a vacuum, isn't technically wrong, since
> from hardware's perspective the vCPU access was "private". But from KVM's
> perspective, SEV and SEV-ES guests don't have private memory

vcpu->kvm->arch.has_private_mem is the flag from the SEV VM types series.
It's false on SEV and SEV-ES VMs, therefore fault->is_private is going to
be false as well. Is it ENOCOFFEE for you or ENODINNER for me? :)

Paolo

> And because the flag only gets set on SNP capable hardware (in my limited testing
> of a whole two systems), running the same VM on different hardware would result
> in faults being marked private on one system, but not the other. Which means that
> KVM can't rely on the flag being set for SEV or SEV-ES guests, i.e. we can't
> retroactively enforce anything (not to mention that that might break existing VMs).