On Mon, May 06, 2024, Ravi Bangoria wrote:
> On 03-May-24 5:21 AM, Sean Christopherson wrote:
> > On Tue, Apr 16, 2024, Ravi Bangoria wrote:
> >> Currently, LBR Virtualization is dynamically enabled and disabled for
> >> a vcpu by intercepting writes to MSR_IA32_DEBUGCTLMSR. This helps by
> >> avoiding unnecessary save/restore of LBR MSRs when nobody is using it
> >> in the guest. However, SEV-ES guests mandate LBR Virtualization to be
> >> _always_ ON[1] and thus this dynamic toggling doesn't work for SEV-ES
> >> guests; in fact it results in a fatal error:
> >>
> >> SEV-ES guest on Zen3, kvm-amd.ko loaded with lbrv=1
> >>
> >>   [guest ~]# wrmsr 0x1d9 0x4
> >>   KVM: entry failed, hardware error 0xffffffff
> >>   EAX=00000004 EBX=00000000 ECX=000001d9 EDX=00000000
> >>   ...
> >>
> >> Fix this by never intercepting MSR_IA32_DEBUGCTLMSR for SEV-ES guests.
> >
> > Uh, what?  I mean, sure, it works, maybe, I dunno.  But there's a _massive_
> > disconnect between the first paragraph and this statement.
> >
> > Oh, good gravy, it "works" because SEV already forces LBR virtualization.
> >
> > 	svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
> >
> > (a) the changelog needs to call that out.
>
> Sorry, I should have called that out explicitly.
>
> > (b) KVM needs to disallow SEV-ES if LBR virtualization is disabled by the
> > admin, i.e. if lbrv=false.
>
> That's what I initially thought. But since KVM currently allows booting SEV-ES
> guests even when lbrv=0 (by silently ignoring the lbrv value), erroring out
> would be a behavior change.

IMO, that's totally fine.  There are no hard guarantees regarding module params.

> > Alternatively, I would be a-ok simply deleting lbrv, e.g. to avoid yet more
> > printks about why SEV-ES couldn't be enabled.
> >
> > Hmm, I'd probably be more than ok.  Because AMD (thankfully, blessedly) uses
> > CPUID bits for SVM features, the admin can disable LBRV via clear_cpuid (or
> > whatever it's called now).  And there are hardly any checks on the feature,
> > so it's not like having a boolean saves anything.  AMD is clearly committed
> > to making sure LBRV works, so the odds of KVM really getting much value out
> > of a module param are low.
>
> Currently, lbrv is not enabled by default with model specific -cpu profiles in
> qemu. So I guess this is not backward compatible?

I am talking about LBRV being disabled in the _host_ kernel, not guest CPUID.
QEMU enabling LBRV only affects nested SVM, which is out of scope for SEV-ES.

> > And then when you delete lbrv, please add a WARN_ON_ONCE() sanity check in
> > sev_hardware_setup() (if SEV-ES is supported), because like the DECODEASSISTS
> > and FLUSHBYASID requirements, it's not super obvious that LBRV is a hard
> > requirement for SEV-ES (that's an understatement; I'm curious how someone
> > decided that LBR virtualization is where the line got drawn for "yeah, _this_
> > is mandatory").
>
> I'm not sure. Some ES internal dependency.
>
> In any case, the patch simply fixes 'missed clearing MSR Interception' for
> SEV-ES guests. So, would it be okay to apply this patch as is and do lbrv
> cleanup as a followup series?

No.

 (a) the lbrv module param mess needs to be sorted out (see the sketch after
     this list).
 (b) this is not a complete fix.
 (c) I'm not convinced it's the right way to fix this, at all.
 (d) there's a big gaping hole in KVM's handling of MSRs that are passed
     through to SEV-ES guests.
 (e) it's not clear to me that KVM needs to dynamically toggle LBRV for _any_
     guest.
 (f) I don't like that sev_es_init_vmcb() mucks with the LBRV intercepts
     without using svm_enable_lbrv().
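For (a), assuming the module param simply gets deleted, the sanity check I have
in mind is a one-liner in sev_hardware_setup() (sketch only; "sev_es_supported"
stands in for whatever local tracks SEV-ES support there, so adjust to taste):

	/* SEV-ES hard requires LBRV, so the CPUID bit should always be set. */
	if (sev_es_supported && WARN_ON_ONCE(!boot_cpu_has(X86_FEATURE_LBRV)))
		sev_es_supported = false;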
Unless I'm missing something, KVM allows userspace to get/set MSRs for SEV-ES
guests, even after the VMSA is encrypted.  E.g. a naive userspace could attempt
to migrate MSR_IA32_DEBUGCTLMSR and end up unintentionally disabling LBRV on
the target.

The proper fix for the VMSA being encrypted is likely to disallow KVM_{G,S}ET_MSR
on MSRs that are context switched via the VMSA.  But that doesn't address the
issue where KVM will disable LBRV if userspace sets MSR_IA32_DEBUGCTLMSR before
the VMSA is encrypted.  The easiest fix for that is to have svm_disable_lbrv()
do nothing for SEV-ES guests (see the sketch at the end of this mail), but I'm
not convinced that's the best fix.

AFAICT, host perf doesn't use the relevant MSRs, and even if host perf did use
the MSRs, IIUC there is no "stack", and #VMEXIT retains the guest values for
non-SEV-ES guests.  I.e. functionally, running with and without LBRV would be
largely equivalent as far as perf is concerned.  The guest could scribble an
MSR with garbage, but overall, host perf wouldn't be meaningfully affected by
LBRV.

So unless I'm missing something, the only reason to ever disable LBRV would be
for performance reasons.  Indeed, the original commit more or less says as much:

  commit 24e09cbf480a72f9c952af4ca77b159503dca44b
  Author:     Joerg Roedel <joerg.roedel@xxxxxxx>
  AuthorDate: Wed Feb 13 18:58:47 2008 +0100

      KVM: SVM: enable LBR virtualization

      This patch implements the Last Branch Record Virtualization (LBRV)
      feature of the AMD Barcelona and Phenom processors into the kvm-amd
      module. It will only be enabled if the guest enables last branch
      recording in the DEBUG_CTL MSR. So there is no increased world switch
      overhead when the guest doesn't use these MSRs.

but what it _doesn't_ say is what the world switch overhead is when LBRV is
enabled.  If the overhead is small, e.g. 20 cycles(?), then I see no reason to
keep the dynamic toggling.  And if we ditch the dynamic toggling, then this
patch is unnecessary to fix the LBRV issue.  It _is_ necessary to actually let
the guest use the LBRs, but that's a wildly different changelog and
justification.

And if we _don't_ ditch the dynamic toggling, then sev_es_init_vmcb() should be
using svm_enable_lbrv(), not open coding the exact same thing.
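E.g. in sev_es_init_vmcb(), roughly (untested, and assumes svm_enable_lbrv()
gets exposed to sev.c in some form):

	-	svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
	+	svm_enable_lbrv(&svm->vcpu);

so that the DEBUGCTL/LBR MSR intercepts and LBR_CTL_ENABLE_MASK can't get out
of sync with the common code.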
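And for completeness, the "easiest fix" for svm_disable_lbrv() mentioned above
would be roughly this (again, just a sketch, and as noted, I'm not convinced
it's the best fix):

	static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
	{
		/* LBRV is mandatory for SEV-ES, never clear it. */
		if (sev_es_guest(vcpu->kvm))
			return;

		...
	}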