On Tue, Jul 09, 2024, Manali Shukla wrote:
> From: Nikunj A Dadhania <nikunj@xxxxxxx>
> 
> Malicious guests can cause bus locks to degrade the performance of
> system. Non-WB(write-back) and misaligned locked RMW(read-modify-write)
> instructions are referred to as "bus locks" and require system wide
> synchronization among all processors to guarantee atomicity. Bus locks
> may incur significant performance penalties for all processors in the
> system.

Copy+pasting the background into every changelog isn't helpful.  Instead,
focus on what the feature actually does, and simply mention what bus locks
are in passing.  If someone really doesn't know, it shouldn't be hard for
them to find the previous changelog.

> The Bus Lock Threshold feature proves beneficial for hypervisors seeking
> to restrict guests' ability to initiate numerous bus locks, thereby
> preventing system slowdowns that affect all tenants.
> 
> Support for the buslock threshold is indicated via CPUID function
> 0x8000000A_EDX[29].
> 
> VMCB intercept bit
>     VMCB Offset    Bits    Function
>     14h            5       Intercept bus lock operations
>                            (occurs after guest instruction finishes)
> 
> Bus lock threshold
>     VMCB Offset    Bits    Function
>     120h           15:0    Bus lock counter

I can make a pretty educated guess as to how this works, but this is a pretty
simple feature, i.e. there's no reason not to document how it works in the
changelog.

> Use the KVM capability KVM_CAP_X86_BUS_LOCK_EXIT to enable the feature.
> 
> When the bus lock threshold counter reaches to zero, KVM will exit to
> user space by setting KVM_RUN_BUS_LOCK in vcpu->run->flags in
> bus_lock_exit handler, indicating that a bus lock has been detected in
> the guest.
> 
> More details about the Bus Lock Threshold feature can be found in AMD
> APM [1].
> 
> [1]: AMD64 Architecture Programmer's Manual Pub. 24593, April 2024,
>      Vol 2, 15.14.5 Bus Lock Threshold.
>      https://bugzilla.kernel.org/attachment.cgi?id=306250
> 
> [Manali:
>    - Added exit reason string for SVM_EXIT_BUS_LOCK.
>    - Moved enablement and disablement of bus lock intercept support
>      to svm_vcpu_after_set_cpuid().
>    - Massage commit message.
>    - misc cleanups.
> ]

No need for this since you are listed as co-author.

> Signed-off-by: Nikunj A Dadhania <nikunj@xxxxxxx>
> Co-developed-by: Manali Shukla <manali.shukla@xxxxxxx>
> Signed-off-by: Manali Shukla <manali.shukla@xxxxxxx>
> ---
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 7d396f5fa010..9f1d51384eac 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -191,6 +191,9 @@ module_param(pause_filter_count_shrink, ushort, 0444);
>  static unsigned short pause_filter_count_max = KVM_SVM_DEFAULT_PLE_WINDOW_MAX;
>  module_param(pause_filter_count_max, ushort, 0444);
>  
> +static unsigned short bus_lock_counter = KVM_SVM_DEFAULT_BUS_LOCK_COUNTER;
> +module_param(bus_lock_counter, ushort, 0644);

This should be read-only, otherwise the behavior is non-deterministic, e.g. as
proposed, a new value won't take effect until a vCPU happens to trigger a bus
lock exit.  If we really want it to be writable, then a per-VM capability is
likely a better solution.

Actually, we already have a capability, which means there's zero reason for
this module param to exist.  Userspace already has to opt-in to turning on
bus lock detection, i.e. userspace already has the opportunity to provide a
different threshold.

That said, unless someone specifically needs a threshold other than '0', I
vote to keep the uAPI as-is and simply exit on every bus lock.
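E.g. with the module param gone and the threshold hardcoded to zero, I'd
expect the exit handler to end up looking something like this (completely
untested, just to illustrate the idea; per your own comment below, a counter
of zero means an exit on every bus lock):

static int bus_lock_exit(struct kvm_vcpu *vcpu)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	vcpu->run->exit_reason = KVM_EXIT_X86_BUS_LOCK;
	vcpu->run->flags |= KVM_RUN_X86_BUS_LOCK;

	/*
	 * Leave the counter at '0' so that the next bus lock in the guest
	 * also exits to userspace.
	 */
	svm->vmcb->control.bus_lock_counter = 0;

	return 0;
}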
>  /*
>   * Use nested page tables by default.  Note, NPT may get forced off by
>   * svm_hardware_setup() if it's unsupported by hardware or the host kernel.
> @@ -3231,6 +3234,19 @@ static int invpcid_interception(struct kvm_vcpu *vcpu)
>  	return kvm_handle_invpcid(vcpu, type, gva);
>  }
>  
> +static int bus_lock_exit(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_svm *svm = to_svm(vcpu);
> +
> +	vcpu->run->exit_reason = KVM_EXIT_X86_BUS_LOCK;
> +	vcpu->run->flags |= KVM_RUN_X86_BUS_LOCK;
> +
> +	/* Reload the counter again */
> +	svm->vmcb->control.bus_lock_counter = bus_lock_counter;
> +
> +	return 0;
> +}
> +
>  static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
>  	[SVM_EXIT_READ_CR0]			= cr_interception,
>  	[SVM_EXIT_READ_CR3]			= cr_interception,
> @@ -3298,6 +3314,7 @@ static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
>  	[SVM_EXIT_CR4_WRITE_TRAP]		= cr_trap,
>  	[SVM_EXIT_CR8_WRITE_TRAP]		= cr_trap,
>  	[SVM_EXIT_INVPCID]			= invpcid_interception,
> +	[SVM_EXIT_BUS_LOCK]			= bus_lock_exit,
>  	[SVM_EXIT_NPF]				= npf_interception,
>  	[SVM_EXIT_RSM]				= rsm_interception,
>  	[SVM_EXIT_AVIC_INCOMPLETE_IPI]		= avic_incomplete_ipi_interception,
> @@ -4356,6 +4373,27 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)

Why on earth is this in svm_vcpu_after_set_cpuid()?  This has nothing to do
with guest CPUID.

>  	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_FLUSH_CMD, 0,
>  			     !!guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D));
>  
> +	if (cpu_feature_enabled(X86_FEATURE_BUS_LOCK_THRESHOLD) &&

This should be a slow path, there's zero reason to check for host support as
bus_lock_detection_enabled should be allowed if and only if it's supported.

> +	    vcpu->kvm->arch.bus_lock_detection_enabled) {
> +		svm_set_intercept(svm, INTERCEPT_BUSLOCK);
> +
> +		/*
> +		 * The CPU decrements the bus lock counter every time a bus lock
> +		 * is detected. Once the counter reaches zero a VMEXIT_BUSLOCK
> +		 * is generated. A value of zero for bus lock counter means a
> +		 * VMEXIT_BUSLOCK at every bus lock detection.
> +		 *
> +		 * Currently, default value for bus_lock_counter is set to 10.

Please don't document the default _here_.  Because inevitably this will become
stale when the default changes.

> +		 * So, the VMEXIT_BUSLOCK is generated after every 10 bus locks
> +		 * detected.
> +		 */
> +		svm->vmcb->control.bus_lock_counter = bus_lock_counter;
> +		pr_debug("Setting buslock counter to %u\n", bus_lock_counter);
> +	} else {
> +		svm_clr_intercept(svm, INTERCEPT_BUSLOCK);
> +		svm->vmcb->control.bus_lock_counter = 0;
> +	}
> +
>  	if (sev_guest(vcpu->kvm))
>  		sev_vcpu_after_set_cpuid(svm);
>  
> @@ -5149,6 +5187,11 @@ static __init void svm_set_cpu_caps(void)
>  		kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK);
>  	}
>  
> +	if (cpu_feature_enabled(X86_FEATURE_BUS_LOCK_THRESHOLD)) {
> +		pr_info("Bus Lock Threashold supported\n");
> +		kvm_caps.has_bus_lock_exit = true;
> +	}
> +
>  	/* CPUID 0x80000008 */
>  	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
>  	    boot_cpu_has(X86_FEATURE_AMD_SSBD))
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index d80a4c6b5a38..2a77232105da 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -58,6 +58,7 @@ void kvm_spurious_fault(void);
>  #define KVM_VMX_DEFAULT_PLE_WINDOW_MAX	UINT_MAX
>  #define KVM_SVM_DEFAULT_PLE_WINDOW_MAX	USHRT_MAX
>  #define KVM_SVM_DEFAULT_PLE_WINDOW	3000
> +#define KVM_SVM_DEFAULT_BUS_LOCK_COUNTER  10

There's zero reason this needs to be in x86.h.  I don't even see a reason to
have a #define, there's literally one user.
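As for where the setup belongs: the intercept depends only on the per-VM
capability, so something like the below in init_vmcb() seems like a more
natural fit (again untested, and assuming the VMCB is zero-initialized so
the counter doesn't need an explicit write to get the "exit on every bus
lock" behavior):

	/*
	 * Intercept bus locks if and only if userspace enabled bus lock
	 * detection for the VM; a counter of '0' exits on every bus lock.
	 */
	if (vcpu->kvm->arch.bus_lock_detection_enabled)
		svm_set_intercept(svm, INTERCEPT_BUSLOCK);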