Re: [PATCH kernel 1/3] x86/amd/dr_addr_mask: Cache values in percpu variables

On 2/12/22 03:58, Sean Christopherson wrote:
On Thu, Dec 01, 2022, Alexey Kardashevskiy wrote:
Reading the DR[0-3]_ADDR_MASK MSRs takes about 250 cycles, which is
going to be noticeable when the AMD SEV-ES DebugSwap feature is enabled
and KVM needs to save these MSRs before switching to a guest; the
DebugSwap hardware support restores them as a type B swap.

This caches the MSR values written by set_dr_addr_mask() in percpu
variables and returns them via the new get_dr_addr_mask(). The gain
is about 10x.

Signed-off-by: Alexey Kardashevskiy <aik@xxxxxxx>
---
  arch/x86/include/asm/debugreg.h |  1 +
  arch/x86/kernel/cpu/amd.c       | 32 ++++++++++++++++++++
  2 files changed, 33 insertions(+)
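
(The new get_dr_addr_mask() itself is not visible in the hunks quoted
below; as a rough, untested illustration, a percpu-backed accessor
matching the variables added by this patch could look like the
following -- a sketch only, not the exact hunk:)

unsigned long get_dr_addr_mask(int dr)
{
	if (!boot_cpu_has(X86_FEATURE_BPEXT))
		return 0;

	switch (dr) {
	case 0: return per_cpu(dr0_addr_mask, smp_processor_id());
	case 1: return per_cpu(dr1_addr_mask, smp_processor_id());
	case 2: return per_cpu(dr2_addr_mask, smp_processor_id());
	case 3: return per_cpu(dr3_addr_mask, smp_processor_id());
	}
	return 0;
}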

diff --git a/arch/x86/include/asm/debugreg.h b/arch/x86/include/asm/debugreg.h
index cfdf307ddc01..c4324d0205b5 100644
--- a/arch/x86/include/asm/debugreg.h
+++ b/arch/x86/include/asm/debugreg.h
@@ -127,6 +127,7 @@ static __always_inline void local_db_restore(unsigned long dr7)
#ifdef CONFIG_CPU_SUP_AMD
  extern void set_dr_addr_mask(unsigned long mask, int dr);
+extern unsigned long get_dr_addr_mask(int dr);
  #else
  static inline void set_dr_addr_mask(unsigned long mask, int dr) { }

KVM_AMD doesn't depend on CPU_SUP_AMD, i.e. this needs a stub.  Or we need to add
a dependency.

The new symbol is declared under CONFIG_CPU_SUP_AMD, and so is the file that implements it, arch/x86/kernel/cpu/amd.c:

arch/x86/kernel/cpu/Makefile:
obj-$(CONFIG_CPU_SUP_AMD)               += amd.o


Is this dependency enough or do we need something else? (If it is enough, I'll note it in the next revision's commit log.)
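
(If a stub is preferred over a Kconfig dependency, a minimal one
mirroring the existing set_dr_addr_mask() stub in debugreg.h would be
something along these lines -- untested sketch:)

#ifdef CONFIG_CPU_SUP_AMD
extern void set_dr_addr_mask(unsigned long mask, int dr);
extern unsigned long get_dr_addr_mask(int dr);
#else
static inline void set_dr_addr_mask(unsigned long mask, int dr) { }
static inline unsigned long get_dr_addr_mask(int dr) { return 0; }
#endif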


diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index c75d75b9f11a..ec7efcef4e14 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1158,6 +1158,11 @@ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
  	return false;
  }
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, dr0_addr_mask);
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, dr1_addr_mask);
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, dr2_addr_mask);
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, dr3_addr_mask);
+
  void set_dr_addr_mask(unsigned long mask, int dr)
  {
  	if (!boot_cpu_has(X86_FEATURE_BPEXT))
@@ -1166,17 +1171,44 @@ void set_dr_addr_mask(unsigned long mask, int dr)
  	switch (dr) {
  	case 0:
  		wrmsr(MSR_F16H_DR0_ADDR_MASK, mask, 0);

LOL, I'd love to hear how MSR_F16H_DR0_ADDR_MASK ended up with a completely
different MSR index.
+		per_cpu(dr0_addr_mask, smp_processor_id()) = mask;

Use an array to avoid the copy+paste?  And if you're going to add a cache, might
as well use it to avoid unnecessary writes.

I'll do this; I did not realize DEFINE_PER_CPU_READ_MOSTLY can handle arrays. Thanks,
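
(A rough, untested sketch of the array-based variant with the
redundant-write check; the amd_dr_addr_mask / amd_msr_dr_addr_masks
names are just placeholders:)

static DEFINE_PER_CPU_READ_MOSTLY(unsigned long, amd_dr_addr_mask[4]);

static const u32 amd_msr_dr_addr_masks[] = {
	MSR_F16H_DR0_ADDR_MASK,
	MSR_F16H_DR1_ADDR_MASK,
	MSR_F16H_DR2_ADDR_MASK,
	MSR_F16H_DR3_ADDR_MASK
};

void set_dr_addr_mask(unsigned long mask, int dr)
{
	if (!boot_cpu_has(X86_FEATURE_BPEXT))
		return;

	if (WARN_ON_ONCE(dr < 0 || dr >= ARRAY_SIZE(amd_msr_dr_addr_masks)))
		return;

	/* Skip the slow MSR write if the cached mask is unchanged. */
	if (per_cpu(amd_dr_addr_mask[dr], smp_processor_id()) == mask)
		return;

	wrmsr(amd_msr_dr_addr_masks[dr], mask, 0);
	per_cpu(amd_dr_addr_mask[dr], smp_processor_id()) = mask;
}

(get_dr_addr_mask() would then be just the same bounds check plus a
per_cpu read.)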


  		break;
  	case 1:

--
Alexey


