On Thu, Dec 01, 2022 at 01:19:46PM +1100, Alexey Kardashevskiy wrote:
Subject: Re: [PATCH kernel 1/3] x86/amd/dr_addr_mask: Cache values in percpu variables
"x86/amd: " is perfectly fine as a prefix.
Reading DR[0-3]_ADDR_MASK MSRs takes about 250 cycles which is going to
be noticeable when the AMD KVM SEV-ES's DebugSwap feature is enabled and
which does what? I.e., a sort of lazy DR regs swapping...
KVM needs to store these before switching to a guest; the DebugSwap
hardware support restores them as type B swap.
I know this is all clear to you but you should explain what type B
register swap is.
This stores MSR values from set_dr_addr_mask() in percpu values and
s/This stores/Store/
From Documentation/process/submitting-patches.rst:
"Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
to do frotz", as if you are giving orders to the codebase to change
its behaviour."
Also, do not talk about what your patch does - that should hopefully be
visible in the diff itself. Rather, talk about *why* you're doing what
you're doing.
returns them via new get_dr_addr_mask(). The gain here is about 10x.
Signed-off-by: Alexey Kardashevskiy <aik@xxxxxxx>
---
arch/x86/include/asm/debugreg.h |  1 +
arch/x86/kernel/cpu/amd.c       | 32 ++++++++++++++++++++
2 files changed, 33 insertions(+)
diff --git a/arch/x86/include/asm/debugreg.h b/arch/x86/include/asm/debugreg.h
index cfdf307ddc01..c4324d0205b5 100644
--- a/arch/x86/include/asm/debugreg.h
+++ b/arch/x86/include/asm/debugreg.h
@@ -127,6 +127,7 @@ static __always_inline void local_db_restore(unsigned long dr7)
#ifdef CONFIG_CPU_SUP_AMD
extern void set_dr_addr_mask(unsigned long mask, int dr);
+extern unsigned long get_dr_addr_mask(int dr);
#else
static inline void set_dr_addr_mask(unsigned long mask, int dr) { }
#endif
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index c75d75b9f11a..ec7efcef4e14 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1158,6 +1158,11 @@ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
return false;
}
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, dr0_addr_mask);
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, dr1_addr_mask);
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, dr2_addr_mask);
+DEFINE_PER_CPU_READ_MOSTLY(unsigned long, dr3_addr_mask);
This BPEXT thing is AMD-only, right?
I guess those should be called amd_drX_addr_mask where X in [0-3].
Yeah yeah, they are used in AMD-only code - svm* - but still.
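I.e., something along these lines (a sketch of the renaming only - the
amd_ prefix is my suggestion, not what the patch has):

DEFINE_PER_CPU_READ_MOSTLY(unsigned long, amd_dr0_addr_mask);
DEFINE_PER_CPU_READ_MOSTLY(unsigned long, amd_dr1_addr_mask);
DEFINE_PER_CPU_READ_MOSTLY(unsigned long, amd_dr2_addr_mask);
DEFINE_PER_CPU_READ_MOSTLY(unsigned long, amd_dr3_addr_mask);

with the accessors below renamed to match.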
void set_dr_addr_mask(unsigned long mask, int dr)
{
if (!boot_cpu_has(X86_FEATURE_BPEXT))
@@ -1166,17 +1171,44 @@ void set_dr_addr_mask(unsigned long mask, int dr)
switch (dr) {
case 0:
wrmsr(MSR_F16H_DR0_ADDR_MASK, mask, 0);
+ per_cpu(dr0_addr_mask, smp_processor_id()) = mask;
break;
case 1:
+ wrmsr(MSR_F16H_DR1_ADDR_MASK - 1 + dr, mask, 0);
+ per_cpu(dr1_addr_mask, smp_processor_id()) = mask;
+ break;
case 2:
+ wrmsr(MSR_F16H_DR1_ADDR_MASK - 1 + dr, mask, 0);
+ per_cpu(dr2_addr_mask, smp_processor_id()) = mask;
+ break;
case 3:
wrmsr(MSR_F16H_DR1_ADDR_MASK - 1 + dr, mask, 0);
+ per_cpu(dr3_addr_mask, smp_processor_id()) = mask;
break;
default:
break;
}
}
+unsigned long get_dr_addr_mask(int dr)
This function name is too generic for an exported function.
amd_get_dr_addr_mask() I'd say.
+ if (!boot_cpu_has(X86_FEATURE_BPEXT))
check_for_deprecated_apis: WARNING: arch/x86/kernel/cpu/amd.c:1195: Do not use boot_cpu_has() - use cpu_feature_enabled() instead.
You could fix the above one too, while at it.
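I.e. (untested, just to show the intended form):

        if (!cpu_feature_enabled(X86_FEATURE_BPEXT))
                return 0;

and the same s/boot_cpu_has/cpu_feature_enabled/ in set_dr_addr_mask()
above.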
+ return 0;
+
+ switch (dr) {
+ case 0:
+ return per_cpu(dr0_addr_mask, smp_processor_id());
+ case 1:
+ return per_cpu(dr1_addr_mask, smp_processor_id());
+ case 2:
+ return per_cpu(dr2_addr_mask, smp_processor_id());
+ case 3:
+ return per_cpu(dr3_addr_mask, smp_processor_id());
default:
WARN_ON_ONCE(1);
break;
Just in case.
And as a matter of fact, make that short and succinct:
switch (dr) {
case 0: return per_cpu(dr0_addr_mask, smp_processor_id());
case 1: return per_cpu(dr1_addr_mask, smp_processor_id());
case 2: return per_cpu(dr2_addr_mask, smp_processor_id());
case 3: return per_cpu(dr3_addr_mask, smp_processor_id());
default: WARN_ON_ONCE(1); break;
}
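With all of the above folded in - the rename, cpu_feature_enabled() and
the compact switch - the getter could look like this (an untested
sketch, using my suggested amd_* names):

unsigned long amd_get_dr_addr_mask(int dr)
{
        if (!cpu_feature_enabled(X86_FEATURE_BPEXT))
                return 0;

        switch (dr) {
        case 0: return per_cpu(amd_dr0_addr_mask, smp_processor_id());
        case 1: return per_cpu(amd_dr1_addr_mask, smp_processor_id());
        case 2: return per_cpu(amd_dr2_addr_mask, smp_processor_id());
        case 3: return per_cpu(amd_dr3_addr_mask, smp_processor_id());
        default: WARN_ON_ONCE(1); break;
        }

        return 0;
}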