On 07.02.23 14:36, Zhi Wang wrote:
> On Wed, 1 Feb 2023 20:46:01 +0100
> Mathias Krause <minipli@xxxxxxxxxxxxxx> wrote:
>
>> There is no need to unload the MMU roots for a direct MMU role when only
>> CR0.WP has changed -- the paging structures are still valid, only the
>> permission bitmap needs to be updated.
>>
>> One heavy user of toggling CR0.WP is grsecurity's KERNEXEC feature to
>> implement kernel W^X.
>>
>
> Wouldn't it be better to factor out update_permission_bitmask and
> update_pkru_bitmask in a common function and call it from here? So that
> we can also skip bunches of if..else..., the recalculation of the rsvd
> mask and the shadow_zero_bit masks.

Probably, yes. But I dislike that this would imply knowing the details of
how kvm_init_mmu() and, moreover, init_kvm_tdp_mmu() are implemented, and
I'd rather avoid that, so future code changes in either of them don't
introduce bugs or regressions here.

By calling out to kvm_init_mmu() we avoid that implicitly: a future
change to either function must, for sure, check all callers and would
find this location. If we instead simply extract the (as of now)
required bits, that might go unnoticed.

That said, I agree that there's still room for improvement.
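For reference, a rough sketch of how I read that suggestion -- the helper
name is made up and this assumes update_permission_bitmask() and
update_pkru_bitmask() keep their current signatures in mmu.c, so take it
as an illustration, not a tested implementation:

    /* hypothetical helper in arch/x86/kvm/mmu/mmu.c, for illustration only */
    static void kvm_mmu_update_permission_bitmasks(struct kvm_mmu *mmu)
    {
            /*
             * Recompute only the permission related bitmaps, skipping the
             * rsvd / shadow_zero_bit mask recalculation that a full
             * kvm_init_mmu() would redo as well.
             */
            update_permission_bitmask(mmu, false);
            update_pkru_bitmask(mmu);
    }

kvm_post_set_cr0() could then call such a helper instead of kvm_init_mmu()
for the CR0.WP-only case -- with the maintenance caveats mentioned above.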
> I suppose this is a critical path according to the patch comments and
> kvm_init_mmu() is a non-critical path. Is it better to separate
> them now to save maintenance effort in the future? E.g. something heavier
> might be introduced into the kvm_init_mmu() path and slow down this path.

I'll look into what can be done about it. But this change is a first step
that can be further optimized via follow-up changes. As you can see from
the numbers below, it's already way faster than what we have right now,
so I'd rather land this (imperfect) change sooner than later and
gradually improve on it.

Those follow-ups will, however, likely only bring minor speedups compared
to this change, so they're less important, IMHO.

The question is really what's better from a maintenance point of view:
keeping the call to the commonly used kvm_init_mmu() function, or
special-casing even further? I fear the latter might regress more easily,
but YMMV, of course.

>
>> The optimization brings a huge performance gain for this case as the
>> following micro-benchmark running 'ssdd 10 50000' from rt-tests[1] on a
>> grsecurity L1 VM shows (runtime in seconds, lower is better):
>>
>>                         legacy     TDP    shadow
>> kvm.git/queue           11.55s  13.91s    75.2s
>> kvm.git/queue+patch      7.32s   7.31s    74.6s
>>
>> For legacy MMU this is ~36% faster, for TDP MMU even ~47% faster. Also
>> TDP and legacy MMU now both have around the same runtime, which removes
>> the need to disable TDP MMU for grsecurity.
>>
>> Shadow MMU sees no measurable difference and is still slow, as expected.
>>
>> [1] https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git
>>
>> Co-developed-by: Sean Christopherson <seanjc@xxxxxxxxxx>
>> Signed-off-by: Mathias Krause <minipli@xxxxxxxxxxxxxx>
>> ---
>> v2: handle the CR0.WP case directly in kvm_post_set_cr0() and only for
>> the direct MMU role -- Sean
>>
>> I re-ran the benchmark and it's even faster than with my patch, as the
>> critical path is now the first one handled and is now inline. Thanks a
>> lot for the suggestion, Sean!
>>
>>  arch/x86/kvm/x86.c | 9 +++++++++
>>  1 file changed, 9 insertions(+)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 508074e47bc0..f09bfc0a3cc1 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -902,6 +902,15 @@ EXPORT_SYMBOL_GPL(load_pdptrs);
>>  
>>  void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned long cr0)
>>  {
>> +        /*
>> +         * Toggling just CR0.WP doesn't invalidate page tables per se, only the
>> +         * permission bits.
>> +         */
>> +        if (vcpu->arch.mmu->root_role.direct && (cr0 ^ old_cr0) == X86_CR0_WP) {
>> +                kvm_init_mmu(vcpu);
>> +                return;
>> +        }
>> +
>>          if ((cr0 ^ old_cr0) & X86_CR0_PG) {
>>                  kvm_clear_async_pf_completion_queue(vcpu);
>>                  kvm_async_pf_hash_reset(vcpu);
>
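A side note on the check itself: (cr0 ^ old_cr0) == X86_CR0_WP is true if
and only if CR0.WP is the *only* bit that changed, so the fast path can't
swallow e.g. a simultaneous CR0.PG toggle. A standalone toy program (not
kernel code; bit positions per the SDM, CR0 value an arbitrary example
with WP and PG set) demonstrating that property:

    #include <assert.h>

    #define X86_CR0_WP (1UL << 16)
    #define X86_CR0_PG (1UL << 31)

    int main(void)
    {
            unsigned long old_cr0 = 0x80050033UL; /* example CR0 value */

            /* flipping WP alone: the XOR yields exactly X86_CR0_WP */
            assert((old_cr0 ^ (old_cr0 ^ X86_CR0_WP)) == X86_CR0_WP);

            /* flipping WP and PG together: fast path must not trigger */
            assert((old_cr0 ^ (old_cr0 ^ X86_CR0_WP ^ X86_CR0_PG))
                   != X86_CR0_WP);

            return 0;
    }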