Split NX huge page recovery into two separate flows: one for the TDP MMU and
one for the non-TDP MMU. The TDP MMU flow uses the MMU read lock, while the
non-TDP MMU flow uses the MMU write lock. This change unblocks vCPUs that
are waiting for the MMU read lock while NX huge page recovery is running and
zapping shadow pages.

A Windows guest was showing network latency jitter, which was root caused to
vCPUs waiting for the MMU read lock while the NX huge page recovery thread
held the MMU write lock. Disabling NX huge page recovery eliminated the
jitter. To optimize NX huge page recovery, the TDP MMU flow was modified to
run under the MMU read lock; this made the jitter disappear completely and
drastically reduced the time vCPUs spend waiting for the MMU read lock.

Patch 1 splits the logic into two separate flows, both still running under
the MMU write lock. Patch 2 changes the TDP MMU flow to use the MMU read
lock; its commit log contains the test results.

Here is a brief histogram, where 'Interval' is the time it took to complete
a network call and 'Frequency' is the number of calls in that interval:

Before
------
Interval(usec)  Frequency
     0            9999964
  1000                 12
  2000                  3
  3000                  0
  4000                  0
  5000                  0
  6000                  0
  7000                  1
  8000                  1
  9000                  1
 10000                  2
 11000                  1
 12000                  0
 13000                  4
 14000                  1
 15000                  1
 16000                  4
 17000                  1
 18000                  2
 19000                  0
 20000                  0
 21000                  1
 22000                  0
 23000                  0
 24000                  1

After
-----
Interval(usec)  Frequency
     0            9999996
  1000                  4

Vipin Sharma (2):
  KVM: x86/mmu: Split NX hugepage recovery flow into TDP and non-TDP flow
  KVM: x86/mmu: Recover NX Huge pages belonging to TDP MMU under MMU read
    lock

 arch/x86/kvm/mmu/mmu.c          | 168 +++++++++++++++++++-------------
 arch/x86/kvm/mmu/mmu_internal.h |   6 ++
 arch/x86/kvm/mmu/tdp_mmu.c      |  89 +++++++++++++++--
 arch/x86/kvm/mmu/tdp_mmu.h      |   3 +-
 4 files changed, 192 insertions(+), 74 deletions(-)

base-commit: 332d2c1d713e232e163386c35a3ba0c1b90df83f
-- 
2.46.0.76.ge559c4bf1a-goog
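
P.S. For illustration only (not code from the patches themselves), a minimal
sketch of the intended locking split is below. The helper names
kvm_recover_nx_huge_pages_shadow() and kvm_tdp_mmu_recover_nx_huge_pages()
are assumptions made for this sketch and may not match the series:

/*
 * Illustrative sketch only; helper names are hypothetical. It shows the
 * shape of the split: shadow-MMU recovery keeps the MMU write lock, while
 * TDP MMU recovery runs under the MMU read lock, so vCPU page faults
 * (which also take the read lock) are not blocked for the duration of a
 * recovery pass.
 */
static void kvm_recover_nx_huge_pages(struct kvm *kvm)
{
	/* Non-TDP (shadow) MMU pages still require the write lock. */
	write_lock(&kvm->mmu_lock);
	kvm_recover_nx_huge_pages_shadow(kvm);	/* hypothetical helper */
	write_unlock(&kvm->mmu_lock);

	/* TDP MMU pages can be zapped with only the read lock held. */
	read_lock(&kvm->mmu_lock);
	kvm_tdp_mmu_recover_nx_huge_pages(kvm);	/* hypothetical helper */
	read_unlock(&kvm->mmu_lock);
}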