Presently KVM only takes a read lock for stage 2 faults if it believes
the fault can be fixed by relaxing permissions on a PTE (write
unprotect for dirty logging). Otherwise, stage 2 faults grab the write
lock, which predictably can pile up all the vCPUs in a sufficiently
large VM.

Like the TDP MMU for x86, this series loosens the locking around
manipulations of the stage 2 page tables to allow parallel faults. RCU
and atomics are exploited to safely build/destroy the stage 2 page
tables in light of multiple software observers.

Patches 1-2 clean up the way we collapse page tables, with the added
benefit of narrowing the window of time a range of memory is unmapped.

Patches 3-7 are minor cleanups and refactorings to the way KVM reads
PTEs and traverses the stage 2 page tables to make it amenable to
concurrent modification.

Patches 8-9 use RCU to punt page table cleanup out of the vCPU fault
path, which should also improve fault latency a bit (a rough sketch
follows the changelog below).

Patches 10-14 implement the meat of this series, extending the
'break-before-make' sequence with atomics to realize locking on PTEs.
Effectively a cmpxchg() is used to 'break' a PTE, thereby serializing
changes to a given PTE (also sketched after the changelog).

Finally, patch 15 flips the switch on all the new code and starts
grabbing the read side of the MMU lock for stage 2 faults.

Applies to 6.0-rc3. Tested with KVM selftests and benchmarked with
dirty_log_perf_test, scaling from 1 to 48 vCPUs with 4GB of memory per
vCPU backed by THP:

  ./dirty_log_perf_test -s anonymous_thp -m 2 -b 4G -v ${NR_VCPUS}

Time to dirty memory:

        +-------+---------+------------------+
        | vCPUs | 6.0-rc3 | 6.0-rc3 + series |
        +-------+---------+------------------+
        |     1 |   0.89s |            0.92s |
        |     2 |   1.13s |            1.18s |
        |     4 |   2.42s |            1.25s |
        |     8 |   5.03s |            1.36s |
        |    16 |   8.84s |            2.09s |
        |    32 |  19.60s |            4.47s |
        |    48 |  31.39s |            6.22s |
        +-------+---------+------------------+

It is also worth mentioning that the time to populate memory has
improved:

        +-------+---------+------------------+
        | vCPUs | 6.0-rc3 | 6.0-rc3 + series |
        +-------+---------+------------------+
        |     1 |   0.19s |            0.18s |
        |     2 |   0.25s |            0.21s |
        |     4 |   0.38s |            0.32s |
        |     8 |   0.64s |            0.40s |
        |    16 |   1.22s |            0.54s |
        |    32 |   2.50s |            1.03s |
        |    48 |   3.88s |            1.52s |
        +-------+---------+------------------+

RFC: https://lore.kernel.org/kvmarm/20220415215901.1737897-1-oupton@xxxxxxxxxx/

RFC -> v1:
 - Factored out page table teardown from kvm_pgtable_stage2_map()
 - Use the RCU callback to tear down a subtree, instead of scheduling a
   callback for every individual table page.
 - Reorganized series to (hopefully) avoid intermediate breakage.
 - Dropped the use of page headers, instead stuffing KVM metadata into
   page::private directly
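A minimal sketch of the RCU-based teardown idea from patches 8-9. This
is illustrative only: the callback and the
stage2_destroy_unlinked_subtree() helper are assumptions for the
example, not the series' actual code, though per the changelog the
series does stash its metadata in page::private.

  static void stage2_free_unlinked_table_rcu_cb(struct rcu_head *head)
  {
  	/* struct page provides an rcu_head we can reuse here. */
  	struct page *page = container_of(head, struct page, rcu_head);
  	void *pgtable = page_to_virt(page);
  	u32 level = page_private(page);

  	/*
  	 * The table was unlinked with break-before-make and a grace
  	 * period has since elapsed, so no walker can still hold a
  	 * reference to this subtree; it is safe to tear it down.
  	 */
  	stage2_destroy_unlinked_subtree(pgtable, level);	/* hypothetical */
  }

  /* In the fault path, instead of freeing synchronously: */
  call_rcu(&page->rcu_head, stage2_free_unlinked_table_rcu_cb);

The point of the deferral is that readers walking the tables under
rcu_read_lock() may still be traversing the unlinked subtree; the
grace period guarantees they have all finished before the pages are
freed.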
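A similarly minimal sketch of the cmpxchg()-based 'break' from patches
10-14; the helper names and the KVM_INVALID_PTE_LOCKED marker value
are assumptions for illustration, not the series' actual code.

  static bool stage2_try_break_pte(kvm_pte_t *ptep, kvm_pte_t old)
  {
  	/*
  	 * 'Break' the PTE: atomically swap in an invalid marker value.
  	 * If another vCPU raced us and changed the PTE first, the
  	 * cmpxchg() fails and this walker backs off and retries.
  	 */
  	if (cmpxchg(ptep, old, KVM_INVALID_PTE_LOCKED) != old)
  		return false;

  	/* TLB invalidation for the old mapping happens here. */
  	return true;
  }

  static void stage2_make_pte(kvm_pte_t *ptep, kvm_pte_t new)
  {
  	/*
  	 * 'Make': publish the new PTE. The release ordering pairs
  	 * with walkers reading the PTE with READ_ONCE().
  	 */
  	smp_store_release(ptep, new);
  }

Because only one walker can win the cmpxchg(), the winner effectively
owns the PTE until it publishes a new value; concurrent observers only
ever see the old PTE, the locked marker, or the new PTE, never a
half-built table.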
Oliver Upton (14):
  KVM: arm64: Add a helper to tear down unlinked stage-2 subtrees
  KVM: arm64: Tear down unlinked stage-2 subtree after break-before-make
  KVM: arm64: Directly read owner id field in stage2_pte_is_counted()
  KVM: arm64: Read the PTE once per visit
  KVM: arm64: Split init and set for table PTE
  KVM: arm64: Return next table from map callbacks
  KVM: arm64: Document behavior of pgtable visitor callback
  KVM: arm64: Protect page table traversal with RCU
  KVM: arm64: Free removed stage-2 tables in RCU callback
  KVM: arm64: Atomically update stage 2 leaf attributes in parallel walks
  KVM: arm64: Make block->table PTE changes parallel-aware
  KVM: arm64: Make leaf->leaf PTE changes parallel-aware
  KVM: arm64: Make table->block changes parallel-aware
  KVM: arm64: Handle stage-2 faults in parallel

 arch/arm64/include/asm/kvm_pgtable.h  |  59 ++++-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c |   7 +-
 arch/arm64/kvm/hyp/nvhe/setup.c       |   4 +-
 arch/arm64/kvm/hyp/pgtable.c          | 360 ++++++++++++++++----------
 arch/arm64/kvm/mmu.c                  |  65 +++--
 5 files changed, 325 insertions(+), 170 deletions(-)


base-commit: b90cb1053190353cc30f0fef0ef1f378ccc063c5
--
2.37.2.672.g94769d06f0-goog