[PATCH v2 00/15] KVM: arm64: Parallel stage-2 fault handling

Presently KVM only takes a read lock for stage 2 faults if it believes
the fault can be fixed by relaxing permissions on a PTE (write unprotect
for dirty logging). Otherwise, stage 2 faults grab the write lock, which
predictably can pile up all the vCPUs in a sufficiently large VM.

Like the TDP MMU for x86, this series loosens the locking around
manipulations of the stage 2 page tables to allow parallel faults. RCU
and atomics are exploited to safely build/destroy the stage 2 page
tables in light of multiple software observers.

Patches 1-4 clean up the context associated with a page table walk / PTE
visit (see the rough sketch after this list). This is helpful for:
 - Extending the context passed through for a visit
 - Building page table walkers that operate outside of a kvm_pgtable
   context (e.g. RCU callback)
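
For illustration, here is a rough sketch of what such a stack-allocated
visit context and visitor signature could look like (the field names and
types below are my approximation of the shape, not necessarily what the
patches land on):

  struct kvm_pgtable_visit_ctx {
          kvm_pte_t                       *ptep;   /* PTE being visited */
          kvm_pte_t                       old;     /* PTE value observed at visit time */
          void                            *arg;    /* walker-private argument */
          struct kvm_pgtable_mm_ops       *mm_ops; /* page table memory ops */
          u64                             addr;    /* IPA mapped by this entry */
          u64                             end;     /* end of the walked range */
          u32                             level;   /* page table level */
          enum kvm_pgtable_walk_flags     flags;
  };

  /* Visitors take the context instead of a long list of parameters: */
  typedef int (*kvm_pgtable_visitor_fn_t)(const struct kvm_pgtable_visit_ctx *ctx,
                                          enum kvm_pgtable_walk_flags visit);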

Patches 5-6 clean up the stage-2 map walkers by calling a helper to tear
down removed tables. There is a small improvement here in that a broken
PTE is replaced more quickly, as page table teardown happens afterwards.

Patches 7-9 sprinkle in RCU to the page table walkers, punting the
teardown of removed tables to an RCU callback.
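
The shape of that is roughly as follows (a sketch: it assumes the removed
table's level is stashed in the struct page private field and the page's
rcu_head is reused; the helper names here are illustrative):

  static void stage2_free_removed_table_rcu_cb(struct rcu_head *head)
  {
          struct page *page = container_of(head, struct page, rcu_head);
          void *pgtable = page_to_virt(page);
          u32 level = page_private(page);

          /* Safe now: no walker that started before call_rcu() can still see it. */
          kvm_pgtable_stage2_free_removed(&kvm_s2_mm_ops, pgtable, level);
  }

  static void stage2_free_removed_table(void *addr, u32 level)
  {
          struct page *page = virt_to_page(addr);

          set_page_private(page, (unsigned long)level);
          call_rcu(&page->rcu_head, stage2_free_removed_table_rcu_cb);
  }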

Patches 10-14 implement the meat of this series, extending the
'break-before-make' sequence with atomics to realize locking on PTEs.
Effectively a cmpxchg() is used to 'break' a PTE, thereby serializing
changes to a given PTE.
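
Concretely, the 'break' and 'make' steps end up looking something like
the sketch below (minimal and illustrative; the real helpers carry more
state, and the KVM_INVALID_PTE_LOCKED marker name is an assumption):

  static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
                                   struct kvm_s2_mmu *mmu)
  {
          /* 'Break': atomically swap in a locked/invalid marker. */
          if (cmpxchg(ctx->ptep, ctx->old, KVM_INVALID_PTE_LOCKED) != ctx->old)
                  return false;   /* lost the race; another vCPU owns the PTE */

          /* The PTE is now invalid; flush stale TLB entries before 'make'. */
          kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
          return true;
  }

  static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
  {
          /* 'Make': publish the replacement PTE to concurrent walkers. */
          smp_store_release(ctx->ptep, new);
  }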

Finally, patch 15 flips the switch on all the new code and starts
grabbing the read side of the MMU lock for stage 2 faults.
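
With that, the translation-fault path can take the read side and let the
per-PTE cmpxchg serialize the rest, roughly (a sketch assuming
kvm_pgtable_stage2_map() grows a walk-flags argument and a shared-walk
flag as part of this series):

  read_lock(&kvm->mmu_lock);
  ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize, __pfn_to_phys(pfn),
                               prot, memcache, KVM_PGTABLE_WALK_SHARED);
  read_unlock(&kvm->mmu_lock);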

Applies to kvmarm-6.1. Tested with KVM selftests, kvm-unit-tests, and
Google's internal VMM (Vanadium). Also tested with lockdep enabled and
saw no RCU splats. I plan to follow up with migration testing under
QEMU; I got frustrated with my cross-building environment and wanted to
get the patches out before the weekend :)

Benchmarked with dirty_log_perf_test, scaling from 1 to 48 vCPUs with
4GB of memory per vCPU backed by THP:

  ./dirty_log_perf_test -s anonymous_thp -m 2 -b 4G -v ${NR_VCPUS}

Time to dirty memory:

        +-------+---------+------------------+
        | vCPUs | kvmarm  | kvmarm + series  |
        +-------+---------+------------------+
        |     1 | 0.87s   | 0.93s            |
        |     2 | 1.11s   | 1.16s            |
        |     4 | 2.39s   | 1.27s            |
        |     8 | 5.01s   | 1.39s            |
        |    16 | 8.89s   | 2.07s            |
        |    32 | 19.90s  | 4.45s            |
        |    48 | 32.10s  | 6.23s            |
        +-------+---------+------------------+

It is also worth mentioning that the time to populate memory has
improved:

        +-------+---------+------------------+
        | vCPUs | kvmarm  | kvmarm + series  |
        +-------+---------+------------------+
        |     1 | 0.21s   | 0.17s            |
        |     2 | 0.26s   | 0.23s            |
        |     4 | 0.39s   | 0.31s            |
        |     8 | 0.68s   | 0.39s            |
        |    16 | 1.26s   | 0.53s            |
        |    32 | 2.51s   | 1.04s            |
        |    48 | 3.94s   | 1.55s            |
        +-------+---------+------------------+

v1 -> v2:
 - It builds! :-)
 - Roll all of the context associated with a PTE visit into a
   stack-allocated structure
 - Clean up the oddball handling of PTE values, avoiding a UAF along the
   way (Quentin)
 - Leave the re-reading of the PTE after WALK_LEAF in place instead of
   attempting to return the installed PTE value (David)
 - Mention why RCU is stubbed out for hyp page table walkers (David)
 - Ensure that all reads of page table memory pass through an
   RCU-protected pointer. The lifetime of the dereference is contained
   within __kvm_pgtable_visit() (David).
 - Ensure that no user of stage2_map_walker() passes TABLE_POST (David)
 - Unwire the page table walkers from relying on struct kvm_pgtable,
   simplifying the passed context to RCU callbacks.
 - Key rcu_dereference() off a page table walk flag indicating a shared
   walk. The flag is clear when either (1) the write lock is held or (2)
   the walker is invoked from an RCU callback.

v1: https://lore.kernel.org/kvmarm/20220830194132.962932-1-oliver.upton@xxxxxxxxx/

Oliver Upton (15):
  KVM: arm64: Combine visitor arguments into a context structure
  KVM: arm64: Stash observed pte value in visitor context
  KVM: arm64: Pass mm_ops through the visitor context
  KVM: arm64: Don't pass kvm_pgtable through kvm_pgtable_walk_data
  KVM: arm64: Add a helper to tear down unlinked stage-2 subtrees
  KVM: arm64: Tear down unlinked stage-2 subtree after break-before-make
  KVM: arm64: Use an opaque type for pteps
  KVM: arm64: Protect stage-2 traversal with RCU
  KVM: arm64: Free removed stage-2 tables in RCU callback
  KVM: arm64: Atomically update stage 2 leaf attributes in parallel
    walks
  KVM: arm64: Split init and set for table PTE
  KVM: arm64: Make block->table PTE changes parallel-aware
  KVM: arm64: Make leaf->leaf PTE changes parallel-aware
  KVM: arm64: Make table->block changes parallel-aware
  KVM: arm64: Handle stage-2 faults in parallel

 arch/arm64/include/asm/kvm_pgtable.h  |  85 +++-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c |  21 +-
 arch/arm64/kvm/hyp/nvhe/setup.c       |  22 +-
 arch/arm64/kvm/hyp/pgtable.c          | 624 ++++++++++++++------------
 arch/arm64/kvm/mmu.c                  |  51 ++-
 5 files changed, 456 insertions(+), 347 deletions(-)


base-commit: b302ca52ba8235ff0e18c0fa1fa92b51784aef6a
-- 
2.38.0.rc1.362.ged0d419d3c-goog
