On Sat May 27, 2023 at 9:44 AM AEST, Yu Zhao wrote:
> Add mmu_notifier_ops->test_clear_young() to supersede test_young()
> and clear_young().
>
> test_clear_young() has a fast path, which if supported, allows its
> callers to safely clear the accessed bit without taking
> kvm->mmu_lock.
>
> The fast path requires arch-specific code that generally relies on
> RCU and CAS: the former protects KVM page tables from being freed
> while the latter clears the accessed bit atomically against both the
> hardware and other software page table walkers. If the fast path is
> unsupported, test_clear_young() falls back to the existing slow path
> where kvm->mmu_lock is then taken.
>
> test_clear_young() can also operate on a range of KVM PTEs
> individually according to a bitmap, if the caller provides it.

It would be better if you could do patch 1 that only touches the
mmu_notifier code and implements mmu_notifier_test_clear_young() in
terms of the existing callbacks, and a next patch that swaps KVM to the
new callbacks and removes the old ones.

> diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> index 64a3e051c3c4..dfdbb370682d 100644
> --- a/include/linux/mmu_notifier.h
> +++ b/include/linux/mmu_notifier.h
> @@ -60,6 +60,8 @@ enum mmu_notifier_event {
> };
>
> #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0)
> +#define MMU_NOTIFIER_RANGE_LOCKLESS (1 << 1)
> +#define MMU_NOTIFIER_RANGE_YOUNG (1 << 2)
>
> struct mmu_notifier_ops {
> /*
> @@ -122,6 +124,10 @@ struct mmu_notifier_ops {
> struct mm_struct *mm,
> unsigned long address);
>
> + int (*test_clear_young)(struct mmu_notifier *mn, struct mm_struct *mm,
> + unsigned long start, unsigned long end,
> + bool clear, unsigned long *bitmap);

This should have a comment like the others; whoever implements the
callback wants to know how to implement it. Could add a _range suffix
to it as well while you're here, to correct that inconsistency.
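To make the suggestion concrete: a first patch could provide a generic fallback built purely on the existing callbacks. Below is a hedged userspace sketch of that idea, with stand-in types and made-up names (generic_test_clear_young, the stub secondary MMU) that model the dispatch only, not the real kernel structures.

```c
#include <assert.h>
#include <stdbool.h>

#define PAGE_SIZE 4096UL

/* Stand-ins for the real mmu_notifier types -- just enough to model
 * the dispatch, not the actual kernel structures. */
struct mmu_notifier;

struct mmu_notifier_ops {
	int (*clear_young)(struct mmu_notifier *mn,
			   unsigned long start, unsigned long end);
	int (*test_young)(struct mmu_notifier *mn, unsigned long address);
};

struct mmu_notifier {
	const struct mmu_notifier_ops *ops;
};

/* A generic test_clear_young built on the existing callbacks:
 * clear_young() already takes a range, and test_young() is
 * per-address, so it is looped page by page. */
static int generic_test_clear_young(struct mmu_notifier *mn,
				    unsigned long start, unsigned long end,
				    bool clear)
{
	unsigned long addr;
	int young = 0;

	if (clear)
		return mn->ops->clear_young(mn, start, end);

	for (addr = start; addr < end; addr += PAGE_SIZE)
		young |= mn->ops->test_young(mn, addr);

	return young;
}

/* Toy secondary MMU backing the stubs: one accessed bit per page. */
static bool page_young[4];

static int stub_clear_young(struct mmu_notifier *mn,
			    unsigned long start, unsigned long end)
{
	unsigned long addr;
	int young = 0;

	(void)mn;
	for (addr = start; addr < end; addr += PAGE_SIZE) {
		young |= page_young[addr / PAGE_SIZE];
		page_young[addr / PAGE_SIZE] = false;
	}
	return young;
}

static int stub_test_young(struct mmu_notifier *mn, unsigned long address)
{
	(void)mn;
	return page_young[address / PAGE_SIZE];
}

static const struct mmu_notifier_ops stub_ops = {
	.clear_young = stub_clear_young,
	.test_young = stub_test_young,
};
```

The per-address loop is obviously slower than a batched lockless walk, but it gives every secondary MMU a working implementation from day one, which is the point of splitting the series.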
> +
> /*
> * change_pte is called in cases that pte mapping to page is changed:
> * for example, when ksm remaps pte to point to a new shared page.
> @@ -392,6 +398,9 @@ extern int __mmu_notifier_clear_young(struct mm_struct *mm,
> unsigned long end);
> extern int __mmu_notifier_test_young(struct mm_struct *mm,
> unsigned long address);
> +extern int __mmu_notifier_test_clear_young(struct mm_struct *mm,
> + unsigned long start, unsigned long end,
> + bool clear, unsigned long *bitmap);
> extern void __mmu_notifier_change_pte(struct mm_struct *mm,
> unsigned long address, pte_t pte);
> extern int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *r);
> @@ -440,6 +449,35 @@ static inline int mmu_notifier_test_young(struct mm_struct *mm,
> return 0;
> }
>
> +/*
> + * mmu_notifier_test_clear_young() returns nonzero if any of the KVM PTEs within
> + * a given range was young. Specifically, it returns MMU_NOTIFIER_RANGE_LOCKLESS
> + * if the fast path was successful, MMU_NOTIFIER_RANGE_YOUNG otherwise.
> + *
> + * The last parameter to the function is a bitmap and only the fast path
> + * supports it: if it is NULL, the function falls back to the slow path if the
> + * fast path was unsuccessful; otherwise, the function bails out.

Then if it was NULL, you would just not populate it. Minimize
differences and cases for the caller/implementations.

> + *
> + * The bitmap has the following specifications:
> + * 1. The number of bits should be at least (end-start)/PAGE_SIZE.
> + * 2. The offset of each bit should be relative to the end, i.e., the offset
> + * corresponding to addr should be (end-addr)/PAGE_SIZE-1. This is convenient
> + * for batching while forward looping.
> + *
> + * When testing, this function sets the corresponding bit in the bitmap for each
> + * young KVM PTE. When clearing, this function clears the accessed bit for each
> + * young KVM PTE whose corresponding bit in the bitmap is set.

I think this is over-designed as a first pass.
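For reference, the end-relative layout the quoted comment specifies can be checked with a tiny standalone helper; the name young_bitmap_offset is made up here purely for illustration.

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* End-relative bit offset as specified in the quoted comment:
 * offset = (end - addr) / PAGE_SIZE - 1, so the page just below
 * `end` maps to bit 0 and `start` maps to the highest bit. */
static unsigned long young_bitmap_offset(unsigned long end, unsigned long addr)
{
	return (end - addr) / PAGE_SIZE - 1;
}
```

For an 8-page range, the last page gets offset 0 and the first page offset 7, which is what makes the layout convenient when the walker loops forward over addresses.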
The secondary MMU should just implement the call always. If it can't do
it locklessly, then just do individual lookups. If the benefit is in
the batching as you say, then the locked version will get similar
benefit. Possibly more, because locks like some amount of batching when
contended.

I think that would reduce some concerns about cases of secondary MMUs
that do not support the lockless version yet, and avoid proliferation
of code paths by platform.

_If_ that was insufficient then I would like to see numbers and
profiles and an incremental patch to expose more complexity like this.

Also, mmu notifier code should say nothing about KVM PTEs or use kvm
names in any code or comments either. "if the page was accessed via the
secondary MMU" or similar is mutually understandable to KVM and mm
developers.

> @@ -880,6 +887,72 @@ static int kvm_mmu_notifier_test_young(struct mmu_notifier *mn,
> kvm_test_age_gfn);
> }
>
> +struct test_clear_young_args {
> + unsigned long *bitmap;
> + unsigned long end;
> + bool clear;
> + bool young;
> +};
> +
> +bool kvm_should_clear_young(struct kvm_gfn_range *range, gfn_t gfn)
> +{
> + struct test_clear_young_args *args = range->args;
> +
> + VM_WARN_ON_ONCE(gfn < range->start || gfn >= range->end);
> +
> + args->young = true;
> +
> + if (args->bitmap) {
> + int offset = hva_to_gfn_memslot(args->end - 1, range->slot) - gfn;
> +
> + if (args->clear)
> + return test_bit(offset, args->bitmap);
> +
> + __set_bit(offset, args->bitmap);
> + }
> +
> + return args->clear;
> +}

I don't quite understand what's going on here. This is actually the
function that notes the young pte, despite its name suggesting it is
only a query. Shouldn't it set the bitmap bit even in the clear case?
And why is it testing at all? Oh, it seems to be some strange mix of
test *or* clear young, with the bitmap being a predicate in some cases
for the clear case. This is a fairly confusing multi-modal API then.
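Incidentally, once the hva-to-gfn translation is hoisted out, the offset computation buried in kvm_should_clear_young() reduces to a one-liner. A hypothetical standalone version (name and signature are illustrative, not from the patch):

```c
#include <assert.h>

typedef unsigned long gfn_t;

/* End-relative bit offset for a gfn, given the gfn one past the end
 * of the range: the last gfn maps to bit 0. Equivalent to the
 * hva_to_gfn_memslot(end - 1, slot) - gfn expression in the quoted
 * code once the translation is done up front. */
static inline unsigned long test_clear_young_bitmap_offset(gfn_t gfn_end,
							   gfn_t gfn)
{
	return gfn_end - 1 - gfn;
}
```

With such a helper, the caller could do its own set_bit/test_bit and the multi-modal predicate logic would not need to hide inside a "should clear" query.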
I think it should take 2 bitmaps: one is the young bitmap and the other
is the predicate bitmap, and either can be NULL.

Also, this kvm_should_clear_young helper is clunky and misnamed. If you
just provided an inline helper to get the test_clear_young bitmap
offset from a gfn, then setting/clearing the bit in the caller is quite
trivial.

> +
> +static int kvm_mmu_notifier_test_clear_young(struct mmu_notifier *mn, struct mm_struct *mm,
> + unsigned long start, unsigned long end,
> + bool clear, unsigned long *bitmap)
> +{
> + struct kvm *kvm = mmu_notifier_to_kvm(mn);
> + struct kvm_hva_range range = {
> + .start = start,
> + .end = end,
> + .on_lock = (void *)kvm_null_fn,
> + .on_unlock = (void *)kvm_null_fn,
> + };
> +
> + trace_kvm_age_hva(start, end);
> +
> + if (kvm_arch_has_test_clear_young()) {
> + struct test_clear_young_args args = {
> + .bitmap = bitmap,
> + .end = end,
> + .clear = clear,
> + };
> +
> + range.args = &args;
> + range.lockless = true;
> + range.handler = kvm_arch_test_clear_young;
> +
> + if (!__kvm_handle_hva_range(kvm, &range))
> + return args.young ? MMU_NOTIFIER_RANGE_LOCKLESS : 0;
> + }
> +
> + if (bitmap)
> + return 0;
> +
> + range.args = NULL;
> + range.lockless = false;
> + range.handler = clear ? kvm_age_gfn : kvm_test_age_gfn;

Minor thing, but KVM's "young" handling has been called "age" until
now. Any reason not to stick with that theme?

Thanks,
Nick