On Wed, May 31, 2023 at 05:10:52PM -0600, Yu Zhao wrote:
> On Wed, May 31, 2023 at 1:28 PM Oliver Upton <oliver.upton@xxxxxxxxx> wrote:
> > On Tue, May 30, 2023 at 02:06:55PM -0600, Yu Zhao wrote:
> > > On Tue, May 30, 2023 at 1:37 PM Oliver Upton <oliver.upton@xxxxxxxxx> wrote:
> > > > As it is currently implemented, yes. But, there's potential to fast-path
> > > > the implementation by checking page_count() before starting the walk.
> > >
> > > Do you mind posting another patch? I'd be happy to ack it, as well as
> > > the one you suggested above.
> >
> > I'd rather not take such a patch independent of the test_clear_young
> > series if you're OK with that. Do you mind implementing something
> > similar to the above patch w/ the proposed optimization if you need it?
>
> No worries. I can take the above together with the following, which
> would form a new series with its own merits, since apparently you
> think the !AF case is important.

Sorry if my suggestion was unclear. I thought we were talking about
->free_removed_table() being called from the stage-2 unmap path, in
which case we wind up unnecessarily visiting PTEs on a table known to
be empty. You could fast-path that by only initiating a walk if
page_count() > 1:

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 95dae02ccc2e..766563dc465c 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1331,7 +1331,8 @@ void kvm_pgtable_stage2_free_removed(struct kvm_pgtable_mm_ops *mm_ops, void *pg
 		.end	= kvm_granule_size(level),
 	};
 
-	WARN_ON(__kvm_pgtable_walk(&data, mm_ops, ptep, level + 1));
+	if (mm_ops->page_count(pgtable) > 1)
+		WARN_ON(__kvm_pgtable_walk(&data, mm_ops, ptep, level + 1));
 
 	WARN_ON(mm_ops->page_count(pgtable) != 1);
 	mm_ops->put_page(pgtable);

A lock-free access fault walker is interesting, but in my testing it
hasn't led to any significant improvements over acquiring the MMU lock
for read.
Because the lock-free walker showed no significant improvement, I hadn't
bothered with posting the series upstream.

--
Thanks,
Oliver