[+cc Fuad]

Fuad, you mentioned in commit 9c30fc615daa ("KVM: arm64: Move setting the
page as dirty out of the critical section") that restructuring around the
MMU lock was helpful for reuse (presumably for pKVM), but I lack the
context there.

On Fri, Jul 26, 2024 at 04:52:03PM -0700, Sean Christopherson wrote:
> Mark pages/folios accessed+dirty prior to dropping mmu_lock, as marking a
> page/folio dirty after it has been written back can make some filesystems
> unhappy (backing KVM guests will such filesystem files is uncommon, and

typo: s/will/with/

> the race is minuscule, hence the lack of complaints). See the link below
> for details.
> 
> This will also allow converting arm64 to kvm_release_faultin_page(), which
> requires that mmu_lock be held (for the aforementioned reason).
> 
> Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@xxxxxxxxx
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
>  arch/arm64/kvm/mmu.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 22ee37360c4e..ce13c3d884d5 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1685,15 +1685,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	}
>  
>  out_unlock:
> +	if (writable && !ret)
> +		kvm_set_pfn_dirty(pfn);

I'm guessing you meant kvm_release_pfn_dirty() here, because this leaks
a reference.

> +	else
> +		kvm_release_pfn_clean(pfn);
> +
>  	read_unlock(&kvm->mmu_lock);
>  
>  	/* Mark the page dirty only if the fault is handled successfully */
> -	if (writable && !ret) {
> -		kvm_set_pfn_dirty(pfn);
> +	if (writable && !ret)
>  		mark_page_dirty_in_slot(kvm, memslot, gfn);
> -	}
>  
> -	kvm_release_pfn_clean(pfn);
>  	return ret != -EAGAIN ? ret : 0;
> }
> 
> -- 
> 2.46.0.rc1.232.g9752f9e123-goog
> 

-- 
Thanks,
Oliver