Mark pages/folios dirty only in the "slow" page fault path, i.e. only
when mmu_lock is held and the operation is mmu_notifier-protected, as
marking a page/folio dirty after it has been written back can make some
filesystems unhappy (backing KVM guests with such filesystem files is
uncommon, and the race is minuscule, hence the lack of complaints).

See the link below for details.

Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@xxxxxxxxx
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
 arch/loongarch/kvm/mmu.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 2634a9e8d82c..364dd35e0557 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -608,13 +608,13 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
 		if (kvm_pte_young(changed))
 			kvm_set_pfn_accessed(pfn);
 
-		if (kvm_pte_dirty(changed)) {
-			mark_page_dirty(kvm, gfn);
-			kvm_set_pfn_dirty(pfn);
-		}
 		if (page)
 			put_page(page);
 	}
+
+	if (kvm_pte_dirty(changed))
+		mark_page_dirty(kvm, gfn);
+
 	return ret;
 out:
 	spin_unlock(&kvm->mmu_lock);
@@ -915,12 +915,14 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 	else
 		++kvm->stat.pages;
 	kvm_set_pte(ptep, new_pte);
-	spin_unlock(&kvm->mmu_lock);
 
-	if (prot_bits & _PAGE_DIRTY) {
-		mark_page_dirty_in_slot(kvm, memslot, gfn);
+	if (writeable)
 		kvm_set_pfn_dirty(pfn);
-	}
+
+	spin_unlock(&kvm->mmu_lock);
+
+	if (prot_bits & _PAGE_DIRTY)
+		mark_page_dirty_in_slot(kvm, memslot, gfn);
 
 	kvm_release_pfn_clean(pfn);
 out:
--
2.46.0.rc1.232.g9752f9e123-goog
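
For readers skimming the diff, a simplified sketch of the resulting
order of operations in kvm_map_page() (not the literal post-patch code;
error handling and the construction of new_pte are elided, and the
comments paraphrase the rationale from the changelog above):

	spin_lock(&kvm->mmu_lock);

	/* ... resolve the fault, build and install the new PTE ... */
	kvm_set_pte(ptep, new_pte);

	/*
	 * Mark the struct page/folio dirty while mmu_lock is held, i.e.
	 * while the mapping is still mmu_notifier-protected, so that
	 * writeback can't race with setting the dirty flag.
	 */
	if (writeable)
		kvm_set_pfn_dirty(pfn);

	spin_unlock(&kvm->mmu_lock);

	/*
	 * Updating the memslot dirty bitmap doesn't touch the struct
	 * page, so it can safely happen after mmu_lock is dropped.
	 */
	if (prot_bits & _PAGE_DIRTY)
		mark_page_dirty_in_slot(kvm, memslot, gfn);

	kvm_release_pfn_clean(pfn);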