On Fri, Aug 02, 2024, maobibo wrote:
> On 2024/7/27 7:52 AM, Sean Christopherson wrote:
> > Mark pages/folios dirty only the slow page fault path, i.e. only when
> > mmu_lock is held and the operation is mmu_notifier-protected, as marking a
> > page/folio dirty after it has been written back can make some filesystems
> > unhappy (backing KVM guests will such filesystem files is uncommon, and
> > the race is minuscule, hence the lack of complaints).
> >
> > See the link below for details.
> >
> > Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@xxxxxxxxx
> > Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> > ---
> >  arch/loongarch/kvm/mmu.c | 18 ++++++++++--------
> >  1 file changed, 10 insertions(+), 8 deletions(-)
> >
> > diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> > index 2634a9e8d82c..364dd35e0557 100644
> > --- a/arch/loongarch/kvm/mmu.c
> > +++ b/arch/loongarch/kvm/mmu.c
> > @@ -608,13 +608,13 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
> >  		if (kvm_pte_young(changed))
> >  			kvm_set_pfn_accessed(pfn);
> >
> > -		if (kvm_pte_dirty(changed)) {
> > -			mark_page_dirty(kvm, gfn);
> > -			kvm_set_pfn_dirty(pfn);
> > -		}
> >  		if (page)
> >  			put_page(page);
> >  	}
> > +
> > +	if (kvm_pte_dirty(changed))
> > +		mark_page_dirty(kvm, gfn);
> > +
> >  	return ret;
> > out:
> >  	spin_unlock(&kvm->mmu_lock);
> > @@ -915,12 +915,14 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
> >  	else
> >  		++kvm->stat.pages;
> >  	kvm_set_pte(ptep, new_pte);
> > -	spin_unlock(&kvm->mmu_lock);
> > -	if (prot_bits & _PAGE_DIRTY) {
> > -		mark_page_dirty_in_slot(kvm, memslot, gfn);
> > +	if (writeable)
>
> Is it better to use write or (prot_bits & _PAGE_DIRTY) here? writable is
> pte permission from function hva_to_pfn_slow(), write is fault action.
Marking folios dirty only in the slow/full path basically necessitates marking the folio dirty whenever KVM creates a writable SPTE, as KVM won't mark the folio dirty if/when _PAGE_DIRTY is set later. Practically speaking, I'm 99.9% certain it doesn't matter: the folio is marked dirty by core MM when the folio is made writable, and cleaning the folio triggers an mmu_notifier invalidation. I.e. if the page is mapped writable in KVM's stage-2 PTEs, then its folio has already been marked dirty.