Hi, Bibo,

What is the relationship between this patch and the one below?
https://lore.kernel.org/loongarch/20240611034609.3442344-1-maobibo@xxxxxxxxxxx/T/#u

Huacai

On Wed, Jun 19, 2024 at 4:09 PM Bibo Mao <maobibo@xxxxxxxxxxx> wrote:
>
> Function kvm_map_page_fast() is the fast path of the secondary MMU
> page fault flow: the pfn is taken from the secondary MMU page table
> walker. However, no reference is taken on the corresponding page, so
> it is dangerous to access the page outside mmu_lock.
>
> Take the page reference inside mmu_lock, so that kvm_set_pfn_accessed()
> and kvm_set_pfn_dirty() are called with the reference held and the
> page cannot be freed by others in the meantime.
>
> Also remove the kvm_set_pfn_accessed() call in kvm_map_page(), since
> the following kvm_release_pfn_clean() already calls it.
>
> Signed-off-by: Bibo Mao <maobibo@xxxxxxxxxxx>
> ---
>  arch/loongarch/kvm/mmu.c | 23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> index 3b862f3a72cb..5a820a81fd97 100644
> --- a/arch/loongarch/kvm/mmu.c
> +++ b/arch/loongarch/kvm/mmu.c
> @@ -557,6 +557,7 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
>  	gfn_t gfn = gpa >> PAGE_SHIFT;
>  	struct kvm *kvm = vcpu->kvm;
>  	struct kvm_memory_slot *slot;
> +	struct page *page;
>
>  	spin_lock(&kvm->mmu_lock);
>
> @@ -599,19 +600,22 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
>  	if (changed) {
>  		kvm_set_pte(ptep, new);
>  		pfn = kvm_pte_pfn(new);
> +		page = kvm_pfn_to_refcounted_page(pfn);
> +		if (page)
> +			get_page(page);
>  	}
>  	spin_unlock(&kvm->mmu_lock);
>
> -	/*
> -	 * Fixme: pfn may be freed after mmu_lock
> -	 * kvm_try_get_pfn(pfn)/kvm_release_pfn pair to prevent this?
> -	 */
> -	if (kvm_pte_young(changed))
> -		kvm_set_pfn_accessed(pfn);
> +	if (changed) {
> +		if (kvm_pte_young(changed))
> +			kvm_set_pfn_accessed(pfn);
>
> -	if (kvm_pte_dirty(changed)) {
> -		mark_page_dirty(kvm, gfn);
> -		kvm_set_pfn_dirty(pfn);
> +		if (kvm_pte_dirty(changed)) {
> +			mark_page_dirty(kvm, gfn);
> +			kvm_set_pfn_dirty(pfn);
> +		}
> +		if (page)
> +			put_page(page);
>  	}
>  	return ret;
>  out:
> @@ -920,7 +924,6 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>  		kvm_set_pfn_dirty(pfn);
>  	}
>
> -	kvm_set_pfn_accessed(pfn);
>  	kvm_release_pfn_clean(pfn);
>  out:
>  	srcu_read_unlock(&kvm->srcu, srcu_idx);
> --
> 2.39.3
>
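
For readers less familiar with the pattern the patch applies, the idea is
to elevate an object's reference count while still holding the lock that
made the lookup valid, so the object cannot be freed while it is used
after the lock is dropped. Below is a minimal user-space sketch of the
same idea in plain C with pthreads; it is not kernel code, and all names
(struct object, lookup_and_get(), object_put(), table_slot, table_lock)
are hypothetical stand-ins for struct page, get_page()/put_page() and
mmu_lock:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical refcounted object standing in for struct page. */
struct object {
	atomic_int refcount;
	int data;
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct object *table_slot;	/* guarded by table_lock */

static void object_put(struct object *obj)
{
	/* Free on last reference drop, as put_page() would. */
	if (atomic_fetch_sub(&obj->refcount, 1) == 1)
		free(obj);
}

static struct object *lookup_and_get(void)
{
	struct object *obj;

	/*
	 * The reference is taken while the lock is still held, so the
	 * object cannot disappear between the lookup and the get.
	 */
	pthread_mutex_lock(&table_lock);
	obj = table_slot;
	if (obj)
		atomic_fetch_add(&obj->refcount, 1);
	pthread_mutex_unlock(&table_lock);

	return obj;
}

int main(void)
{
	struct object *obj = malloc(sizeof(*obj));

	if (!obj)
		return 1;
	atomic_init(&obj->refcount, 1);		/* the table's reference */
	obj->data = 42;
	table_slot = obj;

	obj = lookup_and_get();
	if (obj) {
		/* Safe to use outside the lock: we hold a reference. */
		printf("data = %d\n", obj->data);
		object_put(obj);
	}

	/* Tear down: remove from the table, drop the table's reference. */
	pthread_mutex_lock(&table_lock);
	obj = table_slot;
	table_slot = NULL;
	pthread_mutex_unlock(&table_lock);
	object_put(obj);

	return 0;
}

The essential point is that the refcount increment happens before the
lock is dropped; incrementing it after the unlock would reintroduce the
very window the patch closes.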