Hi, Bibo,

On Wed, Jun 19, 2024 at 4:09 PM Bibo Mao <maobibo@xxxxxxxxxxx> wrote:
>
> Currently page level selection about secondary mmu depends on memory
> slot and page level about host mmu. There will be problem if page level
> of secondary mmu is zero already. So page level selection should depend
> on the following three conditions.
> 1. Memslot is aligned for huge page and vm is not migrating.
> 2. Page level of host mmu is huge page also.
> 3. Page level of secondary mmu is suituable for huge page, it cannot
>    be normal page since it is not supported to merge normal pages into
>    huge page now.
>
> Signed-off-by: Bibo Mao <maobibo@xxxxxxxxxxx>
> ---
>  arch/loongarch/include/asm/kvm_mmu.h |  2 +-
>  arch/loongarch/kvm/mmu.c             | 16 +++++++++++++---
>  2 files changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/arch/loongarch/include/asm/kvm_mmu.h b/arch/loongarch/include/asm/kvm_mmu.h
> index 099bafc6f797..d06ae0e0dde5 100644
> --- a/arch/loongarch/include/asm/kvm_mmu.h
> +++ b/arch/loongarch/include/asm/kvm_mmu.h
> @@ -55,7 +55,7 @@ static inline void kvm_set_pte(kvm_pte_t *ptep, kvm_pte_t val)
>  static inline int kvm_pte_write(kvm_pte_t pte) { return pte & _PAGE_WRITE; }
>  static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & _PAGE_DIRTY; }
>  static inline int kvm_pte_young(kvm_pte_t pte) { return pte & _PAGE_ACCESSED; }
> -static inline int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; }
> +static inline int kvm_pte_huge(kvm_pte_t pte) { return !!(pte & _PAGE_HUGE); }
Why do we need this change?

Huacai

>
>  static inline kvm_pte_t kvm_pte_mkyoung(kvm_pte_t pte)
>  {
> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> index 9e39d28fec35..c6351d13ca1b 100644
> --- a/arch/loongarch/kvm/mmu.c
> +++ b/arch/loongarch/kvm/mmu.c
> @@ -858,10 +858,20 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>
>          /* Disable dirty logging on HugePages */
>          level = 0;
> -        if (!fault_supports_huge_mapping(memslot, hva, write)) {
> -                level = 0;
> -        } else {
> +        if (fault_supports_huge_mapping(memslot, hva, write)) {
> +                /* Check page level about host mmu*/
>                  level = host_pfn_mapping_level(kvm, gfn, memslot);
> +                if (level == 1) {
> +                        /*
> +                         * Check page level about secondary mmu
> +                         * Disable hugepage if it is normal page on
> +                         * secondary mmu already
> +                         */
> +                        ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
> +                        if (ptep && !kvm_pte_huge(*ptep))
> +                                level = 0;
> +                }
> +
>                  if (level == 1) {
>                          gfn = gfn & ~(PTRS_PER_PTE - 1);
>                          pfn = pfn & ~(PTRS_PER_PTE - 1);
> --
> 2.39.3
>
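
For reference, below is a minimal standalone sketch of the three-condition
level selection described in the commit message above. It is illustration
only, not the kernel code: struct fault_info, pick_level(), PAGE_HUGE_BIT
(and its bit position) are simplified stand-ins I am assuming for the
example; the real logic is in the kvm_map_page() hunk quoted above.

/* Standalone illustration (not kernel code): models the three-condition
 * page-level selection from the commit message. All types and helpers
 * here are simplified stand-ins, not the real KVM/LoongArch API. */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned long pte_t;
#define PAGE_HUGE_BIT   (1UL << 6)   /* stand-in for _PAGE_HUGE */

struct fault_info {
        bool slot_aligned_for_huge;  /* condition 1: memslot alignment, no migration */
        int  host_level;             /* condition 2: 1 if host mmu maps a huge page */
        pte_t *sec_ptep;             /* secondary mmu entry, may be NULL */
};

static bool pte_huge(pte_t pte)
{
        return !!(pte & PAGE_HUGE_BIT);
}

/* Returns 1 for a huge mapping, 0 for a normal page. */
static int pick_level(const struct fault_info *f)
{
        if (!f->slot_aligned_for_huge)
                return 0;                       /* condition 1 failed */

        if (f->host_level != 1)
                return 0;                       /* condition 2 failed */

        /* condition 3: an existing normal-page entry in the secondary mmu
         * cannot be merged into a huge page, so fall back to level 0 */
        if (f->sec_ptep && !pte_huge(*f->sec_ptep))
                return 0;

        return 1;
}

int main(void)
{
        pte_t entry = 0;
        struct fault_info f = { true, 1, &entry };

        printf("level = %d\n", pick_level(&f));   /* 0: secondary entry is a normal page */

        entry |= PAGE_HUGE_BIT;
        printf("level = %d\n", pick_level(&f));   /* 1: all three conditions hold */
        return 0;
}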