Hi, Bibo,

On Wed, Jun 19, 2024 at 4:09 PM Bibo Mao <maobibo@xxxxxxxxxxx> wrote:
>
> When updating pmd entry such as allocating new pmd page or splitting
> huge page into normal page, it is necessary to firstly update all pte
> entries, and then update pmd entry.
>
> It is weak order with LoongArch system, there will be problem if other
> vcpus sees pmd update firstly however pte is not updated. Here smp_wmb()
> is added to assure this.
Memory barriers should be in pairs in most cases. That means you may
be missing an smp_rmb() in another place.

Huacai

>
> Signed-off-by: Bibo Mao <maobibo@xxxxxxxxxxx>
> ---
>  arch/loongarch/kvm/mmu.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> index 1690828bd44b..7f04edfbe428 100644
> --- a/arch/loongarch/kvm/mmu.c
> +++ b/arch/loongarch/kvm/mmu.c
> @@ -163,6 +163,7 @@ static kvm_pte_t *kvm_populate_gpa(struct kvm *kvm,
>
>                 child = kvm_mmu_memory_cache_alloc(cache);
>                 _kvm_pte_init(child, ctx.invalid_ptes[ctx.level - 1]);
> +               smp_wmb(); /* make pte visible before pmd */
>                 kvm_set_pte(entry, __pa(child));
>         } else if (kvm_pte_huge(*entry)) {
>                 return entry;
> @@ -746,6 +747,7 @@ static kvm_pte_t *kvm_split_huge(struct kvm_vcpu *vcpu, kvm_pte_t *ptep, gfn_t g
>                 val += PAGE_SIZE;
>         }
>
> +       smp_wmb();
>         /* The later kvm_flush_tlb_gpa() will flush hugepage tlb */
>         kvm_set_pte(ptep, __pa(child));
>
> --
> 2.39.3
>