On 29/04/21 23:18, Ben Gardon wrote:
> +void activate_shadow_mmu(struct kvm *kvm)
> +{
> +	kvm->arch.shadow_mmu_active = true;
> +}
> +
I think there's no lock protecting both the write and the read side.
Therefore this should be an smp_store_release, and all checks in patch 2
should be smp_load_acquire.

Also, the assignments to slot->arch.rmap in patch 4 (alloc_memslot_rmap)
should be rcu_assign_pointer, while __gfn_to_rmap must be changed like so:

+	struct kvm_rmap_head *head;
...
-	return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
+	head = srcu_dereference_check(slot->arch.rmap[level - PG_LEVEL_4K],
+				      &kvm->srcu,
+				      lockdep_is_held(&kvm->slots_arch_lock));
+	return &head[idx];

Paolo