On Mon, Sep 20, 2021, Maciej S. Szmigiero wrote:
> @@ -1607,68 +1506,145 @@ static int kvm_set_memslot(struct kvm *kvm,
> +	if (change != KVM_MR_CREATE) {
>  		/*
> -		 * The arch-specific fields of the memslots could have changed
> -		 * between releasing the slots_arch_lock in
> -		 * install_new_memslots and here, so get a fresh copy of these
> -		 * fields.
> +		 * The arch-specific fields of the memslot could have changed
> +		 * between reading them and taking slots_arch_lock in one of two
> +		 * places above.
> +		 * That includes old and new which were read in
> +		 * __kvm_set_memory_region.
>  		 */
> -		kvm_copy_memslots_arch(slots, __kvm_memslots(kvm, as_id));
> +		old->arch = new->arch = slotina->arch = slotact->arch;

Fudge.  This subtly and silently fixes an existing bug where @old and @new
can have stale arch-specific data due to x86's godawful slots_arch_lock
behavior.  If a flags-only update collides with alloc_all_memslots_rmaps(),
@old and @new may have stale (NULL) data if the rmaps activation happens
after the old slot is snapshotted.

It can be fixed by doing exactly this, but that is so, so gross (not your
fault at all, I'm complaining about the existing mess).  I think we can
opportunistically prep for this series to make the end result (mostly this
patch) a bit cleaner while fixing that snafu.  Specifically, I think I see
a path to avoiding bikeshedding slotina, slotact, etc...

I'll get a series for the fix posted tomorrow, and hopefully reply with my
thoughts for this patch too.
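
For anyone following along, here's a minimal userspace sketch of the race
in question.  The structures and the helpers activate_rmaps() and
flags_only_update() are simplified stand-ins of my own invention, not the
real KVM code; the real players are kvm_set_memslot(),
alloc_all_memslots_rmaps(), and slots_arch_lock.

/*
 * Sketch only: simplified stand-ins for KVM's memslot structures and
 * locking, compiled with -lpthread.  Illustrates how a snapshot taken
 * before slots_arch_lock can miss a concurrent rmaps activation.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the arch-specific part of a memslot (x86 rmaps). */
struct arch_data {
	void *rmap;		/* NULL until rmaps are allocated */
};

struct memslot {
	unsigned long flags;
	struct arch_data arch;
};

static pthread_mutex_t slots_arch_lock = PTHREAD_MUTEX_INITIALIZER;
static struct memslot active_slot;	/* the "active" memslot copy */

/* Stand-in for alloc_all_memslots_rmaps(): activate rmaps under the lock. */
static void *activate_rmaps(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&slots_arch_lock);
	active_slot.arch.rmap = malloc(64);
	pthread_mutex_unlock(&slots_arch_lock);
	return NULL;
}

/* Stand-in for a flags-only update in kvm_set_memslot(). */
static void *flags_only_update(void *arg)
{
	/*
	 * Snapshot @old *before* taking slots_arch_lock; this unlocked
	 * read is the bug being illustrated.
	 */
	struct memslot old = active_slot;

	(void)arg;
	pthread_mutex_lock(&slots_arch_lock);
	/*
	 * If activate_rmaps() ran in between, old.arch.rmap is a stale
	 * NULL even though the active slot now has rmaps.  The quoted
	 * hunk's fix is to refresh the snapshot here, under the lock:
	 *
	 *	old.arch = active_slot.arch;
	 */
	if (!old.arch.rmap && active_slot.arch.rmap)
		printf("stale arch data: snapshot missed rmap activation\n");
	pthread_mutex_unlock(&slots_arch_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&b, NULL, activate_rmaps, NULL);
	pthread_create(&a, NULL, flags_only_update, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	free(active_slot.arch.rmap);
	return 0;
}

The message only fires when the interleaving actually happens, but that's
the whole point of the quoted hunk: once the arch data is re-read after
slots_arch_lock is taken, that interleaving becomes harmless.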