On 06.06.2018 15:57, Paolo Bonzini wrote:
> On 06/06/2018 15:28, Gonglei (Arei) wrote:
>> gonglei********: mem.slot: 3, mem.guest_phys_addr=0xc0000,
>> mem.userspace_addr=0x7fc343ec0000, mem.flags=0, memory_size=0x0
>> gonglei********: mem.slot: 3, mem.guest_phys_addr=0xc0000,
>> mem.userspace_addr=0x7fc343ec0000, mem.flags=0, memory_size=0x9000
>>
>> When the memory region is cleared, KVM marks the slot invalid
>> (it is set to KVM_MEMSLOT_INVALID).
>>
>> If SeaBIOS accesses this memory and causes a page fault, the lookup
>> by gfn (__gfn_to_pfn_memslot) finds the invalid slot, and the fault
>> handling ultimately returns a failure.
>>
>> So, my questions are:
>>
>> 1) Why don't we hold kvm->slots_lock during page fault processing?
>
> Because it's protected by SRCU. We don't need kvm->slots_lock on the
> read side.
>
>> 2) How do we ensure that vCPUs will not access the corresponding
>> region while a memory slot is being deleted?
>
> We don't. It's generally a guest bug if they do, but the problem here
> is that QEMU is splitting a memory region in two parts, and that is
> not atomic.

BTW, one ugly (but QEMU-only) fix would be to temporarily pause all
VCPUs, do the change, and then unpause all VCPUs.

> One fix could be to add a KVM_SET_USER_MEMORY_REGIONS ioctl that
> replaces the entire memory map atomically.
>
> Paolo

-- 

Thanks,

David / dhildenb