RE: An emulation failure occurs if I hotplug vcpus immediately after the VM start

> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonzini@xxxxxxxxxx]
> Sent: Wednesday, June 06, 2018 9:58 PM
> To: Gonglei (Arei) <arei.gonglei@xxxxxxxxxx>; Igor Mammedov
> <imammedo@xxxxxxxxxx>; xuyandong <xuyandong2@xxxxxxxxxx>
> Cc: Zhanghailiang <zhang.zhanghailiang@xxxxxxxxxx>; wangxin (U)
> <wangxinxin.wang@xxxxxxxxxx>; lidonglin <lidonglin@xxxxxxxxxx>;
> kvm@xxxxxxxxxxxxxxx; qemu-devel@xxxxxxxxxx; Huangweidong (C)
> <weidong.huang@xxxxxxxxxx>
> Subject: Re: An emulation failure occurs if I hotplug vcpus immediately
> after the VM start
> 
> On 06/06/2018 15:28, Gonglei (Arei) wrote:
> > gonglei********: mem.slot: 3, mem.guest_phys_addr=0xc0000,
> > mem.userspace_addr=0x7fc343ec0000, mem.flags=0, memory_size=0x0
> > gonglei********: mem.slot: 3, mem.guest_phys_addr=0xc0000,
> > mem.userspace_addr=0x7fc343ec0000, mem.flags=0,
> memory_size=0x9000
> >
> > When the memory region is cleared, KVM marks the slot invalid (it is
> > set to KVM_MEMSLOT_INVALID).
> >
> > If SeaBIOS then accesses this memory and causes a page fault, the gfn
> > lookup (via __gfn_to_pfn_memslot) hits the invalid slot, an invalid
> > pfn is returned, and the access ultimately fails with an emulation error.
> >
> > So, my questions are:
> >
> > 1) Why don't we hold kvm->slots_lock during page fault processing?
> 
> Because it's protected by SRCU.  We don't need kvm->slots_lock on the read
> side.
> 
> > 2) How do we ensure that vcpus will not access the corresponding
> > region when deleting a memory slot?
> 
> We don't.  It's generally a guest bug if they do, but the problem here is that
> QEMU is splitting a memory region in two parts and that is not atomic.
> 
> One fix could be to add a KVM_SET_USER_MEMORY_REGIONS ioctl that
> replaces the entire memory map atomically.
> 
> Paolo

After we add a KVM_SET_USER_MEMORY_REGIONS ioctl that replaces the entire
memory map atomically, how should it be used in address_space_update_topology?
Should we check for the split memory region before the two passes:

    address_space_update_topology_pass(as, old_view, new_view, false);
    address_space_update_topology_pass(as, old_view, new_view, true);





