Re: [PATCH 6/6] KVM: Dirty memory tracking for performant checkpointing and improved live migration

On 5/2/2016 12:23 PM, Radim Krčmář wrote:
> 2016-04-29 18:47+0000, Cao, Lei:
>> On 4/28/2016 2:08 PM, Radim Krčmář wrote:
>>> 2016-04-26 19:26+0000, Cao, Lei:
>>> * Is there a reason to call KVM_ENABLE_MT often?
>>
>> KVM_ENABLE_MT can be called multiple times during a protected
>> VM's lifecycle in a checkpointing system. A protected VM has two
>> instances, a primary and a secondary, and memory tracking is only
>> enabled on the primary. When we do a polite failover, memory
>> tracking is disabled on the old primary and enabled on the new
>> primary. Memory tracking is also disabled when the secondary goes
>> away, in which case the checkpoint cycle stops and there is no
>> need for tracking. When the secondary comes back, memory tracking
>> is re-enabled, the two instances sync up, and the checkpoint cycle
>> restarts.
> 
> Makes sense.
> 
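For illustration, the lifecycle described above can be modeled as a small
state machine (a sketch only; the real series routes the on/off decisions
through the KVM_ENABLE_MT ioctl, and the `struct instance` fields here are
invented names, not part of the patch):

```c
#include <stdbool.h>

/* Hypothetical model of tracking state across failover events. */
struct instance {
	bool is_primary;
	bool mt_enabled;	/* memory tracking on? (primary only) */
};

/* Secondary comes back: re-enable tracking on the primary so the
 * two instances can sync up and restart the checkpoint cycle. */
static void secondary_joined(struct instance *primary)
{
	primary->mt_enabled = true;
}

/* Secondary goes away: no checkpoint cycle, no need to track. */
static void secondary_lost(struct instance *primary)
{
	primary->mt_enabled = false;
}

/* Polite failover: tracking moves from the old primary to the new one. */
static void failover(struct instance *old_primary,
		     struct instance *new_primary)
{
	old_primary->mt_enabled = false;
	old_primary->is_primary = false;
	new_primary->is_primary = true;
	new_primary->mt_enabled = true;
}
```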
>>> * How significant is the benefit of MT_FETCH_WAIT?
>>
>> This allows the user thread that harvests dirty pages to park
>> instead of busy-waiting when there are few or no dirty pages.
> 
> True, mandatory polling could be ugly.
> 
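The parking behavior can be sketched in userspace with an ordinary
condition variable (an analogy for how the kernel-side wait might behave,
not the actual implementation; all names here are made up):

```c
#include <pthread.h>
#include <stddef.h>

#define LIST_CAP 64

/* Toy dirty-page list guarded by a mutex, with a condvar so the
 * harvesting thread can park instead of spinning when it is empty. */
struct dirty_list {
	unsigned long gfn[LIST_CAP];
	size_t count;
	pthread_mutex_t lock;
	pthread_cond_t nonempty;
};

static void dirty_list_init(struct dirty_list *dl)
{
	dl->count = 0;
	pthread_mutex_init(&dl->lock, NULL);
	pthread_cond_init(&dl->nonempty, NULL);
}

/* Producer side: a write fault logs a dirty gfn and wakes waiters. */
static void log_dirty(struct dirty_list *dl, unsigned long gfn)
{
	pthread_mutex_lock(&dl->lock);
	if (dl->count < LIST_CAP)
		dl->gfn[dl->count++] = gfn;
	pthread_cond_signal(&dl->nonempty);
	pthread_mutex_unlock(&dl->lock);
}

/* Consumer side: with the wait flag set, block until there is
 * something to fetch instead of returning 0 and forcing a poll loop. */
static size_t fetch_dirty(struct dirty_list *dl, unsigned long *out,
			  size_t max, int wait)
{
	size_t n = 0;

	pthread_mutex_lock(&dl->lock);
	while (wait && dl->count == 0)
		pthread_cond_wait(&dl->nonempty, &dl->lock);
	while (n < max && dl->count > 0)
		out[n++] = dl->gfn[--dl->count];
	pthread_mutex_unlock(&dl->lock);
	return n;
}
```

Without the wait flag the consumer degenerates to polling, which is the
cost MT_FETCH_WAIT avoids.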
>>> * When would you disable MT_FETCH_REARM?
>>
>> In a checkpointing system, dirty pages are harvested after the VM
>> is paused. Userspace can choose to rearm the write traps all at
>> once, via KVM_REARM_DIRTY_PAGES, after all the dirty pages have
>> been fetched; in that case the traps don't need to be rearmed
>> during each fetch.
> 
> Ah, it makes a difference when you don't plan to run the VM again.
> 
> I guess all three of them are worth it.
> (Might change my mind when I gain better understanding.)
> 
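The per-fetch versus bulk rearm trade-off can be modeled with two flag
arrays (purely illustrative; `armed`/`dirty` are hypothetical stand-ins
for the real write-protection and dirty-tracking structures):

```c
#include <stdbool.h>
#include <stddef.h>

#define NPAGES 16

/* Toy model: a page is "armed" while its write trap is in place, and
 * becomes dirty (and disarmed) when the guest writes to it. */
struct mt_state {
	bool armed[NPAGES];
	bool dirty[NPAGES];
};

static void guest_write(struct mt_state *s, size_t pfn)
{
	if (s->armed[pfn]) {
		s->armed[pfn] = false;	/* trap fires once, then writable */
		s->dirty[pfn] = true;
	}
}

/* Fetch one dirty page.  With rearm=true (the MT_FETCH_REARM case),
 * the write trap is re-armed as part of the fetch. */
static bool fetch_page(struct mt_state *s, size_t pfn, bool rearm)
{
	if (!s->dirty[pfn])
		return false;
	s->dirty[pfn] = false;
	if (rearm)
		s->armed[pfn] = true;
	return true;
}

/* With rearm deferred, one bulk pass (the KVM_REARM_DIRTY_PAGES case)
 * re-arms everything after the paused VM has been drained. */
static void rearm_all(struct mt_state *s)
{
	for (size_t i = 0; i < NPAGES; i++)
		s->armed[i] = true;
}
```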
>>> * What drawbacks had an interface without explicit checkpointing cycles?
>>
>> The checkpointing cycle then has to be implemented in userspace to
>> use this interface.
> 
> But isn't the explicit cycle necessary only in userspace?
> The dirty list could be implemented as a circular buffer, so KVM
> wouldn't need an explicit notification about the new cycle -- the
> userspace would just drain all dirty pages and unpause vcpus.
> (Quiesced can be stateless one-time kick of waiters instead.)
> 
> Thanks.
> 
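A ring-buffer dirty list along the lines suggested above might look like
this (a userspace sketch assuming a single producer and a single consumer;
the names are invented for illustration):

```c
#include <stddef.h>

#define RING_SIZE 8	/* power of two so we can mask instead of mod */

/* Sketch of a circular dirty-gfn buffer: the producer (fault path)
 * advances head, the consumer (harvest thread) advances tail, and no
 * explicit "new cycle" notification is needed -- userspace just
 * drains until the ring is empty and unpauses the vcpus. */
struct dirty_ring {
	unsigned long gfn[RING_SIZE];
	size_t head;	/* next slot to write */
	size_t tail;	/* next slot to read */
};

static int ring_push(struct dirty_ring *r, unsigned long gfn)
{
	if (r->head - r->tail == RING_SIZE)
		return -1;	/* full: real code would throttle the vcpu */
	r->gfn[r->head++ & (RING_SIZE - 1)] = gfn;
	return 0;
}

/* Drain everything currently logged; returns the number of entries. */
static size_t ring_drain(struct dirty_ring *r, unsigned long *out,
			 size_t max)
{
	size_t n = 0;

	while (n < max && r->tail != r->head)
		out[n++] = r->gfn[r->tail++ & (RING_SIZE - 1)];
	return n;
}
```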

Good point. I might be able to do away with the explicit cycles. I'll
see what else I can do to simplify the interface.

Thanks!


