Re: [PATCH] KVM/x86: Increase max vcpu number to 352

2017-08-15 11:00+0800, Lan Tianyu:
> On 2017-08-12 03:35, Konrad Rzeszutek Wilk wrote:
>> On Fri, Aug 11, 2017 at 03:00:20PM +0200, Radim Krčmář wrote:
>>> 2017-08-11 10:11+0200, David Hildenbrand:
>>>> On 11.08.2017 09:49, Lan Tianyu wrote:
>>>>> On 2017-08-11 01:50, Konrad Rzeszutek Wilk wrote:
>>>>>> Are there any issues with increasing the value from 288 to 352 right now?
>>>>>
>>>>> None found.
>>>
>>> Yeah, the only issue until around 2^20 (when we reach the maximum of
>>> logical x2APIC addressing) should be the size of per-VM arrays when
>>> only a few VCPUs are going to be used.

(I was talking only about the KVM side.)
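
(For context: the per-VM cost comes from fixed-size arrays indexed by
vcpu id.  A simplified sketch, not the exact kernel definitions, of how
KVM_MAX_VCPUS drives that size:

    /* arch/x86/include/asm/kvm_host.h (simplified) */
    #define KVM_MAX_VCPUS 352	/* raised from 288 by this patch */

    /* include/linux/kvm_host.h (simplified sketch) */
    struct kvm {
    	/*
    	 * One pointer slot per possible vcpu, allocated for every VM
    	 * even when the guest only brings up a handful of vcpus.
    	 */
    	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
    	atomic_t online_vcpus;
    	/* ... */
    };

so going from 288 to 352 costs each VM only 64 extra pointer slots
here.)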

>> Migration with 352 CPUs all being busy dirtying memory and also poking
>> at various I/O ports (say all of them dirtying the VGA) is no problem?
> 
> This depends on what kind of workload is running during migration. I
> think this may affect service downtime since there may be a lot of
> dirty memory to transfer after stopping the vcpus. This also depends
> on how the user sets "migrate_set_downtime" for qemu. But I don't
> think increasing vcpus will break the migration function.
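
(For reference, that knob is the HMP downtime limit; an illustrative
invocation, with the destination address as a placeholder, is:

    (qemu) migrate_set_downtime 0.5
    (qemu) migrate -d tcp:dest-host:4444

which caps the final stop-and-copy pause at roughly half a second.  The
busier the 352 vcpus are dirtying memory, the harder it is to converge
under that cap.)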

Utilizing post-copy in the last migration phase should make migration of
big, busy guests possible.  (I agree that pre-copy is not going to be
feasible.)
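
A sketch of how that would look from the QEMU HMP monitor (the
destination address is a placeholder and the postcopy-ram capability
has to be enabled on both sides):

    (qemu) migrate_set_capability postcopy-ram on
    (qemu) migrate -d tcp:dest-host:4444
    (qemu) migrate_start_postcopy

After migrate_start_postcopy the destination runs the guest and pulls
the remaining pages on demand, so the downtime no longer depends on how
fast those 352 vcpus keep dirtying memory.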


