Re: [RFC PATCH v2 0/3] Support CPU hotplug for ARM64

On 7/10/2019 2:15 AM, Marc Zyngier wrote:
On 09/07/2019 20:06, Maran Wilson wrote:
On 7/5/2019 3:12 AM, James Morse wrote:
Hi guys,

(CC: +kvmarm list)

On 29/06/2019 03:42, Xiongfeng Wang wrote:
This patchset marks all the GICC nodes in the MADT as possible CPUs, even those
that are disabled; only the enabled GICC nodes are marked as present CPUs. That
way the kernel initializes some CPU-related data structures in advance, before a
CPU is actually hot-added into the system. This patchset also implements
'acpi_(un)map_cpu()' and 'arch_(un)register_cpu()' for ARM64. These functions
are needed to enable CPU hotplug.

To support CPU hotplug, we need to add all the possible GICC nodes to the MADT,
including those for CPUs that are not present but may be hot-added later. Those
CPUs are marked as disabled in their GICC nodes.
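
For concreteness, the "possible but not present" split described above can be
pictured with the minimal sketch below. It is illustrative only, not code from
the patchset; the cpumask helpers and the ACPI_MADT_ENABLED flag are the real
kernel interfaces, while the wrapper function is a made-up name.

#include <linux/init.h>
#include <linux/acpi.h>
#include <linux/cpumask.h>

/*
 * Minimal illustration: every GICC entry in the MADT contributes a possible
 * CPU, but only entries flagged as enabled are present at boot. 'cpu' is the
 * logical CPU id already allocated for this entry.
 */
static void __init sketch_mark_gicc_cpu(int cpu,
					struct acpi_madt_generic_interrupt *gicc)
{
	/* Disabled GICC entries are still valid hot-add targets. */
	set_cpu_possible(cpu, true);

	/* Only enabled entries describe CPUs that are present at boot. */
	if (gicc->flags & ACPI_MADT_ENABLED)
		set_cpu_present(cpu, true);
}
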
... what do you need this for?

(The term cpu-hotplug in the arm world almost never means hot-adding a new
package/die to the platform; we usually mean taking CPUs online/offline for
power management, e.g. cpuhp_offline_cpu_device().)

It looks like you're adding support for hot-adding a new package/die to the platform ...
but only for virtualisation.

I don't see why this is needed for virtualisation. The in-kernel irqchip needs
to know these vcpus exist before you can enter the guest for the first time. You
can't create them late. At best you're saving the host scheduling a vcpu that is
offline. Is this really a problem?

If we moved PSCI support to user-space, you could avoid creating host vcpu
threads until the guest brings the vcpu online, which would solve that problem
and save the host resources for the thread too. (And it's acpi/dt agnostic.)
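
As a point of comparison for the in-kernel PSCI model being discussed: a VMM
typically creates all of the guest's vcpus up front and starts the secondaries
powered off, so they sit parked in the host until the guest issues PSCI CPU_ON.
A minimal sketch of that step, assuming the standard KVM ioctls
(KVM_CREATE_VCPU, KVM_ARM_PREFERRED_TARGET, KVM_ARM_VCPU_INIT) with error
handling trimmed; the wrapper function itself is illustrative:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Create a vcpu that starts parked (powered off), so the guest has to
 * bring it up itself via PSCI CPU_ON. vm_fd comes from KVM_CREATE_VM. */
static int create_parked_vcpu(int vm_fd, unsigned long vcpu_id)
{
	struct kvm_vcpu_init init;
	int vcpu_fd;

	vcpu_fd = ioctl(vm_fd, KVM_CREATE_VCPU, vcpu_id);
	if (vcpu_fd < 0)
		return -1;

	/* Ask KVM which CPU target to emulate on this host. */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
		return -1;

	/* Start powered off; the guest onlines it later with PSCI. */
	init.features[0] |= 1U << KVM_ARM_VCPU_POWER_OFF;

	if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init) < 0)
		return -1;

	return vcpu_fd;
}
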

I don't see the difference here between booting the guest with 'maxcpus=1' and
bringing the vcpu online later. The only real difference seems to be moving the
can-be-online policy into the hypervisor/VMM...
Isn't that an important distinction from a cloud service provider's
perspective?

As far as I understand it, you also need CPU hotplug capabilities to support
things like the Kata runtime under Kubernetes, i.e. when implementing your
containers in the form of lightweight VMs for the additional security ... and
the orchestration layer cannot determine ahead of time how much CPU and memory
will be needed to run the pod(s).
Why would it be any different? You can pre-allocate your vcpus, leave them
parked until some external agent decides to signal the container that it can
use another bunch of CPUs. At that point, the container must actively boot
these vcpus (they aren't going to come up by magic).
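
To make that "actively boot these vcpus" step concrete: from inside the guest it
is a PSCI CPU_ON call, issued through the kernel's SMCCC helpers. A minimal
sketch, assuming the real PSCI_0_2_FN64_CPU_ON function ID and arm_smccc_smc()
helper; the wrapper function itself is illustrative:

#include <linux/arm-smccc.h>
#include <uapi/linux/psci.h>

/* Ask the hypervisor/firmware to power on a parked CPU. 'target_mpidr'
 * identifies the CPU and 'entry_point' is where it starts executing. */
static int sketch_cpu_on(unsigned long target_mpidr, unsigned long entry_point)
{
	struct arm_smccc_res res;

	arm_smccc_smc(PSCI_0_2_FN64_CPU_ON, target_mpidr, entry_point,
		      0 /* context_id */, 0, 0, 0, 0, &res);

	return (int)res.a0;	/* 0 (PSCI_RET_SUCCESS) on success */
}
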

Given that you must have sized your virtual platform to deal with the
maximum set of resources you anticipate (think of the GIC
redistributors, for example), I really wonder what you gain here.

Maybe I'm not following the alternative proposal completely, but wouldn't a
guest VM (which happens to be in control of its own OS) be able to add/online
vCPU resources without approval from the VMM this way?

Thanks,
-Maran

Thanks,
-Maran

I think physical package/die hotadd is a much bigger, uglier problem than doing
the same under virtualisation. It's best to do this on real hardware first so we
don't miss something. (cpu-topology, numa, memory, errata, timers?)
I'm worried that doing virtualisation first means the firmware requirements for
physical hotadd end up being "whatever Qemu does".
For sure, I want to model the virtualization side after the actual HW,
and not the other way around. Live reconfiguration of the interrupt
topology (and thus the whole memory map) will certainly be challenging.

Thanks,

	M.



