On Tue, 2 Apr 2019 09:56:13 +0200
David Hildenbrand <david@xxxxxxxxxx> wrote:

> >>
> >> I guess there will be quite some issues to be sorted out.
> >>
> >
> > That's what I'm getting from the feedback I got so far. But the more
> > fundamental question is about the need for it. If you think this goes
> > in the right direction to make things more generic and architecture
> > agnostic, it might be worth the effort of trying to design such a
> > solution. If instead you think this will be reinventing the wheel and
> > will not benefit any use case, then let's not waste time on this.
> >
>
> I think the general CPU hotplug/unplug infrastructure in QEMU is pretty
> much generic. The only special case most probably is hotplugging
> different topologies. But the general "device_add $MODEL-$ARCH-cpu,
> id=$ID..." + "device_del $ID" is most probably easy for QEMU users to
> deal with.
>
> The main issue, I think, really is the different hot(un)plug support
> per architecture. We heard that there might be a solution for s390x
> soon. I wonder what about other architectures.
>
> Of course, if people want to scrap ACPI completely, then

The question is why one would want this and what we would be trying to
achieve by doing so. If ACPI is removed completely, then one would need
to provide an alternative means to describe various HW, which is the
main purpose of ACPI; the ACPI bytecode methods are just nice icing on
top of that which helps to abstract drivers from HW/firmware. The idea
of using a non-standard DT instead looks like a horrible alternative.
(Well, a custom-built kernel for fixed HW (thinking about cloud) can
drop ACPI and just hardcode everything for faster boot and skip any
kind of enumeration, but that's not applicable to a general-purpose OS
and probably is not maintainable long-term.)
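For illustration, the generic device_add/device_del flow mentioned above
looks roughly like this on the QEMU HMP monitor. The exact CPU type name
and the socket/core/thread properties depend on the machine type and the
"-cpu" model (query-hotpluggable-cpus reports the valid combinations),
so treat the concrete values below as an assumed example:

    (qemu) device_add host-x86_64-cpu,id=cpu1,socket-id=1,core-id=0,thread-id=0
    (qemu) device_del cpu1

On x86 the add/removal is then signalled to the guest via ACPI, which is
exactly the architecture-specific part under discussion here.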
> a) we don't have CPU hot(un)plug for x86-64
> b) we need virtio-cpu
> c) we need a paravirt layer on top that is able to tell the guest not
>    to use a certain CPU and to account for such CPUs in QEMU.
>
> c) would be something like ballooning for memory. Start your guest
> with many CPUs but tell it to offline X CPUs. Account for the number
> of CPUs actually used in the hypervisor (fairly easy). Of course,
> whenever it comes to ballooning, you can't really differentiate
> between a sane guest ("uses all CPUs because it is not aware of the
> paravirt interface") and a malicious guest ("uses all CPUs because it
> knows how to activate them").
>
> Especially when the guest starts up, it might use all CPUs until the
> point where the virtio-whatever module is loaded and offlines the
> requested number of CPUs. Only from that point on could you detect
> malicious guests.
>
> But there is always the option to additionally limit compute power
> using cgroups. So if the guest uses more CPUs than requested, there
> might be a performance impact.
>
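As a sketch of the guest side of c): Linux already exposes CPU
offlining through sysfs, so a hypothetical virtio-cpu driver would
mainly need to decide which CPUs to take down on a host request rather
than invent a new mechanism. The CPU number below is made up for the
example, and the commands need root in the guest:

    # offline CPU 3, as the paravirt driver might do on a host request
    echo 0 > /sys/devices/system/cpu/cpu3/online
    # bring it back online later when the host returns the CPU
    echo 1 > /sys/devices/system/cpu/cpu3/online

The hypervisor side could then account for the guest's actually-online
VCPUs, analogous to how a memory balloon accounts for inflated pages.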