On Wed, 16 Mar 2022 06:47:48 -0400
"Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote:

> On Wed, Mar 16, 2022 at 10:37:49AM +0000, David Woodhouse wrote:
> > On Wed, 2022-03-16 at 05:56 -0400, Michael S. Tsirkin wrote:
> > > On Wed, Mar 16, 2022 at 09:37:07AM +0000, David Woodhouse wrote:
> > > > Yep, that's the guest operating system's choice. Not a qemu problem.
> > > >
> > > > Even if you have the split IRQ chip, if you boot a guest without
> > > > kvm-msi-ext-dest-id support, it'll refuse to use higher CPUs.
> > > >
> > > > Or if you boot a guest without X2APIC support, it'll refuse to use
> > > > higher CPUs.
> > > >
> > > > That doesn't mean a user should be *forbidden* from launching qemu in
> > > > that configuration.
> > >
> > > Well the issue with all these configs which kind of work but not
> > > the way they were specified is that down the road someone
> > > creates a VM with this config and then expects us to maintain it
> > > indefinitely.
> > >
> > > So yes, if we are not sure we can support something properly it is
> > > better to validate and exit than create a VM guests don't know how
> > > to treat.
> >
> > Not entirely sure how to reconcile that with what Daniel said in
> > https://lore.kernel.org/qemu-devel/Yi9BTkZIM3iZsvdK@xxxxxxxxxx/ which
> > was:

Generally Daniel is right, as long as it's something that real hardware
supports (usually it's the job of the upper layers, which know what guest
OS is used, to tweak the config based on that knowledge). But this is a
virt-only extension, and none of the mainline OSes I tested worked as
expected with it (Windows hangs on boot, Linux brings up only the first
255 CPUs), i.e. the user asked for this many CPUs but can't actually use
them. That would just lead to users reporting (obscure) bugs.

> > > We've generally said QEMU should not reject / block startup of valid
> > > hardware configurations, based on existence of bugs in certain guest
> > > OS, if the config would be valid for other guests.
>
> For sure, but is this a valid hardware configuration? That's
> really the question.

To me it looks like an incomplete PV feature so far. If it's a
configuration that is interesting for some users (a specially built
OS/appliance that can use CPUs able to handle only IPIs) or for
development purposes, then it should be an opt-in feature instead of
the default.

> > That said, I cannot point at a *specific* example of a guest which can
> > use the higher CPUs even when it can't direct external interrupts at
> > them. I worked on making Linux capable of it, as I said, but didn't
> > pursue that in the end.
> >
> > I *suspect* Windows might be able to do it, based on the way the
> > hyperv-iommu works (by cheating and returning -EINVAL when external
> > interrupts are directed at higher CPUs).

Testing shows that Windows (2019 and 2004 builds) doesn't work (at least
not with just kernel-irqchip=on in the current state).
(CCing Vitaly, he might know whether Windows could work and under what
conditions.)

Linux (recent-ish) was able to bring up all CPUs with APIC IDs above 255
with the 'split' irqchip and without an iommu present (at least it boots
to a command prompt).

What worked for both OSes (full boot) was split irqchip + iommu (even
without irq remapping, but I haven't tested with older guests, so irq
remapping might be required anyway).
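
In case it helps anyone reproducing this, the rough shape of the command
line I have in mind for the configuration that fully booted (split
irqchip + virtual iommu) is sketched below. The machine type, vCPU
count, memory size and CPU model are only illustrative placeholders, not
the exact invocation I tested, and the usual disk/network options are
omitted:

  qemu-system-x86_64 -accel kvm -m 8G -cpu host -smp 288 \
      -machine q35,kernel-irqchip=split \
      -device intel-iommu,intremap=on

intremap=on enables interrupt remapping in the virtual iommu; as said
above, both guests booted here even without it, but it is probably the
safer choice for guests I haven't tried.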