On Tue, May 16 2023, Marc Zyngier <maz@xxxxxxxxxx> wrote:

> On Tue, 16 May 2023 12:55:14 +0100,
> Cornelia Huck <cohuck@xxxxxxxxxx> wrote:
>>
>> Do you have more concrete ideas for QEMU CPU models already? Asking
>> because I wanted to talk about this at KVM Forum, so collecting what
>> others would like to do seems like a good idea :)
>
> I'm not being asked, but I'll share my thoughts anyway! ;-)
>
> I don't think CPU models are necessarily the most important thing.
> Specially when you look at the diversity of the ecosystem (and even
> the same CPU can be configured in different ways at integration
> time). Case in point, Neoverse N1 which can have its I/D caches made
> coherent or not. And the guest really wants to know which one it is
> (you can only lie in one direction).
>
> But being able to control the feature set exposed to the guest from
> userspace is a huge benefit in terms of migration.

Certainly; the important part is that we can keep the guest ABI
stable... which parts map to a "CPU model" in the way other
architectures use it is an interesting question. It will almost
certainly look different from e.g. s390, where we only have to deal
with a single manufacturer. I'm wondering whether we'll end up
building frankenmonster CPUs.

Another interesting aspect is how KVM ends up influencing what the
guest sees at the CPU level, as in the case where we migrate across
matching CPUs, but with a different software level. I think we want
userspace to control that to some extent, but I'm not sure whether
that fully fits into the CPU model concept.

>
> Now, this is only half of the problem (and we're back to the CPU
> model): most of these CPUs have various degrees of brokenness. Most of
> the workarounds have to be implemented by the guest, and are keyed on
> the MIDR values. So somehow, you need to be able to expose *all* the
> possible MIDR values that a guest can observe in its lifetime.

Fun is to be had...

>
> I have a vague prototype for that that I'd need to dust off and
> finish, because that's also needed for this very silly construct
> called big-little...

That would be cool to see. Or at least interesting ;)
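
As an aside, for anyone who wants to poke at the guest-visible ID
registers from userspace today: below is roughly what reading (and
trying to set) MIDR_EL1 for a vCPU via the ONE_REG interface looks
like. This is only a minimal sketch for an arm64 host, error handling
is omitted, and whether the kernel actually accepts a changed value
for a given ID register depends on the kernel version; the helper
names are made up for illustration.

/*
 * Sketch: read (and attempt to write) MIDR_EL1 for a vCPU through
 * KVM's ONE_REG interface on an arm64 host.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>
#include <asm/kvm.h>                    /* ARM64_SYS_REG() */

/* MIDR_EL1: op0=3, op1=0, CRn=0, CRm=0, op2=0 */
#define MIDR_EL1_ID     ARM64_SYS_REG(3, 0, 0, 0, 0)

static uint64_t get_midr(int vcpu_fd)
{
        uint64_t val = 0;
        struct kvm_one_reg reg = {
                .id   = MIDR_EL1_ID,
                .addr = (uint64_t)&val,
        };

        ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
        return val;
}

static int set_midr(int vcpu_fd, uint64_t val)
{
        struct kvm_one_reg reg = {
                .id   = MIDR_EL1_ID,
                .addr = (uint64_t)&val,
        };

        /* Expect -EINVAL unless the kernel lets userspace change MIDR_EL1. */
        return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}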