On Fri, Jan 13, 2023 at 05:25:22PM +0000, Marc Zyngier wrote:
> Systems with a VMID-tagged PIPT i-cache have been supported for
> a while by Linux and KVM. However, these systems never appeared
> on our side of the multiverse.
>
> Refuse to initialise KVM on such a machine, should they ever appear.
> Following changes will drop the support from the hypervisor.
>
> Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
> ---
>  arch/arm64/kvm/arm.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 9c5573bc4614..508deed213a2 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -2195,6 +2195,11 @@ int kvm_arch_init(void *opaque)
>  	int err;
>  	bool in_hyp_mode;
>
> +	if (icache_is_vpipt()) {
> +		kvm_info("Incompatible VPIPT I-Cache policy\n");
> +		return -ENODEV;
> +	}

Hmm, does this work properly with late CPU onlining? For example, if my
set of boot CPUs are all friendly PIPT and KVM initialises happily, but
then I late online a CPU with a horrible VPIPT policy, I worry that
we'll quietly do the wrong thing wrt maintenance.

If that's the case, then arguably we already have a bug in the cases
where we trap and emulate accesses to CTR_EL0 from userspace, because I
_think_ we'll change the L1Ip field at runtime after userspace could've
already read it.

Is there something that stops us from ending up in this situation?

Will
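
[For illustration, here is a minimal sketch of the kind of late-online
veto Will is describing. This is hypothetical and not part of the patch
under review: the function names (kvm_vpipt_cpu_online() and friends)
are invented, and in the kernel proper this class of mismatch would
more likely be policed via the arm64 cpufeature framework, which runs
verify_local_cpu_capabilities() on each late CPU against the finalised
system state, rather than via a standalone hotplug callback. The sketch
registers a dynamic CPU hotplug "online" state whose startup callback
reads the incoming CPU's own CTR_EL0 and fails the online operation if
its L1 I-cache policy is VPIPT, causing the hotplug core to roll the
CPU back offline.]

/*
 * Hypothetical sketch only -- not Marc's patch. Veto late onlining of
 * any CPU whose L1 I-cache policy is VPIPT, so that a PIPT-only
 * decision taken at KVM init time remains valid for the system's
 * lifetime.
 */
#include <linux/cpuhotplug.h>
#include <linux/init.h>
#include <linux/printk.h>
#include <asm/cputype.h>

static int kvm_vpipt_cpu_online(unsigned int cpu)
{
	/*
	 * Read this CPU's own CTR_EL0 rather than the system-wide
	 * sanitised copy, which was finalised from the boot CPUs.
	 */
	u64 ctr = read_cpuid_cachetype();

	/* CTR_EL0.L1Ip lives in bits [15:14]; 0b00 denotes VPIPT */
	if (((ctr >> 14) & 0x3) == 0) {
		pr_err("CPU%u: VPIPT I-cache policy, refusing to online\n",
		       cpu);
		return -ENODEV;	/* hotplug core rolls this CPU back down */
	}

	return 0;
}

static int __init kvm_vpipt_hotplug_init(void)
{
	int ret;

	/*
	 * A dynamic AP-online state: the callback also runs on every
	 * CPU that comes up after this point, including late-onlined
	 * ones. cpuhp_setup_state() returns the allocated state number
	 * (> 0) on success for dynamic states, so normalise to 0.
	 */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "kvm/arm64:vpipt",
				kvm_vpipt_cpu_online, NULL);
	return ret < 0 ? ret : 0;
}

[Note that rejecting the CPU at online time, as above, would also keep
the CTR_EL0.L1Ip value that userspace sees via trapped reads stable,
which is the second problem Will raises.]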