>On Mon, Jun 13, 2022 at 05:16:48PM +0000, Sean Christopherson wrote:
>>The shortlog is not at all helpful, it doesn't say anything about the
>>actual functional change.
>>
>>  KVM: x86: Don't advertise PV IPI to userspace if IPIs are virtualized
>>
>>On Mon, Jun 13, 2022, wangguangju wrote:
>>> Commit d588bb9be1da ("KVM: VMX: enable IPI virtualization") enables
>>> IPI virtualization on the Intel SPR platform. There is no point in
>>> using PVIPI if IPIv is supported; the guest performs no better with
>>> PVIPI than without it.
>>>
>>> So add a bool variable to distinguish whether to use PVIPI.
>>
>>Similar complaint with the changelog, it doesn't actually call out why
>>PV IPIs are unwanted.
>>
>>  Don't advertise PV IPI support to userspace if IPI virtualization is
>>  supported by the CPU. Hardware virtualization of IPIs is more
>>  performant as senders do not need to exit.
>
>PVIPI is mainly [*] for sending multi-cast IPIs. Intel IPI
>virtualization can virtualize only uni-cast IPIs. Their use cases don't
>overlap. So, I don't think it makes sense to disable PVIPI if Intel IPI
>virtualization is supported.

A question: in x2APIC mode, the guest uses PVIPI by replacing
apic->send_IPI_mask with kvm_send_ipi_mask. The original implementation
is __x2apic_send_IPI_mask(), which loops over each CPU in the mask and
sends a separate unicast IPI to each one. Does that mean Intel IPI
virtualization cannot work in this case?

Thanks.

static void
__x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
{
        unsigned long query_cpu;
        unsigned long this_cpu;
        unsigned long flags;

        /* x2apic MSRs are special and need a special fence: */
        weak_wrmsr_fence();

        local_irq_save(flags);

        this_cpu = smp_processor_id();
        for_each_cpu(query_cpu, mask) {
                /* Skip the sending CPU when the shorthand is "all but self". */
                if (apic_dest == APIC_DEST_ALLBUT && this_cpu == query_cpu)
                        continue;
                /* One unicast ICR write per destination CPU. */
                __x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu),
                                       vector, APIC_DEST_PHYSICAL);
        }
        local_irq_restore(flags);
}
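
For contrast, the guest-side PVIPI path batches the whole destination
mask into a single hypercall instead of writing the ICR once per CPU.
Below is a simplified sketch modeled on __send_ipi_mask() in
arch/x86/kernel/kvm.c (the function name here is made up); it assumes
every APIC ID fits in the 128-bit bitmap and omits the minimum-APIC-ID
bitmap base, the NMI_VECTOR special case, and the error handling that
the real function has.

/*
 * Simplified sketch of the guest-side PV IPI send path, modeled on
 * __send_ipi_mask() in arch/x86/kernel/kvm.c.  Assumes every
 * destination APIC ID fits in the 128-bit bitmap; the real code
 * issues multiple hypercalls otherwise, using the minimum APIC ID
 * as the bitmap base.
 */
static void pv_send_ipi_mask_sketch(const struct cpumask *mask, int vector)
{
        unsigned long ipi_bitmap_low = 0, ipi_bitmap_high = 0;
        unsigned long flags, icr = APIC_DM_FIXED | vector;
        unsigned long query_cpu;
        u32 apic_id;

        local_irq_save(flags);

        /* Pack every destination APIC ID into one 128-bit bitmap. */
        for_each_cpu(query_cpu, mask) {
                apic_id = per_cpu(x86_cpu_to_apicid, query_cpu);
                if (apic_id < BITS_PER_LONG)
                        __set_bit(apic_id, &ipi_bitmap_low);
                else
                        __set_bit(apic_id - BITS_PER_LONG, &ipi_bitmap_high);
        }

        /*
         * A single KVM_HC_SEND_IPI hypercall, i.e. one VM-exit,
         * delivers the IPI to every destination, versus one ICR
         * write per CPU in __x2apic_send_IPI_mask() above.
         */
        kvm_hypercall4(KVM_HC_SEND_IPI, ipi_bitmap_low, ipi_bitmap_high,
                       0, icr);

        local_irq_restore(flags);
}

That batching is where PVIPI helps multi-cast; IPIv instead removes the
exit for each individual uni-cast ICR write, which is why the two
mechanisms cover different cases.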