On Wed, Jun 15, 2022 at 04:21:21AM +0000, Wang,Guangju wrote:
>> On Mon, Jun 13, 2022 at 05:16:48PM +0000, Sean Christopherson wrote:
>>> The shortlog is not at all helpful, it doesn't say anything about the
>>> actual functional change.
>>>
>>>   KVM: x86: Don't advertise PV IPI to userspace if IPIs are virtualized
>>>
>>> On Mon, Jun 13, 2022, wangguangju wrote:
>>>> Commit d588bb9be1da ("KVM: VMX: enable IPI virtualization") enables
>>>> IPI virtualization on Intel SPR platforms. There is no point in using
>>>> PV IPI if IPIv is supported; it doesn't work any better with PV IPI
>>>> than without it.
>>>>
>>>> So add a bool variable to distinguish whether to use PV IPI.
>>>
>>> Similar complaint with the changelog, it doesn't actually call out why
>>> PV IPIs are unwanted.
>>>
>>>   Don't advertise PV IPI support to userspace if IPI virtualization is
>>>   supported by the CPU. Hardware virtualization of IPIs is more
>>>   performant as senders do not need to exit.
>>
>> PV IPI is mainly [*] for sending multi-cast IPIs. Intel IPI
>> virtualization can virtualize only uni-cast IPIs. Their use cases don't
>> overlap, so I don't think it makes sense to disable PV IPI if Intel IPI
>> virtualization is supported.
>
> A question: in x2apic mode, the guest uses PV IPI by replacing
> apic->send_IPI_mask with kvm_send_ipi_mask. The original implementation
> is __x2apic_send_IPI_mask, which loops over each destination CPU to send
> an IPI. So in this case, can Intel IPI virtualization still work? Thanks.

Yes, it can work. But some experiments we conducted with a modified
kvm-unit-test showed that PV IPI outperforms native ICR writes (w/ IPI
virtualization) when sending multi-cast (i.e., dest vCPUs >= 2) IPIs.