Re: Re: [RFC] KVM: x86: SVM: don't expose PV_SEND_IPI feature with AVIC

On 11/16/21 10:48 AM, Wanpeng Li wrote:
On Mon, 8 Nov 2021 at 22:09, Maxim Levitsky <mlevitsk@xxxxxxxxxx> wrote:

On Mon, 2021-11-08 at 11:30 +0100, Paolo Bonzini wrote:
On 11/8/21 10:59, Kele Huang wrote:
Currently, AVIC is disabled if the x2apic feature is exposed to the guest
or the in-kernel PIT is in re-injection mode.

We can enable AVIC with options:

    Kmod args:
    modprobe kvm_amd avic=1 nested=0 npt=1
    QEMU args:
    ... -cpu host,-x2apic -global kvm-pit.lost_tick_policy=discard ...

When the LAPIC works in xAPIC mode, both AVIC and the PV_SEND_IPI feature
can accelerate IPI operations for the guest. However, the relationship
between AVIC and the PV_SEND_IPI feature has not been sorted out.

Logically, AVIC accelerates most frequent IPI operations without VMM
intervention, while the re-hooking of apic->send_IPI_xxx by the
PV_SEND_IPI feature masks it out. People can get confused if AVIC is
enabled yet they see lots of hypercall kvm_exits caused by IPIs.
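
For reference, a rough sketch of the guest-side path that takes over once
PV_SEND_IPI is advertised (loosely based on arch/x86/kernel/kvm.c and
heavily simplified; the real code splits the bitmap in two halves and
builds a full ICR value rather than passing the raw vector):

    /* Simplified sketch of the guest PV IPI path: all targets are folded
     * into one bitmap and sent with a single KVM_HC_SEND_IPI hypercall,
     * so the IPI becomes a VM exit instead of an AVIC-accelerated ICR
     * write.  APIC IDs beyond BITS_PER_LONG are ignored here. */
    static void kvm_send_ipi_mask(const struct cpumask *mask, int vector)
    {
            unsigned long ipi_bitmap = 0;
            int cpu;

            for_each_cpu(cpu, mask)
                    __set_bit(per_cpu(x86_cpu_to_apicid, cpu), &ipi_bitmap);

            kvm_hypercall4(KVM_HC_SEND_IPI, ipi_bitmap, 0, 0, vector);
    }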

In performance, benchmark tool
https://lore.kernel.org/kvm/20171219085010.4081-1-ynorov@xxxxxxxxxxxxxxxxxx/
shows below results:

    Test env:
    CPU: AMD EPYC 7742 64-Core Processor
    2 vCPUs pinned 1:1
    idle=poll

    Test result (average ns per IPI over many runs):
    PV_SEND_IPI      : 1860
    AVIC             : 1390

Besides, the discussion at https://lkml.org/lkml/2021/10/20/423
includes some solid performance test results on this.

This patch fixes this by masking out the PV_SEND_IPI feature when AVIC
is enabled, while setting up the guest vCPUs' CPUID.
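
A minimal sketch of what that masking could look like (illustrative only;
for example in the KVM_CPUID_FEATURES (0x40000001) handling in
arch/x86/kvm/cpuid.c, with enable_apicv standing in for "AVIC is active";
the posted patch may hook in elsewhere):

    /* Hide PV_SEND_IPI from the guest when the LAPIC is hardware
     * accelerated, so the guest keeps using the AVIC fast path. */
    if (enable_apicv)
            entry->eax &= ~(1 << KVM_FEATURE_PV_SEND_IPI);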

Signed-off-by: Kele Huang <huangkele@xxxxxxxxxxxxx>

AVIC can change across migration.  I think we should instead use a new
KVM_HINTS_* bit (KVM_HINTS_ACCELERATED_LAPIC or something like that).
The KVM_HINTS_* bits are intended to be changeable across migration,
even though we don't have for now anything equivalent to the Hyper-V
reenlightenment interrupt.
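
On the guest side that could mirror the existing KVM_HINTS_REALTIME check,
roughly as below (KVM_HINTS_ACCELERATED_LAPIC and its bit value are
hypothetical; the hook names follow arch/x86/kernel/kvm.c):

    #define KVM_HINTS_ACCELERATED_LAPIC  1   /* hypothetical bit, not upstream */

    static void kvm_setup_pv_ipi(void)
    {
            /* Keep the native (AVIC/APICv-accelerated) IPI path when the
             * host hints that the LAPIC is hardware accelerated; a hint,
             * unlike a feature bit, is allowed to change across migration. */
            if (kvm_para_has_hint(KVM_HINTS_ACCELERATED_LAPIC))
                    return;

            apic->send_IPI_mask = kvm_send_ipi_mask;
            apic->send_IPI_mask_allbutself = kvm_send_ipi_mask_allbutself;
    }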

Note that the same issue exists with Hyper-V. It also has a PV APIC,
which is harmful when AVIC is enabled (that is, the guest uses it instead
of AVIC, negating AVIC's benefits).

Also note that Intel recently posted IPI virtualization, which will soon
make this issue relevant to APICv as well.

The recently posted Intel IPI virtualization will accelerate unicast
IPIs but not broadcast IPIs; AMD AVIC accelerates unicast IPIs well but
handles broadcast IPIs worse than PV IPIs. Could we just handle
unicast IPIs here?

     Wanpeng

Depending on the number of target vCPUs, broadcast IPIs get unstable performance with AVIC, usually worse than PV send-IPI. So I agree with Wanpeng's point; is it possible to separate single IPIs from broadcast IPIs on a hardware-accelerated platform?
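
A guest-side sketch of that split, purely for illustration (the
pv_ipi_orig_send_IPI pointer and the kvm_send_ipi_mask_split name are
hypothetical; kvm_send_ipi_mask is the hypercall-based PV path):

    /* Use the PV hypercall only for multi-target IPIs, where the numbers
     * in this thread suggest it wins, and keep single-target IPIs on the
     * native, AVIC-accelerated ICR path. */
    static void (*pv_ipi_orig_send_IPI)(int cpu, int vector);

    static void kvm_send_ipi_mask_split(const struct cpumask *mask, int vector)
    {
            if (cpumask_weight(mask) == 1)
                    pv_ipi_orig_send_IPI(cpumask_first(mask), vector);
            else
                    kvm_send_ipi_mask(mask, vector);
    }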

--
zhenwei pi


