RE: [PATCH] x86/hyper-v: micro-optimize send_ipi_one case

From: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> Sent: Friday, October 25, 2019 3:44 AM
> 
> Roman Kagan <rkagan@xxxxxxxxxxxxx> writes:
> 
> > On Thu, Oct 24, 2019 at 05:21:52PM +0200, Vitaly Kuznetsov wrote:
> >> When sending an IPI to a single CPU there is no need to deal with cpumasks.
> >> With a 2 CPU guest on WS2019 I'm seeing a minor (about 3%, 8043 -> 7761 CPU
> >> cycles) improvement with the smp_call_function_single() loop benchmark. The
> >> optimization, however, is tiny and straightforward. Also, send_ipi_one() is
> >> important for the PV spinlock kick.
> >>
> >> I was also wondering if it would make sense to switch to using regular
> >> APIC IPI send for the CPU > 64 case but no, it is twice as expensive (12650 CPU
> >> cycles for __send_ipi_mask_ex() call, 26000 for orig_apic.send_IPI(cpu,
> >> vector)).
> >
> > Is it with APICv or emulated apic?
> 
> That's actually a good question. Yesterday I was testing this on WS2019
> host with Xeon e5-2420 v2 (Ivy Bridge EN) which I *think* should already
> support APICv - but I'm not sure and ark.intel.com is not
> helpful. Today, I decided to re-test on something more modern and I got
> WS2016 host with E5-2667 v4 (Broadwell) and the results are:
> 
> 'Ex' hypercall: 18000 cycles
> orig_apic.send_IPI(): 46000 cycles
> 
> I'm, however, just assuming that Hyper-V uses APICv when it's available
> and have no idea how to check from within the guest. I'm also not sure
> if WS2019 is so much faster or if there are other differences on these
> hosts which matter.
> 

On Hyper-V 2016 and 2019 (not sure about 2012 R2), and when the guest is
using xAPIC (not x2APIC), you can tell from within the guest whether Intel APICv
is enabled by checking the HV_X64_APIC_ACCESS_RECOMMENDED flag. If the
flag is set, then APICv is not present, because Hyper-V only recommends using
the synthetic MSRs when APICv is not present. Conversely, if the flag is not
set, then APICv is present.
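
For reference, that recommendation bit is EAX bit 3 of CPUID leaf 0x40000004
(HYPERV_CPUID_ENLIGHTMENT_INFO in the Linux headers), which the kernel caches
in ms_hyperv.hints. A minimal userspace sketch to read it from inside the guest
(the leaf and bit numbers follow the TLFS; the printed interpretation assumes
the xAPIC case described above):

    #include <stdio.h>
    #include <cpuid.h>

    #define HYPERV_CPUID_ENLIGHTMENT_INFO   0x40000004
    #define HV_X64_APIC_ACCESS_RECOMMENDED  (1u << 3)

    int main(void)
    {
            unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

            /* Hyper-V "implementation recommendations" leaf */
            __cpuid(HYPERV_CPUID_ENLIGHTMENT_INFO, eax, ebx, ecx, edx);

            if (eax & HV_X64_APIC_ACCESS_RECOMMENDED)
                    printf("APIC access MSRs recommended -> APICv likely not in use\n");
            else
                    printf("APIC access MSRs not recommended -> APICv likely in use\n");

            return 0;
    }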

FWIW, when APICv is present in the hardware, you can disable its use in a
particular VM by using PowerShell on the host to set compatibility mode on
the VM:

Set-VMProcessor <vmname> -CompatibilityForMigrationEnabled $true

Then when the VM is booted, HV_X64_APIC_ACCESS_RECOMMENDED
will show as set even if the hardware has APICv.  This is useful for testing
the older code paths on hardware that has APICv.

Michael



