On Fri, Oct 25, 2019 at 03:15:46PM +0200, Vitaly Kuznetsov wrote:
> When sending an IPI to a single CPU there is no need to deal with cpumasks.
> With a 2 CPU guest on WS2019 I'm seeing a minor (~3%, 8043 -> 7761 CPU
> cycles) improvement with the smp_call_function_single() loop benchmark. The
> optimization, however, is tiny and straightforward. Also, send_ipi_one() is
> important for the PV spinlock kick.
>
> I was also wondering if it would make sense to switch to using the regular
> APIC IPI send for the CPU > 64 case, but no, it is twice as expensive (12650
> CPU cycles for the __send_ipi_mask_ex() call, 26000 for
> orig_apic.send_IPI(cpu, vector)).
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> ---
> Changes since v1:
> - Style changes [Roman, Joe]
> ---
>  arch/x86/hyperv/hv_apic.c           | 13 ++++++++++---
>  arch/x86/include/asm/trace/hyperv.h | 15 +++++++++++++++
>  2 files changed, 25 insertions(+), 3 deletions(-)

Reviewed-by: Roman Kagan <rkagan@xxxxxxxxxxxxx>
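
For context, the single-CPU fast path described in the commit message boils
down to issuing one fast hypercall with the vector and a one-bit VP mask
passed in registers, with no cpumask construction at all. Below is a minimal
sketch reconstructed from that description, not copied from the diff itself;
the helpers and constants it uses (hv_cpu_number_to_vp_number(),
hv_do_fast_hypercall16(), HVCALL_SEND_IPI, HV_IPI_LOW_VECTOR,
HV_IPI_HIGH_VECTOR, VP_INVAL) are existing Hyper-V definitions in the kernel
tree (asm/mshyperv.h, asm/hyperv-tlfs.h), but consult the patch in
arch/x86/hyperv/hv_apic.c for the final code.

	/* Sketch: single-CPU IPI send via a Hyper-V fast hypercall. */
	static bool __send_ipi_one(int cpu, int vector)
	{
		int vp = hv_cpu_number_to_vp_number(cpu);

		if (!hv_hypercall_pg || vp == VP_INVAL)
			return false;

		/* HVCALL_SEND_IPI only accepts fixed-priority vectors. */
		if (vector < HV_IPI_LOW_VECTOR || vector > HV_IPI_HIGH_VECTOR)
			return false;

		/*
		 * VP numbers >= 64 don't fit in the 64-bit mask of the fast
		 * hypercall; fall back to the EX variant, which the quoted
		 * numbers show is still about half the cost of a regular
		 * APIC IPI here.
		 */
		if (vp >= 64)
			return __send_ipi_mask_ex(cpumask_of(cpu), vector);

		/*
		 * Fast hypercall: the vector and a single-bit VP mask are
		 * passed in registers, so no cpumask or hypercall input
		 * page is needed. A zero status means success.
		 */
		return !hv_do_fast_hypercall16(HVCALL_SEND_IPI, vector,
					       BIT_ULL(vp));
	}

The saved cycles come from the last line: for VP numbers below 64, no
cpumask is built and no hypercall input page is touched on the way to the
hypervisor.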