Wanpeng Li <kernellwp@xxxxxxxxx> writes:

> Hi Vitaly, (fix my reply mess this time)
>
> On Sat, 23 Jun 2018 at 01:09, Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> wrote:
>>
>> When reviewing my "x86/hyper-v: use cheaper HVCALL_FLUSH_VIRTUAL_ADDRESS_
>> {LIST,SPACE} hypercalls when possible" patch, Michael suggested applying
>> the same idea to PV IPIs. Here we go!
>>
>> Despite what the Hyper-V TLFS says about the HVCALL_SEND_IPI hypercall,
>> it can actually be 'fast' (passing parameters through registers). Use
>> that too.
>>
>> This series can collide with my "KVM: x86: hyperv: PV IPI support for
>> Windows guests" series as I rename the ipi_arg_non_ex/ipi_arg_ex
>> structures there. Depending on which one gets in first we may need to
>> make tiny adjustments.
>
> As Hyper-V PV TLB flush has already been merged, are there any other
> obvious multicast IPI scenarios? qemu has supported interrupt remapping
> for two years now; I think a Windows guest can switch to cluster mode
> after entering x2APIC, so it sends IPIs per cluster. In addition, could
> you also post benchmark results for this PV IPI optimization, even
> though it also fixes the bug you mentioned above?

I got confused, which of my patch series are you actually looking at? :-)

This particular one ("x86/hyper-v: optimize PV IPIs") is not about
KVM/qemu, it is for Linux running on top of a real Hyper-V server. We
already support PV IPIs; here I'm just trying to optimize the way we
send them by switching to a cheaper hypercall (and using the 'fast'
version of it) when possible.

I don't actually have a good benchmark (and I don't remember seeing one
when K.Y. posted PV IPI support), but this can be arranged, I guess: I
can write a dumb 'IPI sender' in the kernel and send e.g. 1000 IPIs.

-- 
Vitaly
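P.S. For illustration, a minimal sketch of what such a dumb 'IPI sender'
could look like as a throwaway kernel module. This assumes plain
smp_call_function() is enough to exercise the PV IPI path (on Hyper-V it
should end up in the hv_apic send_IPI callbacks); the module name, IPI
count, and timing approach are arbitrary, not anything from the series:

    /*
     * ipi_bench: hypothetical throwaway benchmark, untested sketch.
     * Sends NR_IPIS empty function-call IPIs to all other online CPUs
     * and reports the total elapsed time.
     */
    #include <linux/module.h>
    #include <linux/smp.h>
    #include <linux/ktime.h>

    #define NR_IPIS 1000

    static void do_nothing_ipi(void *unused)
    {
            /* Empty payload: we only care about IPI delivery cost. */
    }

    static int __init ipi_bench_init(void)
    {
            ktime_t start, end;
            int i;

            start = ktime_get();
            for (i = 0; i < NR_IPIS; i++)
                    /* wait=1 so each round trip completes before the next. */
                    smp_call_function(do_nothing_ipi, NULL, 1);
            end = ktime_get();

            pr_info("ipi_bench: %d IPIs in %lld ns\n", NR_IPIS,
                    ktime_to_ns(ktime_sub(end, start)));

            /* Nothing to keep loaded; fail init so the module unloads. */
            return -EAGAIN;
    }

    module_init(ipi_bench_init);
    MODULE_LICENSE("GPL");

Loading it once with the old hypercall path and once with the new
'fast' HVCALL_SEND_IPI path and comparing the reported totals should be
enough to show whether the optimization is worth it.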