On Thu, Apr 07, 2022, Vitaly Kuznetsov wrote:
> Get rid of on-stack allocation of vcpu_mask and optimize kvm_hv_send_ipi()
> for a smaller number of vCPUs in the request. When Hyper-V TLB flush
> is in use, HvSendSyntheticClusterIpi{,Ex} calls are not commonly used to
> send IPIs to a large number of vCPUs (and are rarely used in general).
> 
> Introduce hv_is_vp_in_sparse_set() to directly check if the specified
> VP_ID is present in sparse vCPU set.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> ---
>  arch/x86/kvm/hyperv.c | 35 ++++++++++++++++++++++++-----------
>  1 file changed, 24 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index d7bcdf87b90c..918642bcdbd0 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -1746,6 +1746,23 @@ static void sparse_set_to_vcpu_mask(struct kvm *kvm, u64 *sparse_banks,
>  	}
>  }
>  
> +static bool hv_is_vp_in_sparse_set(u32 vp_id, u64 valid_bank_mask, u64 sparse_banks[])
> +{
> +	int bank, sbank = 0;
> +
> +	if (!test_bit(vp_id / 64, (unsigned long *)&valid_bank_mask))

'64' really, really, really needs a #define.  I assume this is the same
'64' that's used to check the var_cnt when getting the sparse_banks.
(Rough sketch of what I have in mind at the bottom of this mail.)

> +		return false;
> +
> +	for_each_set_bit(bank, (unsigned long *)&valid_bank_mask,
> +			 KVM_HV_MAX_SPARSE_VCPU_SET_BITS) {
> +		if (bank == vp_id / 64)
> +			break;
> +		sbank++;
> +	}
> +
> +	return test_bit(vp_id % 64, (unsigned long *)&sparse_banks[sbank]);
> +}
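To make the '64' comment concrete, here's a rough sketch of the helper with
the magic number behind a #define.  HV_VCPUS_PER_SPARSE_BANK is a placeholder
name I made up, not something in this patch; the point is only that the
divisor, the modulo, and the var_cnt sanity check should all use the same
constant.

/*
 * Sketch only: HV_VCPUS_PER_SPARSE_BANK is a placeholder name for the
 * proposed #define, it does not exist in this patch.  Each sparse bank
 * covers 64 VPs, i.e. one u64 worth of bits.
 */
#define HV_VCPUS_PER_SPARSE_BANK	64

static bool hv_is_vp_in_sparse_set(u32 vp_id, u64 valid_bank_mask,
				   u64 sparse_banks[])
{
	int valid_bit_nr = vp_id / HV_VCPUS_PER_SPARSE_BANK;
	int bank, sbank = 0;

	if (!test_bit(valid_bit_nr, (unsigned long *)&valid_bank_mask))
		return false;

	/*
	 * The index into sparse_banks is the number of valid banks that
	 * precede the bank containing the target VP.
	 */
	for_each_set_bit(bank, (unsigned long *)&valid_bank_mask,
			 KVM_HV_MAX_SPARSE_VCPU_SET_BITS) {
		if (bank == valid_bit_nr)
			break;
		sbank++;
	}

	return test_bit(vp_id % HV_VCPUS_PER_SPARSE_BANK,
			(unsigned long *)&sparse_banks[sbank]);
}

Bonus points if KVM_HV_MAX_SPARSE_VCPU_SET_BITS is (or becomes) defined in
terms of the same constant, so the bank-index / per-bank-bit relationship is
spelled out in exactly one place.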