Re: [PATCH v5 4/8] KVM: Optimize kvm_make_vcpus_request_mask() a bit

On Fri, Sep 03, 2021, Vitaly Kuznetsov wrote:
> Iterating over set bits in 'vcpu_bitmap' should be faster than going
> through all vCPUs, especially when just a few bits are set.
> 
> Drop the kvm_make_vcpus_request_mask() call from kvm_make_all_cpus_request_except()
> to avoid handling the special case when 'vcpu_bitmap' is NULL, and move that
> code into kvm_make_all_cpus_request_except() itself.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> ---
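
For readers without the full diff: the claim in the first paragraph boils
down to replacing a full kvm_for_each_vcpu() walk that tests each vCPU's bit
with iteration over only the set bits, roughly along these lines (a
simplified sketch of the before/after shape, not the exact kvm_main.c hunks):

	/* Before: visit every vCPU, skip the ones whose bit isn't set. */
	kvm_for_each_vcpu(i, vcpu, kvm) {
		if ((vcpu_bitmap && !test_bit(i, vcpu_bitmap)) || vcpu == except)
			continue;
		/* ... make the request and kick the vCPU ... */
	}

	/* After: visit only the vCPUs whose bit is set. */
	for_each_set_bit(i, vcpu_bitmap, KVM_MAX_VCPUS) {
		vcpu = kvm_get_vcpu(kvm, i);
		if (!vcpu || vcpu == except)
			continue;
		/* ... make the request and kick the vCPU ... */
	}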

Reviewed-by: Sean Christopherson <seanjc@xxxxxxxxxx>

>  bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
>  				 struct kvm_vcpu *except,
>  				 unsigned long *vcpu_bitmap, cpumask_var_t tmp)
>  {
> -	int i, cpu, me;
>  	struct kvm_vcpu *vcpu;
> +	int i, me;
>  	bool called;

Uber nit: if you're moving "int i, me;" to get reverse fir tree ordering, it
should be moved below "bool called;" as well, which you amusingly did do in
the function below :-)
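
(For reference, "reverse fir tree" just means longest declaration first,
shortest last, i.e. something like:

	struct kvm_vcpu *vcpu;
	bool called;
	int i, me;

so "int i, me;" would sit at the bottom here as well.)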
>  
>  	me = get_cpu();
>  

...

> @@ -316,12 +323,23 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
>  bool kvm_make_all_cpus_request_except(struct kvm *kvm, unsigned int req,
>  				      struct kvm_vcpu *except)
>  {
> +	struct kvm_vcpu *vcpu;
>  	cpumask_var_t cpus;
>  	bool called;
> +	int i, me;
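
No concerns with the move itself. For readers without the full diff, the
code being folded in here is presumably just a plain walk over all vCPUs
that skips 'except', something like (a rough sketch of the shape, not the
exact hunk):

	me = get_cpu();

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu == except)
			continue;
		/* record the request for this vCPU and collect its pCPU in 'cpus' */
	}

	/* then send the IPIs for everything gathered in 'cpus', as before */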


