Re: [PATCH v2 07/11] KVM: x86: add a delayed hardware NMI injection interface

On Tue, Nov 29, 2022, Maxim Levitsky wrote:
> This patch adds two new vendor callbacks:

No "this patch" please, just say what it does.

> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 684a5519812fb2..46993ce61c92db 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -871,8 +871,13 @@ struct kvm_vcpu_arch {
>  	u64 tsc_scaling_ratio; /* current scaling ratio */
>  
>  	atomic_t nmi_queued;  /* unprocessed asynchronous NMIs */
> -	unsigned nmi_pending; /* NMI queued after currently running handler */
> +
> +	unsigned int nmi_pending; /*
> +				   * NMI queued after currently running handler
> +				   * (not including a hardware pending NMI (e.g vNMI))
> +				   */

Put the block comment above.  I'd say collapse all of the comments about NMIs into
a single big block comment.
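
As a purely illustrative sketch (field names are taken from the diff above; the comment wording and the standalone struct are my own, not the actual kvm_host.h layout), the consolidated block comment might look like:

```c
#include <stdbool.h>

/* Stand-in for the kernel's atomic_t; this sketch is self-contained
 * and does not reflect the real struct kvm_vcpu_arch definition. */
typedef int atomic_t;

struct nmi_state_sketch {
	/*
	 * NMI handling:
	 *  - nmi_queued:   unprocessed asynchronous NMIs, folded into
	 *                  nmi_pending by process_nmi()
	 *  - nmi_pending:  NMIs queued after the currently running
	 *                  handler, not including an NMI pending in
	 *                  hardware (e.g. via vNMI)
	 *  - nmi_injected: an NMI injection is in progress this entry
	 */
	atomic_t nmi_queued;
	unsigned int nmi_pending;
	bool nmi_injected;
};
```

One comment block above the group of fields keeps the per-field trailers from fighting over horizontal space, which is what forced the awkward side comment in the diff.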

>  	bool nmi_injected;    /* Trying to inject an NMI this entry */
> +
>  	bool smi_pending;    /* SMI queued after currently running handler */
>  	u8 handling_intr_from_guest;
>  
> @@ -10015,13 +10022,34 @@ static void process_nmi(struct kvm_vcpu *vcpu)
>  	 * Otherwise, allow two (and we'll inject the first one immediately).
>  	 */
>  	if (static_call(kvm_x86_get_nmi_mask)(vcpu) || vcpu->arch.nmi_injected)
> -		limit = 1;
> +		limit--;
> +
> +	/* Also if there is already a NMI hardware queued to be injected,
> +	 * decrease the limit again
> +	 */

	/*
	 * Block comment ...
	 */

> +	if (static_call(kvm_x86_get_hw_nmi_pending)(vcpu))

I'd prefer "is_hw_nmi_pending()" over "get", even if it means not pairing with
"set".  Though I think that's a good thing since they aren't perfect pairs.
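
As a hypothetical sketch of the naming point (the struct and field here are stand-ins, not the real kvm_vcpu or kvm_x86_ops member), "is_" reads as a side-effect-free query returning a bool, whereas "get_" implies a symmetric pair with the "set_" callback:

```c
#include <stdbool.h>

/* Hypothetical stand-in for struct kvm_vcpu; the real callback is a
 * kvm_x86_ops member invoked via static_call(). */
struct vcpu_sketch {
	bool hw_nmi_pending;
};

/* A pure predicate: queries hardware-pending NMI state without
 * modifying it, so "is_" describes the contract better than "get_". */
static bool is_hw_nmi_pending(struct vcpu_sketch *vcpu)
{
	return vcpu->hw_nmi_pending;
}
```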

> +		limit--;
>  
> -	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
> +	if (limit <= 0)
> +		return;
> +
> +	/* Attempt to use hardware NMI queueing */
> +	if (static_call(kvm_x86_set_hw_nmi_pending)(vcpu)) {
> +		limit--;
> +		nmi_to_queue--;
> +	}
> +
> +	vcpu->arch.nmi_pending += nmi_to_queue;
>  	vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
>  	kvm_make_request(KVM_REQ_EVENT, vcpu);
>  }
>  
> +/* Return total number of NMIs pending injection to the VM */
> +int kvm_get_total_nmi_pending(struct kvm_vcpu *vcpu)
> +{
> +	return vcpu->arch.nmi_pending + static_call(kvm_x86_get_hw_nmi_pending)(vcpu);

Nothing cares about the total count, so this can just be:


	bool kvm_is_nmi_pending(struct kvm_vcpu *vcpu)
	{
		return vcpu->arch.nmi_pending ||
		       static_call(kvm_x86_is_hw_nmi_pending)(vcpu);
	}


> +}
> +
>  void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
>  				       unsigned long *vcpu_bitmap)
>  {
> -- 
> 2.26.3
> 
