Patch "KVM: KVM: Use cpumask_available() to check for NULL cpumask when kicking vCPUs" has been added to the 5.10-stable tree

This is a note to let you know that I've just added the patch titled

    KVM: KVM: Use cpumask_available() to check for NULL cpumask when kicking vCPUs

to the 5.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     kvm-kvm-use-cpumask_available-to-check-for-null-cpum.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit a6d04ea442b7a56256c718407892bda158bf41bc
Author: Sean Christopherson <seanjc@xxxxxxxxxx>
Date:   Fri Aug 27 11:25:10 2021 +0200

    KVM: KVM: Use cpumask_available() to check for NULL cpumask when kicking vCPUs
    
    [ Upstream commit 0bbc2ca8515f9cdf11df84ccb63dc7c44bc3d8f4 ]
    
    Check for a NULL cpumask_var_t when kicking multiple vCPUs via
    cpumask_available(), which performs a !NULL check if and only if cpumasks
    are configured to be allocated off-stack.  This is a meaningless
    optimization, e.g. avoids a TEST+Jcc and TEST+CMOV on x86, but more
    importantly helps document that the NULL check is necessary even though
    all callers pass in a local variable.
    
    No functional change intended.
    
    Cc: Lai Jiangshan <jiangshanlai@xxxxxxxxx>
    Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
    Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
    Message-Id: <20210827092516.1027264-3-vkuznets@xxxxxxxxxx>
    Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
    Stable-dep-of: 2b0128127373 ("KVM: Register /dev/kvm as the _very_ last thing during initialization")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b5134f3046483..f379398b43d59 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -248,9 +248,13 @@ static void ack_flush(void *_completed)
 {
 }
 
-static inline bool kvm_kick_many_cpus(const struct cpumask *cpus, bool wait)
+static inline bool kvm_kick_many_cpus(cpumask_var_t tmp, bool wait)
 {
-	if (unlikely(!cpus))
+	const struct cpumask *cpus;
+
+	if (likely(cpumask_available(tmp)))
+		cpus = tmp;
+	else
 		cpus = cpu_online_mask;
 
 	if (cpumask_empty(cpus))
@@ -280,6 +284,14 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
 		if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
 			continue;
 
+		/*
+		 * tmp can be "unavailable" if cpumasks are allocated off stack
+		 * as allocation of the mask is deliberately not fatal and is
+		 * handled by falling back to kicking all online CPUs.
+		 */
+		if (!cpumask_available(tmp))
+			continue;
+
 		/*
 		 * Note, the vCPU could get migrated to a different pCPU at any
 		 * point after kvm_request_needs_ipi(), which could result in
@@ -291,7 +303,7 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
 		 * were reading SPTEs _before_ any changes were finalized.  See
 		 * kvm_vcpu_kick() for more details on handling requests.
 		 */
-		if (tmp != NULL && kvm_request_needs_ipi(vcpu, req)) {
+		if (kvm_request_needs_ipi(vcpu, req)) {
 			cpu = READ_ONCE(vcpu->cpu);
 			if (cpu != -1 && cpu != me)
 				__cpumask_set_cpu(cpu, tmp);
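
For reference, the cpumask_available() behavior the commit message relies on depends on CONFIG_CPUMASK_OFFSTACK. The sketch below is not part of the patch: it paraphrases the two cpumask_var_t configurations from the mainline <linux/cpumask.h> header (simplified, not a verbatim copy) and shows the caller-side fallback pattern in reduced form; the example_kick() helper is hypothetical and exists only for illustration.

/*
 * Paraphrased sketch of <linux/cpumask.h>, not part of this patch.
 */
#ifdef CONFIG_CPUMASK_OFFSTACK
/* Off-stack: cpumask_var_t is a pointer, so allocation can fail. */
typedef struct cpumask *cpumask_var_t;

static inline bool cpumask_available(cpumask_var_t mask)
{
	return mask != NULL;
}
#else
/* On-stack: cpumask_var_t is an array and can never be NULL. */
typedef struct cpumask cpumask_var_t[1];

static inline bool cpumask_available(cpumask_var_t mask)
{
	return true;
}
#endif

/*
 * Hypothetical caller showing the pattern the patch depends on:
 * a failed mask allocation is tolerated rather than treated as fatal.
 */
static void example_kick(void)
{
	cpumask_var_t tmp;

	/* Can only leave tmp "unavailable" when CONFIG_CPUMASK_OFFSTACK=y. */
	zalloc_cpumask_var(&tmp, GFP_ATOMIC);

	if (cpumask_available(tmp)) {
		/* build a precise mask of CPUs to kick in tmp ... */
	} else {
		/* ... fall back to kicking everything in cpu_online_mask */
	}

	free_cpumask_var(tmp);
}

In the on-stack configuration the old "if (unlikely(!cpus))" test compared the address of a local array against NULL, which can never be true, whereas cpumask_available() compiles to a constant; in the off-stack configuration it still performs the real NULL check, which is what the commit message means by documenting that the check is necessary even though all callers pass a local variable.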


